Today — May 30th, 2020

Toyota’s first plug-in hybrid RAV4 Prime priced a skosh under $40,000

By Kirsten Korosec

When Toyota unveiled the 2021 Toyota RAV4 Prime in November, the vehicle garnered a lot of attention because it achieved two seemingly conflicting goals. It was Toyota’s most fuel efficient and one of its most powerful vehicles.

Now, it’s getting praise for managing a base price under $40,000. Toyota said Friday that the standard trim of the plug-in vehicle, the RAV4 Prime SE, will start at $39,220, a price that includes the mandatory $1,120 destination charge.

This plug-in RAV4 will have all-wheel drive and a sport-tuned suspension. In pure EV mode it has a manufacturer-estimated 42 miles of range, putting it ahead of other plug-in SUVs. Toyota said it also has a manufacturer-estimated rating of up to 94 combined miles per gallon equivalent. We’re still waiting on official EPA estimates.

The vehicle has a tuned 2.5-liter, four-cylinder gasoline engine that, combined with the electric motors, delivers 302 horsepower and a projected 0-to-60-mph time of 5.8 seconds.

The plug-in RAV4 will be offered in two variants. Toyota equips all of its RAV4 models with its standard suite of active safety systems, which includes a pre-collision system with pedestrian detection, full-speed range dynamic radar cruise control, lane departure alert with steering assist, automatic high beams, lane tracing assist and road sign assist.

The cheaper SE comes standard with some notable features, including 18-inch painted and machined alloy wheels, heated front seats, a power liftgate, a 3-kilowatt onboard charger and an 8-inch touchscreen, along with Amazon Alexa integration and Android Auto and Apple CarPlay compatibility. Some advanced driver assistance features, such as a blind spot monitor with rear cross traffic alert, also come standard.

There is also a weather and moonroof package, a $1,665 upgrade that adds extras like a heated steering wheel, heated rear outboard seats and rain-sensing windshield wipers with a de-icer function.

The pricier XSE trim starts at $42,545 (with the destination price included) and offers more luxury touches such as a two-tone exterior paint scheme pairing a black roof with select colors, 19-inch two-tone alloy wheels, paddle shifters, wireless phone charger and a 9-inch touchscreen. There are several other upgrades, of course, including one for the multimedia system that adds dynamic navigation and a JBL speaker system. The daddy of upgrades on the XSE costs $5,760 and covers weather, audio and premium features including a heads-up display, panoramic moonroof, digital rearview mirror, surround-view cameras and four-door keyless entry.

The vehicle is expected to show up at dealerships this summer.

Zuckerberg explains why Facebook won’t take action on Trump’s recent posts

By Jonathan Shieber

In a statement posted to Facebook late Friday afternoon, Mark Zuckerberg offered up an explanation of why his company did not contextualize or remove posts from the accounts associated with President Donald Trump that appeared to incite violence against American citizens.

“We looked very closely at the post that discussed the protests in Minnesota to evaluate whether it violated our policies,” Zuckerberg wrote. “Our policy around incitement of violence allows discussion around state use of force, although I think today’s situation raises important questions about what potential limits of that discussion should be.”

Facebook’s position stands in sharp contrast to recent decisions made by Twitter, with the approval of its chief executive, Jack Dorsey, to screen a tweet from the President on Thursday night using a “public interest notice” that indicated the tweet violated its rules against glorifying violence. The public interest notice replaces the substance of what Trump wrote, meaning a user has to actively click through to view the offending tweet.

Critics excoriated Facebook and its CEO for the company’s decision to take a hands-off approach to the dissemination of misinformation and potential incitements to violence published by accounts associated with the President and the White House. Some of the criticism has even come from the company’s own employees.

“I have to say I am finding the contortions we have to go through incredibly hard to stomach,” one employee, quoted by The Verge, wrote in a comment on Facebook’s internal message board. “All this points to a very high risk of a violent escalation and civil unrest in November and if we fail the test case here, history will not judge us kindly.”

Zuckerberg defended Facebook’s position saying that it would not take any action on the posts from the President because “we think people need to know if the government is planning to deploy force.”

Facebook’s chief executive also drew a sharp contrast between Facebook’s response to the controversy and that of Twitter, which has provided a fact check for one of the President’s tweets and hidden Thursday’s tweet behind a warning label for violating its policies on violence.

“Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” wrote Zuckerberg.

Twitter explained its decision in a statement. “This Tweet violates our policies regarding the glorification of violence based on the historical context of the last line, its connection to violence, and the risk it could inspire similar actions today,” the company said.

Twitter Comms (@TwitterComms): “We have placed a public interest notice on this Tweet from @realdonaldtrump.” https://twitter.com/realDonaldTrump/status/1266231100780744704

Donald J. Trump (@realDonaldTrump), replying to @realDonaldTrump: “….These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!”

“We’ve taken action in the interest of preventing others from being inspired to commit violent acts, but have kept the Tweet on Twitter because it is important that the public still be able to see the Tweet given its relevance to ongoing matters of public importance,” the Twitter statement continued.

Perhaps, as Zuckerberg suggests, Facebook will have an opportunity to provide some answers to the question of where the limits should be on discussion of state use of force. For now, the company’s response only raises more questions.

A link to the full post from Zuckerberg follows below:

This has been an incredibly tough week after a string of tough weeks. The killing of George Floyd showed yet again that…

Posted by Mark Zuckerberg on Friday, May 29, 2020

Yesterday — May 29th, 2020

As wildfire season approaches, AI could pinpoint risky regions using satellite imagery

By Devin Coldewey

The U.S. has suffered from devastating wildfires over the last few years as global temperatures rise and weather patterns change, making the otherwise natural phenomenon especially unpredictable and severe. To help out, Stanford researchers have found a way to track and predict dry, at-risk areas using machine learning and satellite imagery.

Currently, forests and scrublands are tested for susceptibility to wildfires by manually collecting branches and foliage and testing their water content. That approach is accurate and reliable, but obviously also quite labor-intensive and difficult to scale.

Fortunately, other sources of data have recently become available. The European Space Agency’s Sentinel and Landsat satellites have amassed a trove of imagery of the Earth’s surface that, when carefully analyzed, could provide a secondary source for assessing wildfire risk — and one no one has to risk getting splinters for.

This isn’t the first attempt to make this kind of observation from orbital imagery, but previous efforts relied heavily on visual measurements that are “extremely site-specific,” meaning the analysis method differs greatly depending on the location. No splinters, but still hard to scale. The advance leveraged by the Stanford team is the Sentinel satellites’ “synthetic aperture radar,” which can pierce the forest canopy and image the surface below.

“One of our big breakthroughs was to look at a newer set of satellites that are using much longer wavelengths, which allows the observations to be sensitive to water much deeper into the forest canopy and be directly representative of the fuel moisture content,” said senior author of the paper, Stanford ecohydrologist Alexandra Konings, in a news release.

The team fed this new imagery, collected regularly since 2016, to a machine learning model along with the manual measurements made by the U.S. Forest Service. This lets the model “learn” what particular features of the imagery correlate with the ground-truth measurements.

They then tested the resulting AI agent (the term is employed loosely) by having it make predictions based on old data for which they already knew the answers. It was accurate overall, but most accurate in scrublands, one of the most common biomes of the American West and also one of the most susceptible to wildfires.
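
For readers curious what that train-then-backtest loop looks like in code, here is a minimal sketch in Python. It is not the Stanford team’s pipeline; the file path, the feature names and the choice of a random-forest regressor are illustrative assumptions.

    # Hypothetical sketch: predict fuel moisture content from satellite radar
    # features, then validate against held-out historical field measurements.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error

    # Assumed columns: SAR backscatter features plus Forest Service ground truth.
    df = pd.read_csv("sar_features_with_fuel_moisture.csv")  # illustrative path
    features = ["vv_backscatter", "vh_backscatter", "ndvi"]  # assumed feature names

    train = df[df["year"] < 2019]   # fit on older observations
    test = df[df["year"] >= 2019]   # "predict the past" on data the model never saw

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[features], train["fuel_moisture_pct"])

    preds = model.predict(test[features])
    print("MAE vs. field measurements:",
          mean_absolute_error(test["fuel_moisture_pct"], preds))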

You can see the results of the project in this interactive map showing the model’s prediction of dryness at different periods all over the western part of the country. That’s not so much a tool for firefighters as a validation of the approach, but the same model, given up-to-date data, can make predictions about the upcoming wildfire season that could help the authorities make more informed decisions about controlled burns, danger areas and safety warnings.

The researchers’ work was published in the journal Remote Sensing of Environment.

Brex, the credit card for startups, cuts staff amid restructuring

By Natasha Mascarenhas

Brex, last valued at $2.6 billion, is restructuring its credit-card-for-startups business and has cut 62 staff members, co-founders Pedro Franceschi and Henrique Dubugras said in a blog post.

“Today we’re restructuring the company to better align our priorities with this new reality, while simultaneously accelerating our product vision. With that, I have some very sad news to share. 62 people will be leaving Brex today,” the post reads.

The cuts come as Brex’s own customer base, high-growth startups, struggles to stay afloat amid COVID-19. The trickle-down to Brex’s core business, which depends on its customers spending money, was thus expected.

Brex has already cut some customer credit limits to mitigate some of the exposure risk, The Information reported, and Dubugras confirmed. Customers say the credit limit cuts came without warning or notice.

Additionally, the company, which launched in Brazil and graduated from Y Combinator, recently raised $150 million.

When TechCrunch talked to Dubugras about the latest fundraise, the co-founder said the capital was offensive, rather than defensive.

“I’m glad this round came together, but if it hadn’t, we would’ve been fine,” he said last week. “The capital is so we can play offensive while everyone else plays defensive.”

In the blog post, the co-founders wrote to former staffers.

“Please continue dreaming big and don’t lose the ambition that attracted you to Brex. Don’t let anything, not even a global pandemic, take that away from you. I wish we could give each one of you a hug, so instead I’ll end this message like I’d do it in Portuguese. Abraços, Pedro and Henrique.”

Those laid off will be provided with eight weeks of severance, their computer and equipment, and Brex will dedicate a part of its recruiting team to help find new opportunities for ex-staffers. Additionally, Brex is making adjustments to the equity cliff and has extended healthcare benefits through the end of 2020.

Brex has amassed $465 million in venture capital funding to date.

Sony will show off the first PlayStation 5 games on June 4th

By Greg Kumparak

Sony has been dishing out details on the PlayStation 5 piece by piece, rather than dropping all of the details at one big mega event. First came word of the Holiday 2020 release window. Then came an overview of the specs — like that it’ll have a super-fast solid-state drive by default. Most recently, they showed off the controller. (The divvied-up approach makes sense, really; with the ongoing pandemic preventing events like E3 and GDC from happening… why wouldn’t Sony work on its own schedule and make every aspect its own mini-spectacle?)

The next glimpse they give, it seems, will be of the first games coming to the console.

This morning Sony announced that they’ll be hosting a live-streamed event on June 4th at 1pm Pacific. In a blog post about the event, Sony Interactive CEO Jim Ryan clarifies the focus:

We’ve shared technical specifications and shown you the new DualSense wireless controller. But what is a launch without games?

That’s why I’m excited to share that we will soon give you a first look at the games you’ll be playing after PlayStation 5 launches this holiday.

Ryan also notes that the event should last roughly an hour, but doesn’t suggest how many different games that’ll cover.

In a video that managed to pull in millions of views, Epic Games recently gave a first look at its upcoming Unreal Engine 5 running on pre-release PS5 hardware. Given that video’s success, I’d imagine that Sony is pretty dang eager to keep the early looks coming.

Will we finally see the console hardware itself? That’s still unclear. Seeing as they’ve pieced just about everything else out, though, I’d bet they’re saving that one for an event a bit closer to launch.

Jeremy Conrad left his own VC firm to start a company, and investors like what he’s building

By Connie Loizos

When this editor first met Jeremy Conrad, it was in 2014, at the 8,000-square-foot former fish factory that was home to Lemnos, a hardware-focused venture firm that Conrad had cofounded three years earlier.

Conrad — who as a mechanical engineering undergrad at MIT worked on self-driving cars, drones and satellites — was still excited about investing in hardware startups, having just closed a small new fund even while hardware was very unfashionable. One investment his team had made around that time was in Airware, a company that made subscription-based software for drones and attracted meaningful buzz and $118 million in venture funding before abruptly shutting down in 2018.

For his part, Conrad had already moved on, deciding in late 2017 that one of the many nascent teams camping out at Lemnos was on to a big idea relating to the future of construction. Conrad didn’t have a background in real estate per se, or even an earlier interest in the industry. But the “more I learned about it — not dissimilar to when I started Lemnos — it felt like there was a gap in the market, an opportunity that people were missing,” says Conrad from his home in San Francisco, where he has hunkered down throughout the COVID-19 crisis.

Enter Quartz, Conrad’s now 1.5-year-old, 14-person company, which quietly announced $7.75 million in Series A funding earlier this month, led by Baseline Ventures, with Felicis Ventures, Lemnos and Bloomberg Beta also participating.

What it’s selling to real estate developers, project managers and construction supervisors is really two things: safety and information. Using off-the-shelf hardware components that are reassembled in San Francisco and hardened (meaning secured to reduce vulnerabilities), the company incorporates its machine-learning software into this camera-based platform, then mounts the system onto cranes at construction sites. From there, the system streams 4K live feeds of what’s happening on the ground, while also making sense of the action.

Say dozens of concrete pouring trucks are expected on a construction site. The cameras, with their persistent view, can convey through a dashboard system whether and when the trucks have arrived, and how many, says Conrad. The system can determine how many people are on a job site, and whether other deliveries have been made, even if not with a high degree of specificity. “We can’t say [to project managers] that 1,000 screws were delivered, but we can let them know whether the boxes they were expecting were delivered and where they were left,” he explains.
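
To make the idea concrete, here is a hedged sketch of what counting people and trucks in a single camera frame might look like with an off-the-shelf pretrained detector. It is purely illustrative and not Quartz’s actual pipeline; the file name and confidence threshold are assumptions, and the class IDs come from the COCO dataset the stock model was trained on.

    # Illustrative only: count people and trucks in one crane-camera frame using
    # a stock pretrained detector. This is not Quartz's production system.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    COCO_PERSON, COCO_TRUCK = 1, 8  # COCO category IDs used by the stock model

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    frame = to_tensor(Image.open("site_frame.jpg"))  # hypothetical frame grab
    with torch.no_grad():
        detections = model([frame])[0]

    confident = detections["scores"] > 0.6           # assumed confidence cutoff
    labels = detections["labels"][confident].tolist()
    print("people on site:", labels.count(COCO_PERSON))
    print("trucks on site:", labels.count(COCO_TRUCK))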

It’s an especially appealing proposition in the age of coronavirus, as the technology can help convey what’s happening at a site that has been shut down, or even how closely employees are gathered. Conrad says the technology also saves time by providing information to those who might not otherwise be able to access it. Think of the developer who is on the 50th floor of the skyscraper he or she is building, or the crane operator who is moving a two-ton object and has to rely on someone on the ground for directions, but who can enjoy far more visibility with the aid of a multi-camera setup.

Quartz, which today operates in California but is embarking on a nationwide rollout, was largely inspired by what Conrad was seeing in the world of self-driving. From sensors to self-perception systems, he knew the technologies would be even easier to deploy at construction sites, and he believed it could make them safer, too. Indeed, like cars, construction sites are astonishingly dangerous. According to the Occupational Safety and Health Administration, of the worker fatalities in private industry in 2018, more than 20% were in construction.

Conrad also saw an opportunity to take on established companies like Trimble, a 42-year-old, publicly traded, Sunnyvale, Calif.-based company that sells a portfolio of tools to the construction industry and charges top dollar for them, too. (Quartz is currently charging $2,000 per month per construction site for its series of cameras, their installation, a livestream and “lookback” data, though this may well rise as it adds additional features.)

It’s a big enough opportunity, in fact, that Quartz is not alone in chasing it. Last summer, for example, Versatile, an Israel-based startup with offices in San Francisco and New York City, raised $5.5 million in seed funding from Germany’s Robert Bosch Venture Capital and several other investors for a very similar platform, though it uses sensors mounted under the hook of a crane to provide information about what’s happening. Construction Dive, a media property dedicated to the industry, highlights many other similar and competitive startups in the space, too.

Still, Quartz has Conrad, who isn’t just any founding CEO. Not only does he have that background in engineering, but having founded a venture firm and spent years as an investor may serve him well, too. He thinks a lot about the payback period on its hardware, for example.

Unlike a lot of founders, he also says he loves the fundraising process. “I get the highest quality feedback from some of the smartest people I know, which really helps focus your vision,” says Conrad.

“When you talk with great VCs, they ask great questions. For me, it’s the best free consulting you can get.”

4 views on the future of retail and the shopping experience

By Natasha Mascarenhas

The global spread of COVID-19 and resulting orders to shelter in place have hit retailers hard.

As the pandemic drags on, temporary halts are becoming permanent closures, whether it’s the coffee shop next door, a historic bar or a well-known lifestyle brand.

But while the present is largely bleak, preparing for the future has retailers adopting technologies faster than ever. Their resilience and innovation mean retail will look and feel different when the world reopens.

We gathered four views on the future of retail from the TechCrunch team:

  • Natasha Mascarenhas says retailers will need to find new ways to sell aspirational products — and what was once cringe-worthy might now be considered innovative.
  • Devin Coldewey sees businesses adopting a slew of creative digital services to prepare for the future and empower them without Amazon’s platform.
  • Greg Kumparak thinks the delivery and curbside pickup trends will move from pandemic-essentials to everyday occurrences. He thinks that retailers will need to find new ways to appeal to consumers in a “shopping-by-proxy” world.
  • Lucas Matney sees revitalized interest in technology around the checkout process, as retailers look for ways to make the purchasing experience more seamless (and less high-touch).

Alexa, how do I look?

Natasha Mascarenhas

SpaceX’s Starship SN4 launch vehicle prototype explodes after static engine fire test

By Darrell Etherington

SpaceX had just conducted yet another static fire test of the Raptor engine in its Starship SN4 prototype launch vehicle on Friday when the test vehicle exploded on the test stand. This was the fourth static fire test of this engine on this prototype, so it’s unclear what went wrong this time compared with the earlier attempts.

This was a test in the development of Starship, a new spacecraft that SpaceX has been developing in Boca Chica, Texas. Eventually, the company hopes to use it to replace its Falcon 9 and Falcon Heavy rockets, but Starship is still very early in its development phase, whereas those vehicles are flight-proven, multiple times over.

SpaceX had just secured FAA approval to fly its Starship prototype for short, suborbital test flights earlier this week. The goal was to fly this SN4 prototype for short distances following static fire testing, but that clearly won’t be possible now, as the vehicle appears to have been completely destroyed in the explosion following Friday’s test, as you can see below in the stream from NASASpaceflight.com.

The explosion occurred around 1:49 PM local time in Texas, roughly two minutes after it had completed its engine test fire. We’ve reached out to SpaceX to find out more about the cause of today’s incident, and whether anyone was potentially hurt in the explosion. SpaceX typically takes plenty of safety precautions when running these tests, including ensuring the area is well clear of any personnel or other individuals.

This isn’t the first time one of SpaceX’s Starship prototypes has met a catastrophic end; a couple of previous test vehicles succumbed to pressure testing while being put through their paces. This is why space companies test frequently and stress test vehicles during development – to ensure that the final operational vehicles are incredibly safe and reliable when they need to be.

SpaceX is already working on additional prototypes, including assembling SN5 nearby in Boca Chica, so it’s likely to resume its testing program quickly once it can clear the test stand and move in the newest prototype. This is a completely separate endeavor from SpaceX’s work on the Commercial Crew program, so that historic first test launch with astronauts on board should proceed either Saturday or Sunday as planned, depending on weather.

YouTube and Tribeca’s global online film festival starts today

By Anthony Ha

Today, the online film festival We Are One is kicking off 10 days of films, talks, musical performances and VR experiences.

The event is a collaboration between Tribeca Enterprises (the organization behind the Tribeca Film Festival) and YouTube, with help from 21 film festivals.

Think of it as an attempt to recreate some of the excitement of this year’s canceled festivals, and to showcase some of the films that would have screened there. Partner festivals include the Berlin International Film Festival, the Cannes Film Festival, the Sundance Film Festival, the Toronto Film Festival and the Venice Film Festival.

YouTube Chief Business Officer Robert Kyncl credited Tribeca for doing the “heavy lifting” of bringing all the festivals on-board and curating the lineup. He said that when Tribeca’s co-founder and CEO Jane Rosenthal first approached YouTube with the idea, “It sounded great to us, but it seemed impossible to actually execute — to get all of these important people around the world to agree to this one thing.”

However, Rosenthal and her team were able to pull everything together in a short period of time, and YouTube is doing its part by giving the festival its online home. There will be more than 100 films screening on a schedule, just like a regular festival — although many of the movies will be available on demand after they premiere, for the duration of the event.

And again, it’s not just films, but the other festival programming too, like Tribeca Talks with directors like Guillermo del Toro and Francis Ford Coppola. YouTube channels like Lessons from the Screenplay, CineFix, Now You See It and La Blogotheque have also gotten involved by creating new content for the festival.

All the content is available for free, and Kyncl said that neither YouTube nor Tribeca is monetizing the event. Instead, they’re directing viewers to donate to COVID-19 relief efforts, including the World Health Organization, UNICEF, UNHCR, Save the Children, Doctors Without Borders, Leket Israel, GO Foundation and Give2Asia.

“We just see this as an immediate response with no commercial intent on our side,” he said.

And while We Are One was created in response to the COVID-19 pandemic, Kyncl sounds hopeful that there could be similar online festivals in the future — not that any online experience can fully replace the “human connection” of an in-person festival.

“The role that YouTube can play for all the festivals in the future is, we can extend their reach … whether it’s creators who may be participants in their film festivals in the future, or just audiences who are absolutely participating, but I think we can expand their universe in any way they wish,” Kyncl said. But he added, “We’ve given zero thought to it thus far. We’re all focused on making sure we can pull this off in a short amount of time.”

Aaron Levie: ‘We have way too many manual processes in businesses’

By Ron Miller

Box CEO Aaron Levie has been working to change the software world for 15 years, but the pandemic has accelerated the move to cloud services much faster than anyone imagined. As he pointed out yesterday in an Extra Crunch Live interview, who would have thought three months ago that businesses like yoga and cooking classes would have moved online — but here we are.

Levie says we are just beginning to see the range of what’s possible because circumstances are forcing us to move to the cloud much faster than most businesses probably would have without the pandemic acting as a change agent.

“Overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before,” Levie said.

Fellow TechCrunch reporter Jon Shieber and I spent an hour chatting with Levie about how digital transformation is accelerating in general, how Box is coping with that internally and externally, his advice for founders in an economic crisis and what life might be like when we return to our offices.

Our interview was broadcast on YouTube and we have included the embed below.


Just a note that Extra Crunch Live is our new virtual speaker series for Extra Crunch members. Folks can ask their own questions live during the chat, with past and future guests like Alexis Ohanian, Garry Tan, GGV’s Hans Tung and Jeff Richards, Eventbrite’s Julia Hartz and many, many more. You can check out the schedule here. If you’d like to submit a question during a live chat, please join Extra Crunch.


On digital transformation

The way that we think about digital transformation is that much of the world has a whole bunch of processes and ways of working — ways of communicating and ways of collaborating where if those business processes or that way we worked were able to be done in digital forms or in the cloud, you’d actually be more productive, more secure and you’d be able to serve your customers better. You’d be able to automate more business processes.

We think we’re [in] an environment that anything that can be digitized probably will be. Certainly as this pandemic has reinforced, we have way too many manual processes in businesses. We have way too slow ways of working together and collaborating. And we know that we’re going to move more and more of that to digital platforms.

In some cases, it’s simple, like moving to being able to do video conferences and being able to collaborate virtually. Some of it will become more advanced. How do I begin to automate things like client onboarding processes or doing research in a life sciences organization or delivering telemedicine digitally, but overall, what we’re going to see is that anything that can become digital probably will be in a much more accelerated way than we’ve ever seen before.

How the pandemic is driving change faster

Salesforce stock is taking a hit today after lighter guidance in yesterday’s earnings report

By Ron Miller

In spite of a positive quarter with record revenue that beat analyst estimates, Salesforce stock was taking a hit today because of lighter guidance. Wall Street is a tough audience.

The stock was down $8.29/share or 4.58% as of 2:15 pm ET.

The guidance, which was a projection for next quarter’s earnings, was lighter than what the analysts on Wall Street expected. While Salesforce was projecting revenue for next quarter in the range of $4.89 to $4.90 billion, according to CNBC, analysts had expected $5.03 billion.

When analysts see a future that is a bit worse than what they expected, it usually results in a lower stock price, and that’s what we are seeing today. It’s worth noting that Salesforce is operating in the same economy as everyone else, and being a bit lighter on your projections in the middle of a pandemic seems entirely understandable.

In yesterday’s report, CEO Marc Benioff indicated that the company has been offering some customers flexibility around payment as they navigate the economic fallout of COVID-19, and the company’s operating cash flow took a bit of a hit because of this.

“Operating cash flow was $1.86 billion, which was largely impacted by delayed payments from customers while sheltering in place and some temporary financial flexibility that we granted to certain customers that were most affected by the COVID pandemic,” president and CFO Mark Hawkins explained in the analyst call.

Still, the company reported revenue of $4.87 billion for the quarter, putting it on a run rate of $19.48 billion.

In a statement, David Hynes Jr. of Canaccord Genuity remained high on Salesforce. “If you step back and think about what Salesforce is actually providing, tools that help businesses get closer to their customers are perhaps more important than ever in a slower-growth, socially distanced world. We have long reserved a spot for CRM among our top names in large cap, and we feel no differently about that view after what we heard last night. This is a high-quality firm with many levers to growth, and as such, we believe CRM is a good way to get a bit of defensive exposure to the favorable trends at play in software.”

The company is, after all, still on the path to $20 billion in revenue. As Hynes points out, the kinds of tools that Salesforce offers should remain in demand as companies look for ways to digitally transform much more rapidly in our current situation, and look to companies like Salesforce for help.

Audi launches high-tech car unit Artemis to fast-track a ‘pioneering’ EV to market

By Kirsten Korosec

Audi has created a new business unit called Artemis to bring electric vehicles equipped with highly automated driving systems and other tech to market faster — the latest bid by the German automaker to become more agile and competitive.

The traditional automotive industry, where the design-to-start-of-production cycle might take five to seven years, has been grappling with how to bring new and innovative products to market more quickly to meet consumers’ fickle demands. The target is a model more akin to how Tesla or a consumer electronics company operates.

The first project under Artemis will be to “develop a pioneering model for Audi quickly and unbureaucratically,” Audi AG CEO Markus Duesmann said in a statement Friday. The unit is aiming to design and produce what Audi describes as a “highly efficient electric car” as early as 2024.

Artemis will be led by Alex Hitzinger, who was in charge of Audi’s Autonomous Intelligent Driving, the self-driving subsidiary launched in 2017 to develop autonomous vehicle technology for the VW Group. AID was absorbed into the European headquarters of Argo AI, a move made after VW invested $2.6 billion in capital and assets into the self-driving startup.

Hitzinger, who takes the new position beginning June 1, will report directly to Duesmann. Artemis will be based at the company’s INCampus tech hub in Ingolstadt, Germany.

Artemis sits under the Audi banner. However, the aim is for the group’s work to benefit brands across its parent company, VW Group. Hitzinger and the rest of his team will have access to resources and technologies within the entire Volkswagen Group. For instance, Car.Software, an independent business unit under the VW Group, will provide digital services to Artemis. The upshot: a blueprint that will make VW Group a more agile automaker, able to bring new and technologically advanced vehicles to market more quickly.

VW Group plans to produce and sell 75 electric vehicle models across its brands by 2029, a group that includes VW passenger cars and Audi. The creation of Artemis hasn’t changed Audi’s plans to produce 20 new all-electric vehicles and 10 new plug-in hybrids by 2025.

“The obvious question was how we could implement additional high-tech benchmarks without jeopardizing the manageability of existing projects, and at the same time utilize new opportunities in the markets,” Duesmann said.

Daily Crunch: Trump takes aim at social media companies

By Anthony Ha

President Trump follows through on his threat to challenge the legal protections enjoyed by social media and internet companies, Magic Leap’s CEO is stepping down and China sees its biggest autonomous driving round yet.

Here’s your Daily Crunch for May 29, 2020.

1. Trump signs an executive order taking direct aim at social media companies

Yesterday, President Donald Trump signed an executive order targeting the legal shield that internet companies rely on to protect them from liability for user-created content. Next, we’ll almost certainly see a court battle over whether the order is legal and enforceable.

While Trump and Attorney General William Barr have expressed interest in undermining Section 230 of the Communications Decency Act before, this week’s action was prompted by Twitter’s decision to add a fact-checking link to the president’s tweet about voting by mail. That conflict isn’t going away either, with Twitter adding a “public interest notice” to another of Trump’s tweets for glorifying violence.

2. Magic Leap CEO Rony Abovitz is out

Magic Leap founder and CEO Rony Abovitz announced that the company has secured a new round of funding — but that Magic Leap will be attempting a major turnaround without him at the helm.

3. SoftBank led $500M investment in Didi in China’s biggest autonomous driving round

As China’s largest ride-hailing provider with mountains of traffic data, Didi clearly has an upper hand in developing robotaxis, which could help address driver shortages in the long term. But it was relatively late to the field.

4. Cisco to acquire internet monitoring solution ThousandEyes

Cisco’s Todd Nightingale, writing in a blog post announcing the deal, said that the kind of data that ThousandEyes provides around internet user experience is more important than ever as internet connections have come under tremendous pressure.

5. Fintech regulations in Latin America could fuel growth or freeze out startups

Prometeo co-founder Ximena Aleman looks at what impact regulation has had so far in Latin America, and what needs to happen to strike a balance between sector growth and public trust. (Extra Crunch membership required.)

6. Uber UK launches Work Hub for drivers to find other gig jobs during COVID-19

The ride-hailing giant rolled out a similar feature in the U.S. back in April, offering drivers the ability to respond to job postings from around a dozen other companies, as well as the ability to receive orders through other Uber units: Eats, Freight and Works.

7. Join us June 3 for a contact-tracing and exposure-notification app development and deployment forum

We’re working with the COVID-19 Technology Task Force, as well as Harvard’s Berkman Klein Center, NYU’s Alliance for Public Interest Technology, Betaworks Studios and Hangar. We’ll be playing host to their live-streamed discussion around contact-tracing and exposure-notification applications, including demonstrations of some of the cutting-edge products that will be available in the U.S. to tackle these challenging, but crucial, tasks.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

Echo Looks will cease functioning in July, as Amazon discontinues the camera

By Brian Heater

Introduced in mid-2017, the Look was one of the more obscure — and, honestly, kind of bizarre — entries in the Echo line. It was a small camera designed to take videos and selfies of its owner, using machine learning to help choose outfits.

No surprise, really, that it never caught fire. And now, three years after its introduction, it’s dead. As first noted by Voicebot.ai, Amazon sent a letter to customers noting that the camera has been discontinued; what’s more, the service is going to be completely shut down in July.

Amazon confirmed the end of what seems to have amounted to an experiment and exercise in training a machine learning algorithm. The company tells TechCrunch,

When we introduced Echo Look three years ago, our goal was to train Alexa to become a style assistant as a novel way to apply AI and machine learning to fashion. With the help of our customers we evolved the service, enabling Alexa to give outfit advice and offer style recommendations. We’ve since moved Style by Alexa features into the Amazon Shopping app and to Alexa-enabled devices making them even more convenient and available to more Amazon customers. For that reason, we have decided it’s time to wind down Echo Look. Beginning July 24, 2020, both Echo Look and its app will no longer function. Customers will still be able to enjoy style advice from Alexa through the Amazon Shopping app and other Alexa-enabled devices. We look forward to continuing to support our customers and their style needs with Alexa.

Not a surprise, perhaps. But a bummer for those who spent $200 on the product. From the looks of it, though, I don’t think the Look exactly caught the world on fire. It’s currently listed as the 51st best seller on Amazon’s list of Echo products. Honestly, there’s a decent chance this is the first time you’re hearing about it. Again, not surprising for what was always destined to be a niche addition to the Echo line.

The best investment every digital brand can make during the COVID-19 pandemic

By Walter Thompson
Steve Tan Contributor
Steve Tan is a Singapore-based serial entrepreneur and full-stack digital marketer with over 14 years of hands-on experience who is also the CEO and founder of Super Tan Brothers Pte. Ltd, which operates e-commerce, software, logistics, marketing, educational and investment companies around the globe.

Intuitively, stores that sell online should be making a killing during the COVID-19 pandemic. After all, everyone is stuck at home — and understandably more willing to shop online instead of at a traditional retailer to avoid putting themselves and others at medical risk. But the truth is, most smaller online stores have seen better days.

The primary challenge is that smaller shops often don’t have the logistics networks that companies like Amazon do. Consequently, they’re seeing substantially delayed delivery timelines, especially if they ship internationally. Customers obviously aren’t thrilled about that reality. And in many cases, they’re requesting refunds at a staggering rate.

I saw this play out firsthand in April. At that point, my stores were down 20%, or in some cases even 30%, in revenue. Needless to say, my team was freaking out. But there’s one thing we did that helped us increase our revenue by over 200% since the pandemic began, decrease refund requests and even strengthen our existing customer relationships.

We implemented a 24-hour live chat in all of our stores. Here’s why it worked for us and why every digital brand should be doing it too.

Avoid the common ‘unreachability’ frustration

When I started my first online store in 2006, the challenges that bogged my team down often became our first priority, because resolving them was what let us serve customers faster. But admittedly, when those challenges came up, it became more difficult to balance communicating with our customers and resolving the issues that prevented us from fulfilling their orders quickly.

TinyML is giving hardware new life

By Walter Thompson
Adam Benzion Contributor
A serial entrepreneur, writer, and tech investor, Adam Benzion is the co-founder of Hackster.io, the world's largest community for hardware developers.

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into that almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as “training data,” in order to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive and less predictable.

But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers. Together with Edge Impulse, it is turning ubiquitous Arduino boards, like the Arduino Nano 33 BLE Sense and other 32-bit boards, into a powerful embedded ML platform. With this partnership, you can run powerful learning models based on artificial neural networks (ANNs) that sample tiny sensors and run on low-powered microcontrollers.

Over the past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor and Arm’s CMSIS-NN. But building a quality dataset, extracting the right features, training and deploying these models is still complicated. TinyML was the missing link between edge hardware and device intelligence now coming to fruition.
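
As a rough illustration of the workflow those projects enable, here is a minimal Python sketch that shrinks a small Keras model into a quantized TensorFlow Lite file, the format TensorFlow Lite for Microcontrollers runs on boards like the Nano 33 BLE Sense. The tiny network, its input shape and the file names are placeholder assumptions, not a real sensor model.

    # Minimal sketch: convert a small Keras model to a quantized .tflite file
    # suitable for TensorFlow Lite for Microcontrollers. Placeholder model only.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),  # e.g. a 3-axis sensor reading
        tf.keras.layers.Dense(4, activation="softmax"),                  # e.g. 4 gesture classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # ... train on your labeled sensor data here ...

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    tflite_model = converter.convert()

    with open("gesture_model.tflite", "wb") as f:
        f.write(tflite_model)
    print("model size:", len(tflite_model), "bytes")

On the device side, the resulting bytes are typically embedded as a C array and executed by the TFLite Micro interpreter running on the microcontroller.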

Tiny devices with not-so-tiny brains

Bunq adds donations to charities and tests redesign

By Romain Dillet

Challenger bank Bunq is adding a new feature that lets you donate to charities directly from the app. In addition to that, Bunq is also in the process of redesigning its app. The company is launching a public beta test to get feedback from its users.

Other fintech startups, such as Revolut and Lydia, have launched donation features in the past. But in those cases, startups have selected a handful of charities.

Bunq has chosen a different approach, as you can create your own donation campaigns in the app. As long as your local charity has an IBAN, you can add it to Bunq’s donation feature. You can even add a local business if you want to help it stay afloat.

You can then invite other people to donate to your charities. You can also track the total amount of your donations, as well as the total donations from the entire Bunq user base.

The company has also been working on the third major version of the app. In order to test it before the public release, Bunq is launching a public beta program. The first build will roll out in the coming weeks.

In order to simplify navigation, Bunq has tried to remove clutter by focusing on one main button on each page. The app will be divided into four main tabs.

The first tab, called “Me,” will feature all your personal information — personal bank accounts, savings goals, etc. On the second tab, called “Us,” you can see information about Bunq, such as total investments and total donations. The third tab features your profile information.

Finally, the fourth tab is a dedicated camera button. It lets you scan invoices and receipts, which could be particularly useful for business customers. I’m not sure a lot of people use that feature, but things could still change before the final release.

Twitter, Reddit challenge US rules forcing visa applicants to disclose their social media handles

By Zack Whittaker

Twitter and Reddit have filed an amicus brief in support of a lawsuit challenging a U.S. government rule change compelling visa applicants to disclose their social media handles.

The lawsuit, brought by the Knight First Amendment Institute at Columbia University, the Brennan Center for Justice and law firm Simpson Thacher & Bartlett, seeks to undo both the State Department’s requirement that visa applicants must disclose their social media handles prior to obtaining a U.S. visa, as well as related rules over the retention and dissemination of those records.

Last year, the State Department began asking visa applicants for their current and former social media usernames, a move that affects millions of non-citizens applying to travel to the United States each year. The rule change was part of the Trump administration’s effort to expand its “enhanced” screening protocols. At the time, it was reported that the information would be used if the State Department determines that “such information is required to confirm identity or conduct more rigorous national security vetting.”

In a filing supporting the lawsuit, both Twitter and Reddit said the social media policies “unquestionably chill a vast quantity of speech” and that the rules violate the First Amendment rights “to speak anonymously and associate privately.”

Twitter and Reddit, which collectively have more than 560 million users, said their users — many of whom don’t use their real names on the platforms — are forced to “surrender their anonymity in order to travel to the United States,” which “violates the First Amendment rights to speak anonymously and associate privately.”

“Twitter and Reddit vigorously guard the right to speak anonymously for people on their platforms, and anonymous individuals correspondingly communicate on these platforms with the expectation that their identities will not be revealed without a specific showing of compelling need,” the brief said.

“That expectation allows the free exchange of ideas to flourish on these platforms.”

Jessica Herrera-Flanigan, Twitter’s policy chief for the Americas, said the social media rule “infringes both of those rights and we are proud to lend our support on these critical legal issues.” Reddit’s general counsel Ben Lee called the rule an “intrusive overreach” by the government.

It’s not known how many, if any, visa applicants have been denied a visa because of their social media content. But since the social media rule went into effect, cases have emerged of approved visa holders being denied entry to the U.S. over other people’s social media postings. Ismail Ajjawi, a then 17-year-old freshman at Harvard University, was turned away at Boston Logan International Airport when U.S. border officials searched his phone and took issue with social media postings by Ajjawi’s friends, not his own.

Abed Ayoub, legal and policy director at the American-Arab Anti-Discrimination Committee, told TechCrunch at the time that Ajjawi’s case was not isolated. A week later, TechCrunch learned of another man who was denied entry to the U.S. because of a WhatsApp message sent by a distant acquaintance.

A spokesperson for the State Department did not immediately comment on news of the amicus brief.

How to upgrade your at-home videoconference setup: Lighting edition

By Darrell Etherington

In this installment of our ongoing series on making the most of your at-home video setup, we’re going to focus on one of the most important, but least well understood, parts of the equation: lighting. It isn’t something that requires a lot of training, expertise or even equipment to get right, yet it’s probably the number-one culprit for subpar video quality on most conference calls — and it can mean the difference between looking like someone who knows what they’re talking about and someone who might not inspire much confidence in seminars, speaking gigs and remote broadcast appearances.

Basics

You can make a very big improvement in your lighting with just a little work, and without spending any money. The secret is all in being aware of your surroundings and optimizing your camera placement relative to any light sources that might be present. Consider not only any ceiling lights or lamps in your room, but also natural light sources like windows.

Ideally, you should position yourself so that the source of brightest light is positioned behind your camera (and above it, if possible). You should also make sure that there aren’t any strong competing light sources behind you that might blow out the image. If you have a large window and it’s daytime, face the window with your back to a wall, for instance. And if you have a movable light or an overhead lamp, either move it so it’s behind and above your computer facing you, or move yourself if possible to achieve the same effect with a fixed-position light fixture, like a ceiling pendant.

Ideally, any bright light source should be positioned behind and slightly above your camera for best results.

Even if the light seems aggressively bright to you, it should make for an even, clear image on your webcam. Even though most webcams have auto-balancing software features that attempt to produce the best results regardless of lighting, they can only do so much, and especially lower-end camera hardware like the webcam built into MacBooks will benefit greatly from some physical lighting position optimization.

This is an example of what not to do: Having a bright light source behind you will make your face hard to see and leave the background blown out.

Simple ways to level up

The best way to step up beyond the basics is to learn some of the fundamentals of good video lighting. Again, this doesn’t necessarily require any purchases – it could be as simple as taking what you already have and using it in creative ways.

Beyond just the above advice about putting your strongest light source behind your camera pointed towards your face, you can get a little more sophisticated by adopting the principles of two- and three-point lighting. You don’t need special lights to make this work – you just need to use what you have available and place them for optimal effect.

  • Two-point lighting

A very basic, but effective video lighting setup involves positioning not just one, but two lights pointed towards your face behind, or parallel with your camera. Instead of putting them directly in line with your face, however, for maximum effect you can place them to either side, and angle them in towards you.

A simple representation of how to position lights for a proper two-point video lighting setup.

Note that if you can, it’s best to make one of these two lights brighter than the other. This will provide a subtle bit of shadow and depth to the lighting on your face, resulting in a more pleasing and professional look. As mentioned, it doesn’t really matter what kind of light you use, but it’s best to try to make sure that both are the same temperature (for ordinary household bulbs, how ‘soft,’ ‘bright’ or ‘warm’ they are) and if your lights are less powerful, try to position them closer in.

  • Three-point lighting

This is similar to two-point lighting, but with a third light positioned somewhere behind you. This extra light is used in broadcast interview lighting setups to provide a slight halo effect on the subject, which further helps separate you from the background and provides a bit more depth and a more professional look. Ideally, you’d place this out of frame of your camera (you don’t want a big, bright light shining right into the lens) and off to the side, as indicated in the diagram below.

In a three-point lighting setup, you add a third light behind you to provide a bit more subject separation and pop.

If you’re looking to improve the flexibility of this kind of setup, a simple way to do that is by using light sources with Philips Hue bulbs. They can let you tune the temperature and brightness of your lights, together or individually, to get the most out of this kind of arrangement. Modern Hue bulbs might produce some weird flickering effects on your video depending on what framerate you’re using, but if you output your video at 30fps, that should address any problems there.
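
If you want to script those adjustments rather than reach for the Hue app, here is a minimal sketch against the Hue bridge’s local REST API. The bridge address, API username and light ID are placeholders you’d substitute for your own setup; color temperature is specified in mireds.

    # Minimal sketch: set a Hue bulb to a neutral-white key-light level via the
    # bridge's local REST API. Bridge IP, username and light ID are placeholders.
    import requests

    BRIDGE_IP = "192.168.1.2"        # your bridge's local address
    USERNAME = "your-api-username"   # obtained after pressing the bridge's link button
    LIGHT_ID = 1                     # the bulb behind your camera

    url = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{LIGHT_ID}/state"
    state = {
        "on": True,
        "bri": 254,  # brightness, 1-254
        "ct": 250,   # color temperature in mireds (~4000K); lower values are cooler
    }
    print(requests.put(url, json=state).json())

Scripting both lights this way makes it easy to keep a brighter key and a dimmer fill at matching temperatures, per the two-point advice above.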

Go pro

All lights can be used to improve your video lighting setup, but dedicated video lights will provide the best results. If you really plan on doing a bunch of video calls, virtual talks and streaming, you should consider investing in some purpose-built hardware to get even better results.

At the entry level, there are plenty of offerings on Amazon that work well and offer good value for money, including full lighting kits like this one from Neewer that offers everything you need for a two-point lighting setup in one package. These might seem intimidating if you’re new to lighting, but they’re extremely easy to set up, and really only require that you learn a bit about light temperature (as measured in kelvins) and how that affects the image output on your video capture device.

If you’re willing to invest a bit more money, you can get some better quality lights that include additional features including wifi connectivity and remote control. The best all-around video lights for home studio use that I’ve found are Elgato’s Key Lights. These come in two variants, Key Light and Key Light Air, which retail for $199.99 and $129.99 respectively. The Key Light is larger, offers brighter maximum output, and comes with a sturdier, heavy-duty clamp mount for attaching to tables and desks. The Key Light Air is smaller, more portable, puts out less light at max settings and comes with a tabletop stand with a weighted base.

Both versions of the Key Light offer light that you can tune from very warm white (2900K) to bright white (7000K) and connect to your wifi network for remote control, either from your computer or your mobile device. They easily work together with Elgato’s Stream Deck for hardware controls, too, and have highly adjustable brightness and plenty of mounting options – especially with extra accessories like the Multi-Mount extension kit.

With plenty of standard tripod mounts on each Key Light, high-quality durable construction and connected control features, these lights are the easiest to make work in whatever space you have available. The quality of the light they put out is also excellent, and they’re great for lighting pros and newbies alike since it’s very easy to tune them as needed to produce the effect you want.

Accent your space

Beyond subject lighting, you can look at different kinds of accent lighting to make your overall home studio more visually interesting or appealing. Again, there are a number of options here, but if you’re looking for something that also complements your home furnishings and won’t make your house look too much like a studio set, check out some of the more advanced versions of Hue’s connected lighting system.

The Hue Play light bar is a great accent light, for instance. You can pick up a two pack, which includes two of the full-color connected RGB lights. You’ll need a Hue hub for these to work, but you can also get a starter pack that includes two lights and the hub if you don’t have one yet. I like these because you can easily hide them behind cushions, chairs, or other furniture. They provide awesome uplight effects on light-colored walls, especially if you get rid of other ambient light (beyond your main video lights).

To really amplify the effect, consider pairing these up with something like the Philips Hue Signe floor or table lamps. The Signe series is a long LED light mounted to a weighted base that provides strong, even accent light in any color you choose. You can sync these with other Hue lights for a consistent look, or mix and match colors for different dynamic effects.

On video, this helps with subject/background separation, and just looks a lot more polished than a standard background, especially when paired with defocused effects when you’re using better quality cameras. As a side benefit, these lights can be synced to movie and video playback for when you’re consuming video, instead of producing it, for really cool home theater effects.

If you’re satisfied with your lighting setup but are still looking for other pointers, check out our original guide, as well as our deep dive on microphones for better audio quality.

Uber’s latest feature lets riders book by the hour and make multiple stops

By Kirsten Korosec

Uber is bringing a new feature to the U.S. that lets users book rides for $50 an hour and make multiple stops as the ride-hailing company tries to respond to changing consumer needs during the COVID-19 pandemic.

The hourly booking feature, which is already available in a handful of international cities in Australia, Africa, Europe, and the Middle East, will launch in a dozen U.S. cities beginning Monday. The product will be available in Atlanta, Chicago, Dallas, Houston, Miami, Orlando, Tampa Bay, Philadelphia, Phoenix, Tacoma, Seattle and Washington, D.C. Uber said it expects to expand into other U.S. cities in the coming weeks.

Uber made the move in an effort to offer riders a more convenient way to get things done and to provide an additional earnings opportunity for drivers in this “new normal,” Niraj Patel, director of rider operations at Uber, said in a statement.

Riders who want to use the new feature start by selecting “hourly” in the app and then entering their initial stop. Riders can see the $50 hourly rate at a glance and compare it to other options before committing to the trip. The rider selects the expected hours and can enter multiple stops — as many as three, including the destination.

There are limitations to the feature, including mileage. In some cities, the hourly booking feature only allows drivers to travel up to 40 miles. Trips that travel farther than the mileage limit will be charged to the rider at a per-mile rate. The same rule applies to trips that run over the booked hour; riders will be charged per minute over the hour.

Hourly booking cannot be used to travel to or from airports and trips must be within a city service area. The $50 hourly rate excludes tolls and surcharges.
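
As a back-of-the-envelope illustration of how those caps combine, here is a small Python sketch. The per-mile and per-minute overage rates are hypothetical placeholders (the article does not specify them, and they vary by city), as is the assumption that the 40-mile allowance applies per trip.

    # Rough fare estimator for the hourly option described above.
    # Overage rates are hypothetical; Uber's actual per-mile and per-minute
    # charges are not specified in this article and vary by city.
    HOURLY_RATE = 50.0         # dollars per booked hour
    MILE_ALLOWANCE = 40        # included miles (assumed per trip)
    OVERAGE_PER_MILE = 1.50    # hypothetical
    OVERAGE_PER_MINUTE = 0.40  # hypothetical

    def estimate_fare(booked_hours, miles, minutes):
        fare = HOURLY_RATE * booked_hours
        fare += max(0, miles - MILE_ALLOWANCE) * OVERAGE_PER_MILE
        fare += max(0, minutes - 60 * booked_hours) * OVERAGE_PER_MINUTE
        return fare

    # Example: a one-hour booking that ran 55 miles and 75 minutes.
    # 50 + 15 * 1.50 + 15 * 0.40 = 78.50
    print(estimate_fare(booked_hours=1, miles=55, minutes=75))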
