
China’s Xpeng in the race to automate EVs with lidar

By Rita Liao

Elon Musk famously said any company relying on lidar is “doomed.” Tesla instead believes automated driving should be built on visual recognition, and it is even working to remove radar from its cars. China’s Xpeng begs to differ.

Founded in 2014, Xpeng is one of China’s most celebrated electric vehicle startups and went public when it was just six years old. Like Tesla, Xpeng sees automation as an integral part of its strategy; unlike the American giant, Xpeng uses a combination of radar, cameras, high-precision maps powered by Alibaba, localization systems developed in-house, and most recently, lidar to detect and predict road conditions.

“Lidar will provide the 3D drivable space and precise depth estimation to small moving obstacles even like kids and pets, and obviously, other pedestrians and the motorbikes which are a nightmare for anybody who’s working on driving,” Xinzhou Wu, who oversees Xpeng’s autonomous driving R&D center, said in an interview with TechCrunch.

“On top of that, we have the usual radar which gives you location and speed. Then you have the camera which has very rich, basic semantic information.”

Xpeng is adding lidar to its mass-produced P5 EV, which will begin deliveries in the second half of this year. The car, a family sedan, will later be able to drive from point A to point B along a navigation route set by the driver, on highways and on certain urban roads in China covered by Alibaba’s maps. An older model without lidar already offers assisted driving on highways.

The system, called Navigation Guided Pilot, is benchmarked against Tesla’s Navigate on Autopilot, said Wu. It can, for example, automatically change lanes, enter or exit ramps, overtake other vehicles, and maneuver around another car’s sudden cut-in, a common sight given China’s complex road conditions.

“The city is super hard compared to the highway but with lidar and precise perception capability, we will have essentially three layers of redundancy for sensing,” said Wu.
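
Wu’s “three layers of redundancy” is an architectural idea rather than a product detail, but a toy example makes it concrete: each sensing modality produces an independent detection, and an obstacle is only acted on when enough of them agree. The C sketch below is purely illustrative and assumes nothing about Xpeng’s actual software; the struct fields and the two-of-three vote are hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-sensor view of one obstacle. Real perception stacks
 * are far richer; this only illustrates cross-checking independent layers. */
typedef struct {
    bool seen_by_lidar;   /* 3D drivable space, precise depth            */
    bool seen_by_radar;   /* location and relative speed                 */
    bool seen_by_camera;  /* rich semantic class (pedestrian, motorbike) */
} detection_t;

/* Confirm an obstacle when at least two of the three independent layers
 * agree, so a single sensor miss or failure does not blind the system. */
static bool obstacle_confirmed(const detection_t *d)
{
    int votes = (d->seen_by_lidar  ? 1 : 0) +
                (d->seen_by_radar  ? 1 : 0) +
                (d->seen_by_camera ? 1 : 0);
    return votes >= 2;
}

int main(void)
{
    detection_t child_near_curb = { .seen_by_lidar = true,
                                    .seen_by_radar = false,
                                    .seen_by_camera = true };
    printf("confirmed: %s\n",
           obstacle_confirmed(&child_near_curb) ? "yes" : "no");
    return 0;
}

In a real stack the cross-check would run over fused tracks with confidence scores rather than booleans, but the principle is the same: no single sensor’s failure decides the outcome.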

By definition, NGP is an advanced driver-assistance system (ADAS): drivers still need to keep their hands on the wheel and be ready to take control at any time (Chinese law doesn’t allow drivers to go hands-off on the road). The carmaker’s ambition is to remove the driver, that is, to reach Level 4 autonomy within two to four years, but real-life implementation will hinge on regulations, said Wu.

“But I’m not worried about that too much. I understand the Chinese government is actually the most flexible in terms of technology regulation.”

The lidar camp

Musk’s disdain for lidar stems from the high costs of the remote sensing method that uses lasers. In the early days, a lidar unit spinning on top of a robotaxi could cost as much as $100,000, said Wu.

“Right now, [the cost] is at least two orders [of magnitude] lower,” said Wu. After 13 years with Qualcomm in the U.S., Wu joined Xpeng in late 2018 to work on automating the company’s electric cars. He currently leads a core autonomous driving R&D team of 500 staff and said the team will double in headcount by the end of this year.

“Our next vehicle is targeting the economy class. I would say it’s mid-range in terms of price,” he said, referring to the firm’s new lidar-powered sedan.

The lidar sensors powering Xpeng come from Livox, a firm touting more affordable lidar and an affiliate of DJI, the Shenzhen-based drone giant. Xpeng is headquartered in the adjacent city of Guangzhou, about a 1.5-hour drive away.

Xpeng isn’t the only one embracing lidar. Nio, a Chinese rival to Xpeng targeting a more premium market, unveiled a lidar-powered car in January but the model won’t start production until 2022. Arcfox, a new EV brand of Chinese state-owned carmaker BAIC, recently said it would be launching an electric car equipped with Huawei’s lidar.

Musk recently hinted that Tesla may remove radar from production outright as it inches closer to pure vision based on cameras and machine learning. The billionaire CEO isn’t particularly a fan of Xpeng, which he has alleged owns a copy of Tesla’s old source code.

In 2019, Tesla filed a lawsuit against Cao Guangzhi, alleging that the former Tesla engineer stole trade secrets and brought them to Xpeng. Xpeng has repeatedly denied any wrongdoing. Cao no longer works at Xpeng.

Supply challenges

While Livox claims to be an independent entity “incubated” by DJI, a source previously told TechCrunch that it is just a “team within DJI” positioned as a separate company. The intention to distance itself from DJI comes as no surprise, as the drone maker is on the U.S. government’s Entity List, which has cut off a multitude of Chinese tech firms, including Huawei, from key suppliers.

Other critical parts that Xpeng uses include Nvidia’s Xavier system-on-a-chip computing platform and Bosch’s iBooster brake system. Globally, the ongoing semiconductor shortage is pushing auto executives to ponder future scenarios in which self-driving cars become even more dependent on chips.

Xpeng is well aware of supply chain risks. “Basically, safety is very important,” said Wu. “It’s more than the tension between countries around the world right now. Covid-19 is also creating a lot of issues for some of the suppliers, so having redundancy in the suppliers is some strategy we are looking very closely at.”

Taking on robotaxis

Xpeng could have easily tapped the flurry of autonomous driving solution providers in China, including Pony.ai and WeRide in its backyard of Guangzhou. Instead, Xpeng has become their competitor, developing automation in-house and pledging to outrival the artificial intelligence startups.

“The availability of massive computing for cars at affordable costs and the fast dropping price of lidar is making the two camps really the same,” Wu said of the dynamics between EV makers and robotaxi startups.

“[The robotaxi companies] have to work very hard to find a path to a mass-production vehicle. If they don’t do that, two years from now, they will find the technology is already available in mass production and their value will become much less than today’s,” he added.

“We know how to mass-produce a technology up to the safety requirement and the quarantine required of the auto industry. This is a super high bar for anybody wanting to survive.”

Xpeng has no plans to go visual-only. Automotive technologies like lidar are becoming cheaper and more abundant, so “why do we have to bind our hands right now and say camera only?” Wu asked.

“We have a lot of respect for Elon and his company. We wish them all the best. But we will, as Xiaopeng [founder of Xpeng] said in one of his famous speeches, compete in China and hopefully in the rest of the world as well with different technologies.”

5G, coupled with cloud computing and cabin intelligence, will accelerate Xpeng’s path to achieve full automation, though Wu couldn’t share much detail on how 5G is used. When unmanned driving is viable, Xpeng will explore “a lot of exciting features” that go into a car when the driver’s hands are freed. Xpeng’s electric SUV is already available in Norway, and the company is looking to further expand globally.

SambaNova raises $676M at a $5.1B valuation to double down on cloud-based AI software for enterprises

By Ingrid Lunden

Artificial intelligence technology holds a huge amount of promise for enterprises — as a tool to process and understand their data more efficiently; as a way to leapfrog into new kinds of services and products; and as a critical stepping stone into whatever the future might hold for their businesses. But the problem for many enterprises is that they are not tech businesses at their cores and so bringing on and using AI will typically involve a lot of heavy lifting. Today, one of the startups building AI services is announcing a big round of funding to help bridge that gap.

SambaNova — a startup building AI hardware and the integrated systems that run on it, which only officially came out of three years in stealth last December — is announcing a huge round of funding today to take its business out into the world. The company has closed $676 million in financing, a Series D that co-founder and CEO Rodrigo Liang confirmed values the company at $5.1 billion.

The round is being led by SoftBank, which is making the investment via Vision Fund 2. Temasek and the Government of Singapore Investment Corp. (GIC), both new investors, are also participating, along with previous backers BlackRock, Intel Capital, GV (formerly Google Ventures), Walden International and WRVI, among other unnamed investors. (Sidenote: BlackRock and Temasek separately kicked off an investment partnership yesterday, although it’s not clear if this falls into that remit.)

Co-founded by two Stanford professors, Kunle Olukotun and Chris Ré, along with Liang, who had been an engineering executive at Oracle, SambaNova has been around since 2017 and has raised more than $1 billion to date — both to build out its AI-focused hardware, which it calls DataScale, and to build out the software that runs on it. (The “Samba” in the name is a reference to Liang’s Brazilian heritage, he said, but also to the Latin music and dance that speaks of constant movement and shifting, not unlike the journey AI data regularly needs to take, which makes it too complicated and too intensive to run on more traditional systems.)

SambaNova on one level competes for enterprise business against companies like Nvidia, Cerebras Systems and Graphcore — another startup in the space which earlier this year also raised a significant round. However, SambaNova has also taken a slightly different approach to the AI challenge.

In December, the startup launched Dataflow-as-a-service, an on-demand, subscription-based way for enterprises to tap into SambaNova’s AI system, letting them focus just on the applications that run on it without having to maintain those systems themselves. It’s the latter that SambaNova will focus on selling and delivering with this latest tranche of funding, Liang said.

SambaNova’s opportunity, Liang believes, lies in selling software-based AI systems to enterprises that are keen to adopt more AI into their business, but might lack the talent and other resources to do so if it requires running and maintaining large systems.

“The market right now has a lot of interest in AI. They are finding they have to transition to this way of competing, and it’s no longer acceptable not to be considering it,” said Liang in an interview.

The problem, he said, is that most AI companies “want to talk chips,” yet many would-be customers will lack the teams and appetite to essentially become technology companies to run those services. “Rather than you coming in and thinking about how to hire scientists and hire and then deploy an AI service, you can now subscribe, and bring in that technology overnight. We’re very proud that our technology is pushing the envelope on cases in the industry.”

To be clear, a company will still need data scientists, just not as many, and specifically not as many dedicating their time to maintaining systems, updating code and other incremental work that comes with managing an end-to-end process.

SambaNova has not disclosed many customers so far — the two reference names it provided to me are both research labs, Argonne National Laboratory and Lawrence Livermore National Laboratory — but Liang noted some typical use cases.

One was in imaging, such as in the healthcare industry, where the company’s technology is being used to help train systems based on high-resolution imagery, along with other healthcare-related work. The coincidentally named Corona supercomputer at the Livermore Lab (it was named after the 2014 lunar eclipse, not the dark cloud of a pandemic that we’re currently living through) is using SambaNova’s technology to help run calculations related to some Covid-19 therapeutic and antiviral compound research, Marshall Choy, the company’s VP of product, told me.

Another set of applications involves building systems around custom language models, for example in specific industries like finance, to process data more quickly. A third is recommendation algorithms, something that appears in most digital services and frankly could always stand to work a little better than it does today. I’m guessing that in the coming months it will release more information about where its technology is being used and by whom.

Liang also would not comment on whether Google and Intel were specifically tapping SambaNova as a partner in their own AI services, but he didn’t rule out the prospect of partnering to go to market. Indeed, both have strong enterprise businesses that span well beyond technology companies, so working with a third party that helps make even their own AI cores more accessible could be an interesting prospect. SambaNova’s DataScale (and the Dataflow-as-a-service system) both work using input from frameworks like PyTorch and TensorFlow, so there is a level of integration already there.

“We’re quite comfortable in collaborating with others in this space,” Liang said. “We think the market will be large and will start segmenting. The opportunity for us is in being able to take hold of some of the hardest problems in a much simpler way on their behalf. That is a very valuable proposition.”

The promise of creating a more accessible AI for businesses is one that has eluded quite a few companies to date, so the prospect of finally cracking that nut is one that appeals to investors.

“SambaNova has created a leading systems architecture that is flexible, efficient and scalable. This provides a holistic software and hardware solution for customers and alleviates the additional complexity driven by single technology component solutions,” said Deep Nishar, Senior Managing Partner at SoftBank Investment Advisers, in a statement. “We are excited to partner with Rodrigo and the SambaNova team to support their mission of bringing advanced AI solutions to organizations globally.”

Arm announces the next generation of its processor architecture

By Frederic Lardinois

Arm today announced Armv9, the next generation of its chip architecture. Its predecessor, Armv8, launched a decade ago, and while it has seen its fair share of changes and updates, the new architecture brings a number of major updates that warrant a shift in version number. Unsurprisingly, Armv9 builds on Armv8 and is backward compatible, but it specifically introduces new security, AI, signal processing and performance features.

Over the last five years, more than 100 billion Arm-based chips have shipped. But Arm believes that its partners will ship over 300 billion in the next decade. We will see the first Armv9-based chips in devices later this year.

Ian Smythe, Arm’s VP of Marketing for its client business, told me that he believes this new architecture will change the way we do computing over the next decade. “We’re going to deliver more performance, we will improve the security capabilities […] and we will enhance the workload capabilities because of the shift that we see in compute that’s taking place,” he said. “The reason that we’ve taken these steps is to look at how we provide the best experience out there for handling the explosion of data and the need to process it and the need to move it and the need to protect it.”

That neatly sums up the core philosophy behind these updates. On the security side, Armv9 will introduce Arm’s Confidential Compute Architecture and the concept of Realms. Realms enable developers to write applications in which data is shielded from the operating system and other apps on the device. Using Realms, a business application could shield sensitive data and code from the rest of the device, for example.

Image Credits: Arm

“What we’re doing with the Arm Confidential Compute Architecture is worrying about the fact that all of our computing is running on the computing infrastructure of operating systems and hypervisors,” Richard Grisenthwaite, the chief architect at Arm, told me. “That code is quite complex and therefore could be penetrated if things go wrong. And it’s in an incredibly trusted position, so we’re moving some of the workloads so that [they are] running on a vastly smaller piece of code. Only the Realm manager is the thing that’s actually capable of seeing your data while it’s in action. And that would be on the order of about a 10th of the size of a normal hypervisor and much smaller still than an operating system.”

As Grisenthwaite noted, it took Arm a few years to work out the details of this security architecture and ensure that it is robust enough — and during that time Spectre and Meltdown appeared, too, and set back some of Arm’s initial work because some of the solutions it was working on would’ve been vulnerable to similar attacks.
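
Arm’s announcement stays at the architecture level and includes no application code, so the following is only a hypothetical C sketch of the programming model Grisenthwaite describes: sensitive data and code are handed to a small realm manager, and the result is all the rest of the device ever sees. Every name below (realm_t, realm_enter, realm_compute) is invented for illustration and is not Arm’s actual Confidential Compute Architecture API; the “isolation” here is simulated with ordinary memory.

#include <stdio.h>
#include <string.h>

/* Hypothetical realm handle; in the real design only a small realm manager
 * (roughly a tenth the size of a hypervisor, per Arm) can see its contents. */
typedef struct {
    unsigned char shielded[64];   /* stand-in for memory the OS cannot read */
} realm_t;

/* Stub: copy sensitive data into the realm. A real implementation would be
 * a call into the realm manager, not a plain memcpy. */
static int realm_enter(realm_t *r, const void *secret, size_t len)
{
    if (len > sizeof r->shielded) return -1;
    memcpy(r->shielded, secret, len);
    return 0;
}

/* Stub: run some confidential computation and return only the result. */
static int realm_compute(const realm_t *r, double *risk_score)
{
    *risk_score = (double)(r->shielded[0] % 10);   /* placeholder "model" */
    return 0;
}

int main(void)
{
    realm_t r;
    const char card[] = "4111-1111-1111-1111";     /* sensitive business data */
    double score = 0.0;

    /* The app hands the data to the realm and only ever sees the score;
     * the operating system and other apps never see the card number. */
    if (realm_enter(&r, card, sizeof card) == 0 &&
        realm_compute(&r, &score) == 0)
        printf("risk score: %.1f\n", score);
    return 0;
}

The real mechanism relies on hardware-enforced memory isolation managed by the realm manager, not on application code like this; the sketch only shows where the trust boundary would sit from the app’s point of view.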

Image Credits: Arm

Unsurprisingly, another area the team focused on was enhancing the CPU’s AI capabilities. AI workloads are now ubiquitous. Arm had already introduced its Scalable Vector Extension (SVE) a few years ago, but at the time, it was meant for high-performance computing solutions like the Arm-powered Fugaku supercomputer.

Now, Arm is introducing SVE2 to enable more AI and digital signal processing (DSP) capabilities. Those can be used for image processing workloads, as well as other IoT and smart home solutions, for example. There are, of course, dedicated AI chips on the market now, but Arm believes that the entire computing stack needs to be optimized for these workloads and that there are a lot of use cases where the CPU is the right choice for them, especially for smaller workloads.
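
The announcement doesn’t include sample code, but the flavour of SVE-style programming is easy to show with the existing Arm C Language Extensions (the arm_sve.h header). The sketch below is a vector-length-agnostic dot product, the kind of DSP/ML inner loop SVE2 is meant to bring to a broader range of devices; it is a minimal illustration of base SVE intrinsics rather than Armv9-specific code, and it needs an SVE-capable compiler and CPU (for example, building with -march=armv8-a+sve).

#include <arm_sve.h>   /* Arm C Language Extensions for SVE */
#include <stdint.h>
#include <stdio.h>

/* Vector-length-agnostic dot product: the loop asks the hardware how many
 * 32-bit lanes it has (svcntw) and masks off the tail with a predicate. */
static float dot_sve(const float *a, const float *b, int64_t n)
{
    svfloat32_t acc = svdup_n_f32(0.0f);
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32_s64(i, n);      /* active lanes only   */
        svfloat32_t va = svld1_f32(pg, &a[i]);
        svfloat32_t vb = svld1_f32(pg, &b[i]);
        acc = svmla_f32_m(pg, acc, va, vb);         /* acc += a*b per lane */
    }
    return svaddv_f32(svptrue_b32(), acc);          /* horizontal sum      */
}

int main(void)
{
    float a[5] = {1, 2, 3, 4, 5}, b[5] = {1, 1, 1, 1, 1};
    printf("%.1f\n", dot_sve(a, b, 5));             /* prints 15.0 */
    return 0;
}

The point of the predicate is that the same binary adapts to whatever vector width the silicon implements, from 128 to 2048 bits, which is what lets one code path span phones, IoT devices and supercomputers like Fugaku.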

“We regard machine learning as appearing in just about everything. It’s going to be done in GPUs, it’s going to be done in dedicated processors, neural processors, and also done in our CPUs. And it’s really important that we make all of these different components better at doing machine learning,” Grisenthwaite said.

As for raw performance, Arm believes its new architecture will allow chip manufacturers to gain more than 30% in compute power over the next two chip generations, not only for mobile CPUs but also for the kind of infrastructure CPUs that large cloud vendors like AWS now offer their users.

“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow’s mobile communications devices,” said Min Goo Kim, the executive vice president of SoC development at Samsung Electronics. “As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors.”
