Waymo, the self-driving car company under Alphabet, has been testing in the suburbs of Phoenix for several years now. And while the sunny metropolis might seem like the ideal and easiest location to test autonomous vehicle technology, there are times when the desert becomes a dangerous place for any driver — human or computer.
The two big safety concerns in this desert region are sudden downpours that cause flash floods and haboobs, giant walls of dust between 1,500 and 3,000 feet high that can cover up to 100 square miles. One record-breaking haboob in July 2011 covered the entire Phoenix valley, an area of more than 517 square miles.
Waymo on Friday released a blog post with two videos showing how the sensors on its self-driving vehicles detect and recognize objects while navigating through a haboob in Phoenix and fog in San Francisco. The vehicle in Phoenix was manually driven, while the one in the fog video was in autonomous mode.
The point of the videos, Waymo says, is to show whether, and how, the vehicles recognize objects during these extreme low-visibility moments. And they do. The haboob video shows the sensors identifying a pedestrian crossing a street with little to no visibility.
Waymo uses a combination of lidar, radar and cameras to detect and identify objects. Fog, rain or dust can limit visibility in all or some of these sensors.
Waymo doesn’t silo the sensors affected by a particular weather event. Instead, it continues to take in data from all the sensors, even those that don’t function as well in fog or dust, and uses that collective information to better identify objects.
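Waymo's actual fusion pipeline is not public, but the idea of down-weighting, rather than discarding, degraded sensors can be sketched roughly like this; the sensor names, weights and confidence values below are all hypothetical:

```python
# Rough sketch of confidence-weighted sensor fusion in low visibility.
# Waymo's real pipeline is not public; the sensor names, weights and
# confidence values here are hypothetical.

def fuse_detections(detections, weights):
    """Combine per-sensor confidence scores for one candidate object.

    detections: dict mapping sensor name -> detection confidence in [0, 1]
    weights:    dict mapping sensor name -> trust in that sensor, lowered
                for sensors degraded by the current weather
    Returns a single fused confidence in [0, 1].
    """
    total_weight = sum(weights[s] for s in detections)
    if total_weight == 0:
        return 0.0
    return sum(weights[s] * conf for s, conf in detections.items()) / total_weight

# In dust, the camera is heavily degraded but still contributes a little;
# radar penetrates dust well, so it is trusted most.
dust_weights = {"lidar": 0.6, "radar": 1.0, "camera": 0.3}
pedestrian = {"lidar": 0.55, "radar": 0.80, "camera": 0.20}

fused = fuse_detections(pedestrian, dust_weights)
print(f"fused confidence: {fused:.2f}")
```

Even with the camera nearly blinded, the weighted combination can still push a pedestrian detection above a decision threshold, which is the point of not siloing the sensors.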
Autonomous vehicles have the potential to improve on visibility, one of the greatest performance limitations of humans, Debbie Hersman, Waymo's chief safety officer, wrote in the blog post. If Waymo or other AV companies are successful, they could help reduce one of the leading contributors to crashes. The Department of Transportation estimates that weather contributes to 21% of annual U.S. crashes.
Still, there are times when even an autonomous vehicle doesn't belong on the road. It's critical for any company planning to deploy AVs to have a system that can not only identify worsening conditions but also take the safest action in response.
Waymo vehicles are designed to automatically detect sudden extreme weather changes, such as a snowstorm, that could impact the ability of a human or an AV to drive safely, according to Hersman.
The question is what happens next. During a haboob, human drivers are supposed to pull off the road and turn off the vehicle; similar guidance applies in heavy fog. Waymo's self-driving vehicles will do the same if weather conditions deteriorate to the point that the company believes safe operation of its cars would be affected, Hersman wrote.
The videos and blog post are the latest effort by Waymo to showcase how and where it’s testing. The company announced August 20 that it has started testing how its sensors handle heavy rain in Florida. The move to Florida will focus on data collection and testing sensors; the vehicles will be manually driven for now.
Waymo also tests (or has tested) its technology in and around Mountain View, Calif.; Novi, Mich.; Kirkland, Wash.; and San Francisco. The bulk of the company's activities have been in the suburbs of Phoenix and around Mountain View.
Porsche’s venture arm has acquired a minority stake in TriEye, an Israeli startup that’s working on a sensor technology to help vehicle driver-assistance and self-driving systems see better in poor weather conditions like dust, fog and rain.
The strategic investment is part of a Series A financing round that has been expanded to $19 million. The round was initially led by Intel Capital and Israeli venture fund Grove Ventures. Porsche has held shares in Grove Ventures since 2017.
TriEye has raised $22 million to date. Terms of Porsche’s investment were not disclosed.
The additional funding will be used for ongoing product development, operations and hiring talent, according to TriEye.
The advanced driver-assistance systems found in most new vehicles today typically rely on a combination of cameras and radar to "see." Autonomous vehicle systems, which are being developed and tested by dozens of companies such as Argo AI, Aptiv, Aurora, Cruise and Waymo, have a more robust suite of sensors that includes light detection and ranging sensors (lidar) along with cameras and ultrasonic sensors.
For either of these systems to function properly, they need to be able to see in all conditions. This pursuit of sensor technology has sparked a boom in startups hoping to tap into demand from automakers and companies working on self-driving car systems.
TriEye is one of them. The premise of TriEye is to solve the low-visibility problem created by poor weather conditions. The startup's co-founders argue that fusing existing sensors such as radar, lidar and standard cameras doesn't solve this problem.
TriEye, which was founded in 2017, believes the answer is short-wave infrared (SWIR) sensors. The startup said it has developed an HD SWIR camera that is smaller, higher-resolution and cheaper than other technologies. The camera is due to launch in 2020.
The technology is based on advanced nano-photonics research by Uriel Levy, a TriEye co-founder and CTO who is also a professor at the Hebrew University of Jerusalem.
The company says its secret sauce is its “unique” semiconductor design that will make it possible to manufacture SWIR HD cameras at a “fraction of their current cost.”
TriEye’s technology was apparently good enough to get Porsche’s attention.
Michael Steiner, a Porsche AG board member focused on R&D, said the technology was promising, as was the team, which comprises people with expertise in deep learning, nano-photonics and semiconductor components.
“We see great potential in this sensor technology that paves the way for the next generation of driver assistance systems and autonomous driving functions,” Steiner said in a statement. “SWIR can be a key element: it offers enhanced safety at a competitive price.”
UPS said Thursday it has taken a minority stake in self-driving truck startup TuSimple just months after the two companies began testing the use of autonomous trucks in Arizona.
The size of the minority investment, which was made by the company's venture arm, UPS Ventures, was not disclosed. The investment and the testing come as UPS looks for new ways to remain competitive, cut costs and boost its bottom line.
TuSimple, which launched in 2015 and has operations in San Diego and Tucson, Arizona, believes it can deliver. The startup says it can cut average purchased transportation costs by 30%.
TuSimple, which is backed by Nvidia, ZP Capital and Sina Corp., is working on a “full-stack solution,” a wonky industry term that means developing and bringing together all of the technological pieces required for autonomous driving. TuSimple is developing a Level 4 system, a designation by the SAE that means the vehicle takes over all of the driving in certain conditions.
An important piece of TuSimple’s approach is its camera-centric perception solution. TuSimple’s camera-based system has a vision range of 1,000 meters, the company says.
The days when highways will be filled with autonomous trucks are years away. But UPS believes it's worth jumping in at an early stage to take advantage of some of the automated driving technology, such as advanced braking, that TuSimple can offer today.
“UPS is committed to developing and deploying technologies that enable us to operate our global logistics network more efficiently,” Scott Price, chief strategy officer at UPS said in a statement. “While fully autonomous, driverless vehicles still have development and regulatory work ahead, we are excited by the advances in braking and other technologies that companies like TuSimple are mastering. All of these technologies offer significant safety and other benefits that will be realized long before the full vision of autonomous vehicles is brought to fruition — and UPS will be there, as a leader implementing these new technologies in our fleet.”
UPS initially tapped TuSimple to help it better understand how Level 4 autonomous trucking might function within its network. That relationship expanded in May when the companies began using self-driving tractor-trailers to carry freight on a route between Tucson and Phoenix to test whether service and efficiency in the UPS network can be improved. This testing is ongoing. All of TuSimple's self-driving trucks operating in the U.S. have a safety driver and an engineer in the cab.
TuSimple and UPS monitor all aspects of these trips, including safety data, transport time and the distance and time the trucks travel autonomously, the companies said Thursday.
UPS isn't the only company that TuSimple is hauling freight for as part of its testing. TuSimple has said it's hauling loads for several customers in Arizona. The startup has a post-money valuation of $1.095 billion (aka unicorn status).
In two years, Voyage has gone from a tiny self-driving car upstart spun out of Udacity to a company able to operate on 200 miles of roads in retirement communities.
Now, Voyage is on the verge of introducing a new vehicle that is critical to its mission of launching a truly driverless ride-hailing service. (Human safety drivers not included.)
This internal milestone, which Voyage CEO Oliver Cameron hinted at in a recent Medium post, went largely unnoticed. Voyage, after all, is just a 55-person speck of a startup in an industry where the leading companies have amassed hundreds of engineers backed by war chests of $1 billion or more. Voyage has raised just $23.6 million from investors that include Khosla Ventures, CRV, Initialized Capital and the venture arm of Jaguar Land Rover.
Still, the die has yet to be cast in this burgeoning industry of autonomous vehicle technology. These are the middle-school years for autonomous vehicles — a time when size can be misinterpreted for maturity and change occurs in unpredictable bursts.
The upshot? It’s still unclear which companies will solve the technical and business puzzles of autonomous vehicles. There will be companies that successfully launch robotaxis and still fail to turn their service into a profitable commercial enterprise. And there will be operationally savvy companies that fail to develop and validate the technology to a point where human drivers can be removed.
Voyage wants to unlock both.
China’s EHang, a company focused on developing and deploying autonomous passenger and freight low-altitude vehicles, will build out its first operational network of air taxis and transports in Guangzhou. The company announced that the Chinese city would play host to its pilot location for a citywide deployment.
The pilot will focus not only on showing that low-altitude, rotor-powered aircraft make sense for use in cities, but that a whole network of them can operate autonomously in concert, controlled and monitored by a central traffic management hub that EHang will develop together with the local Guangzhou government.
EHang, which was chosen at the beginning of this year by China's Civil Aviation Administration as the sole pilot company able to build out autonomous flying passenger vehicle services, has already demonstrated passenger-carrying flights of its EHang 184 vehicles in Vienna earlier this year, and ran a number of flights in Guangzhou in 2018 as well.
In addition to developing the air traffic control system to ensure these vehicles can operate safely as a fleet in the air above the city, EHang will work with Guangzhou to build out the infrastructure needed to operate the network. The plan is to use the initial stages of the pilot to continue testing the vehicles, as well as the vertiports needed to support their operation, and then to work with commercial partners, starting with goods transportation.
The benefits of such a network will be especially valuable for cities like Guangzhou, where rapid growth has led to plenty of traffic and high density at the ground level. It could also potentially have advantages over a network of autonomous cars or wheeled vehicles, since those still have to contend with ground traffic, pedestrians, cyclists and other vehicles in order to operate, while the low-altitude air above a city is more or less unoccupied.
Self-driving startup Optimus Ride will become the first to operate a commercial self-driving service in the state of New York – in Brooklyn. But don't expect these shuttles to be contending with pedestrians, bike riders, taxis and cars on New York's busiest roads; instead, they'll offer shuttle service within the Brooklyn Navy Yard, a 300-acre private commercial development.
The Optimus Ride autonomous vehicles, which seat six passengers across three rows and, at least for now, always carry both a safety driver and an Optimus staff observer, will offer free service seven days a week on a loop covering the entire complex. The route includes a stop at a new on-site ferry landing, which means many commuters should be able to easily grab a seat for their last-mile needs.
Optimus Ride's shuttles have been in operation at a number of sites across the U.S., including in Massachusetts (Boston), Virginia and California.
The Brooklyn Navy Yard is a perfect environment for the service: it plays host to some 10,000 workers but consists entirely of private roads, which means Optimus Ride doesn't need to worry about public road rules and regulations in deploying a commercial self-driving service.
May Mobility, an Ann Arbor-based startup also focused on low-speed autonomous shuttles, has deployed in partnership with some smaller cities and on defined bus route paths. The approach of both companies is similar, using relatively simple vehicle designs and serving low-volume ridership in areas where traffic and pedestrian patterns are relatively easy to anticipate.
Commercially viable, fully autonomous robotaxi service for dense urban areas is still a long, long way off – and definitely out of reach for startup and smaller companies in the near-term. Tackling commercial service in controlled environments on a smaller scale is a great way to build the business while bringing in revenue and offering actual value to paying customers at the same time.
Musk is due to speak at an AI conference, called the World Artificial Intelligence Conference, taking place in Shanghai on August 29-31. Replying to a tweet about the event, he announced: "Will also be launching The Boring Company China on this trip."
Will also be launching The Boring Company China on this trip
— E (@elonmusk) August 3, 2019
Another Twitter user chipped into the conversation to ask whether the company would also do underwater tunnels — to which Musk replied simply "yes."
A securities filing last month revealed that The Boring Company had raised its first outside investment via the sale of $120M in stock. So the company has some extra cash sloshing around to plough into new ventures.
It also recently landed its first commercial contract: $48.7M to build and operate an underground “people mover” in Las Vegas, focused on the Las Vegas Convention Center.
This underground 'people mover' is not, as you might imagine, a tried and tested metro train system. The plan apparently involves building two tunnels: one for vehicles (Musk does also sell electric cars) and a second for pedestrians, who will be carried in (modified) Tesla cars. The latter would be fully autonomous, under the plan.
Current-generation Teslas are not capable of driving themselves, merely offering driving-assistance features to humans. But autonomous driving inside a tunnel is about as controlled an environment as you could hope for — without, y'know, sticking cars together on rails and making a driverless train (like the one that's been serving London's Docklands area since 1987).
The Las Vegas contract specifies three months of safety testing before Musk’s modified Teslas will be allowed to whisk people through the tunnel.
Another design that The Boring Company has proposed — for an ambitious Loop system from Washington, D.C. to Baltimore — is still on the drawing board, having drawn major safety concerns for failing to meet several key national safety standards, including insufficient emergency exits and a failure to reflect the latest engineering practices.
So perhaps, in looking to expand The Boring Company by taking his spade to the Far East, Musk is hoping for a more accommodating set of building standards to drive an electric truck through.
Gatik AI, the autonomous vehicle startup that's aiming for the sweet middle spot in the world of logistics, is officially on the road through a partnership with Walmart.
The company received approval from the Arkansas Highway Commissioner's office to launch a commercial service with Walmart. Gatik's autonomous vehicles (with a human safety driver behind the wheel) are now delivering customers' online grocery orders from Walmart's main warehouse to its neighborhood stores in Bentonville, Arkansas.
The AVs will aim to travel seven days a week on a two-mile route — the tiniest of slivers of Walmart's overall business. But the goal here isn't ubiquity just yet. Instead, Walmart is using this project to capture the kind of data that will help it learn how best to integrate autonomous vehicles into its stores and services.
Gatik uses Ford Transit vehicles outfitted with a self-driving system. Co-founder and CEO Gautam Narang has previously told TechCrunch that the company can fulfill a need in the market through a variety of use cases, including partnering with third-party logistics giants like Amazon, FedEx or even the U.S. Postal Service, auto part distributors, consumer goods, food and beverage distributors as well as medical and pharmaceutical companies.
The company, which emerged from stealth in June, has raised $4.5 million in a seed round led by former CEO and executive chairman of Google Eric Schmidt’s Innovation Endeavors. Other investors include AngelPad, Dynamo Fund, Fontinalis Partners, Trucks Venture Capital and angel investor Lior Ron, who heads Uber Freight.
Gatik isn’t the only AV company working with Walmart. Walmart has partnerships with Waymo and Udelv. Both of these partnerships involve pilot programs in Arizona.
Udelv is testing the use of autonomous vans to deliver online grocery orders to customers. Last year, members of Waymo’s early rider program received grocery savings when they shopped from Walmart.com. The riders would then take a Waymo car to their nearby Walmart store for grocery pickup.
Earlier this month, TechCrunch held its annual Mobility Sessions event, where leading mobility-focused auto companies, startups, executives and thought leaders joined us to discuss all things autonomous vehicle technology, micromobility and electric vehicles.
Extra Crunch is offering members access to full transcripts of key panels and conversations from the event, such as Megan Rose Dickey‘s chat with Voyage CEO and cofounder Oliver Cameron and Uber’s prediction team lead Clark Haynes on the ethical considerations for autonomous vehicles.
Megan, Oliver and Clark talk through how companies should be thinking about ethics when building out the self-driving ecosystem, while also diving into the technical aspects of actually building an ethical transportation product. The panelists also discuss how their respective organizations handle ethics, representation and access internally, and how their approaches have benefitted their offerings.
Clark Haynes: So we as human drivers, we’re naturally what’s called foveate. Our eyes go forward and we have some mirrors that help us get some situational awareness. Self-driving cars don’t have that problem. Self-driving cars are designed with 360-degree sensors. They can see everything around them.
But the interesting problem is not everything around you is important. And so you need to be thinking through what are the things, the people, the actors in the world that you might be interacting with, and then really, really think through possible outcomes there.
I work on the prediction problem of what’s everyone doing? Certainly, you need to know that someone behind you is moving in a certain way in a certain direction. But maybe that thing that you’re not really certain what it is that’s up in front of you, that’s the thing where you need to be rolling out 10, 20 different scenarios of what might happen and make certain that you can kind of hedge your bets against all of those.
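The "roll out scenarios and hedge your bets" approach Haynes describes can be sketched roughly as follows. This is a toy illustration, not Uber's planner; the intent labels, risk table and action names are invented for the example:

```python
import random

# Toy sketch of "roll out many scenarios and hedge": sample hypotheses for an
# ambiguous actor, then pick the ego action with the lowest worst-case risk.
# The intents, risk table and action names are invented for illustration.

INTENTS = ["stays_put", "crosses_road", "turns_away"]

def sample_scenarios(n, seed=0):
    """Sample n hypotheses for what the uncertain actor might do next."""
    rng = random.Random(seed)
    return [rng.choice(INTENTS) for _ in range(n)]

# Risk each candidate ego action incurs under each scenario (lower is safer).
RISK = {
    "maintain_speed": {"stays_put": 0.0, "crosses_road": 1.0, "turns_away": 0.1},
    "slow_down":      {"stays_put": 0.1, "crosses_road": 0.3, "turns_away": 0.1},
}

def pick_action(scenarios):
    """Choose the action whose worst-case risk across the rollouts is lowest."""
    return min(RISK, key=lambda action: max(RISK[action][s] for s in scenarios))

scenarios = sample_scenarios(20)
print(pick_action(scenarios))
```

If even one of the 20 rollouts has the actor crossing the road, the minimax choice is to slow down, which is the hedging behavior Haynes is describing.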
For access to the full transcription below and for the opportunity to read through additional event transcripts and recaps, become a member of Extra Crunch. Learn more and try it for free.
Megan Rose Dickey: Ready to talk some ethics?
Oliver Cameron: Born ready.
Clark Haynes: Absolutely.
Rose Dickey: I’m here with Oliver Cameron of Voyage, a self-driving car company that operates in communities, like retirement communities, for example. And with Clark Haynes of Uber, he’s on the prediction team for autonomous vehicles.
So some of you in the audience may remember, it was last October, MIT came out with something called the Moral Machine. And it essentially laid out 13 different scenarios involving self-driving cars where essentially someone had to die. It was either the old person or the young person, the black person or the white person, three people versus one person. I'm sure you guys saw that, too.
So why is that not exactly the right way to be thinking about self-driving cars and ethics?
Haynes: This is the often-overused trolley problem of, “You can only do A or B choose one.” The big thing there is that if you’re actually faced with that as the hardest problem that you’re doing right now, you’ve already failed.
You should have been working harder to make certain you never ended up in a situation where you’re just choosing A or B. You should actually have been, a long time ago, looking at A, B, C, D, E, F, G, and like thinking through all possible outcomes as far as what your self-driving car could do, in low probability outcomes that might be happening.
Rose Dickey: Oliver, I remember actually, it was maybe a few months ago, you tweeted something about the trolley problem and how much you hate it.
Cameron: I think it’s one of those questions that doesn’t have an ideal answer today, because no one’s got self-driving cars deployed to tens of thousands of people experiencing these sorts of issues on the road. If we did an experiment, how many people here have ever faced that conundrum? Where they have to choose between a mother pushing a stroller with a child and a regular, normal person that’s just crossing the road?
Rose Dickey: We could have a quick show of hands. Has anyone been in that situation?
Earlier this month, TechCrunch held its annual Mobility Sessions event, where leading mobility-focused auto companies, startups, executives and thought leaders joined us to discuss all things autonomous vehicle technology, micromobility and electric vehicles.
Extra Crunch is offering members access to full transcripts of key panels and conversations from the event, including our panel on micromobility where TechCrunch VC reporter Kate Clark was joined by investors Sarah Smith of Bain Capital Ventures, Michael Granoff of Maniv Mobility, and Ted Serbinski of TechStars Detroit.
The panelists walk through their mobility investment theses and how they’ve changed over the last few years. The group also compares the business models of scooters, e-bikes, e-motorcycles, rideshare and more, while discussing Uber and Lyft’s role in tomorrow’s mobility ecosystem.
Sarah Smith: It was very clear last summer, that there was essentially a near-vertical demand curve developing with consumer adoption of scooters. E-bikes had been around, but scooters, for Lime just to give you perspective, had only hit the road in February. So by the time we were really looking at things, they only had really six months of data. But we could look at the traction and the adoption, and really just what this was doing for consumers.
At the time, consumers had learned through Uber and Lyft and others that you can just grab your cell phone and press a button, and that equates to transportation. And then we see through the sharing economy like Airbnb, people don’t necessarily expect to own every single asset that they use throughout the day. So there’s this confluence of a lot of different consumer trends that suggested that this wasn’t just a fad. This wasn’t something that was going to go away.
For access to the full transcription below and for the opportunity to read through additional event transcripts and recaps, become a member of Extra Crunch. Learn more and try it for free.
Kate Clark: One of the first panels of the day, I think we should take a moment to define mobility. As VCs in this space, how do you define this always-evolving sector?
Michael Granoff: Well, the way I like to put it is that there have been four eras in mobility. The first was walking and we did that for thousands of years. Then we harnessed animal power for thousands of years.
And then there was a date — and I saw Ken Washington from Ford here — September 1st, 1908, which was when the Model T came out. And through the next 100 years, mobility was really defined as the personally owned and operated internal combustion engine car.
And what’s interesting is to go exactly 100 years later, September 2008, the financial crisis that affects the auto industry tremendously, but also a time where we had the first third-party apps, and you had Waze and you had Uber, and then you had Lime and Bird, and so forth. And really, I think what we’re in now is the age of digital mobility and I think that’s what defines what this day is about.
Ted Serbinski: Yeah, I think just to add to that, I think mobility is the movement of people and goods. But that last part of digital mobility, I really look at the intersection of the physical and digital worlds. And it’s really that intersection, which is enabling all these new ways to move around.
Clark: So Ted you run TechStars Detroit, but it was once known as TechStars Mobility. So why did you decide to drop the mobility?
Serbinski: So I’m at a mobility conference, and we no longer call ourselves mobility. So five years ago, when we launched the mobility program at TechStars, we were working very closely with Ford’s group and at the time, five years ago, 2014, where it started with the connected car, auto and [people saying] “you should use the word mobility.”
And I was like “What does that mean?” And so when we launched TechStars Mobility, we got all this stuff but we were like “this isn’t what we’re looking for. What does this word mean?” And then Cruise gets acquired for a billion dollars. And everyone’s like “Mobility! This is the next big gold rush! Mobility, mobility, mobility!”
And because I invest in early-stage companies anywhere in the world, what started to happen last year is we'd be going after a company and they'd say, "well, we're not interested in your program. We're not mobility." And I'd be scratching my head like, "No, you are mobility. This is where the future is going. You're this digital way of moving around. And no, we're artificial intelligence, we're robotics."
And as we started talking to more and more entrepreneurs, and hundreds of startups around the world, it became pretty clear that the word mobility is actually becoming too limiting, depending on your vantage point in the world.
And so this year, we actually dropped the word mobility and we just call it TechStars Detroit, and it's really just the intersection of those physical and digital worlds. And so now we don't have a word, but I think we found more mobility companies by dropping the word mobility.
Fully self-driving passenger cars are not “just around the corner.” While the well-capitalized leaders — funded by corporations, multibillion-dollar VC funds or advertising revenue — are on more stable financial ground, many other full-stack autonomous vehicle startups may be looking for the off-ramp.
With no clear path to funds outside of venture capital, full-stack startups face two options: 1) get acquired for the talent and technology or 2) close shop. Cruise and Argo AI were big startup exits. Daimler Trucks acquired Torc Robotics (which did not follow the VC-startup model). And nuTonomy was marketed as a $450 million acquisition by Delphi/Aptiv.
But the most recent VC-backed valuations for some AV startups have stagnated at or below the $450 million mark, which doesn’t give much upside from their previous valuations in the height of the AV fervor. Without much further upside, it is more likely that many passenger car AV companies will close shop.
Full-stack autonomous passenger vehicle startups are dead.
Passenger car autonomy projects attracted a lot of capital and top talent in the past decade and produced tremendous technological advances in autonomous perception, path planning and control. What happens to the talent and technology when the passenger AV bubble bursts?
Well, there are more vehicles than just passenger cars. The DARPA Grand Challenge held over a decade ago is cited as the catalyst behind the GoogleX self-driving car project and the explosion of passenger car AVs. The advances made during the challenges also spilled over to off-highway vehicles. Since then, autonomous vehicles have been developed and deployed in defense as well as commercially in large-scale agriculture and mining.
It is widely observed that industrial, agriculture, construction and mining applications are better suited for near-term autonomy. There are defined automation tasks with clear ROI, there are fewer human-machine interactions and there are geo-fenced areas that bound the operational and safety requirements. These are simply more controlled environments than on city streets. Automation also can help offset critical labor shortages. It is difficult to attract a workforce at remote mines in the middle of vast deserts. Labor shortages for agriculture add tremendous uncertainty for growers who don’t know if they will be able to prepare and harvest their crops during short time windows.
With the help of those DARPA participants, Caterpillar developed semi- and fully autonomous haulage trucks and announced they have hauled more than 1 billion tons of material. Komatsu followed a day later by announcing that it had reached the 2 billion ton milestone. These haulage trucks are the size of a house. John Deere, Case IH, New Holland and others have developed semi- and fully autonomous tractors on their own, and with the help of R&D companies. Most of these programs have been around for more than a decade now, but their rate of technological progress pales in comparison to that of the recent startup efforts.
From our vantage point as investors, we believe that we will see a similar spillover from the passenger car AV bubble into industrial, agriculture, construction and mining sectors. This will enhance existing autonomous programs, open up new ROI use cases in those sectors and reshape the autonomous vehicle business model in some of the sectors as smaller players gain access to top talent and technology.
The most significant technologies that will spill over into the off-highway vehicle market are machine perception, reinforcement learning for more complex robotic motion planning and functionally safe, mission-critical engineering requirements.
Perception systems deployed on mining and agricultural vehicles are not as cost-constrained as passenger cars. The price tags for some 700-series CAT haulage trucks exceed $5,000,000. These vehicles are equipped with ruggedized lidar, radar, cameras, etc., mostly for safety awareness. Costs of these systems will decline thanks to the cost-constrained designs for sensors driven by the automotive market.
Camera-based inference will allow these vehicles to further understand elements in their environment — allowing them to perform more complex navigational tasks and operations. Sensor fusion may allow agricultural vehicles to deploy optimal inputs to fields or mining vehicles to understand ore characteristics to increase productivity per scoop.
Reinforcement learning allows operators to “teach” algorithms to perform complex tasks and will create new use cases requiring complex robotic actuation. These use cases could include harvesting more than just broad-acre crops, moving dirt on-site, picking and placing construction equipment for staging and much more. These robotic applications can be integrated on top of existing autonomous mobility platforms.
The most important criterion for these startups is an uncompromising approach to robustness and safety. Autonomy only achieves its full potential if the solution works with minimal downtime and improves safety (which is also tied to equipment replacement costs, worker compensation and insurance).
Recognizing these trends, we’ve made an investment into an AV startup that is deploying autonomous systems on Bobcat skid-steer and excavator vehicles in construction and working with large mining operations to automate all vehicles on the mine site.
We’ve also invested in an early-stage agriculture robotics company automating on-field applications that have been, thus far, untouched by automation.
This is only the start. There are many more opportunities in off-highway autonomy, and we’re continuing our search for companies in other off-highway applications.
A team of German researchers has created an automatic landing system for small aircraft that lets them touch down not only without a pilot, but without any of the tech on the ground that lets other planes do it. It could open up a new era of autonomous flight — and make ordinary landings safer to boot.
Now it would be natural to think that with the sophisticated autopilot systems that we have today, a plane could land itself quite easily. And that’s kind of true — but the autoland systems on full-size aircraft aren’t really autonomous. They rely on a set of radio signals emitted by stations only found at major airports: the Instrument Landing System, or ILS.
These signals tell the plane exactly where the runway is even in poor visibility, but even so an “automatic” landing is rarely done. Instead, the pilots — as they do elsewhere — use the autopilot system as an assist, in this case to help them locate the runway and descend properly. A plane can land automatically using ILS and other systems, but it’s rare and even when they do it, it isn’t truly autonomous — it’s more like the airport is flying the plane by wire.
But researchers at Technische Universität München (TUM, or think of it as Munich Tech) have created a system that can land a plane without relying on ground systems at all, and demonstrated it with a pilot on board — or rather, passenger, since he kept his hands in his lap the whole time.
The automated plane comes in for a landing.
A plane making an autonomous landing needs to know exactly where the runway is, naturally, but it can’t rely on GPS — too imprecise — and if it can’t use ILS and other ground systems, what’s left? Well, the computer can find the runway the way pilots do: with its eyes. In this case, both visible-light and infrared cameras on the nose of the plane.
TUM’s tests used a single-passenger plane, a Diamond DA42 that the team outfitted with a custom-designed automatic control system and a computer vision processor, both built for the purpose and together called C2Land. The computer, trained to recognize and characterize a runway using the cameras, put its know-how to work in May, taking the plane in for a flawless landing.
As test pilot Thomas Wimmer put it in a TUM news release: “The cameras already recognize the runway at a great distance from the airport. The system then guides the aircraft through the landing approach on a completely automatic basis and lands it precisely on the runway’s centerline.”
You can see the full flight in the video below.
This is a major milestone in automated flight, since until now planes have had to rely on extensive ground-based systems to perform a landing like this one — which means automated landings aren’t currently possible at smaller airports or should something go wrong with the ILS. A small plane like this one is more likely to be at a small airport with no such system, and should a heavy fog roll in, an autoland system like this might be preferable to a pilot who can’t see in infrared.
Right now the tech is very much still experimental, not even at the level where it could be distributed and tested widely, let alone certified by aviation authorities. But the safety benefits are obvious and even as a backup or augmentation to the existing, rarely used autoland systems it would likely be a welcome addition.
As autonomous cars and robots loom over the landscapes of cities and jobs alike, the technologies that empower them are forming sub-industries of their own. One of those is lidar, which has become an indispensable tool to autonomy, spawning dozens of companies and attracting hundreds of millions in venture funding.
But like all industries built on top of fast-moving technologies, lidar and the sensing business is by definition built somewhat upon a foundation of shifting sands. New research appears weekly advancing the art, and no less frequently are new partnerships minted, as car manufacturers like Audi and BMW scramble to keep ahead of their peers in the emerging autonomy economy.
To compete in the lidar industry means not just to create and follow through on difficult research and engineering, but to be prepared to react with agility as the market shifts in response to trends, regulations, and disasters.
I talked with several CEOs and investors in the lidar space to find out how the industry is changing, how they plan to compete, and what the next few years have in store.
Their opinions and predictions sometimes synced up and at other times diverged completely. For some, the future lies manifestly in partnerships they have already established and hope to nurture, while others feel that it’s too early for automakers to commit, and they’re stringing startups along one non-exclusive contract at a time.
All agreed that the technology itself is obviously important, but not so important that investors will wait forever for engineers to get it out of the lab.
And while some felt a sensor company has no business building a full-stack autonomy solution, others suggested that’s the only way to attract customers navigating a strange new market.
It’s a flourishing market but one, they all agreed, that will experience a major consolidation in the next year. In short, it’s a wild west of ideas, plentiful money, and a bright future — for some.
I’ve previously written an introduction to lidar, but in short, lidar units project lasers out into the world and measure how they are reflected, producing a 3D picture of the environment around them.
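The range measurement underlying that 3D picture is simple arithmetic: the pulse travels to a surface and back, so the distance is the speed of light times the round-trip time, halved. Here's a minimal illustrative sketch (not any particular vendor's implementation):

```python
# Time-of-flight ranging: a lidar pulse travels out and back,
# so range = c * t / 2.
C = 299_792_458  # speed of light in a vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters."""
    return C * round_trip_seconds / 2.0

# A return arriving 1 microsecond after the pulse left corresponds
# to a surface roughly 150 meters away.
print(round(range_from_time_of_flight(1e-6), 1))
```

A lidar unit performs this computation millions of times per second, one pulse per point, which is what produces the dense point cloud.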
This is it. The final call for all the mobility and transportation startuppers who want to save a solid Benjamin on their ticket to the TC Sessions: Mobility 2019 conference in San Jose, Calif. on July 10. The early-bird ticket price disappears tonight, June 14 at 11:59 p.m. (PT). Beat that deadline and buy a ticket — or pay full freight.
Get ready to experience a full day devoted to the revolution that’s taking place within the mobility and transportation industries. More than 1,000 people — the greatest minds, biggest names and influential thinkers, makers and investors — will attend a day packed with interviews, panel discussions, fireside chats, demos and workshops.
Along with TechCrunch editors, speakers will question assumptions and examine complex technological and regulatory issues. They’ll discuss capital investment concerns and look at the ethics and human factors in a future of autonomous cars, delivery robots and flying taxis.
Here’s a small sample of the programming that’s on tap. The event agenda can help you plan your day, although you may have to clone yourself to catch it all.
Building Business and Autonomy: Co-founder and CTO Jesse Levinson will be on hand to talk about Zoox, an independent autonomous vehicle company. Its cars can navigate tricky San Francisco streets — including the iconic, notoriously crooked Lombard Street. We’ll hear how Zoox plans to navigate the challenging road to business success.
The Future of Freight: The trucking industry is in serious trouble, and startups and OEMs are scrambling to come up with a solution. Volvo’s Jenny Elfsberg and Stefan Seltz-Axmacher of Starsky Robotics will join us to debate whether autonomous trucks are the fix we need or if another near-term technology can pave the way to a more efficient and profitable industry.
Will Venture Capital Drive the Future of Mobility? Michael Granoff of Maniv Mobility, Ted Serbinski of Techstars and Bain Capital’s Sarah Smith will debate the uncertain future of mobility tech and whether VC dollars are enough to push the industry forward.
Today’s the last day you can save $100 on your pass to the TC Sessions: Mobility 2019 conference in San Jose, Calif. on July 10. Buy your ticket by 11:59 p.m. (PT) tonight, June 14 or kiss that early bird — and $100 — goodbye.
Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility? Contact our sponsorship sales team by filling out this form.
Cao Xudong turned up on the side of the road in jeans and a black T-shirt printed with the word “Momenta,” the name of his startup.
Before founding the company — which last year topped $1 billion in valuation to become China’s first autonomous driving “unicorn” — he’d already led an enviable life, but he was convinced that autonomous driving would be the real big thing.
Cao isn’t just going for the moonshot of fully autonomous vehicles, which he says could be 20 years away. Instead, he’s taking a two-legged approach of selling semi-automated software while investing in research for next-gen self-driving tech.
Cao, pronounced ‘tsao’, was pursuing his Ph.D. in engineering mechanics when an opportunity came up to work at Microsoft’s fundamental research arm in Asia, widely regarded as the “West Point” for China’s first generation of artificial intelligence experts. He stayed there for more than four years before quitting to get his hands on something more practical: a startup.
“Academic research for AI was getting quite mature at the time,” said now 33-year-old Cao in an interview with TechCrunch, reflecting on his decision to quit Microsoft. “But the industry that puts AI into application had just begun. I believed the industrial wave would be even more extensive and intense than the academic wave that lasted from 2012 to 2015.”
In 2015, Cao joined SenseTime, now the world’s highest-valued AI startup, thanks in part to the lucrative face-recognition technology it sells to the government. During his 17-month stint, Cao built the company’s research division from zero staff into a 100-person team.
Before long, Cao found himself craving a new adventure again. The founder said he doesn’t care about the result as much as the chance to “do something.” That tendency was already evident during his time at the prestigious Tsinghua University, where he was a member of the outdoors club. He wasn’t particularly drawn to hiking, he said, but the opportunity to embrace challenges and be with similarly resilient, daring people was enticing enough.
And if making driverless vehicles would allow him to leave a mark in the world, he’s all in for that.
Cao walked me up to a car outfitted with the cameras and radars you might spot on an autonomous vehicle, with the computer running its software tucked out of sight in the trunk. We hopped in. Our driver picked a route from the high-definition map that Momenta had built, and as soon as we approached the highway, the autonomous mode switched on by itself. The sensors then started feeding real-time data about the surroundings into the map, with which the computer could make decisions on the road.
Momenta staff installing sensors on a test car. / Photo: Momenta
Momenta won’t make cars or hardware, Cao assured. Rather, it gives cars autonomous features by building their brains, or deep-learning capabilities. It’s in effect a so-called Tier 2 supplier, akin to Intel’s Mobileye, that sells to Tier 1 suppliers who actually produce the automotive parts. It also sells directly to original equipment manufacturers (OEMs) that design cars, order parts from suppliers and assemble the final product. Under both circumstances, Momenta works with clients to specify the final piece of software.
Momenta believes this asset-light approach will allow it to develop state-of-the-art driving tech. By selling software to car and parts makers, it not only brings in income but also sources mountains of data, including how and when humans intervene, to train its algorithms at relatively low cost.
The company declined to share who its clients are but said they include top carmakers and Tier 1 suppliers in China and overseas. There won’t be many of them because a “partnership” in the auto sector demands deep, resource-intensive collaboration, so less is believed to be more. What we do know is Momenta counts Daimler AG as a backer. It’s also the first Chinese startup that the Mercedes-Benz parent had ever invested in, though Cao would not disclose whether Daimler is a client.
“Say you operate 10,000 autonomous cars to reap data. That could easily cost you $1 billion a year. 100,000 cars would cost $10 billion, which is a terrifying number for any tech giant,” Cao said. “If you want to acquire seas of data that have a meaningful reach, you have to build a product for the mass market.”
Highway Pilot, the semi-autonomous solution that was controlling our car, is Momenta’s first mass-produced software. More will launch in the coming seasons, including a fully autonomous parking solution and a self-driving robotaxi package for urban use.
In the long run, the startup said it aims to tackle inefficiencies in China’s $44 billion logistics market. People hear about warehousing robots built by Alibaba and JD.com, but overall, China is still on the lower end of logistics efficiency. In 2018, logistics costs accounted for nearly 15 percent of national gross domestic product. In the same year, the World Bank ranked China 26th in its logistics performance index, a global benchmark for efficiency in the industry.
Cao Xudong, co-founder and CEO of Momenta / Photo: Momenta
Cao, an unassuming CEO, raised his voice as he explained the company’s two-legged strategy. The twin approach forms a “closed loop,” a term that Cao repeatedly summoned to describe the company’s competitive edge. Instead of picking between the present and the future, as Waymo does with Level 4 — a designation given to cars that can operate under basic situations without human intervention — and Tesla with semi-autonomous driving, Momenta works on both. It uses revenue-generating businesses like Highway Pilot to fund research in robotaxis, and the sensor data collected from real-life scenarios to feed models in the lab. Results from the lab, in turn, could soup up what gets deployed on public roads.
During the 40-minute ride in midday traffic, our car was able to change lanes, merge into traffic and create distance from reckless drivers by itself, except for one brief moment. Toward the end of the trip, our driver decided to grab the wheel for a lane change as we approached a car dangerously parked in the middle of the exit ramp. Momenta calls this an “interactive lane change,” which it claims is designed to be part of its automated system and, by its strict definition, is not a human “intervention.”
“Human-car interaction will continue to dominate for a long time, perhaps for another 20 years,” Cao noted, adding the setup brings safety to the next level because the car knows exactly what the driver is doing through its inner-cabin cameras.
“For example, if the driver is looking down at their cellphone, the [Momenta] system will alert them to pay attention,” he said.
I wasn’t allowed to film during the ride, so here’s some footage from Momenta to give a sneak peek of its highway solution.
The world is already further along the autonomy spectrum than many of us think. Cao, like a lot of other AI scientists, believes robots will eventually take over the wheel. Alphabet-owned Waymo has been running robotaxis in Arizona for several months now, and smaller startups like Drive.ai are also offering a similar service in Texas.
Despite all the hype and boom in the industry, there remain thorny questions around passenger safety, regulatory schema and a host of other issues for the fast-moving tech. Uber’s fatal self-driving crash last year delayed the company’s future projects and prompted a public backlash. As a Shanghai-based venture capitalist recently suggested to me: “I don’t think humanity is ready for self-driving.”
The biggest problem of the industry, he argued, is not tech-related but social. “Self-driving poses challenges to society’s legal system, culture, ethics and justice.”
Cao is well aware of the contention. He acknowledged that as a company with the power to steer future cars, Momenta has to “bear a lot of responsibility for safety.” As such, he required all executives in the company to ride a certain number of autonomous miles, so if there’s any flaw in the system, the managers will likely stumble across it before the customers do.
“With this policy in place, the management will pay serious attention to system safety,” Cao asserted.
Momenta’s new headquarters in Suzhou, China / Photo: Momenta
In terms of actually designing the software to be reliable and to trace accountability, Momenta appoints an “architect of system research and development,” who essentially is in charge of analyzing the black box of autonomous driving algorithms. A deep learning model has to be “explainable,” said Cao, which is key to finding out what went wrong: Is it the sensor, the computer, or the navigation app that’s not working?
Going forward, Cao said the company is in no rush to make a profit as it is still spending heavily on R&D, but he assured that margins on the software it sells “are high.” The startup is also blessed with sizable funding, which Cao’s resume certainly helped attract, as did those of his co-founders Ren Shaoqing and Xia Yan, who are also alumni of Microsoft Research Asia.
As of last October, Momenta had raised at least $200 million from big-name investors including GGV Capital, Sequoia Capital, Hillhouse Capital, Kai-Fu Lee’s Sinovation Ventures, Lei Jun’s Shunwei Capital, electric vehicle maker NIO’s investment arm, WeChat operator Tencent and the government of Suzhou, which will house Momenta’s new 4,000-square-meter headquarters right next to the city’s high-speed rail station.
When a bullet train speeds past Suzhou, passengers are able to see from their windows Momenta’s recognizable M-shape building, which, in the years to come, might become a new landmark of the historic city in eastern China.
If you’re wild about anything and everything related to mobility and transportation, you do not want to miss the TC Sessions: Mobility 2019 conference in San Jose, Calif. on July 10.
If you’re also wild about saving money, then synchronize your Apple watches — there are 48 hours left to score the early-bird price and save $100. That train leaves the station on Friday, June 14 at 11:59 p.m. (PT), so book your pass now.
More than 1,000 of the industry’s top technologists, founders, investors, engineers and researchers will be there to explore the current and future states of transformational technologies — flying taxis, delivery drones, dockless scooters, autonomous vehicles and more.
World-class speakers and TechCrunch editors will look at both the exciting benefits and the formidable challenges that will ultimately and profoundly affect billions of people around the world. Here’s a taste of what’s coming (you can also check out the full agenda):
Is your company interested in sponsoring or exhibiting at TC Sessions: Mobility? Contact our sponsorship sales team by filling out this form.
Argo AI, the Pittsburgh-based autonomous vehicle startup in which Ford invested $1 billion in 2017, has expanded its testing to downtown Detroit with a new third-generation self-driving car.
Argo AI is developing the virtual driver system and high-definition maps designed for Ford’s self-driving vehicles. The third-generation self-driving vehicle is still based on a Ford Fusion Hybrid like its previous test cars.
This latest iteration has a number of mechanical, sensing, compute and software upgrades that will take the company a step closer to production specification. Importantly, these vehicles also have redundant braking and steering systems that help maintain vehicle motion control in case one of the units stops functioning.
The new cars are equipped with an upgraded sensor suite, including new sets of radar and cameras with higher resolution and higher dynamic range as well as a new computing system that has better processing power and improved thermal management systems, according to Argo AI co-founder and president Peter Rander.
This third-generation vehicle is being used in all the cities where Argo is testing.
Argo AI does much of its testing in Pittsburgh, where it’s based. The company is also testing its autonomous vehicle technology in Austin, Miami, Palo Alto, Washington, D.C., and Dearborn, Michigan. This latest expansion brings Argo’s test vehicles to Detroit, specifically Corktown and sections of downtown around Campus Martius Park.
Corktown is the center of Ford’s electric and autonomous vehicles business plan. The automaker will spend the next several years transforming at least 1.2 million square feet of space in Corktown and ultimately creating a “mobility corridor” that ties hubs of research, testing and development in the academic hub of Ann Arbor to Ford’s Dearborn headquarters, and finally to Detroit.
During the Tesla Annual Shareholder Meeting that took place on Tuesday, Tesla CEO Elon Musk didn’t mince words when he talked about what he thinks of the value proposition of traditional fossil fuel vehicles. He called it “financially insane” to buy any car that isn’t an electric car capable of full autonomy — which, conveniently, currently is the type of vehicle that only Tesla claims to sell.
Musk reiterated a claim he’s made previously about Tesla vehicles: that all of its cars manufactured since October 2016 have everything they need to become fully autonomous, with those built before the release of its new autonomous in-car computer earlier this year needing only a computer swap, exchanging the Nvidia units they shipped with for the new Tesla-built computer.
The Tesla CEO also reiterated his claim from earlier this year that there will be 1 million robotaxis on the road as of next year, noting that it’s easy to arrive at that number if you consider that it includes all Teslas, including Model X, Model S and Model 3 sold between October 2016 and today.
Regarding Tesla’s progress with self-driving, Musk noted that by end of year, Tesla hopes to deliver autonomy such that while you’ll still have to supervise the driving in-car, it’ll get you from your garage to your workplace without intervention. He said that by next year, their goal is the same thing but without requiring supervision, and then some time after that, pending regulatory cooperation, they’ll be able to do full autonomy without anyone on board.
Musk ended this musing with a colorful metaphor, likening buying a car that’s powered by traditional fossil fuel and without any path to self-driving to someone today “riding a horse and using a flip phone.”
Lidar is a critical part of many autonomous cars and robotic systems, but the technology is also evolving quickly. A new company called Sense Photonics just emerged from stealth mode today with a $26M A round, touting a whole new approach that allows for an ultra-wide field of view and (literally) flexible installation.
Still in prototype phase but clearly enough to attract eight figures of investment, Sense Photonics’ lidar doesn’t look dramatically different from others at first, but the changes are both under the hood and, in a way, on both sides of it.
Early popular lidar systems like those from Velodyne use a spinning module that emits and detects infrared laser pulses, finding the range of the surroundings by measuring the light’s time of flight. Subsequent ones have replaced the spinning unit with something less mechanical, like a DLP-type mirror or even metamaterials-based beam steering.
All these systems are “scanning” systems in that they sweep a beam, column, or spot of light across the scene in some structured fashion — faster than we can perceive, but still piece by piece. Few companies, however, have managed to implement what’s called “flash” lidar, which illuminates the whole scene with one giant, well, flash.
That’s what Sense has created, and it claims to have avoided the usual shortcomings of such systems — namely limited resolution and range. Not only that, but by separating the laser emitting part and the sensor that measures the pulses, Sense’s lidar could be simpler to install without redesigning the whole car around it.
I talked with CEO and co-founder Scott Burroughs, a veteran engineer of laser systems, about what makes Sense’s lidar a different animal from the competition.
“It starts with the laser emitter,” he said. “We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.”
These tiny laser elements are stuck on a flexible backing, meaning the array can be curved — providing a vastly improved field of view. Lidar units (except for the 360-degree ones) tend to be around 120 degrees horizontally, since that’s what you can reliably get from a sensor and emitter on a flat plane, and perhaps 50 or 60 degrees vertically.
“We can go as high as 90 degrees for vert, which I think is unprecedented, and as high as 180 degrees for horizontal,” said Burroughs proudly. “And that’s something automakers we’ve talked to have been very excited about.”
Here it is worth mentioning that lidar systems have also begun to bifurcate into long-range, forward-facing lidar (like those from Luminar and Lumotive) for detecting things like obstacles or people 200 meters down the road, and more short-range, wider-field lidar for more immediate situational awareness — a dog behind the vehicle as it backs up, or a car pulling out of a parking spot just a few meters away. Sense’s devices are very much geared toward the second use case.
Particularly because of the second interesting innovation they’ve included: the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers and so on.
The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.
Most lidars output a 3D point cloud, the result of the beam finding millions of points with different ranges. This is a very different form of “image” than a traditional camera, and it can take some work to convert or compare the depths and shapes of a point cloud to a 2D RGB image. Sense’s unit not only outputs a 2D depth map natively, but that data can be synced with a twin camera so the visible light image matches pixel for pixel to the depth map. It saves on computing time and therefore on delay — always a good thing for autonomous platforms.
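The camera-like geometry is what makes that pixel-for-pixel pairing possible: each depth pixel can be back-projected into 3D with the standard pinhole model. As a generic illustration (not Sense's actual pipeline; the focal lengths and principal point here are made-up intrinsics):

```python
import numpy as np

def depth_map_to_points(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (meters) into an HxWx3 array of
    XYZ points using the pinhole camera model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy example: a flat wall 5 m away fills a tiny 4x6 depth map.
depth = np.full((4, 6), 5.0)
points = depth_map_to_points(depth, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
print(points.shape)  # one XYZ triple per depth pixel: (4, 6, 3)
```

Because the depth map and an RGB frame from a co-registered camera share the same pixel grid, each 3D point picks up its color by a simple array lookup, with no point-cloud-to-image matching step.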
The benefits of Sense’s system are manifest, but of course right now the company is still working on getting the first units to production. To that end it has of course raised the $26 million A round, “co-led by Acadia Woods and Congruent Ventures, with participation from a number of other investors, including Prelude Ventures, Samsung Ventures and Shell Ventures,” as the press release puts it.
Cash on hand is always good. But it has also partnered with Infineon and others, including an unnamed tier-1 automotive company, which is no doubt helping shape the first commercial Sense Photonics product. The details will have to wait until later this year when that offering solidifies, and production should start a few months after that — no hard timeline yet, but expect this all before the end of the year.
“We are very appreciative of this strong vote of investor confidence in our team and our technology,” Burroughs said in the press release. “The demand we’ve encountered – even while operating in stealth mode – has been extraordinary.”
In a talk at the Uber Elevate Summit in Washington, D.C., today, U.S. Department of Transportation Secretary Elaine Chao shared a total overall figure for ongoing testing of autonomous vehicles on U.S. roads: More than 1,400 self-driving cars, trucks and other vehicles are currently in testing by more than 80 companies across 36 U.S. states, plus D.C. itself.
This puts some sense of overall scale to the work being done to test and develop self-driving car tech in the U.S. For context, note that California, one of the first states to have implemented AV testing on public roads, currently has 62 companies registered to perform testing — which represents a significant chunk of that 80-plus figure provided by Secretary Chao.
Chao also shared that there are more than 1.59 million registered drones currently in the U.S., of which more than 372,000 are classified as commercial, with more than 136,000 registered commercial drone operators also on the books. That represents a net new job category, Chao noted.
The secretary also later emphasized that the DoT over which she presides and the current administration aim to be “tech neutral, and not command and control” and that the department is not “in the business of picking winners and losers,” something she said the assembled audience of mostly private-sector attendees would be “so pleased to hear.”
Under Chao, the DoT has introduced and continues to overhaul guidelines, rules and programs that favor and unblock industry and commercial access to autonomous driving, drone operation and spacecraft launch capabilities. Recently, Chao has come under fire for potential conflict of interest related to use of her position.