Lyft is adding Chrysler Pacificas to its AV fleet and opening a new dedicated self-driving test facility

By Darrell Etherington

Lyft has another year of building out its autonomous driving program under its belt, and the ride-hailing company has been expanding its testing steadily throughout 2019. The company says that it’s now driving four times more miles on a quarterly basis than it was just six months ago, and has roughly 400 people worldwide dedicated to autonomous vehicle technology development.

Going into next year, it’s also expanding the program by adding a new type of self-driving test car to its fleet: Chrysler’s Pacifica hybrid minivan, which is also the platform of choice for Waymo’s current generation of self-driving car. The Pacifica makes a lot of sense as a ridesharing vehicle, as it’s a perfect passenger car with easy access via the big sliding door and plenty of creature comforts inside. Indeed, Lyft says the Pacifica was chosen specifically because of its “size and functionality” and what those offer the AV team when it comes to “experiment[ing] with the self-driving rideshare experience.” The company is currently building out these test vehicles in order to get them on the road.

Lyft’s choice of vehicle is likely informed by its existing experience with the Pacifica, which it encountered when it began partnering with Waymo back in May on that company’s autonomous vehicle pilot program in Phoenix, Ariz. That ongoing partnership, in which Waymo rides are offered on Lyft’s ride-hailing network, is providing Lyft with plenty of information about how riders experience self-driving ride-hailing, the company says. In addition to Waymo, Lyft is currently partnering with Aptiv on providing self-driving services commercially to the public through that company’s Las Vegas AV deployment.

In addition to adding Pacificas to its fleet alongside the current Ford Fusion test vehicles it has in operation, Lyft is opening a second facility in addition to its Level 5 Engineering Center, the current central hub of its global AV development program. Like the Level 5 Engineering Center, its new dedicated testing facility will be located in Palo Alto, and having the two close together will help “increase the number of tests we run,” according to Lyft. The new test site is designed to host intersections, traffic lights, roadway merges, pedestrian pathways and other features of public roads, all reconfigurable to simulate a wide range of real-world driving scenarios. Already, Lyft uses the GoMentum Station third-party testing facility located in Concord, Calif. for AV testing, and this new dedicated site will complement, rather than replace, its work at GoMentum.

Meanwhile, Lyft also continues to expand its employees’ access to its self-driving service. In 2019 it tripled the self-driving routes available to employees, the company says, and it plans to continue growing the areas covered “rapidly.”

Nio, Intel’s Mobileye partner, to build self-driving electric cars for consumers

By Kirsten Korosec

Mobileye, the Israel-based automotive sensor company acquired by Intel in 2017 for $15.3 billion, is partnering with Chinese electric car startup Nio to develop autonomous vehicles that consumers can buy.

The companies, which describe this as a “strategic collaboration,” aim to bring highly automated and autonomous vehicles to consumer markets in China and “other major territories.”

Under the agreement, Nio will engineer and manufacture a self-driving system designed by Mobileye. The self-driving system will target consumer autonomy — meaning cars people can buy — a departure from the traditional industry approach of developing autonomous vehicles just for ride-hailing services.

Nio will mass produce the system for Mobileye’s driverless ride-hailing services and also plans to integrate the technology into its electric vehicle lines for consumer markets. This variant will target initial release in China, with plans to subsequently expand into other global markets, the companies said.

The self-driving system will be based on Mobileye’s Level 4 AV kit and be engineered for automotive qualification standards, quality, cost and scale, the companies said in a joint statement.

One year ago, Volkswagen Group, Intel’s Mobileye and Champion Motors said they planned to deploy Israel’s first self-driving ride-hailing service in 2019 through a joint venture called New Mobility in Israel. The group was supposed to begin testing this year in Tel Aviv and roll out the service in phases until reaching full commercialization in 2022. (Intel and Mobileye began testing self-driving cars in Jerusalem in May 2018.)

TC Sessions: Mobility returns in 2020

By Kirsten Korosec

TC Sessions: Mobility is returning for a second year on May 14 in San Jose — a day-long event brimming with the best and brightest engineers, policymakers, investors, entrepreneurs and innovators, all of whom are vying to be a part of this new age of transportation.

Companies are racing to deploy autonomous vehicles and flying cars, scale their scooter operations and adjust to headwinds in the vehicle subscription and car-sharing businesses. At the center of the mobility maelstrom is TechCrunch. 

TechCrunch held its inaugural TC Sessions: Mobility event in summer 2019 with a mission to do more than highlight the next new thing. We aimed to dig into the how and why, the cost and impact to cities, people and companies, as well as the numerous challenges that lie along the way, from technological and regulatory to capital and consumer pressures.

We met our goal, and now we’re back to push further with TC Sessions: Mobility 2020.

Attendees of TC Sessions: Mobility can expect interviews with founders, investors and inventors, demos of the latest tech, breakout sessions, dozens of startup exhibits and opportunities to network and recruit.

If you’re wondering what to expect, take a look at some of the speakers we had onstage at the first event:

  • Amnon Shashua, Mobileye, co-founder, president and CEO
  • Dmitri Dolgov, Waymo, CTO
  • Summer Craze Fowler, Argo AI, chief security officer
  • Katie DeWitt, Scoot, VP of Product
  • Karl Iagnemma, Aptiv, president
  • Seleta Reynolds, Los Angeles Department of Transportation, general manager
  • Caroline Samponaro, Lyft, head of Micromobility Policy
  • Ted Serbinski, Techstars, founder and managing director of The Mobility Program
  • Ken Washington, Ford, CTO
  • Sarah Smith, Bain Capital Ventures, partner
  • Dave Ferguson, Nuro, co-founder and president
  • Michael Granoff, Maniv Mobility, founder and managing partner
  • Jesse Levinson, Zoox, CTO and co-founder

In the coming weeks and months, TechCrunch will announce the participants in TC Sessions: Mobility’s fireside chats, panels and workshops.

Tickets are on sale now

Early-bird tickets are available now for $250 — that’s a $100 savings before prices go up. Students can book a ticket for just $50. Book your tickets today.

Speaker Applications

We’re always looking for speakers for our events. Apply here.

Sponsorship Opportunities

Fill out this form and someone from our sales team will get right back to you about sponsorship opportunities for this event.

Waymo’s UX challenge is getting people to enjoy the ride

By Kirsten Korosec

Google has been working on autonomous vehicles — one of the biggest challenges in AI — for more than a decade, but it’s learning that the hardest part might just be getting people to enjoy the ride.

“This is an experience that you can’t really learn from someone else,” Waymo’s Director of Product Saswat Panigrahi told TechCrunch, while explaining the work he oversees around driverless development. “This is truly new.”

The sheer novelty of designing a UX (user experience) for driverless mobility has drawn Waymo away from the hard, science-based technologies where tech giants tend to feel most comfortable. In place of data, sensor and neural net development, Waymo finds its driverless development gated by painstaking research into human factors and behavioral psychology. Despite having made critical decisions precisely to avoid delving into the mysteries of human behavior and interaction, Waymo is finding that such research is an unavoidable challenge on the road to driverless mobility.

“User research has always been a big part of the development process,” said Ryan Powell, the company’s head of UX Research and Design.

In 2012, when the Google Self-Driving Car program was “dogfooding” a highway-only driver assistance system called “AutoPilot,” its in-car cameras found that employees were over-relying on the limited automation in dangerous ways. Videos showed Googlers putting on makeup, using multiple devices and even falling asleep while using a system they’d been told required constant observation; as a result, Google canceled its AutoPilot product plans and focused on fully autonomous driving. “That was a big moment for the user research team because we had a big impact on the work that we were doing at Waymo in terms of making that commitment to Level 4 autonomy,” Powell recalls.

Hailing a driverless ride in a Waymo

By Kirsten Korosec
Ed Niedermeyer, Contributor
Ed Niedermeyer is an author, columnist and co-host of The Autonocast. His book, Ludicrous: The Unvarnished Story of Tesla Motors, was released in August 2019.

“Congrats! This car is all yours, with no one up front,” the pop-up notification from the Waymo One app reads. “This ride will be different. With no one else in the car, Waymo will do all the driving. Enjoy this free ride on us!”

Moments later, an empty Chrysler Pacifica minivan appears and navigates its way to my location near a park in Chandler, the Phoenix suburb where Waymo has been testing its autonomous vehicles since 2016.

Waymo, the Google self-driving-project-turned-Alphabet unit, has given demos of its autonomous vehicles before. More than a dozen journalists experienced driverless rides in 2017 on a closed course at Castle, Waymo’s testing facility; and Steve Mahan, who is legally blind, took a driverless ride in the company’s Firefly prototype on Austin’s city streets way back in 2015.

But this driverless ride is different — and not just because it involved an unprotected left-hand turn and busy city streets, or because the Waymo One app was used to hail the ride. It marks the beginning of a driverless ride-hailing service that is now being used by members of its early rider program and, eventually, the public.

It’s a milestone that has been promised — and has remained just out of reach — for years.

In 2017, Waymo CEO John Krafcik declared on stage at the Lisbon Web Summit that “fully self-driving cars are here.” Krafcik’s show of confidence and accompanying blog post implied that the “race to autonomy” was almost over. But it wasn’t.

Nearly two years after Krafcik’s comments, vehicles driven by humans — not computers — still clog the roads in Phoenix. The majority of Waymo’s fleet of self-driving Chrysler Pacifica minivans in Arizona have human safety drivers behind the wheel; and the few driverless ones have been limited to testing only.

Despite some progress, Waymo’s promise of a driverless future has seemed destined to be forever overshadowed by stagnation. Until now.

Waymo wouldn’t share specific numbers on just how many driverless rides it would be giving, only saying that it continues to ramp up its operations. Here’s what we do know. There are hundreds of customers in its early rider program, all of whom will have access to this offering. These early riders can’t request a fully driverless ride. Instead, they are matched with a driverless car if it’s nearby.

There are, of course, caveats to this milestone. Waymo is conducting these “completely driverless” rides in a controlled geofenced environment. Early rider program members are people who are selected based on what ZIP code they live in and are required to sign NDAs. And the rides are free, at least for now.
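
That geofence is, in practice, just a polygon drawn over the map, and a dispatch system can check whether a rider’s coordinates fall inside it before offering a driverless match. Below is a minimal sketch of the standard ray-casting point-in-polygon test; the service polygon is a made-up stand-in, since Waymo doesn’t publish its boundary.

```python
# Illustrative only: a ray-casting point-in-polygon test, the standard way to
# decide whether a rider's location falls inside a geofenced service area.
# The polygon below is a hypothetical stand-in, not Waymo's actual boundary.

def inside_geofence(lat, lng, polygon):
    """Return True if (lat, lng) lies inside `polygon`, a list of (lat, lng) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        y1, x1 = polygon[i]
        y2, x2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending east from the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lng < x_cross:
                inside = not inside
    return inside

# Hypothetical service polygon roughly around Chandler, Ariz.
service_area = [(33.24, -111.97), (33.24, -111.79), (33.38, -111.79), (33.38, -111.97)]
print(inside_geofence(33.30, -111.87, service_area))  # True: rider can be matched
```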

Still, as I buckle my seatbelt and take stock of the empty driver’s seat, it’s hard not to be struck, at least for a fleeting moment, by the achievement.

It would be a mistake to think that the job is done. This moment marks the start of another, potentially lengthy, chapter in the development of driverless mobility rather than a sign that ubiquitous autonomy is finally at hand.

Futuristic joyride   

A driverless ride sounds like a futuristic joyride, but it’s obvious from the outset that the absence of a human touch presents a wealth of practical and psychological challenges.

As soon as I’m seated, belted and underway, the car automatically calls Waymo’s rider assistance team to address any questions or concerns about the driverless ride — bringing a brief human touch to the experience.

I’ve been riding in autonomous vehicles on public roads since late 2016. All of those rides had human safety drivers behind the wheel. Seeing an empty driver’s seat at 45 miles per hour, or a steering wheel spinning in empty space as it navigates suburban traffic, feels inescapably surreal. The sensation is akin to one of those dreams where everything is the picture of normalcy except for that one detail — the clock with a human face or the cat dressed in boots and walking with a cane.

Other than that niggling feeling that I might wake up at any moment, my 10-minute ride from a park to a coffee shop was very much like any other ride in a “self-driving” car. There were moments when the self-driving system impressed, like the way it caught an unprotected left turn just as the traffic signal turned yellow, or how its acceleration matched surrounding traffic. The vehicle even seemed to have mastered the more human-like driving skill of crawling forward at a stop sign to signal its intent.

Only a few typical quirks, like moments of overly cautious traffic spacing and overactive path planning, betrayed the fact that a computer was in control. A more typical rider, specifically one who doesn’t regularly practice their version of the driving Turing Test, might not have even noticed them.

How safe is ‘safe enough’?

Waymo’s decision to put me in a fully driverless car on public roads at all speaks to the confidence it has in its “driver,” but the company was cagey about the specific source of that confidence.

Waymo’s Director of Product Saswat Panigrahi declined to share how many driverless miles Waymo had accumulated in Chandler, or what specific benchmarks proved that its driver was “safe enough” to handle the risk of a fully driverless ride. Citing the firm’s 10 million real-world miles and 10 billion simulation miles, Panigrahi argued that Waymo’s confidence comes from “a holistic picture.”

“Autonomous driving is complex enough not to rely on a singular metric,” Panigrahi said.

It’s a sensible, albeit frustrating, argument, given that the most significant open question hanging over the autonomous drive space is “how safe is safe enough?” Absent more details, it’s hard to say if my driverless ride reflects a significant benchmark in Waymo’s broader technical maturity or simply its confidence in a relatively unchallenging route.

The company’s driverless rides are currently free and only taking place in a geofenced area that includes parts of Chandler, Mesa and Tempe. This driverless territory is smaller than Waymo’s standard domain in the Phoenix suburbs, implying that confidence levels are still highly situational. Even Waymo vehicles with safety drivers don’t yet take riders to one of the most popular ride-hailing destinations: the airport.

The complexities of driverless

Panigrahi deflected questions about the proliferation of driverless rides, saying only that the number has been increasing and will continue to do so. Waymo has about 600 autonomous vehicles in its fleet across all geographies, including Mountain View, Calif. The majority of those vehicles are in Phoenix, according to the company.

However, Panigrahi did reveal that the primary limiting factor is applying what it learned from research into early rider experiences.

“This is an experience that you can’t really learn from someone else,” Panigrahi said. “This is truly new.”

Some of the most difficult challenges of driverless mobility only emerge once riders are combined with the absence of a human behind the wheel. For example, developing the technologies and protocols that allow a driverless Waymo to detect and pull over for emergency response vehicles and even allow emergency services to take over control was a complex task that required extensive testing and collaboration with local authorities.

“This was an entire area that, before doing full driverless, we didn’t have to worry as much about,” Panigrahi said.
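
Waymo hasn’t published that protocol, but its shape can be sketched as a priority ladder: let authorized responders take control, pull over for an active siren closing in, otherwise slow and yield. The policy below is purely hypothetical; every input and threshold is invented.

```python
# Hypothetical sketch of an emergency-vehicle response policy; Waymo's actual
# protocol is not public. Detection inputs and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class EmergencyVehicle:
    siren_on: bool
    distance_m: float
    approaching: bool  # closing on our position

def respond(ev: EmergencyVehicle, authority_override_requested: bool) -> str:
    if authority_override_requested:
        # Local authorities can take control, per the protocols Waymo says it
        # developed in collaboration with them.
        return "HAND_CONTROL_TO_RESPONDER"
    if ev.siren_on and ev.approaching and ev.distance_m < 150:
        return "SIGNAL_RIGHT_AND_PULL_OVER"
    if ev.siren_on:
        return "SLOW_AND_YIELD"
    return "CONTINUE"
```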

The user experience is another crux of driverless ride-hailing. It’s an area to which Waymo has dedicated considerable time and resources — and for good reason. User experience turns out to hold some surprisingly thorny challenges once humans are removed from the equation.

The everyday interactions between a passenger and an Uber or Lyft driver, such as conversations about pick-up and drop-offs as well as sudden changes in plans, become more complex when the driver is a computer. It’s an area that Waymo’s user experience research (UXR) team admits it is still figuring out.

Computers and sensors may already be better than humans at specific driving capabilities, like staying in lanes or avoiding obstacles (especially over long periods of time), but they lack the human flexibility and adaptability needed to be a good mobility provider.

Learning how to either handle or avoid the complexities that humans accomplish with little effort requires a mix of extensive experience and targeted research into areas like behavioral psychology that tech companies can seem allergic to.

Not just a tech problem

Waymo’s early driverless rides mark the beginning of a new phase of development filled with fresh challenges that can’t be solved with technology alone. Research into human behavior, building up expertise in the stochastic interactions of the modern urban curbside, and developing relationships and protocols with local authorities are all deeply time-consuming efforts. These are not challenges that Waymo can simply throw technology at, but require painstaking work by humans who understand other humans.

Some of these challenges are relatively straightforward. For example, it didn’t take long for Waymo to realize that dropping off riders as close as possible to the entrance of a Walmart was actually less convenient due to the high volume of foot traffic there. But understanding that pick-up and drop-off isn’t ruled by a single principle (e.g. closer to the entrance is always better) hints at a hidden wealth of complexity that Waymo’s vehicles need to master.
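
A toy scoring function makes the point. The features and weights below are invented rather than Waymo’s, but they show how the spot closest to the door can lose once crowding and curb rules are priced in.

```python
# Toy pick-up/drop-off spot scorer. Features and weights are invented to
# illustrate that no single principle (like "closest to the entrance") rules.

def pudo_score(walk_m, foot_traffic, blocks_lane, curb_legal):
    score = 0.0
    score -= 0.5 * walk_m           # shorter walks are better...
    score -= 30.0 * foot_traffic    # ...but not into a crowd (0..1 density)
    score -= 100.0 if blocks_lane else 0.0
    score += 20.0 if curb_legal else -50.0
    return score

# The door-front spot loses to one 40 m away once crowding is priced in.
print(pudo_score(walk_m=5,  foot_traffic=0.9, blocks_lane=True,  curb_legal=False))  # -179.5
print(pudo_score(walk_m=40, foot_traffic=0.1, blocks_lane=False, curb_legal=True))   # -3.0
```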

As frustrating as the slow pace of self-driving proliferation is, the fact that Waymo is embracing these challenges and taking the time to address them is encouraging.

The first chapter of autonomous drive technology development was focused on the purely technical challenge of making computers drive. Weaving Waymo’s computer “driver” into the fabric of society requires an understanding of something even more mysterious and complex: people and how they interact with each other and the environment around them.

Given how fundamentally autonomous mobility could impact our society and cities, it’s reassuring to know that one of the technology’s leading developers is taking the time to understand and adapt to them.

With Garmin Autoland, small planes can land themselves if the pilot becomes incapacitated

By Frederic Lardinois

Here’s a horror scenario for you: You’re flying in a small plane and suddenly the pilot, the only person aboard who knows how to fly, passes out. In the movies, somebody would probably talk one of the passengers through safely landing the plane. In reality, that’s unlikely. Flying planes is hard.

Now, however, planes outfitted with the Garmin G3000 flight deck will have the option to include a system that will land the plane in an emergency with just the push of a button.

Autoland takes all the navigation and communications tech in the plane and combines that with a sophisticated autopilot. Once a passenger activates the autoland feature — or the plane determines the pilot is incapacitated — the system will look at all the available information about weather, remaining fuel on board and the local terrain to plot a route to the nearest suitable airport. It’ll even alert air traffic control about what’s happening, so they can route other planes around you.
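
Garmin hasn’t published Autoland’s decision logic, but the description suggests a filter-and-rank pass over candidate airports: discard fields the remaining fuel can’t reach or that weather and terrain rule out, then take the closest survivor. Here’s a hypothetical sketch, with invented fields and thresholds.

```python
# Rough sketch of the kind of airport-ranking Autoland is described as doing.
# Garmin's real criteria aren't public; fields and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class Airport:
    ident: str
    distance_nm: float
    runway_ft: int
    ceiling_ft: int      # cloud ceiling from onboard weather data
    terrain_clear: bool  # route avoids high terrain

def reachable(a: Airport, fuel_range_nm: float) -> bool:
    return a.distance_nm <= fuel_range_nm * 0.9  # keep a 10% fuel reserve

def pick_airport(candidates, fuel_range_nm, min_runway_ft=3000, min_ceiling_ft=1000):
    ok = [a for a in candidates
          if reachable(a, fuel_range_nm)
          and a.runway_ft >= min_runway_ft
          and a.ceiling_ft >= min_ceiling_ft
          and a.terrain_clear]
    # Nearest suitable airport wins; None if nothing qualifies.
    return min(ok, key=lambda a: a.distance_nm) if ok else None
```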

The system also then takes over all of the touchscreens in the plane that are part of the G3000 flight deck and displays a simplified interface that allows the passengers to talk to air traffic control — and very little else.

Taking all of that information into account, the plane then plans the descent, lands itself and shuts down the engines.

“The vision and development of the world’s first Autoland system for general aviation was a natural progression for Garmin as we looked at our aircraft systems and existing autonomous technologies and recognized it is our responsibility to use these building blocks to deliver a technology that will change lives and revolutionize air travel,” said Phil Straub, Garmin executive vice president and managing director of aviation.

It’s important to note that this is meant to be a system that’s only activated in the case of an emergency. Because it automatically alerts the authorities when somebody presses the button, nobody is going to activate it unless it’s absolutely necessary — and the FAA would surely want to ask you a few questions.

It does show, however, that we’re getting closer to a time when autopilot systems get significantly smarter. The number of variables a system like Autoland has to deal with is relatively small compared to those an autonomous car in a city has to navigate, after all. And autopilot systems for planes have already become quite sophisticated.

The launch partners for Autoland are Piper and Cirrus, which are making it an option in the 2020 models of the Piper M600 turboprop and Cirrus Vision Jet, pending FAA authorization. Those cost a few million dollars, though, so you better save up. Existing planes with the Garmin G3000 cockpit may also be retrofitted with the autoland capability, but that’s up to the manufacturer.

Given how old the general aviation fleet in the U.S. is, you’re not going to see any planes with this feature at your local airport anytime soon, though. Most of those 1970s Cessna 150s for rent at your local FBO don’t even have an autopilot, after all.

Volvo creates a dedicated business for autonomous industrial and commercial transport

By Darrell Etherington

Volvo Group has established a new dedicated business group focused on autonomous transportation, with a mandate that covers industry segments like mining, ports and moving goods between logistics hubs of all kinds. The vehicle maker has already been active in putting autonomous technology to work in these industries, with self-driving projects at a few quarries and mines and in the busy port at Gothenburg, Sweden.

The company sees demand for this kind of autonomous technology use growing, and decided to establish an entire business unit to address it. The newly formed group will be called Volvo Autonomous Solutions, and its official mission is to “accelerate the development, commercialization and sales of autonomous transport solutions,” focused on the kind of transportation “where there is a need to move large volumes of goods and material on pre-defined routes, in repetitive flows.”

The company’s anticipation of growth in this sector comes in part from direct customer feedback, the automaker notes. It has seen a “significant increase in inquiries from customers,” according to a statement from Martin Lundstedt, Volvo Group’s president and CEO.

Officially, Volvo Autonomous Solutions won’t become a formal business area under its parent company until January 2020, but the company is already looking for a head of the unit, and it clearly sees a lot of potential in this burgeoning market.

Unlike autonomous driving for consumer automobiles, this kind of self-driving for fixed-route goods transportation is a good match for the capabilities of the technology as it exists today. These industrial applications eliminate much of the chaos and complexity of driving in, say, urban environments full of human-driven vehicles, and their routes are predictable and repeatable.

Waymo is creating 3D maps of Los Angeles to better understand traffic congestion

By Kirsten Korosec

Waymo, the autonomous vehicle company under Alphabet, has started creating 3D maps in some heavily trafficked sections of Los Angeles to better understand congestion there and determine if its self-driving vehicles would be a good fit in the city.

For now, Waymo is bringing just three of its self-driving Chrysler Pacifica minivans to Los Angeles to map downtown and a section of Wilshire Boulevard known as Miracle Mile.

Waymo employees will initially drive the vehicles to create 3D maps of the city. These maps are unlike Google Maps or Waze. Instead, they include topographical features such as lane merges, shared turn lanes and curb heights, as well as road types and the distance and dimensions of the road itself, according to Waymo. That data is combined with traffic control information like signs, the lengths of crosswalks and the locations of traffic lights.
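
Waymo hasn’t released its map schema, but one way to picture how such a map differs from a navigation app is as a record per road element carrying both geometry and semantics. The structure below is hypothetical, loosely shaped around the features Waymo describes.

```python
# Hypothetical records illustrating what an HD-map element might carry, per the
# features Waymo describes (lane merges, curb heights, crosswalk lengths,
# signal locations). This is not Waymo's actual schema.

from dataclasses import dataclass, field

@dataclass
class LaneSegment:
    segment_id: str
    polyline: list          # ordered (x, y, z) points tracing the lane center
    width_m: float
    lane_type: str          # e.g. "shared_turn", "merge"
    curb_height_m: float
    successors: list = field(default_factory=list)  # connected segment ids

@dataclass
class TrafficControl:
    control_id: str
    kind: str               # "signal", "stop_sign", "crosswalk"
    position: tuple         # (x, y, z) in the map frame
    crosswalk_length_m: float = 0.0
```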

Starting this week, Angelenos might catch a glimpse of Waymo’s cars on the streets of LA! Our cars will be in town exploring how Waymo's tech might fit into LA’s dynamic transportation environment and complement the City’s innovative approach to transportation. pic.twitter.com/REHfxrxqdL

— Waymo (@Waymo) October 7, 2019

Waymo does have a permit to test autonomous vehicles in California and could theoretically deploy its fleet in Los Angeles. But for now, the company is in mapping and assessment mode. Waymo’s foray into Los Angeles is designed to give the company insight into driving conditions there and how its AV technology might someday be used.

The company said it doesn’t plan to launch a rider program like Waymo One, which currently operates in the suburbs of Phoenix. Waymo One allows individuals to hail a ride in one of the self-driving cars, which have a human safety driver behind the wheel.

The self-driving car company began testing its autonomous vehicles in and around Mountain View, Calif., before branching out to other cities — and climates — including Novi, Mich., Kirkland, Wash., San Francisco and, more recently, Florida. But the bulk of the company’s activities have been in the suburbs of Phoenix and around Mountain View — two places with lots of sun, and even blowing dust, in the case of Phoenix.

We’ll have self-flying cars before self-driving cars, Thrun says

By Josh Constine

Once you get up high enough, you don’t have to worry about a lot of the obstacles like pedestrians and traffic jams that plague autonomous cars. That’s why Sebastian Thrun, Google’s self-driving team founder turned CEO of flying vehicle startup Kitty Hawk, said on stage at TechCrunch Disrupt SF today that we should expect true autonomy to succeed in the air before the road.

“I believe we’re going to be done with self-flying vehicles before we’re done with self-driving cars,” Thrun told TechCrunch reporter Kirsten Korosec.

Why? “If you go a bit higher in the air then all the difficulties with not hitting stuff like children and bicycles and cars and so on just vanishes . . . Go above the buildings, go above the trees, like go where the helicopters are!” Thrun explained, though he noted that personal helicopters are so noisy they’re being banned in some places, like Napa, California.

That proclamation has wide-reaching implications for how cities are planned and real estate is bought. We may need vertical takeoff helipads sooner than we need dedicated road lanes for autonomous cars. Remote homes reached by a single winding road, like those in Big Sur, California, might suddenly become more accessible, and thereby appealing to the affluent, since residents could simply take a self-flying car to the city or the office.

The concept could also have wide-reaching implications for the startup industry. Obviously, Thrun’s own company Kitty Hawk would benefit from not being too early to market. Kitty Hawk announced its Heaviside vehicle today, which is designed to be ultra quiet. If the prophecy comes true, Uber, which is investing in vertical take-off vehicles, could also be in a better position than Lyft and other ride-hailing players focused on cars.

To make sure its vehicles don’t get banned and potentially pave the way for more aerial autonomy, Kitty Hawk recently recruited former FAA Administrator Mike Huerta as an advisor.

Eventually, Thrun says, the air wins on efficiency: cars have to navigate indirect streets, while in the air “we can go in a straight line,” and he believes that will put the energy cost per mile at roughly a third of a Tesla’s. And with shared, UberPool-style flights, he sees the cost of energy getting down to just “$0.30 per mile.”

But in the meantime, Thrun is trying to get people, including me, to stop saying flying cars. “I personally don’t like the word ‘flying car’, but it’s very catchy. The technical term is called eVTOL. These are typically electrically propelled vehicles, they can take off and land vertically, eVTOLs, vertical takeoff landing, so that you don’t need an airport. And then they fly very much like a regular plane.” We’ll see if that mouthful catches on, and if the skies get more congested before the roads thin out.

Tesla acquires computer vision startup DeepScale in push toward robotaxis

By Kirsten Korosec

Tesla has acquired DeepScale, a Silicon Valley startup that uses low-wattage processors to power more accurate computer vision, in a bid to improve its Autopilot driver assistance system and deliver on CEO Elon Musk’s vision to turn its electric vehicles into robotaxis.

CNBC was the first to report the acquisition. TechCrunch independently confirmed the deal with two unnamed sources, although neither one would provide more information on the financial terms of the deal. 

Tesla vehicles are not considered fully autonomous, or Level 4, a designation by SAE that means the car can handle all aspects of driving in certain conditions without human intervention.

Instead, Tesla vehicles are “Level 2,” and its Autopilot feature is a more advanced driver assistance system than most other vehicles on the road today. Musk has promised that the advanced driver assistance capabilities on Tesla vehicles will continue to improve until eventually reaching that full automation high-water mark.

Earlier this year, Musk said Tesla would launch an autonomous ridesharing network by 2020. DeepScale, a four-year-old startup based in Mountain View, Calif., appears to be part of that plan. The acquisition also brings much needed talent to Tesla’s Autopilot team, which has suffered from a number of departures in the past year, The Information reported in July.

DeepScale has developed a way to use efficient deep neural networks on small, low-cost, automotive-grade sensors and processors to improve the accuracy of perception systems. These perception systems, which use sensors, mapping, planning and control systems to interpret and classify data in real time, are essential to the operation of autonomous vehicles. In short, these systems allow vehicles to understand the world around them.

The company argued that its method of using low-wattage and low-cost sensors and processors allowed it to deliver driver assistance and autonomous driving to vehicles at all price points.
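
DeepScale’s production networks aren’t public, but co-founder Forrest Iandola is best known for SqueezeNet, an architecture that cuts parameter counts by “squeezing” channels through 1x1 convolutions before “expanding” them again. A minimal PyTorch rendering of its fire module gives a flavor of that efficiency idea; it illustrates the published technique, not DeepScale’s product code.

```python
# Minimal PyTorch rendering of a SqueezeNet-style "fire" module, the kind of
# parameter-efficient block associated with DeepScale co-founder Forrest
# Iandola's earlier research. Illustrative only, not DeepScale's product code.

import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        # The 1x1 "squeeze" cuts channel count before the costlier expand convs.
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return self.relu(torch.cat([self.expand1(x), self.expand3(x)], dim=1))

block = Fire(in_ch=96, squeeze_ch=16, expand_ch=64)
out = block(torch.randn(1, 96, 56, 56))  # -> shape (1, 128, 56, 56)
```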

The company had raised more than $18 million — in a $3 million seed round and a $15 million Series A — from investors that included Autotech VC, Bessemer, Greylock and Trucks VC.

On Monday, DeepScale co-founder Forrest Iandola announced the move on Twitter and updated his LinkedIn account:

I joined the @Tesla #Autopilot team this week. I am looking forward to working with some of the brightest minds in #deeplearning and #autonomousdriving.

— Forrest Iandola (@fiandola) October 1, 2019

In its push toward “full self-driving,” Tesla developed a new custom chip designed to deliver those capabilities. This chip is now in all new Model 3, X and S vehicles. Musk has said that Tesla vehicles being produced now have the hardware necessary — computer and otherwise — for full self-driving. “All you need to do is improve the software,” Musk said in April at the company’s Autonomy Day.

Others in the industry have balked at those claims. Tesla and Musk have maintained the “improve software” line, and have continued to roll out improvements to the capability of Autopilot. Earlier this month, Tesla released a software update that adds new features to its cars. The update included Smart Summon, an autonomous parking feature that allows owners to use their app to summon their vehicles from a parking space.

Defining micromobility and where it’s going with business and mobility analyst Horace Dediu

By Megan Rose Dickey

Micromobility has taken off over the last couple of years. Between electric bike-share and scooter-share, these vehicles have made their way all over the world. Meanwhile, some of these companies, like Bird and Lime, have already hit unicorn status thanks to massive funding rounds.

Horace Dediu, the well-known industry analyst who coined the term micromobility as it relates to this emerging form of transportation, took some time to chat with TechCrunch ahead of Micromobility Europe, a one-day event focused on all-things micromobility.

We chatted about the origin of the word micromobility, where big tech companies like Apple, Google and Amazon fit into the space, opportunities for developers to build tools and services on top of these vehicles, the opportunity for franchising business models, the potential for micromobility to be bigger than autonomous vehicles, and much more.

Here’s the Q&A I did with Dediu ahead of his micromobility conference, lightly edited for length and clarity.


Megan Rose Dickey: Hey, Horace. Thanks for taking the time to chat.

Horace Dediu: Hey, no problem. My pleasure.

Rose Dickey: I was hoping to chat with you a bit about micromobility because I know that you have the big conference coming up in Europe, so I figured this would be a good time to touch base with you. I know you’ve been credited with coining the term micromobility as it relates to the likes of shared e-bikes and scooters.

So, to kick things off, can you define micromobility?

Dediu: Yes, sure. So, the idea came to me because I actually remembered microcomputing.

Aptiv and Hyundai form new joint venture focused on autonomous driving

By Darrell Etherington

Automaker Hyundai is forming a new joint venture with autonomous driving technology company Aptiv, with both parties taking a 50% ownership stake in the new company. The venture will develop production-ready Level 4 and Level 5 self-driving systems for commercialization, with the goal of making them available to robotaxi operators, fleet operators and other automakers by 2022.

The combined investment in the joint venture from both companies will total $4 billion in aggregate value (including the value of combined engineering services, R&D and IP) initially, according to Aptiv and Hyundai, and testing for their fully autonomous systems will begin in 2020 in pursuit of that 2022 commercialization target.

In terms of what each side brings to the table, Aptiv will deliver its autonomous driving technology, which it has been developing for many years — originally as part of global automotive industry supplier Delphi — along with 700 employees working on AV tech. Hyundai Motor Group will provide a combined $1.6 billion in cash from across its sub-brands, plus vehicle engineering, R&D and access to its IP.

Heading up the new joint venture will be Karl Iagnemma, the president of Aptiv’s Autonomous Mobility group, and it’ll be headquartered in Boston and supported by additional technology centers in multiple locations in the U.S. and Asia.

Both companies have been demonstrating autonomous vehicle technologies for multiple years now, and Aptiv has been working with Lyft in Las Vegas on a public trial of autonomous robotaxi services since debuting the capabilities at CES in 2018. Aptiv’s Vegas pilot uses BMW 5 Series cars for its autonomous pickup fleet.

The joint venture should help Aptiv bring the technology to market at the scale of a global automaker, while Hyundai shores up its own self-driving work with a partner that has spent many years developing these systems as a primary concern.

Voyage raises $31 million to bring driverless taxis to communities

By Kirsten Korosec

Voyage, the autonomous vehicle startup that spun out of Udacity, announced Thursday it has raised $31 million in a round led by Franklin Templeton.

Khosla Ventures, Jaguar Land Rover’s InMotion Ventures and Chevron Technology Ventures also participated in the round. The company, which operates a ride-hailing service in retirement communities using self-driving cars supported by human safety drivers, has raised a total of $52 million since launching in 2017. The new funding includes a $3 million convertible note.

Voyage CEO Oliver Cameron has big plans for the fresh injection of capital, including hiring and expanding its fleet of self-driving Chrysler Pacifica minivans, which always have a human safety driver behind the wheel.

Ultimately, the expanded G2 fleet and staff are just the means toward Cameron’s grander mission to turn Voyage into a truly driverless and profitable ride-hailing company.

“It’s not just about solving self-driving technology,” Cameron told TechCrunch in a recent interview, explaining that a cost-effective vehicle designed to be driverless is the essential piece required to make this a profitable business.

The company is in the midst of a hiring campaign that Cameron hopes will take its 55-person staff to more than 150 over the next year. Voyage has had some success attracting high-profile people to fill executive-level positions, including CTO Drew Gray, who previously worked at Uber ATG, Otto, Cruise and Tesla, as well as former NIO and Tesla employee Davide Bacchet as director of autonomy.

Funds will also be used to increase its fleet of second-generation self-driving cars (called G2) that are currently being used in a 4,000-resident retirement community in San Jose, Calif., as well as The Villages, a 40-square-mile, 125,000-resident retirement city in Florida. Voyage’s G2 fleet has 12 vehicles. Cameron didn’t provide details on how many vehicles it will add to its G2 fleet, only describing it as a “nice jump that will allow us to serve consumers.”

Voyage used the G2 vehicles to create a template of sorts for its eventual driverless vehicle. This driverless product — a term Cameron has used in a previous post on Medium — will initially be limited to 25 miles per hour, which is the driving speed within the two retirement communities where Voyage currently tests and operates. The vehicles might operate at a low speed, but they are capable of handling complex traffic interactions, he wrote.

“It won’t be the most cost-effective vehicle ever made because the industry still is in its infancy, but it will be a huge, huge, huge improvement over our G2 vehicle in terms of being able to scale out a commercial service and make money on each ride,” Cameron said.

Voyage initially used modified Ford Fusion vehicles to test its autonomous vehicle technology, then introduced Chrysler Pacifica minivans, its second generation of autonomous vehicles, in July 2018. But the end goal has always been a driverless product.

Voyage engineers Alan Mond and Trung Dung Vu

TechCrunch previously reported that the company has partnered with an automaker to provide this next-generation vehicle that has been designed specifically for autonomous driving. Cameron wouldn’t name the automaker. The vehicle will be electric and it won’t be a retrofit like the Chrysler Pacifica Hybrid vehicles Voyage currently uses or its first-generation vehicle, a Ford Fusion.

Most importantly, and a detail Cameron did share with TechCrunch, is that the vehicle it uses for its driverless service will have redundancies and safety-critical applications built into it.

Voyage also has deals in place with rental car company Enterprise and insurance provider Intact to help it scale.

“You can imagine leasing is much more optimal than purchasing and owning vehicles on your balance sheet,” Cameron said. “We have those deals in place that will allow us to not only get the vehicle costs down, but other aspects of the vehicle into the right place as well.”

Starship Technologies CEO Lex Bayer on focus and opportunity in autonomous delivery

By Darrell Etherington

Starship Technologies is fresh off a recent $40 million funding round, and the robotics startup finds itself in a much-changed market compared to when it got its start in 2014. Founded by software industry veterans, including Skype and Rdio co-founder Janus Friis, Starship’s focus is entirely on building and commercializing fleets of autonomous sidewalk delivery robots.

Starship invented this category when it debuted, but five years later it’s one of a number of companies looking to deploy what essentially amounts to wheeled, self-driven coolers that can carry small packages and everyday freight, including fresh food, to waiting customers. CEO Lex Bayer, a former sales leader from Airbnb, took over the top spot at Starship last year and is eager to focus the company’s efforts in a drive to take full advantage of its technology and experience lead.

The result is transforming what looked, to all external observers, like a long-tail technology play into a thriving commercial enterprise.

“We want to do 100 universities in the next 24 months, and we’ll do about 25 to 50 robots on each campus,” Bayer said in an interview about his company’s plans for the future.

Meet Olli 2.0, a 3D-printed autonomous shuttle

By Kirsten Korosec

From afar, Olli resembles many of the “future is now!” electric autonomous shuttles that have popped up in recent years.

The tall rectangular pod, with its wide-set headlights and expansive windows nestled between a rounded frame, gives the shuttle a friendly countenance that screams, ever so gently, “come along, take a ride.”

But Olli is different in almost every way, from how it’s produced to its origin story. And now, its maker, Local Motors, has given Olli an upgrade in hopes of accelerating the adoption of its autonomous shuttles.

Meet Olli 2.0, a 3D-printed, connected, electric autonomous shuttle that Local Motors co-founder and CEO John B. Rogers Jr. says will hasten its ubiquity.

“The future is here; it’s just not evenly distributed,” Rogers said in a recent interview. “That’s something I say a lot. Because people often ask me, ‘Hey, when will I see this vehicle? 2023? What do you think?’ My response: It’s here now, it’s just not everywhere.”

Whether individuals will adopt Rogers’ vision of the future is another matter. But he argues that Olli 1.0 has already been a persuasive ambassador.

Olli 1.0 made its debut in 2016 when it launched in National Harbor, Md., at a planned mixed-use development a few miles south of Washington, D.C. In the years since, Olli has shown up at events such as LA Automobility and been featured by various media outlets, including this one. Heck, even James Corden rode in it.

Local Motors, which was founded in 2007, and its Olli 1.0 shuttle are familiar figures in the fledgling autonomous vehicle industry. But they’re often overshadowed by the likes of Argo AI, Cruise, Uber and Waymo — bigger companies that are all pursuing robotaxis designed for cities.

Olli, meanwhile, is designed for campuses, low-speed environments that include hospitals, military bases and universities.

“The public isn’t going to see New York City with autonomous vehicles running around all the time (any time soon),” Rogers said. Campuses, on the other hand, are a sweet spot for companies like Local Motors that want to deploy now. These are places where mobility is needed and people are able to get up close and personal with a “friendly robot” like Olli, Rogers said. 

Olli 2.0

Olli and Olli 2.0 are clearly siblings. Both low-speed vehicles share the same general shape and a top speed of 25 miles per hour, and both have been crash tested by Local Motors and come with Level 4 autonomous capability, a designation by the SAE that means the vehicle can handle all aspects of driving in certain conditions without human intervention.

Olli 2.0 has a lot more range — up to 100 miles on a single charge, according to its spec sheet. The manufacturing process has been improved, and Olli 2.0 is now 80% 3D-printed and has hub motors versus the axle wheel motors in its predecessor. In addition, there are two more seats in Olli 2.0 and new programmable lighting.

But where Olli 2.0 really stands out is in the improved user interface and more choices for customers looking to customize the shuttle to suit specific needs. As Rogers recently put it, “We can pretty much make anything they ask for with the right partners.”

The outside of Olli 2.0 is outfitted with a PA system and screens on the front and back to address pedestrians. The screen in the front can be shown as eyes, making Olli 2.0 more approachable and anthropomorphic.

Inside the shuttle, riders will find better speakers, microphones and touchscreens. Local Motors has an open API, which allows for an endless variety of user interfaces. For instance, LG is customizing media content for Olli based on the “5G future,” according to Rogers, who said he couldn’t provide more details just yet.

AR and VR can also be added, if a customer desires. The interior can be changed to suit different needs as well. For instance, a hospital might want fewer seats and more room to transport patients on beds. It’s this kind of customization that Rogers believes will give Local Motors an edge over autonomous shuttle competitors.

Even the way Olli 2.0 communicates has been improved.

Olli 1.0 used IBM Watson, the AI platform from IBM, for its natural language and speech-to-text functions. Olli 2.0 has more options: natural language voice can use Amazon’s deep learning chatbot service Lex, IBM Watson or both, and customers can choose one or even combine them. Either can be configured to make the system addressable as “Olli.”
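
Local Motors hasn’t said how the two services are wired together. One plausible shape, sketched below with stubbed adapters rather than real vendor SDK calls, is a common interface that queries whichever backends are configured and keeps the most confident reading.

```python
# Hypothetical sketch of fronting two NLU backends (Amazon Lex, IBM Watson)
# behind one interface, as Local Motors says Olli 2.0 allows. The adapter
# internals are stubbed; a real deployment would call each vendor's SDK.

from abc import ABC, abstractmethod

class NLUBackend(ABC):
    @abstractmethod
    def interpret(self, utterance: str) -> dict: ...

class LexBackend(NLUBackend):
    def interpret(self, utterance):
        # Stub: a real adapter would call the Lex runtime API here.
        return {"intent": "stub", "confidence": 0.0, "source": "lex"}

class WatsonBackend(NLUBackend):
    def interpret(self, utterance):
        # Stub: a real adapter would call Watson Assistant here.
        return {"intent": "stub", "confidence": 0.0, "source": "watson"}

def best_interpretation(utterance, backends):
    """Query every configured backend and keep the most confident reading."""
    results = [b.interpret(utterance) for b in backends]
    return max(results, key=lambda r: r["confidence"])

# Customers can configure one backend or combine both, per Local Motors.
print(best_interpretation("Olli, take me to the clinic", [LexBackend(), WatsonBackend()]))
```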

The many people behind Olli

In the so-called race to deploy autonomous vehicles, Local Motors is a participant that is difficult to categorize or label largely due to how it makes its shuttles.

It’s not just that Local Motors’ two micro factories — at its Chandler, Ariz. headquarters and in Knoxville, Tenn. — are a diminutive 10,000 square feet. Or that these micro factories lack the tool and die and stamping equipment found in a traditional automaker’s factory. Or even that Olli is 3D-printed.

A striking and perhaps less obvious difference is how Olli and other creations from Local Motors, and its parent company Local Motors Industries, come to life. LMI has a co-creation and low-volume local production business model. The parent company’s Launch Forth unit manages a digital design community of tens of thousands of engineers and designers that co-creates products for customers. Some of those mobility creations go to Local Motors, which uses its low-volume 3D-printed micro factories to build Olli and Olli 2.0, as well as other products like the Rally Fighter.

This ability to tap into its community and its partnerships with research labs, combined with direct digital manufacturing and its micro factories, is what Rogers says allows it to go from design to mobile prototype in weeks, not months — or even years.

The company issues challenges to the community. The winner of a challenge gets a cash prize and is awarded royalties as the product is commercialized. In 2016, a Bogota, Colombia man named Edgar Sarmiento won the Local Motors challenge to design an urban public transportation system. His design eventually became Olli.

(Local Motors uses the challenges model to determine where Olli will be deployed, as well.)

New design challenges are constantly being launched to improve the UI and services of Olli, as well as other products. But even that doesn’t quite capture the scope of the co-creation. Local Motors partners with dozens of companies and research organizations. Its 3D-printing technology comes from Oak Ridge National Laboratory, and Olli itself involves a who’s who in the sensor, AV and supplier communities.

Startup Affectiva provides Olli’s cognition system, which handles facial and mood tracking of passengers as well as dynamic route optimization, while Velodyne, Delphi, Robotic Research and Axis Communications handle the perception stack of the self-driving shuttle, according to Local Motors. Nvidia and Sierra Wireless provide much of the human-machine interface. Other companies that supply bits and pieces of Olli include Bosch, Goodyear, Protean and Eastman, to name just a few.

Where in the world is Olli?

Today, Olli 1.0 is deployed on nine campuses, most recently at Joint Base Myer-Henderson Hall, a U.S. military installation around Arlington, Va., made up of Fort Myer, Fort McNair and Henderson Hall. Olli was also introduced recently in Rancho Cordova, near Sacramento, Calif.

Production of Olli 2.0 began in July and deliveries will begin in the fourth quarter of this year. In the meantime, three more Olli shuttle deployments are coming up in the next six weeks or so, according to Local Motors, which didn’t provide further details.

Production of Olli 1.0 will phase out in the coming months as customer orders are completed. Olli will soon head to Europe, as well, with Local Motors planning to build its third micro factory in the region.

Didi Chuxing to launch self-driving rides in Shanghai and expand them beyond China by 2021

By Darrell Etherington

Didi Chuxing will begin picking up ride-hailing passengers with self-driving cars in Shanghai in just a few months, according to company CTO Zhang Bo (via Reuters). The plan is to roll out autonomous pick-ups in Shanghai first, starting in one district of the city, and then expand the program from there — finally culminating in the deployment of self-driving vehicles outside of China by 2021.

Like Uber’s autonomous test vehicles, Didi’s cars will be staffed with a human driver on board during the initial launch period; the company awaits a few remaining licenses before it can actually begin serving passengers. Self-driving rides will be free for customers, and Zhang said that more than 30 different vehicles will be offered for self-driving trips as part of the pilot.

After its initial pilot launch in Shanghai, Didi will look to expand its offerings to Beijing and Shenzhen as well, with hopes to be live in all three cities by 2020.

Didi is the largest ride-hailing company in China, and beat out an attempt by Uber to establish a presence in the market, resulting in Uber selling its Chinese business to Didi and exiting the market in 2016 (in exchange for a minority stake). We spoke to Didi’s CTO (who asked to be identified as “Bob” at the time) later that same year about why the company believes it has an advantage when it comes to data-driven technology development relative to Uber and other ride-hailing companies.

Aside from a general sense in the industry that autonomy is a likely, if not inevitable, end goal for ride-hailing and other technology-focused mobility services, Didi is also likely motivated by a need for drivers to meet demand — and drivers who can provide a safe and secure experience for passengers. The company revealed in July that it had removed more than 300,000 drivers who didn’t meet its safety standards after overhauling those standards last year.

Earlier this month, Didi also announced that it was spinning out its autonomous driving unit as a separate company, with Zhang as CEO. It’ll look to develop tech for its own fleet, and work in partnership with automakers, including Toyota, in pursuit of commercializing and deploying autonomous driving.

Former Google X exec Mo Gawdat wants to reinvent consumerism

By Frederic Lardinois

Mo Gawdat, the former Google and Google X executive, is probably best known for his book Solve for Happy: Engineer Your Path to Joy. He left Google X last year. Quite a bit has been written about the events that led to him leaving Google, including the tragic death of his son. While happiness is still very much at the forefront of what he’s doing, he’s also now thinking about his next startup: T0day.

To talk about T0day, I sat down with the Egypt-born Gawdat at the Digital Frontrunners event in Copenhagen, where he gave one of the keynote presentations. Gawdat is currently based in London. He has adopted a minimalist lifestyle, with no more than a suitcase and a carry-on full of things. Unlike many of the Silicon Valley elite who have recently adopted a kind of performative asceticism, Gawdat’s commitment to minimalism feels genuine — and it also informs his new startup.

“In my current business, I’m building a startup that is all about reinventing consumerism,” he told me. “The problem with retail and consumerism is it’s never been disrupted. E-commerce, even though we think is a massive revolution, it’s just an evolution and it’s still tiny as a fraction of all we buy. It was built for the Silicon Valley mentality of disruption, if you want, while actually, what you need is cooperation. There are so many successful players out there, so many efficient supply chains. We want the traditional retailers to be successful and continue to make money — even make more money.”

What T0day wants to be is a platform that integrates all of the players in the retail ecosystem. That kind of platform, Gawdat argues, never existed before, “because there was never a platform player.”

That sounds like an efficient marketplace for moving goods, but in Gawdat’s imagination, it is also a way to do good for the planet. Most of the fuel burned today isn’t for moving people, he argues, but goods. A lot of the food we buy goes to waste (together with all of the resources it took to grow and ship it) and single-use plastic remains a scourge.

How does T0day fix that? Gawdat argues that today’s e-commerce is nothing but a digital rendering of the same window shopping people have done for ages. “You have to reimagine what it’s like to consume,” he said.

The reimagined way to consume is essentially just-in-time shipping for food and other consumer goods, based on efficient supply chains that outsmart today’s hub-and-spoke distribution centers and can deliver anything to you in half an hour. If everything you need to cook a meal arrives 15 minutes before you want to start cooking, you only need to order the items you need at that given time, and instead of a plastic container, it could come in a paper bag. “If I have the right robotics and the right autonomous movements — not just self-driving cars, because self-driving cars are a bit far away — but the right autonomous movements within the enterprise space of the warehouse, I could literally give it to you with the predictability of five minutes within half an hour,” he explained. “If you get everything you need within half an hour, why would you need to buy seven apples? You would buy three.”
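
The scheduling arithmetic behind that promise is simple enough to sketch. Assuming, hypothetically, a known travel time from the fulfillment center and the 15-minute lead Gawdat describes, the dispatch time falls out by subtraction.

```python
# Toy dispatch-time calculation for the just-in-time model Gawdat describes:
# groceries should arrive ~15 minutes before cooking starts. Times are invented.

from datetime import datetime, timedelta

def dispatch_time(cook_start: datetime, travel: timedelta,
                  lead: timedelta = timedelta(minutes=15)) -> datetime:
    """When the order must leave the fulfillment center."""
    return cook_start - lead - travel

leave_by = dispatch_time(datetime(2019, 11, 1, 18, 30), travel=timedelta(minutes=12))
print(leave_by)  # 18:03 -> arrives 18:15, fifteen minutes before an 18:30 cook time
```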

Some companies, including the likes of Uber, are obviously building some of the logistics networks that will enable this kind of immediate drop shipping, but Gawdat doesn’t think Uber is the right company for this. “This is going to sound a little spiritual. There is what you do and there is the intention behind why you do it,” he said. “You can do the exact same thing with a different intention and get a very different result.”

That’s an ambitious project, but Gawdat argues that it can be done without using massive amounts of resources. Indeed, he argues that one of the problems with Google X, and especially big moonshot projects like Loon and self-driving cars, was that they weren’t really resource-constrained. “Some things took longer than they should have,” he said. “But I don’t criticize what they did at all. Take the example of Loon and Facebook. Loon took longer than it should have. In my view, it was basically because of an abundance of resources and sometimes innovation requires a shoestring. That’s my only criticism.”

T0day, which Gawdat hasn’t really talked about publicly in the past, is currently self-funded. A lot of people are advising him to raise money for it. “We’re getting a lot of advice that we shouldn’t self-fund,” he said, but he also believes that the company will need some strategic powerhouses on its side, maybe retailers or companies that have already invested in other components of the overall platform.

T0day’s ambitions are massive, but Gawdat thinks that his team can get the basic elements right, be that the fulfillment center design or the routing algorithms and the optimization engines that power it all. He isn’t ready to talk about those, though. What he does think is that T0day won’t be the interface for these services. It’ll be the back end and allow others to build on top. And because his previous jobs have allowed him to live a comfortable life, he isn’t all that worried about margins either, and would actually be happy if others adopted his idea, thereby reducing waste.

Ford says its autonomous cars will last just four years

By Connie Loizos

The automotive industry has embraced — and advertised — self-driving cars as a kind of panacea that will solve numerous problems that modern society is grappling with right now, from congestion to safety to productivity (you can work while riding!).

Unfortunately, a very big question that has been almost entirely overlooked is: how long will these cars last?

The answer might surprise you. In an interview with The Telegraph in London, John Rich, who is the operations chief of Ford Autonomous Vehicles, revealed today that the “thing that worries me least in this world is decreasing demand for cars,” because “we will exhaust and crush a car every four years in this business.”

Four years! That’s not a very long lifespan, even compared with cars that undergo a lot of wear-and-tear, like New York City cabs, which were an average of 3.8 years old in 2017, meaning some were brand new and others had been in service for more than seven years.

It’s more surprising compared with the nearly 12 years that the average U.S. car owner hangs on to a vehicle. In fact, Americans are maintaining their cars longer in part because the technology used to make and operate them has advanced meaningfully. In 2002, according to the London-based research firm IHS Markit, the average age of a car in operation was 9.6 years.

So what’s the story with autonomous cars, into which many billions of investment capital are being poured? We first turned to Argo AI, a Pittsburgh, Pa.-based startup that raised $1 billion in funding from Ford three years ago and refueled this summer with $2.6 billion in capital and assets from Volkswagen as part of a broader alliance between VW Group and Ford. Argo is developing cars for Ford that it’s testing right now in five cities.

Since Ford will be operating the cars, Argo pointed us back to Ford’s Rich, who, while on the run, answered some of our questions via email.

Asked how many miles Ford anticipates that the cars will travel each year — we wondered if this number would be more or less than a taxi or full-time Uber driver might traverse — he declined to say, telling us instead that while Ford isn’t sharing miles targets, the “vehicles are being designed for maximum utilization.

“Today’s vehicles spend most of the day parked. To develop a profitable, viable business model for [autonomous vehicles], they need to be running almost the entire day.”

Indeed, Ford right now plans to use the cars in autonomous fleets that will be used as a service by other companies, including as delivery vehicles. Asked if Ford also plans to sell the cars to individuals, Rich suggests it’s not in the plans right now, saying merely that Ford sees the “initial commercialization of AVs to be fleet-centric.”

We also wondered if Rich’s prediction for the lifespan of full self-driving cars ties to his expectation that Ford’s autonomous vehicles will be powered by internal combustion engines. Most carmakers appear to be investing in new combustion engine architectures that promise greater fuel efficiency and fewer emissions but that still require more parts than electric cars. (The more parts that are being stressed, the higher the likelihood that something will break.)

Rich says the idea is to transition to battery-electric vehicles (BEV) eventually, but that Ford also needs to “find the right balance that will help develop a profitable, viable business model. This means launching with hybrids first.”

In his words, the challenges with BEVs as autonomous vehicles right now include a “lack of charging infrastructure where we need to operate an AV fleet. Charging stations and infrastructure need to be built, which will add to the already capital-intensive nature of developing the AV technology and operations.”

Another challenge is the “depletion of range from on-board tech. Testing shows that upwards of 50 percent of BEV range will be used up due to the computing power of an AV system, plus the A/C and entertainment systems that are likely required for passenger comfort during a ride-hailing service.”
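
To see how on-board loads could plausibly eat that much range, here is a rough calculation in Python; every number below is an assumption chosen for illustration, not a Ford or industry figure:

# Back-of-the-envelope sketch of the range-depletion claim.
# All numbers are illustrative assumptions, not Ford figures.
nominal_range_km = 400    # assumed rated BEV range
drive_power_kw = 15       # assumed average traction power in urban driving
av_compute_kw = 2.5       # assumed sensor and compute draw
comfort_kw = 3.0          # assumed A/C and entertainment draw

# Overhead loads drain the pack for every hour driven, so effective
# range scales with the share of power that actually moves the car.
overhead_kw = av_compute_kw + comfort_kw
effective_range_km = nominal_range_km * drive_power_kw / (drive_power_kw + overhead_kw)
print(f"{effective_range_km:.0f} km ({effective_range_km / nominal_range_km:.0%} of rated)")
# -> 293 km (73% of rated). Heavier compute, hotter weather or slower
#    traffic pushes the overhead share toward the 50 percent Rich cites.

The arithmetic also explains why ride-hailing is the worst case: at low urban speeds the car spends more hours per kilometer, so fixed loads like compute and air conditioning claim a bigger slice of the pack.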

Ford also worries about utilization, writes Rich: “The whole key to running a profitable AV business is utilization – if cars are sitting on chargers, they aren’t making money.”

And it’s worried about battery degradation, given that while “fast charging is needed daily to run an AV fleet, it degrades the battery if used often,” he says.

Of course, the world would be far better off without any combustion engine exhaust emissions, full stop. On the brighter side, while Ford’s cars may not be long for this world, between 80 and 86 percent of a car’s material can be recycled and reused. According to a trade group called the Institute of Scrap Recycling Industries (ISRI), the U.S. recycles 150 million metric tons of scrap materials every year altogether.

Fully 85 million tons of that is iron and steel; the ISRI says the U.S. recycles another 5.5 million tons of aluminum, a lighter but more expensive alternative to steel that carmakers also use.

The risks of amoral AI

By Jonathan Shieber
Kyle Dent Contributor
Kyle Dent is a Research Area Manager for PARC, a Xerox Company, focused on the interplay between people and technology. He also leads the ethics review committee at PARC.

Artificial intelligence is now being used to make decisions about lives, livelihoods and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is a blunt but illustrative example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.
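
To make that concrete, here is a toy sketch in Python of how such a trade-off ends up encoded in software; it illustrates the general pattern only, and is not Uber’s actual system or parameters:

from dataclasses import dataclass

@dataclass
class Detection:
    confidence: float        # classifier confidence the object is real
    time_to_impact_s: float  # seconds until predicted collision

# Tuning these two constants trades jerky false-positive braking
# against reaction time; the choice is an ethical decision in code.
CONFIDENCE_THRESHOLD = 0.9   # raising this suppresses "nuisance" braking
MAX_REACTION_S = 1.5         # ignore objects further out than this

def should_emergency_brake(d: Detection) -> bool:
    return d.confidence >= CONFIDENCE_THRESHOLD and d.time_to_impact_s <= MAX_REACTION_S

# A pedestrian detected with 0.85 confidence never triggers braking,
# no matter how close: the smooth-ride tuning wins.
print(should_emergency_brake(Detection(confidence=0.85, time_to_impact_s=1.0)))  # False

Nothing in such code announces itself as an ethical choice; it is just two constants, which is precisely how these decisions slip past scrutiny.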

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate for the downsides in order to get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or not) the decisions it makes. You can’t ask technology why it decided something or if it considered other alternatives or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used to train the system, its weighting schemes, model selection and other choices vendors make while developing the software is deemed a trade secret and therefore not available for discussion.

The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness.” Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Using data-driven risk assessment tools can be useful, especially for identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the eye of the beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges, for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast in which the conversation turned to whether talk about bias in AI wasn’t holding machines to a different standard than humans — seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point-of-view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.

In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful discussion of the issues involved.
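
A toy numeric experiment makes the incompatibility easy to see. The Python sketch below, with invented numbers in the spirit of those papers rather than reproducing any one of them, scores two groups with different base rates and shows that equalizing selection rates unbalances error rates, and vice versa:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                # two demographic groups
base_rate = np.where(group == 0, 0.3, 0.6)   # true positive rates differ
y = rng.random(n) < base_rate                # true outcomes
score = np.clip(0.5 * y + rng.normal(0.25, 0.15, n), 0, 1)  # informative score

def report(name, pred):
    for g in (0, 1):
        sel = group == g
        selection = pred[sel].mean()                      # demographic-parity metric
        fnr = (~pred[sel] & y[sel]).sum() / y[sel].sum()  # equal-opportunity metric
        print(f"{name} | group {g}: selected {selection:.2f}, FNR {fnr:.2f}")

# Policy A: one shared threshold -> near-equal error rates,
# but very different selection rates across groups.
report("shared threshold", score > 0.5)

# Policy B: per-group thresholds that equalize selection rates
# (demographic parity) -> error rates now diverge sharply.
target = (score > 0.5).mean()
pred_b = np.zeros(n, dtype=bool)
for g in (0, 1):
    sel = group == g
    pred_b[sel] = score[sel] > np.quantile(score[sel], 1 - target)
report("parity thresholds", pred_b)

Because the groups have different base rates, any threshold rule can satisfy one criterion only by violating the other, which is exactly the impossibility those formalizations establish.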

When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, job and college opportunities, for example, have not been settled and unfortunately contain political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.

Technologists with their heads down focused on algorithms are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems paints a veneer of science that purports to dole out apolitical solutions to difficult questions.

Who will watch the (AI) watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI-driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.

Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things,” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples among many I’ve seen from researchers who don’t have these issues on their radar. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and trade-offs.
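
As a sketch of what a mandated disclosure answering those questions might look like, here is a hypothetical record type in Python; the field names and example values are invented for illustration, in the spirit of the “model cards” documentation proposals circulating in the research community:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AlgorithmDisclosure:
    # Fields mirror the questions above; names are invented.
    intended_use: str
    training_data_source: str
    fitness_assessment: str    # how the data was judged fit for purpose
    population_coverage: str   # who is and is not represented
    known_biases: List[str]
    worst_case_error: str      # how bad the worst mistake can be
    error_rates: Dict[str, float] = field(default_factory=dict)

example = AlgorithmDisclosure(
    intended_use="pre-screening loan applications",
    training_data_source="2010-2018 internal lending outcomes",
    fitness_assessment="manual audit of a 1% sample",
    population_coverage="underrepresents applicants under 25",
    known_biases=["ZIP code proxies for protected attributes"],
    worst_case_error="creditworthy applicant wrongly rejected",
    error_rates={"false_rejection": 0.08, "false_approval": 0.03},
)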

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.

Watch a Waymo self-driving car test its sensors in a haboob

By Kirsten Korosec

Waymo, the self-driving car company under Alphabet, has been testing in the suburbs of Phoenix for several years now. And while the sunny metropolis might seem like the ideal and easiest location to test autonomous vehicle technology, there are times when the desert becomes a dangerous place for any driver — human or computer.

The two big safety concerns in this desert region are sudden downpours that cause flash floods and haboobs, giant walls of dust between 1,500 and 3,000 feet high that can cover up to 100 square miles. One record-breaking haboob in July 2011 covered the entire Phoenix valley, an area of more than 517 square miles.

Waymo on Friday released a blog post with two videos showing how the sensors on its self-driving vehicles detect and recognize objects while navigating through a haboob in Phoenix and fog in San Francisco. The vehicle in Phoenix was manually driven, while the one in the fog video was in autonomous mode.

The point of the videos, Waymo says, is to show whether, and how, the vehicles recognize objects during these extreme low-visibility moments. And they do. The haboob video shows how its sensors work to identify a pedestrian crossing a street with little to no visibility.

Waymo uses a combination of lidar, radar and cameras to detect and identify objects. Fog, rain or dust can limit visibility in all or some of these sensors.

Waymo doesn’t silo the sensors affected by a particular weather event. Instead, it continues to take in data from all the sensors, even those that don’t function as well in fog or dust, and uses that collective information to better identify objects.
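
Waymo hasn’t published the mechanics, but the idea is easy to sketch in Python; the sensors, confidences and reliability weights below are invented for illustration:

from typing import Dict

# Hypothetical per-sensor confidences that the object ahead is a
# pedestrian, mid-haboob: radar penetrates dust well, while lidar
# and camera degrade but still contribute evidence.
detections: Dict[str, float] = {"lidar": 0.35, "radar": 0.90, "camera": 0.20}
reliability: Dict[str, float] = {"lidar": 0.40, "radar": 0.95, "camera": 0.30}

def fused_confidence(det: Dict[str, float], rel: Dict[str, float]) -> float:
    """Reliability-weighted average: degraded sensors are
    down-weighted, not discarded."""
    return sum(det[s] * rel[s] for s in det) / sum(rel.values())

print(f"fused: {fused_confidence(detections, reliability):.2f}")  # fused: 0.64
# Well above what the dust-blinded camera (0.20) or lidar (0.35)
# would report alone -- enough to keep tracking the pedestrian.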

The potential is for autonomous vehicles to improve on visibility, one of the greatest performance limitations for human drivers, Debbie Hersman, Waymo’s chief safety officer, wrote in the blog post. If Waymo or other AV companies are successful, they could help reduce one of the leading contributors to crashes. The Department of Transportation estimates that weather contributes to 21% of annual U.S. crashes.

Still, there are times when even an autonomous vehicle doesn’t belong on the road. It’s critical for any company planning to deploy AVs to have a system that can not only identify worsening conditions but also take the safest action when they deteriorate.

Waymo vehicles are designed to automatically detect sudden extreme weather changes, such as a snowstorm, that could impact the ability of a human or an AV to drive safely, according to Hersman.

The question is what happens next. Humans are supposed to pull off the road during a haboob and turn off the vehicle, and to take similar action in heavy fog. Waymo’s self-driving vehicles will do the same if weather conditions deteriorate to the point that the company believes it would affect the safe operation of its cars, Hersman wrote.

The videos and blog post are the latest effort by Waymo to showcase how and where it’s testing. The company announced August 20 that it has started testing how its sensors handle heavy rain in Florida. The move to Florida will focus on data collection and testing sensors; the vehicles will be manually driven for now.

Waymo also tests (or has tested) its technology in and around Mountain View, Calif.; Novi, Mich.; Kirkland, Wash.; and San Francisco. The bulk of the company’s activities have been in the suburbs of Phoenix and around Mountain View.
