Lyft is adding Chrysler Pacificas to its AV fleet and opening a new dedicated self-driving test facility

By Darrell Etherington

Lyft has another year of building out its autonomous driving program under its belt, and the ride-hailing company has been expanding its testing steadily throughout 2019. The company says that it’s now driving four times more miles on a quarterly basis than it was just six months ago, and has roughly 400 people worldwide dedicated to autonomous vehicle technology development.

Going into next year, it’s also expanding the program by adding a new type of self-driving test car to its fleet: Chrysler’s Pacifica hybrid minivan, which is also the platform of choice for Waymo’s current generation of self-driving car. The Pacifica makes a lot of sense as a ridesharing vehicle, as it’s a perfect passenger car with easy access via the big sliding door and plenty of creature comforts inside. Indeed, Lyft says that it was chosen specifically because of its “size and functionality” and what those offer to the Lyft AV team when it comes to “experiment[ing] with the self-driving rideshare experience.” Lyft says it’s currently working on building out these test vehicles in order to get them on the road.

Lyft’s choice of vehicle is likely informed by its existing experience with the Pacificas, which it encountered when it began partnering with Waymo back in May on that company’s autonomous vehicle pilot program in Phoenix, Ariz. That ongoing partnership, in which Waymo rides are offered on Lyft’s ride-hailing network, is providing Lyft with plenty of information about how riders experience self-driving ride-hailing, the company says. In addition to Waymo, Lyft is currently partnering with Aptiv to provide self-driving services commercially to the public through that company’s Las Vegas AV deployment.

In addition to adding Pacificas to its fleet alongside the current Ford Fusion test vehicles it has in operation, Lyft is opening a second facility in addition to its Level 5 Engineering Center, the current central hub of its global AV development program. Like the Level 5 Engineering Center, its new dedicated testing facility will be located in Palo Alto, and having the two close together will help “increase the number of tests we run,” according to Lyft. The new test site is designed to host intersections, traffic lights, roadway merges, pedestrian pathways and other features of public roads, all reconfigurable to simulate a wide range of real-world driving scenarios. Already, Lyft uses the GoMentum Station third-party testing facility located in Concord, Calif. for AV testing, and this new dedicated site will complement, rather than replace, its work at GoMentum.

Meanwhile, Lyft is also continually expanding availability of its employee self-driving service access. In 2019, it increased the availability of self-driving routes for its employees three-fold, the company says, and it plans to continue to grow the areas covered “rapidly.”

Hailing a driverless ride in a Waymo

By Kirsten Korosec
Ed Niedermeyer Contributor
Ed Niedermeyer is an author, columnist and co-host of The Autonocast. His book, Ludicrous: The Unvarnished Story of Tesla Motors, was released in August 2019.

“Congrats! This car is all yours, with no one up front,” the pop-up notification from the Waymo One app reads. “This ride will be different. With no one else in the car, Waymo will do all the driving. Enjoy this free ride on us!”

Moments later, an empty Chrysler Pacifica minivan appears and navigates its way to my location near a park in Chandler, the Phoenix suburb where Waymo has been testing its autonomous vehicles since 2016.

Waymo, the Google self-driving-project-turned-Alphabet unit, has given demos of its autonomous vehicles before. More than a dozen journalists experienced driverless rides in 2017 on a closed course at Waymo’s testing facility in Castle; and Steve Mahan, who is legally blind, took a driverless ride in the company’s Firefly prototype on Austin’s city streets way back in 2015.

But this driverless ride is different — and not just because it involved an unprotected left-hand turn and busy city streets, or because the Waymo One app was used to hail the ride. It marks the beginning of a driverless ride-hailing service that is now being used by members of Waymo’s early rider program and will eventually be open to the public.

It’s a milestone that has been promised — and has remained just out of reach — for years.

In 2017, Waymo CEO John Krafcik declared on stage at the Lisbon Web Summit that “fully self-driving cars are here.” Krafcik’s show of confidence and accompanying blog post implied that the “race to autonomy” was almost over. But it wasn’t.

Nearly two years after Krafcik’s comments, vehicles driven by humans — not computers — still clog the roads in Phoenix. The majority of Waymo’s fleet of self-driving Chrysler Pacifica minivans in Arizona have human safety drivers behind the wheel; and the few driverless ones have been limited to testing only.

Despite some progress, Waymo’s promise of a driverless future has seemed destined to be forever overshadowed by stagnation. Until now.

Waymo wouldn’t share specific numbers on just how many driverless rides it would be giving, only saying that it continues to ramp up its operations. Here’s what we do know. There are hundreds of customers in its early rider program, all of whom will have access to this offering. These early riders can’t request a fully driverless ride. Instead, they are matched with a driverless car if it’s nearby.

There are, of course, caveats to this milestone. Waymo is conducting these “completely driverless” rides in a controlled geofenced environment. Early rider program members are people who are selected based on what ZIP code they live in and are required to sign NDAs. And the rides are free, at least for now.

Still, as I buckle my seatbelt and take stock of the empty driver’s seat, it’s hard not to be struck, at least for a fleeting moment, by the achievement.

It would be a mistake to think that the job is done. This moment marks the start of another, potentially lengthy, chapter in the development of driverless mobility rather than a sign that ubiquitous autonomy is finally at hand.

Futuristic joyride   

A driverless ride sounds like a futuristic joyride, but it’s obvious from the outset that the absence of a human touch presents a wealth of practical and psychological challenges.

As soon as I’m seated, belted and underway, the car automatically calls Waymo’s rider assistance team to address any questions or concerns about the driverless ride — bringing a brief human touch to the experience.

I’ve been riding in autonomous vehicles on public roads since late 2016. All of those rides had human safety drivers behind the wheel. Seeing an empty driver’s seat at 45 miles per hour, or a steering wheel spinning in empty space as it navigates suburban traffic, feels inescapably surreal. The sensation is akin to one of those dreams where everything is the picture of normalcy except for that one detail — the clock with a human face or the cat dressed in boots and walking with a cane.

Other than that niggling feeling that I might wake up at any moment, my 10-minute ride from a park to a coffee shop was very much like any other ride in a “self-driving” car. There were moments where the self-driving system’s driving impressed, like the way it caught an unprotected left turn just as the traffic signal turned yellow or how its acceleration matched surrounding traffic. The vehicle seemed to even have mastered the more human-like driving skill of crawling forward at a stop sign to signal its intent.

Only a few typical quirks, like moments of overly cautious traffic spacing and overactive path planning, betrayed the fact that a computer was in control. A more typical rider, specifically one who doesn’t regularly practice their version of the driving Turing Test, might not have even noticed them.

How safe is ‘safe enough’?

Waymo’s decision to put me in a fully driverless car on public roads anywhere speaks to the confidence it puts in its “driver,” but the company was cagey about the specific source of that confidence.

Waymo’s Director of Product Saswat Panigrahi declined to share how many driverless miles Waymo had accumulated in Chandler, or what specific benchmarks proved that its driver was “safe enough” to handle the risk of a fully driverless ride. Citing the firm’s 10 million real-world miles and 10 billion simulation miles, Panigrahi argued that Waymo’s confidence comes from “a holistic picture.”

“Autonomous driving is complex enough not to rely on a singular metric,” Panigrahi said.

It’s a sensible, albeit frustrating, argument, given that the most significant open question hanging over the autonomous drive space is “how safe is safe enough?” Absent more details, it’s hard to say if my driverless ride reflects a significant benchmark in Waymo’s broader technical maturity or simply its confidence in a relatively unchallenging route.

The company’s driverless rides are currently free and only taking place in a geofenced area that includes parts of Chandler, Mesa and Tempe. This driverless territory is smaller than Waymo’s standard domain in the Phoenix suburbs, implying that confidence levels are still highly situational. Even Waymo vehicles with safety drivers don’t yet take riders to one of the most popular ride-hailing destinations: the airport.

The complexities of driverless

Panigrahi deflected questions about the proliferation of driverless rides, saying only that the number has been increasing and will continue to do so. Waymo has about 600 autonomous vehicles in its fleet across all geographies, including Mountain View, Calif. The majority of those vehicles are in Phoenix, according to the company.

However, Panigrahi did reveal that the primary limiting factor is applying what the company has learned from its research into early rider experiences.

“This is an experience that you can’t really learn from someone else,” Panigrahi said. “This is truly new.”

Some of the most difficult challenges of driverless mobility only emerge once riders are combined with the absence of a human behind the wheel. For example, developing the technologies and protocols that allow a driverless Waymo to detect and pull over for emergency response vehicles and even allow emergency services to take over control was a complex task that required extensive testing and collaboration with local authorities.

“This was an entire area that, before doing full driverless, we didn’t have to worry as much about,” Panigrahi said.

The user experience is another crux of driverless ride-hailing. It’s an area to which Waymo has dedicated considerable time and resources — and for good reason. User experience turns out to hold some surprisingly thorny challenges once humans are removed from the equation.

The everyday interactions between a passenger and an Uber or Lyft driver, such as conversations about pick-up and drop-offs as well as sudden changes in plans, become more complex when the driver is a computer. It’s an area that Waymo’s user experience research (UXR) team admits it is still figuring out.

Computers and sensors may already be better than humans at specific driving capabilities, like staying in lanes or avoiding obstacles (especially over long periods of time), but they lack the human flexibility and adaptability needed to be a good mobility provider.

Learning how to either handle or avoid the complexities that humans accomplish with little effort requires a mix of extensive experience and targeted research into areas like behavioral psychology that tech companies can seem allergic to.

Not just a tech problem

Waymo’s early driverless rides mark the beginning of a new phase of development filled with fresh challenges that can’t be solved with technology alone. Research into human behavior, building up expertise in the stochastic interactions of the modern urban curbside, and developing relationships and protocols with local authorities are all deeply time-consuming efforts. These are not challenges that Waymo can simply throw technology at, but require painstaking work by humans who understand other humans.

Some of these challenges are relatively straightforward. For example, it didn’t take long for Waymo to realize that dropping off riders as close as possible to the entrance of a Walmart was actually less convenient due to the high volume of foot traffic there. But understanding that pick-up and drop-off isn’t ruled by a single principle (e.g. closer to the entrance is always better) hints at a hidden wealth of complexity that Waymo’s vehicles need to master.

As frustrating as the slow pace of self-driving proliferation is, the fact that Waymo is embracing these challenges and taking the time to address them is encouraging.

The first chapter of autonomous drive technology development was focused on the purely technical challenge of making computers drive. Weaving Waymo’s computer “driver” into the fabric of society requires an understanding of something even more mysterious and complex: people and how they interact with each other and the environment around them.

Given how fundamentally autonomous mobility could impact our society and cities, it’s reassuring to know that one of the technology’s leading developers is taking the time to understand and adapt to them.

The risks of amoral AI

By Jonathan Shieber
Kyle Dent Contributor
Kyle Dent is a Research Area Manager for PARC, a Xerox Company, focused on the interplay between people and technology. He also leads the ethics review committee at PARC.

Artificial intelligence is now being used to make decisions about lives, livelihoods and interactions in the real world in ways that pose real risks to people.

We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.

It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.

But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).

AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose tradeoffs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.

These tradeoffs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.

The fatal Uber accident in Tempe, Arizona, is an unsubtle but illustrative example that makes it easy to see how this happens.

The autonomous vehicle system actually detected the pedestrian in time to stop, but the developers had tuned the emergency braking system in favor of not braking too hard, balancing a tradeoff between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.
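
To make that tradeoff concrete, here is a minimal, hypothetical kinematics sketch. Every number in it is an assumption of mine chosen for illustration, not a figure from the Uber investigation or any real vehicle; the point is only that a single parameter tuned for ride comfort can decide whether the car stops in time.

    # Hypothetical illustration only: every number below is an assumption,
    # not a figure from the Uber investigation or any real system.
    def stopping_distance(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
        """Distance covered during the reaction delay plus braking to a full stop."""
        return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

    speed = 17.0            # ~38 mph, a typical arterial-road speed (assumed)
    detection_range = 35.0  # meters at which the pedestrian is detected (assumed)
    reaction_delay = 0.5    # seconds between detection and the brake command (assumed)

    comfort_cap = 3.0    # m/s^2, deceleration cap tuned to avoid jerky rides (assumed)
    emergency_cap = 8.0  # m/s^2, close to the physical limit of the tires (assumed)

    for label, decel in [("comfort-capped", comfort_cap), ("emergency", emergency_cap)]:
        needed = stopping_distance(speed, reaction_delay, decel)
        verdict = "stops in time" if needed < detection_range else "does NOT stop in time"
        print(f"{label:14s}: needs {needed:5.1f} m -> {verdict}")

With these assumed numbers, the comfort-capped setting needs roughly 57 meters to stop while the emergency setting needs about 27, so the same detection event ends very differently depending on one value chosen during tuning.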

Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate for the downsides in order to get the benefits with minimal harm.

A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm. 

Buyer Beware

Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or not) the decisions it makes. You can’t ask technology why it decided something or if it considered other alternatives or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.

So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used for training the system plus its weighting schemes, model selection, and other choices vendors make while developing the software are deemed trade secrets and therefore not available for discussion.

The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.

Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness.” Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.

Data-driven risk assessment tools can be useful, especially for identifying low-risk individuals who could benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.

All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.

Fairness is in the eye of the beholder

Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges, for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.

In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.

I recently listened to a podcast where the conversation wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans—seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.

As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.

A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point-of-view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.

In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful discussion of the issues involved.
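
A toy calculation (my own illustration, not drawn from those papers) shows why. Imagine a risk tool applied to two groups with different underlying rates of the outcome being predicted. If the tool is forced to have identical error rates for both groups, the reliability of a “high-risk” flag inevitably differs between them:

    # Toy numbers of my own, not drawn from the papers: two common fairness
    # criteria cannot both hold once groups have different base rates.
    def ppv(tpr: float, fpr: float, base_rate: float) -> float:
        """Positive predictive value: P(truly high-risk | flagged high-risk)."""
        flagged_true = tpr * base_rate
        flagged_false = fpr * (1 - base_rate)
        return flagged_true / (flagged_true + flagged_false)

    # Impose "error-rate parity": identical true- and false-positive rates for both groups.
    tpr, fpr = 0.7, 0.2

    for group, base_rate in [("group A", 0.5), ("group B", 0.2)]:
        print(f"{group}: base rate {base_rate:.0%}, "
              f"reliability of a high-risk flag {ppv(tpr, fpr, base_rate):.0%}")

    # group A: base rate 50%, reliability of a high-risk flag 78%
    # group B: base rate 20%, reliability of a high-risk flag 47%

Equalizing error rates makes a high-risk flag far less reliable for the lower-base-rate group; equalizing the flag’s reliability instead would force the error rates apart. Both are defensible definitions of fairness, and whenever base rates differ no system can satisfy both at once.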

When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, job and college opportunities, for example, have not been settled and unfortunately contain political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.

Technologists with their heads down, focused on algorithms, are determining important structural issues and making policy choices. This forecloses the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists and, above all, stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems paints them with a veneer of science, as if apolitical solutions could simply be doled out for difficult questions.

Who will watch the (AI) watchers?

One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.

Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.

Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things,” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.

This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.

I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.

When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.

Recently we learned that Amazon abandoned an in-house technology that it had been testing to select the best resumes from among its applicants. Amazon discovered that the system it created had developed a preference for male candidates, in effect penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure its own technology was working as effectively as possible, but will other companies be as vigilant?

As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.

With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, transparency about the development and use of opaque technology in domains where fairness matters isn’t going to happen unless companies are compelled to provide it.

Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and trade-offs.

At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.
