Waymo's So-Called Robo-Taxi Launch Reveals a Brutal Truth

The outfit that started as Google's self-driving car team still relies on human safety operators—indicating just how hard this problem really is.
[Photo caption: Even after this "launch," safety drivers will stay behind the wheel of Waymo's robo-cars. Image: Waymo]

Waymo, the frontrunner in the self-driving car industry, today announced the moment everyone has been waiting for: It is officially “launching” a robo-taxi service in Chandler, Arizona, where riders can use an app to hail a vehicle to take them anywhere within an 80-to-100-square-mile area, for a price.

“Today, we're taking the next step in our journey with the introduction of our commercial self-driving service, Waymo One,” Waymo CEO John Krafcik wrote in a blog post.

The banner Waymo is unfurling, though, is tattered by caveats. Waymo One will only be available to the 400 or so people already enrolled in Waymo’s early rider program, which has been running in the calm, sunny Phoenix suburb of Chandler for about 18 months. (They can bring guests with them and have been freed from non-disclosure agreements that kept them from publicly discussing their experiences.) More glaringly, the cars will have a human behind the wheel, there to take control in case the car does something it shouldn’t.

So no, this is not the anyone-can-ride, let-the-robot-drive experience Waymo and its competitors have been promising for years. Building a reliably safe system has proven far harder than just about everyone anticipated, and Waymo’s cars aren’t ready to drive without human oversight. But Waymo promised to launch a commercial service sometime in 2018, and it didn’t want to miss that deadline and risk its reputation as the leader of the industry it essentially created. Not even the might of Waymo parent company Alphabet can delay the end of the calendar year.

So Waymo is pushing out a software update, tweaking its branding, and calling it a launch.

[Photo caption: Using the Waymo One app, members of Waymo’s early rider program can hail rides and even bring friends along. Image: Waymo]

It’s a letdown, yes, but we get it: This is perhaps the hardest technological challenge of the modern era. And even those who understand it best seem to have underestimated its gnarliness. Just a year ago, Krafcik crowed that Waymo was taking the safety operators out of its cars. “Fully self-driving cars are here,” he said onstage at the Web Summit in Lisbon. And from there, he promised, adding in paying passengers would be an easy move. “The difference between those two things is relatively slight,” Krafcik told reporters. “You’ve still got a fully driverless car interacting with the world, all of the other human-driven cars, pedestrians and cyclists and other things that are on the road at the same time.”

The drivers are still here. The bravado is gone. And the true timeline toward true driverlessness is becoming apparent. “We’re still looking towards a time when a car comes and picks you up and there’s no one in it and it’s truly your space,” says Waymo spokesperson Liz Markman. “But it’s going to be a gradual path to get there.”

The path is gradual because for all our flaws, humans are great at driving, at least when we pay attention. And for all their talents, robots are terrible at it. A human driver’s skills scale and adapt well to just about any situation: Make a left there, and you can make it anywhere. Robots’ skills are more context-specific, what roboticists call brittle. Even in placid, boring Chandler, Waymo’s cars encounter an endless variety of situations involving pedestrians, cyclists, scooter-ers, and other drivers. “Humans adapt incredibly well to those scenarios,” says Matt Johnson-Roberson, who co-directs the University of Michigan Ford Center for Autonomous Vehicles. “Cars do not.”

And the risk of failure is far more serious than a 404 page or a frustrated user. As Uber proved in March and human drivers prove every day, mistakes on the road can be deadly. So until Waymo or any other developer is really, truly sure their system is safe, the human overseer will stay put. “What you’re seeing is a reflection of our iterative and incremental approach,” says Waymo product chief Dan Chu.

And self-driving systems are so intricate and elaborate that updating the software is more like tweaking a personality than fixing a set of bugs. You might address merging onto the highway by tuning the car to be a bit more aggressive, only to find that behavior creates unwanted results elsewhere on the road, says Johnson-Roberson. “With these highly complex systems, it is unclear what one change will do.”

In the decade since it started out as Google’s self-driving car project, Waymo has made astounding progress. Its cars have driven 10 million miles on public roads in 25 US cities, and another 10 billion in simulation. Chandler locals say they drive well, if too conservatively. The company has helped shape legislation governing the technology, tackled the knotty problem of keeping a fleet of cars in good shape and on the road, and built a dispatch system that connects riders and rides.

But Waymo hasn’t reached a point where it’s confident in its cars’ ability to stay safe without human oversight. And it’s not sure when it will get there. “We’re definitely under no illusions about the path ahead of us,” Chu says. “I think anyone who says otherwise doesn’t understand the challenges as deeply as we do.”

Now it's just a matter of cracking it.

Aarian Marshall contributed reporting.