Technology

Where are the self-driving cars up to?

By Andrea Signorelli

Even just a few years ago, if you asked car industry experts when self-driving cars would arrive you would have received a unanimous response: in 2020. That was the year marked on the calendar, the moment when autonomous cars would begin to drive around our cities for real, picking their way through traffic and taking us to our destination while we, freed from the burden of driving, could get on with other things…

Now 2020 has arrived, but the rise of autonomous cars is still limited to experimental vehicles – accumulating thousands of miles of experience under human supervision – and to driver-assistance software, as installed in Teslas. So what went wrong and when will the great promise of self-driving cars truly be fulfilled? “We underestimated the difficulties,” Ford President Jim Hackett admitted in a meeting last April at the Detroit Economic Club, clarifying what everyone now knows: progress in this area is difficult, slow and very expensive. So expensive in fact that Ford and Volkswagen have recently announced that they will join forces in an attempt to overcome the obstacles, while Bryan Salesky – CEO of Argo, one of the best-known startups in the industry – has admitted that creating self-driving cars able to go to any destination is “still a long way in the future.” This is all something of a cold shower for an automotive industry that has invested billions of dollars in the promise of self-driving cars and thought it was embarking on the next great mobility revolution. Hardly anyone dares make forecasts on the advent of self-driving cars anymore – and those who do put it at 2030 or even beyond.

Test track for automated driving in Berlin (Reuters)

A lengthy learning process

But why all these difficulties, given that the necessary technology – radar, cameras, sensors and deep-learning software that can react to situations on the road – has already been fully developed? A bit of background is perhaps needed. All artificial intelligence algorithms, including those for driving, base their knowledge on statistics. They need hundreds of thousands, if not millions, of data points about their task before they can work out a given situation or what they have to do. What’s more, before giving satisfactory results they have to experiment thousands and thousands of times through “trial and error”. That is how, for example, an image-recognition algorithm learns to identify a cat or how an algorithm learns to play a video game. As its training goes on, the AI gradually works out what are statistically the distinctive features of a cat, or the right reaction to a given situation in the game, until it becomes capable of making the correct decision in a very high percentage of cases. But when it comes to driving on the road, it is extremely complicated to find a statistical pattern that can tell the algorithm what move to make in a given situation. It’s one thing to learn to recognise road signs and correctly decipher their meanings; it’s quite another to understand how to get through a city strewn with pedestrians who behave unpredictably, scooters that slalom through traffic, bicycles going the wrong way down the street, and cars that make sudden and unexpected manoeuvres. In a nutshell – as a recent study demonstrated – AI still struggles to deal with unforeseen events. “They’re leaning on the big data because that’s the crutch that they have, but there’s no proof that ever gets you to the level of precision that we need,” AI expert Gary Marcus has explained.
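The trial-and-error loop described above can be sketched in a few lines of code. The toy below is not any carmaker’s actual system – it is a hypothetical illustration of statistical learning, with made-up reward probabilities: an agent repeatedly tries one of three actions and, purely from the statistics it accumulates, converges on the action that pays off most often.

```python
import random

def train_by_trial_and_error(reward_probs, steps=5000, epsilon=0.1, seed=42):
    """Learn which action works best purely from repeated trials.

    reward_probs: the (hypothetical) chance each action succeeds.
    Returns one running success-rate estimate per action.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(reward_probs)   # what the agent believes so far
    counts = [0] * len(reward_probs)        # how often each action was tried

    for _ in range(steps):
        # Mostly exploit the best-looking action, occasionally explore others.
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))
        else:
            action = max(range(len(reward_probs)), key=lambda a: estimates[a])

        # The environment answers statistically: success or failure.
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0

        # Update the running average for the action that was tried.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]

    return estimates

# After thousands of trials the estimates approach the true probabilities.
learned = train_by_trial_and_error([0.2, 0.8, 0.5])
```

The point of the sketch is the scale: even for three actions with fixed odds, thousands of trials are needed before the statistics settle – which hints at why open-ended city traffic, with effectively unlimited “actions” and shifting odds, is so much harder.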

File photograph shows video captured by a Google self-driving car coupled with the same street scene as the data is visualized by the car (Reuters)

An unpredictable world

In city traffic, for example, there are simply too many unknowns for artificial intelligence to fit them into a predictable model. Will a person who suddenly appears from behind a tram decide to wait for the car to pass by or try to cross first? And will the moped that looks like it’s about to cut across us change its mind at the last second, or will it actually do it? “Today’s technology cannot handle the randomness of behaviour,” says Volvo’s autonomous driving director Markus Rothoff. Then there is the whole issue of the “micro-manoeuvres” that human beings refine over years and years. For example, when we see a car moving too slowly in front of us, we might imagine that the driver is looking for parking and leave space so that they can back up and park. Or if we see someone coming out from an intersection on the left, we can decide to move slightly to the right just in case they don’t stop exactly where they are supposed to (and therefore where an algorithm would expect).
Human beings can manage these obstacles by relying on experience and common sense – yet we are still wrong in many cases, sometimes with tragic results. Deep-learning algorithms based solely on the experience obtained from statistical data can only cope in strictly controlled environments, like an airport, or where far fewer unknowns occur, such as on the motorways of the future.

Google's self-driving car involved in an accident in Arizona (KTNV Channel 13 Las Vegas)

Alternative routes

We should also note that, so far, whenever autonomous cars have been faced with an unexpected and unknown situation, they have always reacted in the same way: by slamming on the brakes. Obviously, a car that brakes sharply at any unexpected scenario does not make for a pleasant journey. Rather than a car driven by an infallible algorithm, it would feel like being chauffeured by the most inexperienced learner driver. That much was clear even a number of years ago. At the time, though, it was thought that progress in deep learning would be fast enough to solve these problems in time to meet the promised deadline. Things have turned out very differently and, above all, city driving has proved a very hard nut to crack.

So what can we expect in the future? Firstly, we can’t rule out that continuing innovation in artificial intelligence will make it possible to overcome the current problems. Deep-learning pioneer Yann LeCun is working to make artificial intelligence less dependent on exorbitant amounts of data and able to learn in a way that is closer to how human beings do. The timescale, however, may still be very long. Other experts, including Andrew Ng, have suggested that we rethink how cities work: “Safe autonomous cars will require modest infrastructure changes, designs that make them easily recognized and predictable, and that pedestrians and human drivers understand how computer driven cars behave.” Essentially, we will not only have to carry on training cars so that they can learn to navigate around human beings; we will also have to make people behave in a more rational and predictable way, in a sense training them to co-exist with autonomous cars (even if that prospect may not be particularly attractive).
In the meantime, we will have to make do with autonomous “robo-taxis” that operate in simple environments such as airports, campuses or hospitals, and with cars fitted with increasingly sophisticated driver assistants that can brake in front of sudden obstacles or adjust direction if you drift out of lane. That may not be the self-driving cars we expected, but it is no small achievement all the same.



about the author
Andrea Signorelli
Born in Milan in 1982, he writes about the interaction between new technologies, politics and society. He collaborates with La Stampa, Wired, Esquire, Il Tascabile and others. In 2017 he published “Rivoluzione Artificiale: l’uomo nell’epoca delle macchine intelligenti” for Informant Edizioni.