
Amazing AI

By Eniday Staff

Google unveiled its Duplex program to an astounded audience in May 2018. The new software could, by itself, book a table at a restaurant without the person on the other end of the phone realising they were speaking with artificial intelligence. Duplex could not only hold a conversation, but even imitate the pauses and fillers typical of human speech…

Besides flabbergasting people, Duplex raised a fair amount of concern. How could we be sure AI would not use its ability to imitate people to deceive us? Would it be necessary to force software to declare its artificial nature? Were we at the beginning of an evolution that would see AI resemble human beings more and more, eventually even developing a conscience? These are fascinating questions, but they have little to do with what Duplex actually demonstrated. “Google Duplex is not the advance toward meaningful A.I. that many people seem to think”, wrote the cognitive scientist Gary Marcus in The New York Times. But why is Duplex limited to so narrow a range of tasks? The answer lies in the properties of deep learning, the family of algorithms on which today’s artificial intelligence relies. To hold a conversation, an AI must be trained on hundreds of thousands of pieces of data, from which it learns the possible interactions in a chat between people. With this information, it can statistically determine the proper response to a given question (a toy sketch of the idea appears below). It is a complicated process that only works if the conversation is extremely restricted, as with a restaurant booking. “The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals”, Marcus goes on. “The explanation is that the field of A.I. doesn’t yet have a clue how to do any better. Open-ended conversation on a wide range of topics is nowhere in sight”.

A demonstration of Google Duplex at Google I/O 2018
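Google has not published Duplex’s internals, which are built on deep neural networks trained at vast scale. As a purely illustrative stand-in, here is a toy Python sketch of the statistical idea: pick the canned reply whose training utterance best matches the caller’s words. Everything in it, the corpus, the scoring function, the replies, is invented for illustration.

```python
from collections import Counter

# Tiny invented "training" corpus of (caller utterance, appropriate reply).
# A real system learns from vastly larger conversation datasets.
CORPUS = [
    ("i would like to book a table for two", "Certainly, for what date and time?"),
    ("do you have a table free on friday evening", "Yes, we have a table at 7 pm on Friday."),
    ("can i change my reservation", "Of course, what would you like to change it to?"),
    ("what time do you close tonight", "We close at 11 pm."),
]

def overlap(a: str, b: str) -> int:
    """Crude similarity score: how many words the two utterances share."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def respond(utterance: str) -> str:
    """Return the reply paired with the statistically best-matching utterance."""
    _, reply = max(CORPUS, key=lambda pair: overlap(utterance.lower(), pair[0]))
    return reply

print(respond("Hi, could I book a table for two people?"))
# -> "Certainly, for what date and time?"
```

Ask this toy anything outside the booking domain and it still returns a booking reply: that, in miniature, is the narrowness Marcus describes.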

One task at a time

This is true not only of specialist linguistic algorithms, but of all kinds of artificial intelligence, including the image-recognition systems charged with, among other things, recognising people or animals in photos. “Even small changes to pictures can completely change the system’s judgment”, writes Russell Brandom at The Verge. “An algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot – even if it’s seen pictures of housecats and jaguars”. To put it concisely, AI cannot generalise or extrapolate. These skills are fundamental to human intelligence: they allow us to hold conversations of all kinds, and to recognise an ocelot even if all we know is that it sits roughly halfway between a domestic cat and a jaguar. From this point of view, artificial intelligence is not just vastly inferior to human intelligence, but not even on the right track to one day overtaking it.

Of course, none of this is to understate the impressive advances in deep learning. Today, among a raft of other highly useful skills, artificial intelligence algorithms can diagnose cancer, help lawyers find links between legal documents, translate increasingly well between languages, and identify attacks by hackers before human experts can. What these algorithms share is their reliance on statistics and the limits of their intelligence: each can only do one thing at a time, which is why specialists call them artificial narrow intelligence (ANI). An algorithm designed for translation cannot diagnose cancer, and a system for recognising cats cannot spot a cow when it sees one (a toy sketch below makes this concrete). If you want to use such an algorithm for something different, you need to retrain it afresh. As the Oxford physicist David Deutsch explains, artificial intelligence lacks the human ability to adapt an existing behavioural repertoire to new challenges without recourse to trial and error or to information from a third party. Besides being unable to extrapolate or generalise, Deutsch points out, it lacks creativity and common sense, the two basic characteristics of general intelligence properly understood.

On the matter of creativity, we might raise some objections. After all, the software that beat the human world champion at Go, an incredibly complicated board game, did so by making moves that looked like mistakes. In a sense, the AI was demonstrating creativity by departing from traditional strategy, since a human player would never have used such tactics. That is not all. On 25 October 2018, the portrait “Edmond de Belamy” sold at Christie’s for $432,500. The artist was not a human but an artificial intelligence: its programmers showed it a huge corpus of historic art, which it processed into a painting of its own. Rather than being creative, the AI had imitated human creativity. But then, could we not say the same of man-made art? Ultimately, no painter, sculptor or performer has ever created anything from absolutely nothing; they have always reworked, recreated and reinvented what came before in the history of art. The datasets that AI uses to make art may be something like the inspiration that contemporary artists draw from their predecessors.

The portrait "Edmond de Belamy" (Obvious)
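Returning to the narrowness point above: a classifier’s output space is frozen at training time, so it can only ever answer with the labels it was trained on. The sketch below is hypothetical, with invented features and weights; a real image model would learn millions of parameters from pixels. Shown a cow, this “cat versus jaguar” model must still answer “cat” or “jaguar”.

```python
import math

LABELS = ["cat", "jaguar"]                        # frozen when the model was trained
WEIGHTS = {"cat": [2.0, -1.0], "jaguar": [-1.0, 2.0]}  # invented for illustration

def classify(features):
    """Softmax over the two known labels -- there is no 'cow' option."""
    scores = {label: sum(w * f for w, f in zip(WEIGHTS[label], features))
              for label in LABELS}
    z = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / z for label, s in scores.items()}

# Hypothetical feature vector for a cow: [smallness, rosette-pattern].
print(classify([0.1, 0.2]))
# -> confident-looking probabilities over {cat, jaguar}; "cow" cannot appear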

Now we come to common sense. The failings of artificial intelligence in this respect are exemplified by Facebook’s persistent failure to remove undesirable content. The algorithm charged with detecting images that break the social network’s rules cannot distinguish between sexual photos, which need removing, and nudes in artworks, which are allowed. The same lack of common sense makes it impossible for an algorithm to distinguish between racist posts, which are banned, and posts that mock racist arguments, if we can call them that. Only human beings currently have the skills needed to make such subtle and important distinctions. But we cannot rule out AI one day overcoming its limitations. In fact, investors around the world are betting it will do just that: in 2018 they pumped a record $9.3 billion into start-up companies working in artificial intelligence, a 72% increase on the previous year.

Hints of real intelligence are already glinting through. In October 2016, DeepMind, a subsidiary of Google, published a study in the journal Nature describing an AI able to plan the quickest routes on the London Underground in one go, without multiple attempts. This represented enormous progress, and it was possible because the machine had been trained, through trial and error, on maps of underground systems in other cities. As it did so, its neural network learnt to store useful information in an external memory and call it to mind when needed. The system, known as a differentiable neural computer, could therefore reuse what it had learnt to make new deductions (a simplified sketch of its memory read appears below). Although this AI too is limited to single tasks, its ability to apply what it has consigned to memory, and so to learn in general terms, is a first step towards real artificial intelligence of the human kind. It is capable of embryonic thinking.

A schematic of the differentiable neural computer architecture (deepmind.com)
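At the heart of the differentiable neural computer is an external memory matrix that the network reads with content-based addressing: it compares a query vector against every stored row and takes a softmax-weighted average of the best matches. The sketch below, in Python with NumPy and with invented values, isolates just that read operation; the full architecture in the Nature paper adds a learned controller, write heads and temporal links.

```python
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """Content-based addressing: cosine-compare the key against every memory
    row, softmax the similarities (sharpened by beta), return the weighted
    average of the rows."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sims)
    weights /= weights.sum()
    return weights @ memory

# Hypothetical 4-slot memory; each row is a stored "fact" as a vector.
memory = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.7, 0.7, 0.0],
])

# A query resembling the first stored vector retrieves (mostly) that row.
print(content_read(memory, np.array([0.9, 0.1, 0.0])))
```

Because every step is differentiable, the network can learn by gradient descent what to store and when to look it up, which is what let it transfer route-planning skills from one underground map to another.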

A fair fight

More recently, we have seen the spread of so-called generative adversarial networks (GANs). These systems set two different algorithms against each other, each spurring the other on to get the best results. How exactly do they work? To begin with, the two algorithms are trained on data relevant to their respective functions; one, for example, might be shown hundreds of thousands of images of cats. The first algorithm, known as the generator, uses its training to create original images. The second, the discriminator, is shown those images and has to determine whether they were created by the generator or instead come from the real database. It is a little like an art critic trying to spot a forgery. The more accurate the generator’s work, the more likely it is to fool the discriminator. Every time the discriminator correctly identifies and rejects the generator’s work, the process begins again, and the generator has to improve if it wants to outwit its opponent. But the discriminator is doing the same, honing its skill at spotting the generator’s work, so the competition stays level, as happens with critics and forgers. (A minimal sketch of this adversarial loop appears at the end of the article.)

This technique has been used to create the images of “people who don’t exist” that went viral a few months ago, as well as unsettling deepfakes and the sort of artworks mentioned above. Most of all, it is the next big step in artificial intelligence. Its full potential has yet to reveal itself, but it springs from something intrinsically human, or animal: the ability to collaborate or compete to achieve things. AI may lack common sense and the capacity to generalise, but it is still making constant, and often impressive, progress. Real general artificial intelligence, able to match or overtake human intelligence, is still far beyond the horizon. Indeed, it may never come, but the endless triumphs of deep learning suggest it is best not to rule anything out.
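As promised above, here is a deliberately minimal sketch of the generator-versus-discriminator loop, written in Python with PyTorch (an assumption; the article names no framework). Instead of cat photos it learns to mimic a one-dimensional Gaussian, but the adversarial structure is the same: the discriminator is trained to separate real samples from generated ones, and the generator is trained to fool it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to fake samples; discriminator outputs the probability
# that a sample is real. Both are deliberately tiny networks.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0   # the "real" world: N(2.0, 0.5)

def noise(n):
    return torch.randn(n, 8)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator answer "real".
    loss_g = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(noise(1000)).detach()
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f}")
# Should drift toward the real distribution's mean of 2.0 and std of 0.5.
```

Swap the one-dimensional samples for images and the tiny networks for deep convolutional ones, and this same loop is what produces faces of people who do not exist.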

