The Perse School

Artificial intelligence: our final invention?

Is it ‘Hasta la vista, baby’ for humans, now that computers can be trained to learn?  Or will intelligent machines save, rather than take, lives?

That was the fascinating question that Professor Chris Bishop, Lab Director of Microsoft Research here in Cambridge, sought to answer in his lecture on Artificial Intelligence (AI), one of a series of community lectures at The Perse to mark our 400th anniversary.

Professor Bishop started by giving the audience a short introduction to the origins of this area of science, beginning with Alan Turing, an early proponent of AI, who conceptualised machines with the capabilities of the human brain. In the 1950s, AI experimentation was limited to writing code that would enable computers to ‘understand’ a problem – such as a question typed by the user – and offer a response by following precise instructions built from binary calculations. Some of the results resemble today’s ‘chatbots’, used in customer service. At this stage in AI’s history, its future did not look impressive – it was slow and its answers were often baffling in their irrelevance – and it fell out of general favour.

In 1997, the IBM Deep Blue computer’s defeat of world chess champion Garry Kasparov was impressive – the game is, after all, often considered the pinnacle of human thinking. However, while the computer could be taught to play first-class chess, it lacked the key advantage of the human brain: its adaptability. Chess was all that the program ‘knew’, and it could not apply its intelligence to any other game.

Advances in adaptability had quietly been made in the preceding decades, in the field of neural networks. Neurons, and the synapses that connect them, are what allow our brains to perform their “millions of miracles of information processing”. Not all neurons fire all the time – a simple equation describes when a neuron will fire and when it will rest: broadly, it fires when the weighted sum of its inputs crosses a threshold. In the late 1950s Frank Rosenblatt’s ‘Perceptron’ machine, with its random wiring and motors akin to synapses, mimicked the human brain through its ability to change its own wiring, as the sketch below illustrates. The Perceptron could solve certain simple problems, such as distinguishing between shapes, but not other equally simple problems, and certainly not complex ones.
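For readers who want to see the idea in code, here is a minimal sketch in Python – purely illustrative, modelling the firing rule rather than Rosenblatt’s actual hardware. The ‘neuron’ fires when the weighted sum of its inputs exceeds a threshold, and it learns by nudging its weights after each mistake:

    # A minimal perceptron: fires (outputs 1) when the weighted sum of
    # its inputs plus a bias exceeds zero, otherwise rests (outputs 0).
    # Illustrative only, not a model of Rosenblatt's original machine.

    def fire(weights, bias, inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total + bias > 0 else 0

    def train(examples, epochs=20, rate=0.1):
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for inputs, target in examples:
                # Nudge the weights in proportion to the error.
                error = target - fire(weights, bias, inputs)
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # The logical AND function: one of the 'simple problems' a
    # perceptron can solve.
    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train(AND)
    print([fire(w, b, x) for x, _ in AND])   # prints [0, 0, 0, 1]

Trained on AND, this converges within a few passes; trained on XOR – the textbook example of an ‘equally simple’ problem a single-layer perceptron cannot solve – it never does, because no straight line separates the two classes. That limitation is precisely what multi-layered networks later overcame.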

A breakthrough came with the application of the understanding that, like a child soaking up language, systems learn with time and a great deal of data. And, like the multi-layered brain, neural networks with two layers work better than those with one. Microsoft’s Kinect sensor for the Xbox, which uses a related technology called a decision forest, was trained to track the movements of a controller-free player by learning to recognise 31 different body parts from a million example poses. During play, the data the Kinect receives from its camera and infra-red sensor represents the environment not as a flat image, but as infra-red dots arranged in 3D. The difference between the observed and expected dot positions is used to calculate the depth at each pixel, and each pixel is in turn classified as a body part or as background.
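As a rough illustration of per-pixel classification with a decision forest – the features, labels and library here are stand-ins of our own choosing, not Microsoft’s actual pipeline – an off-the-shelf random forest can be trained to assign each pixel one of 31 ‘body part’ labels from a vector of depth-based features:

    # Toy per-pixel body-part classification with a decision forest.
    # Synthetic stand-in data: the real Kinect system learned from
    # depth-comparison features computed over about a million poses.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_pixels, n_parts = 5000, 31

    # Pretend each pixel is described by 31 depth-difference features,
    # and (as a made-up labelling rule, so the example runs end to end)
    # its 'body part' is the index of its strongest feature.
    X = rng.normal(size=(n_pixels, n_parts))
    y = X.argmax(axis=1)

    forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(forest.predict(X[:5]))   # one predicted body-part label per pixel

A forest of many randomised decision trees votes on each pixel independently, which is part of what made the classification fast enough to run in real time during play.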

Probability underpins modern approaches to machine learning. Claude Shannon wrote “Information is the degree of surprise”. When a machine receives surprising information it revises its prediction; when the information is unsurprising it merely confirms what the machine already expects, and the prediction stands. ‘Collaborative filtering’ – the process by which film buffs receive tailored recommendations – is this learning in practice: the more films a user picks and rates, the more data is available to the machine and the more confident it becomes in its probabilities (and the more appropriate the recommendations).
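A tiny sketch of the idea (with hypothetical films, ratings and a similarity measure of our choosing – real services use far more sophisticated variants) predicts a user’s rating for an unseen film as a similarity-weighted average of other users’ ratings:

    # Minimal user-based collaborative filtering on a made-up ratings
    # matrix (rows = users, columns = films, 0 = not yet rated).
    import numpy as np

    ratings = np.array([
        [5, 4, 0, 1],   # user 0
        [4, 5, 1, 0],   # user 1 (similar tastes to user 0)
        [1, 0, 5, 4],   # user 2 (opposite tastes)
    ], dtype=float)

    def similarity(a, b):
        # Cosine similarity over the films both users have rated.
        mask = (a > 0) & (b > 0)
        if not mask.any():
            return 0.0
        return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

    def predict(user, film):
        # Similarity-weighted average of other users' ratings for this film.
        weights, total = 0.0, 0.0
        for other in range(len(ratings)):
            if other == user or ratings[other, film] == 0:
                continue
            s = similarity(ratings[user], ratings[other])
            weights += s
            total += s * ratings[other, film]
        return total / weights if weights else 0.0

    print(predict(0, 2))   # user 0's predicted rating for film 2

With only a handful of ratings the similarity estimates are noisy; as a user rates more films, the weights – and hence the predictions – become more reliable, which is exactly the growing confidence Professor Bishop described.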

Artificial intelligence is currently a hot topic. The BBC recently devoted an entire week of programming to it, and luminaries like Stephen Hawking have even warned that AI could destroy humanity. Suddenly everyone wants to know whether AI is friend or foe and, as the audience’s questions revealed, concerns are widespread – from the danger posed by criminal minds to the risk of job losses.

While acknowledging that “it won’t all be plain sailing”, Professor Bishop is confident that the benefits outweigh the risks: autonomous vehicles could save a million lives a year by preventing avoidable accidents, and data could be used to personalise medicine. He asked the audience to imagine a time when machines carried out the tedious and time-consuming everyday tasks, leaving humans more time to pursue their interests.

The technology can help us not only to outsource and organise tasks, but also to connect with others. Deep neural networks are helping computers to recognise, and even translate, speech, powering today’s digital assistants such as Cortana. Professor Bishop explained that Microsoft is committed to real-time translation in Skype which, while not yet perfect, will allow comprehensible conversation between speakers of different languages in the not-too-distant future.

The real risk, in his eyes, was that emotional and often ill-informed concerns about the technology might cause society to reject a potential force for good. Professor Bishop ended by urging “a reasoned, informed and rational debate based on evidence”. An intelligent one, then.

For details of upcoming 400th anniversary community lectures please see our 400th anniversary events. Our lecture series is delivered with the support of the Cambridge News.

 
 