Deep in the heart of Imperial College, London, a computer is learning how to play Pac-Man. Like many humans, it struggles to get the hang of the classic 1980s video game at first.
With time though, experience helps it decide which manoeuvres will allow it to evade the clutches of a relentless gang of animated ghosts.
The point of teaching a computer to master Pac-Man is to help it “think” and learn like a human. That is a prospect not everyone feels comfortable with. Fears have been voiced by scientists as eminent as Professor Stephen Hawking that computers could become so clever that they turn against their makers.
Murray Shanahan, professor of cognitive robotics at Imperial, believes that while we should be thinking hard about the moral and ethical ramifications of AI, computers are still decades away from developing the sort of abilities they’d need to enslave or eliminate humankind, bringing Hawking’s worst fears to reality. One reason for this is that while early artificial intelligence systems can learn, they do so only falteringly.
For instance, a human who picks up one bottle of water will have a good idea of how to pick up others of different shapes and sizes. But a humanoid robot using an AI system would need a huge amount of data about every bottle on the market. Without that, it would achieve little more than getting the floor wet.
Using video games as their testing ground, Shanahan and his students want to develop systems that don’t rely on an exhaustive and time-consuming process of trial and error – for instance, going through every iteration of lifting a water bottle in order to perfect the action – to improve their understanding.
They are building on techniques developed at DeepMind, the British AI startup sold to Google in 2014 for a reported £400m. DeepMind’s systems were also trained using computer games, which they eventually learned to play to a “superhuman” level, and its programs are now able to play – and defeat – professional players of the Chinese board game Go.
Shanahan believes the research of his students will help create systems that are even smarter than DeepMind.
Both DeepMind and its successors involve “deep reinforcement learning” – giving computers the tools to draw conclusions based on large amounts of data, in the way that humans make assumptions based on experience. The potential applications are vast, from helping doctors diagnose patients to spotting faults in infrastructure such as transport networks – and other uses that even its inventors are yet to conceive of.
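The core idea behind reinforcement learning can be sketched in a few lines. Below is a minimal, illustrative example of tabular Q-learning – the simple precursor of the “deep” variant, in which a neural network replaces the lookup table. The toy corridor world, parameter values and reward scheme are all invented here for illustration; they are not DeepMind’s system.

```python
import random

# Tabular Q-learning sketch (illustrative only): an agent learns by trial
# and error which action to take in each cell of a tiny corridor world.
# Deep reinforcement learning replaces this lookup table with a neural net.

N_STATES = 5          # corridor cells 0..4; the reward sits at cell 4
ACTIONS = [1, -1]     # step right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit current knowledge, sometimes explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # (reward + discounted best future value)
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the agent prefers moving right in every interior state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told the rules; it simply discovers, through repeated play, which moves lead to reward – the same principle, scaled up enormously, behind learning to dodge Pac-Man’s ghosts.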
But measuring progress in AI is not easy. The layperson usually cites the Turing test, developed by Bletchley Park codebreaker Alan Turing in 1950. It focuses on whether a computer can convince a human in a blind test that they are talking to another human. But that test, says Shanahan, is more about “tricking” people through mimicry than developing AI genuinely capable of learning.
Nor does AI come down to the abilities of one machine in isolation. In another corner of the labyrinthine Imperial campus, researchers are working on a very different piece of the puzzle.
Aerial robotics lecturer Dr Mirko Kovac and PhD student Talib Alhinai recently emerged triumphant from Drones for Good, the closest thing there is to a World Cup of drones. Unmanned aerial vehicles (UAVs), to give them their less sinister title, are controlled by humans so do not, in themselves, constitute AI. But Kovac says his drones could form part of whole AI towns, where basic services are performed by a web of AI-driven systems.
His team’s most recent design was a UAV capable of identifying a leak in a gas or oil pipe and plugging it with polyurethane foam, which could spare a human engineer the time, effort and danger of carrying out the repair.
A drone plugged into an AI network, he says, could in theory spot someone having a heart attack and call an ambulance. Kovac and his team have developed a valuable patent portfolio that could become a tasty morsel for a corporate giant investing in future technologies. With British universities producing this level of talent, it is no surprise that the DeepMind deal has been followed by further evidence that an AI industry is flourishing from an academic base.
As well as the DeepMind takeover, London’s place at the intersection of business and academia has been highlighted by Microsoft’s $250m (£177m) takeover of predictive keyboard app SwiftKey, which started life at University College London. The app’s ability to predict users’ next word – based on analysis of their writing style – has proved a worldwide hit. Uses for AI in big business – and therefore the potential for investment – are significant: a recent report by Bank of America Merrill Lynch estimated that the AI industry will be worth $70bn by 2020.
Only last week, Royal Bank of Scotland unveiled Luvo, an AI system that will help call-centre staff answer customers’ questions more quickly and efficiently. And for businesses looking to take advantage of new technology, London colleges such as Imperial and UCL – coupled with Oxford and Cambridge – offer a trove of talent and ideas.
This burgeoning network of academic excellence has attracted some of the world’s brightest minds, all keen to be part of an environment reminiscent of San Francisco’s web startup hub. “This is a scene where everyone knows each other. You can’t help being caught up in the excitement of it,” says Shanahan.
His PhD student Marta Garnelo is a regular attendee at London.AI, a weekly event where enthusiasts come for seminars and talks given by experts, followed by beer and pizza. London.AI was founded by Alex Flamant and John Henderson, both of whom are involved in identifying startups with the potential to be the next big thing. Tickets cost £5 and all proceeds go to Code Club, a nationwide network of volunteer-led after-school coding clubs for children aged 9-11.
Flamant says there are usually a few corporate talent-spotters at such events, which are the ideal place for talented young people to show off their skills. He is about to join venture capital firm Notion Capital, where he will specialise in identifying the next big thing in AI.
“There’s nothing like London. If you have an idea and you want to get it funded, London’s the best place,” he says. “The grey matter is here, the money is here, the young passionate entrepreneurs from all over Europe are here. The legend of London.AI is that if you go there, you get acquired shortly after.”
And just as Silicon Valley attracts the best talent from around the world, London’s AI ingenues have a global pedigree. Imperial’s students come from countries such as Greece, the UAE, Thailand, Spain and Iran, signalling the appeal London now has as an academic centre of excellence in this field.
But what is striking about these students is their awareness that their projects could one day become multimillion-dollar business propositions.
“We’re part of something bigger than academia. We’re close to the market and we can interact with industry,” says Iranian post-doctoral student Feryal Mehraban Pour Behbahani. “It gives young people with ideas the feeling that they can pursue them. Now there’s a momentum that wasn’t there a couple of years ago.”
Fellow post-doctoral student Anastasia Sylaidi, from Greece, agrees that the capital is the hot place to be for AI.
“London is a startup hub and it’s interesting to be exposed to what’s happening in industry while you’re working in research.”
These highly intelligent and articulate students are not here because they expect to become multimillionaires. But it’s hard to escape the feeling – in the wake of DeepMind and SwiftKey – that if they want to, the door is open.
One reason corporate behemoths are willing to spend so much money on artificial intelligence is that the global talent pool is still relatively limited. London has proved a particularly good hunting ground for Silicon Valley stalwarts ready to spend big on the most promising AI inventions and, most importantly, on the people who came up with them.
DEEPMIND When Google spent £400m on machine-learning startup DeepMind, it was a ringing endorsement of the wealth of talent in London’s artificial intelligence scene.
The firm was founded in 2010 by chess prodigy and neuroscientist Demis Hassabis with University College London colleague Shane Legg and Mustafa Suleyman. They are said to have turned down an offer from Facebook before agreeing to the Google deal, which was reportedly overseen personally by the company’s then chief executive, Larry Page.
For Google, the deal was as much about acquiring the most talented brains in artificial intelligence as getting its hands on DeepMind’s technology. DeepMind is about reinforcement learning, or teaching computers to learn skills at the speed a human can. Google thinks this technology will become central to our lives.
DeepMind’s creators would show it classic computer games, then find ways to help it learn to play them more quickly. Last October, DeepMind’s AlphaGo program became the first to defeat a professional player at Go, the traditional Chinese board game: it defeated European champion Fan Hui 5-0. This week it will take on Lee Sedol, who has been the world’s top Go player for a decade.
SWIFTKEY The other big takeover of a British AI firm was Microsoft paying $250m for mobile-phone keyboard creator SwiftKey. Jon Reynolds and Ben Medlock, who founded the firm in 2008, reportedly pocketed $30m each.
The price tag was extraordinary for a company that had just reported a fall in revenues, from £9.9m to £8.4m, after making its app free. But the appeal for Microsoft lay in integrating SwiftKey’s predictive technology with its own Word Flow keyboard app, and it was prepared to pay top dollar for the privilege.
SwiftKey is more than just an alternative keyboard: it uses high-quality predictive text, based on artificial intelligence, to suggest the word a user will type next, having analysed their writing style. The keyboard supports more than 100 languages and has been used by astrophysicist Stephen Hawking, for whom the company built a special tool to assist him in giving lectures.
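The statistical idea underlying next-word prediction can be illustrated with a toy bigram model: count which word most often follows each word in a user’s past writing, then suggest the most frequent follower. This sketch, with invented sample text, only demonstrates the basic principle – SwiftKey’s actual models are far more sophisticated.

```python
from collections import Counter, defaultdict

# Toy bigram next-word predictor (illustrative only): learn, from a user's
# past writing, which word most often follows each word, then suggest it.

def build_model(text):
    """Map each word to a Counter of the words that follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def suggest(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Hypothetical writing history for one user.
history = "see you soon . see you tomorrow . see you soon"
model = build_model(history)
print(suggest(model, "see"))   # "you" follows "see" in every example
print(suggest(model, "you"))   # "soon" outnumbers "tomorrow", so it wins
```

Because the model is built from the user’s own text, two users with different writing styles get different suggestions for the same word – the personalisation the article describes, in miniature.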
AI ON THE BIG SCREEN
According to researchers at Imperial College, one of the most realistic cinema portrayals of artificial intelligence is the 2015 film Ex Machina, written and directed by Alex Garland. The film charts the efforts of a young programmer to assess the abilities of a humanoid AI system built by an eccentric scientist.
What’s different about Ex Machina, they say, is that it offers a relatively accurate depiction of the long and laborious process of building and tweaking a robot, with the techniques and processes researchers are using today.
AI is the process of building a machine that can learn, and replicate, human behaviour – Ex Machina details this in all its frustration and, admittedly, existential horror.
AI has been around for a while in Hollywood: the prime example of film’s obsession with machines that think is Hal, the computer in 2001: A Space Odyssey. The film, adapted from the novel by Arthur C Clarke, raised the prospect of humans being supplanted by machine intelligence, with the line, “I’m sorry Dave, I’m afraid I can’t do that,” summing up a new human insecurity as Hal refused to open the pod bay doors.
The notion of an uncooperative robot was explored more comically in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, which was also adapted for the big screen. Adams gave us Marvin, a robot with “a brain the size of a planet”, who also happened to be depressed.
In 1999, The Matrix toyed with the difference between human and machine intelligence by posing the hypothetical question of whether human reality is just a construct built by machines to keep us quiet.
And 2013’s Her explored the potential endgame of human interaction with machines, with its hero falling in love with a hyper-intelligent operating system.
guardian.co.uk © Guardian News and Media Limited 2010