Bill Gates is the latest prominent figure from the technology industry to express concern about the future evolution of artificial intelligence, although he thinks it will be “decades” before super-intelligent machines pose a threat to humans.
He joins Elon Musk and Stephen Hawking in suggesting that the march of AI could be an existential threat to humans. The former Microsoft boss gave his opinion during his latest Ask Me Anything (AMA) session on the social news site Reddit.
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well,” wrote Gates.
“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Musk spoke out in October 2014 during an interview at MIT’s AeroAstro Centennial Symposium, telling students that the technology industry should be thinking hard about how it approaches AI advances in the future.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
‘AI doomsday scenarios belong more in the realm of science fiction’
In a December interview, Professor Hawking went further. “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
The future risks of artificial intelligence are being discussed widely and publicly within the technology industry, and even researchers who dismiss warnings about machines extinguishing the human race as nonsense are alive to the need to keep exploring the risks.
“AI doomsday scenarios belong more in the realm of science fiction than science fact. However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems,” wrote Eric Horvitz, director of the Microsoft Research lab, and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, in an article published earlier in January.
That post outlined three key risks around artificial intelligence: programming errors in AI software; cyber-attacks on AI systems by criminals, terrorists and government-backed hackers; and so-called Sorcerer’s Apprentice scenarios, when AI systems respond to human instructions in unexpected (and possibly dangerous) ways.
“Each of the three important risks outlined above... is being addressed by current research, but greater efforts are needed,” wrote Horvitz and Dietterich, calling for more collaboration and funding to explore the challenges. “We must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.”
‘Technology is not making people less intelligent’
During his AMA interview, Gates also talked about his work on a “personal agent” technology within Microsoft that will “remember everything and help you go back and find things and help you pick what things to pay attention to... it will work across all your devices”.
He also described the bitcoin cryptocurrency as “exciting” but said it wasn’t currently viable for use in the developing world. “For our [Bill and Melinda Gates] Foundation work we are doing digital currency to help the poor get banking services.
“We don’t use bitcoin specifically for two reasons,” he wrote. “One is that the poor shouldn’t have a currency whose value goes up and down a lot compared to their local currency. Second is that if a mistake is made in who you pay then you need to be able to reverse it so anonymity wouldn’t work.”
Gates was also asked whether technology “has made the masses less intelligent”. He replied: “Technology is not making people less intelligent. Technology is letting people get their questions answered better so they stay more curious. It makes it easier to know a lot of topics which turns out to be pretty important to contribute to solving complex problems.”
This article was written by Stuart Dredge for theguardian.com on Thursday 29 January 2015, 12.57 Europe/London. guardian.co.uk © Guardian News and Media Limited 2010