Unregulated AI Can Be a Danger to Humanity, Elon Musk Says
He wants action before robots are heading down the street killing people.
Elon Musk works at the cutting edge of technology. From SpaceX to Tesla to a proposed hyperloop transit system, Musk keeps a hand in projects meant to both make life easier and benefit the planet. There is one area, however, that he thinks needs immediate government regulation: artificial intelligence.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told the National Governors Association this past weekend. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
AI is making incredible leaps, from Google's AlphaGo beating a human grand master at the strategy board game Go to Facebook's chatbots communicating in a shorthand language of their own. There is currently no regulation limiting that expansion. Musk is looking further down the road, though, to where super-intelligent AIs could become a problem.
Current work on narrow, "stupid" AI could eventually lead to what the AI community calls "artificial general intelligence," along the lines of the AI villains of science-fiction film, such as Skynet from The Terminator and VIKI from I, Robot.
"AI is a rare case where we need to be proactive about regulation instead of reactive," Musk said, "because I think by the time we are reactive in AI regulation, it’s too late.” He said that government stepping in after “a whole bunch of bad things happen” won't work because AI represents “a fundamental risk to the existence of civilization.”
David Ha, a researcher on the Google Brain team, disagreed with Musk to a degree on Twitter, seeing more of a threat in AI being used to "mask unethical human activities."
Check out Musk's full talk below. His comments on AI start around the 48-minute mark.
John Keefer posted a new article, Unregulated AI Can Be a Danger to Humanity, Elon Musk Says
If he wants to be taken seriously, he needs to offer specifics. What particular company or project is building a full general AI that is at risk of escaping the control of its creators? I don't think any such product exists today. And if not, then how are we going to regulate it? What law is he proposing be passed?