Earlier this month, an open letter about the future of artificial intelligence (AI), signed by a number of high-profile scientists and entrepreneurs, spurred a new round of harrowing headlines, such as “Top Scientists Have an Ominous Warning About Artificial Intelligence” and “Artificial Intelligence Experts Pledge to Protect Humanity from Machines.”

[Illustration: rise of the robots]

Let’s get one thing straight: A world in which humans are enslaved or destroyed by superintelligent machines of our own creation is purely science fiction. Like every other technology, AI has risks and benefits, but we cannot let fear dominate the conversation or guide AI research.

The notion of an “intelligence explosion,” in which machines turn their intelligence to improving their own design and rapidly outstrip their creators, arises from Moore’s Law, the observation that the speed of computers has been increasing exponentially since the 1950s. It’s a leap, however, to go from this idea to unchecked growth of machine intelligence.
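To make that curve concrete, here is a minimal sketch of idealized exponential doubling. The two-year doubling period and the 1971 baseline (the Intel 4004’s roughly 2,300 transistors) are illustrative assumptions, and Moore’s Law strictly concerns transistor counts rather than speed; real hardware has never tracked the trend this cleanly.

```python
# A toy model of Moore's Law, not a forecast. Assumes a clean two-year
# doubling period and the Intel 4004 (1971, ~2,300 transistors) as baseline.
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Transistor count per chip under idealized exponential scaling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```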

First, ingenuity is not the sole bottleneck to developing faster computers. The machines need to be built, which requires real-world resources. Indeed, Moore’s Law comes with exponentially increasing production costs as well—mass production of precision electronics does not come cheap. Further, there are fundamental physical laws—quantum limits—that restrict how quickly a transistor can do its work. Non-silicon technologies may overcome those limits, but such devices remain highly speculative.

In addition to physical laws, we know a lot about the fundamental nature of computation and its limits. For example, some computational puzzles, such as figuring out how to factor a number and thereby crack online cryptography schemes, are generally believed to be unsolvable by any fast program. They are part of a class of mathematically defined problems that have resisted any attempt at scalable solution. Most computational problems that we associate with human intelligence are known to be in this class.
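To see why brute force is hopeless here, consider the most naive factoring method: testing every candidate divisor. The sketch below only illustrates the scaling argument; the numbers are toy-sized, real cryptographic keys involve numbers hundreds of digits long, and serious attacks use cleverer (but still not fast) algorithms.

```python
import math

def trial_division(n):
    """Naive factoring: test every candidate divisor up to sqrt(n).
    The worst-case work grows like 10**(digits/2), i.e., exponentially
    in the length of the number written out."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return n, 1  # no divisor found: n is prime

print(trial_division(600_851_475_143))  # (71, 8462696833)

# Each extra pair of digits multiplies the worst case by roughly ten:
for digits in (10, 20, 40):
    print(f"{digits}-digit number: up to ~{10 ** (digits // 2):.0e} divisions")
```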

Wait a second, you might say. How does the human mind manage to solve mathematical problems that computer scientists believe can’t be solved? We don’t. By and large, we cheat. We build a cartoonish mental model of the elements of the world that we’re interested in and then probe the behavior of this invented miniworld. Our ability to propose and ponder and project credible futures comes at the cost of accuracy. It is a logical impossibility that even a superintelligent machine could accurately simulate reality faster than reality itself.
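To illustrate the kind of cheating involved, think of how a physics student predicts where a thrown ball will land: not by simulating every air molecule, but with a cartoon world that ignores drag, spin, and wind. The sketch below is exactly that textbook simplification; a real ball lands somewhat shorter.

```python
import math

def toy_range(speed, angle_deg, g=9.81):
    """Landing distance of a thrown ball in a drag-free cartoon world.
    Fast to compute and useful, but deliberately inaccurate: air
    resistance, spin, and wind are all ignored."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

print(f"Predicted range: {toy_range(20, 45):.1f} m")  # ~40.8 m; a real ball falls short
```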

To be clear, there are indeed concerns about the near-term future of AI—algorithmic Wall Street traders crashing the economy or sensitive power grids shutting down electricity for large swaths of the population. There’s also a concern that systemic biases within academia and industry prevent underrepresented minorities from participating. These worries should play a central role in the development and deployment of new ideas.

But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Michael Littman is a professor of computer science and co-leader of Brown’s Humanity-Centered Robotics Initiative. This article is excerpted from the website livescience.com.

Illustration by Tim Cook
Comments (1)
06/15/15
 
Professor Littman neglects the possibility that a superintelligent AI could also cheat, and cheat much better than humans at that. The computer programs that beat me at chess don't do anything silly like solve NP-complete problems or accurately simulate reality faster than reality itself: they simply ponder and project credible futures.
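The commenter’s point can be made concrete. Below is a minimal sketch of bounded lookahead on a made-up take-one-or-two-stones game (a player left with no move loses); the depth cutoff is the “cheat,” standing in for the heuristic evaluations real chess engines use. Nothing here reflects how any particular engine is implemented.

```python
def minimax(stones, depth, maximizing):
    """Bounded lookahead on a toy game: players alternately take 1 or 2
    stones, and whoever is left with no move loses. The depth cutoff is
    the 'cheat': beyond it we guess instead of solving."""
    moves = [m for m in (1, 2) if m <= stones]
    if not moves:                       # game over: the player to move loses
        return -1 if maximizing else 1
    if depth == 0:                      # out of lookahead: return a rough guess
        return 0
    scores = [minimax(stones - m, depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(7, depth=4, maximizing=True))  # 0: too shallow to see the outcome
print(minimax(7, depth=7, maximizing=True))  # 1: deep enough to find the forced win
```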
 