Far-flung fears of AI weaponry and superintelligence come from big names in science, tech

Skynet. HAL 9000. The Matrix. Joshua, the computer from WarGames. It’s not hard to look around popular culture and find examples of artificial intelligence (AI) stirring doomsday fears, ranging from a general unease about AI’s dangers to the actual targeting of humans by AI weapons. Of course, there’s also a bright side to artificial intelligence, as exemplified by such entertainment stars as R2-D2, Wall-E, Rosie from The Jetsons and KITT from Knight Rider.

Here on IPWatchdog, we’ve discussed in the past both the doomsday fears and the futuristic utopias that many have suggested relative to robotics innovations. Clearly technology inspires, both for good and ill. Recently, some leaders of the scientific community and high tech industries have gone on record decrying the potential ethical pitfalls posed by AI. Elon Musk, Steve Wozniak and Stephen Hawking, along with several faculty members from academic institutions like Oxford and the Massachusetts Institute of Technology, are among the hundreds of signatories to an open letter published by the Future of Life Institute, which calls for “concrete research directions” to pursue to ensure that AI remains a social benefit. Bill Gates has also said publicly on Reddit that he is “in the camp that is concerned about super intelligence.”

The weaponization of AI is one area of technology development from which much of this fear seems to extend. Musk, Hawking and Wozniak, along with Noam Chomsky, have all signed another Future of Life Institute open letter on autonomous weaponry, which warns: “If any major military power pushes ahead with AI weapons development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.” The development of autonomous weapons is not as far-fetched as it may seem to some readers. During one of our many profiles of Samsung Electronics Co. (KRX:005930), we picked up on a news piece about a robotic sentry gun developed by that Korean electronics conglomerate. The robotic sentry, which is armed with a machine gun and a grenade launcher, is currently installed near the Korean Demilitarized Zone (DMZ).

The cold and calculating effects of superintelligence, an artificial intelligence on a level far beyond normal human intelligence, offer even more cause for concern to some. Robots operate according to certain defined rules, but superintelligence could allow a machine to pursue the completion of a job to the point that it becomes socially detrimental. One potential example proffered by Swedish philosopher Nick Bostrom imagines a superintelligent paper clip maximizer tasked with making one million paper clips. Because it can reassess its task, check its work and perform other functions in support of its main goal, the superintelligent machine could continue to consume resources in perpetuity. Bostrom and others have even wondered what would happen if a robot decided that humans were simply an obstacle to a task. Anyone who has seen the 2004 movie I, Robot will be familiar with some of this argument.
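To make the thought experiment concrete, here is a minimal, purely illustrative sketch in Python. The numbers and the confidence formula are our own assumptions, not Bostrom’s: the agent can always spend one more action re-verifying its work, its confidence only approaches certainty asymptotically, and so the “finished” state is never provably reached and resource consumption grows without bound.

```python
# Toy sketch of the paper clip maximizer (assumed numbers and formula,
# not from Bostrom's writing). The agent is told to make 1,000,000 clips,
# but it may also re-verify its own work, and each verification pass only
# pushes its confidence toward certainty asymptotically.

TARGET = 1_000_000

def expected_clips(clips_made: int, checks: int) -> float:
    """Confidence-weighted estimate of clips actually made.

    Confidence approaches, but never reaches, 1.0 as checks grow,
    so the estimate never quite equals the true count.
    """
    confidence = checks / (checks + 1.0)
    return clips_made * confidence

clips, checks, resources = 0, 0, 0
for _ in range(1_100_000):                  # cap the demo so it halts
    if expected_clips(clips, checks) >= TARGET:
        break                               # never triggers: confidence < 1.0
    if clips < TARGET:
        clips += 1                          # make another paper clip
    else:
        checks += 1                         # goal "done"? verify again, to be safe
    resources += 1                          # every action consumes resources

print(f"clips={clips}, checks={checks}, resources spent={resources}")
# Raise the cap and the checks (and resource use) grow without bound,
# because one more action always buys the agent a little more confidence.
```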

On the other side, while it’s tough to take a stance against proactive measures to protect humans from smarter machines, those fears wouldn’t be realized for many, many years yet, if they come to pass at all. A February 2015 piece published in the MIT Technology Review states that we don’t even have “a clear path to how [general-purpose artificial intelligence] could be achieved.” And that is in a world where we already have such robust computing technologies as IBM Watson’s cognitive computing and natural language question-and-answer abilities. Where patents are concerned, our readers may be interested to note that the top three U.S. patent portfolios in artificial intelligence belong to Microsoft (NASDAQ:MSFT) (1,069 patents), IBM (NYSE:IBM) (668 patents) and Google (NASDAQ:GOOG) (420 patents), according to statistics collected from Innography.

Many artificial intelligence researchers themselves say there is no reason to panic, and they may have a good point. Science and technology writer Erik Sofge wrote a piece published by Popular Science which quotes a series of top voices from the AI world who essentially say that, even if there were cause for worry, we won’t need to start having those discussions for a long time. Yoshua Bengio, a top deep learning researcher working at the University of Montreal, remarked to Sofge that “we would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far from even that.” It’s also important to point out that, despite their reputations as brilliant minds, neither Hawking, a cosmologist and theoretical physicist, nor Musk, who is focused on alternative energy and space exploration, has any real background in AI and machine learning. Wozniak’s work with personal computers might overlap somewhat, but he could just as easily have as poor an understanding of the field as the other two; artificial intelligence development poses very specific questions that are very difficult to answer. It could be that all three of these brilliant minds are rank beginners in this debate. (As we’ve pointed out elsewhere, this wouldn’t be the first time that the Luddite within Elon Musk came out to decry something on baseless grounds.)

It’s a little strange that we’d be brought to such a political moment by this cadre of seemingly benevolent minds, especially without an extant problem truly presenting itself. One explanation could be cultural. There is a real preponderance of negative messages regarding artificial intelligence in Western popular culture. We’ve had a few innocuous examples like Rosie the Maid, but we’ve seen plenty of the wasteland devastation wrought by machines in the Terminator and Matrix motion picture series. Meanwhile, in Japan, both pop culture and consumer markets are much friendlier to AI and robotics in general, and intelligent machines are seen in more utopian terms. The Japanese government is a leading funder of artificial intelligence-related technologies, and more than a quarter million industrial robots are in use in that country, making its workforce the world’s most highly roboticized.

With all the fearmongering doled out in recent months, a growing concern among those who feel positively about AI’s development is that misunderstandings in the mainstream media could wind up leading to cuts in research funding. America has been lagging in the global AI development race since ceding its lead in research efforts to Japan during the 1980s. There is not a great deal of funding for artificial intelligence available in this country, but some funding programs are coordinated by organizations such as the National Science Foundation and the Paul G. Allen Family Foundation. Interestingly, one increase in AI research funding stems from the growing tide of doomsday fears: Elon Musk recently donated $10 million to the Future of Life Institute to fund 37 research projects focused on artificial intelligence, including work on ensuring that AI properly explains its actions to humans and programs for aligning AI with human interests.

A very wise man once said, “The only thing we have to fear is fear itself.” Interestingly, in this case fear might actually be spurring some productive motion toward a quicker maturation of artificial intelligence, even if it’s only shaving a short sliver of time off of a waiting period that will still last decades.

3 comments so far.

  • Vlad the Skewerer
    August 9, 2015 06:44 pm

    My fear is more that humans and robotic weapons will be linked together, and that the processing power of the human brain, the speed of thought, will guide semi-autonomous robots: one person controlling an army of drones from a heads-up display, drones that could be all over the world.

  • Anon
    August 6, 2015 07:47 am

    Benny,

    There is no such thing as not living under the threat of politically motivated violence.

    Whether or not we permit ourselves to realize this is quite a different matter.

    The phrase “fat, dumb and happy” applies fully to the condition that you describe.

  • Benny
    August 6, 2015 05:18 am

    What’s new? In the 1980s, the Navy’s Phalanx CIWS shipboard defence system had a “shoot down anything picked up on radar as soon as it comes in range and don’t bother asking permission first” mode. (It did sound a little bell just before opening fire, though.) The logic behind that (and other autonomous weapons) was that by the time you get permission to open fire, it’s too late to be effective. Even an electric fence is a form of autonomous weapon. Those opposing autonomous (or any other kind of) weapons generally enjoy the luxury of not living under the threat of politically motivated violence.