Ethical, legal questions arise as scientists work to teach robots to feel pain


Brent Spiner, who played Lt. Commander Data, speaking at the 2016 San Diego Comic-Con International in San Diego, California. Photo by Gage Skidmore. CC BY-SA 3.0.

The question of whether robots will ever think like humans — or even surpass us in intelligence — has been asked ever since humankind started dabbling with robotics. In the years since, other questions have been raised: Will robots ever experience human emotions, attractions or love? Can they have a sense of humor? Can they feel empathy? “Star Trek” fans will recognize these questions as a constant source of inquiry surrounding Lieutenant Commander Data, the android Starfleet officer portrayed by Brent Spiner in “Star Trek: The Next Generation.” Ultimately, the pursuit of emotional understanding led to an “emotion chip” being implanted into Data’s positronic net.

But what about pain? Will robots ever be able to feel pain? While emotions might have been difficult for Mr. Data to control without affecting his efficiency, there are benefits that come from teaching robots about pain. Some scientists are working diligently to answer whether robots can be taught to feel pain, and their work raises other questions as well. How will robots respond to pain? Is this an ethical endeavor? What are the potential problems?

How Could Robots Feel Pain?

A team of researchers from Stanford University and Sapienza University of Rome programmed an arm-like robot of their own design to avoid collisions with humans and other obstacles approaching from different directions at different velocities. In Germany, another group is experimenting with an artificial nervous system for robots, which could teach machines how to feel pain as well as how to react to it. And at Leibniz University Hannover, researchers believe robots could use the sensation of pain as a form of protection in hazardous situations.

Robots with fingertip sensors that detect changes in contact pressure and temperature already exist, and their responses differ depending on the level of pressure or temperature they register. This is really no different from the way humans and animals use the same sensory feedback: pain is the neurological system’s way of helping us evade danger and avoid injury. A stimulus that induces similar evasive behavior in a robot could be thought of as teaching the robot to “feel pain.”
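
To make the idea concrete, here is a minimal sketch of such a reflex in Python. The thresholds, sensor values and action names are invented for illustration; none of this is drawn from the projects mentioned above.

```python
# A minimal sketch of a pain-like reflex for a robot arm: map raw
# pressure/temperature readings to a coarse "pain" level, then pick an
# evasive response. Thresholds and action names are illustrative only.

PRESSURE_LIMIT_N = 20.0      # contact force threshold, newtons (assumed)
TEMPERATURE_LIMIT_C = 60.0   # surface temperature threshold, Celsius (assumed)

def classify_stimulus(pressure_n: float, temperature_c: float) -> str:
    """Turn raw sensor readings into a coarse 'pain' level."""
    if pressure_n > 2 * PRESSURE_LIMIT_N or temperature_c > 2 * TEMPERATURE_LIMIT_C:
        return "severe"
    if pressure_n > PRESSURE_LIMIT_N or temperature_c > TEMPERATURE_LIMIT_C:
        return "moderate"
    return "none"

def reflex_step(pressure_n: float, temperature_c: float) -> str:
    """One control-loop tick: sense, classify, react."""
    level = classify_stimulus(pressure_n, temperature_c)
    if level == "severe":
        return "emergency_stop"   # protect the hardware immediately
    if level == "moderate":
        return "retract_slowly"   # ease away from the stimulus
    return "continue_task"

print(reflex_step(pressure_n=25.0, temperature_c=30.0))  # -> retract_slowly
```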

Thinking and Feeling Go Hand in Hand

If robots could become more humanlike in the way they respond to stimuli, they could potentially become even more efficient and safe as they go about their operations. There are patent filings for advanced robotics that include concepts for machines with true human intelligence that learn through experience — including through tactile feeling. Such machines would be capable of storing information and retrieving or modifying it in response to certain situations or tasks, much like a human brain. In essence, these machines would be able to learn from the past to predict the future.

One way robots could learn is through experiencing feelings of pain and pleasure, just as humans do throughout their lives. For example, a robotic car could be programmed to experience pain when it is hit with a rock, when someone slams one of its doors or when someone yells loudly. This was the invention disclosed in U.S. Patent Application No. 20080256008, titled “Human Artificial Intelligence Machine.” The application explains that this “feeling” of pain is created through the use of a loop recurring in a pathway: the robot is programmed with a simple loop in memory, which allows knowledge to be built upon recursively. There are many other practical applications for pain in robotics, and scientists are currently exploring some of those potential benefits.
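
The application itself describes this only in broad strokes, so the following Python sketch is a loose interpretation of the recurring-loop idea rather than the patent’s actual mechanism: each repetition of a “painful” stimulus feeds back into memory, so the stored aversion builds on past experience. The PainMemory class and its sensitization parameter are invented for illustration.

```python
from collections import defaultdict

class PainMemory:
    """Loose illustration of a recurring feedback loop: repeated 'painful'
    stimuli strengthen a stored aversion, so past experience shapes the
    response to future events. Not the patent's actual mechanism."""

    def __init__(self, sensitization: float = 0.3):
        self.aversion = defaultdict(float)   # stimulus -> learned aversion
        self.sensitization = sensitization   # how fast aversion builds

    def experience(self, stimulus: str, intensity: float) -> float:
        # The recurring loop: each occurrence feeds back into memory,
        # so the next instance of the same stimulus "hurts" more.
        self.aversion[stimulus] += self.sensitization * intensity
        return self.aversion[stimulus]

car = PainMemory()
for _ in range(3):
    felt = car.experience("door_slam", intensity=1.0)
print(f"aversion after three door slams: {felt:.1f}")  # grows with repetition
```

Run three times, the same door slam produces a steadily stronger stored aversion, which is the kind of recursive build-up the application gestures at.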

The Benefits of Robotic Pain

One common use of robots is navigating dangerous situations, in which robots perform tasks that would put a human worker at high risk of injury or death. A highly radioactive environment is one such example. If robots were able to experience pain, and to interpret this type of sensory data as a threat to their physical existence, they would be better able to protect themselves from harm and complete tasks more efficiently. To return to “Star Trek,” Lieutenant Commander Data was able to identify atmospheric and environmental threats to his well-being, even if he was forced to describe them with a machine’s characteristic detachment.

Interestingly, there’s also the possibility that pain sensors for robots could in turn protect humans. Robots and humans already work together in a variety of settings, and as human-robot interaction becomes more common in the modern workplace, accident prevention will become ever more important. Ideally, a robot would be able to detect an unforeseen disturbance, rate the potential damage it could cause, and then counter it in different ways depending on how it is classified, as in the sketch below. What we’re talking about here is anticipation and adaptation: qualities prized in humans, but ones that artificial life-forms have so far struggled to duplicate.
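
A sketch of that classify-then-react pattern might look like the following; the Disturbance fields, the damage-rating rule and the reaction table are all invented for illustration, not taken from any of the cited research.

```python
from dataclasses import dataclass

@dataclass
class Disturbance:
    """A detected, unplanned contact event (illustrative fields only)."""
    force_n: float       # measured contact force
    near_human: bool     # did proximity sensing flag a person?

def rate_damage(d: Disturbance) -> str:
    """Rate the potential harm of a disturbance (invented rule of thumb)."""
    if d.near_human and d.force_n > 5.0:
        return "critical"
    if d.force_n > 50.0:
        return "high"
    return "low"

# Different counter-reactions depending on how the disturbance is classified.
REACTIONS = {
    "critical": "stop_and_withdraw",   # a person may be in the contact zone
    "high":     "reduce_speed",        # protect the hardware and workpiece
    "low":      "log_and_continue",    # harmless bump; note it and move on
}

bump = Disturbance(force_n=12.0, near_human=True)
print(REACTIONS[rate_damage(bump)])   # -> stop_and_withdraw
```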

An Ethical Dilemma

Renowned science fiction writer Isaac Asimov developed the Three Laws of Robotics in the mid-20th century. They are as follows:

  1. A robot may not injure or allow injury to a human being.
  2. A robot must obey orders as long as they don’t conflict with the first law.
  3. A robot must protect its own existence as long as it doesn’t violate the first or second laws in doing so.

This code is a sound one, but it’s also incomplete, since it only really deals with the ethics of how robots should behave toward human beings and themselves. Moving forward, as robots begin to think and feel in new ways, a brand-new code of ethics will be necessary — one that spells out laws for how humans should behave toward their robotic creations, rather than the other way around. Would you feel comfortable with torturing a robot that could sense pain? What would it take for you to develop an emotional connection to a machine? Humanlike features or behaviors? A cute or soft exterior?

What we’re discussing here is nothing less than the framework for interspecies relations. That probably sounds far-fetched, but it won’t seem so fanciful once the first artificial lifeform passes the Turing Test a little more convincingly than anything we’ve seen so far. Even now, experiments have shown that, over time, people develop sympathy for their robot companions. People converse with social robots that display lifelike facial expressions, and many name their devices. Soldiers have even honored their robot helpers with medals or funerals for services that include stepping on landmines. The robots they work with aren’t remotely humanoid or personable, but they do risk themselves to keep their compatriots safe. That’s the kind of service that transcends physical differences, and it might just result in some unlikely feelings of attachment.

The Emergence of Robotic Rights

In the European Union, humankind’s emotional connection to robots has combined with the new robotic industrial revolution to make way for the beginnings of social rights for machines. The EU may adopt a draft plan to reclassify some machines as electronic persons, which would require new methods of taxation and new ways of thinking about Social Security and legal liability. Under the proposal, these machines would be able to trade currency, make copyright claims and even compel their owners to provide and pay into pensions. Quaint science-fiction outings from the ‘70s and ‘80s had us believing these were fanciful daydreams, fit only for surreal, allegorical storylines, but we’ve actually reached the point where these ethical questions need to be addressed in a real way.

Meanwhile, as automation takes hold like never before and threatens humankind with unprecedented unemployment, robots across Europe and around the world are quickly advancing to replace human workers on factory assembly lines, in healthcare settings, in surgical roles and even in the service and tourism industries. If, down the road, these robots become even more humanlike, it would only make sense to grant them at least some rights — and, in turn, to demand certain responsibilities in exchange. Would such developments simply be excessive bureaucracy that impedes progress in the field, or are they a necessary step to protect the well-being of our robots?

It’s too early to answer these questions fully. Nonetheless, we must consider them today, earnestly, as the world may get its first glimpse of uncannily human machines in the next few decades, as we continue the quest to create life in our own image, like one of the gods from our fables.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

21 comments so far.

  • Curious
    October 19, 2016 02:28 pm

    Benny wrote: “The bottom line is, how do humans program a robot to respond to stimulus? You could, for example, program a guided missile to disable its detonator if its sensors detect more than a certain number of human IR signatures within its blast zone, and then call it ‘empathy’. But this is the human programmer’s empathy, not the machine’s.”
    How are humans taught empathy? It isn’t an innate quality — it is taught. Kids say the cruelest things because many/most have a poorly developed sense of empathy. Just like kids need to be taught empathy, so should artificial intelligence.

    Empathy is both a luxury of and requirement of a modern society. In a primitive society (e.g., you eat what you kill), there is little need for empathy.

  • Anon
    October 18, 2016 05:51 pm

    Benny,

    Must the phrase appear to be pertinent?

    Are you aware of the phrase?

    Are you aware that the phrase IS pertinent even while not being mentioned?

    Lastly, do you really think playing coy with this concept advances the discussion?

  • Gene Quinn
    October 18, 2016 02:04 pm

    Benny-

    You are a funny guy. What about you being discredited by being ignorant on the issue of robotics?

    I love how when you are wrong about something you turn into a petulant child. But… but… but… LOL.

    You started by rather stupidly treating this article as a joke and lecturing everyone that the author doesn’t understand robotics because a robot brain is just a motherboard. Instead, of course, you proved you know nothing about robotics, electronics, computers, or really being human.

    Pain is all about a feedback loop, Benny. Computer software and, therefore, robotics are all about feedback loops. That the previous two statements needed to be said to someone who professes to be a technologist is ridiculous.

    I think you need to grow up. This article is fine. The fact that you don’t like it speaks far more about you than anything else.

    -Gene

  • Benny
    October 18, 2016 07:09 am

    The author references US application 20080256008. Did any of you read it? Claim 1 is pure scribble, suggesting a machine that can “predict the past with pinpoint accuracy,” will “universilize pathways in optimal pathway” and use a “3 dimensional memory,” among other technically incomprehensible buzzwords. The specification is not enabling. And don’t just take it from me – the examiner couldn’t make head or tail of it either. What about predicting the future using a time machine (claim 4)? Not that time machines are unknown to the inventor – the same inventor also filed an application titled “a practical time machine”. Mentioning this application as an example of robotic technology discredits both the article and its author.

  • Benny
    October 18, 2016 06:48 am

    Anon,
    the phrase “the Singularity” doesn’t appear in the article.

  • Anon
    October 18, 2016 06:44 am

    Benny,

    You continue to evade the point underlying the discussion: the Singularity.

  • Benny
    October 18, 2016 04:06 am

    Curious,
    The terms good and bad are subjective. Computer software/hardware is a stimulus/response machine – the response is programmed as a function of the stimulus. Adding a positive feedback loop, where the response is altered if it creates a further stimulus, is just a variation on the theme – a form of machine learning, but nothing to do with emotion. Yes, the classic nerve response to pain stimulus in humans is machine-like in that it is reflexive. The bottom line is, how do humans program a robot to respond to stimulus? You could, for example, program a guided missile to disable its detonator if its sensors detect more than a certain number of human IR signatures within its blast zone, and then call it “empathy”. But this is the human programmer’s empathy, not the machine’s.

  • Anon
    October 17, 2016 07:45 pm

    Curious,

    I would think that Benny would stick to his “non-Singularity” position, and distinguish the difference between the human “wetware” and the machine “soft/firm/hard-ware” by saying that the human electrical signal and processing require “the soul” to qualify as true pain – abstracting as it were the sensation to the emotional state.

    I think that he is unreachable as long as he maintains a “no-Singularity” viewpoint.

  • Curious
    October 17, 2016 05:59 pm

    Benny wrote: “Don’t lose sight of the fact that robot behaviour is just a computer program. Can a software routine feel pain? No.”
    Pain in a human is an electrical signal processed by wetware as something ‘bad.’

    Pain in a robot can also be an electrical signal processed by hardware/software as something ‘bad.’

  • Anon
    October 17, 2016 11:48 am

    I would be remiss if I did not mention a few other movies that delve into the subject matter: Terminator and The Matrix series…

  • Anon
    October 17, 2016 11:44 am

    Benny,

    Your opinion on the Singularity is obvious, even if you do not answer my question.

    However, the discussion here does not adhere to that facet of your opinion.

  • Anon
    October 17, 2016 11:42 am

    Asimov’s Three Laws are genius in their simplicity.

    Enforcement is just another word for accountability.

    But even (especially!!) humans in different systems lack that.

    It is indeed a fear that intelligent (Singularity) machines would also lack that.

    Ex Machina is one example already given. Another is I, Robot, with a mixture of both “good” and “bad” machine intelligentsia (VIKI and Sonny).

  • Benny
    October 17, 2016 11:39 am

    Curious,
    If you are suggesting regulating robot behaviour by means of a feedback loop, fine, that’s the way robots have always worked. Don’t lose sight of the fact that robot behaviour is just a computer program. Can a software routine feel pain? No.

  • Curious
    October 17, 2016 09:43 am

    @7 … submitted before I had a chance to finish my thought.

    As a feedback mechanism, pain is a very useful tool (provided to us by evolution) to keep human beings (and all beings for that matter) from doing things that they probably shouldn’t do.

    Physical pain keeps me from leaving my hand in the fire long enough for it to burn. Emotional pain also keeps me from doing (and not doing) certain things. It isn’t a perfect feedback mechanism, but it certainly has value.

    If one thinks of the first of Asimov’s laws of robotics (i.e., “A robot may not injure a human being or, through inaction, allow a human being to come to harm”), a feedback mechanism by which to enforce that would be in the form of seeing another human being in pain. In modern society, the vast majority of people are uncomfortable or worse (i.e., via empathy) when seeing another person hurt.

    The biggest monsters are those people that can inflict pain on others without being affected by it. To the extent that robots become a greater part of modern society, I think it is imperative that they experience pain.

  • Curious
    October 17, 2016 09:27 am

    Pain is a feedback mechanism.

  • Benny
    October 16, 2016 11:53 am

    Gene,
    I work for a robotics company. I know what goes on inside.

  • Anon
    October 16, 2016 11:21 am

    I would hazard a guess that Benny does not have any belief in the Singularity (or does not even understand the concept behind the Singularity).

  • Night Writer
    October 16, 2016 11:08 am

    I think Ex Machina illustrates why robots need to understand pain. And what pain actually is remains a great question. In Ex Machina, a robot games a human into falling in love with it and then uses the human to escape. So the robot in Ex Machina understood what love is in humans.

    (All the nudity was supposed to get you to understand that this was nudity of a machine–not real. The relationship of the robot was not real either, but just as compelling as the nudity.)

    Best A.I. movie ever–by far. It is too bad it has all that nudity as it probably prevents it from being more mainstream.

  • Gene Quinn
    October 16, 2016 10:50 am

    Benny-

    You should lodge a formal complaint with your criticism aimed at those scientists working on the projects mentioned, those inventors working on solutions mentioned, and the ethicists considering the moral questions.

    Frankly, it seems to me that your comment is what is disconnected from the reality of robotics.

    -Gene

  • Benny
    October 16, 2016 08:15 am

    This article reads like science fiction and seems to be disconnected from the reality of robotics. Deep down, robots are nothing more than electric motors or transducers controlled by computer programs — in many cases, not particularly complex programs. The “brain” of a robot is no different from the motherboard of your personal computer.

  • Prizzi’s Glory
    October 15, 2016 03:03 pm

    Megan Ray Nichols seems to have neither watched the Battlestar Galactica reboot (or its prequel, Caprica) nor read Brian Herbert’s expansion of the Dune series. The autonomous thinking machine Erasmus puts tremendous effort into understanding humans and finally experiences emotional pain when the Butlerian jihadists murder his ward Gilbertus Albans, the first Mentat.