The question of whether robots will ever think like humans — or even surpass us in intelligence — has been asked ever since humankind started dabbling in robotics. In the many years since then, other questions have been raised, such as: Will robots ever experience human emotions, attractions or love? Can they have a sense of humor? Can they feel empathy? “Star Trek” fans will recognize these questions as a constant source of inquiry relating to Lieutenant Commander Data, the android Starfleet officer portrayed by Brent Spiner in “Star Trek: The Next Generation.” Ultimately, the pursuit of emotional understanding led to an “emotion chip” being implanted into Data’s positronic net.
But what about pain? Will robots ever be able to feel pain? While emotions might have been difficult for Mr. Data to control without affecting his efficiency, there are benefits that come from teaching robots about pain. Some scientists are working diligently to answer whether robots can be taught to feel pain, and their work raises other questions as well. How will robots respond to pain? Is this an ethical endeavor? What are the potential problems?
How Could Robots Feel Pain?
A team of researchers from Stanford and Sapienza University of Rome was able to program an arm-like robot they designed to avoid collisions with humans and other obstacles coming from different directions at different velocities. In Germany, another group is experimenting with an artificial nervous system for robots, which could teach machines how to feel pain, as well as how to react to it. And at Leibniz University Hannover, researchers believe that robots could use the sensation of pain as a form of protection in hazardous situations.
Robots with fingertip sensors already exist. They can detect changes in contact pressure and temperature, and depending on the level of pressure or temperature registered, the robot arm’s responses will differ. This is really no different from the way humans and animals use this sensory feature. Pain is a response of the neurological system that helps us evade danger and avoid injury. A stimulus given to a robot that induces a similar evasive behavior could be thought of as teaching the robot to “feel pain.”
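To make the idea concrete, here is a minimal sketch of how a graded stimulus could map to graded evasive behavior. The thresholds, units and behavior names are illustrative assumptions, not taken from any of the research projects above:

```python
def pain_response(pressure_kpa: float, temperature_c: float) -> str:
    """Classify a contact stimulus and choose an evasive behavior.

    Hypothetical thresholds: real systems would calibrate these
    against the hardware's actual tolerances.
    """
    # Mild contact within a safe temperature band: keep working.
    if pressure_kpa < 10 and 0 <= temperature_c <= 45:
        return "continue"
    # Moderate stimulus: back away gradually until contact ceases.
    if pressure_kpa < 50 and temperature_c <= 60:
        return "retract_slowly"
    # Severe stimulus: the reflex analogue of acute pain.
    return "withdraw_immediately"

print(pain_response(5, 25))    # light touch at room temperature
print(pain_response(30, 55))   # firm, warm contact
print(pain_response(120, 25))  # crushing pressure
```

The point is not the specific numbers but the shape of the mapping: a continuous sensor reading is binned into discrete responses, just as biological pain produces anything from a flinch to a full withdrawal reflex.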
Thinking and Feeling Go Hand in Hand
If robots could become more humanlike in the way they respond to stimuli, they could potentially become even more efficient and safe as they go about their operations. There are patents for advanced robotics that include concepts for machines with true human intelligence that learn through experience — including through tactile feeling. Such machines would be capable of storing information and retrieving it or modifying it in response to certain situations or tasks, much like a human brain. In essence, these machines would be able to learn from the past to predict the future.
One way robots could learn is through experiencing feelings of pain and pleasure, just as humans do throughout their lives. For example, a robotic car could be programmed to experience pain when hit with a rock, or when someone slams one of its doors or when someone yells loudly. This was the invention disclosed in U.S. Patent Application No. 20080256008, titled Human Artificial Intelligence Machine. The patent application explains that this “feeling” of pain is created through the use of a loop recurring in a pathway. The robot is programmed to have a simple loop in memory, which allows knowledge to be built upon recursively. There are many other practical applications for pain in robotics, and scientists are currently exploring some of those potential benefits.
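The patent application does not spell out an implementation, but one way to imagine its recurring memory loop is as a store of learned associations that pain and pleasure signals update over time. The class and method names below are purely illustrative:

```python
class PainPleasureMemory:
    """Toy sketch of experience-driven learning, loosely inspired by
    the patent's description; not the patent's actual design."""

    def __init__(self):
        self.associations = {}  # event -> accumulated valence score

    def experience(self, event: str, valence: float) -> None:
        # Negative valence stands in for pain, positive for pleasure.
        # Scores accumulate, so repeated painful events are learned
        # as things to avoid — knowledge built up recursively.
        self.associations[event] = self.associations.get(event, 0.0) + valence

    def should_avoid(self, event: str) -> bool:
        # Avoid anything whose learned valence is net negative.
        return self.associations.get(event, 0.0) < 0

memory = PainPleasureMemory()
memory.experience("door_slammed", -1.0)   # "pain" from a slammed door
memory.experience("gentle_wash", 0.5)     # "pleasure" from maintenance
print(memory.should_avoid("door_slammed"))  # True
print(memory.should_avoid("gentle_wash"))   # False
```

Each pass through the loop modifies stored knowledge in light of new experience, which is the sense in which such a machine "learns from the past to predict the future."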
The Benefits of Robotic Pain
One way we use robots is to navigate dangerous situations, having them perform tasks that would put a human worker at high risk of injury or death. A highly radioactive environment is one such example. If robots were able to experience pain, and to interpret this type of sensory data as a threat to their physical existence, they would be better able to protect themselves from harm and complete tasks more efficiently. To return to “Star Trek,” Lieutenant Commander Data was able to identify atmospheric and environmental threats to his well-being, even if he was forced to describe them with a machine’s characteristic detachment.
Interestingly, there’s also the possibility that pain sensors for robots could in turn protect humans. Robots and humans already work together in a variety of settings, and as human-robot interaction becomes more common in the modern workplace, accident prevention will become more and more important. A robot would ideally be able to detect an unforeseen disturbance, then consider and rate the potential damage its interaction with that disturbance could cause. It could then counter the disturbance in different ways depending on how it’s classified. What we’re talking about here is anticipation and adaptation — qualities prized in humans, but which artificial life-forms have so far struggled to duplicate.
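That detect-rate-react pipeline can be sketched in a few lines. The damage proxy (kinetic energy of the approaching object) and the thresholds below are assumptions chosen for illustration, not figures from any deployed safety system:

```python
def rate_damage(mass_kg: float, speed_m_s: float) -> float:
    # Crude damage proxy: kinetic energy (joules) of the disturbance.
    return 0.5 * mass_kg * speed_m_s ** 2

def react(mass_kg: float, speed_m_s: float) -> str:
    """Classify a detected disturbance and pick a countermeasure."""
    energy = rate_damage(mass_kg, speed_m_s)
    if energy < 1.0:
        return "ignore"          # negligible: keep working
    if energy < 50.0:
        return "slow_and_yield"  # moderate: give way to avoid contact
    return "emergency_stop"      # severe: halt to protect anyone nearby

print(react(0.1, 1.0))   # a light, slow object drifting past
print(react(5.0, 3.0))   # person-scale contact at walking pace
print(react(80.0, 2.0))  # a heavy object on a collision course
```

A real system would fuse many sensor streams and account for direction and context, but the structure is the same: quantify the threat, classify it, and choose a proportionate response.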
An Ethical Dilemma
Renowned science fiction writer Isaac Asimov developed The Three Laws of Robotics in the mid-20th century. They are as follows:
- A robot may not injure or allow injury to a human being.
- A robot must obey orders as long as they don’t conflict with the first law.
- A robot must protect its own existence as long as it doesn’t violate the first or second laws in doing so.
This code is a sound one, but it’s also incomplete, since it only really deals with the ethics of how robots should behave toward human beings and themselves. Moving forward, as robots begin to think and feel in new ways, a brand-new code of ethics will be necessary — one that spells out laws for how humans should behave toward their robotic creations, rather than the other way around. Would you feel comfortable with torturing a robot that could sense pain? What would it take for you to develop an emotional connection to a machine? Humanlike features or behaviors? A cute or soft exterior?
What we’re discussing here is nothing less than the framework for interspecies relations. That probably sounds pretty far-fetched, but it won’t be so fanciful after the first artificial life-form passes the Turing Test a little more convincingly than anything we’ve seen so far. Nevertheless, experiments have shown that, over time, people do develop sympathy for their robot companions. People can converse with social robots that display lifelike facial expressions. Many name their devices. Soldiers have even honored their robot helpers with medals or funerals for services that include stepping on landmines. The robots they work with aren’t remotely humanoid or personable, but they do risk themselves to keep their compatriots safe. That’s the kind of service that transcends physical differences, and might just result in some unlikely feelings of attachment.
The Emergence of Robotic Rights
In the European Union, humankind’s emotional connection to robots has combined with the new robotic industrial revolution to make way for the beginnings of social rights for machines. The EU may adopt a draft plan to reclassify some machines as electronic persons, which would require new methods of taxation and new ways of thinking about social security and legal liability. Under the proposal, these machines would be able to trade currency, make copyright claims and even compel their owners to provide and pay into pensions. Quaint science-fiction outings from the ’70s and ’80s had us believing these were fanciful daydreams, fit only for surreal, allegorical storylines, but we’ve actually reached the point where these ethical questions need to be addressed in a real way.
Meanwhile, as automation takes hold like never before and threatens humankind with unprecedented unemployment, robots across Europe and around the world are quickly advancing to replace human workers in factory assembly lines, healthcare settings, surgical positions and even the service and tourism industries. If, down the road, these robots become even more humanlike, it would only make sense to provide them with at least some rights — and to, in turn, demand certain responsibilities in exchange. Would such developments simply be excessive bureaucracy that impedes progress in the field, or are they a necessary step to protect the well-being of our robots?
It’s too early to answer these questions fully. Nonetheless, we must consider them today, earnestly, as the world may get its first glimpse of uncannily human machines in the next few decades, as we continue the quest to create life in our own image, like one of the gods from our fables.