How to Make People Think Robots and Corpses Have Feelings

Photo Credit: Disney

The right to die has played a critical role in the development of the doctor/patient relationship. It was families clamoring for the right to allow their loved ones to die who forced the world to recognize that physicians’ medical decisions aren’t just medical decisions, but involve enormous value judgments. In 1975, after Karen Ann Quinlan suffered irreversible brain damage that left her in a persistent vegetative state, her loving parents asked her doctors to remove her ventilator. Her doctors refused, saying such an action was medically inappropriate. The New Jersey Supreme Court, and the majority of the lay public, concluded that the doctors were exceeding their authority by making moral judgments about whether Quinlan should live or die.
When I tell people Quinlan’s story (for example, in my book Critical Decisions), I present it as an example of the distinction between medical facts and value judgments. Physicians typically hold expertise about medical facts – about whether people like Quinlan, in a persistent vegetative state, can experience pain or joy; about whether her ventilator was prolonging her life. But decisions about whether to keep Quinlan on the ventilator are value judgments, and physicians have no special expertise, or power, to make them.
As it turns out, I’m partly wrong about the distinction between medical facts and value judgments. Recent research on, among other things, people’s attitudes towards robots has shown that medical judgments – whether, say, a person in a persistent vegetative state can experience pain – are sometimes influenced by our moral thinking. When our moral sensibilities are offended, we sometimes attribute feelings and intentions to beings incapable of harboring such states of mind.
In one study, for example, the researchers described persistent vegetative state to participants. They explained that people in PVS have no conscious awareness – cannot experience pain or joy, and are unaware of their surroundings. They then asked participants to imagine that a nurse was purposely disconnecting a patient’s feeding tube at night, hoping the patient would die so the relatives would get their inheritance (and pass part of that inheritance to the nurse). Pretty evil, yes? I agree. And because of the evilness of this act, people began attributing feelings to the patient in PVS. When something humanoid is harmed, we recognize the immorality of that harm. Consequently, many of us begin attributing feelings and thoughts to the person or thing being harmed.
How do I know that it was the evilness of the act that caused people to believe that the patient with PVS could experience pain? Because another group of people was given the same scenario but was told that the tube feedings were interrupted at night by accident, due to a technical error. This group of participants was significantly less likely to attribute states of mind to the patient.
Not convinced? (To read the rest of this article, please visit Forbes.)

Peter Ubel