Robots designed to interact socially with humans are slowly becoming more common. They’re appearing as receptionists, tour guides, security guards, and porters. But how good are we at treating these robots as robots? A growing body of evidence suggests the answer is: not very good at all. Studies have repeatedly shown that we’re extremely susceptible to social cues coming from machines, and a recent experiment by German researchers demonstrates that people will even refuse to turn a robot off — if it begs for its life.
In the study, published in the open access journal PLOS One, 89 volunteers were recruited to complete a pair of tasks with the help of Nao, a small humanoid robot. The participants were told that the tasks (which involved answering a series of either/or questions, like “Do you prefer pasta or pizza?”, and organizing a weekly schedule) were intended to improve Nao’s learning algorithms. But this was just a cover story: the real test came after the tasks were completed, when the scientists asked participants to turn off the robot.
In roughly half of the experiments, the robot protested, telling participants it was afraid of the dark and even begging: “No! Please do not switch me off!” When this happened, the human volunteers were more likely to refuse to turn the bot off. Of the 43 volunteers who heard Nao’s pleas, 13 refused. And the remaining 30 took, on average, twice as long to comply compared with those who did not hear the desperate cries at all.