Trash-talking: we humans love to do it. And for the most part, we’ve thought we were the only ones capable of doing it (although those intimate with the animal kingdom will beg to differ). But now, there’s a new trash-talking machine on the block.
Literally – it’s a robot.
This particular robot is named Pepper, and comes from SoftBank – a company that so far has specialized in developing and manufacturing kiosk-style robots that interact directly with humans, by answering questions or providing directions in public spaces such as airports and museums. And yes, Pepper was designed to be completely friendly and non-intimidating, with manners superior even to those of the long-beloved bot from a galaxy far, far away, C-3PO. Even once it was programmed to trash-talk, its worst only amounted to phrases such as: “I have to say you are a terrible player,” or “Over the course of the game your playing has become confused.”
And yet, when put to the test, these minimally offensive barbs proved to get under people’s skin.
Pepper was part of a study that pitted it against technologically savvy human participants, who knew that the insults were coming from the trash-talking robot’s programming. Despite all that, the insults still hurt their performance: they didn’t score as well and didn’t improve over the course of 35 games against the trash-talking robot.
The study was later presented at the IEEE International Conference on Robot & Human Interactive Communication in New Delhi, India, and asserts that negative feedback – even when delivered conspicuously by a programmed trash-talking robot – can critically affect someone’s performance. Which raises the question: what social or economic incentive, exactly, invites this kind of experimental development?
Well, trash-talking has long been deployed to affect performance in sports, gaming, and other competitive activities; cue the familiar footage of crowds of fans jeering at the opposing team’s players (voluntarily, at that). But until now, it’s been widely believed that only another human with invested feelings could supply the emotional component needed to do the trick. Now, it seems that’s not the case, according to the researchers at Carnegie Mellon University who conducted the study.
More than anything, this observation clues us in to just how much influence robots in general can be programmed to have over humans.
One benefit is that this same programming could help provide intelligent, supportive responses to the comments and questions of mental-health patients, among others. But with consumerism and mental health in mind, a robot’s ability to sway human decision-making could prove far more harmful at the same time.