Oct 15, 2014
From performing surgery and flying planes to babysitting kids and driving cars, today’s robots can do it all. With chatbots such as Eugene Goostman recently being hailed as “passing” the Turing test, it appears robots are becoming increasingly adept at posing as humans. As machines become ever more integrated into human lives, the need to imbue them with a sense of morality grows increasingly urgent. But can we really teach robots how to be good?
An innovative piece of research recently published in the Journal of Experimental & Theoretical Artificial Intelligence examines machine morality and questions whether it is “evil” for robots to masquerade as humans.
Drawing on Luciano Floridi's theories of Information Ethics and artificial evil, the team behind the research explore the ethical implications of developing machines in disguise. 'Masquerading refers to a person in a given context being unable to tell whether the machine is human', explain the researchers; this is the very essence of the Turing test. Such deception increases “metaphysical entropy”, that is, the corruption of entities and the impoverishment of being. Because it diminishes the good in the environment, or infosphere, Floridi regards it as the fundamental evil. Following this premise, the team set out to ascertain where 'the locus of moral responsibility and moral accountability' lies in relationships with masquerading machines, and to establish whether it is ethical to develop robots that can pass a Turing test.
The study identifies and analyses six significant actor-patient relationships that yield key insights on the matter. Examining the associations between developers, robots, users and owners, and drawing on notable examples such as Nanis' Twitter bot and Apple's Siri, the team ask where ethical accountability lies: with machines, with humans, or somewhere in between.
But what really lies behind the robot-mask, and is it really evil for machines to masquerade as humans? 'When a machine masquerades, it influences the behaviour or actions of people [towards the robot as well as their peers]', claim the academics. Even when the disguise doesn't corrupt the environment, it increases the likelihood of evil, because it becomes harder for individuals to make authentic ethical decisions. Advances in artificial intelligence have outpaced ethical developments, and humans now face a new set of problems brought about by the ever-developing world of machines. Until these issues are properly addressed, the question of whether we can teach robots to be good remains open.