CRF Blog

Teaching Robots Right From Wrong

by David De La Torre

1843 magazine reports on artificial intelligence experts' efforts at teaching robots right from wrong.

Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests? Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?

In 2017, this is an urgent question. Self-driving cars have clocked up millions of miles on our roads while making autonomous decisions that might affect the safety of other human road-users. Roboticists in Japan, Europe and the United States are developing service robots to provide care for the elderly and disabled. One such robot carer, launched in 2015 and dubbed Robear (it sports the face of a polar-bear cub), is strong enough to lift frail patients from their beds; if it can do that, it can also, conceivably, crush them. Since 2000, the US Army has deployed thousands of robots equipped with machine guns, each one able to locate targets and aim at them without the need for human involvement (they are not, however, permitted to pull the trigger unsupervised).

Public figures have also stoked the sense of dread surrounding the idea of autonomous machines. Elon Musk, a tech entrepreneur, has claimed that artificial intelligence is the greatest existential threat to mankind. Last summer the White House commissioned four workshops for experts to discuss the moral dimension of robotics. As Rosalind Picard, director of the Affective Computing Group at MIT, puts it: “The greater the freedom of a machine, the more it will need moral standards.” [more]