The Problem with ‘Friendly’ Artificial Intelligence

by Bill Hayes

In "The Problem with ‘Friendly’ Artificial Intelligence" for The New Atlantis, Adam Keiper and Ari N. Schulman look at some problems with machine “morality.”

There are, however, at least two reasons it is worth attending to the matter of machine morality today. First, there exists a community of activists striving to hasten a future of intelligent machines, human enhancement, and other radically transformative developments. It is still a relatively fractious and fringe movement, but it comprises think tanks, endowed projects at major universities (including Oxford), academics the world over, a dedicated “university” backed by the likes of Google and NASA, regular conferences, bestselling authors, bloggers, and a growing public audience. Its ideas seem increasingly influential in mainstream scientific circles, and indeed, are in some ways just an extension of the basic premises of the scientific project — Cartesian method and Baconian mastery taken to somewhat absurd logical extremes. These committed advocates have made machine morality a matter of public debate, and their contentions, some of which are profoundly wrongheaded, should not go unanswered.

Second, we should care about machine morality for a more practical reason: We have already entered the age of increasingly autonomous robots. This is not a matter of distant divination. To be sure, robots in industrial settings remain largely “dumb,” and today’s consumer robots are basically just appliances or toys. But the United States has been developing and deploying military robots with wheels and wings — like the Predator drones, which are now remotely controlled by people who may be on the other side of the world. These machines are already capable of acting with some degree of autonomy. So how much autonomy is appropriate, especially when intentional acts of attacking and killing are a possibility? Military doctrine now requires that human beings be kept “in the loop” — so that whenever force is used, human beings must approve, and responsibility remains in the hands of the individuals who give the affirmative orders. But even today, the possibility of accidents raises vexing legal and ethical questions. And looking just a short distance ahead, more advanced autonomous military weapons systems now seem imminent; they might operate so efficiently that the requirement for real-time human oversight could be considered a strategically intolerable delay. The nearness at hand of machines with agency and lethality, and the likelihood that machines with similar degrees of autonomy could be arriving in non-military settings before too long, make machine morality a matter well worth studying now. [more]