Policywise

Should we create moral machines?  

The development of artificial intelligence (AI) that can act autonomously has raised important questions about the nature and limits of machine decision-making. Is it possible to build an ethical dimension into autonomous machines, i.e., is ethics “computable”? Are autonomous machines capable of factoring moral considerations into their decisions? Can AI be programmed to know the difference between “right” and “wrong”?

Once the stuff of science fiction novels, these questions are now driving discussions about the actual creation of moral machines, also known as artificial moral agents, and have contributed substantially to the growth of the field of machine ethics. Some scholars have attempted to stop this development in its tracks, asking not whether we can create moral machines, but whether we should.

According to a recent article, the answer is a resounding “no.” In a series of nine studies, researchers examined how comfortable people are with machines making morally relevant decisions in a variety of settings, including medical, legal, and military contexts, as well as potential ways to increase the acceptability of moral machines.

The studies found that people are averse to machines making morally relevant decisions even when the outcomes are positive. Moreover, despite identifying potential routes for improving acceptability, the researchers believe this aversion would prove a significant obstacle to integrating moral machines into our society.

Although this study reveals strong public opposition to moral machines, some scholars believe their creation is inevitable. If that is the case, we need to discern how moral machines will be treated in our legal system and in society as a whole.

When it comes to moral and legal status, the closest comparison we can draw at present is to the status of animals in society. There is much debate over whether animals have morality, that is, whether they can act on the basis of moral considerations, as well as over their moral status and the degree of respect, welfare, and protection they should be afforded.

If we do create machines that can factor moral considerations into decision-making, can we say for sure that these machines have morality? Would being identified as an entity with morality grant these machines moral status similar to that of humans? If so, would this moral status suggest that moral machines deserve legal rights and protections?

Attempting to determine the potential legal status of moral machines raises even further questions. In large part, these moral machines may be developed using our own definitions of ethical principles, societal expectations, and notions of “right” and “wrong.” In a society that is punitive by nature when individuals commit acts we deem “wrong,” it will be important to consider how society plans to treat moral machines that make such decisions.

Can punishment be attributed to moral machines? What kinds of actions would fall into this category, and what would such punishment look like?

Whether or not the creation of moral machines is inevitable, it is crucial that we investigate these questions to better understand our own conceptions of morality and moral status. In any case, the emergence of moral machines will undoubtedly require conversations among scientists, ethicists, policy-makers, philosophers, and the public to understand their role in our society moving forward.

-By Meghan Hurley, summer intern at the Center for Medical Ethics and Health Policy at Baylor College of Medicine, currently pursuing a master’s degree in bioethics at Emory University
