If you think about ethics and robots, what comes to mind? “The Jetsons?” “2001: A Space Odyssey?” “Terminator?” I recently led a bioethics journal club session to discuss two articles about artificial intelligence (AI), autonomy and moral decision-making: Rieder and colleagues’ 2020 article and Borenstein and Arkin’s 2016 article. Both articles raise intriguing questions: What limits should we place on AI and robot autonomy? How should we encode morality? And how might AI and social robots nudge us to be more moral people?
I thought these ideas would be perfect discussion topics, but something unexpected emerged from some of the participants: fear. Specifically, participants were concerned that AI would chip away at our autonomy and inevitably take over. Is this slippery slope argument valid, or is it a logical fallacy? If AI and robotic technologies are likely to emerge in almost all aspects of our lives, is a Terminator scenario inevitable?
How autonomous should robots be?
The question of autonomy is a double-edged sword. More autonomous machines may increase efficiency and decrease human error. This is a potential benefit, say when your future self-driving car doesn’t get distracted by a text as it drives you to your destination. But autonomous robots also raise concerns about safety and control. Given this concern, Rieder et al. (2020) argue that we should not make AI technologies that have human levels of autonomy. Although still in the realm of speculative ethics, human-level autonomy may be less of a concern if we know that AI will make moral decisions. This brings us to the next question.
How should we encode morality into AI technologies?
Second-order ethics describes how we “teach” morality to others, whether they be children, non-human animals or AI technologies (Rieder et al. 2020). For the sci-fi connoisseurs amongst us, this may bring to mind Isaac Asimov’s Three Laws of Robotics from his “I, Robot” story collection, the first and most important of which is that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
This may be a good starting place, but such laws may be insufficient or even flawed for two reasons. First, many ethical decisions fall into morally gray areas that require more robust ethical frameworks. For example, an autonomous vehicle programmed to protect humans from harm would have no clear instruction on whether to hit two pedestrians or swerve into a wall and kill its own passenger.
Similarly, it is unclear whether a robot working in a social role would do harm by deceiving a person with dementia into believing they were mutually attached companions. Further, there is ambiguity about whether protecting human beings means protecting individuals or protecting public health in ways that could violate individuals’ autonomy.
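To see why a single rule under-determines these gray areas, here is a minimal, purely illustrative sketch (the scenario and all names are hypothetical, not drawn from any real autonomous-vehicle system): a literal “do no harm” check applied to the swerve dilemma forbids every available action and therefore yields no decision at all.

```python
# Toy illustration: a literal "do no harm" rule applied to the
# swerve dilemma. Both options harm humans, so the rule alone
# cannot choose between them.

def violates_first_law(action):
    """An action violates the rule if it harms any human."""
    return action["humans_harmed"] > 0

actions = [
    {"name": "stay course", "humans_harmed": 2},  # hits two pedestrians
    {"name": "swerve", "humans_harmed": 1},       # kills the passenger
]

permitted = [a["name"] for a in actions if not violates_first_law(a)]
print(permitted)  # prints [] -- the rule forbids every option
```

A richer ethical framework would have to rank or weigh the forbidden options rather than simply prohibit harm, which is exactly the kind of encoding question the articles raise.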
Second, AI may not be as naturally malicious as we imagine. The thinking goes that if we allow AI to self-learn and make its own decisions, humans would not be safe or could lose their autonomy. Is this true? Steven Pinker refutes this argument in his book, “How the Mind Works,” saying:
“Why command a robot not to do harm – wouldn’t it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem?… We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence – like vision, motor coordination, and common sense – does not come free with computation but has to be programmed in.”
Could we be projecting our own human nature onto new technologies? Borenstein and Arkin (2016) even discuss the possibility of using AI to nudge us to be better people. For example, robots and other AI technologies could remind us to donate to charity, call our mother on her birthday or put down our phone to spend more time with our children.
Who should decide?
So far, I have answered questions with yet more questions. One answer that I can commit to is that these questions should not be decided by a few people or a few technology companies. It is important that we start open, democratic, and transparent discussions about health, safety, and autonomy to assess what is good for us, for others sharing this Earth, and for Earth itself.
This should be an iterative process to prevent a slippery slope toward an unsafe or less autonomous world for humans. We need to talk about how to encode AI to be a force for good, and we need to start sooner rather than later.
-By Dr. Joelle Robertson-Preidler, clinical ethics fellow, Center for Medical Ethics and Health Policy