“Which suicides are morally permissible to prevent?” This question caught my attention during a session at the American Society of Bioethics and Humanities (ASBH) annual conference last month. The conversation focused on the use of artificial intelligence (AI) in healthcare, specifically in algorithms designed to prevent suicide. This question seemed audacious. Shouldn’t we prevent all suicides? Don’t all lives hold equal weight?
During this session, I witnessed the intersection of medicine, business and bioethics, and I was inspired to reflect on how it parallels the material I study as an M.D./M.B.A. student and what the key takeaways might be.
The presenters first highlighted that several companies use AI to deliver mental health services, yet there is minimal regulation of how AI is used on these platforms. Simply put, this lack of oversight could lead to care that worsens existing mental health conditions. The presenters also noted that current algorithms to predict suicide have limited predictive power.
One presenter, however, introduced an intriguing nuance about these algorithms' scope. She opened her presentation with the tragic story of a 13-year-old boy who died by suicide, a case that clearly warrants close monitoring and prevention efforts. But it raises questions about other groups: should we also restrict the right of terminally ill patients to seek physician-assisted suicide, or target other specific populations? While certain high-risk groups, such as first responders and law enforcement personnel, evidently merit monitoring, the presenter also raised an interesting point about individuals undergoing elective cosmetic procedures, like breast augmentation surgery, who exhibit an elevated risk of suicide as well.
The questions raised by this discussion prompted me to wonder how these models are trained and how they have evolved. I am not an expert in AI, but after a quick literature review, I found that our collective understanding of foundation models and of the data used to train suicide prevention algorithms is lacking, and that there are ongoing efforts to demystify these models and evaluate how they work.
This session at ASBH helped me realize that AI is a tool we don't yet fully understand but one that will play a key part in our lives going forward, especially within healthcare.
A key question we should ask is: What are the responsibilities, if any, of companies that develop and market these AI-based solutions?
I believe these companies have an immense responsibility to apply, in developing these tools, the ethical frameworks we learned in medical school and that were reiterated at ASBH. Specifically, companies should be transparent about how they build their models, so that those in industry, academia and policymaking have the information they need to make informed decisions.
Underscoring this point, President Biden signed an executive order on Oct. 30, 2023, calling for a society-wide effort to amplify AI's benefits and mitigate its substantial risks. Among a slew of new guidelines, the order directs the National Institute of Standards and Technology to set standards for safe and trustworthy AI systems, the Department of Commerce to develop guidance for content authentication, and the Department of Justice and federal civil rights offices to address algorithmic discrimination through training and coordination.
The order also commits to investing in better training for workers so that they can participate effectively in an AI-based labor market, to catalyzing AI research and to promoting a fair, open and competitive AI ecosystem in which entrepreneurship can thrive. In addition, it requires companies developing foundation models that pose a serious risk to national security to notify the government and share the results of their safety tests.
Seeing this executive order released a few weeks after I attended the conference made me realize just how timely and important this issue is. I am grateful for the discussions at ASBH, which helped me think deeply about the technologies changing our lives and about corporations' ethical responsibilities moving forward. As I continue with my M.B.A. before returning to medical school next year, I hope to learn more about business ethics and the key role companies play, alongside our institutions and governments, in shaping our future.
By Maya Guhan, M.D. candidate, Baylor College of Medicine Class of 2025, and recipient of the Laurence McCullough Travel Award. She is also an M.B.A. candidate at Rice University's Jones Graduate School of Business.