Policywise

AI will improve healthcare, but doctors and patients need legal safety net

In a recent paper, Bhattad and Jain argue that “artificial intelligence (AI) is the driving force of the latest technological developments in medical diagnosis with a revolutionary impact.” But what happens when an AI produces an incorrect breast cancer diagnosis (perhaps because of a bias in its training data), the physician accordingly fails to prescribe the right treatment, and the patient’s cancer metastasizes?

AI technologies employ machine learning to learn from data by identifying complex, latent (“hidden”) patterns in datasets. These systems are increasingly being used to assist in patient healthcare, whether by predicting outcomes or identifying pathology. However, this complexity can reach a point where neither the developers nor the operators understand the logic behind a given output. These “black-box” systems ingest data and produce results without revealing how they do so.
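To make “black box” concrete, here is a minimal sketch in Python, using the scikit-learn library on synthetic data rather than any real diagnostic system: the model emits a confident label, but its internal logic is not human-readable.

# Minimal illustration of a "black-box" classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for patient records: 1,000 samples, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X, y)

# The system ingests data and outputs a result...
print(model.predict(X[:1]))  # e.g. [1]

# ...but nothing here reveals *why*: the "reasoning" is distributed
# across 300 trees and thousands of split thresholds, opaque even
# to the system's developers.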

This opaqueness is the first impediment to claiming compensation when problems ensue. The inability to reverse engineer the AI’s decision makes it difficult for potential plaintiffs to identify the defect and figure out where the fault originated. Moreover, producers have two defenses at their disposal that could preclude liability: the development-risks defense and compliance with regulations.

A plaintiff might then attempt to bring a medical malpractice claim against the physician. The degree of explainability (i.e., human interpretability) varies from one system to another and could be a central factor in delineating physician liability.

However, reliance on information whose provenance is unexplainable should not by itself be considered unethical or negligent. Durán and Jongsma argue that in the absence of transparency and explainability, trust can be satisfactorily founded on the epistemological understanding that the AI system will produce the right output most of the time – the idea of “computational reliabilism.” Accordingly, a system audit that demonstrates consistent accuracy could counteract the patient’s claim of physician negligence.
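As a rough sketch of what such an audit might look like (hypothetical data and a hypothetical threshold, not a real validation protocol), one could measure the system’s accuracy on repeated held-out evaluations and check it against a pre-agreed bar:

# Sketch of a reliability audit in the spirit of "computational
# reliabilism": we cannot explain individual outputs, but we can
# measure how often the system is right on data it has never seen.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)

# Five-fold cross-validation: accuracy on held-out data each time.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.1%} (+/- {scores.std():.1%})")

AUDIT_THRESHOLD = 0.95  # hypothetical, pre-agreed bar
print("Audit passed" if scores.mean() >= AUDIT_THRESHOLD else "Audit failed")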

A second factor that may affect liability is the degree of reliance the physician demonstrates. In a hypothetical scenario where a physician trusts an erroneous AI recommendation that goes against the consensus of doctors, Tobia et al. have shown that jurors are more likely than not to consider that physician reasonable (not negligent).

This is important because it demonstrates that, unless the physician uses the AI incorrectly, her reliance on the AI decision is safe from liability, even when that output goes against established medical consensus and is ultimately proven wrong. A patient-plaintiff is therefore left in an awkward position: harm has been caused, but neither the physician nor the producer can (or should) be blamed.

I propose that the way out of this legal black hole is not a strict liability model (which would automatically blame the physician for her reliance) but the creation of a federal fund that would compensate patients for harm caused by a medical device’s AI component in scenarios where neither the producer nor the physician can predictably be held liable.

A strong policy basis already exists: Executive Order 13859 makes it clear that the government’s policy is “to sustain and enhance the…leadership position of the U.S. in AI.”

Transferring the liability costs of this nascent technology to physicians and producers through insurance premiums would undermine these policy objectives. Instead, the cost should be budgeted within the government’s ambitious spending plan, and patients should be provided with a safety net if public trust is to be maintained.

The potential for AI to revolutionize healthcare seems inestimable. As change shakes the foundations of centuries-old legal ideas, new legal solutions must keep pace.

-By Alex Tsalidis, summer intern in the Center for Medical Ethics and Health Policy at Baylor College of Medicine and a law student at the University of Cambridge
