Magazine Article | February 28, 2020

The Inevitable Collision of AI Healthcare Technologies And Product Liability

Source: Life Science Leader

By Lora Spencer

Welcome to the ubiquitous world of artificial intelligence (AI). AI is now present in nearly every industry, from transportation to communications to retail to healthcare. It has changed the way we move, communicate, shop, receive medical treatment, and monitor personal health.

There is no single, universally accepted definition of AI. However, AI is broadly defined as the use of computer science to enable machines to behave intelligently. Through various techniques (e.g., machine learning, deep learning), AI gives a machine the ability to interpret data, learn from that data, and make recommendations or decisions based on that data.

Recent developments in healthcare built on machine learning and deep learning have produced emerging AI technologies that improve diagnostic accuracy and efficiency, predict illnesses, automate routine healthcare tasks, and refine processes and care beyond human capabilities. For example, consider an AI imaging system that provides real-time cancer diagnoses, an AI program that predicts an injury days before its onset, an AI device that estimates the probability of a heart attack, AI technologies designed to minimize administrative burdens and costs, and AI technology that identifies no-value or low-value work activities and eliminates workflow inefficiencies. These current and emerging examples are just the beginning; the healthcare industry is heavily invested in all things AI.

A recent Forbes article titled “AI (Artificial Intelligence) What’s The Next Frontier For Healthcare?” states that overall AI healthcare spending is expected to exceed $36 billion by 2025. The transformative trends anchoring this forecast include AI combined with data analytics, genomics, electronic medical records, and wearables, which together are predicted to drive personalized medicine and improve patient care so that patients can be diagnosed and treated earlier and more accurately.

THE INTERSECTION OF AI HEALTHCARE TECHNOLOGIES AND PRODUCT LIABILITY LAW

A well-known property law axiom, “You can’t have the benefit without the burden,” is similarly applicable to product liability law. Undoubtedly, AI’s transformative technologies offer profound benefits to the healthcare industry. However, AI also carries an inescapable burden of risk and mistakes.

Failures or mistakes in healthcare can lead to patients being harmed. Imagine an AI imaging system designed to diagnose cancer that misreads a radiological scan and mistakes a malignant tumor for a benign one, or a benign tumor for a malignant one. Consider an AI technology designed to predict and prevent adverse drug events that recommends the wrong drug for a patient. Envision an AI technology designed to read a patient’s genomic sequence to deliver precision medicine that misreads the sequence and recommends an inappropriate treatment. Such failures or mistakes can result in widespread harm to hundreds if not thousands of patients. This resulting harm, for technologies deemed to be a “product,” is where AI healthcare technologies and product liability law intersect.

Product liability law, as applicable here, “broadly refers to the legal responsibility for injury or harm resulting from a product’s intended use.” When an individual uses a product in its intended manner and is harmed, a claim may be brought for damage arising from the product’s use and the resulting injury. However, where harm results from an AI healthcare technology, there may be multiple companies involved in the algorithmic design of the product, numerous entities in the chain of distribution, and “black box” adaptive technology (the algorithm “adapts” to new data to create its own algorithm, without a complete understanding of how or why) as opposed to locked technology (the algorithm as entered “locks” and does not change), all of which complicate the legal responsibility for the resulting harm. Thus, complex questions will arise: “Who may be sued for the resulting harm? Who, among the multiple parties, will be at fault? What claims may be brought? What damages are available?” Unfortunately, the answers to these questions are not straightforward.

Product liability case law addressing AI healthcare technologies is underdeveloped; it will take years to catch up to these novel and emergent technologies, and even then it will always lag. However, product liability case law in general is well developed, and it will likely adapt to address some of AI’s complicated liability questions and multifarious issues.
Under the current product liability framework, the potential theories of liability available to a claimant bringing a product liability claim are negligence, strict liability, design defect, manufacturing defect, failure to warn, and breach of warranty. Because product liability law is not uniform, the specific theories available to a claimant vary by jurisdiction. Thus, the answers to questions such as “Who should be sued? What theories of liability are available? What damages may be recovered? How will fault be attributed and apportioned?” depend on the jurisdiction. For example, in some jurisdictions, a claimant may be able to bring a claim under one or more of the previously mentioned theories of liability, while in others, a claimant may be entitled only to a single cause of action as set forth by the state’s product liability act (which provides the exclusive remedy for product liability claims).

Accordingly, given the complex nature of U.S. product liability laws, untangling fault related to AI healthcare technologies will prove multidimensional. The jurisdictional landscape, where AI healthcare technologies and product liability law will inevitably intersect and collide, will govern which parties may be sued, the available theories of liability, the apportionment of fault, and the recoverable damages.

HOW TO PREPARE

As previously discussed, AI technology case law is underdeveloped, leaving critical liability questions unanswered. Moreover, how or whether product liability jurisprudence will be altered remains to be seen. In the interim, life science leaders should monitor autonomous vehicle litigation and legislation in the transportation industry, as well as software-induced harm litigation in the airline industry (where two recent fatal crashes were attributed to non-AI software design systems), for potential insights into what to expect when AI healthcare technologies and product liability law collide.

Nevertheless, preparation, prevention, and protection are essential considerations for defending against this inevitable collision. Most specifically, designers and manufacturers should:

  • Ascertain and adhere to relevant government and industry standards during the design and manufacture of emerging AI technologies. (It is noteworthy that the regulatory framework for AI technology is still being created. For example, see U.S. Food & Drug Administration, Software as a Medical Device (SaMD): Clinical Evaluation, Guidance for Industry and Food and Drug Administration Staff, December 8, 2017.)
  • Develop and implement an evaluation program to ensure compliance with those standards.
  • Document the processes, findings, and timeline of such evaluations to minimize design and manufacturing defects and to demonstrate concern for quality and safety.
  • Ensure that all warnings and labels are adequate and accurate.

In summary, to maximize the benefits of emerging AI healthcare technologies while minimizing the burden of the inherent product liability risks, life science leaders should adopt specific procedures that: 1) demonstrate care and diligence in the design and manufacture of the AI healthcare technology products placed in the stream of commerce; 2) test, evaluate, test, evaluate (rinse and repeat) the technology before placing it into the stream of commerce; and 3) issue adequate and appropriate warnings and instructions related to the technology.

LORA SPENCER is an associate with the Reed Smith Life Sciences and Health Industry Group. She focuses her practice on product liability and pharmaceutical and medical device defense.