By Kip Wolf
In 2020 it is difficult to find someone in a professional setting who has not at least heard of artificial intelligence (AI), whether they have merely seen the topic mentioned on the news or in a movie or they practice data mining, data analysis, or data science professionally.
A contemporary application of AI is machine learning (ML), whereby machines are given access to data and left to learn from its processing and analysis. The potential rewards from ML and AI are as great as those we anticipate from cell and gene therapy; we hope for not just therapeutic solutions but possible cures for disease in the near future. Like ML and AI, cell and gene therapy solutions present a particular challenge, especially for sponsors and regulators: the interpretability and explainability of their results.
Interpretability is the extent to which a cause-and-effect relationship may be observed in a system, which, in the case of cell and gene therapy, is the individual patient’s human body. This is possible in most cases. Explainability, however, is the extent to which the internal mechanics and processing of a result may be explained in detail. This presents greater challenges. Recent regulatory approvals suggest that, as it relates to gene therapy, interpretability, if not explainability, is possible.
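The distinction can be sketched with a toy example (a hypothetical illustration, not drawn from any regulatory submission). A linear model is both interpretable and explainable: its coefficients state the cause-and-effect contribution of each input directly. A black-box learner, by contrast, may only be interpretable from the outside, by perturbing inputs and observing how the output responds, while its internal mechanics resist a compact explanation.

```python
# Toy contrast between explainability and interpretability.
# All data and function names here are hypothetical.

def fit_linear(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept).
    The fitted coefficients ARE the explanation: y is approximately
    slope * x + intercept, and each term can be stated in detail."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def knn_predict(xs, ys, x, k=3):
    """A 'black box' by comparison: the prediction is an average over
    the k nearest training points, so no compact internal rule can be
    read off -- only the input/output behavior can be observed."""
    nearest = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]   # roughly y = 2x + 1

slope, intercept = fit_linear(xs, ys)
print(f"explainable model: y ~ {slope:.2f}*x + {intercept:.2f}")

# Interpretability without explainability: probe the black box and
# observe a cause-and-effect relationship from the outside.
delta = knn_predict(xs, ys, 3.5) - knn_predict(xs, ys, 2.5)
print(f"black box: raising x by 1 raises the prediction by ~ {delta:.2f}")
```

The sketch mirrors the article's point: even when we cannot explain a system's internals, we may still establish interpretability by observing its responses, which is often what sponsors and regulators must settle for.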
CELL AND GENE THERAPY SUCCESS, A HARBINGER FOR NEW DATA TOOLS AND RELATED REGULATIONS
Innovations in cell and gene therapy hold great promise, especially for the near-term treatment of rare diseases. Recent approvals for Kymriah (Novartis), Luxturna (Spark Therapeutics), and Zolgensma (AveXis, a Novartis company) are the first approvals by the FDA of cell and gene therapies for the U.S. market and represent significant milestones in life sciences. Some of the regulatory challenges that therapies such as CAR T-cell therapy present beyond those of traditional medicine relate to understanding the biological mechanism of action, adjusting expectations for limited explainability, and describing the interpretability of the results. The patient success stories are inspiring, as are the stories of these complicated products passing regulatory scrutiny for approval.
"If we don’t adapt both the regulatory landscape and our preparedness to adjust to it, we will lose opportunities for innovation and health/value improvements."
We anticipate more and more cell and gene therapies in the coming years, many of which may very well lead to cures for disease. There is so much promise in this innovative area that health authorities have been adding resources to prepare for the expected increase in demand. The FDA is ramping up to handle the influx of new applications, anticipating the approval of 10 to 20 cell and gene therapies per year by 2025. Regulatory approval for new therapies (such as those indicated above) should establish a precedent for increased risk tolerance. Those approvals also signal a willingness of health authorities to entertain complicated topics such as understanding and validating "learning algorithms" and redefining related "controls," because the interpretability and explainability challenges presented by ML and AI are potentially as ambiguous as those of cell and gene therapy.
"We must develop a new paradigm that honestly and proactively embraces the ambiguity of machine learning and AI to best understand and define a revised risk profile for adoption in life sciences."
In our modern world, machine-generated data is being created at exponential rates. Data created in life sciences has increased greatly from improvements in rapid-throughput technologies, automated labs and manufacturing processes, and the proliferation of audit trails, to name a few examples. The creation of data and metadata has rapidly outstripped the ability for humans to review it. If we are to maintain the level of quality review for data that results in health agency approvals and a quality supply of lifesaving products, we will need to find new and innovative ways to ensure data quality (i.e., data quality by design) and improve data review (e.g., automated review of data and metadata). The means for processing this data will most likely need to employ the benefits of AI and ML.
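What "automated review of data and metadata" might look like in practice can be sketched minimally (a hypothetical illustration, not a validated method; all record fields and thresholds are assumptions): a script screens batch records against a robust statistical outlier rule and basic audit-trail completeness checks, flagging only the exceptions for human review.

```python
from statistics import median

# Hypothetical sketch of automated data review: flag records whose
# value is a statistical outlier (modified z-score based on the median
# absolute deviation, which resists masking by the outlier itself) or
# whose audit-trail metadata is incomplete, for human follow-up.

def review(records, limit=3.5):
    values = [r["value"] for r in records]
    med = median(values)
    mad = median(abs(v - med) for v in values)
    flagged = []
    for r in records:
        reasons = []
        if mad > 0 and 0.6745 * abs(r["value"] - med) / mad > limit:
            reasons.append("value is a statistical outlier")
        if not r.get("operator") or not r.get("timestamp"):
            reasons.append("incomplete audit-trail metadata")
        if reasons:
            flagged.append((r["id"], reasons))
    return flagged

records = [  # illustrative batch data, entirely made up
    {"id": "B001", "value": 10.1, "operator": "jd", "timestamp": "2020-01-02T08:00"},
    {"id": "B002", "value": 9.9,  "operator": "jd", "timestamp": "2020-01-02T09:00"},
    {"id": "B003", "value": 10.0, "operator": "",   "timestamp": "2020-01-02T10:00"},
    {"id": "B004", "value": 42.0, "operator": "mk", "timestamp": "2020-01-02T11:00"},
]

for record_id, reasons in review(records):
    print(record_id, "->", "; ".join(reasons))
```

A rule set this simple is only a starting point; the article's argument is precisely that ML and AI will be needed where such hand-written rules cannot keep pace with the volume and variety of data and metadata.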
Regulations and health authorities have traditionally approached expectations for requirements by describing the "what" but not the "how." Some of those gaps have been filled by guidance and standards, which have proliferated in recent years as implementing regulations demands more clarity to match the speed at which technology is advancing and data is being created. As information technology requirements and Big Data expectations become more the norm and less the exception, we can expect regulations to become more explicit in these areas. We may see more guidance and standards before we see revised or new regulations, since guidance and standards tend to be implemented more quickly than regulation reform. In the meantime, guidance and standards continue to be the vehicles by which we attempt to keep up with the pace at which data creation is surpassing our ability to manage and interpret it. We need standards in life sciences and healthcare for data governance, data integrity, and AI. Already we are told that working groups from the likes of ASQ (American Society for Quality), ANSI (American National Standards Institute), and ISO (International Organization for Standardization) are trying to tackle broad standards or specifications for data integrity and AI.
A NEW RISK PROFILE — A NEW PARADIGM
If we don’t adapt both the regulatory landscape and our preparedness to adjust to it, we will lose opportunities for innovation and health/value improvements. We exist in a very litigious society with zero/low tolerance for risk where citizens continue to demand new and innovative therapies for reasons of unmet medical needs almost as much as for commercial value. Match this condition with the volatile issues of health insurance, reimbursement, or ethics, and we have a perfect storm where we cannot afford to ignore the opportunities that machine learning and AI present to data analysis for innovation.
Therefore, we cannot take a classic approach to the debate (e.g., decades-old regulatory approaches with predicate rules, Part 11, or data integrity) and expect the same success as before. The pace of these technologies will surpass our ability to regulate them efficiently and effectively. We must develop a new paradigm that honestly and proactively embraces the ambiguity of machine learning and AI to best understand and define a revised risk profile for adoption in life sciences. Our ability to adapt and redefine our risk tolerance, through expansion and adjustment based on the new risk profiles these innovative technologies generate, will determine how soon we adopt them and reap their incredible benefits.
If we are able to adjust our expectations both as sponsors identifying the clinical endpoints and as regulators reviewing the license applications for innovative cell and gene therapies, we should also be able to adjust our expectations for interpretability and explainability of machine learning and AI solutions that support life sciences. Likewise, if we are able to shift our risk profile and related tolerances for cell and gene therapies, it raises the question: "Can we do the same for machine learning and AI solutions?" While explainability may be limited, interpretability should be the goal.
KIP WOLF is a principal at Tunnell Consulting, where he leads the data integrity practice.