Guest Column | March 29, 2021

AI/ML-Enabled Medical Devices — 4 Keys To Obtain Global Regulatory Approval

By John Giantsidis, president, CyberActa, Inc.

Artificial intelligence (AI) and machine learning (ML) are disrupting and improving our world. With intelligent machines capable of high-level cognitive processes like thinking, perceiving, learning, problem-solving, and decision-making, coupled with advances in data collection and aggregation, analytics, and computer processing power, AI and ML present opportunities to complement and supplement human intelligence. Application of AI and ML in medical devices is making possible AI/ML-driven diagnostics and personalized treatments.

Artificial intelligence has the potential to provide incremental value to medical device designers and manufacturers and is expected to be the key source of competitive advantage for firms that adopt it. Global regulators have understood that AI and ML models do not fit well within the current medical device regulatory framework and are feverishly working to achieve a harmonized approach to the management of AI medical devices. The International Medical Device Regulators Forum is attempting to standardize oversight of AI- and ML-based medical devices, and the terminology associated with those devices, among its members. Nonetheless, aside from the regulatory submission and review process, which as of right now is very different from jurisdiction to jurisdiction, medical device companies must realize that data is one of the primary drivers of AI/ML solutions and, thus, appropriate handling of data to ensure privacy and security is of prime importance. Challenges include data usage without consent, risk of identification of individuals through data, data selection bias and the resulting discriminatory nature of AI/ML models, and asymmetry in data aggregation.

By digesting the different jurisdictional AI/ML regulatory frameworks that have been released (draft or enforceable), along with our own experience with the different agencies, we have identified the common denominators that, if properly implemented and operationalized, would enable medical device companies to mount a compelling approach to global commercialization of AI/ML medical devices.

We have distilled the four common enablers that would accelerate regulatory review, approval, and subsequent commercialization:

  • Software Risk Management
  • Algorithm Design
  • Quality of Data
  • Security

1. Software Risk Management

Considering that IEC 62304, one of the most widely used standards for applying a risk-based approach to software development and maintenance throughout the product life cycle, does not address artificial intelligence or machine learning, any firm’s software risk management activities should be updated based on:

  • the intended use of the software (target disease, clinical use, importance, urgency)
  • usage scenarios (applicable population, target users, places of use, clinical processes)
  • core functions (processing objects, data compatibility, functional types) throughout the software life cycle process.

The risks of clinical use of the software should also include false negatives and false positives. A false negative is a missed diagnosis that can delay follow-up diagnosis and treatment, a particular concern for rapidly progressing diseases; a false positive is a misdiagnosis that can lead to unnecessary follow-up testing or treatment.
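
As a minimal sketch, assuming a binary diagnostic output and hypothetical evaluation labels, the Python snippet below computes the missed-diagnosis (false-negative) and misdiagnosis (false-positive) rates that would feed such a risk evaluation; the acceptance criteria must come from the device’s own risk analysis.

    # Illustrative sketch only: quantifying false-negative (missed diagnosis) and
    # false-positive (misdiagnosis) rates for a binary diagnostic output.
    # Labels below are hypothetical; acceptance criteria must come from the
    # device's own risk analysis.

    def diagnostic_error_rates(y_true, y_pred):
        """Return (false_negative_rate, false_positive_rate) for binary labels."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fnr = fn / (fn + tp) if (fn + tp) else 0.0   # missed diagnoses
        fpr = fp / (fp + tn) if (fp + tn) else 0.0   # misdiagnoses
        return fnr, fpr

    # Hypothetical evaluation data (1 = disease present, 0 = disease absent).
    y_true = [1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    fnr, fpr = diagnostic_error_rates(y_true, y_pred)
    print(f"False-negative rate: {fnr:.2f}, false-positive rate: {fpr:.2f}")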

2. Algorithm Design

Consider algorithm selection, algorithm training, network security protection, and algorithm performance evaluation in the algorithm design. Design data-driven and knowledge-driven algorithms to improve the explainability of the algorithms. Algorithm selection should specify the name, structure (e.g., number of layers, parameter size), flowchart, off-the-shelf framework (e.g., TensorFlow, Caffe, PyTorch), input and output, operating environment, and the basis for selecting the algorithm. At the same time, clarify the principles, methods, and risk considerations of algorithm selection and design, such as quantization error, vanishing gradients, overfitting, and interpretability (white-boxing). If you are using transfer learning techniques, supplement these with summary information such as the construction and validation of the data set and the validation of the pre-trained model.
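
One lightweight way to operationalize these documentation expectations is to keep a machine-readable algorithm description alongside the model. The sketch below is purely illustrative; the field names and values are hypothetical and are not prescribed by any regulator or standard.

    # Hypothetical, minimal "algorithm description" record; the field names and
    # values are illustrative and not mandated by any regulator or standard.
    algorithm_spec = {
        "name": "LesionDetectorNet",                    # hypothetical model name
        "structure": {"layers": 34, "parameters": 21_500_000},
        "framework": "PyTorch 2.1",                     # off-the-shelf framework and version
        "input": "512x512 single-channel CT slice",
        "output": "per-slice lesion probability in [0, 1]",
        "operating_environment": "Linux x86_64, CUDA 12, 8 GB GPU memory",
        "selection_rationale": "residual CNN chosen to mitigate vanishing gradients",
        "risk_considerations": ["quantization error", "overfitting", "interpretability"],
    }
    print(algorithm_spec["framework"])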

Base algorithm training on the training set and the tuning set, clearly specifying the evaluation indicators, training methods, training objectives, tuning methods, and training data volume, supported by evidence such as evaluation indicator curves. Base the evaluation indicators on clinical needs, such as sensitivity and specificity. Training methods include, but are not limited to, the hold-out method and the cross-validation method. Training objectives should meet clinical requirements and be supported by evidence such as ROC curves. The tuning method should clarify the algorithm optimization strategy and its implementation. The evaluation indicator curve should confirm the adequacy and effectiveness of algorithm training.
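
As a hedged illustration of the hold-out and cross-validation methods and of ROC-based evidence, the following sketch uses scikit-learn with synthetic data and an arbitrary classifier standing in for the real training data and algorithm:

    # Illustrative only: hold-out split plus five-fold cross-validation with a
    # ROC-AUC evaluation indicator. Synthetic data and an arbitrary classifier
    # stand in for the real training data and algorithm.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Hold-out method: reserve an untouched test partition for final evidence.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    model = LogisticRegression(max_iter=1000)

    # Cross-validation on the training data as a tuning/selection indicator.
    cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
    print("5-fold ROC-AUC:", round(cv_auc.mean(), 3))

    # Hold-out evidence supporting the training objective (e.g., ROC-AUC).
    model.fit(X_train, y_train)
    test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print("Hold-out ROC-AUC:", round(test_auc, 3))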

3. Quality Of Data

Data collection should consider the compliance and diversity of data sources, the epidemiological characteristics of the target disease, and data quality control requirements. Data sources should ensure data diversity on a compliant basis to improve the generalizability of the algorithm, for example by drawing on representative clinical institutions from as many different geographies and care levels as possible, and on as many acquisition devices and acquisition parameter settings as practical.
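
To make data-source diversity auditable, a team might summarize collected cases by institution, geography, and acquisition device before training. The sketch below assumes a hypothetical case-level metadata table with illustrative column names:

    # Hypothetical case-level metadata; the column names are illustrative.
    import pandas as pd

    cases = pd.DataFrame({
        "case_id":     ["c1", "c2", "c3", "c4", "c5"],
        "institution": ["Hospital A", "Hospital A", "Clinic B", "Hospital C", "Clinic B"],
        "region":      ["Northeast", "Northeast", "South", "West", "South"],
        "scanner":     ["Vendor X 64-slice", "Vendor Y 128-slice",
                        "Vendor X 64-slice", "Vendor Z 16-slice", "Vendor Y 128-slice"],
    })

    # Simple diversity summary to include in the data-collection report.
    print(cases.groupby(["region", "institution"]).size())
    print(cases["scanner"].value_counts())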

Epidemiological characteristics of the target disease include, but are not limited to, disease composition (e.g., classification, staging), population distribution (e.g., health status, patients’ sex, age, occupation, geography, lifestyle), statistical indicators (e.g., morbidity, prevalence, cure rate, mortality rate, survival rate), and the impact of complications and of diseases similar to the target disease.
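
To check that the assembled dataset reflects these epidemiological characteristics, the observed composition can be compared against published figures for the target population. The following sketch uses hypothetical columns and hypothetical expected proportions:

    # Illustrative check that dataset composition matches the target population;
    # the columns, values, and expected proportions are hypothetical.
    import pandas as pd

    cohort = pd.DataFrame({
        "stage":    ["I", "II", "I", "III", "II", "I"],
        "sex":      ["F", "M", "F", "M", "F", "M"],
        "age_band": ["40-49", "50-59", "60-69", "50-59", "40-49", "60-69"],
    })

    # Compare the observed stage distribution against published (expected) figures.
    observed = cohort["stage"].value_counts(normalize=True).sort_index()
    expected = pd.Series({"I": 0.45, "II": 0.35, "III": 0.20})   # hypothetical figures
    print(pd.DataFrame({"observed": observed, "expected": expected}))
    print(pd.crosstab(cohort["stage"], cohort["sex"]))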

It is important to clarify both the compatibility requirements and the acquisition requirements for the acquisition equipment. Base the compatibility requirements on the data generation method (direct generation or indirect generation) and provide either a list of compatible acquisition equipment or its technical requirements, including manufacturers, model specifications, performance indicators, and other requirements; if there are no specific requirements for the acquisition equipment, provide appropriate supporting information. Acquisition requirements should specify the acquisition method (e.g., regular imaging, enhanced imaging), acquisition protocol (e.g., MRI imaging sequence), acquisition parameters (e.g., CT tube voltage, tube current, exposure time, slice thickness), acquisition accuracy (e.g., resolution, sampling rate), and other requirements.
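
Where imaging data arrives as DICOM, many of these acquisition parameters can be read directly from the file headers. The sketch below uses pydicom with standard DICOM attribute keywords; treat the exact field list as an assumption to verify against the modality in question:

    # Sketch of reading acquisition-equipment and protocol parameters from a DICOM
    # header with pydicom; attribute availability varies by modality, so treat this
    # field list as an assumption to verify for your own data.
    import pydicom

    def acquisition_summary(path):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        fields = ["Manufacturer", "ManufacturerModelName", "KVP",
                  "XRayTubeCurrent", "SliceThickness", "PixelSpacing"]
        return {field: getattr(ds, field, None) for field in fields}

    # Hypothetical usage:
    # print(acquisition_summary("case_0001.dcm"))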

The training set shall ensure that the sample distribution is balanced, and the test set and the tuning set shall ensure that the sample distribution conforms to the actual clinical situation. The test set must not share samples with the training set or the tuning set; the sets must not intersect.
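
The non-intersection requirement is easiest to enforce when splits are made at the patient (or case) level. The sketch below uses hypothetical patient identifiers and asserts that the test set is disjoint from the training and tuning sets:

    # Illustrative patient-level split into training, tuning, and test sets, with an
    # explicit check that the test set shares no samples with the other two sets.
    import random

    patient_ids = [f"patient_{i:03d}" for i in range(100)]   # hypothetical identifiers
    random.seed(0)
    random.shuffle(patient_ids)

    train = set(patient_ids[:70])
    tuning = set(patient_ids[70:85])
    test = set(patient_ids[85:])

    # The test set must not intersect the training or tuning sets.
    assert test.isdisjoint(train) and test.isdisjoint(tuning)
    print(len(train), len(tuning), len(test))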

4. Security

The intended use of the software, its usage scenarios, and its core functions, considered against confidentiality, integrity, availability, and other network security characteristics, should determine the software’s network security capability requirements for dealing with threats such as cyberattacks and data theft. Common network threats to this type of software include, but are not limited to, framework vulnerability attacks (network attacks that exploit vulnerabilities in the off-the-shelf framework on which the algorithm is built) and data pollution (network attacks that pollute the input data). A helpful ENISA report considers the different stages of the AI life cycle, from requirements analysis to deployment, and the ecosystem of AI systems and applications; it also identifies the assets of the AI ecosystem as a fundamental step in pinpointing what needs to be protected and what could go wrong in terms of the security of that ecosystem.
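
One basic control against data pollution and tampering is to verify the integrity of the packaged model and its inputs before inference. The sketch below shows a hypothetical hash-based check; the manifest format and file names are illustrative, not a prescribed mechanism:

    # Hypothetical integrity check for model artifacts and input data before
    # inference, as one control against data pollution and tampering; the manifest
    # format and file names are illustrative only.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_manifest(manifest: dict) -> bool:
        """manifest maps a file path to its expected SHA-256 digest."""
        return all(sha256_of(Path(p)) == digest for p, digest in manifest.items())

    # Hypothetical usage:
    # manifest = {"model_weights.bin": "3b5d...", "calibration.json": "9f1c..."}
    # if not verify_manifest(manifest):
    #     raise RuntimeError("Integrity check failed; refusing to run inference")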

Conclusion

Regulators have recognized that AI and ML technologies pose several challenges from a regulatory perspective. They will be asking questions about how to determine when changes to an algorithm are so significant that they merit reevaluation of the medical device and its safety and effectiveness. There is a flurry of activity to address these gaps from both a technological and a regulatory perspective, such as ISO/IEC 22989 Artificial Intelligence – Concepts and terminology, ISO/IEC 23053 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML), and ISO/IEC 23894 Information Technology – Artificial Intelligence – Risk Management, among others.

While global regulations are being streamlined and international standards harmonized, medical device manufacturers can still design, develop, and commercialize by understanding the privacy framework in each applicable jurisdiction and being able to demonstrate adherence to local jurisdictional expectations.

About The Author:

John Giantsidis is the president of CyberActa, Inc., a boutique consultancy empowering medical device, digital health, and pharmaceutical companies in their cybersecurity, privacy, data integrity, risk, SaMD regulatory compliance, and commercialization endeavors. He is also a member of the Florida Bar’s Committee on Technology and a Cyber Aux with the U.S. Marine Corps. He holds a Bachelor of Science degree from Clark University, a Juris Doctor from the University of New Hampshire, and a Master of Engineering in Cybersecurity Policy and Compliance from The George Washington University. He can be reached at john.giantsidis@cyberacta.com.