Regulation

 AI in med tech: how should it be regulated?

In May, the FDA approved an innovative AI-driven algorithm to support clinical decision-making at the point of care. But regulatory standards are still in their early stages, with FDA Commissioner Scott Gottlieb recently writing about setting transparent standards to accelerate the entry of AI advances into the med tech space. Sally Turner explores the dangers of holding back on strong AI regulation in healthcare, and the benefits that effective regulation could provide.

Over the past 18 months, the US Food and Drug Administration (FDA) has faced a major challenge in updating its regulations, as a result of an influx of artificial intelligence (AI) devices set to transform healthcare. A statement from FDA Commissioner Scott Gottlieb acknowledged that the FDA’s traditional approach to overseeing certain healthcare products did not align with the types of innovations being developed.

‘Our approach to regulating these novel, swiftly evolving products must foster, not inhibit, innovation’, wrote Gottlieb, as the FDA announced plans to evolve its policies to advance development and oversight of innovative digital health tools.

Consumers and clinicians are increasingly embracing these technologies – from fitness trackers to high-end imaging devices – which provide them with a range of valuable health information. A year on, how is the FDA’s new guidance in this area standing up to scrutiny?

 OsteoDetect and the AI innovators

We do not classify all medical devices with artificial intelligence into one category

In May 2018, the FDA approved Imagen’s OsteoDetect, heralded by clinicians and the trade media as one of the first AI algorithms able to support clinical decision-making directly at the point of care. OsteoDetect is a form of computer-aided detection and diagnosis software designed to detect wrist fractures in adult patients. The product was reviewed through the FDA’s De Novo premarket pathway, a streamlined route to market for novel devices of low to moderate risk.

The FDA is keen to point out that it has also cleared other software with AI functionality, and the market is diversifying. In February 2018, the organisation permitted marketing of clinical decision support software that alerts providers to a potential stroke in patients, and in April it approved the first medical device to use AI to detect ‘greater than a mild level of the eye disease diabetic retinopathy’ in adults who have diabetes.

“We do not classify all medical devices with artificial intelligence into one category,” explains Stephanie Caccomo, spokesperson for the FDA. “We classify medical products based on their intended use, so we do not have a way of tracking which are the first AI products.”

However, it is clear that imaging analytics is leading the way in AI development in healthcare, offering the potential to better identify a range of issues, from fractures to tumours.

 A new FDA framework for AI

Gottlieb’s move towards ‘modern, flexible, risk-based approaches to regulation in this area’ resulted in the FDA’s Digital Health Innovation Action Plan, which outlines its efforts to ensure only high-quality, safe and effective digital products make it to the marketplace.

“The FDA recognises that artificial intelligence holds enormous promise for the future of medicine,” says Caccomo, “and we’re actively developing a new regulatory framework to promote innovation in this space, and support the use of AI-based technologies. As we explore and test our Pre-Cert pilot – where we focus on a firm’s underlying quality – we’ll account for one of the greatest benefits of machine learning – that it can continue to learn and improve as it is used.”

We’ll account for one of the greatest benefits of machine learning – that it can continue to learn and improve as it is used.
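
What ‘continuing to learn as it is used’ might look like in code: below is a minimal sketch of incremental model updating, using scikit-learn’s SGDClassifier and its partial_fit method, which refreshes a deployed model on new labelled cases without full retraining. The data and update schedule are synthetic and purely illustrative, not a description of any cleared device.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on a first batch of (synthetic) labelled cases.
X0, y0 = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# In use: each new batch of confirmed outcomes nudges the model, so its
# behaviour keeps shifting after clearance – the property a regulatory
# framework for adaptive AI has to account for.
for _ in range(10):
    X_new, y_new = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
    model.partial_fit(X_new, y_new)
```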

In 2017, the FDA moved to implement key provisions of the 21st Century Cures Act, offering further clarity on where it sees its role in digital health and, crucially, where it sees no need for involvement. Two further pieces of draft guidance are also being rolled out. The first, Clinical and Patient Decision Support Software, outlines the FDA’s approach to clinical decision support (CDS) software, with the aim of encouraging developers to ‘create, adapt and expand the functionalities’ of their software to help clinicians diagnose and treat a range of health conditions. The second, Software as a Medical Device: Clinical Evaluation, will offer globally recognised principles that can be drawn upon to set international standards in clinical AI.

The FDA points out that approval will only be given for certain types of low-risk support software that are to be used in a context where a patient or a caregiver can independently review the basis of the digital treatment recommendation. Caccomo adds: “Although the 21st Century Cures Act removed certain software functions from FDA’s purview, there are many types of software intended to support health care professionals that are not affected by the Cures Act amendments to the Food, Drug, and Cosmetic Act or this guidance, such as software that performs calculations routinely used in clinical practice.”
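
To make that distinction concrete, here is a minimal sketch of the kind of routine clinical calculation Caccomo describes – in this case creatinine clearance via the well-known Cockcroft-Gault formula. The helper is hypothetical and standalone; it illustrates the category, not any specific product.

```python
def creatinine_clearance(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float,
                         is_female: bool) -> float:
    """Estimate creatinine clearance (mL/min) using the Cockcroft-Gault
    formula – a calculation routinely used in clinical practice."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

# Example: a 67-year-old, 72 kg male with serum creatinine of 1.2 mg/dL.
print(round(creatinine_clearance(67, 72, 1.2, is_female=False), 1))  # ~60.8
```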

 The challenge ahead

The FDA faces some tough challenges in its management of digital tools. Emerging developments in AI and innovative algorithms may, in some cases, fail to meet the FDA’s recent draft guidance. Clinical tools powered by AI undoubtedly have many advantages, but the massive amounts of data they provide may make them less accessible to end-users who are more familiar with traditional analytics.

Clinical tools powered by AI undoubtedly have many advantages, but the massive amounts of data they provide may make them less accessible to end-users.

This new revolution in digital diagnostics offers devices that are able to process billions of data points in seconds, offering vital input into clinicians’ decision-making. However, given the vast amount of information AI is able to integrate, its reasoning for proposing a specific course of action may not be clear to the clinician.

The FDA has stated that OsteoDetect ‘is an adjunct tool and is not intended to replace a clinician’s review of the radiograph or his or her clinical judgment’. Radiologists are able to use the software to pinpoint the location of the fracture, but it is they who decide on the action that should be taken.
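
In software terms, that adjunct relationship can be sketched roughly as follows: the model proposes candidate regions with confidence scores, and its output is surfaced for review rather than acted on automatically. Every name and threshold below is hypothetical, not OsteoDetect’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate fracture region proposed by a detection model."""
    x: int             # bounding-box position in the radiograph (pixels)
    y: int
    width: int
    height: int
    confidence: float  # model confidence, 0.0 to 1.0

def flag_for_review(findings: list[Finding],
                    threshold: float = 0.5) -> list[Finding]:
    """Return candidate regions worth highlighting to the radiologist.

    Note what this does NOT do: it never issues a diagnosis. The clinician
    reviews the highlighted regions on the original image and decides.
    """
    return [f for f in findings if f.confidence >= threshold]

# Example: of two model outputs, only the higher-confidence one is flagged.
candidates = [Finding(120, 340, 64, 48, 0.91), Finding(400, 90, 32, 32, 0.22)]
for f in flag_for_review(candidates):
    print(f"Review region at ({f.x}, {f.y}), confidence {f.confidence:.2f}")
```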

 Future-proofing AI regulation

We are reaching a point where digital AI-driven devices are able to replicate the abilities of human health specialists, but whether FDA policies can keep pace with these innovations remains to be seen.

Caccomo asserts that the FDA will make sure that all aspects of its regulatory framework, such as new software validation tools, are sufficiently flexible to keep pace with the unique attributes of this rapidly advancing field.

We need patients and providers to understand the connection between decision-making in traditional healthcare settings and the use of these advanced technologies.

“Employing the Pre-Cert approach to AI may allow a firm to make certain minor changes to its devices without having to make submissions each time,” she says. “We know that our approach to AI regulation must establish appropriate guardrails for patients. And even as we cross new frontiers in innovation, we must make sure that these novel technologies can deliver benefits to patients by meeting our standards for safety and effectiveness.”

As the FDA begins to open the door to AI technology, it will be interesting to see how the axis of learning between this technology and clinicians develops. Caccomo concludes:

“We know that to support the widespread adoption of AI tools, we need patients and providers to understand the connection between decision-making in traditional healthcare settings and the use of these advanced technologies.”