Feature

AI Act: Implications for the medical device sector in the EU  

The European Union has pioneered the world’s first comprehensive regulation on artificial intelligence, known as the AI Act. This major legislation has significant implications for manufacturers of medical devices that want to market their technology within the European Union. By Bernard Banga.


In March 2024, after more than two years of work, the Artificial Intelligence (AI) Act was finally approved by the European Parliament, following endorsement by the member states in the Council of the European Union (EU). The EU is therefore set to introduce the world’s first comprehensive regulation governing the use of AI. Companies and stakeholders will have transition periods of up to three years to comply with the new requirements. The legislation will have an impact on manufacturers of medical devices powered by or incorporating AI, especially since they are already grappling with integrating the requirements of the EU’s new Medical Device Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR). The new Act will also cover Europe’s rapidly growing market for AI in medicine. According to the latest forecasts from the International Data Corporation, this market is projected to reach €179 billion by 2026, with a compound annual growth rate (CAGR) of 25.5% over the period 2022-2026.

New definitions of AI systems 

The AI Act is designed to encompass all manufacturers, suppliers, importers, distributors and users of AI systems, whether natural persons or legal entities, that are headquartered in the EU or whose products are marketed within the EU. Its definition of AI is no longer based solely on the use of one or more algorithmic techniques, but now refers to that of the Organisation for Economic Co-operation and Development (OECD), describing it as ‘a machine-based system’ (rather than ‘software’) that is ‘designed to operate with varying levels of autonomy’ and is able to ‘generate outputs such as predictions, content, recommendations, or decisions’. ‘This new definition is more focused on the capabilities of automatic learning and the concept of autonomous operation’, commented Frédéric Barbot, Scientific Coordinator of the Centre for Clinical Investigation and Technological Innovation 1429 (CIC 1429) at the Raymond Poincaré University Hospital in Garches, France. 

The AI Act introduces a new category of AI systems: ‘general-purpose’ AI models, which include foundation models and generative AI systems. General-purpose AI systems can be used and adapted for a wide range of applications for which they were not intentionally and specifically designed. The AI Act notably strengthens the rules for these general-purpose AI systems, requiring them to comply with numerous obligations, including in terms of evaluation and transparency. ‘They must also be registered in a European Union database separate from EUDAMED’, clarified Barbot. 

Heavy compliance requirements for medical devices 

The AI Act has two objectives: stimulating innovation in AI and, more importantly, mitigating risks through standards that ensure greater security, transparency and respect for the fundamental rights of European citizens. The EU’s flagship legislative initiative takes a risk-based approach built on four levels: minimal, limited, high and unacceptable risk. Minimal-risk AI systems will be permitted without any restrictions but are encouraged to adhere to a voluntary code of conduct. Limited-risk AI systems will be subject to transparency requirements, including notifying users that content is ‘generated by artificial intelligence’. High-risk AI systems will be authorised for the European market subject to compliance with pre-market evaluation requirements. Finally, AI systems presenting an unacceptable level of risk will be prohibited within the EU. 

Virtually all medical AI systems are classified as high-risk AI, even if the devices themselves are not considered ‘high-risk’ under the MDR and IVDR. ‘This could complicate the regulatory approval of numerous Class IIa, IIb or III software applications under Rule 11 of the MDR’, explained Barbot. ‘The AI Act sets out rigorous compliance requirements for medical devices used in combination with AI, notably in relation to the challenge of keeping up with changes in the AI model and the potential for risk reassessment’, he added.  

Medical device manufacturers will also have to navigate the overlaps and inconsistencies between certain terms and definitions in the AI Act and other EU legislation, particularly concerning quality management systems and risk management in the MDR and IVDR, and the transparency rules set out in the General Data Protection Regulation (GDPR). 

Prohibitions and penalties 

Notified bodies responsible for conformity assessments will need to expand their teams to include personnel who are highly skilled in AI. This may mean an increase in the cost of their services as well as longer examination timelines, potentially hindering access to AI-assisted medical technologies in the EU. 

The AI Act prohibits the use of AI systems for behaviour manipulation, individual categorisation and real-time remote biometric identification. This encompasses certain AI systems that are designed or likely to substantially alter human behaviour in a manner that may cause psychological or physical harm, including AI applications in neurotechnology. AI systems using subliminal techniques are also prohibited, unless used for approved therapeutic purposes. Violating these prohibitions could result in fines of up to €35 million, or 7% of a company’s annual global turnover. Breaching other AI Act requirements may lead to a fine of up to €15 million or 3% of global turnover, while providing incorrect information to notified bodies and authorities can result in a fine of up to €7.5 million or 1% of global turnover. EU citizens will also be able to file complaints about AI systems with the relevant authorities.  

The EU has established an AI Office tasked with coordinating the implementation of the legislation across the EU, while each member state will designate national authorities to oversee AI. 

Industry views on the AI Act   

AI Act: Pioneering pragmatism in data quality and innovation 
‘The requirements of the AI Act regarding training, validation and test datasets, including labels, stipulate that they must be relevant, sufficiently representative, properly controlled to detect errors, and as comprehensive as possible for their intended purposes. This represents a highly pragmatic approach to the realities of data quality. The use of real-world healthcare data, even where incomplete, is also desirable. Furthermore, the AI Act introduces regulatory sandboxes, which are intended to establish a controlled environment for the development, testing and validation of innovative AI systems under the supervision of regulatory authorities before commercialisation.’ 

Frédéric Barbot, Scientific Coordinator of the Centre for Clinical Investigation and Technological Innovation 1429 (CIC 1429), Raymond Poincaré University Hospital, Garches, France 

Comment from COCIR 
‘The adoption of the Artificial Intelligence Act (AIA) will potentially impact the availability of AI-based medical devices in the EU market due to overlaps with existing legislation regulating the medical device sector, including the MDR’, stated Jan-Willem Scheijgrond, COCIR President. This complex new landscape of regulatory requirements has the potential to result in inconsistencies, duplications and uncertainties, ultimately delaying patient access to products and posing significant barriers to innovation in the EU. 

Some of the requirements introduced by the AI Act, including management and conformity assessment procedures, are thus in conflict with existing elements of the MDR. ‘MDR already ensures patient safety aspects of medical devices. [. . .] Notified bodies will have to go through parallel processes for quality management systems and post-market surveillance’, continued Scheijgrond. ‘This may result in additional conformity assessments, documentation sets, and incompatible harmonised standards for each device. This may significantly constrain Notified Bodies.’

Further administrative inefficiencies will be created by the duplication of registration requirements, since manufacturers of medical devices associated with an AI system classified as a biometric categorisation system will need to register not only in EUDAMED, but also in a future AI Act database. ‘Impacts may include extended certification procedures, increased development costs, prioritization of other markets, and diversion of resources from research and development’, stated Scheijgrond, warning that the ultimate outcome will be reduced access to safe medical technologies for EU citizens and patients. In a statement, ‘COCIR recommendations on the Artificial Intelligence Act (AIA)’s alignment with the Medical Devices Regulation (MDR)’, COCIR is therefore calling for ‘effective alignment mechanisms between the AI Board, Medical Device Coordination Group, and stakeholders in the upcoming implementation of AIA which ensure safety, performance, and effectiveness of AI-enabled Medical Devices.’ 

COCIR is the European trade association representing the medical imaging, radiotherapy, health ICT and electromedical industries. It was founded in 1959 and is a non-profit association headquartered in Brussels (Belgium). Since 2007, it has also had a China Desk based in Beijing. 

A recent paper showcased attempts to make GPT-4 produce data that supported an unscientific conclusion – in this case, that penetrating keratoplasty had worse patient outcomes than deep anterior lamellar keratoplasty for sufferers of keratoconus, a condition that causes the cornea to thin, which can impair vision. Once the desired values were given, the LLM dutifully compiled a dataset that to an untrained eye would appear perfectly plausible.

Taloni explained that, while the data would fall apart under statistical scrutiny, it didn’t even push the limits of what ChatGPT can do. “We made a simple prompt […] The reality is that if someone was to create a fake data set, it is unlikely that they would use just one prompt. [If] they find an issue with the data set, they could fix it with consecutive prompts and that is a real problem. 

“There is this sort of tug of war between those who will inevitably try to generate fake data and all of our defensive mechanisms, including statistical tests and possibly software trained by AI.”

The issue will only worsen as the technology becomes more widely adopted. Indeed, a recent GlobalData survey found that while only 16.1% of respondents from its Hospital Management industry website reported that they were actively using the technology, a further 26.8% said either that they had plans to use it or that they were exploring its potential use.

Nature worked with two researchers, Jack Wilkinson and Zewen Lu, to examine the dataset using techniques that would commonly be used to screen for authenticity. They found a number of errors, including mismatches between the names and sexes of ‘patients’ and a lack of any link between pre- and post-operative vision measurements. 
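Checks of this kind are straightforward to automate. The sketch below is a minimal illustration in Python with pandas, applying the two screens just described to a small, entirely made-up table; the column names, values and the name-to-sex lookup are hypothetical assumptions for illustration, not details from the Nature analysis.

```python
import pandas as pd

# Entirely hypothetical stand-in for a surgical-outcomes dataset;
# none of these values come from the dataset examined by Nature.
df = pd.DataFrame({
    "name": ["Marco", "Giulia", "Luca", "Anna"],
    "sex": ["M", "M", "M", "F"],  # 'Giulia' recorded as male: suspicious
    "preop_acuity": [0.20, 0.30, 0.25, 0.40],
    "postop_acuity": [0.90, 0.10, 0.85, 0.95],
})

# Screen 1: names whose typical sex does not match the recorded sex.
# A tiny illustrative lookup; a real screen would use a large name corpus.
typical_sex = {"Marco": "M", "Giulia": "F", "Luca": "M", "Anna": "F"}
mismatches = df[df["name"].map(typical_sex) != df["sex"]]
print(f"Name/sex mismatches: {len(mismatches)}")

# Screen 2: in genuine cohorts, pre- and post-operative measurements are
# usually correlated; a near-zero or negative correlation hints that the
# two columns were generated independently of each other.
corr = df["preop_acuity"].corr(df["postop_acuity"])
print(f"Pre/post-operative correlation: {corr:.2f}")
```

Real screening pipelines add many more distributional and consistency tests, but even modest scrutiny of this sort has so far been enough to trip up AI-generated datasets, as Wilkinson notes below.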

In light of this, Wilkinson, a senior lecturer in biostatistics at the University of Manchester, explained in an interview with Medical Device Network that he was less concerned about AI’s potential to increase fraud.

“I started asking people to generate datasets using GPT and having a look at them to see if they could pass my checks,” he said. “So far, every one I’ve looked at has been pretty poor. To be honest [they] would fall down under even modest scrutiny.” 

He acknowledged fears like those raised by Dr. Taloni about future improvements in AI-generated datasets but ultimately noted that most data fraud is currently done by “low-skill fabricators,” and that “if those people don’t have that knowledge, they don’t know how to prompt ChatGPT to have it either.”

The problem for Wilkinson is how widespread falsification already is, even without generative AI. 


EU, US, and China: Divergent approaches and priorities 

EU: Security and fundamental rights. With the AI Act, the 27 member states of the EU have developed the world’s first comprehensive AI regulation. The EU’s focus is on ensuring the security and fundamental rights of citizens while also facilitating investment and innovation in AI. 

US: Innovation and competitiveness. While the US does not have a specific federal framework to regulate AI, in October 2023 President Joe Biden signed an executive order to oversee its use. This directive aims to safeguard US citizens from the potential risks of AI and establish standards for safety, security, privacy protection and oversight in AI development and usage. The US focus is on innovation, competitiveness and economic growth, seeking to foster the rapid development of new technologies, including in the medical field. The US regulatory system is often regarded as particularly agile, enabling companies to bring products to market rapidly. Instead of adopting a centralised approach, the US evaluates risks on a case-by-case basis, considering the specific characteristics of medical devices. 

China: Embracing socialist values and safeguarding national security. In August 2023, China introduced new regulations governing AI-generated content. These regulations consist of 24 rules intended to maintain strict control while encouraging the development of generative AI. They require AI systems to adhere to the fundamental values of socialism and not to threaten national security. Suppliers must identify AI-generated content as such and prevent any discrimination based on gender or ethnic group when designing algorithms. China, which is investing heavily in AI and associated technologies, seeks to facilitate market access for both domestic and foreign companies, potentially resulting in expedited approval procedures.