Feature

Hospital 2040: Will AI regulation safeguard the public, or stifle innovation?

As AI tools are rolled out across healthcare sectors worldwide, governments are seeking to regulate the technology to ensure public safety, but some argue that regulation could slow innovation. By Joshua Silverwood.

Credit: Yuri A via Shutterstock.

The rising tide of artificial intelligence (AI) has made healthcare stakeholders nervous about the future, as governments worldwide ramp up plans for healthcare regulation.

Proponents of AI have touted the technology's ability to clear administrative backlogs, as well as its key role in the discovery and development of new drugs. However, governments, including that of the US, have rolled out controls in the hope of reining in the technology amid fears that its growth could destabilise parts of the industry.

Regardless, AI is here to stay, so it follows that regulation will be an inevitable consequence of its growth.

In October 2023, the World Health Organization (WHO) released six key regulatory considerations focused on ensuring that the technology is used safely within healthcare.

Among the six considerations, the WHO calls on governments and organisations to “foster trust” among the public, stressing the importance of transparency and documentation, including recording the entire product lifecycle and tracking development processes.

Another consideration reads: “Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners, can help ensure products and services stay compliant with regulation throughout their lifecycles.” 

It comes after US President Joe Biden signed an executive order setting out guidelines intended to regulate AI within the United States, with a particular focus on its implementation in healthcare.

Issued on 30 October, the executive order will require AI developers to share their safety test results and other critical information with the US government.

The executive order reads: “Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. 

“The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI.”

The order also pledges to catalyse AI research across the US “through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.”

Meanwhile, UK Prime Minister Rishi Sunak announced, in a speech delivered earlier in October, that the UK would establish “the world’s first AI safety institute”, ahead of the inaugural global AI safety summit later this year.

Thematic research by GlobalData found that the global AI market was worth $81.8bn in 2022, with that figure projected to grow at a compound annual growth rate of 31.6% to reach $323.3bn by 2027. A key portion of this growth is set to occur in the medical device market, where AI is expected to reach $1.2bn by 2027, up from $336m in 2022.

GlobalData forecasts suggest that the market for AI across the healthcare industry as a whole will reach $18.8bn by 2027, up from $4.8bn in 2022.

However, not everyone thinks that increased regulation is the most important consideration for AI at present. In June 2023, the World Economic Forum (WEF) warned that increased, poorly thought-out regulation could stifle innovation in the space and could even lead to worse product safety.

Writing for the WEF, David Alexandru Timis said: “Recent calls in the AI space have sought to expand the scope of the regulation, classifying things like general purpose AI (GPAI) as inherently ‘high risk’. This could cause huge headaches for the innovators trying to ensure that AI technology evolves in a safe way.

“Classifying GPAI as high risk or providing an additional layer of regulation for foundational models without assessing their actual risk, is akin to giving a speeding ticket to a person sitting in a parked car, regardless of whether it is safely parked and the handbrake is on. Just because the car can in theory be deployed in a risky way.” 

AI tools have already been implemented in a significant number of healthcare services worldwide, making the debate over how these systems should be regulated all the more pressing as the technology becomes commonplace in the sector.

In June this year, the UK government announced a £21m rollout of AI tools across the National Health Service (NHS), aimed at diagnosing patients faster in indications such as cancer, stroke and heart conditions.

The announcement also set out a plan to bring AI stroke-diagnosis technology to 100% of stroke networks by the end of 2023, up from 86% at present. The UK government says the use of AI in the NHS has already improved outcomes for patients, in some cases halving the time it takes stroke patients to receive treatment.

Stephen Powis, NHS national medical director, said: “The NHS is already harnessing the benefits of AI across the country in helping to catch and treat major diseases earlier, as well as better managing waiting lists so patients can be seen quicker.”