Opinion

“At least I have my health”: the reputational impact of AI in healthcare

As AI becomes increasingly commonplace in the medical device sector, the process for how safety and efficacy are demonstrated will be a significant consideration for clinicians seeking to implement the devices, regulatory agencies and device manufacturers. Marcus Smith, managing director EMEA at Polecat, explains.

A world where machines are able to curate human healthcare is not far away. Medical technology giant GE Healthcare made headlines recently for partnering with Franciscan Health to implement a control centre to synchronise all elements of a patient's hospital experience. AI’s potential in healthcare, however, extends beyond managing workflows – it has the potential to fundamentally transform how healthcare is delivered and the way we think about medicine.


When analysing the global AI conversation online and across social media over the past three months, we were surprised to find that health, not jobs, is the most talked-about issue. There is growing noise around AI’s role in healthcare – some people are positive, but others are yet to be convinced about putting their health in the hands of a machine.


Perception is important. Everyone is a stakeholder in healthcare because it affects all of us. With such a major change just around the corner, it is important that healthcare providers understand the concerns of their patients and of those who work in the industry.


Our data show that the main concern for patients and medical professionals is culpability when mistakes or problems occur. Health is our most valuable asset, so entrusting it to a machine is a difficult decision for both patients and those responsible for looking after us.


Healthcare AI therefore faces a significant challenge: how can it win the same level of trust as doctors, traditionally one of the most reputable and respected professions, seen by many as highly educated, motivated and altruistic? When our experience of healthcare is focused on human care, being treated by machines will seem alien – how AI is implemented and how it interacts with patients is an issue that needs to be addressed.


Answering the concerns of stakeholders

When reciting the Hippocratic Oath, doctors acknowledge “warmth, sympathy, and understanding may outweigh the surgeon's knife or the chemist's drug” and swear to protect patients’ privacy – pledges an unconscious machine cannot fulfil.


While AI cannot swear to act in our best interests and follow best practice, it can be subject to regulation and scrutiny. In the same way that doctors are regulated and responsible for their actions, so should AI be. This is not an easy issue; whether culpability lies in the application of the technology or in its creation is not clear-cut. Regulation and legislation will be vitally important in reassuring patients that AI is part of a wider network which is human and ultimately responsible and accountable for what happens to them.


Interestingly, another concern flagged by our data is patients’ privacy. While doctors swear to keep patients’ medical details private, they – unlike AI systems – are not exposed to cyberattacks and are not controlled by a company. Data use is in the public eye as never before. While it is under greater regulatory scrutiny under GDPR and other data protection laws, there is still a tension in giving machines access to our most sensitive information – an issue firms will have to address.

In the near future AI will be able to cut costs and improve efficiencies across a complex healthcare ecosystem.

While AI diagnosis is some way off, it is already evident that even in the near future AI will be able to cut costs and improve efficiencies across a complex healthcare ecosystem – streamlining workflow and ensuring greater accuracy. Healthcare is expensive and labour-intensive, and with growing, ageing populations across the developed world, automation is vital to drive down costs and improve efficiencies to reduce the burden on already stretched service providers.


The introduction of AI into stretched health services, allowing medical staff to focus on caring for individuals, is inevitable. As the technology becomes more sophisticated, medical professionals will increasingly rely on it. However, for the foreseeable future, humans will have the ultimate say in complex medical matters. It is therefore important for firms to communicate the evolving role of the human healthcare professional as much as the growing capabilities of the machine. We need to anticipate the growing concerns of patients and the public interest to manage expectations in a more machine-led healthcare environment.

Highlighting the rewards of AI in healthcare

AI should be seen as an extension of human medical professionals, improving their ability to treat patients while taking away administrative burdens so they can focus on more valuable tasks. As with other emerging technologies, firms must bring the conversation back into the real world – communicating likely scenarios, offering transparency around safety and clarity over responsibility.


Trust will not be won overnight. Firms like GE Healthcare need to argue the case for greater adoption of AI to all stakeholders – from patients to health services and governments. They must show a proven track record demonstrating that the technology is equal to, if not better than, human care – helping to deliver more accurate diagnoses and more precise surgery. Communicating progress around the testing of AI in healthcare and its regulation is vital. Only when patients are comfortable with and understand the technology will gradual introduction and normalisation be possible.

Communicating progress around the testing of AI in healthcare and its regulation is vital.

AI could be a ‘big bang’ moment for healthcare, but in a world where social media can spread horror stories like wildfire and fake news can damage reputations, healthcare firms must work hard to build their case. Constant communication, engagement and transparency around current issues such as culpability and privacy are essential, alongside monitoring for emerging debates as the technology and its applications advance. Only by proving safety and effectiveness in the long term will firms encourage acceptance and adoption.