THOUGHT LEADERSHIP

AI could save the NHS, but risks sharing private patient data

Oct 15, 2025

This is a repost of an article by FLock.io CEO Jiahao Sun originally published in Pharmaphorum.

In June, the NHS paused a “ground-breaking” AI project that was training a model on data from 57 million patients in the UK. The model, called Foresight, was set to help doctors predict disease and forecast hospitalisation rates, but was halted over data privacy concerns.

It’s becoming increasingly evident that simply anonymising and de-identifying sensitive data is no panacea for big tech’s data privacy problem. In the event of a breach, attackers can cross-reference patterns in the leaked records with other sources to re-identify individuals, a risk that only grows with datasets this large and rich.
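To make that re-identification risk concrete, here is a minimal, hypothetical sketch of a linkage attack in Python. Every record and name below is invented, and the join keys (postcode, birth year, sex) stand in for the quasi-identifiers that often survive de-identification.

```python
# Hypothetical linkage attack: "anonymised" health records still carry
# quasi-identifiers (postcode, birth year, sex) that can be joined
# against a public register to recover identities. All data is made up.

anonymised_records = [
    {"postcode": "SW1A", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "M1",   "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

public_register = [
    {"name": "Alice Example", "postcode": "SW1A", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Example",   "postcode": "M1",   "birth_year": 1975, "sex": "M"},
]

def reidentify(records, register):
    """Join the two datasets on quasi-identifiers alone."""
    keys = ("postcode", "birth_year", "sex")
    matches = []
    for rec in records:
        hits = [p for p in register if all(p[k] == rec[k] for k in keys)]
        if len(hits) == 1:  # a unique match defeats the anonymisation
            matches.append((hits[0]["name"], rec["diagnosis"]))
    return matches

print(reidentify(anonymised_records, public_register))
# Each "anonymous" diagnosis is now tied to a named individual.
```

No names or NHS numbers were ever in the health records; the combination of ordinary attributes was enough.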

Concerns about patients’ data privacy continue to mount. In previous years, AI was used primarily for administrative streamlining; now, its top application is processing and analysing sensitive medical data, which is deeply troubling given how insecure today’s centralised AI systems are.

And yet, it’s easy to see the allure of centralised AI. Healthcare systems worldwide have buckled. The pandemic exposed deep vulnerabilities in medicine, and hospitals are sprinting to catch up.

AI offers an enticing fix: early disease detection and drug discovery to save more lives. The public sector aims to spend less time on administration and more time delivering the services people rely on.

The question is: should patient data be shared with big tech to create an AI-driven NHS?

The power of AI in action

The results of AI are already tangible. In the NHS, hospitals using an AI agent for administration have treated an additional 114 operating theatre in-patients every month. Last year, AI helped save a stroke patient’s life by pinpointing the exact location of a blood clot in her brain during a medical emergency, prompting calls for the technology to be rolled out nationwide.

Prime Minister Keir Starmer praised it as “the power of AI in action” in a speech about the government’s plan to train models on patient data. He hopes it will “predict and prevent” strokes in the future.

In January, the government set out a blueprint to turbocharge AI and unleash it across the UK. The NHS is in the midst of its 10-Year Plan to move from paper to digital. It is building a world-first AI system on top of its IT software, the NHS Federated Data Platform. AI is currently being piloted to help free up hospital beds occupied by patients who are fit to be discharged.

The new AI stethoscope, developed by Imperial College London, is another brilliant piece of technology: it can detect heart conditions in seconds, bringing a classic medical tool invented in France in 1816 into the 21st century.

But the dangers of power concentration in medicine have been well documented within big pharma. During WW2, pharmaceutical firms promoted and sold methamphetamine to soldiers. When patients’ data is concentrated in a few hands, similar profit-driven incentives could shape how it is used.

The vulnerabilities of health data

Centralised data servers are a double-edged sword in AI, particularly in medicine. The core issue is not what the technology does, but how it works: medical data is sent to a massive central cloud server to be analysed by AI trained on data from tens of thousands of patients. These servers are typically controlled by the large tech conglomerates that own and train the models, which creates significant security risks.

Sending sensitive patient data from private hospital servers to the cloud for analysis should sound alarm bells. The patient and the healthcare system lose control of the data as soon as it leaves the hospital system, and this single point of failure significantly increases the possibility of a disastrous data breach.

The vulnerabilities of health data were starkly illustrated by recent ransomware attacks that sent the NHS reeling. A cyberattack against a pathology laboratory, Synnovis, caused colossal disruption to NHS hospitals in London last June. The Russian cyber-criminal group Qilin shared almost 400GB of private information on its darknet site. The attack led to the postponement of over 1,000 operations and 3,000 appointments and disrupted blood services, with a patient death later linked to the incident.

Ultimately, the goal should be to achieve the life-saving benefits of AI, but without the data privacy trade-off. The NHS now has comprehensive electronic patient records, making it well-positioned to perform impactful health research and prevent public health risks.

Hospitals should be empowered to leverage algorithms that can identify cancers in medical imaging with accuracy exceeding that of human radiologists. They should harness AI, trained on diverse datasets, to reduce healthcare inequalities – gender bias can be catastrophic when identifying heart attacks in women.

Looking to decentralised AI

All this and beyond can be achieved through Federated Learning (FL), a form of decentralised AI (DeAI). It allows collaborative model training without exposing raw data. Instead of data moving to the model, the model (or a version of it) moves to the data.

FL is a privacy-preserving, GDPR-compliant alternative to the big tech “black box” of traditional centralised AI dominating the market. The core benefit is that sensitive patient data never leaves its secure, local environment (a hospital’s server). The NHS would maintain full control over its data, and wouldn’t need to relinquish its most valuable asset to a third-party corporation.

By training models on diverse datasets from many different hospitals, which serve diverse populations, the final model is less prone to biases. This leads to more equitable healthcare outcomes.
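As a rough illustration of the “model moves to the data” idea, here is a minimal federated averaging (FedAvg) sketch in Python. The one-parameter model, the toy records, and the hospital datasets are all hypothetical; real FL systems add secure aggregation, differential privacy, and far richer models.

```python
# Minimal federated averaging (FedAvg) sketch: each "hospital" trains
# locally on data that never leaves it, and only model weights are
# shared and averaged. The y = w * x model and the records are toys.

def local_update(weight, local_data, lr=0.1, epochs=5):
    """Gradient steps on squared error for y = w * x, on one site's private data."""
    for _ in range(epochs):
        grad = sum(2 * x * (weight * x - y) for x, y in local_data) / len(local_data)
        weight -= lr * grad
    return weight

def federated_round(global_weight, sites):
    """One round: broadcast the model, train locally, average the results."""
    updates = [local_update(global_weight, data) for data in sites]
    return sum(updates) / len(updates)  # only weights cross the network

# Two hypothetical hospitals, each holding its own (x, y) records locally.
hospital_a = [(1.0, 2.0), (2.0, 4.1)]
hospital_b = [(1.5, 3.1), (3.0, 5.9)]

w = 0.0
for _ in range(20):
    w = federated_round(w, [hospital_a, hospital_b])

print(round(w, 2))  # ends near 2.0, learned without pooling any raw records
```

The aggregator sees only numbers describing the model, never a patient record, which is what lets each hospital keep full custody of its data.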

With the Foresight project on hold, governments such as the UK’s should listen to calls for a decentralised future, one that pursues innovation without putting anyone’s personal data at risk.

About the author

Jiahao Sun is the founder and CEO of FLock.io, a decentralised AI platform. An Oxford alumnus with expertise in both AI and blockchain, he previously served as director of AI for the Royal Bank of Canada and an AI Research Fellow at Imperial College London. Sun founded FLock.io to pioneer privacy-centric AI solutions, aiming to democratise AI development through secure, collaborative model training and deployment.
