
A core tension in healthcare is this: AI has the potential to transform diagnosis, care pathways and operational efficiency, but the stakes are high. Patient safety, privacy, equity, legal liability and public trust all hang in the balance. Recent research shows that while 17% of healthcare professionals now use generative AI tools such as ChatGPT every day, and 39% use them at least weekly, more than half (53%) have received no formal guidance on how to do so safely.¹ This highlights both the momentum behind AI adoption and the urgent need for clear governance, ethical standards and infrastructure capable of supporting responsible deployment.
In the NHS, the balance is especially delicate. Patient data is extremely sensitive; decisions informed by AI can directly affect people’s lives; and legal and regulatory scrutiny is intense. The NHS England AI team states that its mission is to embed “responsible, ethical and sustainable AI into NHS services” while preserving transparency, safety and public trust. The NHS Transformation Directorate also publishes guidance on the information governance implications of AI in health settings.
Nevertheless, many health organisations are just beginning to grapple with how to manage AI risk, shadow use, and oversight. This blog offers a three-step framework (discover, build, govern) tailored to the NHS and health providers.
Uncover Unauthorised AI Use to Protect Patient Data and Trust
What is “Shadow AI” in a health context?
Shadow AI refers to the ad hoc or unsanctioned use of AI tools without formal oversight: for example, a clinician using ChatGPT to summarise consultation notes, or a department experimenting with generative tools to draft patient education content.
In the NHS, this is especially dangerous: clinicians or administrators might inadvertently input patient-identifiable information (names, conditions, histories) into external AI systems. That can violate data protection law or consent agreements, and risk reidentification or leakage of sensitive health data.
Many health systems are actively combating shadow AI. For example, some US hospitals are restricting access to internet-based AI tools until governance is in place, tracking which tools are deployed, and creating central registries of the AI solutions in use.
What to do in your NHS organisation / trust
Audit current AI use: Survey clinicians, admin staff and research teams to find where AI is already being used (in pilots, side-projects, departmental experiments).
Classify risk levels: Some uses (e.g. patient triage, decision support) are high risk; others (e.g. drafting non-clinical documents) are lower risk.
Raise awareness / training: Ensure all staff understand the rules around using AI, particularly around inputting patient data, privacy, and quality of output.
Create or update policy: Mandate approval processes for new AI tools, and require logging and oversight for AI adoption.
Tackling shadow AI is not about banning AI but about bringing it under well-managed control, which is especially critical in healthcare.
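To make the audit and classification steps concrete, here is a minimal sketch of an internal register of AI use with a simple risk-triage rule. The departments, tools and risk tiers are illustrative assumptions, not prescriptions; your own policy would define the actual criteria.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g. triage, diagnosis, clinical decision support
    MEDIUM = "medium"  # e.g. summarising clinical material under approval
    LOW = "low"        # e.g. drafting non-clinical documents

@dataclass
class AIUseRecord:
    department: str
    tool: str
    purpose: str
    handles_patient_data: bool
    approved: bool

def classify(record: AIUseRecord) -> RiskTier:
    """Simple triage rule: unapproved use of patient data is high risk;
    approved use of patient data is medium; everything else is low."""
    if record.handles_patient_data:
        return RiskTier.HIGH if not record.approved else RiskTier.MEDIUM
    return RiskTier.LOW

# Hypothetical survey responses gathered during the audit
register = [
    AIUseRecord("Radiology", "ChatGPT", "Summarise consultation notes", True, False),
    AIUseRecord("Communications", "Generative drafting tool", "Patient education leaflets", False, False),
]

for rec in register:
    print(f"{rec.department}: {rec.tool} -> {classify(rec).value} risk")
```

Even a register this simple gives IG and IT teams one place to see what is in use, who owns it, and which cases need formal approval first.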
Embed Safety, Privacy, and Integrity from Design Through Deployment
Once an NHS trust or integrated care board (ICB) decides to adopt or pilot an AI tool, security and data protection must be baked in, not bolted on later. In health, this includes:
Data anonymisation / pseudonymisation / encryption: Any patient data used in model training or inference should be de-identified where possible (see the redaction sketch after this list). The NHS “Buyer’s Guide to AI in Health and Care” emphasises the need for a mapped data flow, data governance agreements, and DPIAs (Data Protection Impact Assessments) when data can identify individuals.
Information Governance (IG) integration: IG teams must be engaged from the start, not after the fact. In practice, governance processes (e.g. legal basis, consents, data sharing) should be negotiated early.
Threat modelling & adversarial risk: Health AI systems must plan for tampering, adversarial attacks (e.g. manipulated inputs), model inversion, and reidentification attacks.
Access controls and monitoring: Restrict who can query or fine-tune the model; maintain logs and audit trails; require human review of outputs.
Validation, monitoring, and degradation control: Continuously monitor AI performance in live settings; detect drift or bias; have fallback mechanisms (a simple drift check is sketched below). In healthcare, models may degrade when patient cohorts shift or, for disease models, when new variants emerge.
Clinical and safety oversight: Ensure outputs are reviewed by clinicians; AI should augment, not replace, professional judgment. In the NHS AI adoption framework, clinical staff must maintain oversight and accountability for decisions informed by models.
Regulatory alignment: If the AI tool qualifies as a medical device or clinical decision support, engage with the MHRA or equivalent regulators. In fact, the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) is leading in safe AI regulation in healthcare and has launched sandboxes for testing new medical AI tools.
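To make the de-identification point concrete, the sketch below strips obvious patient identifiers (NHS numbers, dates, names from a known list) from free text before it reaches any external model. Real de-identification is considerably harder than this; the regular expressions and placeholder tokens are illustrative assumptions, not a complete or approved solution.

```python
import re

# Illustrative patterns only; production de-identification needs far more robust tooling
NHS_NUMBER = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b")   # 10-digit NHS number
DATE = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")

def pseudonymise(text: str, known_names: list[str]) -> str:
    """Replace obvious identifiers with placeholder tokens before inference."""
    text = NHS_NUMBER.sub("[NHS_NUMBER]", text)
    text = DATE.sub("[DATE]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[PATIENT]", text, flags=re.IGNORECASE)
    return text

note = "Jane Smith (NHS 943 476 5919, DOB 12/03/1988) reports worsening breathlessness."
print(pseudonymise(note, ["Jane Smith"]))
# -> [PATIENT] (NHS [NHS_NUMBER], DOB [DATE]) reports worsening breathlessness.
```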
An NHS trust implementing AI in radiology, for example, must consider all these steps before deploying a diagnostic assistant, so that patient safety, legal compliance, and clinical acceptance are assured.
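One of those steps, live performance monitoring, can start very simply: compare the distribution of recent model outputs against a reference window and alert when the shift exceeds an agreed tolerance. The sketch below uses a population stability index style check; the threshold, bin count and synthetic scores are illustrative assumptions rather than recommended values.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Compare recent model scores against a reference window.
    Values above roughly 0.2 are commonly treated as a sign of meaningful drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero in sparse bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - ref_pct) * np.log(new_pct / ref_pct)))

# Hypothetical risk scores: validation cohort vs. this month's live traffic
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=5000)  # scores seen at validation time
live = rng.beta(2, 3, size=5000)      # the live cohort has shifted
psi = population_stability_index(baseline, live)
if psi > 0.2:
    print(f"PSI = {psi:.2f}: investigate drift before continuing to rely on outputs")
```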
Set Up Policies, Oversight, Accountability, and Transparency
Even with controlled deployment, governance is the backbone of trustworthy AI. In health, governance needs to be especially explicit and aligned with regulatory, ethical, and professional standards.
Explainability & transparency
AI decisions (e.g. why a scan was flagged, why a risk score was elevated) must be explainable to clinicians and patients. Black-box models without explanation are risky in healthcare settings.
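For tabular risk models, even a simple global importance check helps reviewers see which inputs are driving a score. The sketch below trains a toy logistic regression on synthetic data and reports permutation importance via scikit-learn; the feature names and data are invented for illustration and carry no clinical meaning.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["age", "systolic_bp", "hba1c", "prior_admissions"]  # illustrative only
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=1)

# Report which inputs most influence the score, so clinicians can sanity-check the model
for name, score in sorted(zip(features, result.importances_mean), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Attribution reports like this do not make a black-box model fully explainable, but they give clinicians and patients a starting point for asking why a particular flag or score was produced.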
Accountability & roles
Every AI tool should have a defined owner, e.g. a responsible clinician, CIO, or committee. If something goes wrong (a misdiagnosis, patient harm), clear roles must exist to manage redress, investigation, and learning.
Bias, fairness, and equity
Models must be audited for bias by demographic, and monitored for fairness over time. Patients should have recourse to contest decisions influenced by AI.
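A first-pass fairness audit can be as simple as comparing flag rates across demographic groups and escalating large gaps for investigation. The groups, predictions and tolerance below are hypothetical; real thresholds need clinical, statistical and ethical input.

```python
from collections import defaultdict

def flag_rate_by_group(predictions, groups):
    """Share of patients flagged as high risk, broken down by demographic group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        flagged[group] += int(pred)
    return {g: flagged[g] / totals[g] for g in totals}

# Hypothetical model outputs and recorded demographic groups
preds = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0]
group = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

rates = flag_rate_by_group(preds, group)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:  # illustrative tolerance only
    print(f"Flag-rate gap of {gap:.0%} across groups: review for bias")
```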
Ethical oversight
Many NHS trusts and health bodies are embedding an AI ethics function or engaging with the NHS AI Ethics Initiative run by the NHS Transformation Directorate. Ethical review boards or committees should review high-risk AI proposals.
Governance frameworks & policy adaptations
Some NHS organisations are building formal AI governance frameworks. For a breast screening AI pilot, one trust explicitly created an AI implementation and governance framework before deployment. In addition, many NHS entities maintain policies that require requests for new AI solutions to be assessed by IG / IT and to complete Data Protection Impact Assessments. For example, the Humber & North Yorkshire ICB has an “AI Governance Policy” mandating these controls.
Conformance with law and future regulation
The EU AI Act (and any UK equivalents) treats many healthcare AI systems as “high-risk,” demanding high standards of governance, data management, human oversight, and conformity assessment.
UK parliamentary committees are advocating for AI oversight to support safe NHS adoption.
The broader AI regulatory regime in the UK is evolving; sectoral regulators (e.g. MHRA) are being empowered to oversee health AI tools.
For the NHS (and health systems globally), AI must be adopted with caution and rigour. Here’s a summary of how the three steps come together in health:
Discover shadow AI across clinical, research, admin and support teams, to ensure nothing slips through unmanaged.
Build security, privacy, safety, and oversight from day one, not as afterthoughts.
Govern actively, with roles, policies, audits, explainability, ethics, and compliance.
If NHS trusts or health systems can align on this approach, they can responsibly unlock AI’s power, supporting faster diagnosis, more personalised care, operational savings, and better outcomes, without compromising patient safety, privacy, equity or trust.
Book an AI risk workshop with Celerity and IBM to take the first step toward secure, governed, and trustworthy AI.