In April our team attended the IGPP AI in Healthcare Conference in Salford, and our own Simon Kofkin-Hansen took to the stage to share Celerity's perspective on what it takes to move AI from pilot to production in the NHS.
The room brought together clinicians, regulators, NHS digital leads, AI vendors and policy makers, and the conversation felt notably different from that at many AI events we've attended. There was far less speculation about what AI might do in future and far more focus on what's getting in the way right now. That shift mattered.
Here's what stood out to us.
The infrastructure challenge underpinned everything
Pritesh Mistry from The King's Fund was direct: safe AI adoption requires solid foundations — data quality, governance, leadership and organisational culture — and many NHS organisations aren't there yet.
The King's Fund has highlighted in its research that parts of the NHS still have staff working on machines that struggle to run even basic applications simultaneously. AI cannot deliver on its promise when it sits on top of fragile infrastructure.
Rhod Joyce from NHS England reinforced this from the primary care perspective. With roughly 6,000 GP practices across the UK, each running different technology and working in different ways, national rollout of AI tools is as much a logistics and governance challenge as a technical one. Standards like DCB0129, the clinical risk management framework for health IT manufacturers, are under active review to keep pace, but there is still significant ground to cover.
This is the work we engage with every day at Celerity. It may not be the most exciting part of the AI conversation, but it's where progress is won or lost.
Celerity on stage: bridging the gap from pilot to production
Simon's presentation, 'From Pilot to Production', addressed what we see as the central challenge facing NHS AI adoption. He opened with a finding from the Nuffield Trust and Royal College of General Practitioners: 28% of UK GPs are already using AI tools. Adoption is growing, but the reality is that most NHS AI projects still stall before reaching scale, and the reasons are consistent: governance bottlenecks, data that isn't ready, and digital maturity that varies significantly from one trust to the next.
Simon's argument was that the organisations making meaningful progress aren't necessarily the ones with the most advanced AI tools. They're the ones that have invested in the groundwork: leadership commitment, honest infrastructure assessments, and the foundational digital capabilities that make everything else possible.
It resonated. The gap between a successful pilot and a sustainable, scaled deployment is where many organisations need the most support, and it's a gap that won't close without addressing the basics first.
Trust was the recurring theme
Nicola Byrne, the National Data Guardian, opened the conference and set the tone for the day. Her message was clear: AI in healthcare only works if patients, clinicians and service users trust it. That has real implications for how tools are designed, deployed and communicated.
Dr Ryan Samuels from Eolas Medical brought published evidence into the discussion. The Waldock, Darzi et al. (2026) study found that their AI-powered clinical decision support tool produced zero prescribing errors in simulation testing, compared with significant error rates among clinicians using traditional PDF guidelines. With the Eolas platform already in use across over 85% of acute NHS hospitals in England, this is evidence grounded in practice, not theory. But his broader point was equally important: even strong evidence isn't enough on its own. Trust has to be built through transparency, proper sourcing and clear accountability.
Dr Mike Nix from Leeds Teaching Hospitals reinforced this from the clinical deployment side. Human factors and data quality matter as much as the model itself. If clinicians don't understand what an AI tool is doing and why, adoption won't follow, and it shouldn't.
The regulatory landscape is maturing
One of the more encouraging signals from the day came from Natasha Motsi at the MHRA, who presented the AI Airlock programme, the UK's first regulatory sandbox for AI as a Medical Device. Now entering its third phase with confirmed multi-year funding of £1.2 million per year through to 2029, the programme is creating a structured environment for regulators and innovators to test approaches together.
The second phase examined technologies ranging from AI-powered clinical note-taking to advanced cancer diagnostics and eye disease detection tools. Critically, the findings are feeding directly into the National AI Commission's work on future regulation, so this isn't operating in isolation. It's shaping the framework that will govern how AI medical devices reach patients.
Keeley Crockett from Manchester Metropolitan University added further context on how parliamentary inquiries and local policy pilots are contributing to the broader governance picture. The regulatory environment is catching up. There's still work to do, but the trajectory is encouraging for organisations planning their AI strategies now.
The workforce discussion went beyond upskilling
The panel on building an AI-enabled workforce, featuring Linda Vernon from Lancashire & South Cumbria ICB, Dan Howl from BCS and Iblal Rakha from NHS England, raised questions that extended well beyond training programmes. Equity of access came through strongly. Not every clinician has the same level of digital confidence, and not every part of the country has the same resources or support structures.
One panellist noted that Britain is the third-largest AI adopter globally, behind China and the US, yet the average reading age in the UK is twelve. AI literacy isn't just a clinical challenge. It's a much broader one, and it will shape how effectively the workforce can engage with these tools in practice.
Earlier in the day, Aoife Clarke from Beam introduced the concept of "narrative latency", the delay between a clinical event and its accurate documentation. Her argument was that complex care settings need AI approaches specifically designed for nuanced, multi-layered clinical narratives, not generic transcription tools. As AI documentation products multiply across the market, that distinction is going to matter more, not less.
Our perspective
If there was one overarching theme, it's that the conversation has moved on. No one at the conference was debating whether AI has a role in healthcare. The question now is whether the infrastructure, governance and workforce can keep pace with the technology.
For us at Celerity, that's where the focus needs to be. Not on chasing the latest AI tool, but on helping NHS organisations take an honest look at where they stand today and build the digital foundations that make safe, scalable AI adoption possible. That means data readiness. It means governance and cybersecurity. It means resilient IT environments that clinicians can rely on day to day. The technology is ready. The harder question is whether the foundations are, and that's where we believe the real opportunity lies.
To discuss how your organisation can build the digital foundations for AI adoption, get in touch with our team.