Ethical AI in the Spotlight: What the Midlands Police Case Means for the Public Sector
Hannah Boswell
18 February 2026
When an AI failure makes the news, the conversation quickly turns to the algorithm. What did the model get wrong? Who built it? Could the training data have been better? These are reasonable questions. But in most public sector AI failures, the technology isn’t where things went wrong.
The recent Midlands Police case is a governance failure. And the conditions that produced it (unclear accountability, inadequate oversight, AI operating in high-stakes environments without the right safeguards) are not unique to policing. If your organisation uses AI to inform decisions that affect people, this blog deserves more than a passing read.
There’s a persistent assumption in public sector technology that AI risk is primarily a technical problem. The usual prescriptions follow: get a better model, clean the data, run more tests before you go live.
These things matter, but they don’t address the question that causes most problems in practice: what happens when the AI does something you didn’t expect?
Who is accountable? Who finds out? What is the process for investigating it, escalating it, and fixing it? If you can’t answer those questions clearly right now, for the AI systems you currently operate, that’s your governance gap, and it’s worth closing before scale makes it harder.
We work with public sector organisations on AI deployment, and the readiness picture is consistent. Awareness of governance requirements is high; actual readiness is much lower. These are the five areas that separate organisations with genuine AI governance from those with good intentions:
1. Named ownership and accountability
AI systems in the public sector typically involve multiple teams, vendors, and procurement routes. Without a named owner who is responsible for what the model does, and who is answerable when it doesn’t perform as expected, accountability disappears into the gaps between teams. That’s not a theoretical risk; it’s how most AI incidents go unaddressed for too long.
2. Data governance
Responsible AI starts with responsible data. You need to know what data is feeding your AI systems, how it’s classified, who can access it, and whether its use is appropriate and auditable. Most organisations overestimate their visibility here. AI doesn’t create this problem, but it does make it much harder to contain.
3. Ongoing monitoring and review
AI systems change. Accuracy drifts. Behaviour shifts in ways that aren’t always obvious from the outputs. A model that performed well at deployment may not perform the same way 18 months later. Without a regular process for reviewing AI systems against their intended purpose, you won’t know there’s a problem until it’s already a serious one.
4. Third-party and embedded AI
The majority of public sector AI deployments involve third-party tools. Procuring a tool doesn’t transfer the accountability that comes with using it. You need to understand what AI is embedded in the products you use, how those models behave, and what contractual assurances you have around bias, accuracy, and security. Many organisations haven’t had this conversation with their suppliers yet.
5. Incident response
If an AI system produces a harmful or discriminatory output, what happens? Who identifies it? Who is notified, and in what order? What is the process for investigating, responding, and reporting? Organisations that haven’t defined this in advance are forced to improvise at precisely the moment when a clear, calm process matters most.
The gap between awareness and readiness is real, and it tends to grow as AI adoption accelerates. Organisations that have moved quickly to deploy AI tools can find themselves managing a growing estate of AI-enabled processes with limited visibility over how they’re performing or where the risks are building up.
That’s not a reason to slow down. It’s a reason to get the governance in place now, while it’s still manageable.
The organisations we see getting this right have one thing in common: they treat AI governance as an operational capability, not a compliance checkbox. It’s built in from the start, maintained continuously, and owned at a senior level. They’re not waiting for regulation to force the issue. They’re using governance as the thing that makes faster, more confident AI deployment possible.
That’s the shift worth making. Not AI governance as a brake on progress, but AI governance as the foundation that makes progress sustainable.
If you’re not sure where your organisation currently stands, the five areas above are a good starting point for an honest internal conversation. You don’t need everything to be perfect before you deploy AI. You need to know where the gaps are, have a plan to close them, and be clear on accountability at every stage.
That’s what good AI governance looks like in practice. And it’s what cases like the Midlands Police one make increasingly hard to put off.
Celerity works with public sector organisations to deploy AI that is governed, monitored, and managed from day one — not bolted on afterwards. If you’d like to talk through where your organisation stands, find out more about our Managed AI service or get in touch.