When artificial intelligence hits the headlines, trust in public services immediately comes under pressure.
At Celerity, we work closely with public sector organisations across the UK to help them adopt data and AI responsibly and at scale. Recent coverage of the Midlands Police AI case has once again highlighted how quickly public confidence can be affected when the use of AI outpaces governance, transparency and organisational readiness.
While the details of the case continue to be examined, we believe it should be viewed not as a one-off failure, but as a clear signal of a wider challenge facing the public sector: AI adoption is accelerating faster than many organisations’ ability to govern it effectively.
At the same time, demand for AI and automation continues to grow. Public bodies are under intense pressure to modernise services, improve productivity and deliver better outcomes with constrained budgets and limited resources. Against this backdrop, ethical AI is no longer simply a technical consideration – it is a leadership issue and a fundamental requirement for maintaining public trust.
Over the past year, in partnership with GovNews, we have been exploring how public sector organisations can move beyond tactical AI experimentation and build the foundations needed to unlock the full strategic value of AI – without undermining confidence in the services they provide.
Why Ethical AI Is Rising Up the Public Sector Agenda
From our work with central and local government, we are seeing ethical AI move rapidly up leadership agendas.
This shift is not driven by one police force or one technology. It reflects a broader national push to use artificial intelligence to improve public services and productivity, reinforced by initiatives such as the UK’s AI Opportunities Action Plan and the growing focus on digital and data transformation across government.
At the same time, ongoing financial and workforce pressures are accelerating the move towards automation and AI-enabled decision support. However, ambition without strong foundations creates risk.
Many organisations are discovering that their existing governance frameworks, accountability structures and data management practices are not keeping pace with the speed of AI adoption. The growing gap between what AI can do and what organisations are ready to manage exposes public bodies to reputational damage, regulatory scrutiny and legal challenge – and, critically, the loss of public trust.
This aligns closely with what we found through our joint research with GovNews: the challenge for the public sector is not a lack of AI ambition, but a lack of organisational readiness.
The Opportunity of AI in Public Services
We see first-hand the value AI can deliver for public sector organisations when it is implemented responsibly.
AI can help organisations automate high-volume administrative and operational processes, personalise citizen services at scale, improve forecasting and service design, and unlock new value from existing public sector data.
These opportunities are driving rapid experimentation across local government, policing and other public services.
However, the same technologies introduce significant risk when deployed without strong foundations. In our experience, issues such as bias and unintended harm often stem from weak data quality and immature data governance. A lack of transparency in automated or AI-assisted decisions is frequently caused by insufficient design standards and oversight.
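To make "testing for bias" slightly more concrete, the sketch below applies a simple four-fifths-style disparity check to a hypothetical decision log. It is a loose illustration only, not Celerity's methodology or any organisation's production tooling: the data, function names and 0.8 threshold are all assumptions for the example, and real assurance work would go far beyond a single ratio.

```python
# Illustrative sketch only -- a hypothetical decision log and a simple
# "four-fifths"-style disparity check on per-group selection rates.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, flagged) pairs; returns rate per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in decisions:
        totals[group] += 1
        if was_flagged:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_check(decisions, threshold=0.8):
    """Mark a group as failing if its selection rate falls below `threshold`
    times the highest group's rate (the classic four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {g: (rate, rate / highest >= threshold) for g, rate in rates.items()}

# Hypothetical log: (demographic group, whether the AI flagged the case)
log = [("A", True), ("A", False), ("A", True), ("B", False),
       ("B", False), ("B", True), ("B", False), ("A", True)]
print(disparity_check(log))  # group B's rate falls below the 0.8 threshold
```

Even a check this crude surfaces the governance point: someone has to own the log, the threshold and the response when a group fails the test.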
This mirrors guidance from the Information Commissioner’s Office on AI and data protection, which highlights the importance of accountability, transparency and data protection when using AI in public services.
What the Midlands Police Case Really Tells Us
From our perspective, the Midlands Police case is best understood as an organisational and governance challenge, rather than a purely technical one.
AI systems do not operate in isolation. They sit within complex organisational, cultural and operational environments. When public concerns arise, they are rarely focused on the technology itself. Instead, they centre on a lack of clear explanation of how decisions are being supported by AI, uncertainty over who is accountable, and fears that automated processes may be unfair or opaque.
While policing is a highly visible example, we see similar risks across other public sector functions, including housing, benefits, planning and social care – where AI-supported decisions can have a direct and lasting impact on people’s lives.
One of the strongest messages from our Ethical AI research is that failures are rarely driven by poor intent. They are far more often the result of weak data foundations, unclear ownership and under-developed governance models.
Ethical AI Is Not About Slowing Down Innovation
A common misconception we encounter is that ethical AI creates friction or slows progress. In practice, the opposite is true.
At Celerity, we view ethical AI as an enabler of sustainable innovation. Strong governance provides a steering mechanism that allows organisations to move forward with confidence. Clear transparency standards build trust with citizens and stakeholders. Most importantly, recognising that AI is as much about people, culture and organisational change as it is about technology ensures that responsibility is shared across the organisation.
This approach is increasingly reflected in the UK’s emerging regulatory and assurance landscape, including the government’s pro-innovation approach to regulating AI.
What We Learned Through Our Ethical AI Programme
Through our Ethical AI programme – including a joint whitepaper with GovNews, a live webinar and on-demand content – we gathered insight from senior leaders across the public sector.
We found that many leaders remain concerned that ethical AI may limit innovation, or assume that productivity-focused tools carry relatively low risk. In reality, even small-scale or internal AI use cases can introduce significant governance and reputational exposure if they are not properly controlled.
This reflects wider independent research into public sector AI adoption and readiness, which highlights similar challenges around skills, governance and organisational capability.
What leaders are most concerned about is accountability in practice: who owns AI-supported decisions, how governance and assurance operate day to day, and how AI use will be perceived by citizens, regulators and auditors.
One of the most consistent findings was that responsibility for ethical AI is still too often placed solely with digital or IT teams, despite the clear leadership and organisational implications.
What Ethical AI Looks Like in Practice for Public Sector Organisations
Based on our research and delivery experience, we believe several foundations are essential for ethical and scalable AI in the public sector. These foundations directly support fairness and accountability in automated decision-making.
They include:
- clear executive ownership and accountability for AI use
- defined governance and assurance structures
- transparency and explainability for AI-supported decisions
- strong data foundations, quality management and data provenance
- proportionate risk and impact assessments
- open communication with citizens and staff about how AI is being used and why
These foundations enable organisations to move beyond isolated pilots and towards trusted, organisation-wide transformation.
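As a loose sketch of what transparency, explainability and provenance can look like at the record level, the fragment below logs a single hypothetical AI-assisted decision alongside a named accountable owner, the model version used and the lineage of its inputs. Every field name and value here is an assumption for illustration, not a prescribed schema or a Celerity product.

```python
# Illustrative only: a hypothetical audit record for one AI-assisted decision.
# Field names and structure are assumptions, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    case_id: str               # the case the decision relates to
    decision: str              # what the system recommended
    accountable_owner: str     # a named role, not a team inbox (executive ownership)
    model_version: str         # exact model and version used (assurance, reproducibility)
    data_sources: list[str]    # provenance of the inputs the model saw
    explanation: str           # plain-language reason a citizen could be given
    human_reviewed: bool       # whether a person confirmed or overrode the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    case_id="HOUSING-2024-0173",
    decision="prioritise for repair inspection",
    accountable_owner="Head of Housing Services",
    model_version="repairs-triage-v1.4",
    data_sources=["repairs_history_db", "tenant_contact_log"],
    explanation="Repeated damp reports in the last 90 days raised the risk score.",
    human_reviewed=True,
)
print(json.dumps(asdict(record), indent=2))  # what an auditor or subject access request would see
```

The design point is that accountability is recorded at the moment of the decision, not reconstructed after a complaint or a headline.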
AI used purely as a productivity tool can deliver short-term gains, but it can also create hidden operational and reputational risk.
Strategic AI must be secure, transparent, explainable and trusted. Public sector organisations that focus only on efficiency risk missing AI’s wider strategic potential – and increasing their exposure at the same time.
At Celerity, we work with public sector leaders to design AI programmes that support long-term service improvement, workforce transformation and public confidence – not just short-term automation.
Ethical AI Is How Public Sector Organisations Earn Permission to Innovate
Public trust in public services is hard won and easily lost. For citizen-facing organisations, ethical AI is not optional – it is fundamental to legitimacy, accountability and confidence.
The leadership decisions being made today will shape how AI is experienced by citizens and staff for years to come.
For organisations looking to explore this further, our joint whitepaper with GovNews, Beyond productivity tools: Unlocking the strategic value of AI in local government, offers practical guidance on how public sector organisations can put the right ethical, data and governance foundations in place to enable responsible and trusted AI adoption.
Read the Whitepaper