
81% of leaders say secure and trustworthy AI is essential to business success, yet only 24% of generative AI projects are currently being secured*. The gap between AI innovation and AI security is especially risky for the public sector.
As local councils explore generative AI to improve citizen services, streamline operations, or reduce costs, the focus must shift to AI governance and security. Without robust policies and visibility, shadow AI can compromise sensitive data, breach compliance rules, and erode public trust.
This blog outlines three steps that councils should follow to secure AI within their organisation. Looking to secure your AI tools? Book an AI risk workshop with Celerity and IBM to take the first step toward secure, governed, and trustworthy AI.
Uncover Unauthorised AI Use to Protect Sensitive AI Data
Shadow AI refers to the use of AI tools within an organisation without IT approval. For example, employees may use tools like ChatGPT to optimise policy work, enhance communications and draft content for projects such as grant applications, all without the IT team's knowledge.
AI can help council teams work smarter, but it also carries risks. Staff might accidentally share sensitive or personal data outside council systems when using AI tools, which could lead to a data breach. AI also doesn't always get things right: its answers can be inaccurate or biased. If decisions are made on that basis without double-checking, the result could be unfair service and unhappy residents.
To counter shadow AI, organisations should run an internal discovery exercise to map where AI is already being used and the risks that use creates. Just as important, all employees should understand the safe use of AI and the organisation's AI policies.
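One practical way to start that discovery is to mine existing web proxy or DNS logs for traffic to known generative AI services. The sketch below is a minimal illustration, assuming a CSV log export with user and domain columns; the file name and domain watchlist are placeholders to adapt to your own environment.

```python
import csv
from collections import Counter

# Illustrative watchlist of generative AI domains; extend with your own list.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "copilot.microsoft.com"}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests to known AI services per user from a proxy log export."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'domain' columns
            if row["domain"].lower() in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder for your own log export.
    for user, count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} AI-service requests")
```

The output is a simple starting point for conversations with teams, not a disciplinary tool: the goal is to understand which AI services staff find useful and bring that use under governance.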
Embed AI Security into Every Generative AI Project
As AI projects become more common in councils, it's important to make security part of the plan from the start. Every AI model must have the right measures in place to prevent tampering, unauthorised access and IP theft. For example, if a team builds a chatbot fine-tuned on council housing data and doesn't restrict access, the model and its data could be exploited or cloned.
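A baseline control for that chatbot example is to put authentication in front of the model endpoint so that only approved callers can reach it. Below is a minimal sketch using FastAPI and a shared API key; the /ask route, the CHATBOT_API_KEY variable and the response shape are illustrative assumptions, not a prescribed design.

```python
import hmac
import os

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
API_KEY = os.environ["CHATBOT_API_KEY"]  # provision per client; rotate regularly

@app.post("/ask")
def ask(question: dict, x_api_key: str = Header(default="")) -> dict:
    # Reject unauthenticated callers before the model ever sees the prompt.
    if not hmac.compare_digest(x_api_key, API_KEY):
        raise HTTPException(status_code=401, detail="invalid API key")
    # answer = model.generate(question["text"])  # call the fine-tuned model here
    return {"answer": "..."}
```

In practice a council would run this behind an ASGI server such as uvicorn and layer on single sign-on, rate limiting and audit logging, but even a shared key stops the model being openly queried or cloned.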
Whenever you input data into AI models, whether it is citizen data, HR data or financial records, it must be encrypted and anonymised so it can't be exploited. Sharing confidential information with AI tools without the right safeguards is a breach of GDPR, which could result in hefty fines and reputational damage for a council. For example, if a social care department tests AI to summarise case notes without anonymisation, real names and conditions are exposed in prompts, and that is a GDPR breach.
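As an illustration of anonymisation before prompting, the sketch below strips a few obvious identifiers from free text before it is sent to any AI tool. The regex patterns are deliberately simple assumptions; a production pipeline would use dedicated PII-detection or data-loss-prevention tooling rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; real systems should use dedicated PII-detection
# tooling (NER models, DLP services) rather than regexes like these.
PATTERNS = {
    "[NI_NUMBER]": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b0\d{9,10}\b"),
}

def anonymise(text: str) -> str:
    """Replace obvious personal identifiers before the text reaches an AI tool."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical case note used purely to demonstrate the redaction.
note = "Contact Jane on 07700900123 or jane@example.org, NI AB123456C."
print(anonymise(note))
```

The same principle applies whatever tooling is used: personal identifiers are replaced with placeholders before the prompt leaves council systems, so the AI provider never holds raw citizen data.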
Organisations should identify and document the risks of generative AI as part of their overall cyber security and risk management process, and review them with every new project. Project stakeholders must make sure security controls and governance are followed. By partnering with trusted technology providers, local authorities can build secure AI workflows and focus on innovation with confidence.
Establish Trustworthy AI Through Policy and Oversight
With new rules around AI, councils need to get ready for changing expectations from government and regulators. The EU AI Act, for example, is raising the bar on human oversight and compliance, and could influence regulations beyond Europe.
For councils to benefit from AI, it has to be used in a way that’s open, fair and easy to review. Good AI governance means focusing on three things:
Explainability: Be able to show how AI made its decision, with clear reasons a human can check and explain.
Accountability: Assign an owner to each AI tool, set clear policies for acceptable use, and know what to do if something goes wrong.
Bias and fairness: Regularly check AI models for bias and make sure people have a way to challenge decisions that affect them (a minimal check of this kind is sketched below).
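To make that last point concrete, here is a minimal sketch of one common fairness check: comparing approval rates across demographic groups and flagging the model when the lowest rate falls below four-fifths of the highest (the "four-fifths rule" heuristic). The data shape and group labels are illustrative assumptions, not a council's real schema.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) decision records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of lowest to highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Illustrative audit records: (demographic group, AI-recommended approval).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = selection_rates(sample)
ratio = disparate_impact(rates)
print(rates, f"ratio={ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" heuristic
    print("Potential bias: flag for human review")
```

A check like this doesn't prove a model is fair, but run regularly against real decision logs it gives councils an early, auditable signal that a tool needs human review.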
Setting up the right rules and ownership will help councils use AI legally, ethically, and with public trust.
AI could revolutionise the way councils operate, but it must be adopted responsibly. Councils have a duty to protect data, uphold compliance, and build public trust through secure and transparent AI use. That starts with identifying shadow AI across teams, embedding security within AI projects, and prioritising AI governance so that every AI tool is explainable, accountable and fair.
Trustworthy AI isn’t just about technology. It’s about policy, accountability, and proactive risk management. By prioritising AI governance and AI security, councils can safely embrace AI and deliver meaningful innovation for the communities they serve.
Book an AI risk workshop with Celerity and IBM to take the first step toward secure, governed, and trustworthy AI.
*Source: IBM Institute for Business Value Study 2024