This is a pattern that repeats across local authorities, NHS trusts, central government departments and education bodies. A pilot is approved. An AI tool is selected. Work begins. Governance, compliance and risk assessments follow, weeks or months into the process. Approval bodies raise concerns. Procurement stalls. The pilot loses momentum. The organisation concludes that AI is not yet viable and the cycle begins again.
The conclusion is wrong. AI is viable. The deployment approach is not.
The five reasons AI projects fail in local government
1. Governance is treated as a final step, not a foundation
Most AI deployments begin with tool selection, not a governance assessment. By the time governance teams are consulted, commitments have already been made. Data processing approaches are established. The AI model is chosen. Retrofitting governance to a deployment that was not designed with it in mind is slow, expensive and often impossible without restarting the project.
2. Generic AI platforms are not built for the public sector regulatory environment
AI tools designed for commercial markets do not arrive with UK GDPR compliance, WCAG accessibility, ISO 42001 alignment or adherence to the UK Government AI Playbook built in. They require significant additional configuration to meet public sector requirements. That configuration burden falls on the organisation, and most organisations do not have the capacity or expertise to carry it.
3. Accountability for AI decisions is not defined before deployment
Public sector organisations are accountable for every decision they make. When AI is involved in a decision, accountability does not transfer to the AI or to the platform provider. It remains with the officer and the organisation. If it is not clear who approved what, on what basis and with what information, the organisation cannot defend the decision. This ambiguity is identified by governance teams as a blocker, and it should be.
4. Procurement and approval processes are not designed for AI deployment timelines
Standard procurement frameworks were not written with AI deployment in mind. Data protection impact assessments, IT security reviews, equality impact assessments and information governance approvals can each take weeks when run in sequence. Without a clear, pre-assembled governance package that addresses each requirement simultaneously, projects queue repeatedly through approval stages and lose momentum.
5. Pilots are designed to test the technology rather than the governance
A well-designed AI pilot in local government tests whether the deployment can be governed safely, audited completely and approved within the organisation's existing compliance framework. Most pilots test whether the AI produces useful outputs. These are different questions. A pilot that passes the second test but not the first cannot progress to full deployment, which is why many technically successful pilots are never scaled.
How the failure sequence unfolds
The sequence begins with a legitimate problem in a service area. Demand is rising. A backlog is growing. Staff capacity is insufficient to meet it. A service lead identifies AI as a potential solution and begins the process of introducing it.
Stage one: tool selection
A tool is identified, often following a demonstration or a recommendation from another authority. A decision is made to pilot it. Budget is allocated. A project team is assembled. At this stage, governance considerations are noted as a future task.
Stage two: pilot commencement
The pilot begins. Data is connected. Staff are trained. Initial results are promising. The project team reports progress to senior leadership and begins planning for a wider rollout. Governance documentation is initiated in parallel but is not yet complete.
Stage three: governance assessment
A data protection impact assessment is requested. The information governance team reviews the data processing approach and identifies gaps. The IT security team raises questions about the AI model's data handling. The DPO requests documentation that the platform provider has not supplied. A legal opinion is sought on liability for AI-assisted decisions.
Each of these workstreams takes time. They are often run in sequence because each one requires outputs from the previous one. The project timeline slips.
Stage four: approval friction
The project enters a formal approval process. Each governance body has questions that were not anticipated at the pilot stage. Some of those questions cannot be answered without technical information from the platform provider. Some of that information is not available. Approval is conditional on resolution of outstanding items. The project waits.
Stage five: loss of momentum
Delays compound. The staff trained on the pilot tool are now working on other priorities. The service lead who championed the project faces questions from senior leadership about the timeline and the cost. The organisation begins to question whether the investment is justified. The pilot is paused pending resolution of governance issues.
In many cases, the pilot is never resumed. The organisation records it as a failed pilot and attributes the failure to the complexity of AI, rather than to the absence of governance infrastructure at the point of inception.
The failure is not technical. It is architectural. And it is entirely preventable.
What a governance-first AI deployment looks like
A governance-first AI deployment inverts the sequence described above. Governance is not the final step. It is the starting point.
Before any tool is selected, the organisation establishes the governance framework that every deployment must satisfy. That framework covers data protection requirements, accountability structures, human oversight obligations, audit requirements and the applicable compliance standards. Any tool or workflow introduced into the organisation must operate within that framework from day one.
This approach has three operational consequences:
First, approval processes accelerate significantly. When governance is already in place and a deployment can demonstrate compliance with it, approval bodies have less to assess. The questions that typically stall projects have already been answered. The documentation that governance teams require already exists.
Second, the risk profile of each deployment is clear and manageable before it goes live. The organisation knows exactly what data is being processed, by what AI model, under what policy authority, with what human oversight mechanism and with what audit record. There are no undiscovered risks waiting in the approval queue.
Third, each successful deployment builds the internal evidence base for the next one. When a service lead can demonstrate a 28 per cent reduction in contact centre demand, with a complete governance record and a clean audit trail, the argument for the next deployment is significantly easier to make.
How Arto addresses the structural causes of failure
Arto is built governance-first.
Every workflow created or deployed on the platform automatically inherits a compliance framework built on UK GDPR, WCAG 2.2, ISO 42001, the OECD AI Principles and the 10 principles of the UK Government AI Playbook. There is no separate governance configuration step. There is no compliance layer to add afterwards. The governance infrastructure is the platform. This directly addresses cause 1. Governance is not treated as a final step because it is embedded in every deployment from the moment a workflow is created.
Arto is built for, and with, the UK public sector
Because Arto is built exclusively for the UK public sector, the compliance standards it satisfies are the standards that UK public sector approval bodies require. There is no gap between what the platform provides and what the DPO, IT security team or information governance team will ask for. The documentation required for approval is generated automatically on every workflow run. This directly addresses cause 2. The configuration burden does not fall on the organisation because the platform was built for this environment from the start.
Every AI decision made through Arto is recorded in an immutable audit trail.
The trail captures what triggered the workflow, what data was accessed, what the AI agent did, what human oversight gates were passed, who provided sign-off and what output was produced. That record is available immediately for ICO review, legal challenge, internal audit or scrutiny committee review. This directly addresses cause 3. Accountability is not ambiguous. It is recorded, attributed and preserved permanently.
Arto pre-populates the Assurance Designer with the governance documentation for every Arto Supported Flow.
That record, combined with the audit trail, constitutes a pre-assembled governance package that covers the documentation requirements of data protection impact assessments, IT security reviews and information governance approvals simultaneously. Approval processes that typically run in sequence can run in parallel because the evidence base for each one already exists. This directly addresses cause 4. Procurement and approval processes do not have to be redesigned. They can proceed with the documentation Arto produces as standard.
Arto provides a workflow testing environment
The workflow testing environment allows organisations to run a deployment against test data before it touches live systems or service users. The audit trail produced during testing is the same format as in live operation. A pilot designed using Arto tests governance viability as well as technical performance. This directly addresses cause 5. A successful pilot is one that demonstrates both that the AI produces useful outputs and that the deployment can be governed safely. Arto is designed so that both questions are answered simultaneously.
Where to go from here
Starting out
If your organisation is considering AI for the first time, start with the practical guide to safe deployment.
How to start using AI safely
Currently blocked
If a deployment is already in the approval process and facing the issues described above, the guide to getting AI approved addresses each step.
How to get AI approved
Ready to see Arto in practice
If you want to see how Arto handles the governance requirements for your specific service area, speak with the team.
Book a governance review