Why a public sector AI business case is different from a private sector one
In the private sector, the primary question for any investment is: what is the financial return and when do we break even? The answer to that question usually determines whether a proposal is approved.
In local government, the decision-making framework is different. A chief executive, a finance director, or a cabinet member is not primarily asking 'will this generate profit?' They are asking four questions, roughly in this order. First: does this create legal or reputational risk that we cannot manage? Second: does this improve outcomes for the residents we serve? Third: does this help our overstretched teams deliver more with what they have? Fourth: does this reduce cost?
The financial return question is real and important. But it is rarely the argument that unlocks approval in a risk-averse, resident-accountable, publicly scrutinised organisation. A business case that leads with cost savings and mentions governance as a footnote is less likely to succeed than one that leads with managed risk, demonstrable service improvement, and financial return as the consequence of both.
IN SHORT: In the public sector, lead with risk reduction and service quality. Financial return is the outcome of getting those right, not the reason to proceed.
The six sections of an effective AI business case in local government
The sections below represent the structure that senior public sector leaders expect to see in a technology investment proposal. Each section serves a specific purpose for a specific part of the approval audience. A business case that omits any of them will likely generate a question that delays approval.
Section 1 - The problem and why it must be addressed now
Who reads this section most carefully: The chief executive and the service lead's director. They want to understand why action is needed and why now, not later.
What to include: A specific, quantified description of the current operational problem. How many cases, how many officer hours, what the statutory compliance position is, and what happens if nothing changes. For SEND: only 36% of councils complete EHC plan reviews on time, and the statutory accounting override ended in March 2026, creating direct financial risk. For revenues: 400-600 changes of circumstances per week, each taking 45-90 minutes, creating backlogs and incorrect bills. Make the problem feel real and urgent.
Common mistake: Starting with the solution rather than the problem. Approvers who do not understand why the problem is serious enough to act on will not read the rest of the case carefully.
Section 2 - The governance and compliance position
Who reads this section most carefully: The monitoring officer, DPO, and IT director. They will read this section first because their job is to block proposals that create legal exposure.
What to include: Confirmation that the AI deployment will be governed to the required standard: the lawful basis for processing resident data, the DPIA status, the human oversight model, the audit trail, the data residency position, and the equality impact assessment. Name the standards: UK GDPR, ISO 42001, WCAG 2.2, the UK Government AI Playbook. Show that governance is built into the platform, not added as an afterthought.
Common mistake: Treating governance as a box to tick rather than an argument. The governance section is often the most important section for getting the proposal past the monitoring officer and DPO. A weak governance section generates questions that can delay approval by months.
Section 3 - The impact on residents and service quality
Who reads this section most carefully: Cabinet members, elected members, and the chief executive. They are accountable to residents and to the public. They want to know whether this makes services better.
What to include: Specific, resident-visible improvements. For SEND: families receive legally required reviews on time, rather than waiting 18 months for a review they are entitled to. For revenues: residents receive accurate bills same-day rather than waiting weeks, reducing incorrect bills and debt accumulation. For planning: applicants know within seconds what is wrong with their submission, rather than waiting weeks for a validation letter. Frame outcomes in terms of the resident's experience, not the officer's.
Common mistake: Describing operational improvements only in terms of officer time saved. Elected members care about what residents experience. An officer saving 45 minutes per case is not compelling to a cabinet member. A resident receiving an accurate bill on the day they report a change is.
Section 4 - The impact on staff and teams
Who reads this section most carefully: The service lead's peers, HR directors, and any trade union consultation. They want to know whether this threatens jobs or creates unreasonable workload.
What to include: A clear description of what AI handles and what officers continue to do. AI takes the administrative and information-processing work. Officers retain the decisions, the professional judgements, and the resident relationships. Name specifically what officers do with the capacity released: complex casework, proactive resident contact, the work that was being postponed because the administrative burden was too high. Reference the AI and Your Team page or the Arto staff engagement framing for the specific language.
Common mistake: Ignoring this section or treating it as unimportant. A business case that does not address staff impact will generate questions from HR, trade unions, and elected members that delay approval. Address it directly rather than hoping no one raises it.
Section 5 - The financial case
Who reads this section most carefully: The finance director and the s151 officer. They will verify every figure and challenge any assumption they cannot trace.
What to include: Time saved per case (in officer hours) multiplied by transaction volume multiplied by the relevant officer grade cost. Apply your council's own figures for each of these three inputs: do not rely on generic benchmarks as the only evidence. The POC specifications provide starting-point estimates (planning validation: 30-45 minutes per invalid application; revenues change of circumstances: 45-90 minutes per change reduced to under 5 minutes; SEND: 4-6 hours per week per coordinator). These are estimates from pre-deployment specifications. Adjust for your council's specific context and clearly label them as starting-point projections. Add the live result: Redcar and Cleveland Council, 28% reduction in contact centre demand within three months.
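The core calculation can be sketched in a few lines. Every figure below is a placeholder assumption at roughly the revenues change-of-circumstances scale described later in this page, not Arto pricing or verified council data; substitute your council's own inputs for all three factors and the platform cost.

```python
# Minimal sketch of the Section 5 calculation: time saved per case x volume
# x officer grade cost, plus a payback period in months.
# All input figures are placeholder assumptions.

def annual_saving(minutes_per_case: float, cases_per_year: int,
                  hourly_cost: float) -> float:
    """Officer hours released per year, converted to a cash-equivalent saving."""
    return minutes_per_case / 60 * cases_per_year * hourly_cost

saving = annual_saving(
    minutes_per_case=50,      # assumption: midpoint saving per case
    cases_per_year=500 * 52,  # assumption: 500 cases/week
    hourly_cost=22.0,         # assumption: grade cost incl. on-costs, pounds
)

annual_platform_cost = 50_000  # assumption: illustrative only
payback_months = 12 * annual_platform_cost / saving
print(f"£{saving:,.0f}/year saved, payback in {payback_months:.1f} months")
```

Framing the result as a payback period in months rather than a multi-year ROI directly supports the budget objection response later in this page.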
Common mistake: Presenting POC benchmark figures as guaranteed outcomes without adjustment or qualification. A finance director who identifies that you have taken a benchmark figure and applied it unchanged to your council's specific context will question the rigour of the whole case.
Section 6 - The deployment plan and risk register
Who reads this section most carefully: The IT director and the programme board. They want to know what they are committing to and what could go wrong.
What to include: A phased deployment plan: one workflow, one service area first. The land-and-expand model. What the first three months look like, what go-live requires, what the IT integration involves, and what the exit position is if the deployment is paused. A risk register with three to five specific risks and the mitigations for each: data governance risk (mitigated by DPIA and DPO sign-off), staff acceptance risk (mitigated by HITL design and change programme), integration risk (mitigated by pre-built connectors for the relevant back-office system).
Common mistake: Omitting the risk register or making it generic. A business case with no risk register, or one that says 'standard project risks apply,' signals that the service lead has not thought through what could go wrong. A specific risk register with specific mitigations shows the opposite.
Financial figures for a public sector AI business case
Planning
Planning validation: 30 to 45 minutes saved per invalid application. Approximately 30% of planning applications nationally are invalid at first submission. A council receiving 1,500 applications per year has around 450 invalid applications. At 37 minutes average: approximately 278 officer hours per year on validation letters alone. At a Grade 6 or 7 planning officer cost (including on-costs), calculate the salary equivalent.
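As a sanity check, the arithmetic in the paragraph above can be reproduced directly. The figures are the starting-point estimates already quoted, not new data; replace them with your council's own application volume and invalidity rate.

```python
# Reproducing the planning validation arithmetic above.
applications_per_year = 1500   # your council's annual application volume
invalid_rate = 0.30            # ~30% invalid at first submission (national estimate)
minutes_per_letter = 37        # midpoint of the 30-45 minute range

invalid_apps = applications_per_year * invalid_rate       # -> 450
hours_per_year = invalid_apps * minutes_per_letter / 60   # -> ~278
print(f"{invalid_apps:.0f} invalid applications, "
      f"{hours_per_year:.0f} officer hours/year on validation letters")
```

Multiply the hours figure by the relevant Grade 6 or 7 hourly cost (including on-costs) to express it as a salary equivalent.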
Consultation analysis: 3 to 5 hours saved per contested application on consultation response reading and committee report drafting. Apply to your council's contested application volume.
Children's Services SEND
4 to 6 hours per week per SEND coordinator saved in chasing and scheduling. Statutory compliance improvement from approximately 36% to near 100% on-time EHC plan reviews. Secondary financial benefit: SEND tribunal applications are the primary driver of SEND overspending. Timely reviews with properly updated plans reduce tribunal applications. For councils facing S114 risk from SEND deficits, this is potentially the largest single financial argument.
Children's Services MASH
20 to 40 minutes per referral for structured triage analysis preparation. Primary financial argument is consistency and Ofsted compliance rather than direct time saving. Inconsistent threshold decisions identified by Ofsted represent inspection risk. An AI system that enforces consistent framework application on every referral addresses this specifically.
Revenues and Benefits
Change of circumstances: 45 to 90 minutes per change reduced to under 5 minutes for standard automated cases. 70 to 80% automation rate for standard changes. A billing authority processing 500 changes per week, with 70% automation at an average saving of 50 minutes, is saving approximately 292 officer hours per week. This is the largest volume financial case in the POC specifications.
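The weekly figure above follows from the quoted inputs, as this short check shows. The inputs are the POC starting-point estimates; swap in your billing authority's actual weekly volume and observed automation rate.

```python
# Reproducing the revenues change-of-circumstances arithmetic above.
changes_per_week = 500   # your authority's weekly volume
automation_rate = 0.70   # 70-80% of standard changes (lower bound used here)
minutes_saved = 50       # average saving per automated change

automated_changes = changes_per_week * automation_rate     # -> 350
hours_per_week = automated_changes * minutes_saved / 60    # -> ~292
print(f"{hours_per_week:.0f} officer hours saved per week")
```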
Enforcement vulnerability screening: reduced Ombudsman compensation (500 to 2,000 pounds per upheld case) and enforcement agent fees (310 pounds minimum per case) on vulnerable households. The primary argument here is risk avoidance rather than time saving.
Contact Centre (live result)
Redcar and Cleveland Council: 28% reduction in contact centre demand within three months of Arto deployment. This is the only live result on the site. All other figures above are POC estimates. Apply your council's cost per contact and annual contact volume to model the financial value of a comparable reduction.
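A comparable reduction can be modelled in the same way. Only the 28% figure comes from the Redcar and Cleveland result; the contact volume and cost per contact below are invented placeholders that you must replace with your council's own data.

```python
# Modelling a comparable contact-centre demand reduction.
annual_contacts = 250_000   # placeholder assumption: your annual contact volume
cost_per_contact = 4.50     # placeholder assumption: pounds per handled contact
reduction = 0.28            # the Redcar and Cleveland live result

avoided_contacts = annual_contacts * reduction
annual_value = avoided_contacts * cost_per_contact
print(f"{avoided_contacts:,.0f} contacts avoided, £{annual_value:,.0f}/year")
```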
The objections senior leaders raise and how to address them
Five objections come up consistently in public sector AI approval processes. Preparing for them in advance is more effective than encountering them for the first time in the presentation.
| Objection | Why they raise it | How to address it |
| --- | --- | --- |
| We do not have the budget for this. | Capital investment in technology competes with statutory services in constrained budgets. Finance directors are sceptical of technology ROI claims. | Present a revenue argument rather than a capital one. AI deployment is a recurring operational saving, not a one-time investment. The time saving per week starts from the first live run. Show the payback period in months, not years. Start with the lowest-cost deployment: one workflow in one service area. |
| Our IT team will not approve it. | IT directors are accountable for data security, system stability, and procurement compliance. They are cautious about external platforms accessing back-office systems. | Lead with the governance documentation: ISO 27001, AWS London hosting, UK GDPR compliance, the DPIA framework. Show that the integration model for the relevant back-office system is documented and tested. Offer a security assessment meeting with the product team before the formal submission. |
| What if something goes wrong? | Elected members and chief executives have seen high-profile technology failures. They are risk-averse about anything that could become a reputational story. | The governance architecture addresses this specifically. Every AI output requires officer sign-off before action. Every execution produces an audit trail. The HITL design means an AI error does not automatically affect a resident: it is flagged for officer review. The risk register in Section 6 of the business case shows you have thought through the failure modes. |
| Will this replace jobs? | Trade unions, HR directors, and elected members with strong workforce constituencies will raise this question. In some councils it can be a political blocker. | Be specific about what AI handles (information processing, administration, scheduling) and what officers continue to do (decisions, professional judgements, resident contact). Show that capacity is released for higher-value work rather than removed. Reference the AI and Your Team page for the detailed framing. |
| Other councils have had AI projects fail. Why will this be different? | The service lead may have seen colleagues at other councils spend significant budget on AI pilots that were blocked, produced no results, or created governance problems. | The failure mode for most public sector AI pilots is governance: the platform did not have the compliance infrastructure, the approval process was not completed, or the deployment was too broad. A single governed workflow, already approved for public sector use, with documented compliance built in, is a different risk profile from a bespoke AI project. |
How Arto makes the business case easier to build and sustain
Three aspects of Arto's design directly support the business case for AI in the public sector.
Pre-built governance documentation reduces the Section 2 workload
The Assurance Designer in Arto pre-populates the technical governance record for every Arto Supported Flow: data scope, KSB profile, safeguards, HITL configuration, DPIA supplementary evidence. The service lead does not build the governance section of the business case from scratch. The technical evidence already exists. The organisation's task is to review it, confirm the context-specific details, and obtain the SRO and DPO sign-off.
Built-in ROI measurement produces the Section 5 data automatically
Every Arto Supported Flow includes built-in return on investment measurement. Teams set their own baseline figures. The platform calculates time saved per run, cost saved per run, and cumulative impact over time from the first live execution. The financial data for the business case continues to be produced automatically as evidence of ongoing return, not just as a one-time projection.
The land-and-expand model reduces the Section 6 risk profile
Arto is designed to be deployed one workflow at a time in one service area. This makes the deployment plan section of the business case straightforward: the scope is bounded, the integration is documented, the governance is pre-configured, and the exit position is clear. A failed or paused deployment of one workflow in one service area is a contained operational matter. It does not require unwinding a complex multi-system integration.