What AI in Arto actually does, and what it does not do
Arto deploys AI to handle a specific part of the work: the administrative, analytical, and information-processing tasks that currently take officer time without requiring professional judgement. The part that does require professional judgement, contextual knowledge, and accountability for decisions affecting residents stays with the officer, because that is what the role is for.
| AI handles | Officer decides |
| --- | --- |
| Reading all incoming referral information and extracting key facts | Making the threshold decision: Section 47, Child in Need, Early Help, or No Further Action |
| Checking the threshold framework criteria against the referral information | Applying professional judgement, knowledge of the family, and contextual factors the framework does not capture |
| Identifying gaps in the referral information and flagging them | Deciding whether to proceed on incomplete information or request more |
| Checking previous case history in the back-office system | Interpreting the history in the context of the current referral and the family's circumstances |
| Checking EHC plans against their review deadlines and identifying those at risk | Deciding whether to proceed with a review when a child's circumstances have changed significantly |
| Sending advice requests to schools, health and social care professionals | Deciding what the assembled advice means for the child's plan and what changes to make |
| Checking a planning application against the Local Validation Checklist | Exercising planning judgement on borderline cases and communicating with applicants |
| Calculating council tax liability or benefit entitlement for standard changes | Handling discretionary decisions, complex cases, anomalies, and cases where the rules do not clearly apply |
| Generating draft letters, summaries, and documentation for officer review | Reviewing, amending and approving every output before it is sent or acted upon |
What KSB profiles are and why they matter
KSB stands for Knowledge, Skills and Behaviours. KSB frameworks are the professional standards that define what someone in a specific public sector role is trained to know, is qualified to do, and is accountable for. They are the basis for apprenticeship standards, professional qualifications, and job role definitions across local government, the NHS, education and the wider public sector.
Every professional role in public services has a documented scope: what the role's expertise covers and where its authority ends. A duty social worker is trained to apply threshold frameworks, maintain case records, and make allocation decisions, but not to overrule a consultant's medical assessment. A planning officer is qualified to assess applications against the Local Validation Checklist and the development plan, but not to make legal determinations that require a solicitor. These scopes are not arbitrary. They reflect years of professional framework development, workforce policy, and hard-learned lessons about what happens when roles act outside their documented boundaries.
When Arto maps an AI agent to a KSB profile, it means the agent's scope is defined by that professional framework. The agent operates within what the role's knowledge and skill framework encompasses. It cannot access data outside that scope, make recommendations outside that scope, or generate outputs that exceed that scope. The KSB boundary is not a preference setting that can be overridden at runtime; it is the architecture of how the agent is configured.
This is what distinguishes Arto from a general-purpose AI assistant deployed to a team. A general-purpose AI has no professional scope. It will attempt whatever it is asked. It has no concept of 'this decision requires a qualified social worker' or 'this assessment requires an officer with legal authority.' Arto's KSB-mapped agents do. The boundary is real because it is built into the agent's configuration, not just stated in a policy document.
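To make the idea of a scope boundary built into configuration concrete, here is a minimal sketch. The profile names, data source names, and the `run_task` helper are all hypothetical illustrations, not Arto's actual implementation; the point is that the scope check runs before any work is attempted, rather than being a policy the agent is asked to follow.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KSBProfile:
    """Scope drawn from a role's Knowledge, Skills and Behaviours framework."""
    role: str
    data_sources: frozenset  # systems the agent may read from
    output_types: frozenset  # outputs the agent may produce

    def permits(self, source: str, output: str) -> bool:
        return source in self.data_sources and output in self.output_types


# Hypothetical profile for a triage-support agent scoped to duty social work
duty_social_work = KSBProfile(
    role="Duty social worker (triage support)",
    data_sources=frozenset({"referral", "case_history"}),
    output_types=frozenset({"triage_summary", "information_gap_flag"}),
)


def run_task(profile: KSBProfile, source: str, output: str) -> str:
    # The boundary check happens before any processing, not after:
    # a request outside the profile's scope is refused outright.
    if not profile.permits(source, output):
        raise PermissionError(f"Outside KSB scope for {profile.role}")
    return f"draft {output} from {source} (pending officer review)"
```

Under this sketch, asking the agent for a `triage_summary` from the `referral` succeeds, while asking it to produce a `threshold_decision` raises an error, because that output type is not in the profile at all.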
What AI cannot replace in public sector work
AI can process information faster than a human, apply consistent criteria across hundreds of cases, and handle routine transactions without fatigue. These are the things that make AI useful as a tool for public sector teams under pressure. But they are not the things that make public sector work what it is.
Professional judgement
Professional judgement is the application of expertise, experience, and contextual knowledge to a specific situation that does not fit neatly into a framework. It is what a social worker applies when a referral describes a family known to services and the current concern does not, on paper, meet the threshold — but something about the pattern does not sit right. It is what a planning officer applies when an application technically meets the checklist requirements but the site is in a context the checklist did not anticipate.
KSB frameworks define the scope within which professional judgement is exercised. They do not replace it. AI can apply the framework consistently and flag the cases where the framework gives an ambiguous result. The judgement about what to do with that ambiguity remains with the professional.
Accountability
Every decision that affects a resident's life — a safeguarding threshold decision, a benefits calculation, a planning determination — carries legal and ethical accountability. That accountability cannot be delegated to a software system. It is held by a named professional, attributable to their expertise and their authority, and subject to scrutiny, challenge, and review.
Arto's design makes this explicit. Every workflow execution requires a named officer to review the AI output and record their decision. That officer is identifiable in the audit trail. Their decision is attributed to them, not to the AI. If the decision is challenged, the record shows who made it and on what basis. The accountability is theirs because the decision is theirs.
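The attribution described above can be sketched as a simple audit record. The field names and `record_decision` helper are illustrative assumptions, not Arto's actual schema; what matters is that the AI output and the officer's decision sit side by side, and the decision is attributed to a named officer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One audit-trail entry: the AI output and the officer's decision, preserved together."""
    case_id: str
    ai_summary: str   # what the AI produced for review
    officer_id: str   # the named officer who made the decision
    decision: str     # the decision, attributed to the officer, not the AI
    decided_at: str   # when the decision was recorded


def record_decision(case_id: str, ai_summary: str, officer_id: str, decision: str) -> DecisionRecord:
    # Both the AI summary and the officer's decision are kept, so a later
    # challenge can see who decided, on what basis, and what the AI suggested.
    return DecisionRecord(
        case_id=case_id,
        ai_summary=ai_summary,
        officer_id=officer_id,
        decision=decision,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```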
Human relationship
A significant part of public sector casework involves direct contact with residents — the family in crisis, the resident who cannot understand their bill, the applicant who has invested months in a planning application. These interactions require empathy, communication, trust, and the ability to read a situation in ways that go beyond the information in a case record.
AI does not conduct those conversations. It does not contact residents or represent the council in direct interactions with the people it serves, except through clearly defined automated communications where the resident understands the nature of the interaction. The officer who picks up the phone, visits the family, or sits in the meeting is doing something that AI cannot replicate and that Arto does not attempt to replace.
AI in safeguarding: what this means for social workers
The question of AI replacing public sector staff is felt most acutely in safeguarding. Social work with children and families is a profession whose authority rests on human judgement about human situations. The idea that an AI system might be making, or influencing, threshold decisions about whether a child is at risk raises legitimate concerns that deserve a specific answer.
Arto does not replace the social worker's decision.
It provides a structured AI-assisted analysis before the decision is made, every time.
What the MASH triage workflow does is this: when a safeguarding referral arrives, Arto reads the referral, checks previous case history in the back-office system, applies the council's threshold framework to the referral information, and produces a structured summary for the duty social worker. That summary is ready before the social worker opens the case.
The social worker reads the Arto summary. They apply their professional knowledge, their understanding of the family and any previous involvement, and their own assessment of what the referral means. They make the threshold decision. Their decision is recorded in the audit trail alongside the AI summary; both are preserved. When the social worker's decision differs from what the AI analysis suggested, that difference is part of the record.
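The division of labour in the paragraphs above can be sketched as a small function. The field names, the placeholder framework criteria, and the fixed list of expected fields are hypothetical simplifications, not the real MASH workflow; the sketch only illustrates the shape of the structured brief, including the fact that the decision field is always left empty for the social worker.

```python
def triage_brief(referral: dict, case_history: list, framework: dict) -> dict:
    """Assemble the structured summary that waits for the duty social worker."""
    # Key facts extracted from the referral (placeholder for real extraction).
    facts = referral.get("key_facts", [])
    # Apply each threshold criterion in the council's framework to the referral.
    criteria_met = [name for name, test in framework.items() if test(referral)]
    # Flag information the referral does not contain (illustrative field list).
    gaps = [f for f in ("address", "school", "gp") if f not in referral]
    return {
        "facts": facts,
        "previous_involvement": len(case_history) > 0,
        "criteria_met": criteria_met,
        "information_gaps": gaps,
        "decision": None,  # always left to the social worker
    }
```

A usage sketch: given a framework of named predicate checks and an empty case history, the brief lists which criteria matched and which information is missing, and nothing in it pre-fills the threshold decision.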
What this means for social workers in practice is not a threat to their role; it is a change in what the start of their triage process looks like. Instead of opening a referral and beginning the information-gathering and framework-application from scratch, they have a structured brief waiting. The professional decision, the judgement about what to do, and the accountability for that decision remain entirely theirs.
For teams under referral pressure, this matters: a typical MASH team receives 50 to 150 referrals per week, and triage takes 20 to 40 minutes per referral. The AI handles the information processing that takes time. The social worker applies the expertise that makes the decision defensible.
Having an honest conversation with your team about AI
Workers in public services who are cautious about AI are not being obstructive. They have seen transformation projects that promised to make their work easier and instead changed what their job was or reduced the number of people doing it. Their concerns are proportionate to their experience.
Introducing AI to a team well means being specific about what it will and will not do. Not 'AI will help with admin', but which specific tasks the AI handles, what the officer still does, and how the review and sign-off process works. Workers who understand precisely what the AI does tend to have much less anxiety about it than workers who are told the benefits without the detail. The detail is reassuring, not worrying.
The conversations worth having early are: what does the workflow do step by step? What does the officer still review and approve? What happens when the officer disagrees with the AI output? What data does the AI access? Is that data leaving the organisation? Who sees the AI's outputs? How are decisions attributed?
These are the questions that a duty social worker, a SEND coordinator, or a revenues officer will ask. They are good questions. The answers, in the case of an Arto Supported Flow, are specific and reassuring. The AI does the preparation. The officer makes the decision. The record shows the officer's decision, not the AI's suggestion. No automated decision affects a resident without officer sign-off.
Where to go from here
How AI works in casework
The three modes of AI involvement in casework (triage support, orchestration, and transaction automation) and what officers do in each.
Casework and triage

How governance keeps officers in control
The human oversight gates, audit trail, and governance certificate that make officer authority structural rather than optional.
How it works

Start with one workflow
The lowest-risk way to begin: one workflow, one service area, your team reviews every output before it is used.
Getting started