Abstract
In November 2025, ai@cam worked with Hopkins Van Mil to convene public dialogues with 95 Cambridgeshire residents, exploring their expectations, hopes, and concerns about the use of AI in local government. This interim report presents the findings from that first phase. The dialogue reveals cautious optimism: residents recognise AI's substantial potential to enhance service delivery, but insist that its adoption be governed by clear safeguards.
Participants emphasise that AI should generate demonstrable improvements in people's lives rather than serve primarily as a cost-saving mechanism. They expect AI to enhance, not replace, human roles: automating routine tasks so staff can focus on empathy, creativity and nuanced judgment.
Key Findings
- AI should be deployed only where it delivers clear public benefit. Residents want AI used to address real service pressures, such as heavy administrative workloads or slow processes. Innovation for its own sake, or deployment driven primarily by cost savings that do not deliver service improvements, will not gain public trust or support.
- Humans must remain central to all decisions that affect people's lives. Participants support AI automating routine tasks but oppose AI making final decisions, particularly in sensitive areas involving vulnerable individuals. They expect councils to protect existing jobs, invest in staff training, and ensure robust human oversight is built into AI tools from the start.
- Public trust requires full transparency, clear accountability and evidence that AI works. Residents want to know when AI is used, how it operates, what data it relies on, and who is responsible for it. They expect AI systems to be rigorously tested, reliable, unbiased and independently evaluated before wider adoption.
- AI systems must be inclusive, accessible and secure for all residents. Participants expect human alternatives to remain available, systems that work for people with different levels of digital literacy, robust protection of personal data, and strong safeguards against system failure or malicious attack.
- Residents expect to play an active role in shaping how AI is developed and deployed. Participants want meaningful involvement throughout the process, from setting priorities to designing, testing and monitoring AI systems. They also expect councils to consider broader implications, including environmental impacts, data security, future workforce skills and alignment with wider public service goals.
Impact and Next Steps
The findings from this interim report are shaping the direction of our Local Government AI Accelerator, and we encourage applicants to draw on these insights when developing their proposals, including consideration of:
- Decision-making boundaries: Is AI making recommendations that humans review, or final decisions that directly affect people’s lives? If the latter, how will meaningful human oversight be ensured?
- Transparency and consent: How will residents know when AI is being used in their interactions with the council? What information will they receive about how it works and what data it uses?
- Accessibility and inclusion: How will the solution work for residents who are digitally excluded, have disabilities, speak English as a second language, or face other barriers?
- Workforce impact: How will this affect council staff roles and skills? What training or support will be needed?
- Exit and contingency planning: What happens if the AI system fails or needs to be withdrawn? How will services continue? What skills and capabilities need to be maintained?
A second phase of dialogue will take place in March 2026, and insights will continue to inform the programme. The interim public dialogue report can be found here.