From ideas to impact: Early lessons from our AI for Operations programme

22 December 2025


ai@cam’s AI for Ops initiative now has a cohort of projects underway where colleagues are actively exploring how AI could improve their work. We’ve seen great enthusiasm for this work, demonstrating energy and readiness for change. The projects also show colleagues aren’t only looking for quick wins - they’re thinking about the broader changes AI could bring in their areas, asking questions about purpose and process that will be essential to realising AI’s potential.

Process innovation and product innovation working together

When organisations think about innovation, product innovation often comes first: new tools, new outputs, new services we can deliver through capabilities such as AI chatbots, automated reporting, or enhanced search. Product innovations can deliver business value in the near term, but focusing only on these leaves bigger questions about workflows unexamined.

Process innovation takes a different approach - rethinking how work gets done, not just automating existing workflows. Product innovation asks “what new things can we build?” Process innovation first asks “should we be doing this work at all, and if so, why and how?”

This distinction is particularly important when we face radical technological change. We’re in a moment of deep uncertainty: we cannot know what optimal operational processes will look like in an AI-enabled future because the scale of potential change is so vast. Many existing processes pre-date these capabilities, creating an opportunity to consider whether automation alone is the right approach.

This uncertainty opens space for questions we don’t often get to ask: why does this process exist? What purpose does it serve? What would it look like if we designed it from scratch today?

This means starting with purpose. Regulatory documentation exists to prompt critical reflection on the impact of innovation. Minutes exist to ensure that both the secretariat and committee members share an understanding of what happened in a meeting, and that the secretariat is actively engaged in managing decision-making. Student contact processes exist because we want to support students, and they signal how we treat people.

When automation prioritises efficiency over purpose, there is a risk of streamlining activity without improving the underlying function, creating tools that make things faster but not necessarily better. This is where the distinction between product and process innovation becomes critical: product innovation might automate minute-taking; process innovation asks why the meeting was needed, and what purpose the record serves.

Our AI for Ops community of practice provides a forum for practitioners to explore these issues together, sharing lessons from what works and working together on shared challenges.

When AI meets high-stakes processes

Some operational contexts demand particular attention when AI enters the conversation: recruitment, admissions, assessment – areas where bias, fairness, and human judgement are paramount. Many colleagues working in these areas are already thinking carefully about these dimensions.

Current approaches to AI governance often centre on “empowering users” through training and guidance, relying on humans in the loop to identify potential issues and intervene to prevent them. These measures form an important foundation. As our community of practice develops, we’re learning together about how these arrangements operate in practice, including how oversight is maintained over time and how AI outputs are interpreted in day-to-day decision-making.

Research from deployment studies offers useful insights here, suggesting that generative AI tools can lead to measurable changes in how people engage in critical thinking – a capability that is central to effective oversight.

This reflects a broader pattern: AI deployment can expose existing organisational tensions. We’ve seen this repeatedly in deployment failures across the world, where AI has had negative impacts on people already struggling to navigate the system. Where processes have become distanced from their original purpose, AI is more likely to reinforce those patterns than to resolve them. The technology amplifies what’s already there, for better or worse.

Strategic choices about sovereignty and capability

Much current thinking about operational AI makes use of widely available commercial platforms. These options offer short-term benefits, while also raising important questions about institutional dependency. Our projects are helping us think through where institutional capability is needed and where external platforms can be effectively leveraged. This involves careful consideration of:

Integration with core business processes: Some activities are so deeply woven into how the institution functions that handing them to external tools means redesigning operations. What does it mean to outsource these areas? What control do we retain?

Areas of unique advantage: Cambridge has distinctive characteristics, mission-critical functions, and competitive advantages. Where do these intersect with operational AI?

Security and vulnerability surfaces: Different operational activities create different attack and vulnerability surfaces. What new risks emerge when we rely on external platforms across all operational activities?

Long-term costs and dependencies: Every external platform creates ongoing costs: financial, operational, and strategic. Who gains value from institutional data? What dependencies are being created? What happens if a vendor changes terms, raises prices, or disappears?

Operational AI requires building lasting capability, and that means making deliberate strategic choices about what to build, what to buy, and what must remain under institutional control.

A vision for AI in operations

These observations suggest some principles for moving forward:

  1. Start with purpose. Before automating, be clear about why the process exists. If it’s about human judgement or relationship-building, automation may be inappropriate regardless of technical feasibility. If the purpose has become obscured by accumulated procedure, this is an opportunity to redesign.
  2. Expect AI to surface organisational realities. Cultural issues, process problems, or equity gaps may become more visible through deployment. This is valuable information that can guide thoughtful scaling and create opportunities for improvement. The diverse perspectives in our community of practice are helping us recognise and address these realities together.
  3. Think strategically about dependencies. Not everything needs to be built in-house, but institutions need clear thinking about what capabilities must remain under institutional control, where external platforms create acceptable dependencies, and where the risks outweigh the efficiency gains.
  4. Take governance from principles to practice. Accountability requires more than documentation when the tools themselves may change human capacity for oversight. This points to the value of structural solutions alongside policy, with governance designed to enable good process innovation rather than constrain it.

A positive vision for AI in operations – one already emerging in many of our projects – would see these tools automate high-volume administrative tasks, freeing colleagues to focus on work requiring human judgement and relationship-building. AI would handle procedural work in research administration, allowing staff to provide strategic grant support. Collections teams could spend more time on curation. Student services could respond faster to routine questions whilst investing more in complex support cases.

The appetite for AI in operations is clear. Realising its potential will depend on building on this enthusiasm from the ground up while developing the strategic capabilities needed to support effective and sustainable use over time.