28 April 2026

AI Policy's Missing Half

Reflections on giving evidence to the Business and Trade Select Committee, 14 April 2026 - Prof Neil Lawrence.

I find myself thinking about the theme at the heart of Monty Python’s Life of Brian when reflecting on AI policy. The premise is that the people of Judea, threatened by Roman imperialism, frightened and uncertain, latch on to an ordinary person as a potential saviour. The film’s comedy comes from how desperately people want a voice to follow when the ground underneath them feels unstable.

Opening the first evidence session of their new inquiry on AI, business, and the future of the workforce, the Chair of the Business and Trade Select Committee asked how I would characterise the global AI race and where the UK sits within it. In that moment, the Life of Brian came back into my mind. Not because the policy world is comic, but because of the dynamic we see in policy discussions about AI. We are in a moment of technical and economic uncertainty. That uncertainty makes us susceptible to anyone who arrives with a simple, confident story about how to proceed. The AI race is part of those stories. You can see its footprints in hyperscale partnerships, the sovereignty unit, the compute target. Each one looks like a decision. Most of them are reflexes.

That’s not to say these interventions aren’t necessary. But they are also obscuring the important question. What would have to be true for AI to make the UK’s nurses, schoolteachers, small business owners, logistics managers, accountants, lawyers and others, better at their jobs? My evidence tried to suggest that this question is at the heart of what the inquiry is about: how AI is adopted and what that means for the future of work and our shared prosperity.

We have supply-side policy. We need a demand-side strategy.

If you sit through enough policy meetings on AI, a pattern becomes hard to miss. Almost everyone advising government has a supply-side interest. They sell infrastructure, or they build models, or they run platforms. These perspectives do matter and need to be represented. But it does mean that the menu of options ministers see is shaped, structurally, by those suppliers’ interests. The result is a policy agenda that mistakes the input for the output.

What we want is broad-based productivity growth, better public services, and a workforce engaged in quality work, with more agency over its own institutional decision making. The inputs we have arranged are foreign capital and foreign software. Neither becomes British productivity on its own. It’s all the work in between: the adoption work, the institutional work, the tool-building work, that is missing.

I told the Committee that if they were looking for a single intervention, it would not be on the supply side. Major US technology firms have enormous (and growing) lobbying capability. They have deep pockets. We do not need to worry about whether their interests are being represented. We need to worry about whether everyone else’s are. What about the mainstream UK businesses that have been left behind by previous waves of digital technology? What about the small and medium enterprises that struggle to invest? What about the councils, hospitals and schools that — left to the market — will be sold platforms rather than supported to build and adapt? Those are the parts of the economy where the productivity prize lives, but they are almost entirely absent from the conversation that ministers are hearing.

We have policy tools that we can use to respond to this. The Digital Markets, Competition and Consumers Act gives the Competition and Markets Authority powers to keep markets contestable for small and medium suppliers. Unfortunately its role seems to have been misunderstood in some quarters. A pro-innovation agenda is not the same as a pro-big-tech agenda. Pro-innovation, properly understood, means making sure that there is room for the next wave of UK entrants to build, validate and scale, and that requires using the powers we already have to deliver these outcomes.

What demand-side work looks like

A team in Cambridge has been collaborating with South Cambridgeshire District Council on planning. Council officers had already formed AI clubs of their own — they wanted to understand the technology and use it in their work. Researchers from the Universities of Liverpool and Cambridge have been helping them build planning tools that they themselves design, deploy and maintain. The work is doing several things at once: it is solving a real operational problem for the council, it is building durable institutional capability inside the council itself, and it is generating expertise that could catalyse a business serving local authorities across the country.

The South Cambridgeshire example is not unusual. It is unusual only in being mentioned. Across the country there are nurses who could help design tools that make nursing better. There are teachers who could help design tools that support their teaching. There are small business owners who could adopt AI in ways that make their firms more productive. None of this requires a frontier model. Most of it does not even require very much money. What it requires is that the policy environment notices it exists, takes it seriously, and stops assuming that the only stories worth telling are those with a US tech company’s logo on them.

Missing the demand-side conversation means that government is also missing the fundamental shifts in the way the technology is being deployed. The assumption underlying recent AI policy has been that innovation requires training large models, and that training large models requires capital, infrastructure and scale that only a handful of organisations can provide. While these models are a necessary component, the frontier of innovation has shifted. Recent progress has come from combining and directing existing models rather than building new ones. The frontier is one of interacting models. Models that can be convened and instructed directly in the English language (or Mandarin, French, Urdu or Spanish). Models that can write software that is tailored for a business’s needs rather than a big tech firm’s bottom line.

That changes where the value is. It means we should expect the cost and skill threshold for institution-led adoption to drop sharply. It means that the demand side is where the next decade’s value will be created. That value will be generated by the people closest to the problems, in the institutions where the work happens. But none of this will happen unless we treat the demand side as a core component of national strategy.

Although we’ve dragged our feet on this, it’s not too late. This programme has some reasonably clear contours:

  • closing the adoption gap between large firms and SMEs as a competitiveness issue;
  • investing in the organisational and management capacity that determines whether institutions can absorb new tools;
  • supporting the system of open innovation, including open-source software, that gives our businesses access to this technology;
  • using public procurement to create contestable markets and first-customer pathways for UK suppliers;
  • valuing universities for the capability they build;
  • treating public trust, transparency and accountability as adoption infrastructure rather than as ethics-shaped obstacles to it.

We can’t outspend the largest economies on compute. But we can outcompete by sharing the lessons of clever deployment with each other. The new capabilities come with new risks. Our focus should be on sharing how AI is deployed safely, productively, and in ways that make our institutions stronger, rather than on spending more on companies whose interests don’t align with the UK’s.

Campus UK

At the heart of this is what I call Campus UK. If you draw a circle around the educational and research expertise that can be reached by train within the United Kingdom — Southampton, Cambridge, Manchester, Oxford, Glasgow, Derby, Sheffield and a great many places in between — the concentration of knowledge inside that circle is unmatched. We are, by virtue of geography, unusually well placed to translate research into adoption.

But we do not currently reward our universities for delivering on that. The incentives push us more towards the cover of Nature than towards the cover of our local paper. But that forgets why many of these universities were founded. Our civic universities were established to advance learning and knowledge for the benefit of their city and region. AI presents an opportunity for them to do just that. We should be proud of the international quality of our research. But we should also be proud of Lincoln’s work supporting farmers through agricultural robotics and Nottingham Trent’s work improving inclusion of students with learning disabilities and autism in education. AI will revolutionise our lives and our economy. Campus UK is the route to putting our citizens and companies at the heart of that revolution.

Our own work in Cambridge has these ambitions at its heart: to learn from our colleagues and collaborators, and to share those learnings through our policy work on AI sovereignty, our accelerator programmes with local authorities, and our interdisciplinary research incubator. Perhaps none of these are as glamorous as the frontier lab announcements that are the fodder of current government policy. But they are not meant to be. They are meant to be useful.

Taking the demand side seriously

This is not to be against the large technology companies. They do serious work, and we want them present in the UK. But let’s not confuse their presence with a strategy. On its own their presence doesn’t solve the productivity problem of a British SME. It doesn’t solve the workflow problem of a planning officer. It doesn’t solve the staffing pressure on an NHS ward.

I hope on the back of my evidence that the Committee, and the wider policy conversation, will take the demand side seriously as a matter in its own right. Not as a downstream consequence of supply. But as the battleground where the economic and public value of AI must be realised. Perhaps that’s a less dramatic framing than a global race, but it is a framing that could work for the UK.

Neil Lawrence is the DeepMind Professor of Machine Learning at the University of Cambridge and Chair of ai@cam. He gave oral evidence to the Business and Trade Select Committee on 14 April 2026 alongside Dame Wendy Hall. The full transcript is available here.