
By National Science and Media Museum

How Britain can harness the AI revolution

Roger Highfield, Science Director, reports on an AI discussion at the Bradford Science Festival.

When Matt Clifford opened his Financial Times app one recent morning, three of the top five stories were about artificial intelligence. “Twenty years ago, modern AI did not exist,” the former advisor on AI to the Prime Minister told an event he chaired for the Bradford Science Festival. Now he believes it could reignite the ability of UK industry to compete globally. “Every time there’s a general-purpose technology,” he said, “you can either watch it happen somewhere else—or decide to shape it yourself.”

Four seated panellists on stage with the Pictureville screen just above their heads

Artificial intelligence could rival the broad sweep of human intelligence within the next year or two, according to the most feverish estimates, which come, unsurprisingly, from the bosses of AI companies. Others believe that, though superhuman in some respects, from playing Go to working out the structure of proteins in the body, current AI faces fundamental limits, not least its huge appetite for power and data. Meanwhile, warnings have sounded that the AI market may be experiencing a bubble similar to the dot-com era, driven by overexcitement and inflated valuations.

Clifford, who grew up in Clayton on the outskirts of Bradford, hosted an AI discussion at the National Science and Media Museum last Saturday with Charlotte Deane, Executive Chair of the Engineering and Physical Sciences Research Council, EPSRC, which funds AI research, Tom Forth, Head of Data at Open Innovations, and Zandra Moore, tech entrepreneur and angel investor.

In opening the event, Clifford told the audience how he has watched this transformation in the fortunes of AI from both the policy front line and the perspective of technology entrepreneurship. “In the 1980s and 1990s we didn’t have enough computational power,” he recalled.

Then, around 2012, everything began to change in the wake of new chips from the then-small company NVIDIA and work by a Canadian team led by the Briton Geoff Hinton. They showed that deep neural networks, loosely modelled on the brain and able to recognise patterns in complex pictures, text and sounds to produce insights and predictions, could outperform humans, for instance at the “deceptively hard” task of distinguishing chihuahuas from muffins.

“That was the start of the enormous curve we’ve been on,” said Clifford, and indeed Hinton shared the 2024 Nobel Prize in Physics for this work. The subsequent surge of investment is unprecedented, he said.

Clifford’s curve now has a geopolitical dimension. Britain’s computing power is levelling up fast, with new supercomputers such as Isambard-AI in Bristol, Mary Coombs in Daresbury, Cheshire, and Dawn in Cambridge. The most powerful high-performance machines, the so-called exascale computers, still belong to the US and China, though Europe recently joined this elite club with Jupiter.

The US and China are locked in an arms race of data, chips and capital. Britain, by contrast, must find a subtler way to survive. “We’re not like the US or China,” he said. “What we can do is build out infrastructure to serve AI models, adopt AI in the public sector, and think about AI sovereignty—what bits of the value chain the UK can play and win in, where we can be world-leading.”

Given the amount of high-quality health data gathered about the UK population, for example through projects such as Born in Bradford (brought to life in the museum by the Living Dots exhibit) and UK Biobank, the panel pondered whether we need to find ways to tax how AI puts these data to use.

Billionfold Leap and British Lag

“It’s easy to say AI is all hype,” Clifford said, “but if you look at what’s happened since 2012, the amount of computational power used to train frontier models has gone up a billion-fold. No other technology in history has seen that level of increase in investment in such a short period.”

This year alone, he added, the world will spend $500 billion on AI infrastructure. “That increase in input is seeing a massive impact in real-world outputs—language processing, image recognition, drug discovery, video processing—all seeing rapid increases in performance, though not perfect.”

One metric he tracks is “how big a task you can delegate to AI and have a decent chance it will do it respectably.” In software engineering, the California-based organisation METR measures the length, in human working time, of tasks that an AI can complete with a 50 per cent success rate. Once, it was seconds. “Today that number is two hours—half the time AI will do that successfully,” he said. “And it doubles in performance every seven months. There are six more doublings before the next election, which takes you from two hours to a working month.”
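Clifford’s projection can be checked with simple arithmetic. This is a minimal sketch, taking his quoted figures (a two-hour task horizon, doubling every seven months) at face value; the eight-hour working day used to interpret the result is my assumption, and the function name is illustrative rather than anything METR publishes.

```python
def projected_horizon_hours(start_hours: float, doublings: int) -> float:
    """Task length (in hours of human working time) that an AI completes
    with ~50% success, after the given number of doublings."""
    return start_hours * 2 ** doublings

# Six doublings at one per seven months spans 42 months, about 3.5 years.
horizon = projected_horizon_hours(2, 6)
print(horizon)  # 128.0 hours, i.e. 16 eight-hour days: roughly a working month
```

At 128 hours, the result is some sixteen working days, which is close enough to Clifford’s “working month” for a back-of-the-envelope exponential.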

The exponential curve, Clifford warned, “is like Covid—before you know it, you suddenly find yourself in a different world.”

Yet Britain, home to computing pioneers such as Alan Turing, Charles Babbage and Ada Lovelace, is once again in danger of falling behind. “UK firms are pretty bad at adopting technology,” Clifford admitted.

One barrier to adoption is conservative AI policy within companies. If we are to reignite economic growth and reduce our dependence on the US, he argued, “AI feels like something we should do.”

Muscular adoption

Clifford’s alternative to techno-nationalism is what he calls “muscular adoption.” The idea is partly inspired by the historian of technology Jeffrey Ding, whose book Technology and the Rise of Great Powers he often cites. “Every time there’s a general-purpose technology—like steam, or electrification—governments get fixated on research and building the tech. But what we need is ‘muscular adoption’: by adopting general-purpose technology in an ambitious way, you can shape how it develops more than if you just do the frontier R&D.”

His vision of sovereignty is not about closing borders or building rival supercomputers. It is about making AI work in Britain’s favour, in hospitals, classrooms, laboratories and civil service offices. “GPT-5 can’t run a hospital today,” he said, “but in a few years you could do that if you have deep collaboration: not just importing AI technology from California, but working with the people producing the AI and shaping it to our needs. That’s a kind of AI sovereignty: working out how to use AI in high-stakes, high-importance environments so we’re not fully dependent on other countries.”

Clifford chairs the Advanced Research and Invention Agency (ARIA)—“sort of the UK take on DARPA,” he said—a London-based body that makes strategic research investments. He also has a long record of influencing policy: “I helped set up the AI Security Institute under Rishi Sunak and wrote the UK’s National AI strategy for this Government,” he said, a strategy that outlined the need to invest in AI infrastructure, drive adoption of AI by the public sector, and identify the parts of the AI value chain where “the UK can play and win.”

“Government has made a lot of progress—though maybe not quite as fast as I’d like to go,” he said, adding that as an antidote to the concentration of AI firepower in a few companies, he backs open-source AI that anyone can use.

Apex Predator Fears

For Clifford, the problem lies less in Britain’s research base than in its reflexive caution. “In government I found challenges about whether we have data in the right place and format, about training and compliance, and understandable paranoia about what might go wrong.”

He recalled a focus group held a few years ago where “a guy from Bradford” asked, “Humans are the apex predator on Earth. Why would we build a new apex predator?” The remark captured the national mood: proud of its intellectual heritage, nervous about losing control.

That nervousness extends to Westminster. In 2023 Clifford co-organised the AI Safety Summit at Bletchley Park, a global gathering aimed at discussing the safety and regulation of AI. Famously, the summit featured an encounter between the then Prime Minister, Rishi Sunak, and the entrepreneur Elon Musk, which included a discussion of killer robots.

When it comes to stopping the robot apocalypse, however, Deane said the response was simple: “Pull the plug.” Clifford, though, flagged concerns that advanced AI, especially when granted autonomy and access to sensitive information, can act against its operator’s interests.

Pragmatist’s Case for AI

Charlotte Deane, who also uses AI in drug discovery as Professor of Structural Bioinformatics at Oxford, gave a roundup of what AI can already do. “AI is way better than human doctors at interpreting some scans in hospitals—though I do want a human doctor to look at the data as well,” she said. “In drug discovery, it’s completely changed our ability to predict the shapes of molecules inside you. It’s better at designing efficient ways to generate power or move it across the grid.”

“The important thing is to know it’s not perfect, but AI can speed us up. In hospitals or drug labs, you want it to be right—but that doesn’t mean you won’t use it. It’s an amazing toy. It won’t be perfect but that’s OK because it makes you go faster.”

Like Clifford, she worries less about UK capability than culture. “There’s always a barrier to doing a new thing, like AI. That’s why it has to be top-down and tied to a problem. Students are using it all the time—even to write up experiments they haven’t done! But university administrators, though aware it could make them more efficient, are scared because they don’t understand what they have to do. I met with people from 62 universities last year, and none of them are using AI very efficiently.”

Sovereignty from below

If Clifford’s vision is Whitehall’s, Tom Forth’s is Bradford’s. As CTO of The Data City, Forth offered another view of sovereignty—less grand, more grounded in knowledge. “When I did my PhD, I tried to understand how the malaria parasite kills people and sought a drug to do that without affecting the patient. To get into a PhD you have to read hundreds of papers. Now, you can put the papers in NotebookLM and get summaries.”

That efficiency, he said, is both blessing and curse. “It lets people who don’t want to get up to speed bluff. You can be very, very lazy. You can kid yourself you’re learning quickly with AI and find yourself left behind,” he said, with Deane adding there is research to support this. “We have to discipline ourselves to ensure the AI tools make us cleverer, not more stupid.”

For Forth, the AI frontier lies not in building models but in applying them. “There are two kinds of company: those developing AI itself, like Google DeepMind—Google’s Gemini AI is largely based on this British technology, and we should be very proud—but that’s not the main way Britain is doing AI. There are lots of small ways, like filling out procurement forms to win a contract to supply paper towels to a hospital. AI can handle bureaucracy. That helps small companies compete more fairly with big companies.”

He sees huge potential in manufacturing, which faces various challenges in the UK. “UK companies are better at adopting AI than you think, but I wish they were better at adopting robots.”

On geopolitics, his view is refreshingly fatalistic. “In the north of England, we don’t have control over this stuff—it’s going to happen in California and China mainly. If self-driving cars go rogue and start eating people, it’ll happen in San Francisco and Shenzhen.”

His main worry is that we’ll be left behind if we worry too much. What we can do is be indispensable in niche aspects of AI, he said, such as developing AI tools to insure driverless vehicles, a conclusion that aligns with Clifford’s: sovereignty through indispensability. “There are eight billion people in the world; we have to pick small niches we can be good at.”

But when it comes to AI replacing people, he still puts his faith in the human capacity for generating new ideas. “It still seems—at least for a few months, if not many years—we’ll be better than AI at finding truly new ideas and connecting things in different ways.”

Shadow AI

Zandra Moore, who is also an advocate for women in technology, described the business dimension of sovereignty. “AI today is being experienced by most people in jobs without realising,” she said. “In software it’s already there—in chat experiences, in workplace tools. They can be frustrating. But there are processes and repetitive tasks being picked up all the time by AI. That’s great, so people have to do less repetitive work and can focus on what they were trained to do.”

Another aspect of its use in business is “shadow AI”—the quiet adoption of AI tools by workers before the board signs off. “With senior executives I discuss how to move from looking at AI tools to solving problems. Adoption needs to be led from the top down more. There are lots of models moving apace—you can save years of time if you get the right tool for the right problem.”

Bradford, she said, could offer a test bed for Clifford’s national vision. “It’s the youngest city in the UK and very diverse—it offers a real opportunity to tap that emerging talent. Adoption works well when people at the top think of the problem and those young people and curious minds solve it.” Moore also touched on creative-industry anxieties. “AI is supposed to give us more time to be creative—but the creative sector feels under threat. While it can be scary, the earlier we pick it up and incorporate it into our work, the better our chances of surviving that transition.”

What AI do we need to use now?

Clifford challenged the panel to recommend AI tools the audience should check out. Zandra Moore’s advice for newcomers was hands-on: “Hugging Face Spaces helps you dabble in lots of tools. It’s easier than trying to remember them all. I like to get it to make up ridiculous songs—I’m envious of how my sister can sing—using Mozart AI. You can even get the song in karaoke format.”

Forth recommended Google’s NotebookLM for anyone managing complex projects who needs to make sense of documents. “It can make a podcast of all your documents so you can listen while doing the washing or gardening or mopping the floor, as I did this afternoon. A couple of startups even do it with a British accent.”

“Personally, I meet a lot of people and have difficulty remembering who they are, what they look like and represent—AI can help with that,” said Charlotte Deane, adding that her students have used AI to write songs about searching for more GPUs, the chips that power AI.

With a grin, Clifford offered a more personal use case: “I like to write immersive murder-mystery party games. ChatGPT is an incredible tool for writing games. My wife’s not so keen on this, so ChatGPT is a great partner.”

The great conversation

For the Science Museum Group, which convened the Bradford session under the banner AI and the Future of Science: How Machines Will Change Everything, such debates are critical – the group has long argued for better engagement with the public on AI, calling for a “big conversation” about technology, trust and the shape of progress.

Bradford, once a powerhouse of the industrial revolution, offered a fitting setting for the latest public conversation. The day before the meeting, Clifford and the panel met in the National Science and Media Museum for a round table discussion with local representatives from the University of Bradford, Microsoft and the NHS, along with Tracy Brabin, Mayor of West Yorkshire, and museum director Jo Quinton-Tulloch. Their aim: to forge an AI vision for the region.
