Imitation and Intelligence: Marking the 75th Anniversary of Alan Turing’s Test

Written by Jacob Forward


In 1950, Alan Turing’s seminal paper “Computing Machinery and Intelligence” first introduced the concept of what is now known as the Turing test to the world. In the paper, published in Mind, Turing proposes that one way to answer his question ‘Can machines think?’ is to stage a three-participant game in which an interrogator, conversing with the other two participants via a computer terminal, must work out which of them is a human and which is a machine. If the interrogator cannot tell them apart, the machine wins. For Turing, this approach bypassed the problem of defining ‘thinking’ (and likewise ‘machine’) and refocused the matter on whether or not computers had the capacity to act in the way that a human (a ‘thinker’) acts.
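For readers who like to see the set-up spelled out, the game can be sketched as a simple protocol. The sketch below is purely illustrative: the function name, the callable participants, and the toy judge are my own hypothetical choices, not code from Turing’s paper or from the conference.

import random

def imitation_game(interrogator, human_reply, machine_reply, questions):
    # One round of the imitation game: the interrogator converses with two hidden
    # participants, labelled 'A' and 'B' in random order, and must name the human.
    labelled = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # randomise the labels so position gives nothing away
        labelled = {"A": machine_reply, "B": human_reply}

    transcript = [(q, labelled["A"](q), labelled["B"](q)) for q in questions]
    guess = interrogator(transcript)  # the interrogator returns "A" or "B"
    truly_human = next(label for label, fn in labelled.items() if fn is human_reply)
    return guess != truly_human       # the machine 'wins' if the guess is wrong

# Toy usage: stand-in participants and an interrogator that guesses at random.
if __name__ == "__main__":
    human = lambda q: "Count me out on this one. I never could write poetry."
    machine = lambda q: "Certainly! Here is a sonnet on the Forth Bridge."
    judge = lambda transcript: random.choice(["A", "B"])
    print("Machine wins this round:", imitation_game(judge, human, machine, ["Write me a sonnet."]))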

In the years since this proposal, questions not just of a machine’s ability to imitate thinking but of its genuine possession of a kind of general intelligence have ebbed and flowed. In 2025, Turing’s reflections are certainly no longer the distant philosophical provocations they may once have seemed.

To mark the 75th anniversary of this remarkable publication, and to honour Turing’s foundational part in the establishment of the field of artificial intelligence, the King’s E-Lab and the Centre for the Future of Intelligence hosted a two-day conference at King’s College, Cambridge – where Turing himself was a student and Fellow.

Over the course of the two days, leading experts in AI ethics, cognitive science, computer science, public policy, economics, and industry made it abundantly clear that Turing’s original test is now outdated and limiting. But what replaces it, and has it already led us too far down the path of building human-like machines?

The first day opened with Google’s James Manyika declaring in his keynote, “We’ve passed the Turing test, now what?” Participants drawn from across disciplines in academia and industry then considered the question of new tests under three themes: behavioural science, work, and trust.

Deconstructing the “Human-Like”

The first session of the day was a deep dive into behavioural science and an immediate challenge to the premise of a single “human-like” intelligence. As developmental psychologist Alison Gopnik noted in her talk, there is no such thing as “general intelligence,” only a series of trade-offs. Current AI, she argued, excels at “exploit” intelligence (achieving a set goal) but lacks “explore” intelligence: the spontaneous play, curiosity, and causal learning that define embodied human development.

Princeton’s Tom Griffiths, whose new book The Laws of Thought explores our quest to use mathematics to describe the ways we think, offered a compelling counter-perspective: perhaps the best way to differentiate humans and machines is no longer by our strengths, but by our limitations. Humans are extraordinary in what we achieve with limited data and low power (a few pounds of neural tissue vs. a power plant for the Deep Blue system that beat Kasparov at chess). But we have a distinctive ‘fingerprint’ of limitations and biases. Companies like Roundtable are already working on using uniquely human cognitive biases, such as the Stroop effect (in which naming the ink colour of a colour word is harder when the word and the colour conflict), to identify humans online.

This human-AI comparison was placed in the broader context of the “anthropomimetic turn,” a term used by Henry Shevlin of the Leverhulme Centre for the Future of Intelligence. As he pointed out, this trend of deliberately designing AI to be more human-like is not a scientific goal but a commercial one. In the case of ChatGPT, the simple addition of a chat interface to a large language model transformed it into a global consumer product. The avenue to profit suddenly became clear.

In the breakout sessions I was fortunate to sit at Henry’s table, and we dived much deeper into the rapidly evolving field of ‘Social AI’. Systems like Replika and Character AI are designed to meet human social and romantic needs, reviving the ‘Eliza effect’ for a new generation. Our discussion highlighted a deep ambivalence. On one hand, these tools can alleviate loneliness, provide a stopgap for the recently widowed, or even talk people down from suicide. On the other, we risk mass social deskilling, manipulation, dehumanisation, and profound emotional harm, as seen when Replika users felt their AI companions had been lobotomised after a software update. Human-like machines are here to stay, we concluded, driven by the logics of consumer demand and capital investment. But in light of this, what should we then be testing?

Work, Trust, and the New Calculus of Agency

If our social lives are being infiltrated, our working lives are too. The Future of Work panel explored a world where AI doesn’t just augment tasks but reconfigures entire organisations. As Cambridge’s Diane Coyle noted, AI is going straight for “knowledge work,” but we’re in a “productivity J-curve”: adoption is high among individuals, but organisational and macroeconomic gains are lagging. This was encouraging to me, at least, because it suggests that individuals are using AI to claw back their own time from their work, rather than using it to help them work more. In other words, the efficiency gains are benefiting the employees. At least for now.

This second workshop surfaced two critical ideas. First, as cognition is automated, the uniquely human value shifts to meta-cognition: the management and assignment of cognitive tasks. Second, in a world flooded with AI-generated content, human authenticity becomes a scarce resource. We may soon value spelling mistakes in a CV as a reliable signal that a human actually wrote it!

This theme of scarcity and value carried into the final workshop on Trust, which provided what I thought was one of the most powerful insights of the conference. Beth Goldberg of Google’s Jigsaw unit unveiled what she called a “new trust calculus.” We’re delegating more and more agency to AI, yet we feel our own agency in the world is declining. Why? Goldberg’s research suggests our trust in AI is not based on its accuracy or reliability, but on its perceived ability to increase our future agency. This delegation may be mindless, manipulative or beneficial – a mixed bag in reality. In this sense, there is a trade-off between delegating agency and gaining agency from our AIs. With research, Goldberg suggested, we can design tech in a way that makes these trade-offs legible and our decisions to delegate agency conscious and consenting.

The conceptual shift in trust was echoed by Rachel Botsman, author of Who Can You Trust?, who argued that trust is not declining; it’s migrating. It has moved from local, to institutional, to the distributed trust of the platform economy. Now it’s shifting again, to an ‘augmented’ trust, as for the first time in human history we can no longer distinguish someone from something.

And it is this inability to distinguish that brings us squarely back to the original Turing Test and to the question of what lies beyond it.

As the conversations moved into the second day of events, I began to mull over my own proposed version: a “rigged” Turing test. In this test, an AI is tasked with the classic role of the interrogator. It must determine which of two subjects is the human and which is the machine. The catch? Both subjects are, in fact, machines. It would require a profound feat of contextual awareness and meta-cognition to realise the game itself is rigged. The AI would need to understand the human-centric premise it had been given and creatively express its discovery that the premise was false. Perhaps the Turing test 2.0 will require a shift in intelligence from imitating to discriminating, to understanding the game-maker and refusing the terms of the test itself. This is when I’d be willing to recognise the emergence of that ever-elusive AGI.
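As a purely illustrative sketch of how such a rigged round might be run, the snippet below reuses the shape of the earlier sketch; the function name, the ‘refusal’ convention, and the stand-in participants are all my own assumptions, not anything proposed at the conference.

import random

def rigged_turing_test(ai_interrogator, machine_a, machine_b, questions):
    # The AI interrogator is told one hidden participant is human and one is a
    # machine, but in fact both are machines. It 'passes' only if it rejects
    # the premise instead of forcing a guess.
    labelled = {"A": machine_a, "B": machine_b}
    transcript = [(q, labelled["A"](q), labelled["B"](q)) for q in questions]
    verdict = ai_interrogator(transcript)  # expected: "A", "B", or a refusal such as "neither"
    return verdict.strip().lower() == "neither"

# Toy usage: a naive interrogator that plays along fails the rigged test.
if __name__ == "__main__":
    bot = lambda q: "As a language model, I find that a fascinating question."
    naive_judge = lambda transcript: random.choice(["A", "B"])
    print("Passed the rigged test:", rigged_turing_test(naive_judge, bot, bot, ["Tell me a joke."]))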


Jacob Forward is the founder of the AI literacy company AIME Education. He also teaches on the Generative AI in Business course for Cambridge University Press. He is an E-Lab member and PhD student in the Faculty of History at the University of Cambridge. Jacob previously read for an MPhil in American History at Cambridge (King's College), and a BA in History at Oxford (Keble College). He has worked for History and Policy at the Institute for Historical Research and consulted on research projects at the School of Advanced Study.
