Toksvig on Turing: The Spirit of Humans (and Berries)

Written by Sophie Harbour


Early computer memory, Toksvig remarked, almost ran on gin. That’s right: the alcoholic juniper-berry beverage, which has enjoyed a revival of popularity in the UK, was Alan Turing’s proposed solution to the challenge of memory storage for the digital computer.[1] With properties similar to mercury, gin, Turing suggested, contained alcohol and water in just the right proportions to carry the vibrations needed to store data. 

What an enticing idea, the host remarked, for the foundation of such revolutionary technology.

Yet deeper reflection on the risks of AI can have an ironically sobering effect. These risks deserve careful consideration to balance the otherwise tipsy glow of a technologically transformed future, and the risk that framed the day’s discussion was bias: specifically, the potential for such tools to reproduce it.

The concern over how AI may perpetuate bias has seen a steady rise. Studies, for example, suggest its influence can amplify our own existing biases and prove worryingly persistent. In addition, as Toksvig demonstrated herself, gender and race stereotypes feature prominently in generative AI. They appear in obvious ways: simple prompts to generative AI tools produce doctors who are men and carers who are women, or start-up founders who bear a continual resemblance to young “tech bros”. They also appear in subtler ways: the absence of certain experiences in training data means that whole worlds can be missing as a reference point for these tools, as Toksvig humorously demonstrated by recounting her failed attempts to make an image generator produce a suitable image for her well-crafted sewing pun on ‘a bias cut’.[2] Repeated efforts resulted in increasingly “butch” pictures of what cutting looks like (usually some form of electric saw or large knife) – sewing apparently not in the “catalogue of possibilities”.

How much danger are we in of integrating, at pace and at scale, a set of technologies that seem to elicit histories of societal bias? And what might be the consequences? How much of the worlds we inhabit (and crucially of the worlds of ever-younger students who have little grasp of a reality without AI) will see stereotypes become a foundation of the material that trains our systems of information? Where can bias lead?

With Turing as the inspiration of the lecture series, it is important to remember that his legacy demonstrates not only the world-changing possibilities of intellectual endeavour but also the cruel consequences of societal judgement. In 1952, Turing was arrested and charged with ‘gross indecency’ after a brief relationship with another man, and he accepted chemical castration as an alternative to imprisonment. Many believe that Turing went on to take his own life, although some suggest the suicide ruling is inconclusive.

If artificial portrayals of the world are powerful and if they do produce bias, will they bleed into reality with harmful consequences? And how do we address these risks?

We may have the opportunity to correct the biases displayed in AI systems – and perhaps such tools have the potential to serve as a check on indelible human biases – but the way to achieve this requires time and effort. In Toksvig’s lecture, two themes stood out for their ability to help in the endeavour to embrace the potential of AI while remaining wary of its risks: Humour and curiosity are the order of the day.

Humour

It is a particular kind of talent to employ humour in a manner that puts an audience at ease from one’s first spoken words, but lifting spirits with wit is a practice that the former host of QI has mastered. To balance the weight of AI’s risks and consequences, Toksvig elicited as much laughter as she did worry.

Humour is also a powerful tool that does more than lighten a mood. Laughter can open minds, setting ajar a door through which more serious reflections can often be prompted. While the consequences of prejudice are no joke, the manner with which we approach the daunting task of addressing these challenges and of having these difficult conversations matters. Humour, Toksvig suggested during audience questions, is not a form of pandering but something that “keeps life going”. It is also something that builds connection, and this kind of connection is invaluable in the endeavour to explore and address the risks of AI.

Curiosity

Humans know a lot, and across the globe, societies have amassed a wealth of knowledge about the natural and biological worlds.

This, however, should never prevent us from recognising what we still do not know. While it is not uncommon to come across the statement ‘we know little about AI – its workings, its potential, its consequences’, it is easy to forget that the same is true of many other, more tangible parts of life. Our ignorance of things that now seem ordinary in comparison to AI remains profound. In everything from navigation to emotion, for example, we are often still unsure how the human brain functions.[3]

Every day we learn new things about our world, despite having inhabited it for millennia. Just this year, a “new” butterfly species was discovered in Canada’s Rocky Mountains. The aptly named Satyrium curiosolus, or Curiously Isolated Hairstreak, makes the point. We need curious minds.

And finally, as a complement to this curiosity, we should encourage humility in our pursuits. We must be more curious not just about the world around us, but about how others have experienced it. This forms part of the motivation for Toksvig’s latest project, Mappa Mundi, which compiles the experiences of women – of hardship and of triumph – in a compelling atlas of stories.

It takes serious efforts to listen to, and to document, stories that have been obscured in the collection of global knowledge systems. It is to this task that attention should fully turn. This is crucial if we are to ensure that it is a multitude of recordings of reality, and of knowledge, that form the basis of the training data for future AI systems.

Overall, the 2025 Alan Turing Lecture helped to visualise the dangers of generative AI given the human potential for, and history of, bias. It also demonstrated the value of using truly human tools – humour and curiosity – to tackle the risks that this bias creates. It left most with a little faith in the possibility that the human spirit is a necessary balance to the apparent wonders of a more artificial future (though perhaps that it might also need, every now and again, a tot of a juniper berry ‘spirit’ to help it along).

 

The full recording of Toksvig’s lecture can be found here: https://www.youtube.com/watch?v=AGGu8x7MzFg

[1] According to a speech by Wilkes (see speech here)

[2] A ‘bias cut’ is a technique in sewing in which fabric is cut on the diagonal grain (at 45 degrees) rather than along the straight or cross grain, giving the fabric more drape, stretch, and a flattering fit.

[3] For an example of the former, read Dark and Magical Places: The Neuroscience of Navigation (2022) by Christopher Kemp; for the latter, Lisa Feldman Barrett’s How Emotions Are Made: The Secret Life of the Brain (2017) is brilliant.


Sophie Harbour is the King’s E-Lab Coordinator and a Research Associate at the Cambridge Peaceshaping, Climate, and Conflict Lab (CPCCL). She earned her PhD in Political Theory from the Department of Politics and International Studies at the University of Cambridge in 2025, with research focusing on the potential of care ethics to inform approaches to political representation and accounts of political judgement. In addition to her academic research, Sophie has served as an external research consultant for Boston Consulting Group (BCG) and the Organisation for Economic Co-operation and Development (OECD), focusing on theories of climate mobility, water governance, and social innovation.

 