‘Judgment is Turned Backwards and Justice Standeth Afar Off’*: Considerations of AI Adjudication via LLMs

Written by Demetrius Floudas


Original graphics created by Zara Gounden, New York University

This is Part II of the essay.  Part I can be found here.

The essence of the judicial process is inextricably linked to its human element. This transcends sentimentality: magistrates are not merely technocrats applying predetermined rules; they are interpreters of law who weigh ambiguous evidence for factual probability and moral resonance, apply discretion informed by societal context and lived experience, and consider the broader social implications of their decisions. This phronesis[1] has always been considered essential for the maintenance of justice. The deep-seated belief that only live persons can grasp the nuances of morality, empathy, and psycho-social context has been the bedrock upon which judicial legitimacy rests.[2]

At the heart of this debate lies the concept of natural justice, which holds that certain principles of fairness and equity are inherent in the human condition.[3] These are seen as indispensable to the proper functioning of the legal system and include the right to a fair trial, the right to be judged by an impartial tribunal of one’s peers, the right to be heard, the right to reasoned deliberation and several other interconnected bulwarks. Evolved over centuries of jurisprudential development, they rest upon a foundational assumption that has remained largely unexamined until now: that dispensing justice requires essentially human capacities. This assumption lies so deeply embedded within the substratum of the juridical discipline that, were we to attempt removing it, the whole structure could come crashing down like a Mikado tower.

For millennia, the only exception to this person-centric model has been the appeal to a divine authority—theodicy. In trials by ordeal or combat, judgment was left to God; the role of mortals was to administer the process and deduce the divine will.  Modern states have replaced the deity with the people,[4] but the core precept remains: lawfulness is an act of man’s conscience, moral agency and the enduring social contract.

In Part I of this essay, it was posited that within a few years judicial systems shall face a disagreeable Scylla-and-Charybdis dilemma: either reject AI-generated filings and perpetuate the quandary of the current status quo, or embrace self-operating adjudication in order to manage the inevitable procedural cataclysm.

Although not conceived to grapple with this particular predicament, the 2025 LLMxLaw event was, like its 2024 predecessor, impeccably organised, well-attended and truly worthwhile. Informal chats with several participants, speakers and entrepreneurs proved quite illuminating for the purposes of drafting the second segment of this analysis. Talented and well-informed, the techno-optimist hackers were invariably dedicated to creating unencumbered access to tribunals for the end-users of their outputs. When approached about possible unforeseen outcomes of removing barriers to entry, the responses were intriguing: many professed that this was the unavoidable shape of a computerised future, whereby a non-artificial judiciary would be confined to a Supreme Court, overseeing the vast Machinery (literally) of Justice operating under it; some remained adamant that AI legal app proliferation serves some greater good, even when its immediate fallout appears catastrophic; others rejected the danger to juridical systems altogether, dismissing many concerns as alarmism.

A very common objection was that their impressive handiwork would serve simply as an auxiliary means for litigation, utilised by grateful live counsel to streamline arduous workflows that would otherwise consume weeks. What may happen to the hapless judges who would consequently be inundated by boundless oceans of motions and data appeared to be a conundrum that few had ever reflected upon – not even the qualified attorneys. Certain delegates presented the argument that, before long, more advanced versions of AI Co-Counsel tools could come to the rescue of overwhelmed magistrates in the guise of chatbot Co-Bencher systems, which would then bravely digest and evaluate the incoming deluge. Nevertheless, as we now comprehend the fundaments of due process, such a framework for a silicon amicus curiae would require that every AI-generated finding be subsequently certified by a live expert – which brings the situation back to square one…

 

Even as a thought experiment, the sheer momentum of technological advancement and systemic pressure would not allow the perpetuation of a comfortable, stable, individual-in-the-loop equilibrium. Once a mechanised bench demonstrates superior speed and cost-efficiency in traffic disputes or debt-collection cases, pressure will mount to expand its remit, driven by perennially frightful backlogs and budgetary constraints.[5] Besides, the notion of an individual arbiter effectively auditing thousands of computer rulings daily is an administrative fantasy. Oversight will quickly degrade into brisk rubber-stamping, replicating the governance dilemmas observed in social media content moderation.

Ultimately, we encounter again that unpalatable choice: if the alternative is a structure so clogged that it offers no resolution at all, is not a swift, impartial, robotic decision then preferable?[6] Logical, predictable, and to all intents and purposes blind? Yet we have already observed how technology reshapes society without consent; before this new frontier, we must strive to ensure that benches remain not just efficient but also humane and equitable.[7] Beyond the practical challenge, an algorithmic bench poses a philosophical one.[8] It forces us to contend once again with those nagging questions about the nature of regulation, the role of ethics, the limits of technology, the extent of individual fallibility, and judicial legitimacy.[9]

On a side – but pertinent – note, the perfunctory adoption at the hackathon of gauche business-school buzzwords when contriving interventions in the legal system was somewhat disconcerting: hype about ‘systemic disruptors’ or ‘legaltech moonshots’ during discourse on the future of law sounds as jarring and disquieting to a jurist as enthusiastic encouragement of mass euthanasia at an NHS policy forum would to a medic. Justice is emphatically not a product; its ‘disruption’ carries existential societal risks!

Academic institutions bear particular responsibility at this juncture. The King’s E-Lab must sustain and even expand its exploration of how LLMs might benefit lawyers. Maintaining a secure distance from gamified ‘law apps’, it can foster constructive opportunities to promote responsible technology and adopt an articulate strategic vision that shall inform future research and events on the parameters of automated judgments. The E-Lab is in a position where it may well emerge as a trailblazer in the promotion of mature and trustworthy adjudicatory tools – instead of additional gizmos for pressed-for-time advocates. It can actively engage in designing, stress-testing and cross-evaluating experimental algorithmic decision-making models, as well as incubating holistic solutions that could act as safeguards of legal systems, designed to uphold and buttress rather than ‘disrupt’. Such a coherent and far-sighted strategy can – and should – become its priority in this area.

In conclusion, the crux will not be whether AI can judge us, but to what extent we may be willing to be judged by it. In answering that question, one must remember that the pursuit of justice is not merely a technological endeavour but a profoundly human one. The assumption that machine adjudication represents inescapable progress, rather than a choice requiring cautious deliberation, may be a gratuitous abdication of human agency in shaping mankind’s institutional future.

* “Judgment is turned away backward, and justice standeth afar off: for truth is fallen in the street, and equity cannot enter”.  Isaiah 59:14.

[1] Φρόνησις - practical wisdom.

[2] Vidaki & Papakonstantinou, Democratic legitimacy of AI in judicial decision-making. AI & Soc (2025).

[3] Sourdin, What if judges were replaced by AI? SSRN Electronic Journal (2022).

[4] The continued use of oaths in contemporary justice worldwide is a notable legacy of ancient theodicy rules.

[5] Sourdin, Judge v robot? Artificial intelligence and judicial decision-making. New South Wales Law Journal 41(4), 1114–1133 (2018).

[6] Helberger, Araujo & de Vreese, Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review 39, 105456 (2020).

[7] Gravett, Judicial Decision-Making in the Age of Artificial Intelligence. In: Sousa et al (eds), Multidisciplinary Perspectives on Artificial Intelligence and the Law. Law, Governance and Technology Series 58. Springer, Cham (2024).

[8] Martinho, Surveying judges about artificial intelligence: profession, judicial adjudication, and legal principles. AI & Soc 40, 569–584 (2025).

[9] Winter, The Challenges of Artificial Judicial Decision-Making for Liberal Democracy. In: Bystranowski et al (eds) Judicial Decision-Making. Economic Analysis of Law in European Legal Scholarship, vol 14. Springer.


Demetrius A. Floudas is a practising Advocate; Visiting Scholar in AI Governance at Downing College, Cambridge; and a member of the AI@Cam Unit. He is an Artificial Intelligence policy-maker and regulatory strategist; he has contributed, in the drafting Plenary and WG2+3, to the EU AI Office’s Code of Practice for General-Purpose Artificial Intelligence; to the UNESCO Guidelines for the Use of AI in Courts & Tribunals; and to the OECD risk thresholds for advanced AI; and is the Editor of the ‘AI & Law’ section of the PhilPapers academic repository. Demetrius is also Adjunct Professor at the Law Faculty of Immanuel Kant Baltic Federal University in Kaliningrad, where he lectures on Artificial Intelligence Regulation; a Fellow of the Hellenic Institute of Foreign and International Law; and an AI Expert at the European Institute of Public Administration. He previously served as Regulatory Policy Lead of the Global Trade Programme at the UK Foreign, Commonwealth & Development Office and as head of the scientific team preparing the revision of the Civil Code of Armenia. In his spare time, he provides commentary to a number of international think-tanks and organisations, and his views frequently appear in media worldwide.

