Document Type
Article
Publication Date
2025
Abstract
As hype around the transformative effects of large language models (LLMs) has taken center stage in popular culture, some judges and legal scholars have suggested that LLMs have the potential to improve the objectivity of judicial decision-making. Proponents argue that using LLMs to find empirical 'evidence' of a legal text's meaning can reduce the role of judges' subjective choices, ensure that judicial rulings faithfully reflect the people's understanding of legal rules, and ground legal interpretation in a sophisticated empirical investigation of real language use in social context. To the contrary, we argue that LLM jurisprudence underscores the discretionary decisions required to infer ordinary meaning; highlights the inescapable reality that the meaning and application of legal terms are inherently normative; and demonstrates the lack of democratic legitimacy of crowdsourcing legal meaning. We argue that the feature of LLMs that makes them so seductive for legal interpretation – their potential ability to approximate 'ordinary' people's understanding of legal text – reveals the political illegitimacy of empirical judging. We conclude with recommendations and warnings for practitioners in this space.
Recommended Citation
Dasha Pruss & Jessie Allen,
Against AI Jurisprudence: Large Language Models and the False Promises of Empirical Judging,
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES), forthcoming
(2025).
Available at:
https://scholarship.law.pitt.edu/fac_articles/626
Included in
Artificial Intelligence and Robotics Commons, Constitutional Law Commons, Courts Commons, Critical and Cultural Studies Commons, Judges Commons, Jurisprudence Commons, Law and Philosophy Commons, Law and Society Commons, Legal Ethics and Professional Responsibility Commons, Legal Theory Commons, Science and Technology Law Commons, Science and Technology Studies Commons, Speech and Rhetorical Studies Commons