Document Type

Article

Publication Date

2025

Abstract

As hype around the transformative effects of large language models (LLMs) has taken center stage in popular culture, some judges and legal scholars have suggested that LLMs have the potential to improve the objectivity of judicial decision-making. Proponents argue that using LLMs to find empirical 'evidence' of legal text's meaning can reduce the role of judges' subjective choices, ensuring that judicial rulings faithfully reflect the people's understanding of legal rules, and grounding legal interpretation in a sophisticated empirical investigation of real language use in social context. To the contrary, we argue that LLM jurisprudence underscores the discretionary decisions required to infer ordinary meaning; highlights the inescapable reality that the meaning and application of legal terms are inherently normative; and demonstrates the lack of democratic legitimacy of crowdsourcing legal meaning. We argue that the feature of LLMs that makes them so seductive for legal interpretation – their potential ability to approximate 'ordinary' people's understanding of legal text – reveals the political illegitimacy of empirical judging. We conclude with recommendations and warnings for practitioners in this space.