Most readers of this Article probably have encountered – and been frustrated by – password complexity requirements. Such requirements have become a mainstream part of contemporary culture: "the more complex your password is, the more secure you are, right?" So the cybersecurity experts tell us… and policymakers have accepted this "expertise" and even adopted such requirements into law and regulation.

This Article asks two questions. First, do complex passwords actually achieve the goals many experts claim? Does using the password "Tr0ub4dor&3" or the passphrase "correcthorsebatterystaple" actually protect your account? Second, if not, then why did such requirements become so widespread?
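The comparison above echoes the well-known XKCD #936 argument, which can be made concrete with a back-of-the-envelope entropy calculation. The figures below are illustrative assumptions (word-list sizes and "tweak" bits are stipulated, not measured), but they show why a long passphrase of common words can be harder to guess than a short "complex" password:

```python
import math

# Illustrative entropy estimates (assumed parameters, following the
# XKCD #936 comparison; real-world strength depends on the attacker's model).

# Passphrase: 4 words drawn uniformly at random from a 2048-word list.
passphrase_bits = 4 * math.log2(2048)  # 4 * 11 = 44 bits

# "Complex" password: one common base word (~16 bits if guessed from a
# ~65,000-word dictionary) plus predictable capitalization, digit/symbol
# substitutions, and a suffix, each adding only a few bits.
word_bits = math.log2(65536)   # 16 bits for the base word (assumed)
tweak_bits = 12                # assumed total for caps, l33t swaps, suffix
complex_bits = word_bits + tweak_bits  # 28 bits

print(f"passphrase ~= {passphrase_bits:.0f} bits")
print(f"complex password ~= {complex_bits:.0f} bits")
```

Under these assumptions the passphrase is roughly 2^16 (about 65,000) times harder to guess by brute force, despite being easier to remember.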

Through analysis of historical computer science and related literature, this Article reveals a fundamental disconnect between the best available scientific knowledge and the application of that knowledge to password policy development. Discussions with leading computer scientists of this period suggest that the disconnect cannot be fully explained by a simple failure to identify the shortcomings of complex passwords. Nor can it be fully explained by a failure of computer science research to consider the user-design implications of password complexity and the associated research in psychology. Rather, this Article proposes that the disconnect resulted from a "stovepiping" failure of a different type – the failure to connect the results of scientific knowledge to a characterization that could drive a shift in policy direction.

The result is that what was required was not merely new computer science evidence, but a characterization of that evidence within a framework demonstrating that continuing the original course of action would actually result in a worse condition than originally existed. This type of net benefit/loss economic framing was largely missing from the discourse regarding authentication at the time, and indeed remains deeply undertheorized in contemporary discourse regarding cybersecurity policy.

The implications of these results are compelling. If the assertions in this Article are correct, the technical complexity of society has vastly outstripped our policymaking process's ability to keep pace. A dystopian view of this result suggests we are headed toward technocracy. (How did you feel the last time Facebook or Google implemented a major overhaul?) A perhaps more optimistic view, however, suggests that such technical complexity is not a new phenomenon in relative terms, and that historical context can provide some guidance as to how to adapt.

The optimistic view suggests that the processes developed to regulate the practices of medicine, aviation, and other technologies which, in their day, vastly outpaced the knowledge of policymakers can offer guidance as to how policymakers should proceed in the Information Age. Developing a science of cybersecurity and requiring evidence-based policymaking provide solutions applicable not only to the specific problems presented in this Article, but also potentially to other highly technical subjects faced by an increasingly complex society.

Simply put, cybersecurity policymaking must, as with other technical fields, move toward requiring evidence-based policymaking in the first instance. To do otherwise in such a highly technical and rapidly evolving field undermines the very purposes of the regulatory process itself, particularly in the context of delegation to "expert" administrative agencies. This Article examines that concept through the lens of the specific problem of password complexity, and offers a policymaking prescription by way of example: the myth of "risk prevention" must be replaced with the empirically founded calculus of "risk management." And the primary question to be addressed must not be "is your system secure?", but rather "do your risk mitigation techniques match your risk tolerance?"