Exclusive: The Cambridge History of Law in America, Volume III
Source: 法律顾问网 (www.flguwen.com) · 2010-11-19 16:28

The Cambridge History of Law in America
Volume III
The Twentieth Century and After (1920–)
Law stands at the center of modern American life. Since the 1950s, American
historians have produced an extraordinarily rich and diverse literature that has
vastly expanded our knowledge of this familiar and vital yet complex and multifaceted
phenomenon. But few attempts have been made to take full account of
law’s American history. The Cambridge History of Law in America has been designed
for just this purpose. In three volumes we put on display all the intellectual vitality
and variety of contemporary American legal history. We present as comprehensive
and authoritative an account as possible of the present understanding and range of
interpretation of the history of American law. We suggest where future research
may lead.
In 1941, Henry Luce of Time Magazine named the twentieth century “the American
Century." For our purposes it begins after World War I: the war was a watershed
that foreshadowed a new American state form and role, confirmed the dominance of
the new American corporate economy, and gave rise to a new range of international
ambitions and relationships. Each arena saw such an intensification of the role of
law that by its end, “the American Century was being called ‘Law’s Century.’” Not
everything that seemed new was, but by and large this final volume of the History is
about accelerating change: innovation in the state, in legal thought and education,
in professional organization and life, and in American federalism and governance;
innovation at the intersection of law with explosive struggles around race, gender,
class, and sexual liberation and the full emergence of “rights” discourse, along
with its limitations and blind spots; and the mobilization of “rights” and “law” to
“legalize” the world. In the early twenty-first century, about the only prediction
we can confidently make is that change is not yet done with us.
The Cambridge History of Law in America has been made possible by the generous
support of the American Bar Foundation. Volumes I and II cover the history of law
in America, respectively, from the first moments of English colonizing through the
creation and stabilization of the republic; and from the foundation of the republic
until the immediate aftermath of World War I.
Michael Grossberg is the Sally M. Reahard Professor of History and a Professor of
Law at Indiana University. His research focuses on the relationship between law
and social change, particularly the intersection of law and the family.
Christopher Tomlins is Senior Research Fellow at the American Bar Foundation
in Chicago. His research encompasses the relationship among labor, colonization,
and law in early America; the conceptual history of police in Anglo-American law
and politics; and the place of historical materialism in legal theory.
The Cambridge History of Law in America
Volume III
The Twentieth Century and After (1920– )
Edited by
MICHAEL GROSSBERG
Indiana University
CHRISTOPHER TOMLINS
The American Bar Foundation, Chicago
Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi
Cambridge University Press
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9780521803076
© Cambridge University Press 2008
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without
the written permission of Cambridge University Press.
First published 2008
Printed in the United States of America
A catalog record for this publication is available from the British Library.
Library of Congress Cataloging in Publication Data
The Cambridge history of law in America / edited by Michael Grossberg,
Christopher Tomlins.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-521-80307-6 (hardback)
1. Law – United States – History. I. Grossberg, Michael, 1950– II. Tomlins,
Christopher L., 1951– III. Title.
KF352.C36 2007
349.73–dc22 2007017606
ISBN 978-0-521-80307-6 hardback
Cambridge University Press has no responsibility for
the persistence or accuracy of urls for external or
third-party Internet Web sites referred to in this publication
and does not guarantee that any content on such
Web sites is, or will remain, accurate or appropriate.
Contents

Editors' Preface
1 Law and the State, 1920–2000: Institutional Growth and Structural Change
Daniel R. Ernst
2 Legal Theory and Legal Education, 1920–2000
William W. Fisher III
3 The American Legal Profession, 1870–2000
Robert W. Gordon
4 The Courts, Federalism, and the Federal Constitution, 1920–2000
Edward A. Purcell, Jr.
5 The Litigation Revolution
Lawrence M. Friedman
6 Criminal Justice in the United States
Michael Willrich
7 Law and Medicine
Leslie J. Reagan
8 The Great Depression and the New Deal
Barry Cushman
9 Labor's Welfare State: Defining Workers, Constructing Citizens
Eileen Boris
10 Poverty Law and Income Support: From the Progressive Era to the War on Welfare
Gwendolyn Mink, with Samantha Ann Majic and Leandra Zarnow
11 The Rights Revolution in the Twentieth Century
Mark Tushnet
12 Race and Rights
Michael J. Klarman
13 Heterosexuality as a Legal Regime
Margot Canaday
14 Law and the Environment
Betsy Mendelsohn
15 Agriculture and the State, 1789–2000
Victoria Saker Woeste
16 Law and Economic Change During the Short Twentieth Century
John Henry Schlegel
17 The Corporate Economy: Ideologies of Regulation and Antitrust, 1920–2000
Gregory A. Mark
18 Law and Commercial Popular Culture in the Twentieth-Century United States
Norman L. Rosenberg
19 Making Law, Making War, Making America
Mary L. Dudziak
20 Law, Lawyers, and Empire
Yves Dezalay and Bryant G. Garth
Bibliographic Essays
Notes on Contributors
Index
editors’ preface
In February 1776, declaiming against the oppressive and absolute rule of
“the Royal Brute of Britain,” the revolutionary pamphleteer Tom Paine
announced to the world that “so far as we approve of monarchy . . . in
America the law is king”! Paine’s declaration of Americans’ “common
sense” of the matter turned out to be an accurate forecast of the authority
the legal order would amass in the revolutionary republic. Indeed, Paine’s
own fiery call to action was one of the stimuli that would help his prediction
come true. We know ourselves that what he claimed for law then
mostly remains true now. Yet, we should note, Paine’s claim was not simply
prophecy; it made sense in good part because of foundations already laid.
Long before 1776, law and legal institutions had gained a place of some
prominence in the British American colonies. The power and position of
law, in other words, are apparent throughout American history, from its
earliest moments. The three volumes of The Cambridge History of Law in
America explain why Paine’s synoptic insight should be understood as both
an eloquent foretelling of what would be and an accurate summation of what
already was.
The Cambridge History of Law in America belongs to a long and proud
scholarly tradition. In March 1896, at the instigation of Frederick William
Maitland, Downing Professor of the Laws of England at Cambridge University,
and of Henry Jackson, tutor in Greek at Trinity College, the syndics
of Cambridge University Press invited the University’s Regius Professor
of Modern History, Lord John Dalberg Acton, to undertake “the general
direction of a History of the World." Six months later Acton returned with
a plan for a (somewhat) more restrained endeavor, an account of Europe and
the United States from The Renaissance to The Latest Age. Thus was born The
Cambridge Modern History.
Acton’s plan described a collaborative, collectively written multivolume
history. Under general editorial guidance, each volume would be
divided among “specially qualified writers” primed to present extensive and
authoritative accounts of their subjects.1 They were to imagine themselves
writing less for other professional historians than for a more general audience
of “students of history” – anyone, that is, who sought an authoritative,
thoughtful, and sophisticated assessment of a particular historical subject or
issue. Acton envisioned a history largely clean of the professional apparatus
of reference and citation – texts that would demonstrate the “highest pitch
of knowledge without the display,” reliant for their authority on the expertise
of the authors chosen to write them. And although it was intended that
the History be the most complete general statement of historical knowledge
available, and to that extent definitive, Acton was not interested in simply
reproducing (and thus by implication freezing) what was known. He desired
that his authors approach the task critically, strive for originality in their
research, and take it on themselves to revise and improve the knowledge
they encountered.2
Acton did not live to see even the first volume in print, but between
1902 and 1911 The Cambridge Modern History appeared in twelve substantial
volumes under the editorial direction of Adolphus Ward and Stanley
Leathes. The History quickly found a broad audience – the first volume, The
Renaissance, sold out in a month. Other Cambridge histories soon followed:
The Cambridge History of English Literature, which began to appear under
Ward’s editorship in 1907; The Cambridge Medieval History (1911–36); The
Cambridge History of American Literature (1917–21); The Cambridge Ancient
History (1923–39); The Cambridge History of the British Empire (1929–67);
The Cambridge History of India (1922–60), and more. All told, close to a
hundred Cambridge histories have been published. More than fifty are currently
in print. Cambridge histories have justly become famous. They are
to be found in the collections of libraries and individuals throughout the
world.
Acton’s plan for The Cambridge Modern History invoked certain essentials –
an ideal of collective authorship and a commitment to make expertise accessible
to a wider audience than simply other specialists. To these he added
grander, programmatic touches. The History would be “an epic,” a “great
argument” conveying “forward progress . . . upward growth.” And it would
provide “chart and compass for the coming century.” Such ambitions are
characteristic of Acton’s moment – the later nineteenth century – when in
Britain and Continental Europe history still claimed an educative mantle
“of practical utility,” the means rather than science (or law) to equip both
elites and ordinary citizens “to deal with the problems of their time.” It
was a moment, also, when history’s practitioners could still imagine filling
historical time with a consistent, standardized account – the product, to be
sure, of many minds, but minds that thought enough alike to agree on an
essential common purpose: “men acting together for no other object than
the increase of accurate knowledge.” Here was history (accurate knowledge)
as “the teacher and the guide that regulates public life,” the means by which
“the recent past” would yield up “the key to present time.” Here as well,
lest we too quickly dismiss the vision as naïve or worse, was the shouldering
of a certain responsibility. “We have to describe the ruling currents, to
interpret the sovereign forces, that still govern and divide the world. There
are, I suppose, at least a score of them, in politics, economics, philosophy
and religion. . . . But if we carry history down to the last syllable of recorded
time, and leave the reader at the point where study passes into action, we
must explain to him the cause, and the growth, and the power of every great
intellectual movement, and equip him for many encounters of life."

1 When, early on, Acton ran into difficulties in recruiting authors for his intimidating project, Maitland gently suggested that "his omniscient lordship" simply write the whole thing himself. Acton (we note with some relief) demurred. There is humor here, but also principle. Collective authorship is a practice ingrained in the Cambridge histories from the beginning.
2 Our account of Acton's plan and its realization gratefully relies throughout on Josef L. Altholz, "Lord Acton and the Plan of the Cambridge Modern History," The Historical Journal, 39, no. 3 (September 1996), 723–36.
Acton’s model – a standard general history, a guiding light produced
by and for an intellectually confident elite – could not survive the shattering
effects of two world wars. It could not survive the democratization of
higher education, the proliferation of historical scholarship, the constant
emergence of new fields and subdisciplines, the eventual decentering of
Europe and “the West.” When, amid the rubble and rationing of a hastily
de-colonizing post–World War II Britain, Cambridge University Press’s
syndics decided a revised version was required – a New Cambridge Modern
History for a new day – their decision acknowledged how much the world
had changed. The revised version bore them out. Gone was Acton’s deep
faith in history’s authority and grandeur. The general editor, G. N. Clark,
wrote, “Historians in our self-critical age are aware that there will not
be general agreement with their conclusions, nor even with some of the
premises which they regard as self-evident. They must be content to set out
their own thought without reserve and to respect the differences which they
cannot eradicate” – including, he might have added (but perhaps there was
no need) the many fundamental differences that existed among historians
themselves. Cambridge histories no longer aspired to create standardized
accounts of the way things had been nor to use the past to pick the lock on
the future. The differences in perspective and purpose that a less confident,
more self-critical age had spawned were now the larger part of the picture.
Yet the genre Acton helped found has now entered its second century. It
still bears, in some fashion, his imprint. The reason it has survived, indeed
prospered, has less to do with some sense of overall common purpose than
the more modest but nevertheless essential precept of continued adherence
to certain core principles of design simply because they have worked: individual
scholars charged to synthesize the broad sweep of current knowledge
of a particular topic, but also free to present an original interpretation aimed
at encouraging both reflection and further scholarship, and an overall architecture
that encourages new understandings of an entire subject or area of
historical scholarship. Neither encyclopedias nor compilations, textbooks
nor works of reference, Cambridge histories have become something quite
unique – each an avowedly collective endeavor that offers the single best
point of entry to the wide range of an historical subject, topic, or field;
each in overall conceptual design and substance intent not simply on defining
its field’s development to date but on pushing it forward with new
ideas. Critique and originality, revision and improvement of knowledge –
all remain germane.
Readers will find that The Cambridge History of Law in America adheres to
these core goals. Of course, like other editors we have our own particular
ambitions. And so the three volumes of this Cambridge history have been
designed to present to full advantage the intellectual vitality and variety of
contemporary American legal history. Necessarily then – and inevitably –
The Cambridge History of Law in America dwells on areas of concern and interpretive
debates that preoccupy the current generation of legal historians.
We do not ignore our predecessors.3 Nor, however, do we attempt in the
body of the History to chart the development of the field over their time and
ours in any great detail. Readers will find a more substantial accounting of
that development in the bibliographic essays that accompany each chapter,
but as editors we have conceived our job to be to facilitate the presentation
of as comprehensive and authoritative a rendition of the present understanding
of the history of American law as possible and to suggest where
future research may lead.
Cambridge histories always define their audiences widely; ours is no
exception. One part of our intended audience is scholarly, but hardly confined
to other legal historians; they are already the best equipped to know
something of what is retailed here. So to an important extent we try to look
past legal historians to historians at large. We also look beyond history to
scholars across the broad sweep of law, the humanities, and the social sciences
– indeed to any scholar who may find a turn to law’s history useful (or
simply diverting) in answering questions about law and society in America.
3 See, for example, the graceful retrieval and reexamination of themes from the “imperial
school” of American colonial historians undertaken by Mary Sarah Bilder in Volume I,
Chapter 3.
A second part of our audience is the legal profession. Lawyers and judges
experience in their professional lives something of a practical encounter
with the past, although the encounter may not be one they would recognize
as “historical.” As John Reid has written, “The lawyer and the historian have
in common the fact that they go to the past for evidence, but there the similarity
largely ends.” Here lawyers and judges can discover for themselves
what historians do with evidence. In the process, they will also discover
that not inconsiderable attention has been paid to their own lives and experiences.
Legal historians have always known how important legal thought
and legal education are in the formation of the professional world of the law,
and both feature prominently in this History. Here the profession encounters
the history of its activities and of the medium it inhabits from a standpoint
outside itself.
The third segment of our intended audience is the general public. Our
purposes in this encounter are not Acton's. We do not present this History as
the means to educate a citizenry to deal with the problems of the moment.
(Indeed, it is worth noting that in America law appropriated that role to
itself from the earliest days of the republic.) Like G. N. Clark, today’s
historians live in self-critical times and have lower expectations than Lord
Acton of what historical practice might achieve. That said, readers will find
that this History touches on many past attempts to use law to “deal with”
many past problems: in the America where law is king, it has been law’s fate
to be so employed. And if their accounts leave some of our authors critical
in their analysis of outcomes or simply rueful in recounting the hubris (or
worse) of the attempts, that in itself can be counted an education of sorts.
Moreover, as Volume III’s chapters show repeatedly, Americans continue
to turn to law as their key medium of private problem solving and public
policy formation and implementation, and on an expanding – global –
stage. In that light, there is perhaps something for us to learn from Acton’s
acknowledgment that the scholar-expert should not abandon the reader “at
the point where study passes into action.” We can at the very least offer
some reflection on what an encounter with the past might bring by way of
advice to the “many encounters of life” lying ahead.
In reaching all three of our intended audiences, we are greatly assisted
by the pronounced tendency to “demystify” and diversify its subject that
has characterized American legal history for a half-century. To some, the
field’s very title – “legal history” – will conjure merely an arcane preoccupation
with obscure terminologies and baffling texts, the doctrines and
practices of old (hence defunct) law, of no obvious utility to the outsider
whether historian or social scientist or practicing lawyer or just plain citizen.
No doubt, legal history has at times given grounds to suppose that such
a view of the discipline is generally warranted. But what is interesting
in American legal history as currently practiced is just how inappropriate
that characterization seems.
To read the encomia that have accumulated over the years, one might
suppose that the demise of legal history’s obscurity was the single-handed
achievement of one man, James Willard Hurst, who on his death in 1997 was
described in the New York Times as “the dean of American legal historians.”
Indeed, Hurst himself occasionally suggested the same thing; it was he who
came up with the aphorism “snakes in Ireland” to describe legal history in
America at the time he began working in the field in the 1930s. Though not
an immodest man, it seems clear whom he cast as St. Patrick. Yet the Times’
description was merited. Hurst’s lifework – the unpacking of the changing
roles of American law, market, and state from the early nineteenth to the
early twentieth centuries – set the agenda of American legal historians
from the 1950s well into the 1980s. That agenda was a liberation from
narrower and more formalistic preoccupations, largely with the remote
origins of contemporary legal doctrine or with the foundations of American
constitutionalism, that had characterized the field, such as it was, earlier
in the century. Most important, Hurst’s work displayed some recognition
of the multidimensionality of law in society – as instrument, the hallmark
with which he is most associated, but also as value and as power. Hurst,
in short, brought legal history into a continuing dialogue with modernity,
capitalism, and the liberal state, a dialogue whose rich dividends are obvious
in this History.
Lawyers have sometimes asked aggressively anachronistic questions of
history, like – to use an apocryphal example of Robert Gordon’s – “Did the
framers of the Constitution confer on the federal government the power
to construct an interstate highway system?” Hurstian legal history did not
indulge such questions. But Hurstians did demonstrate a gentler anachronism
in their restriction of the scope of the subject and their interpretation
of it. Famously, for Hurst, American legal history did not begin until the
nineteenth century. And when it did begin it showed a certain consistency
in cause and effect. As Kermit Hall summarized the view in 1989, “Our
legal history reflects back to us generations of pragmatic decision making
rather than a quest for ideological purity and consistency. Personal
and group interests have always ordered the course of legal development;
instrumentalism has been the way of the law.”4 The Hurstian determination
to demystify law occasionally reduced it to transparency – a dependent
variable of society and economy (particularly economy) tied functionally to
social and economic change.
4 Kermit L. Hall, The Magic Mirror: Law in American History (New York, 1989), 335.
As a paradigm for the field, Hurstian legal history long since surrendered
its dominance. What has replaced it? In two words, astonishing variety.
Legal historians are aware that one cannot talk or write about economic
or social or political or intellectual history, or indeed much of any kind of
history, without immediately entering into realms of definition, prohibition,
understanding, practice, and behavior that must imply law to have
meaning. Try talking about property in any of those contexts, for example,
without implying law. Today’s legal historians are deeply engaged across
the full range of historical investigation in demonstrating the inextricable
salience of law in human affairs. As important, the interests of American
historians at large have never been more overtly legal in their implications
than now. To take just four popular areas of inquiry in American history –
citizenship and civic personality, identity, spatiality, and the etiology of
social hierarchy and subordination – it is simply impossible to imagine
how one could approach any of these areas historically without engaging
with law, legal ideology, legal institutions, legal practices, and legal discourse.
Legal historians have been and remain deeply engaged with and
influenced by social history, and as that field has drifted closer and closer to
cultural history and the historical construction of identity so legal history
has moved with it. The interpretive salience of race and ethnicity, of gender
and class is as strong in contemporary legal historical practice as in any
other realm of history. Add to that the growing influence of legal pluralism
in legal history – the migration of the field from a focus on “the law” to
a focus on the conditions of existence of “legality” and the competition of
many alternative “legalities” – and one finds oneself at work in a field of
immense opportunity and few dogmas.
“Astonishing variety” demonstrates vitality, but also suggests the benefits
of a judicious collective effort at authoritative summation. The field
has developed at an extraordinary rate since the early 1970s, but offers no
work that could claim to approach the full range of our understanding of the
American legal past.5 The Cambridge History of Law in America addresses both
the vitality of variety and its organizational challenge. Individually, each
chapter in each volume is a comprehensive interrogation of a key issue in a
particular period of American legal history. Each is intended to extend the
substantive and interpretative boundaries of our knowledge of that issue.
The topics they broach range widely – from the design of British colonizing
to the design of the successor republic and of its successive nineteenth- and
twentieth-century reincarnations; from legal communications within
empires to communications among nation-states within international law
to a sociology of the “legalization” that enwraps contemporary globalism;
from changes in legal doctrine to litigation trend assessments; from clashes
over law and religion to the intersection of law and popular culture; from
the movement of peoples to the production of subalternship among people
(the indigenous, slaves, dependents of all kinds); and from the discourse
of law to the discourse of rights. Chapters also deal with developments
in specific areas of law and of the legal system – crime and criminal justice,
economic and commercial regulation, immigration and citizenship,
technology and environment, military law, family law, welfare law, public
health and medicine, and antitrust.6
Individual chapters illustrate the dynamism and immense breadth of
American legal history. Collectively, they neither exhaust its substance nor
impose a new interpretive regimen on the field. Quite the contrary, The
Cambridge History of Law in America intentionally calls forth the broad array
of methods and arguments that legal historians have developed. The contents
of each volume demonstrate not just that expansion of subject and
method is common to every period of American legal history but also that
as the long-ascendant socio-legal perspective has given way to an increasing
diversity of analytical approaches, new interpretive opportunities are rife
everywhere. Note the influence of regionalism in Volume I and of institutionalism
in Volume II. Note the attention paid in Volume III not only to
race and gender but also to sexuality. The History shows how legal history
has entered dialogue with the full array of “histories” pursued within the
academy – political, intellectual, social, cultural, economic, business, diplomatic,
and military – and with their techniques.

5 The field has two valuable single-author surveys: Lawrence M. Friedman's A History of American Law (New York, 1973; 3rd ed. 2005) and Kermit Hall's The Magic Mirror. Neither approaches the range of what is on display here. The field also boasts volumes of cases and commentary, prepared according to the law teaching "case book" model, such as Stephen B. Presser and Jamil S. Zainaldin, Law and Jurisprudence in American History: Cases and Materials (St. Paul, MN, 1980; 6th ed. 2006) and Kermit Hall, et al., American Legal History, Cases and Materials (New York, 3rd ed., 2003). There also exist edited volumes of commentary and materials that focus on broad subject areas within the discipline of legal history; a preponderance deal with constitutional law, such as Lawrence M. Friedman and Harry N. Scheiber, eds., American Law and the Constitutional Order: Historical Perspectives (Cambridge, MA, 1978; enlarged ed. 1988). Valuable in their own right, such volumes are intended as specific-purpose teaching tools and do not purport to be comprehensive. Finally, there are, of course, particular monographic works that have proven widely influential for their conceptual acuity, or their capacity to set a completely new tone in the way the field at large is interpreted. The most influential have been such studies as James Willard Hurst, Law and the Conditions of Freedom in the Nineteenth-Century United States (Madison, WI, 1956), and Morton J. Horwitz, The Transformation of American Law, 1780–1860 (Cambridge, MA, 1977).
6 Following the tradition of Cambridge histories, each chapter includes only such footnotes as the author deems necessary to document essential (largely primary) sources. In place of the dense display of citations beloved of scholarly discourse that Acton's aesthetic discouraged, each author has written a bibliographic essay that provides a summary of his or her sources and a guide to scholarly work on the subject.
The Cambridge History of Law in America is more than the sum of its
parts. The History’s conceptual design challenges existing understandings
of the field. We divide the American legal past into three distinct eras and
devote a complete volume to each one: first Early America, then The Long
Nineteenth Century, and last The Twentieth Century and After. The first volume,
Early America, examines the era from the late sixteenth century through the
early nineteenth – from the beginnings of European settlement through the
creation and stabilization of the American republic. The second volume,
The Long Nineteenth Century, begins with the appearance of the United States
in the constituted form of a nation-state in 1789; it ends in 1920, in the
immediate aftermath of World War I, with the world poised on the edge
of the “American Century.” The final volume, The Twentieth Century and
After, concentrates on that American century both at home and abroad
and peers into the murk of the twenty-first century. Within each of these
broad chronological divisions occurs a much more detailed subdivision
that combines an appreciation of chronology with the necessities of topical
specialization.
Where appropriate, topics are revisited in successive volumes (crime and
criminal justice, domestic relations law, legal thought, and legal education
are all examples). Discussion of economic growth and change is ubiquitous,
but we accord it no determinative priority. To facilitate comparisons and
contrasts within and between eras, sequences of subjects have been arranged
in similar order in each volume. Specific topics have been chosen with an eye
to their historical significance and their social, institutional, and cultural
coherence. They cannot be walled off from each other, so readers will notice
substantive overlaps when more than one author fastens on the same issues,
often to create distinct interpretations of them. History long since ceased to
speak with one voice. In this History, readers are invited into a conversation.
Readers will notice that our chronology creates overlaps at the margins
of each era. They will also notice that some chapters focus on only particular
decades within a specific era7 or span more than one era.8 All this is
intentional. Historians construct history by placing subjects in relation to
each other within the continuum of historical time. Historians manipulate
time by creating periods to organize the placement of subjects. Thus, when
historians say that a subject has been “historicized,” they mean it has been
located in what they consider its appropriate historical-temporal context or
period. Slicing and dicing time in this fashion is crucial to the historian’s
objective of rendering past action coherent and comprehensible, but necessarily
it has a certain arbitrariness. No matter how familiar – the colonial
period, the Gilded Age, the Progressive period, and so forth – no historical
period is a natural division: all are constructs. Hence we construct three
“eras” in the interests of organizational coherence, but our overlaps and the
distinct chronologies chosen by certain of our authors allow us to recognize
different temporalities at work.

7 Chronologically specific topics – the American Revolution and the creation of the republic in Volume I, the Civil War in Volume II, the New Deal era in Volume III – are treated as such. Chapters on the legal profession in Volumes II and III divide its development at the Civil War, as do those, in Volume II, on the state and on industrial organization.
8 Volume II's chapter on the military deals with both the nineteenth and twentieth centuries, as do Volume III's chapters on agriculture and the state and on law and the environment. The latter chapter, indeed, also gestures toward the colonial period.
That said, the tripartite division of these volumes is intended to provide
a new overall conceptual schema for American legal history, one that is
broad and accommodating but that locates legal history in the contours of
American history at large. Maitland never forgot that, at bottom, just as
religious history is history not theology, legal history is history not law.
Notwithstanding law’s normative and prescriptive authority in “our” culture,
it is a phenomenon for historical inquiry, not the source of an agenda.
And so we take our cue, broadly, from American history. If it is anything,
American history is the history of the colonization and settlement of the
North American mainland, it is the history of the creation and expansion
of an American nation-state, and it is the history of that state’s place in
and influence on the world at large. The contents and the organization of
The Cambridge History of Law in America speak to how law became king
in this America and of the multitudinous empire of people and possibilities
over which that king reigned. Thus we address ourselves to the endless
ramifications, across more than four centuries, of the meaning of Tom
Paine’s exclamation in 1776.
The Cambridge History of Law in America could not have been produced
without the support and commitment of the American Bar Foundation,
Cambridge University Press, and our cadre of authors. We thank them all.
The American Bar Foundation housed the project and, together with the
Press, funded it. The Foundation was there at the creation: it helped initiate
the project by sponsoring a two-day meeting of an ad hoc editorial consulting
group in January 2000. Members of that group (Laura Edwards, Tony
Freyer, Robert Gordon, Bruce H. Mann, William Novak, Stephen Siegel,
Barbara Young Welke, and Victoria Saker Woeste) patiently debated the
editors’ initial thoughts on the conceptual and intellectual direction that the
History should follow and helped identify potential contributors. Since then,
the project has benefited from the support of two ABF directors, Bryant
Garth and his successor Robert Nelson, and the sustained and enthusiastic
interest of the Foundation’s Board of Directors during the tenure of
four Board presidents: Jacqueline Allee, M. Peter Moser, the late Robert
Hetlage, and David Tang. We owe a particular debt of gratitude to Robert
MacCrate for his early support and encouragement. As all this suggests, the
American Bar Foundation’s role in the production of The Cambridge History
of Law in America has been of decisive importance. The part the Foundation
has played underlines its standing as the preeminent research center for
the study of law and society in the United States and its long tradition of
support for the development of American legal history.
Cambridge University Press has, of course, been central to the project
throughout. We are grateful to the syndics for their encouragement and
to Frank Smith and his staff in New York for their assistance and support.
Frank first suggested the project in 1996. He continued to suggest it for
three years until we finally succumbed. During the years the History has been
in development, Frank has accumulated one responsibility after another at
the Press. Once we rubbed shoulders with the Executive Editor for Social
Sciences. Now we address our pleas to the Editorial Director for Academic
Books. But Frank will always be a history editor at heart, and he has maintained
a strong interest in this History, always available with sage advice
as the project rolled relentlessly onward. He helped the editors understand
the intellectual ambitions of a Cambridge history. Those who have had the
privilege of working with Frank Smith will know how important his advice
and friendship have been to us throughout.
Finally, the editors want to thank the authors of the chapters in these
volumes. A project like this is not to every author’s taste – some took
to it more easily than others. But together the sixty authors who joined
us to write the History have done a magnificent job, and we are deeply
grateful to every one. From the beginning our goal was not only to recruit
as participants those whom all would identify as leading figures of our field
but also to include those who, we were confident, would be leading figures
of its next generation. We are delighted that so many of each were willing.
We acknowledge also those who were unable for one reason or another to
see an initial commitment through to the end: their efforts, too, helped us
define and establish the project. And obviously, we owe a particular debt to
those others who came later to take the places of the fallen.
To oversee a project in which so many people have at one time or another
been involved has seemed on occasion like being the mayors of a village.
People arrive and (much less frequently, thank goodness) depart. Those who
settle in for the duration become a community of friends and neighbors.
Over time, one learns much from one’s friends and neighbors about the joys
and vicissitudes of life. One learns who (and whose family) may be ailing,
and who is well. One learns of hurts and difficulties; one revels in successes.
And one may learn, as we did so sadly in August 2006, of an untimely
death. Notwithstanding the demands of his immensely successful career in
academic administration, our colleague Kermit Hall never laid down his
historian’s pen and was an enthusiastic participant in this project. He died
suddenly and unexpectedly. His contributions to the field have been great,
and he is greatly missed.
Throughout, the many authors in this project have responded courteously
to our editorial advice. They have reacted with grace and occasional humor
to our endless demands that they meet their deadlines. Sometimes they even
sent their manuscripts too. Most important, they have striven to achieve
what we asked of them – the general goals of a Cambridge history and the
specific goals of this history, as we have described them in this preface. Their
achievements are evident in the pages of each volume. In an individualistic
intellectual culture, the scholarship on display here demonstrates the
possibilities inherent in a collective intellectual enterprise. In the end, of
course, the editors, not the authors, are responsible for the contents of these
volumes. Yet, it is the authors who have given the History its meaning and
significance.
Michael Grossberg
Christopher Tomlins
1
Law and the State, 1920–2000: Institutional Growth and Structural Change
Daniel R. Ernst
Belief that the United States occupies an exceptional place in world history
has been a persistent element of the American creed. The founding of the
nation was a new birth of freedom, Americans have been taught; it delivered
them from the class conflict and ideological strife that have plagued the rest
of the modern world. Not infrequently, seekers of the ultimate source of
the United States’ exceptionalism have settled on the peculiarly fragmented
nature of its government. The nation was born in a revolt against the modern
state. In Europe, standing armies, centralized taxation, juryless courts,
and national bureaucracies loyal to a distant sovereign were the hallmarks of
the proudest monarchies. To Revolutionary America, they were evidence of
tyrannous intent, “submitted to a candid world.” To prevent such abominations
from reappearing in the new nation, Americans shattered sovereignty
into legislative, executive, and judicial fragments and embedded them in
their states’ written constitutions. The Federal Constitution of 1787 went
further, for it also divided sovereignty between the national government
and the states. The result, as John Quincy Adams observed, was “the most
complicated government on the face of the globe."1

1 John Quincy Adams, The Jubilee of the Constitution (New York, 1839), 115.
The new nation had plenty of law and plenty of local governments ready,
willing, and able to promote private economic endeavor with grants of
public land and public money. What the United States lacked, however,
was centralized administration, a counterpart to the royal bureaucracies of
Europe capable of consistently implementing national policies. The central
government had to entrust the enforcement of an order to “agents
over whom it frequently has no control, and whom it cannot perpetually
direct,” explained Alexis de Tocqueville. Tocqueville approved of such an
arrangement for a democracy, because it prevented a tyrannous majority
from imposing its will on the nation. If the American state ever became as
wieldy as its European counterparts, he warned, “freedom would soon be
banished from the New World."2

2 Alexis de Tocqueville, Democracy in America, trans. Henry Reeve (London, 1862), 319–20.
Centralized administration finally came to the United States in the twentieth
century in three waves of state-building. Each was consolidated into a
durable political “regime,” an amalgam of institutions, elites, social forces,
and ideas that, for a time, established a fundamental set of assumptions about
politics for all major political actors. Each political regime emerged when
war, some other national emergency, or a period of unusual social ferment created
a demand for a new bureaucracy or the transformation of an existing
one. These administrative responses followed no master plan. The new or
reformed administrative bodies were hastily assembled from whatever form
of governance seemed most promising in the midst of political battles in a
deeply divided state.
Administration was employed domestically in five different ways. First,
it was used to conduct command-and-control regulation through administrative
orders that told social or economic actors how to behave. Second, it
was employed in the work of social insurance, the public provision of compensation
for the misfortunes that regularly beset the members of industrial
societies. Third, it was used to deploy the power of the state to collect tariffs,
impose taxes, and issue public debt. Not infrequently, the ends sought were
social or economic, as well as fiscal. Fourth, administration was used in the
conduct of state capitalism – the public creation of economic infrastructure
or the conferral of grants, loans, and other public benefits to encourage
private individuals to create the infrastructure themselves. Finally, new
administrative structures were created to assist or supplant the courts in
the work of social police, the preservation of domestic tranquility.
Once each state-building moment passed, a period of consolidation
ensued, during which older institutions, elites, and socially dominant
groups reasserted themselves until an accommodation of the old and new
was reached. Here, we begin with the consolidation of the 1920s, in which
the new bureaucracies managed to acquire a subordinate place within a state
dominated by courts and political parties. Centralized administration came
into its own in a second cycle of state-building and consolidation, which
commenced in the New Deal, fully emerged during World War II, and
persisted well into the Cold War. We conclude with a third cycle, set off
by the new “public interest” politics of the 1960s and 1970s and brought
to a halt by a series of contractions in the 1980s and 1990s.
Tocqueville’s warning notwithstanding, administration won a place in
the American polity, but only on terms fixed by lawyers – not only those
appointed to the judiciary or government legal staffs but also those in private
law firms, corporate law departments, and public interest groups. Throughout
the twentieth century, the lawyers, their clients, and their political allies
demanded that bureaucrats respect an ancient ideal, that of “a government
of laws and not of men.” Each consolidation had its own version of the
ideal, which located the sources of the “laws” that constrained the “men”
(and women) of government in different entities: the bench, the needs of
modern society, or the welfare of a nation of consumers. In each consolidation,
political actors dominant in an earlier political regime invoked the
“supremacy of law” ideal to constrain an administrative innovation that
placed them at a disadvantage. But only in the last of the twentieth century’s
three cycles did consolidation attempt a general contraction of the
administrative state. Significantly, this was the only one of the twentieth
century’s consolidations in which economists, the dominant profession of
the market, effectively rivaled lawyers, the dominant profession of the state,
as articulators of public policy.
I. ADMINISTRATION UNDER COURTS AND PARTIES
Our chronological point of departure, 1920, came just after the crest of
the wave of state-building that had occurred during the Progressive era.
That wave emerged at the state and local level in the 1890s and reached
the federal government by World War I. During the 1920s, most of the
new bureaucracies struggled to become autonomous parts of the American
state. On one side they were challenged by judges, who doubted the
bureaucrats’ expertise and commitment to due process. On another, they
faced demands for appointments and policies that promoted the interests
of the nation’s bottom-up, patronage-oriented political parties. Administration,
then, was contained by older, more familiar political structures; in
the 1920s the American state still bore more than a passing resemblance to
the one Tocqueville knew.
In 1920, price-and-entry regulation by independent commission, created
outside the regular departments of the executive branch, was the most
salient feature of the American administrative state. Railroad commissions
had been the first to arrive on the scene, established by the states after the
Civil War and at the federal level, in the guise of the Interstate Commerce
Commission (ICC), in 1887. Commissions limited entry into a regulated
industry to firms with the requisite know-how and financial backing. They
also set the rates businesses could charge for their goods and services and
imposed a host of other rules. Railroad commissions, for example, developed
and enforced detailed safety regulations, ordered companies to share
freight cars, and decreed when railroads might abandon service to particular
stations.
At the federal level, the ICC was joined in 1913 by the Federal Reserve
Board, which governed the banking industry, and in 1914 by the Federal
Trade Commission (FTC), which policed unfair business practices. In the
states, the focus of regulation shifted away from intercity railroads (which
became the ICC’s exclusive preserve) to other matters. In Texas, for example,
the “Railroad Commission” regulated the increasingly important oil and gas
industry. More common was a turn to the regulation of municipal utilities,
such as electricity, water, natural gas, streetcars, and subways. New York and
Wisconsin created the first public utilities commissions (PUCs) in 1907.
Seven years later all but three states had at least one PUC.
The bellwether program of social insurance, in the United States as elsewhere,
was workers’ compensation, a system of fixed payments to the victims
of workplace injuries and their dependents. Between 1911 and 1920 forty-two
American states enacted compensation schemes for industrial accidents;
two more followed in the 1920s. After several false starts, federal commissions
for harbor workers and the residents of the District of Columbia were
created in 1927 and 1928.
American reformers argued that the United States ought to follow other
industrial nations by extending the social insurance concept to cover life’s
other misfortunes, such as old age, unemployment, and illness. An indigenous
precedent existed in pensions for Civil War veterans and their dependents,
but it was a somewhat dubious one, as a series of Republican administrations
had put the system to partisan use. Only in the category of “mothers’
pensions” did the United States lead the world. These quite meager payments
were intended to keep mothers who lacked able-bodied husbands in
the home, where they could look after their children. Forty states had some
form of mothers’ pensions by the end of 1920. Four other states and the
District of Columbia followed suit in the next decade.
The most important administrative innovations in the area of fiscal management
involved taxation. State and local governments had long relied on
property taxation to finance their activities, but by the end of the nineteenth
century the manipulation of assessments by political machines had become
a scandal. One Progressive reform was to shift responsibility from local officials
to statewide “equalization boards.” Another was to shift to new forms
of taxation that were more difficult to use to reward political friends and
punish political enemies. Income taxation soon became the reformers’ tax
of choice. Wisconsin implemented an income tax in 1911 as part of a broad
campaign of Progressive reform. After spreading to other states, income
taxes would account for 22 percent of all state revenue in 1922.
On the federal level, the ratification of the Sixteenth Amendment in 1913
was quickly followed by the adoption of a modest income tax, covering only
2 percent of the American workforce and intended as a first step in reducing
federal reliance on tariffs. Coverage expanded with the United States’ entry
into World War I, and a new tax on profits was instituted. The staff of the
Bureau of Internal Revenue (the predecessor of the Internal Revenue Service)
increased from 4,000 in 1913 to 15,800 in 1920. Prominent economists
and Wall Street lawyers were appointed to high positions in the Treasury
Department, where they formed a tax policy group of unprecedented ability
and sophistication. Although some of the wartime innovations – such as
the excess profits tax – did not survive the Republicans’ return to power in
1921, World War I remained an object lesson in how to use federal taxes
to make economic and even social policy.
In the field of state capitalism, most conferrals of public benefits to promote
economic development still followed the nineteenth-century practice
of distributing grants outright, with few strings attached. Such grants
might have become vehicles of planning had recipients been required to follow
specific policies (such as the preservation of the environment) and some
administrative body been given the job of making sure that they did. But
the dominant policy in the distribution of public largess had not been planning,
but rather what the legal historian Willard Hurst called “the release
of individual creative energy."3 That policy persisted into the 1920s.

3 James Willard Hurst, Law and the Conditions of Freedom in the Nineteenth-Century United States (Madison, WI, 1967), 6.
More creative use of administration was evident in the construction and
maintenance of public infrastructure. Road-building had long been the
work of local governments, but in 1916 Washington stepped in with a
“grant-in-aid” program. Public ownership of other forms of transportation
was rarer, although the railroad industry was briefly nationalized during
World War I and a permanent, government-owned “merchant marine”
was created when transatlantic shipping became too risky for private carriers.
State ownership of other public utilities was also limited. Revelations
of political corruption brought an end to a late-nineteenth-century trend
toward the creation of city-owned water, gas, and streetcar companies.
Thereafter, urban voters preferred private ownership coupled with regulation
by a statewide public utility commission. At the federal level, war
again provided the impetus for an exceptional case of state ownership. In
1916 Woodrow Wilson approved the development of hydroelectric power
at a government-owned dam across the Tennessee River at Muscle Shoals,
Alabama, for use in the production of explosives and fertilizer. Completed
in 1925, the facility’s full potential was not realized until a staunch advocate
of public power, Franklin Delano Roosevelt, won the presidency.
In the field of social police, administrators captured relatively little
ground from the courts, which invoked the powerful constitutional
tradition that held their procedures to be the surest defender of the rights
and liberties of the subject. The settlement of labor disputes was a case
in point. Many states had created boards for the voluntary mediation and
arbitration of labor disputes after the Civil War, and a federal system for
arbitrating railway labor disputes was established after the Pullman boycott
of 1894. During World War I, the U.S. Army insisted on minimum labor
standards in its contracts for uniforms, and the federal government created
several commissions and boards to mediate labor disputes. The most powerful
of these agencies, the National War Labor Board (NWLB), brought
labor leaders and businessmen together under the joint chairmanship of
a former president (William Howard Taft) and a nationally known labor
lawyer (Frank Walsh). But the state boards had no power to compel workers
or employers to accept their recommendations, and the NWLB was abolished
in 1919. Criminal prosecutions and court injunctions remained the
dominant mode of policing labor disputes until the New Deal.
Only in the field of immigration, where the objects of social policing
were not citizens, did administration make major inroads on the judiciary.
For most of the nineteenth century, federal courts had directed the
exclusion of aliens. Even Chinese immigrants, singled out for especially
unfavorable treatment in 1882, could remove their cases from the purview
of customs officials into federal courts. In 1891, however, Congress established
a Bureau of Immigration and subsequently empowered it to decide
the citizenship status of all immigrants. The U.S. Supreme Court put some
of the Bureau’s determinations beyond judicial review in the Ju Toy decision
of 1905. Equally deferential decisions would help keep immigration
an area of extraordinary administrative discretion throughout the twentieth
century.
The administrators of the Progressive state were thus a miscellany of
officials, scattered across the social and economic landscape, who answered
to no single authority, tyrannous or otherwise. Still, their mere presence
was hard for lawyers to square with the Tocquevillean notion that Americans
were exceptionally free from governmental control. They turned to an
Englishman, Albert Venn Dicey, for help. In his Introduction to the Study of
the Law of the Constitution (1885), Dicey contrasted the “rule of law” in common
law countries with the “administrative law” that prevailed in France
and other civil law jurisdictions on the European continent. In common law
countries, Dicey argued, citizens could contest the actions of administrators
in the “ordinary courts of the land” – that is, in courts of general jurisdiction
whose main work was the resolution of the disputes of private parties.
In France and elsewhere, citizens could only appeal to specialized courts
embedded in the very bureaucracies whose orders they contested. Translated
into an American idiom, Dicey taught that Americans could have both
bureaucracy and a “government of laws,” so long as administrators’ actions
could be challenged in courts presided over by common law judges.
Throughout the twentieth century, American judges routinely pledged
their fidelity to Dicey’s notion of the rule of law. Just as routinely, they
departed from it in practice. One striking example involved the non-delegation
doctrine, the principle that lawmaking power vested in a legislature
might not be delegated to any other public institution or official.
Applied strictly, the doctrine would have kept any number of administrative
agencies from promulgating rules and regulations in support of their
statutory missions. In a series of decisions between 1904 and 1928, however,
the U.S. Supreme Court upheld sweeping delegations by employing
the fiction that administrative officials were merely executing the clearly
defined will of Congress. So long as a statute embodied an “intelligible principle,”
the Court decided, the non-delegation doctrine was satisfied. Vague
standards such as the ICC’s charge to set “just and reasonable” rates or the
Federal Radio Commission’s mandate to issue licenses in accordance with
the “public interest, convenience, and necessity” easily passed constitutional
scrutiny.
Courts also deferred to administrators by refusing to make their own
determinations of the facts supporting administrative rulings. In 1897
the U.S. Supreme Court had crippled the ICC by permitting railroads
to introduce new evidence in federal court when contesting the commission’s
request for an injunction. By the 1920s judges had rejected the “de
novo review” of most facts and upheld agencies’ findings whenever backed
by substantial evidence in the record, even though the judges themselves
would have decided the matter differently if free to do so. To be sure, de
novo review was not abandoned totally. In Crowell v. Benson (1932), for
example, Chief Justice Charles Evans Hughes insisted that federal courts
make their own determination of the facts “upon which the enforcement of
the constitutional rights of the citizen depend.”4 But other judges did not
apply Hughes’s “constitutional fact” doctrine widely, and soon commentators
were complaining that the judiciary had abdicated in favor of the ICC,
public utility commissions, and workers’ compensation commissions.
4 Crowell v. Benson, 285 U.S. 22, 56 (1932).
Many other forms of administration were immune from even “substantial
evidence” review on the ground that they dispensed “privileges” rather
than determined “rights.” For example, unless statutes provided otherwise,
courts could not interfere with administrators as they distributed pensions,
deported aliens, sold public land, awarded government contracts and loans,
parceled out grants-in-aid to the states, employed public workers, or decided
which periodicals were eligible for the Post Office’s low-cost, “second-class”
mailing privilege.
Some observers attributed the judges’ infidelity to Dicey’s ideal to a
failure of will when confronting an avalanche of administrative decisions.
Others maintained that they were simply recognizing obvious and inherent
differences between adjudication and administration. The judges who
staffed Dicey’s “ordinary courts” were of necessity generalists. Administrators,
in contrast, developed and applied the specialized expertise that
modern times demanded. Courts were passive bodies that acted only when
some party brought disputes before them; administrators could conduct
investigations on their own initiative. Courts issued final decrees in discrete
cases; administrators could continuously review prior decisions and
engaged in rulemaking based on knowledge acquired by their own staffs.
Judges deferred to administrators with a reputation for employing their
expertise and procedural flexibility competently and in the public interest.
If they suspected that decisions were made for personal gain or to reward a
political constituency, they usually found a way to avenge the rule of law.
In the 1920s, the varying treatment that federal judges accorded agencies
they trusted and those they did not can be seen by contrasting the ICC and
the FTC. Federal judges were extremely deferential to the ICC and placed
some of its “negative orders” (decisions not to proceed against the subject of
a complaint) beyond judicial review. In contrast, they ran roughshod over
the FTC. The U.S. Supreme Court insisted that federal judges make their
own determination of what constituted “unfair methods of competition.”
When intermediate federal courts reversed the FTC’s findings of fact, the
Supreme Court usually affirmed, even though Congress had directed that
the commission’s determinations be considered “conclusive.”
The difference in judicial treatment turned on the great disparity in the
professionalism of the two agencies’ staffs and the extent to which their
procedures tracked those of the courts. The ICC had a tradition of nonpartisanship
dating from the appointment of its first chairman, the great
Michigan judge Thomas Cooley. It had able economists and secretaries, and
in 1916 its large legal staff was brought within the federal civil service. In
most respects, its procedures were familiar to any courtroom lawyer, and
its orders were backed up with published opinions that compared favorably
with those of the courts. The FTC was another matter. From the start it
was plagued by weak commissioners, selected more for their service to
their party than their knowledge of business affairs. From 1925 onward,
its chairman was William E. Humphrey, an outrageously partisan and
pro-business Republican. Neither the commissioners nor their politically
appointed lawyers paid any attention to the FTC’s small economic staff,
and the commissioners gave little indication of the reasoning behind their
decisions. Senators roamed the halls at will in search of commissioners to
lobby.
At the end of the 1920s, then, administration was a familiar but subordinate
feature of the American state. The speed and flexibility that made it an
attractive alternative to courts and legislatures also attracted the suspicions
of a jealous judiciary and the unwanted attention of politicians seeking new
ways to reward contributors and constituents. Many American bureaucracies
had acquired administrative “capacity” – the ability to solve problems
and achieve ends – but few enjoyed express judicial or legislative recognition
of their “autonomy” – the ability to formulate goals and policies
independently of private interests, political parties, and other arms of the
state. That would be forthcoming only after an unprecedented economic
crisis, a second world war, and a recasting of administrative procedure in
ways that allowed lawyers greater leverage within the administrative process
itself.
II. STORM OVER THE NEW DEAL
The legal history of the American administrative state did not deviate from
the path of uncoordinated, sporadic growth on ground left unoccupied by
courts and party-dominated legislatures until an economic catastrophe of
unprecedented proportions hit the nation at the end of the 1920s. The stock
market crash of 1929 and the ensuing downward spiral of business activity
left nearly a quarter of the American workforce unemployed and elicited
a wide range of proposals from reformers, universities, civic associations,
private foundations, and government officials. The Republican president
Herbert Hoover was cautious in sampling these wares, but his Democratic
successor enthusiastically experimented with one innovative use of administration
after another. Typically, new “emergency” or “alphabet” agencies
were created as independent commissions to implement the proposals. The
most successful agencies acquired the funds, staff, and procedures to formulate
policies without returning to Congress and to obtain compliance with
their orders with only occasional resorts to the courts.
Two vast schemes of command-and-control regulation created during the
first months (the First Hundred Days) of Franklin Roosevelt’s presidency
showed how vital “state autonomy” was for a new agency. The National
Recovery Administration (NRA) was created to reduce the overproduction
of goods that was the most puzzling phase of the depression. In 1933 no profession,
academic discipline, or arm of the state had the detailed knowledge
of the hundreds of industries that the NRA regulated, so its administrators
turned the job of drafting regulations over to “code authorities” made up of
leading businessmen. In theory, the NRA’s staff was to review their work,
but the staff lacked the expertise and authority to second-guess the industrial
representatives. By early 1935 most observers were convinced that
the legislative power Congress had delegated to a supposedly independent
agency was actually being exercised by the industrialists themselves. In
contrast, the principal agricultural agency of the First Hundred Days, the
Agricultural Adjustment Administration (AAA), was more successful in
its quest for autonomy. It attacked the problem of excess supply by paying
farmers to cut back on their production of wheat, corn, cotton, tobacco,
rice, hogs, and milk, with the money coming from a tax on the processors
of these items. Local committees of farmers were to assist in deciding whose
acreage was to be reduced and how subsidies were to be distributed, but
they did so under the direction of the large and well-established extension
service of the U.S. Department of Agriculture and with the assistance of
experts in the country’s many land-grant universities.
Similar success was enjoyed by the Securities and Exchange Commission
(SEC), created in 1934 after a year’s experience with the regulation of the
issuance of stocks and bonds by the FTC. The Securities and Exchange
Commission bore a superficial resemblance to the NRA in that it asked stock
dealers and exchanges to codify their best practices and relied on accountants
to develop and enforce the intricate reporting requirements for each new
issue of stocks and bonds. But the SEC was no rubber stamp: unusually able
lawyers had drafted its organic act and served as commissioners or members
of its legal staff. The agency retained a reputation for efficiency and expertise
long after other New Deal agencies had slipped into quiescence. The SEC
also benefited from the unusual sensitivity of securities markets to publicity.
The issuance of an administrative “stop order,” which blocked an offering
until some discrepancy in a company’s registration statement was resolved,
could scare off investors. The damage was done long before the order could
be challenged in court.
The New Deal also produced a landmark in the history of social insurance
and social provision, the Social Security Act of 1935. One part of the statute
federalized the states’ mothers’ pensions, but, at the insistence of Southern
Democrats, it left broad discretion to state officials. In the South, officials
were careful not to let these “welfare” payments upset the domination
of whites. Everywhere, recipients had to submit to intrusive, stigmatizing
guidelines. The statute’s provisions for wage earners, such as unemployment
insurance and old age insurance, were quite different. These “social security”
payments were funded by the contributions of workers and their employers
and were treated as unconditional entitlements. Old age pensions were
exclusively administered by a federal Social Security Board; unemployment
payments were distributed under strict guidelines set by federal officials.
State capitalism took a great leap forward during the New Deal. The
Tennessee Valley Authority (TVA) was created in the first days of the Roosevelt
presidency to use the cheap electricity generated at Muscle Shoals to
promote economic development in an impoverished region. The Bonneville
Power Administration, created in 1937, brought the federal generation of
electric power to the Pacific Northwest. More generally, New Dealers lent
on a massive scale to corporations, cooperatives, homeowners, consumers,
and localities. The Reconstruction Finance Corporation was created during
Hoover’s administration to serve as a safety net for faltering banks. Under
FDR, it became a vast and diversified lender to private business and other
New Deal agencies. Smaller, more specialized programs proliferated to guarantee
farm and home loans, consumer purchases of electrical appliances and
equipment, rural electrical cooperatives, and municipal utilities.
Business leaders cooperated in such largess-distributing programs, but
they fiercely resisted a New Deal innovation in the field of social police,
the National Labor Relations Board (NLRB). A series of labor boards had
been created under the NRA in response to union organizing drives in such
mass production industries as electrical products and automobiles. After
the NRA was declared unconstitutional, Congress created the NLRB in
1935 as a quasi-judicial, independent commission and charged it with outlawing
“unfair labor practices,” much as the FTC had been given the job of
punishing “unfair trade practices.” The NLRB’s legal staff was more able
than the FTC’s, and many of its lawyers passionately believed in the cause
of organized labor. Although employers denounced the NLRB as a radical
transgression of American liberty, its powers were quite modest when compared
with systems of labor governance elsewhere in the industrial world.
Rather than produce detailed schedules of wages and work rules, for example,
the New Deal left the terms of labor contracts to the employers and the
unions themselves. Neither the NLRB nor various bodies created within
the Department of Labor to oversee government contracts and enforce minimum labor
standards ever developed into the national “employment courts”
commonly found in other industrial countries.
The New Deal’s experiments in administration may have seemed modest
when compared with the centralized bureaucracies of European nations,
but they were quite enough to set off a fierce debate over whether bureaucracy
was compatible with the rule of law. Most of the major New Deal
agencies were greeted with a barrage of injunctions challenging their constitutionality.
In one eight-month period alone, the NLRB’s lawyers confronted
more than eighty suits inspired by a model brief prepared by a
committee of lawyers affiliated with the corporate-financed American Liberty
League. Such campaigns could not stop the spread of administration,
but they did succeed in formalizing the hearings in which administrators
passed judgment on legally protected rights. In the late 1930s, Congress
increasingly showed interest in administrative reform as a way of keeping
FDR from converting the alphabet agencies into an independent political
base.
From the vantage point of the U.S. Supreme Court’s decisions in 1935
and 1936, one would not have predicted the survival of much of the New
Deal. In January 1935 the prohibition of “hot oil” shipments (excess petroleum
shipped across state lines) became the first major New Deal policy to fall,
on the surprising ground that it violated the non-delegation doctrine. In
May the U.S. Supreme Court struck down the NRA as an unconstitutional
delegation of legislative power and an intrusion of the federal government
into matters pertaining to the states. In January 1936, the Court declared
that the AAA’s tax on food processors could not be squared with the Constitution’s
requirement that expenditures promote “the general welfare.”
In May 1936 it struck down an NRA-like scheme to promote collective
bargaining and fix prices in the coal industry. The TVA survived the Court’s
scrutiny, but otherwise the New Deal’s prospects looked bleak at the end
of the 1935–36 term.
Bleak, but not hopeless. The NRA and coal cases involved de facto delegations
of legislative power to business groups with only modest review by
public officials. Better crafted delegations to stronger agencies might well
survive judicial review. The AAA had an impressive administrative staff,
and the constitutional infirmity the Court identified was easily corrected by
paying for crop reduction out of the general revenues of the federal government.
Still, President Roosevelt was not content to hope for more favorable
decisions from the Supreme Court as then constituted. Emboldened by his
landslide reelection, he announced in early February 1937 a plan to appoint
additional justices to the Supreme Court.
What influence the “Court-packing” plan had on the justices before its
defeat in the summer of 1937 is difficult to gauge. The justices were already
showing signs of greater tolerance for the alphabet agencies before the plan
was announced. In December 1936, for example, a majority passed up
chances to attack the SEC’s power to restructure public utilities and another
New Deal agency’s funding of municipal power plants. More dramatic was
the justices’ upholding of the NLRB and the Social Security Act just months
after the Court-packing plan became public. The Supreme Court upheld
a reconstituted AAA in 1938, a new coal commission in 1940, and the
federal minimum wage in 1941.
As the constitutional barricades fell, those who sought to restrain the
federal agencies fell back on administrative law. In three cases decided in
the spring of 1936, the U.S. Supreme Court seemed to reaffirm its fidelity
to Dicey’s notion of the rule of law. In the Jones decision, Justice George
Sutherland denounced the SEC’s refusal to let a would-be issuer of securities
withdraw his registration after a stop order proceeding had commenced. In
St. Joseph Stock Yards, Chief Justice Hughes extended his “constitutional
fact” doctrine to the question of whether rates fixed by regulators were
confiscatory. And in the Morgan case, Hughes required the U.S. Department
of Agriculture to follow procedures “akin to that of a judge” when fixing
rates for livestock dealers.5
5 Jones v. SEC, 298 U.S. 1 (1936); St. Joseph Stock Yards Co. v. United States, 298 U.S. 38 (1936); Morgan v. United States, 298 U.S. 468, 481 (1936).
Despite these harbingers, the anticipated Dicey revival never arrived.
Too many social and economic groups counted on administration for the
normal run of its actions to be subjected to what FDR called “the stately rituals of
the courts.” What gained ground instead was an alternate understanding
of the rule of law that provided a rationale for the growth of bureaucratic
autonomy during the New Deal. It held that law was not a set of abstract,
general principles, best divined by judges, but rather a set of procedures and
processes that permitted all kinds of state actors to identify and articulate
a socially functional result or policy. In effect, the government of laws was
to become a government of social rationality. Judges still held a privileged
position under the new dispensation, but they were to be more respectful
of the competence of other institutions of the state. They were not to insist
on their own understanding of the public good, but to ensure that other
public officials exercised their power in good faith through procedures that
were likely to produce socially optimal results. James Landis, dean of the
Harvard Law School, magisterially reassured readers of his lectures on The
Administrative Process that the new approach did not threaten “our ideal of
the ‘supremacy of law.’” Rather, it raised the ideal “to new heights where
the great judge, like the conductor of a many tongued symphony . . . makes
known through the voice of many instruments the vision that has been
given him of man’s destiny upon this earth.”6
6 James M. Landis, The Administrative Process (New Haven, 1938), 155.
Beginning in his second term, FDR’s nominations ensured that a majority
of the Supreme Court justices considered courts and agencies to be
“collaborative instrumentalities of justice” – as Landis’s mentor, Justice
Felix Frankfurter, put it in 1941. Federal judges insisted that agencies
give the individuals and groups whose rights were directly affected by
their decisions an opportunity to be heard. Whenever agencies resorted
to formal adjudication, courts tended to measure their hearings against the
benchmark of judicial proceedings. Most agencies proactively “judicialized”
their formal adjudications to avoid the courts’ rebukes. Wherever this was
done, lawyers and their clients acquired greater leverage over the agency’s
decision-making process.
The judicialization of agencies’ formal procedures led administrators to
develop informal ways of obtaining compliance. For example, officials at
the Bureau of Internal Revenue settled most tax disputes through correspondence
and conferences. If the members of the Federal Communications
Commission (FCC) were scandalized by one of Mae West’s double entendres,
they were more likely to use a speech or press release to caution broadcasters
than a lengthy revocation hearing. SEC lawyers found that a simple “deficiency
letter” brought corporations to heel just by threatening the adverse
publicity of a stop order proceeding. Formal adjudications were only the
tip of the iceberg; informal action, the great mass below the waterline.
With the courts proving an unreliable ally, critics of the administrative
process turned to Congress, where a coalition of Republicans and anti-administration
Democrats had been alarmed by the Court-packing plan
and FDR’s attempt to purge his Congressional rivals in the Democratic
primaries of 1938 and looked for ways to check the growing power of the
alphabet agencies. The Walter-Logan bill, drafted by a committee of the
American Bar Association, won increasing support after its introduction
in January 1939. The bill sought to curb what Roscoe Pound, the former
dean of the Harvard Law School, called “administrative absolutism” in
three ways. First, it would mandate an “internal” separation of powers
by providing for appeals of formal adjudications to independent review
boards established within each commission or department. Second, it would
enact a new standard for reviewing agencies’ fact finding to promote more
aggressive judicial oversight. Finally, it would permit the review of “any
affirmative or negative decision, order, or act in specific controversies which
determines the issues therein involved” – an ambiguous provision, but one
that might subject even informal actions to judicial scrutiny.
Congress passed the Walter-Logan bill in 1940, but it did not do so out
of a principled commitment to Dicey’s rule of law. Its exemption of most
agencies created before the New Deal suggested that its main goal was to
deny FDR the administrative patronage he needed to build a liberal political
party centered on the presidency. FDR’s veto of the bill in December 1940
created a legislative stalemate that persisted for the duration of the war. In
that interval, administration proved itself by helping convert the struggling
prewar economy into an awesome engine of war production and economic
growth. In the process, the New Deal political regime was consolidated.
Programs of state capitalism that rewarded capitalists for overseeing the
war machine became an inextricable part of American governance; those
that targeted the persistently unemployed or regarded the long-term needs
of the nation were discarded. When Congress returned to the subject of
federal administrative procedure after the war, it showed more interest in
bolstering the rule of lawyers within the administrative process than the
rule of law through the courts.
III. WAR AND THE SHADOW OF WAR
The United States met the exigencies of World War II and the Cold War
that followed with a massive expansion of the federal bureaucracy and an
updating of administrative techniques pioneered during the New Deal.
Civilian employment in the federal government jumped from just over
1 million in 1940 to just under 4 million in 1945.
War regulations swelled the Federal Register from 5,307 pages in 1940 to
17,339 pages in 1943. Not all parts of the New Deal’s administrative
legacy were deemed serviceable for the war and postwar states, however.
Agencies charged with breaking up industries, economic planning, or the
redistribution of wealth were cabined in or abolished; those that promoted
growth through the distribution of government largess or the manipulation
of public finance flourished.
To be sure, a series of war agencies brought command-and-control regulation
to the entire economy. The War Production Board (WPB) allocated
resources through a complicated system of allowances and priorities. The
Office of Price Administration (OPA) fixed retail prices, controlled rents,
and ultimately rationed more than 90 percent of consumer goods, including
food, gasoline, and clothing. The War Labor Board (WLB) administered
a freeze on wages, and the War Food Administration directed agricultural
production with procedures developed by the AAA.
But all these activities were expressly temporary and relied heavily on
the cooperation of private actors. Advocates of industrial planning and
wealth redistribution survived in a few agencies (notably the OPA), but as
the war proceeded they became an increasingly embattled minority within
the federal bureaucracy. A conservative Congressional majority dismantled
the New Deal’s most ambitious planning body, the National Resources Planning
Board, in 1943, and Congressional committees repeatedly harried the
“draft dodgers” of the OPA. Military officials acquired the upper hand
in directing the war economy, and in civilian agencies leadership passed
to “dollar-a-year” men who took Washington jobs with no thought of a
permanent career in public service.
The expansion of state capitalism, in the guise of public contracts and
loans, was even more impressive. From the summer of 1940 through the
fall of 1944, the federal government awarded $175 billion in war contracts,
two-thirds of which went to the nation’s 100 largest corporations. At first,
military and civilian procurement bureaus lacked the staff and expertise
to gather data on contractors’ costs, profits, finances, and subcontracting.
Speed of delivery, not low prices or enlightened social policies, was the
priority. In time, the WPB, Army, and Navy created cost-analysis sections
and legal divisions to draft contracts, renegotiate prices when they resulted
in excessive profits, and punish breaches of contractual terms. Businessmen
who objected could not go straight to the courts, but had to start with boards
of contract appeals created within each military branch. The lessons of
wartime procurement would generally be followed in defense expenditures
after VJ Day.
A revolution in the fiscal state made the massive expenditures of the war
and postwar years possible. Before the war, New Dealers had used the federal
income tax to target America’s wealthiest. The revenue acts of 1942 and
1943 vastly expanded its coverage to reach the middle class and instituted
the automatic withholding of taxes from wages and salaries. With the stroke
of a pen, the federal government could take more money out of taxpayers’
paychecks or add to their take-home pay. Together with other wartime
innovations, such as improvements in the issuance of public debt, the federal
government acquired the means to stabilize business cycles and encourage
investment without intruding into the production decisions of individual
businessmen. The Employment Act of 1946 made the maintenance of high
levels of employment a responsibility of the federal government and created
a Council of Economic Advisers to guide policymakers.
Existing social insurance programs, such as old age and survivors insurance,
were put on a secure financial footing as revenues from war-swollen
paychecks grew more rapidly than disbursements. But attempts to expand
the public welfare state by creating national health insurance failed in 1943
and again in 1945, even with the backing of President Harry Truman.
(Medicare and Medicaid, which covered the elderly and the poor, would not
appear until 1965.) Veterans – numbering 19 million in 1950 – greatly
benefited from welfare programs of their own, including unemployment
insurance; job placement; grants for tuition, room, and board; and guaranteed
loans. But for others the more significant development was the growth
of the so-called private welfare state: pension and health plans funded by
employers and managed by private insurance companies. Several firms had
experimented with “welfare capitalism” during the 1920s and 1930s, but it
took a resurgent economy, the demands of the labor movement, a wartime
freeze on salaries (but not fringe benefits), and the favorable tax treatment
of employers’ contributions to spread employer-provided benefits across
American industry. Once again, American policymakers counted on the
private sector to provide benefits that were disbursed through wholly public
schemes in other industrialized nations.
Finally, the power and the limits of the war and postwar states can be seen
in the field of social police. When a real or imagined threat to social order
lacked political power, the administrative state could subject it to ambitious
systems of social control. The most dramatic example was the wartime
internment of 120,000 first- and second-generation persons of Japanese
descent residing in California and the Pacific Northwest – two-thirds of
whom were American citizens. African Americans, whose labor was needed
for the war economy, forced the creation of a Fair Employment Practices
Committee (FEPC) to check racist whites, but the opposition of Southern
Congressmen left it without adequate means of enforcement. More effective
administrative systems appeared after the war, including “little” FEPCs in
twenty-five states and, in 1964, the federal Equal Employment Opportunity
Commission.
Of the usual targets of social policing, organized labor fared the best.
Its disputes were settled by the War Labor Board, whose orders were
backed by FDR’s authority to seize the plants of recalcitrant employers
and cancel the draft deferments of striking workers. Unions pledged not to
strike and accepted a freeze on wages for the duration of the war. In return,
the WLB required that employees maintain their union membership for the
duration of a contract. It also provided for the arbitration of grievances, a
process that produced a large body of industrial “law” beyond the domain of
the courts. In 1947 Congress reacted to a strike wave with the Taft-Hartley
Act, which (among other things) directed the NLRB to punish the unfair
practices of unions as well as employers. Hearings on labor racketeering
led to the passage in 1959 of legislation regulating unions’ internal affairs.
Still, the wartime bargain held into the 1970s: unions enjoyed the benefits
of state-sponsored collective bargaining in return for help in organizing the
industrial workforce.
A final form of social police, targeting members of the Communist Party,
appeared at war’s end and persisted throughout the 1950s. In 1945 the
American Communist Party was in decline, weakened by internal schism.
Then, the outbreak of a Cold War with the Soviet Union heightened fears
of espionage, which had a factual basis in some (but very far from all)
of the intelligence gathering conducted by the FBI since the 1930s. To
head off action by the Republican majority in Congress, in 1947 President
Truman ordered all federal agencies and departments to establish review
boards to determine whether employees were disloyal to the United States.
Because public employment was deemed a privilege, not a right, the boards’
procedural safeguards were lower than those of the courts. For example,
loyalty review boards could consider evidence that was never revealed to
employees, who were thereby denied the chance to cross-examine their
accusers.
The loyalty review boards, deportations of foreign-born Communists,
forced resignations of union leaders affiliated with the Communist Party,
trials of Communists under anti-subversion and espionage statutes, and
sensational Congressional hearings transformed anti-Communism from a
somewhat marginal political phenomenon into a national obsession. The
administrative policing of Communists targeted real threats to national
security, but it swept far too broadly and ruined the lives of many innocent
persons. Further, the Red-baiting it fostered impugned the loyalty of those
who advocated social reforms that were common elsewhere in the world.
The vast wartime expansion of the federal administrative state took place
largely beyond the reach of judicial review. The awarding of a war contract,
for example, was deemed the conferral of a privilege, not the recognition
of a right, so that Congress could require recipients to pursue any disputes
over contracts in administrative bodies, immune from all but the most
limited judicial review. Although the OPA’s enforcement suits clogged the
federal district courts, the agency’s preferred method of bringing businesses
to heel was to deny them subsidies, another unreviewable “privilege.” As
during the New Deal, the overwhelming majority of disputes were resolved
through negotiation and settlement without a formal hearing, a pattern that
would continue into the 1950s.
On those occasions when disputes were appealed to the courts, the prewar
pattern of judicial deference continued. The Supreme Court instructed
federal judges to accept departures from judicial rules of evidence and to tolerate
remedies no court could order. The Supreme Court welcomed appeals
that gave it the chance to rebuke agencies that had lost its confidence.
(The FCC fared particularly poorly in the 1940s.) More commonly, it upheld
administrators. Between 1941 and 1946 it reversed the decisions of eight
leading agencies only 28 percent of the time.
If the federal judiciary thus proved an unreliable ally in resisting the
wartime state, Congress beckoned as an alternative. All agencies were
created under the ultimate authority of some statute, most were run by
appointees subject to senatorial confirmation, and most were dependent on
Congress for annual appropriations. Congress had appointed special committees
to scrutinize the NLRB and other New Deal agencies before the
war. More special committees were created to oversee the war effort. Some,
such as the Senate committee chaired by Harry S. Truman, were temperate,
but others, such as the House Select Committee to Investigate Acts
of Executive Agencies Beyond the Scope of Their Authority, were openly
hostile to the administrative process.
Yet, many in Congress became convinced that it lacked the tools to
oversee administrative agencies in a meaningful way. True, its committees
had occasionally ousted administrators who made unpopular decisions.
Further, appropriations committees had refined the practice of disciplining
agencies by cutting or threatening to cut their budgets. In 1943, for
example, a Congressional committee prohibited the NLRB from proceeding
against employers who had entered into sweetheart, “closed-shop” contracts
with company-dominated unions. But Congressional insiders knew such
instances to be exceptional, and they decided that stronger measures were
required. In 1946 Congress created permanent oversight committees, each
supported by four professional and six clerical staffers, to police agencies
within their jurisdictions.
In 1946 Congress also sought to discipline the federal agencies by passing
the Administrative Procedure Act (APA), which had been drafted by a
committee of the American Bar Association. It was a much milder measure
than the Walter-Logan bill. The APA committed vast realms of informal
agency action to administrators’ discretion, with only a remote possibility of
judicial review on the ground that it was arbitrary and capricious. The act’s
requirements for rulemaking were easily met: agencies need only give notice
of an impending regulation, provide an opportunity for interested parties
to submit written comments, and not behave arbitrarily or capriciously.
Seemingly, the APA’s most significant change came in the area of formal
adjudication. It declared that findings of fact in decisions made on the
record after an opportunity for a hearing had to be based on “substantial
evidence on the record considered as a whole.” In Universal Camera (1951),
the U.S. Supreme Court, speaking through Justice Frankfurter, announced
that Congress had meant to express a “mood” in favor of closer judicial
scrutiny of the factual basis of agencies’ decisions.7 Apparently lower federal
judges took the hint: reversal rates in appeals from federal agencies to the
U.S. Courts of Appeals were slightly higher in the 1950s than in the 1940s.
Yet, in no year did the federal courts affirm agencies’ actions less than 70
percent of the time, and none of the courts’ decisions attacked the core of
the administrative process.
7 Universal Camera Corp. v. NLRB, 340 U.S. 474, 478 (1951).
Realizing that the courts could not possibly review more than a small
fraction of agencies’ decisions, the APA settled for “judicializing” the agencies
themselves by increasing the independence and authority of the hearing
examiners who took evidence, made findings of fact, and prepared recommendations
for an agency’s chief administrators. Under the act, hearing
examiners (renamed “administrative law judges” in 1972) had to be a distinct
corps within the agency; they could not be a “prosecutor” one day
and a “judge” the next. The hearing examiners were not to consult with
an agency’s investigators or prosecutors without giving all parties notice
and an opportunity to participate. Commissioners were not required to
accept a hearing examiner’s report, but, after the Universal Camera decision,
those who rejected a report’s conclusions in cases turning on the credibility
of witnesses could expect a skeptical reception in an appeal to the
courts.
The legislation of 1946 completed the domestication of the New Deal’s
administrative state. Those without social power derived little comfort
from the new regime. When the U.S. Supreme Court ruled that aliens
had to be given hearings that met the standards of the APA before they
could be deported, Congress promptly amended an appropriations act for
the Immigration and Naturalization Service to overturn the decision. But
for businesspeople buoyed by the return of economic prosperity, the system
was quite satisfactory. A new breed of Washington lawyers provided
inside knowledge of how administrators exercised their discretion in distributing
contracts, loans, surplus defense facilities, licenses, and favorable
tax rulings. Some also explained how Congressmen could be induced to
hurl thunderbolts at uncooperative agencies. Should an agency persist in an
unfavorable ruling or a costly regulation, these same lawyers could exploit
the procedural guarantees of the APA to string out proceedings for months
or even years. Delay became a chronic problem in the federal regulatory
agencies of the 1950s and 1960s. For example, the FDA started to establish
standards for peanut butter in 1959 but – thanks to the efforts of the
masterful Washington lawyer, Thomas Austern – did not promulgate them
until 1971.
In the 1950s it became increasingly obvious that something had gone
terribly wrong with the administrative process in general and the independent
regulatory commissions in particular. During the New Deal, James
Landis had defended administration as a way to bring to bear on social
problems more expertise than the courts possessed. When he revisited the
regulatory commissions at the request of president-elect John F. Kennedy in
1960, Landis concluded the agencies’ expertise was more fiction than fact.
Agency staffs needed better pay, he announced, and their top officials ought
to be appointed from the staff, rather than chosen from the ranks of campaign
contributors, ex-Congressmen, and industry representatives. Landis
deplored commissioners who made their decisions in secret for obscure reasons
and then instructed staff members to justify the result. The staffers, he
noted, could only do so on narrow grounds, because they knew they might
have to defend an inconsistent result the next day.
To some extent, such charges could be sidestepped by shifting the defense
of the administrative process from functionalist to pluralist grounds. Even
if the commissions’ procedures did not produce expert solutions to social
problems, defenders argued, they gave economic groups the opportunity
to press their interests on a specialized body, which then struck a balance
that tolerably promoted the interests of all. Washington lawyers were
particularly drawn to this rationale. Time-consuming, judicialized procedures
were required for the proper presentation of their clients’ needs, they
maintained.
Political scientists in the 1950s had a less sanguine view. Commissions
could not strike a balance of the relevant interests, they argued, because the
commissioners had been “captured” by the industries they were supposed to
regulate. Aloofness from partisan politics had not made commissions truly
independent; it had only cut them off from popular sources of political
strength. To persuade Congress to maintain their budgets and jurisdiction,
commissioners needed the help of the lobbyists for the industries they
regulated, and this help would not be forthcoming if they regulated too
aggressively. The vigorous young agencies of the New Deal had become
senile, the political scientists argued, wasting away under the debilitating
disease of industry capture.
The political scientists and a parade of presidential task forces did not produce
a popular demand for reform. For that, scandal was required. In 1957,
at the instigation of Speaker Sam Rayburn, the House Commerce Committee
created a special subcommittee on legislative oversight and provided it
with a small staff, including a chief counsel, who thought he had a broad
mandate to sniff out corruption. By early 1958, the staff had discovered
that an FCC commissioner was taking bribes and that President Dwight
Eisenhower’s most trusted assistant had intervened in FTC and
SEC proceedings after receiving a fur coat from a targeted businessman.
Most inconveniently, the chief counsel revealed that the committee chairman
who appointed him had recently acquired, on very favorable terms,
a large stake in a company that was then quite unexpectedly awarded a
profitable television license. The chairman was not amused and fired the
chief counsel. Although the subcommittee continued its investigations, the
affair suggested that Congress lacked the will to oversee agencies effectively.
The search was then on for other ways to make federal regulatory agencies
as independent of business interests as they were of Congress and the
judiciary. The political scientists’ preferred solution was to abolish independent
commissions and transfer their functions to the executive departments.
Such a move would place the decision makers within a hierarchy headed
by the president, the one figure in Washington accountable to a national
electorate. Occasionally a presidential task force or renegade commissioner
endorsed the idea, but Congress had no interest in boosting presidential
power at its own expense. Other opposition emerged from leading lawyers,
who still believed that the commissions could regain their autonomy if they
were given the right procedures, honest and expert leaders, and well-trained
staffs.
That neither the political scientists’ nor the lawyers’ solutions would reform
the administrative process became apparent during John F. Kennedy’s
presidency. Landis recommended that JFK ask Congress to give commissioners
and their staff longer terms and better salaries, give chairmen greater
authority within their commissions, and establish a White House oversight
office. After Congress signaled its limited interest by rejecting three of his
six reorganization plans, JFK did not bother to send over legislation for a
presidential oversight office. Kennedy’s appointees were vastly superior to
those of Truman and Eisenhower, but for the most part Lyndon Johnson
reverted to the practice of rewarding contributors and party stalwarts. In
general, Kennedy and LBJ, like many postwar liberals, preferred to promote
economic growth through tax cuts, rather than by expanding the regulatory
state.
If neither Congress, nor the presidency, nor “the Best and the Brightest”
could restore the agencies’ autonomy, the judiciary at the dawn of the sixties
appeared no more promising. To be sure, during the late 1950s the U.S.
Supreme Court and U.S. Court of Appeals for the District of Columbia
Circuit had remanded several FCC cases for hearings on whether licenses
ought to be rescinded in light of Congressional revelations of corruption.
Overturning the tainted decisions of a notoriously politicized agency was
one thing, however; second-guessing commissioners on an ongoing basis
quite another. The federal judges still saw their job as ensuring that the state
treated individuals fairly, and they still tended to equate fair treatment with
the procedures of the courts. Few were eager to inquire into commissioners’
motives, so long as procedural niceties were observed. When, in 1957, the
D.C. Circuit judge David Bazelon voted to overturn a decision of the Federal
Power Commission (FPC) because its commissioners could not possibly have
read, much less deliberated on, the 20,000-page record in the time they had
it before them, he did so alone. The other two judges in his panel backed
the FPC, and the Supreme Court rejected a further appeal.
Yet when a new cycle of state-building and consolidation commenced in
the 1960s and 1970s, federal judges were in the vanguard, marching at the
head of representatives of those who had lost out in the consolidation of the
New Deal regime. Judges who had always thought of rights as guarantees of
individual autonomy against a hostile state suddenly saw them as claims of
individuals on the state for the support and protection that made autonomy
possible. It was not an insight they arrived at on their own.
IV. THE RIGHTS REVOLUTION AND THE
ADMINISTRATIVE STATE
The civil rights movement of the 1950s and early 1960s showed how
popular protests could be recast as rights and asserted in the courts. Among
those who took note were the members of what became known as the
consumer movement. Its leaders recast the capture theories of the political
scientists into the language of popular protest and judicially enforceable
claims on the state.
The founder of the movement was Ralph Nader, a child of Lebanese
immigrants. In 1958, while still a student at the Harvard Law School,
Nader started documenting the automobile industry’s seeming indifference
to the defective designs of its products. In 1963 he went to Washington
to serve as a researcher for an executive official and a Senate investigation
of the automotive industry. Two years later he won national attention by
publishing Unsafe at Any Speed, an exposé of General Motors’ concealment
of a life-threatening defect in its Corvair model. Injuries resulting from the
defect were, Nader charged, violations of the “body rights” of Americans,
which deserved as vigorous a response as violations of civil rights.
A small army of law students and young lawyers, dubbed “Nader’s
Raiders,” flocked to the activist’s Center for the Study of Responsive Law.
Between 1969 and 1970, the Center published scathing exposés of the FTC,
the ICC, the FDA, the National Air Pollution Control Administration, and
other agencies. The Naderites also took their charges to Congress, where
they found allies among sympathetic subcommittee chairpersons, newly
empowered by a series of reforms. Congress responded with more rigorous
oversight and legislation that opened agencies’ decision making to greater
public scrutiny.
Soon other advocacy groups adopted the tactics of the consumer movement.
Many originated as cadres of Washington-based activists, funded first
by foundation grants and then by contributions from a dispersed, national
constituency. Some of the new advocacy groups remained small, but others
acquired substantial memberships. Probably the largest growth occurred in
the environmental movement. “Conservationist” groups, such as the Sierra
Club and the Audubon Society, saw their membership jump by one-third
between 1970 and 1971. New groups, including Friends of the Earth, the
Environmental Defense Fund (EDF), and the Natural Resources Defense
Council (NRDC), were founded to advance a broader agenda of environmental
protection. When Public Citizen, a Nader-sponsored organization,
convened the first national gathering of public interest groups in 1976,
more than 100 organizations sent representatives.
The public interest movement produced a distinctive style of regulation.
As we have seen, before the 1960s, economic regulation commonly affected a
single industry or sector by setting prices, mandating services, and limiting
entry. The “new social regulation” of the 1960s and 1970s, in contrast, cut
across industries to protect consumers, the environment, and the health
and safety of workers. Its hallmark was a focus on quality of life issues that
were more intangible than the economic concerns of the older regulatory
agencies. The risks of harm to people and the environment addressed by
the new laws were hard to assess, not simply for technological reasons but
also because of the open-ended nature of the values at stake.
Legislation creating the new social regulation poured out of Congress
until the recession of 1974–75. Consumer laws passed between 1966 and
1968 regulated automobile safety, cigarette labeling, truth in packaging,
the marketing of meat and poultry, and consumer credit. A burst of environmental
and health and safety legislation followed. Some of the legislation
revitalized existing agencies, such as the FTC. More striking was the creation
of new administrative bodies, including, in 1970, the Environmental
Protection Agency (EPA), the National Highway Traffic Safety Administration
(NHTSA), and the Occupational Safety and Health Administration
(OSHA) and, in 1972, the Consumer Product Safety Commission.
Just as the goals of the new regulation differed from those of the older
commissions, so did its form, in two respects. First, the new statutes generally
had more specific delegations of legislative power than the legislation of
the early twentieth century. Instead of handing an agency a blank check to
act in the public interest, advocates and their Congressional allies enacted
extremely detailed provisions, even to the point of specifying numerical
goals. Oversight continued after passage in the guise of subcommittee hearings
and staff reports that threatened agencies with budget cuts should they
fail to follow Congress’s lead.
Second, the new social regulation was much more likely to take the form
of rulemaking than trial-type adjudication. Most agencies created before
the 1960s preferred the flexibility of case-by-case decision making, but the
new social regulation required widely applicable standards with precisely
specified content, something hard to produce one case at a time. In addition,
the exposés of Nader and his associates revealed how corporate influence
could flourish under an ad hoc approach. Rulemaking was more general
in scope and was deemed harder to use to reward particular firms or
industries.
Complementing the new social regulation were changes in the legal
profession and administrative law. Within the legal profession, the crucial
development was the emergence of public interest lawyers, paid not
by clients but out of foundation grants, federal salaries, or court-awarded
attorneys’ fees. The new breed first appeared during the War on Poverty. In
the early 1960s, a small group of lawyers, law professors, and social workers
in New York and New Haven, funded by modest grants from the Ford
Foundation and the federal government, developed a plan to win procedural
rights for the recipients of welfare, who, under existing law, could
not effectively object if administrators terminated their benefits unfairly
or subjected them to demeaning supervision. For some of the lawyers,
due process was an end in itself; for others it was a way to make the existing
system so burdensome that Congress would be forced to substitute a
guaranteed national income. The ranks of the anti-poverty lawyers grew
dramatically after Congress established a national legal services program in
the federal Office of Economic Opportunity. In 1965, 400 lawyers worked
in the nation’s legal aid societies; in 1972, 2,660 did, thanks to federal
funding.
In their fight for welfare rights, the anti-poverty lawyers won some landmark
cases, such as Goldberg v. Kelly (1970), which established a constitutional
right to a fair hearing before welfare benefits could be terminated.
They lost others. Win or lose, they showed how litigation and publicity
could be used to reform public bureaucracies. Soon, lawyers critical of other
federal agencies followed their lead. Starting in 1970, the Ford Foundation
gave grants to a number of public interest law firms, including the Citizens
Communications Center, the Mexican American Legal Defense Fund, and
three environmental groups: the EDF, the Sierra Club Legal Defense Fund,
and the NRDC. By 1976 some seventy-five public interest law firms, law
centers, and legal clinics were in existence. They were supported by foundations,
membership organizations, and (under the Clean Water Act and
some forty-five other statutes) awards of attorneys’ fees.
The public interest lawyers’ litigation would have gone nowhere had the
courts not been willing to remake fundamental doctrines of administrative
law. The first doctrine was the law of standing, which determined whether
litigants’ interests were substantial enough to justify their participation in
a suit. Before the 1960s, the right to challenge administrative agencies was
limited to companies regulated by the agency and business competitors.
Everyone else was part of the general public, which, in theory, already had
a champion in the agency itself. In the late 1960s and early 1970s, activists
won the right to appear in administrative proceedings to assert their own
notion of the public interest. An early landmark case was Scenic Hudson
(1965), which held that the “aesthetic, conservational, and recreational”
interests of a coalition of landowners and nature lovers gave them standing
to participate in an FPC hearing on the licensing of a hydroelectric power
plant.8 Later courts sided with a church group that sought to participate
in the FCC’s review of the racially biased programming of a television
station, the National Welfare Rights Organization in its bid to shape welfare
programs in six states, an environmental group that contested the building
of a federal highway through a park, and, in an extreme case, a group of
law students who challenged the ICC’s decision to place a surcharge on the
shipment of recycled materials. (The students had advanced the somewhat
doubtful theory that the fee would lead to more litter near their homes.)
8 Scenic Hudson Preservation Conference v. FPC, 354 F.2d 608, 615–17 (2d Cir. 1965).
Congress followed the courts by allowing “any person” to sue under the
Clean Air Act of 1970 and at least fourteen other statutes.
A right to participate would have meant little had the federal judges
not also decided to review the agencies’ rulemaking more aggressively.
The APA directed courts to uphold agencies’ rules unless the process that
produced them had been “arbitrary and capricious.” Starting in the early
1970s, the federal judiciary, led by the D.C. Circuit, started applying the
arbitrary and capricious standard with unprecedented strictness to ensure
that administrators had taken a “hard look” at the environmental, health,
and safety risks involved. One view, propounded by Judge David Bazelon,
was dubbed the “procedural” hard look. Bazelon argued that judges could
not hope to master the merits of the scientific and technical issues presented
in appeals from environmental and safety agencies, but they could specify
the procedures that would let public interest lawyers, who had the requisite
knowledge, do the job. He argued that courts ought to impose additional
procedural requirements on rulemaking, such as the right of any interested
party to cross-examine witnesses at a public hearing. In contrast, Bazelon’s
colleague on the D.C. Circuit, Harold Leventhal, called for “substantive”
hard look review, in which judges scrutinized the merits of an agency’s decision.
Other federal judges joined the fray, until, in Vermont Yankee (1978),
the Supreme Court seemingly endorsed substantive hard look review and
unambiguously rejected Bazelon’s procedural approach.9
Federal judges scrutinized the reasoning behind such decisions as the
Department of Agriculture’s refusal to ban the pesticide DDT, the Atomic
Energy Commission’s failure to prepare environmental impact statements,
the EPA’s regulation of leaded gasoline, and the National Highway Traffic
Safety Administration’s rescission of a rule requiring automatic seatbelts
and air bags in automobiles. In each case the courts acted not, as Dicey
envisioned, to limit the reach of administration in the interest of private
rights, but to urge agencies to regulate even more aggressively in the interest
of health, safety, and the environment.
By the mid-1970s, the New Deal regime had been significantly recast.
Federal agencies still engaged in command-and-control regulation, but
their every move was followed by consumer-oriented legislative subcommittees,
public interest lawyers, and the courts. New administrative bodies
issued regulations that crossed industrial lines with a specificity not seen
since the days of the OPA. The tight administrative world presided over by
the Washington lawyer was opened up to the influence of a more diffuse collection
of “issue networks” composed of Congressional officials, administrative
agencies, law firms, advocacy groups, foundations, and university-based
experts. A new political regime had emerged and was ripe for consolidation.
9 Environmental Defense Fund, Inc. v. Ruckelshaus, 439 F.2d 584 (D.C. Cir. 1971) (Bazelon, J.); Greater Boston Television Corp. v. FCC, 444 F.2d 841 (D.C. Cir. 1970) (Leventhal, J.); Vermont Yankee Nuclear Power Corp. v. Natural Resources Defense Council, Inc., 435 U.S. 519 (1978).
V. THE CONTRACTING STATE
The early 1970s would prove to be the high-water mark of the federal administrative
state in the twentieth century. Thereafter, the regulatory environment
turned increasingly hostile. First, the economic prosperity that had
generally prevailed since the early 1950s ended with the recession of 1973–
74, to be replaced by a combination of unemployment and inflation. Fears
that regulation was lessening the competitiveness of American industry in
the global economy would persist throughout the remainder of the century.
Second, a social and cultural backlash emerged that accused the revolution
in rights of transforming individual rights into entitlements for selfish
social groups. Critics charged that public interest advocates did not really
represent the public, just their own, idiosyncratic agendas. Third, business
leaders created their own network of Washington insiders. The number
of Washington-based corporate “government affairs” offices quadrupled
between 1968 and 1978; the number of Washington lawyers jumped from
16,000 to 26,000 between 1972 and 1978. Finally, the intensification of
the Cold War in the 1980s revived claims that America occupied an exceptional
place in world history because of its respect for freedom, especially
the freedom to do business in unregulated markets.
The roots of the most dramatic of the late-twentieth-century contractions
of the administrative state ran back to the 1950s when a group of economists
launched an attack on the received wisdom that public utilities were “natural
monopolies” requiring regulation. Nothing prevented the opening of such
industries to competition, the economists argued, except the selfish interests
of regulated companies and their unions, who counted on commissions to set
rates that guaranteed their profits and wages. If the political scientists of the
1950s had questioned the autonomy of the federal regulatory commissions,
these economists, in effect, questioned their capacity by arguing that the
problem of natural monopoly they were created to address did not in fact
exist and that unregulated markets would better promote social welfare.
Although this heterodoxy gained adherents in the economists’ ranks during
the 1960s and early 1970s, even true believers doubted that deregulation
would happen any time soon. The regulated industries and their unions
were too influential in Congress, the consumers who stood to benefit from
competitive prices too diffuse, for so dramatic a reversal of public policy to
occur.
Yet, deregulation came to America in the mid-1970s in a hurry.
Harbingers included the FCC’s loosening of AT&T’s monopoly of the manufacture
of telephone equipment and a statute abolishing fixed commissions
for the sale or purchase of stock. The Airline Deregulation Act of 1978
was an early, widely noted landmark. It was soon followed by the deregulation
of railroads, trucking, bus transportation, banking, long-distance
phone service, natural gas, and crude oil. The grand old patriarch of the federal
administrative state, the ICC, barely survived with a much-diminished
mandate. It was finally abolished at the end of 1995, on the eve of a second
wave of deregulation affecting the transmission of electricity and local
telephone service.
The campaign to abolish price-and-entry regulation triumphed because
it met the needs of a variety of political actors. The consumer movement
shared the deregulators’ dim view of the regulatory commissions and joined
their call to abolish the Civil Aeronautics Board (CAB). Congressional liberals,
such as Senator Edward Kennedy, embraced deregulation to show that
they could be as responsive to consumers’ concerns as any Nader Raider.
The two presidents plagued by the stagflation of the 1970s, Gerald Ford
and Jimmy Carter, saw deregulation as a way to lower prices, increase productivity,
and spur economic growth without increasing the federal deficit.
Even many commissioners found deregulation to be a smart career move.
The most prominent of the deregulating commissioners, the economist
and CAB chairman Alfred Kahn, pursued pro-competitive policies as a
matter of principle. Others, noting the acclaim lavished on Kahn, seemed
motivated not so much by conviction as eagerness to jump on a political
bandwagon.
Nader, Kennedy, and Carter favored not only the end of price-and-entry
regulation but also the continuation of the new social regulation. Others
in the late 1970s and early 1980s thought that both kinds of regulation
had gone too far. They pointed to the flood of detailed regulations pouring
from the environmental and safety agencies, including an incongruous
OSHA rule requiring portable toilets for cowboys. The cost of enforcing and
complying with the new rules soon became a major complaint. The federal
government’s expense in enforcing the new social regulations jumped from
$539 million in 1970 to more than $5 billion ten years later. Business’s
compliance costs, although harder to estimate, were much higher. In 1997
the Office of Management and Budget put the cost of enforcing and complying
with major federal rules at $279 billion, of which the majority was
spent on environmental protection.
Rulemaking itself grew expensive and time consuming. The Administrative
Procedure Act had simply required that agencies give interested parties
notice of an impending rule and an opportunity to submit written comments.
With the rise of hard look review, however, agencies added a series
of court-like procedures and produced what became known as “hybrid rulemaking.”
Commonly, all interested parties were entitled to present their
views orally; often each had a right to cross-examine witnesses. Courts
required agencies to respond to every significant objection and to show
that the rules they proposed were superior to those advanced by interested
parties. To anticipate the second-guessing of the judiciary, staffs compiled
mountainous records that took years to complete. By the mid-1980s, the
“ossification” of rulemaking had become a common lament.
The burdens of the new social regulation set off a search for alternatives.
Some agencies abandoned notice-and-comment rulemaking for adjudication.
The NHTSA, for example, shifted its energies from issuing rules to
individual recalls of defective automobiles. Others rediscovered the virtues
of informal action, such as press releases, interpretive rules, policy statements,
and emergency procedures in which their discretion was unquestioned.
Finally, many added a consensus-building phase to the rulemaking
process, known as “regulatory negotiation.” Under traditional notice-and-comment
rulemaking, interested parties could not see a rule until it was
published in the Federal Register. In a “reg-neg” proceeding, relevant “stakeholders,”
including manufacturers, trade associations, and environmental
and consumer groups, participated in the initial formulation of the rule.
First employed in 1983 by the Federal Aviation Administration to develop
a rule governing the flight time of airline personnel, reg-neg spread to other
agencies and received Congress’s blessing in 1990. Despite skeptics’ claims
that administrators sacrificed too much to gain a consensus and ended up
in court anyway, negotiated rulemaking was a well-established feature in
the regulatory landscape at the end of the twentieth century.
A more general consolidation of the public interest state drew on an economic
concept, cost-benefit analysis. The new social regulation established
absolute standards of health and safety without suggesting that anything
less was acceptable, even if the cost of compliance proved enormous.
Presidents Nixon, Ford, and Carter all tried to temper rulemaking with various
forms of review within the White House, without much effect. Soon
after his inauguration, however, Ronald Reagan announced that agencies
would be required to prepare “Regulatory Impact Analyses” for any new
rule that annually cost business $100 million or more and to submit the
rule to the Office of Information and Regulatory Affairs (OIRA), established
in the final days of the Carter administration, for an independent
review. Rules that failed OIRA’s review were returned to the agencies with
a request for further study. Most of the more than 21,000 rules submitted to
OIRA in the 1980s were adopted without change, but the percentage that
passed unscathed dropped, from 87 percent in 1981 to under 71 percent
in the last four years of the decade. In several prominent cases, involving
rules on exposure to asbestos and noxious chemicals, OIRA’s “return letters”
made the proposed regulations politically untenable and forced their
withdrawal.
In the late 1980s and 1990s, cost-benefit analysis spread across the regulatory
landscape. OIRA review continued under Presidents George H.W.
Bush and Bill Clinton, although Clinton made it somewhat more agency-friendly.
In 1996 Republican majorities in the House and Senate repealed
a “zero-tolerance” standard for pesticide residues in processed food and
required the EPA to conduct cost-benefit analyses in implementing the
Safe Drinking Water Act (1974). Clinton vetoed legislation mandating
cost-benefit analysis for all rulemaking, but did sign a bill requiring agencies
to submit major rules for Congressional review at least sixty days before
their effective date. Meanwhile, cost-benefit analysis spread to the states. By
2000, more than half formally required assessments of the economic impact
of agency rulemaking, and several had created offices of regulatory reform
to conduct the reviews. An attempt in the late 1990s to require cost-benefit
analysis as a matter of constitutional law in implementing the Clean Air
Act would ultimately be turned aside in the U.S. Supreme Court’s decision
in American Trucking (2001).10 Still, at the end of the century cost-benefit
analysis remained a powerful means by which business groups could make
their interests felt within the public interest state.
Economists also suggested that some command-and-control regulation
be replaced with programs that provided “market incentives” to comply
with environmental or safety standards. “Pay-as-you-throw” systems, in
which municipalities billed homeowners in keeping with the amount of
solid waste they discarded, provide a simple example; “tradable permit
systems” a more complex one. Public officials set a target for the total
amount of emissions of some noxious substance and then licensed individual
polluters to produce a part of the whole. A company emitting less than its
share could sell its unused rights to a “dirtier” business. The proceeds of the
sale were a powerful incentive to create more efficient techniques of pollution
control. The tradable permit idea was incorporated into a plan to reduce
emissions of sulfur dioxide, a principal precursor of acid rain. Other
applications in the 1990s included the reduction of leaded gasoline, the
phasing out of ozone-depleting chlorofluorocarbons, and the preservation
of wetlands and historic structures.
10 Whitman v. American Trucking Associations, 531 U.S. 457 (2001).
A final consolidation owed less to economists than to a reassertion of
Tocquevillean tradition by a protest movement originating in the American
West. Many Westerners had long resented the federal government’s control
of the public domain, which accounted for a great deal of the land in their
states. They saw the environmental protection laws of the 1970s as a new
front in Washington’s war on the West. Reports of landowners imprisoned
for filling in “wetlands” that were located far from any body of water but
happened to be the site of vernal pools struck them as the modern equivalent
of the abuses of George III. They responded by invoking the spirit of the
American Revolution and launching a property rights movement. In 1995,
between 600 and 1,500 property rights groups were thought to be in
existence. Some were fronts for business interests, but others were authentic,
grassroots organizations.
Like the public interest lawyers of the 1960s and 1970s, the property
rights advocates turned to the law. Once again, foundations provided seed
money, but this time the funders were conservative stalwarts, such as the
John M. Olin, Sarah Scaife, and Philip McKenna foundations. A major campaign
was launched to challenge federal regulation as a taking of private
property without just compensation. In the 1980s and 1990s the campaign
met with some success when the U.S. Supreme Court overturned regulatory
actions relating to the management of coastlines and flood plains,
but it would stall after the turn of the century when a majority of the
Court turned its back on the earlier rulings.11 In the meantime, the property
rights movement explored legislative avenues. At least thirteen states
adopted “look-before-you-leap” statutes, which required agencies to prepare
“Takings Impact Analyses” before issuing regulations.
By the end of the 1990s, the property rights movement had established
itself as a counterweight to the public interest movement of the 1970s,
but neither its lawsuits nor other attempts to contract the regulatory state
had brought an end to administration in America. To be sure, deregulation
tended to shift the locus of policymaking back to the courts. As regulators
exited, public prosecutors and private individuals sometimes stepped
in with criminal prosecutions and class action suits. More importantly,
Americans relied too heavily on administration in its various guises to ever
accept a wholesale return to the nineteenth-century state of courts and parties.
Thus, even when Congress ended the federal entitlement program of
aid to families with dependent children, it replaced it with block grants
that came with many strings attached and many administrators to pull them.
11 Lucas v. South Carolina Coastal Council, 505 U.S. 1003 (1992); Dolan v. City of Tigard, 512 U.S. 374 (1994); Tahoe-Sierra Preservation Council, Inc. v. Tahoe Regional Planning Agency, 535 U.S. 302 (2002).
At the end of the twentieth century, Americans continued to make
policy through tax laws and other tools of the fiscal state. The need for new
expertise in the area of public contracts would become painfully obvious
in 2001, when California was forced to make disastrous long-term agreements
with the deregulated suppliers of electric power. The social policing
of immigrants and other aliens remained largely beyond the reach of the
courts, and a dramatic expansion of the national security state was as close
as the attacks of September 11, 2001.
CONCLUSION
Tocqueville was wrong. Americans could have centralized administration
and still be free. If, as President Ronald Reagan claimed, America was the
exemplary city on a hill, pointing the way to freedom for the rest of the
world, it was no less exemplary in its reliance on bureaucracy to promote
the welfare of its citizens.
As the United States proceeded through the Progressive, New Deal,
and public interest cycles of state-building and consolidation, centralized
administration became inextricably bound up in American political life.
As disparate social groups jostled for power within a new political regime,
administrative bodies grew in importance. Bureaucracy held some groups
together in durable, power-wielding coalitions and relegated others to the
margins of public debate and policymaking. No amount of impassioned
oratory could transport the United States to the stateless Eden of its mythic
past. At the end of the twentieth century, abolishing administration in all
of its guises would have meant the abolition of politics itself.
Lawyers were vital both to the emergence of new administrative structures
and to their consolidation into stable political regimes. In time they
overcame the limitations of their traditional orientation toward the courts
and turned their energies to building strong and autonomous bureaucracies;
they were, for example, the principal inventors of the informal means agencies
used to win compliance with their wishes. Yet, lawyers were also driven,
out of professional interest and their own acceptance of the “supremacy of
law” ideal, to build internal checks on administrative discretion. Lawyers
judicialized the administrative process during the consolidation of the New
Deal regime; they ossified rulemaking during the regime that followed.
The lawyers’ dominance of the state suffered its severest challenge in the
last of the twentieth century’s state-building cycles. Economists were the
vanguard of the deregulation movement, champions of cost-benefit analysis,
and inventors of market-based alternatives to command-and-control
regulation. None of these initiatives succeeded in banishing bureaucracy
from America, however, and as long as it remains, law and lawyers will not
be obsolete. To the contrary: judging from the first years of a new political
regime emerging out of the War on Terror, the need for a profession committed
to the supremacy of law will be as great in the new century as at any
moment in American legal history.
2
legal theory and legal education,
1920–2000
william w. fisher iii
The overall trajectory of American legal theory during the twentieth century
was as follows. At the outset, a formalist faith gripped the judiciary
and the law schools. Resistance to that vision among judges, lawyers, and
law teachers gradually increased, ultimately finding full expression in the
legal realist movement of the 1920s and 1930s. The realist wave ebbed in
the 1940s, but left behind a host of new questions concerning the nature
and scope of judicial discretion, the role of “policy” in lawmaking and
legal interpretation, the appropriate relationship between public and private
power, which branches of government should be entrusted with which
legal issues, and, most broadly, the meaning and feasibility of “the rule of
law.” After World War II, a new orthodoxy emerged, offering answers to
those questions that seemed convincing to most legal scholars and lawmakers.
Beginning in the 1960s, that new faith – dubbed by its successors,
“process theory” – in turn came under attack, not from a single direction
but from many angles simultaneously. The attackers, marching under
the banners of “law and economics,” “law and society,” “Kantian liberalism,”
“republicanism,” “critical legal studies,” and “feminist legal theory,”
offered radically different visions of the nature and purposes of law. Each
group attracted many adherents, but none swept the field. The net result is
that, in the early twenty-first century, legal discourse in the United States
consists of a cacophonous combination of issues and arguments originally
developed by rival movements, some now defunct and others still with us.
Many aspects of the history of legal education during the twentieth
century – for example, the periodic efforts to reshape law school curriculum
and pedagogy and the steady increase in the importance of interdisciplinary
teaching and scholarship – are best understood as outgrowths or expressions
of the struggles among the competing groups of theorists. Other aspects
of legal education – most importantly, the changing size and shape of the
bottleneck through which students must pass to gain entry to the bar – were
shaped instead by the complex relationship in American culture between
exclusionary impulses (xenophobia, racism, anti-Semitism, and sexism) and
inclusionary, egalitarian impulses. The net result is that the bench, bar,
student bodies, and law faculties of today are by no means demographic
“mirrors of America,” but they are substantially more diverse than their
counterparts a century ago.
In this chapter, I trace the development of these two aspects of twentieth-century
American law – legal theory and legal education – identifying, when
appropriate, connections between them.
I. THEORY
The Rise of Realism
“Formalism,” “mechanical jurisprudence,” “classical legal thought” – these
are among the labels that were attached, after the fact, to the collection of
attitudes and methods that dominated American legal thought and practice
between roughly the 1870s and the 1930s. In the view of its critics (our
primary concern here), this outlook had two related dimensions. First, it
was a distinctive style of judicial reasoning. When confronted with difficult
cases, judges during this period were much less likely than their predecessors
during the antebellum period to seek outcomes that would advance
public policy (for example, by creating incentives for economic development)
or foster equity (for example, by obliging parties to abide only by
commitments they had voluntarily made) and much more likely to look
for guidance to precedent (decisions rendered previously by other courts
in analogous cases). When directly relevant precedents were unavailable,
judges commonly would seek to extract from loosely related prior decisions
general principles (the more general the better) from which answers
to the problems before them might be deduced. Policy considerations, if
addressed at all, would be invoked only at the highest level of abstraction –
when selecting the “first principles” that formed the top of a chain of deductive
reasoning.
Some historians have contended that this dimension of the classical outlook
was causally connected to the second: a tendency to resolve cases in
socially or politically conservative ways. Between the Civil War and World
War I, state and federal courts invented several new legal remedies (such as
the labor injunction) and new common law rules (such as the doctrine of tortious
interference with contractual relations) that strengthened the hands
of employers in struggles with their employees, narrowly construed legislative
efforts (such as the Sherman Act) to limit concentrations of economic
power, and interpreted the Due Process Clause of the Federal Constitution
in ways that shielded corporate property rights and employers’ “freedom of
contract” against legislative encroachment.
To be sure, even during the heyday of classicism, there were countercurrents.
Some lawyers and judges persisted in openly seeking to resolve hard
cases in ways that advanced and reconciled considerations of policy and
justice. Businesses did not always prevail in legal contests against workers
or consumers. And a small group of legal scholars – some proclaiming
adherence to what they called “sociological jurisprudence” – denounced the
classical reasoning style on both philosophic and political grounds.
Three of these early critics were to prove especially influential. In his
judicial opinions, books, and articles, Justice Oliver Wendell Holmes, Jr.
attacked his contemporaries for failing to recognize that “[t]he life of the
law has not been logic; it has been experience,” for purporting to derive
the answers to “concrete cases” from a few “general propositions,” and for
reading “Mr. Herbert Spencer’s Social Statics” into the Fourteenth Amendment.
He urged them instead to accept “the right of the majority to embody
their opinions into law” and to replace muddled natural law theories with a
harshly positivist perspective: “The prophecies of what the courts will do in
fact, and nothing more pretentious, are what I mean by the law.” In his early
writings, Roscoe Pound similarly denounced the “mechanical” mode of reasoning
on which the Supreme Court had come to depend and contemporary
jurisprudence’s infatuation with outmoded images of the “self-reliant man.”
Law, he insisted, must be brought into alignment with modern “social, economic
and philosophical thinking” – and, specifically, must acknowledge
that justice entails not merely “fair play between individuals,” but “fair
play between social classes.” Finally, Yale Law School’s Wesley Hohfeld, in
a dense but brilliant pair of articles, fought the aggregative tendencies of
classicism, arguing that any legal doctrine can and should be broken down
into logically independent combinations of elemental entitlements, each of
which could only be justified through an examination of its “purpose” and
its “effect.”
In the 1920s and early 1930s, a group of young scholars, most of them
affiliated with Yale, Columbia, or Johns Hopkins Universities, drew on
Holmes’s, Pound’s, and Hohfeld’s arguments to create the methodological
movement that came to be known as legal realism. Two impulses, in
addition to the usual desire of each generation to explode the conventions
of the preceding one, help explain the force and shape of realism. First,
powerful national political movements – initially Progressivism, later the
New Deal – stimulated and guided the younger scholars in crafting alternatives
to the conservatism of classicism. Second, recent innovations in several
other academic fields helped discredit the classical mode of reasoning.
Pragmatism in philosophy, non-Euclidean geometry, theories of relativity in
physics, and the rising disciplines of anthropology and psychology all called
into question the value of axioms and theorems, induction and deduction,
and formal rules as ways of resolving controversies and organizing social
life.
From these materials, the realists fashioned two clusters of arguments –
the first descriptive, the second normative. The foundation of the former was
Holmes’s insistence that the objective of legal analysis was to predict “what
the courts will do in fact.” If that is the end, the realists argued, then the
“traditional legal rules and concepts” that figured so prominently in classical
opinions and scholarship were largely useless. In part, their irrelevance
was a function of their internal inconsistency. For almost every common
law precedent, canon of statutory interpretation, and legal principle, there
existed an equal and opposite precedent, canon, or principle. Even an adept
logician could not derive from such contradictory propositions determinate
answers to concrete questions. John Dewey and a few other realists argued
that the problem ran deeper still: the analytical tools that classical writers
purported to employ to reason deductively from premises to outcomes or
analogically from one case or issue to another were far shakier than they
realized.
In short, doctrine and logic play much smaller roles in determining how
courts decide cases than is usually supposed. To the question of what then
does explain judicial decisions, the realists offered various answers. Some
pointed to judges’ “hunches.” In Joseph Hutcheson’s words, “[t]he vital
motivating impulse for decision is an intuitive sense of what is right or
wrong in a particular case.” Others, like Jerome Frank, emphasized judges’
idiosyncratic personalities. Still others, like Felix Cohen, while agreeing
that judges’ “prejudices” were crucial, saw them as more systematic, more
likely to be shaped by the worldview of the social class from which most
judges were drawn, and thus more predictable.
These views, in turn, prompted the realists to regard judicial opinions
with skepticism, even condescension. The ostensible function of opinions
was of course to explain how courts reached their determinations and thus,
among other things, to provide guidance to judges and litigants confronting
similar controversies in the future. However, their real function, the realists
claimed, was to “rationalize” and “legitimate” the courts’ rulings, concealing
from the public at large and indeed from the judges themselves the
considerations, often unsavory, that truly underlay them.
Unfortunately, the realists’ normative arguments – their reflections on
what Karl Llewellyn referred to as “ought-questions” – were less coherent
and trenchant. They did, however, develop a few major themes that, reconfigured,
were to play substantial roles in subsequent schools of American
legal thought. The first may be described as “particularism.” In various
contexts, realists argued, general categories should be broken down into
smaller units. For example, following Pound, they argued that scholars
should be more interested in “real” or “working” rules (descriptions of
how courts were actually resolving disputes) than in “paper” or “black
letter” rules (the norms they ostensibly invoked in justifying their decisions).
Adherence to that guideline, the realists contended, would likely
reveal that judges (especially trial judges) were far more sensitive to the
peculiarities of the fact patterns they confronted than is usually supposed.
The net result: an accurate map of the landscape of the law, useful in guiding
clients, would consist of more – and more specific – norms than could
be found in the standard treatises. When crafting new rules, a lawmaker
(whether a judge or a legislator) should likewise avoid the temptation to
engage in excessive generalization. Social and commercial relations vary
radically along several axes. Assuming that it was worthwhile to attempt
to formulate norms that covered more than the facts of the case at hand (a
matter on which the realists disagreed), such norms should reach no further
than the set of similar controversies. So, for example, a rule governing the
foreclosure of farm mortgages might make some sense, but probably not a
rule governing foreclosure of all mortgages, and certainly not a rule that
purported to specify remedies for breaches of contracts of all sorts.
The second theme may be described as “purposive adjudication.” Wise
interpretation of a legal rule, they argued, required looking behind the
language of the norm in question to the social policy that it was designed
to advance. That conviction prompted them, when promulgating legal rules
(such as the Uniform Commercial Code) to make their purposes explicit.
In Llewellyn’s words, “the rightest and most beautiful type of legal rule, is
the singing rule with purpose and with reason clear.”
The realists’ commitment to purposive adjudication raised a further,
more difficult question: how does a lawmaker (legislator or judge) go about
selecting the policies that should be advanced in a particular context? Their
responses were disappointing. One, Felix Cohen, made a valiant effort to
construct and defend a comprehensive utilitarian theory as a beacon for lawmakers.
Most of Cohen’s comrades were less ambitious, contenting themselves
with an insistence on the wide variety of policies – from the creation
of incentives for productive activity, to fostering social cooperation and
“team play,” to increasing the efficiency of the “legal machinery,” to equalization
of “men’s . . . access to desired things,” to providing “a right portion
of favor, of unearned aid or indulgence to those who need it” – that ought
to be considered by lawmakers. But when such goals conflict, how is one
to choose among them? By looking to custom, some realists suggested.
Immanent in extant social practices (such as the conduct of the better sort
of merchant) were standards that could and should be employed by lawmakers
when selecting and enforcing norms binding on everyone. Not much of
an answer.
The Legacy of Realism
By the end of the 1930s, legal realism as a coherent movement had died.
In part, its demise can be attributed to increasing hostility, both from
other legal scholars and from the public at large, to the views expressed by
its adherents. Opponents of the New Deal resented the realists’ vigorous
sponsorship or defense of Roosevelt’s policies. And a growing group of critics
argued that the realists’ positivism and tendencies toward ethical relativism
had helped weaken the nation’s intellectual defenses against the rising tide
of Fascism in Europe. In the face of these criticisms, some realists publicly
disavowed positions they had taken during the 1920s. The diminution of
the scholarly output of others was probably caused as much by the lack of
fresh ideas as it was by self-doubt or regret.
But the legacy of realism was powerful and durable. The Humpty-
Dumpty of classicism had been irremediably broken. New conceptions of
the nature and function of law and the proper responsibilities of the various
participants in the legal system had to be devised.
Three implications of the realists’ arguments made the task especially difficult
and urgent. First, their insistence on the ubiquity of judicial lawmaking,
the large zone of discretion that courts inevitably have when resolving
cases, called into question the central principle of democratic theory: the
proposition that the people themselves choose (either directly or through
elected representatives) the laws by which they are governed. Second, the
same theme, combined with the realists’ emphasis on the roles played by
“hunches” and “prejudices” in judges’ deliberations, intensified many Americans’
long-standing doubts concerning the legitimacy of judicial review –
the courts’ practice (nowhere authorized by the federal or state constitutions)
of striking down legislation they deem inconsistent with constitutional
provisions. Third, several aspects of the realists’ vision of the way the legal
system did and should operate were difficult to reconcile with the central
Anglo-American ideal of the rule of law – in brief, the conviction that the
state may legitimately impose its will on persons only through the promulgation
(by lawmakers who do not know the identities of those affected) and
enforcement (by judges who are free from bias and immune to pressure) of
general, clear, well-publicized rules that are capable of being obeyed.
In short, the realists left their successors a formidable challenge: how to
reshape or recharacterize the legal system in a way that, without relying on
the discredited bromides of classicism, offered Americans reassurance that
they lived in a democracy, that the exercise of judicial review was legitimate,
and that the rule of law was attainable.
Legal Process
The first group to take up the task eventually came to be known as the
“legal process” school. Its leading figures were Lon Fuller, Henry Hart,
Albert Sacks, Erwin Griswold, Paul Freund, and Louis Jaffe at Harvard;
Alexander Bickel and Harry Wellington at Yale; and Herbert Wechsler at
Columbia. They surely did not agree on all things, but they shared many
convictions and, more important, a sensibility – centered on the values of
moderation, craft, and “sound judgment” – that would set the dominant
tone of American legal theory until the middle of the 1960s.
In some respects, the legal process theorists merely reasserted (in more
measured form) ideas first developed by the realists. For example, they
were quick to acknowledge that there were multiple “right answers” to
many of the controversies that were presented to modern courts – that the
law, in short, was not determinate. The process theorists also agreed with
the realists about both the importance of purposive adjudication and the
multiplicity of values advanced by the typical legal norm. So, for example,
Lon Fuller, in perhaps his most famous article, contended that underlying
the requirement that, to be enforceable, a contract must rest on “bargained-for
consideration” were several distinct social values: the need to “caution”
private parties when they are about to make legally binding promises,
providing judges subsequently obliged to interpret those promises with
good evidence of what had been intended, and “channeling” the parties
into choosing efficient and informative forms. Underlying the system of
contract law as a whole were still other, more general values: respecting
“private autonomy,” protecting persons’ reasonable reliance on promises
made by others, and preventing unjust enrichment. In all cases involving the
consideration doctrine, Fuller argued, judges must attend to these various
purposes. In easy cases, they would all point in the same direction, and
the judges would likely not even be aware of their salience; in hard cases,
the purposes would conflict, and the judges would be obliged consciously
to weigh and balance them. But to every case they were germane. Only
one aspect of Fuller’s analysis departed from the methodology developed
by Llewellyn and Cohen: his insistence (of which he made much during his
subsequent career) that the policies underlying the rules must be considered
part of the law, not as external considerations that judges invoked only when
the law “gave out.”
In other respects, however, process theory deviated sharply from realism.
Most importantly, while the realists’ emphasis on the role of discretion and
policymaking in adjudication tended to blur distinctions among the kinds
of reasoning employed by the three branches of government, the process
theorists were adamant that the separate branches had very different jobs
and should do them in very different ways. Specifically, decisions whose
resolution depended either on the expression of “preferences” or on political
compromises could and should be addressed either by a legislature or by the
public at large through “a count of noses at the ballot box.” Decisions (such
as the appointment of judges, the setting of tariff policy, or the detailed
regulation of industries) with respect to which context-specific exercises of
“expertise” were more important than consistency or predictability were
best handled by the executive branch or by administrative agencies. Last
but not least, problems “which are soluble by methods of reason” were
properly allocated to the judiciary. So long as the branch to which an issue
had been correctly assigned had resolved it in a procedurally proper manner,
the process theorists argued, the other branches should ordinarily defer to
its judgment.
The notion that the special responsibility of judges was to resolve disputes
through “reason” – or “reasoned elaboration” – was the centerpiece
of process theory. It encompassed at least three, related guidelines. First,
“reasoned” deliberation was “dispassionate.” Process theorists agreed with
Felix Frankfurter that a judge must assume a posture of “intellectual disinterestedness
in the analysis of the factors involved in the issues that call
for decision. This in turn requires rigorous self-scrutiny to discover, with
a view to curbing, every influence that may deflect from such disinterestedness.”
Second, when possible (typically at the appellate level), judges
should consult with their colleagues before coming to conclusions. Such
collegial consultation would reveal which of each judge’s inclinations were
idiosyncratic (and thus should be rejected) and generally would facilitate
“the maturing of collective thought.” Finally, judges must in their opinions
explain their reasoning thoroughly, both to provide effective guidance to
future litigants and to enable constructive criticism of their decisions.
The last and most controversial of the propositions associated with process
theory was first developed by Herbert Wechsler – although it was subsequently
adopted and applied by Archibald Cox and others. It came into play
only in the special context of judicial review. When a judge was called on to
determine whether a statute was consistent with a constitution, Wechsler
argued, the set of considerations he or she might legitimately consider was
narrower than the set appropriate in other sorts of controversies. Specifically,
the judge could only rely on “reasons . . . that in their generality and their
neutrality transcend any immediate result that is involved.” The concept of
“neutrality” was crucial but slippery. To Wechsler, it did not mean that the
“value” in question must not affect different groups differently. It meant,
rather, that the “value and its measure must be determined by a general
analysis that gives no weight to accidents of application, finding a scope
that is acceptable whatever interest, group, or person may assert the claim.”
What made this seemingly innocuous norm so notorious is that, in the
1959 article in which he first developed it, Wechsler argued that it could
not be reconciled with the Supreme Court’s decision in Brown v. Board of
Education, which had held that the maintenance of racially segregated public
schools violated the Equal Protection Clause of the Federal Constitution.
Not all process theorists followed Wechsler on this issue, but some did.
And this particular implication of their arguments did not bode well for
the hegemony of process theory when, in the 1960s, controversies over race,
voting, and sexuality increasingly assumed center stage in American politics
and law.
Law and Economics
During the 1940s and 1950s, economists began with some frequency to
address issues close to the hearts of legal scholars. In perhaps the most
influential of those forays, Arthur Pigou argued that situations of the sort
that dominate the law of torts – that is, when one party behaves in a fashion
that causes an injury to another party – could and should be managed
by selecting rules that forced the actors to “internalize” all of the costs of
their behavior, including the losses sustained by the victims. How? Various
devices might be employed, but the most straightforward would be to make
the actors liable for all of the victims’ injuries.
In 1960, the economist Ronald Coase published an article offering an
alternative way of analyzing the same class of controversies. In “The Problem
of Social Cost,” Coase developed four related arguments. First, the aspiration
of the legal system in cases of the sort considered by Pigou should not be
merely to force actors to internalize the “social costs” associated with their
activities but, more broadly, “to maximize the value of production” – taking
into account the welfare and conduct of all affected parties. So, for example,
a rule making each actor liable for the injuries associated with his conduct
might not be socially optimal if the victims could more cheaply alter their
own behavior in ways that would avoid the harms. Second, in considering
possible solutions to such problems, it was important not to treat the active
party as the sole “cause” of the resultant injuries – and thus presumptively
the proper bearer of financial responsibility. Typically, both parties “are
responsible and both should be forced to include the loss . . . as a cost in
deciding whether to continue the activity which gives rise to” the injury.
Third, in all such cases, if “there were no costs involved in carrying out
market transactions,” “the decision of the courts concerning liability for
damage would be without effect on the allocation of resources,” because the
parties themselves would enter into agreements that would compel the party
who could avoid the damage most cheaply to do so. (This third argument is
what George Stigler subsequently dubbed the “Coase theorem.”) Fourth and
finally, in the overwhelming majority of cases in which transaction costs did
prevent such efficiency-enhancing private arrangements, the choice of legal
rule would affect the allocation of resources. In such cases, wise lawmakers
should consider the relative costs of a wide variety of rules and dispute-resolution
mechanisms, selecting the combination with the lowest total
costs.
This cluster of arguments proved inspirational, launching a thousand
scholarly ships. The largest group pursued the fourth of Coase’s lines. What
set of legal rules, they asked, would foster the most efficient allocation
of resources in particular contexts, assuming that transaction costs would
prevent the achievement of optimal solutions in such settings through
free bargaining? To some doctrinal fields – contracts, torts, property, and
antitrust, for example – such an inquiry seemed obviously pertinent. But
the same methodology was soon applied to many fields with respect to
which cost minimization might have seemed less germane – criminal law,
family law, civil procedure, and constitutional law, among others. Legions
of lawyer-economists set off on quests of this sort, but one, Richard Posner,
towered above the others. In tens of books and hundreds of articles, he
brought his particular version of the wealth-maximization criterion to bear
on virtually every field of both public and private law.
Another group of scholars focused on Coase’s observation that even when
the absence of transaction costs made the choice of legal rule irrelevant
from the standpoint of economic efficiency, that choice would affect the
relative wealth of the affected parties. In the second-most influential article
within the law-and-economics genre, Guido Calabresi and Douglas
Melamed treated such “distributional considerations” as equal in importance
to efficiency considerations when deciding not just which party to
a given transaction or controversy should be given the legal entitlement
but also whether a “property rule,” “liability rule,” or “inalienability rule”
should be selected as the right mechanism for protecting that entitlement.
Taking this recommendation to heart, several economists and legal scholars
argued for years whether the non-waivable implied warranty of habitability
that now governs residential leaseholds in most American jurisdictions did
or did not improve the lot of the poor tenants it was ostensibly designed
to serve. Other fields to which this approach has been extensively applied
include tax and employment law.
A third group of scholars set out to refine the simplistic conception
of people as rational utility-maximizers on which Coase’s original
arguments – and, in particular, his famous third claim – appeared to rest.
Once one introduces more realistic assumptions concerning people’s abilities
first to discern their own desires and interests and then to determine
how best to achieve them, these scholars asked, How is the selection of
either efficient or distributionally fair rules affected? Their answers varied
widely.
One of the factors that contributed to the enormous popularity of economic
analyses of these various sorts is that they enabled their practitioners
to avoid the ethical pluralism that had characterized both of the preceding
two major schools of American legal theory. The realists had insisted
and the process theorists had acknowledged that a diverse array of policies
were relevant to every legal rule or issue. As noted above, the process
theorists had argued that a wise, mature judge or other decision maker
could derive from those competing considerations sensible, if not necessarily
determinate answers to particular questions. But, in the 1960s, more
and more participants in legal culture came to doubt that the “balancing”
method commended by the process theorists had any bite at all. To some
of those skeptics, economic analysis offered clarity and rigor. For Posner
and his followers, the ideal of allocative efficiency offered a single beacon,
the conscientious pursuit of which would make possible the socially beneficial
reorganization of the entire legal system. For other economists, like
Calabresi and Melamed, who were equally concerned with distributional
considerations, the normative field was more complex, but nowhere near as
chaotic as the sets of values associated with realism or process theory.
At the outset of the law-and-economics movement, its political valence
was unclear. Although some aspects of “The Problem of Social Cost” were
distinctly conservative in tone – for example, Coase’s sweeping declaration
that “economists, and policymakers generally, have tended to over-estimate
the advantages which come from governmental regulation” – other passages
expressed skepticism that unregulated private markets would foster
economic efficiency. And whether exploration of the distributional consequences
of legal rules will lead to liberal or conservative recommendations
depends, of course, on the distributional criterion one is seeking to advance.
Nevertheless, over time, economic analysis within legal scholarship came
increasingly to be associated with the political Right. In part, this association
was due to the notoriety and influence of a cluster of scholars centered
at the University of Chicago who did indeed think that governmental intervention
in private markets almost always wrought more harm than good.
In part, it also resulted from most economists’ insistence on the superiority
of their perspective and their skepticism about the insights that could
be derived from any other methodology. Whatever the cause, by the late
1970s, economists dominated the conservative end of the political spectrum
at most American law schools, and their increasingly confident assaults on
scholars to their left contributed heavily to bitter battles over curricula and
faculty appointments.
Law and Society
The economists were not the only group of legal scholars disappointed
by process theory who sought inspiration and guidance from some other
academic discipline. Some turned to sociology, others to philosophy, still
others to history.
The path to sociology was already reasonably well marked. Around the
turn of the century, Max Weber had written provocatively about connections
between law and social activity. Holmes, in “The Path of the Law,” had
famously predicted that “the man of the future” would be “the man of
statistics and the master of economics.” And some of the legal realists had undertaken extensive (albeit
not always fruitful) empirical studies of “the law in action.” In the early
1960s, a rapidly growing group of scholars, many of them professors at the
University of Wisconsin Law School, built on these foundations a full-blown
movement they dubbed “law and society.”
Among the pioneers was Stewart Macaulay. In his most famous article,
“Non-Contractual Relations in Business: A Preliminary Study,” Macaulay
broke sharply with the kind of legal scholarship in general and contracts
scholarship in particular exemplified by Fuller’s article on “Consideration
and Form.” A contract, he argued, is best understood as a social institution,
not a legal form: “a contract, as I use the term here, involves two distinct
elements: (a) rational planning of the transaction with careful provision for
as many future contingencies as can be foreseen, and (b) the existence or use
of actual or potential legal sanctions to induce performance of the exchange
or to compensate for non-performance.” Drawing on extensive empirical
work concerning business practices in Wisconsin, Macaulay contended that
business enterprises employ contracts, so defined, under circumstances and
for reasons quite different from those presumed by traditional legal scholars.
For example, often a firm enters into a contract more to clarify its own
internal structure – say, to improve communication between production
and marketing divisions – than to organize its relationship with the other
party. The breach of a contract typically leads to renegotiation of the parties’
relationship. Lawsuits to enforce bargains are rare and are typically motivated
more by the thirst for revenge than by the hope of recovering damages
or securing specific performance. In general, Macaulay found, contracts are
less important than ongoing relationships among enterprises in organizing
business and distributing their fruits.
To many scholars, the methodology exemplified by Macaulay’s article
seemed compelling. Detailed, empirically grounded, “bottom-up” studies
of how people and enterprises actually use the law offered more insight,
they believed, than the "top-down" approaches of all other schools of legal
theory. Many set about documenting in various contexts the gap between
the “law on the books” and “the law in action.” Others explored the ways in
which legal norms affect the contents of bargains made in their “shadow.”
Still others studied the relative costs in practice of various forms of dispute
resolution (often concluding that mediation and arbitration systems were
superior to litigation). Finally, many explored the extent to which the regulatory
and social welfare initiatives of the Great Society did (or, more often,
did not) achieve their professed ends.
Like economic analysis, sociological analysis of law had no necessary political
tilt. However, the large majority of empirical studies of the types just
summarized terminated in criticisms of the existing legal order – specifically,
in contentions that the law was biased in favor of the rich on one or
more of four levels. First, the substantive rules are commonly designed to
enhance or protect the interests “of those in positions of wealth and authority.”
Second, even when unbiased, the rules are commonly interpreted in
ways that favor the powerful. Third, the legal profession is organized in ways
that favor the “haves” in their struggles with the “have-nots.” For example,
as Marc Galanter pointed out in a seminal article, the canons of ethics permit
the lawyers for “repeat players” (typically businesses) to use the litigation
game strategically – settling or abandoning unpromising cases while vigorously
pursuing cases with attractive facts in hopes of securing favorable
precedents – but forbid the lawyers for “one-shotters” (typically individuals
pressing claims against the repeat players) to do the same. Fourth, the
legal system as a whole is organized in a fashion that enables “the haves” to
invoke it more shrewdly and effectively than the have-nots. For example, as
Galanter pointed out, the complexity and ambiguity of many of its norms
and the many opportunities for appeal favor parties with access to sophisticated
(expensive) counsel and the financial ability to tolerate long delays
in the issuance of judgments.
The result was that while the adherents of the fading legal process school
occupied the political center of most law school faculties, and the majority
of the law-and-economics scholars stationed themselves on the Right, those
associated with the law and society movement usually found themselves on
the Left.
Law and Philosophy
In the 1970s and 1980s, significant numbers of legal scholars began to draw
on moral and political philosophy to propose modifications of American
legal doctrine. They fell into two reasonably distinct subgroups, each looking
to a different body of argument then popular in philosophy departments.
The first subgroup was influenced most heavily by the resurgence of interest
among English and American philosophers in the work of Immanuel
Kant. H. L. A. Hart, writing in 1977, summarized as follows this reorientation
of perspective:
We are currently witnessing, I think, the progress of a transition from a once widely
accepted old faith that some form of utilitarianism, if only we could discover the
right form, must capture the essence of political morality. The new faith is that
the truth must lie not with a doctrine that takes the maximisation of aggregate
or average general welfare for its goal, but with a doctrine of basic human rights,
protecting specific basic liberties and interests of individuals.
Among the philosophers taking this neo-Kantian tack, the most prominent
was John Rawls. Of Rawls’ many arguments, the most important was his
theory of distributive justice. In brief, Rawls argued that inequality in
the distribution of “primary goods” is legitimate only if, by increasing
incentives for productivity, it leaves the members of the lowest group in
the society better off than they would have been under conditions of
perfect equality.
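Stated schematically – a minimal formalization offered here for clarity, not Rawls' own notation – the difference principle ranks feasible distributions by the position of the worst-off representative person:

$$d^{*} \;=\; \arg\max_{d \in D}\; \min_{i}\, g_{i}(d),$$

where $D$ is the set of feasible distributions and $g_{i}(d)$ is the index of primary goods that person $i$ receives under $d$. An unequal $d^{*}$ is legitimate, on this rendering, only if $\min_{i} g_{i}(d^{*})$ exceeds the corresponding minimum under strict equality.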
Among the legal scholars who looked for guidance to Kant and (to a
lesser extent) Rawls were Bruce Ackerman, Ronald Dworkin, Charles Fried,
David Richards, and, in some of his work, Frank Michelman. They shared a
methodology, encapsulated in the slogan: “The right is prior to the good.”
Less cryptically, they argued that every government has a responsibility to
establish and enforce a system of basic rights and liberties, but lacks legitimate
authority to encourage or compel adherence to particular ways of
living. In a polity organized on those principles, people would be accorded
the respect they are due as autonomous moral agents, permitted and empowered
to select and pursue their own goals so long as they did not interfere
with the comparable liberties of others.
From this common methodological platform, however, the members of
this group derived radically different recommendations for legal reform.
Michelman, for example, relied heavily on Rawls to urge the Supreme
Court to increase the stringency of its review of statutes adversely affecting
the poor – for example, by striking down legislation that made access to
public office dependent on “economic vicissitude” or failed to abide by the
principle that “each child must be guaranteed the means of developing
his competence, self-knowledge, and tastes for living.” Fried, by contrast,
argued on Kantian premises that contract law should be refashioned so as
to limit liability to situations in which a person has broken a freely made
promise – that is, has violated a commitment he has imposed on him- or
herself – and denounced the steadily growing roles played in American
law by the idea that contractual duties should be created or construed so
as to advance “the community’s” goals and standards. Dworkin, in one of
his many articles and books on public and private law, argued that, in
determining the latitude that the state enjoys to regulate pornography,
we should be sure to respect persons’ “right to moral independence” –
their “right not to suffer disadvantage in the distribution of social goods
and opportunities . . . [solely because] their officials or fellow-citizens think
that their opinions about the right way to lead their own lives are ignoble
and wrong.” Fidelity to this principle, he concluded, requires striking down
anti-pornography legislation to the extent it is motivated either by the belief
that the attitudes about sexuality contained in pornographic materials are
“demeaning or bestial” or by the desire to relieve people of their disgust at
the knowledge that their neighbors are looking at “dirty pictures” – but
does not require invalidation of legislation driven by people’s desire “not to
encounter genital displays on the way to the grocer” or by a demonstrated
link between pornography and crime. Using standard labels, Michelman’s
argument might be described as progressive, Fried’s as conservative, and
Dworkin’s as liberal. Divergence of this sort made the political cast of
Kantian legal theory intriguingly ambiguous.
The members of the second of the two subgroups derived inspiration
from Hegel and Aristotle, rather than Kant. They rejected their colleagues’
insistence on the priority of the right over the good, arguing instead that, in
Michael Sandel’s words, “we cannot justify political arrangements without
reference to common purposes and ends, and . . . we cannot conceive our
personhood without reference to our role as citizens, and as participants in a
common life.” Thus freed from the Kantian ban on governmental promotion
of substantive visions of the good life, they set about elaborating the social
and legal arrangements that would most facilitate human flourishing.
Some examples: Margaret Jane Radin of Stanford and Jeremy Waldron
of Berkeley argued in separate essays that the best justification for and
guide to the reform of the institution of private property are that it enables
people more fully to realize their selves – for example, by forming identity-stabilizing
attachments to physical objects, by cultivating the virtues of
prudence and responsibility, by affording them zones of privacy, or by providing
them the means of self-fulfilling acts of generosity. Dan Kahan, who
would later join the Yale faculty, argued that group-libel laws (statutes that
proscribe speech or expressive action designed to foster hatred of particular
racial, ethnic, or religious groups) should be deemed compatible with
the First Amendment because they protect the “constitutive communities”
central to many people’s ability to form, modify, and implement rich conceptions
of personhood. Finally, Kenneth Karst of UCLA argued that “intimate
associations,” including marriages and non-marital partnerships, were
crucial in cultivating attributes central to self-realization – “caring, commitment,
intimacy, self-identification” – and thus that the courts should
allow legislatures to interfere with such associations only if they have strong,
non-pretextual reasons for doing so.
One variant of this general approach proved by far the most popular.
The substantive vision on which it was based was the cluster of ideals now
known as classical republicanism: the notions, in brief, that a good life is a
virtuous life, that one component of virtue is a willingness to subordinate
one’s private interests to the welfare of the community as a whole, and that
only through active participation in the deliberative politics of a republic
is true self-realization possible. In the late 1960s and 1970s, an important
group of historians had excavated this belief system, identified its roots in the
writings of Aristotle and Machiavelli, and shown the important roles it had
played in eighteenth-century British politics, in helping fuel the American
Revolution, in shaping the Federal Constitution, and in inspiring various
nineteenth-century reform movements. In the 1980s, legal scholars began to
take note. Partly because many of the Founders seemed to have been steeped
in republicanism, and partly because (at least if purged of its patriarchal,
xenophobic, and militaristic dimensions) it offered an alternative to the
time-worn ideology of liberalism, it seemed to provide a promising criterion
with which to reevaluate a wide variety of doctrines in both public and
private law.
In the pioneering essay of this ilk, Cass Sunstein argued that several extant
doctrines – including the “rationality requirement” that the Supreme Court
had derived from the Due Process Clause of the Fourteenth Amendment,
the “public use” requirement in the Eminent Domain Clause of the Fifth
Amendment, and the “hard-look” doctrine in administrative law – were
designed at least in part to compel or encourage legislators to engage
in republican-style deliberation “instead of responding mechanically to
interest-group pressures.” In Sunstein’s view, the courts should go further in
this general direction, invalidating or impeding legislation whose content or
genesis conflicted with the republican ideal. In several subsequent articles,
Frank Michelman invoked republicanism in more complex and tentative
ways. Less confident of the substantive merits of the ideology, Michelman
nevertheless emphasized its heuristic value and contended that it alone
provided a plausible way of reconciling two propositions equally central
to our political culture: “first, that the American people are politically free
insomuch as they are governed by themselves collectively, and, second, that
the American people are politically free insomuch as they are governed by
laws and not men.” Convinced, several other scholars began introducing
republican themes into casebooks, law-review articles, and classrooms.
The heyday of this mini-movement came in 1987, when roughly a thousand
law professors attended a session at the annual meeting of the Association
of American Law Schools at which Sunstein and Michelman tried
to address the criticisms of their arguments that had been made both by
historians (who found their efforts to apply ancient ideas to modern issues
troublingly anachronistic) and by legal scholars who found the organicist, communitarian
aspects of republicanism either naïve or repellent. Since then,
this particular star in the firmament of legal theory has faded substantially,
but has not disappeared altogether.
Critical Legal Studies
The first national conference on Critical Legal Studies (CLS) was held in
Madison, Wisconsin, in March 1977. It attracted a wonderfully motley
group of scholars (some of them former Marxists disillusioned by the sectarianism
of the Left in the 1960s; many of them liberals disillusioned by
the apparent failure of the civil rights movement and by the association of
the Democratic Party with the war in Vietnam; and a few of them sociologists
unsatisfied by the fare available at law and society conferences), legal
activists (many working to improve the positions of workers or poor residential
tenants), and law students. During the next few years, the number
of people who attended the annual CLS meetings grew rapidly, and the body
of writing they published mushroomed. After 1980, however, internecine
struggles, denials of tenure to some of the leading members of the movement,
and the increasing disaffection of others eroded its ranks. By the early 1990s,
it was moribund.
Though short-lived, CLS had a profound and lasting impact on American
legal thought. As was true of legal realism, many of its most controversial
claims later became widely accepted. And it helped spawn other clusters
of people and ideas – critical race theory, feminist legal theory, and queer
theory – that would remain vital far beyond its demise.
The central thesis of CLS was that legal discourse is highly patterned –
and, more particularly, that it is organized around a series of oppositions
or contradictions. The most detailed and influential map of those patterns
was contained in Duncan Kennedy’s pioneering 1976 essay, “Form and
Substance in Private Law Adjudication.” Kennedy argued that much legal
argumentation could be reduced to two long-standing debates – the first
over whether legal norms are best cast in the form of “clearly defined,
highly administrable, general rules” or in the form of “equitable standards
producing ad hoc decisions with relatively little precedential value”;
the second over whether the content of legal norms should be guided
by the substantive values associated with “individualism” or the values
associated with "altruism." The latter pair of terms Kennedy defined as
follows:
The essence of individualism is the making of a sharp distinction between one’s
interests and those of others, combined with the belief that a preference in conduct
for one’s own interests is legitimate, but that one should be willing to respect the
rules that make it possible to coexist with others similarly self-interested. The form
of conduct associated with individualism is self-reliance. This means an insistence
on defining and achieving objectives without help from others (i.e., without being
dependent on them or asking sacrifices of them). It means accepting that they will
neither share their gains nor one’s own losses. And it means a firm conviction that
I am entitled to enjoy the benefits of my efforts without an obligation to share or
sacrifice them to the interests of others. . . .
The essence of altruism is the belief that one ought not to indulge a sharp preference
for one’s own interest over those of others. Altruism enjoins us to make sacrifices,
to share, and to be merciful.
The arguments deployed in favor of any one of these positions, Kennedy
argued, were “stereotyped,” predictable, choreographed. For example, rules
are conventionally defended on the grounds that they restrain official arbitrariness
and favoritism, that they promote certainty (thus assisting private
parties in planning their affairs), that they minimize judicial discretion and
thus are more consistent than standards with democratic theory, and so
forth. Standards are conventionally defended on the grounds that they are
capable of advancing social objectives more precisely than inevitably under- or
over-inclusive rules, that they are less likely to exacerbate inequalities of
bargaining power, that they are less “dynamically unstable” because judges
feel less need to carve exceptions out of them to favor sympathetic litigants,
and so forth. Individualism is commonly buttressed with arguments that
self-interestedness “is a moral good in itself,” that the “invisible hand” will
convert myriad uncoordinated selfish actions into collective gains, and that
well-meant state efforts to curb selfish conduct typically do more harm than
good. Altruism is buttressed by predictable criticisms of each of the foregoing
propositions. Kennedy’s most original claim was that the two rhetorical
axes are connected – specifically, that the moral, economic, and political
arguments associated with rules resonate with corresponding arguments
for individualism and that there exists a comparable homology between the
arguments for standards and the arguments for altruism. Now comes the
rub. One can imagine larger argumentative structures – ways of stacking or
arranging the pair of rhetorical axes – that would give lawmakers and law
interpreters guidance concerning which set of claims (rules/individualism
or standards/altruism) should be given precedence in which circumstances.
Indeed, in both of what Kennedy dubbed the “pre-classical” period of
American law (roughly 1800–1870) and the “classical” period (roughly
1850–1940), overarching theories were in place that purported to do just
that. Those theories, however, have since collapsed. The result is that, today,
the two sets of arguments are on the same plane. It is no longer possible
to depict one as constituting the “core” of the legal system and the other
as the “periphery.” Rather, “[e]very occasion for lawmaking will raise the
fundamental conflict of individualism and altruism, on both a substantive
and a formal level.”
Other writers associated with the CLS movement emphasized other tensions
within legal argumentation. Some put more weight on what Mark
Kelman described as “the contradiction between a commitment to the
traditional liberal notion that values or desires are arbitrary, subjective,
individual, and individuating while facts or reason are objective and universal
and a commitment to the ideal that we can ‘know’ social and ethical
truths objectively (through objective knowledge of true human nature) or
to the hope that one can transcend the usual distinction between subjective
and objective in seeking moral truth.” Others focused on (again quoting
Kelman) “the contradiction between a commitment to an intentionalistic
discourse, in which human action is seen as the product of a self-determining
individual will, and a determinist discourse, in which the activity of nominal
subjects merits neither respect nor condemnation because it is simply
deemed the expected outcome of existing structures.” But common to most
CLS writing was a conviction that deep divides of this general sort were
ubiquitous in American law.
This characterization of contemporary legal discourse had several important
implications. The most important, perhaps, is that legal decision making
– at both the legislative and the judicial levels – is highly indeterminate.
Contradictory arguments of equal stature can be brought to bear on
almost every issue. More subtly, many of those arguments, closely examined,
consist of alloys, in which a large dollop of ideas drawn from one
end of a spectrum is tempered by a few ideas drawn from the opposite
end. For example, individualism is not a purely egoistic ideal, insofar as it
acknowledges some duties to consider the welfare of others, just as altruism
is not pure self-abnegation, but rather recognizes the legitimacy in
many contexts of the pursuit of self-interest. Such tensions internal to each
cluster of arguments increase the chances that a shrewd speaker of legal
language could “flip” a conventional defense of any given proposition into
a defense of its opposite. This is not to suggest that CLS scholars thought
that legal decision making was unpredictable. Most freely acknowledged
that, in Joseph Singer’s words, a combination of “shared understandings
of proper institutional roles and the extent to which the status quo should
be maintained or altered, . . . ‘common sense’ understandings of what rules
mean, . . . conventions (the identification of rules and exceptions), and politics
(the differentiation between liberal and conservative judges)” often
made it easy to predict how a court would resolve a given dispute. More
fundamentally, even (or especially) Duncan Kennedy acknowledged that,
for “mysterious” reasons, it is often impossible even for determined and
sophisticated lawyers to construct plausible arguments for certain positions.
But the zone of freedom is substantially wider than is commonly
thought.
The sharply different depictions of the American legal system offered by
all of the then-prominent schools of legal theory struck CLS scholars as ludicrous,
pernicious, or both. They were especially scornful of process theory.
Attempts to differentiate issues appropriate for resolution by the judiciary,
issues best left to the legislature, and issues most sensibly decided through
exercises of executive or administrative discretion, in their judgment, at
best only sorted the choices confronting lawmakers into boxes. None
of the methodologies that process theorists urged on officials of the three
branches – and certainly not the kind of wise “balancing” of multiple competing
policy considerations that they advocated for the judiciary – seemed
to CLS scholars to provide any meaningful guidance.
In the judgment of CLS scholars, the lawyer-economists should be commended
for acknowledging the many choices confronting lawmakers, but
their quest (or, more precisely, the quest of the subset of lawyer-economists
bent on maximizing allocative efficiency) to develop a methodology that
would enable determinate, socially beneficial resolution of those choices had
failed. In part, that failure derived from what CLS scholars referred to as
“the offer-asking problem”: when measuring the “wealth” fostered by a particular
legal rule, should the value of the goods or states of affairs it affected
(such as habitable apartments or protection against sexual assault) be priced
on the basis of the amount of money consumers would be willing and able
to pay to obtain them or the amount of money consumers would demand in
return for surrendering them? The economists themselves were aware that
the answers to these two inquiries would sometimes diverge – for instance,
when the impact of the rule in question was large in relation to the total
wealth of the affected parties – but they argued that circumstances in which
that divergence would render the economic inquiry indeterminate were rare.
Scholars like Mark Kelman, Ed Baker, and Duncan Kennedy, drawing on
recent work by psychologists like Daniel Kahneman, Amos Tversky, and
Richard Thaler, contended that gaps between “offer” and “asking” prices
were both larger and more common than the economists believed and thus
more threatening to the methodology as a whole.
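The bite of the objection is easiest to see in a toy calculation (a hypothetical sketch: the figures and the helper function below are invented for illustration, not drawn from the CLS or economic literature). Whether a rule passes a wealth-maximization test can flip with the choice between the two prices:

```python
# Toy illustration of the "offer-asking problem" (all numbers hypothetical).
# Question posed: does a warranty of habitability "maximize wealth"?

landlord_cost = 600        # landlords' per-unit cost of providing habitability

# The tenants' side of the ledger depends on how their interest is priced:
tenant_offer_price = 400   # most a poor tenant could PAY to obtain the protection
tenant_asking_price = 900  # least the tenant would ACCEPT to surrender it

def rule_maximizes_wealth(tenant_value: float, cost: float) -> bool:
    """The rule passes the test iff the benefit it confers exceeds its cost."""
    return tenant_value > cost

print(rule_maximizes_wealth(tenant_offer_price, landlord_cost))   # False -> repeal
print(rule_maximizes_wealth(tenant_asking_price, landlord_cost))  # True  -> retain
```

Because offer prices are capped by the affected parties' ability to pay while asking prices are not, the gap – and with it the indeterminacy – is widest precisely where a rule's stakes loom large relative to the parties' wealth, which is why Kelman, Baker, and Kennedy regarded the problem as pervasive rather than marginal.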
An even more serious problem was what the CLS scholars called “general
indeterminacy.” Suppose, to illustrate, an economist or judge wishes
to determine which combination of nuisance and premises-liability rules
would most promote economic efficiency. The answer is likely to hinge on
the order in which she considers the two fields. If, say, she starts by determining
the optimal nuisance rule and then, taking as given the entitlements
produced by that analysis and the associated effects on landowners’ wealth,
she determines the optimal rules governing landowners’ liability to injured
trespassers, she is likely to select a combination of rules different from the
combination she would have generated if she proceeded in the opposite
order. The more numerous the issues to be considered, the more likely it is
that the sequence in which they are addressed will affect the outcome. The
lawyer-economists had not identified, and could not identify, any meta-criterion that
would dictate one sequence rather than another.
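A second stylized sketch can make the sequencing point concrete (again, every payoff, rate, and function name below is hypothetical). If winning or losing an entitlement changes a party's wealth, and wealth in turn shapes how much that party can "offer" in the next field's analysis, then a field-by-field search for efficient rules is path dependent:

```python
# Toy illustration of "general indeterminacy" (all numbers hypothetical).
# Two interacting fields: nuisance and premises liability. Valuations are
# wealth-dependent, and each entitlement won or lost shifts wealth, so the
# order of analysis changes which rules look "efficient."

def efficient_rule(field: str, landowner_wealth: float) -> str:
    """Pick the wealth-maximizing rule for one field, given current wealth."""
    landowner_value = 0.10 * landowner_wealth  # landowner's offer price
    rival_value = 50.0 if field == "nuisance" else 80.0
    return "no-liability" if landowner_value > rival_value else "liability"

def analyze(order: list[str], landowner_wealth: float = 600.0) -> dict[str, str]:
    chosen = {}
    for field in order:
        rule = efficient_rule(field, landowner_wealth)
        chosen[field] = rule
        # Winning an entitlement enriches landowners; losing one impoverishes them.
        landowner_wealth += 300.0 if rule == "no-liability" else -300.0
    return chosen

print(analyze(["nuisance", "premises"]))  # landowners win both entitlements
print(analyze(["premises", "nuisance"]))  # landowners lose both entitlements
```

Considered nuisance-first, the toy analysis awards landowners both entitlements; considered premises-first, it awards them neither – and nothing in the efficiency criterion itself privileges one sequence over the other, which is the missing meta-criterion to which the critics pointed.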
Some of the efforts by legal scholars to glean insight from moral philosophy
– in particular, the attempts by a subgroup to articulate visions of
human flourishing and then to identify legal reforms that would advance
those visions – struck CLS scholars as less laughable. Indeed, in the late
1980s, some scholars formerly associated with CLS embarked on projects
of just that sort. But to the majority, the Aristotelian expedition, though
perhaps admirable, was doomed to failure. Peer into your soul – or reflect on
the best shared aspirations and commitments of your fellow citizens – and
you are likely to find not the seeds of a coherent conception of the good life
and the good society, but yet more contradictory impulses. In Kennedy’s
words,
Most participants in American legal culture believe that the goal of individual
freedom is at the same time dependent on and incompatible with the communal
coercive action that is necessary to achieve it. Others (family, friends, bureaucrats,
cultural figures, the state) are necessary if we are to become persons at all – they provide
us the stuff of our selves and protect us in crucial ways against destruction. . . .
But at the same time that it forms and protects us, the universe of others (family,
friendship, bureaucracy, culture, the state) threatens us with annihilation and urges
upon us forms of fusion that are quite plainly bad rather than good. . . . Through
our existence as members of collectives, we impose on others and have imposed on
us hierarchical structures of power, welfare, and access to enlightenment that are
illegitimate, whether based on birth into a particular social class or on the accident
of genetic endowment. The kicker is that the abolition of these illegitimate
structures, the fashioning of an unalienated collective existence, appears to imply
such a massive increase of collective control over our lives that it would defeat its
purpose.
The bleakness of this outlook prompted many critics – including some on
the political Left – to reject CLS as a theory of despair. To some extent, the
charge is fair. Kennedy himself acknowledged that his methodology could
be characterized as “tragic.” Certainly, irony – a sense of the frequency with
which good people are corrupted, well-meant reform efforts go awry, and in
general “things fall apart” – permeates CLS writings. But the aspiration, at
least, of the participants in the movement was to fuel, not enervate, projects
for political and economic change by discrediting arguments that depicted
the legal system as running reasonably well and susceptible of only modest
adjustment, by exposing the extent to which it was designed to advance
the interests of the wealthy and powerful, and by contributing to activist
lawyers’ awareness of the degree to which it was unstable and malleable.
Feminist Legal Theory
In the last quarter of the twentieth century, a growing group of scholars
began to examine closely the relationships among law, gender, and sexuality.
Their work rapidly became increasingly influential, despite (or perhaps
because of) the range and depth of their disagreements.
The first of the issues on which they diverged was the ideal of equality.
For centuries, successive groups of legal reformers in the United States
have been striving to eliminate inequalities in the positions of women and
men. In the early nineteenth century, for example, an improbable alliance of
Jacksonian politicians, businessmen, and early feminists sought legislative
changes that would give married women the same rights to engage in business
and manage their own property that their husbands already enjoyed.
The late nineteenth and early twentieth centuries witnessed a similar struggle
to accord women the right to vote. In the late twentieth century, analogous
campaigns were mounted to purge discrimination against women
in the workplace. And so forth. Some of these reformers argued (occasionally
successfully) that, to provide women true substantive equality, it was
necessary to accord them “special” (i.e., unequal) treatment – for example,
by providing them health benefits to cover the costs associated with
pregnancy. But the ultimate goal always remained to use the law to place
women on a par with men. An important line of theoretical writing, beginning
with John Stuart Mill's "The Subjection of Women," fed and was
fed by these initiatives. The primary theme of this body of writing is that
women have the same capacities and deserve the same legal entitlements
as men.
In the 1970s and early 1980s, more and more feminist legal theorists
repudiated this liberal vision and strategy. They took the position that
women are different – have different experiences, outlooks, and needs –
and that both a genuine understanding of women and the identification of
opportunities for progressive legal reform require taking those differences
seriously. The divisions within this group, however, were just as sharp as
the divide between its members and the liberal feminists.
The members of one subgroup – sometimes known as “maternal” or
“cultural” feminists – were inspired by the work of Carol Gilligan, Nancy
Chodorow, Jean Baker Miller, and Anne Schaef, who documented important
differences in the self-conceptions and habits of mind of girls and boys,
women and men. Robin West summarizes this body of work as follows:
[A]ccording to Gilligan (and her subjects), women view themselves as fundamentally
connected to, not separate from, the rest of life. This difference permeates virtually
every aspect of our lives. According to the vast literature on difference now being
developed by cultural feminists, women’s cognitive development, literary sensibility,
aesthetic taste, and psychological development, no less than our anatomy, are
all fundamentally different from men’s, and are different in the same way: unlike
men, we view ourselves as connected to, not separate from, the other. As a consequence,
women’s ways of knowing are more “integrative” than men’s; women’s
aesthetic and critical sense is “embroidered” rather than “laddered;” women’s psychological
development remains within the sphere of “attachment” rather than
“individuation.”
The most significant aspect of our difference, though, is surely the moral difference.
According to cultural feminism, women are more nurturant, caring, loving,
and responsible to others than are men. This capacity for nurturance and care dictates
the moral terms in which women, distinctively, construct social relations:
women view the morality of actions against a standard of responsibility to others,
rather than against a standard of rights and autonomy from others. As Gilligan
puts it:
The moral imperative . . . [for] women is an injunction to care, a responsibility to
discern and alleviate the “real and recognizable trouble” of this world. For men,
the moral imperative appears rather as an injunction to respect the rights of others
and thus to protect from interference the rights to life and self-fulfillment.
The sources of these differences were much debated by the members of
the group. Were they rooted somehow in biology? The results of evolution?
The byproducts of a childrearing system based on mothering – so that, in
Carrie Menkel-Meadow’s words, “growing up is a process of identification
and connection for a girl and separation and individuation for a boy”?
The byproducts of women’s experiences in taking care of young children?
The answers were uncertain. What was clear, however, was the presence of
systematic and durable differences between the genders.
That insight, in the judgment of the cultural feminists, had various implications
for law. Menkel-Meadow, for example, predicted that, once women
lawyers achieved a critical mass, we would likely see several changes in
the practice of law (for example, more use of mediation, more settlements,
less reliance in jury arguments on rhetorical styles based on “persuasive
intimidation” and more efforts on the part of advocates to create “a personal
relationship with the jury in which they urge the jurors to examine their
own perceptions and values and encourage them to think for themselves”);
in the organization of the profession (for example, more collegiality in writing
briefs, changes in the canons of ethics softening a lawyer’s obligation to
serve her client’s needs exclusively and mandating more disclosures of information
to opponents); and in legal doctrine (for example, wider definitions
of relevance and admissibility in the law of evidence). Other scholars were
even more explicit in urging that major fields of law should be modified to
make them less male and more female. For example, Leslie Bender argued
on Gilliganesque premises that the law of torts should be refashioned so as
to permit a plaintiff who makes a minimal showing that a defendant has
exposed her to serious risk to begin collecting from the defendant medical
expenses, lost wages, and other damages – and if her claim is ultimately
found to be meritorious, to force the defendant, not only to pay the plaintiff
money, but to assume non-delegable responsibility to provide her direct
physical care.
A second subgroup, led by Catharine MacKinnon, argued that the gender
differences identified by the cultural feminists, if they existed at all,
were the fruits of a socioeconomic system that enabled women to acquire
status and power only through their associations with men. A reform program
that celebrated and sought to generalize feminine virtues thus seemed
distinctly unpromising. Rather, MacKinnon argued, we should focus on a
different respect in which women are different: namely, that they are dominated
by men. That dominance has many dimensions, but at base it is
sexual. The nub of the matter, she argued, is that, in contemporary society,
men fuck, while women are fucked. MacKinnon’s claim was sweeping: “the
molding, direction, and expression of sexuality organizes society into two
sexes – women and men – which division underlies the totality of social
relations.” The central project of men, she argued, was to control all aspects
of women’s sexuality, from reproduction to “the social rhythms and mores
of sexual intercourse.” In this, they have been highly successful, not just
through the establishment and enforcement of formal rules that reinforce
their sexual power, but more fundamentally through the elaboration of an
ideal of femininity, centered on the traits of docility, softness, passivity,
nurturance, weakness, narcissism, incompetence, domesticity, and fidelity,
all of which implicitly emphasize women’s sexual accessibility and subordination.
Females internalize that ideal in order to become women; to
be a woman is to be sexually desirable to men by manifesting these
features.
The mission of feminism, MacKinnon claimed, is to overturn this structure
of domination. The obstacles are formidable. The infusion of contemporary
institutions and culture with the male point of view is so thorough
that it is extremely difficult for women to achieve an independent vantage
point. In a passage that revealed at once the harshness of her diagnosis of
the current situation and the ambitiousness of her hopes for the future, she
argued as follows:
Feminism criticizes this male totality without an account of our capacity to do
so or to imagine or realize a more whole truth. Feminism affirms women’s point
of view by revealing, criticizing, and explaining its impossibility. This is not a
dialectical paradox. It is a methodological expression of women’s situation, in which
the struggle for consciousness is a struggle for world: for a sexuality, a history, a
culture, a community, a form of power, an experience of the sacred.
The task of constructing such a consciousness would be made easier if we
could eliminate the legal rules that sustain male dominance. Proceeding on
that assumption, MacKinnon and her allies launched in the 1980s and 1990s
a formidable set of reform initiatives. The most successful and deservedly
famous was their effort to establish the illegality of sexual harassment.
Almost as notorious was their campaign to tighten prohibitions on the
distribution of pornography. The city of Indianapolis did indeed adopt such
an ordinance, but a federal court struck it down as a violation of the First
Amendment. (Not all feminists were dismayed by the court's ruling; so-called
“sex-positive” or “sex-affirmative” feminists thought the suppression
of pornography would do more harm than good.) MacKinnon’s most recent
initiative has been an effort to secure international legal recognition of rape
as a war crime.
Is there anything, then, that feminist legal theorists have in common?
Perhaps one – a methodology. Much more than any of the other groups
of scholars we have considered, feminist legal theorists were and are concerned
with the manner in which insights concerning the nature of law
are developed and disseminated. Specifically, they emphasize conversations
with or among women. For some, like Joan Williams, this commitment
is connected to an “antifoundationalist epistemology” – the notion that
our identities and our aspirations are entirely socially constructed and thus
that the only way in which we can hope to identify normative criteria is to
explore and debate the shared commitments of the communities to which
we belong and in which we must continue to make ourselves. For others,
like MacKinnon, it is rooted in appreciation of the revelatory power of the
activity of “consciousness raising”:
Consciousness raising is the major technique of analysis, structure of organization,
method of practice, and theory of social change of the women’s movement. In
consciousness raising, often in groups, the impact of male dominance is concretely
uncovered and analyzed through the collective speaking of women’s experience,
from the perspective of that experience.
Whatever its origins, this approach differs radically from the solitary,
introspective methods employed by most other American legal theorists –
indeed, by most scholars of all sorts.
To sum up, American legal theory in the twentieth century can be divided
roughly into thirds. In the first, scholars associated initially with
sociological jurisprudence and then with legal realism led an ultimately
successful assault on the fortress of classical legal thought. In the second,
a new orthodoxy emerged, organized around the methodological commitments
and political centrism of legal process theory. In the third, process
theory fell from grace, succeeded not by a single revolutionary creed, but
by sustained conflict between the adherents of several incompatible schools
of thought: law and economics, law and society, several variants of moral
philosophy, critical legal studies, and feminist legal theory. At the beginning
of the twenty-first century, no resolution of this controversy was yet
in sight.
II. EDUCATION
The Emergence of the Harvard System
The central event in the history of American legal education was the
establishment and dissemination of the Harvard model. This transformation
began in 1870, when President Charles Eliot of Harvard University
appointed Christopher Columbus Langdell as dean of the law school there,
and Langdell, with Eliot’s aid, set in motion a set of related changes in the
structure and pedagogy of the school. By 1920, the majority of American
law schools – and virtually all of the elite, full-time schools – had implemented
most aspects of the new system (some eagerly, some grudgingly,
some after bitter internal struggles), and the increasingly powerful American
Bar Association and Association of American Law Schools, formerly
often divided on the issue, were now reasonably united in advocating its
universal adoption.
Although the transformation in legal education was well underway before
1920, understanding the new model at its inception is crucial to comprehension
of developments in legal education that occurred thereafter. So let
us first briefly review its main features.
The Harvard system had five related components. First, law should be
learned, not through an apprenticeship, not in an undergraduate program,
but through a three-year formal program of study in a graduate
school.
Second, the primary materials one studied in law school were appellate
judicial opinions applying legal doctrines to particular sets of facts. In his
pioneering casebook on the law of contracts, Langdell justified this so-called
case method on the following grounds: “[L]aw, considered as a science,
consists of certain principles or doctrines. To have such mastery of these as
to be able to apply them with constant facility and certainty to the ever-tangled
skein of human affairs, is what constitutes a true lawyer . . . and
the shortest and the best, if not the only way of mastering the doctrine
effectually is by studying the cases in which it is embodied. . . . Moreover,
the number of legal doctrines is much less than is commonly supposed.”
Gradually, this initial justification gave way to a different theory. James Barr
Ames, Langdell’s successor as dean at Harvard, contended that the purpose
of the case method was not to teach students the content of legal principles,
which were too multifarious to be conveyed in any course of study, but
rather to equip them with “the power of solving legal problems” – in other
words, to train them to “think like lawyers.” (The second justification,
even more than the first, suggested that the jurisdictions from which the
judicial opinions in question were drawn were unimportant. It thus made
sense for students in schools located in different states to learn from the
same casebooks – and for students to attend law schools in states other than
those in which they expected to practice. Thus was born the idea of the
“national law school.”)
Third, classroom instruction consisted primarily of so-called Socratic
questioning. The professor asked students to describe the facts of the cases
and to analyze the courts’ reasoning. Through repeated inquisitorial exercises,
the students were expected to learn how to ferret out the principles
underlying decisions and to recognize the relatively few instances in
which courts had gone astray. (Interestingly, recent scholarship suggests that
Langdell himself, the popularizer if not the inventor of this method, used
it in a somewhat different spirit, encouraging students to think critically
and frequently acknowledging “his ignorance or uncertainty about points
of doctrine.” But, in this respect, Langdell seems to have been atypical.)
Fourth, the subjects taught in this fashion should consist of “pure law”
courses. Political science, philosophy, and economics had no place, so the
proponents of the model argued, in a law school curriculum. Indeed, though
the set of subjects taught in most law schools in 1920 was somewhat larger
than the set taught in the middle of the nineteenth century, it did not differ
in kind.
Fifth and finally, students demonstrated their competence not by writing
essays expounding legal doctrine, but by applying what they had learned to
hypothetical problems. Initially, those problems were brief and schematic.
Over time, they became increasingly complex.
Several factors help explain why this system took root and then, like
kudzu, spread so rapidly. The system rested on a particular variant of the
old idea of law as a science that both resonated with the classical style of
legal thought (which, as we have seen, was dominant around the turn of
the century) and appealed to university administrators then in the process
of refashioning American higher education along German lines. It was also,
in Ames’s words, “a virile system,” in which learning was achieved through
self-reliance, struggle, and competition, activities celebrated by the then-popular
ideology of Social Darwinism. On a more practical level, it was
inexpensive, enabling small faculties to teach large bodies of students. In
the opinion of advocates such as Eliot and of some modern historians (such
as William LaPiana), it was functional, in the senses that it did a good job of
imparting to students skills they would actually need when practicing law
(although many practitioners during the period were skeptical on precisely
this point) and that insights gleaned through combative Socratic exchanges
were more likely to be retained by students than knowledge imparted
through more traditional lectures. In the opinion of other historians (such
as Harry First), it served the less noble interests of a subset of law schools in
controlling the market for legal education and of established practitioners
in reducing competition in the provision of legal services. Finally, in the
opinions of still others (such as Robert Stevens and Jerold Auerbach), it was
one of many devices by which elite lawyers sought to limit the number of
Irish, Italians, Poles, Jews, and African Americans who entered the profession
– and to inculcate “proper principles” and respect for the American
system of government in the few who were admitted. Whatever its causes,
by 1920, it exerted a powerful grip on American legal education.
Criticisms (Round One)
In the first half of the twentieth century, the Harvard model was attacked
from two quarters, but withstood both assaults. The first came from Alfred
Reed, a non-lawyer who, under the auspices of the Carnegie Foundation,
published a set of high-profile studies of legal education and the legal profession
in the United States. In Reed’s view, the joint aspiration of the
elite university-affiliated schools, the ABA, and the AALS to create a “unitary”
bar through universal adoption of the Harvard system was misguided.
Instead of seeking to eliminate the rapidly growing set of unaccredited, proprietary,
part-time, and night law schools, which catered to poorer students
and second-generation immigrants, the bar should embrace them. Drawing
loosely on the British system, which separated lawyers into barristers
and solicitors, Reed argued that the United States, as a large, pluralistic
society, needed more than one type of lawyer. The elite schools should train
the elite; the proprietary schools should train the rest. Reed’s comments on
pedagogy were closely related to this vision. The case method and Socratic
questioning, he acknowledged, were excellent tools in the hands of “genuine
scholars” training smart, well-prepared students. But they were inferior to
older, more straightforward teaching techniques when it came to training
the harried students of “ordinary” abilities who filled the proprietary
schools.
As one might imagine, Reed’s report found favor among the deans and
faculties of the proprietary schools, but did not persuade the increasingly
consolidated leadership of the ABA and AALS. Elihu Root,
then chair of the ABA Section of Legal Education and Admissions to the
Bar, denounced Reed’s proposal for a stratified bar as undemocratic and un-
American and his overall message as “reactionary,” “narrow,” and “unfair.”
Arthur Corbin of Yale, then president of the AALS, was similarly hostile.
The increasingly shrill complaints of people like Gleason Archer, dean of
Boston’s Suffolk Law School (which, though unaccredited, was then the
largest law school in the world), that the elite were conspiring to drive
them out of business fell on deaf ears.
The second of the attacks came from inside the elite law schools themselves.
For years, some of the faculty of major schools other than Harvard
had been expressing doubts about the merits of the case method, Socratic
questioning, and exclusive focus on “pure law” subjects. For example, in
1912, George Chase, formerly a professor at Columbia and by then the dean
of the New York Law School, argued that “case-books take a good deal more
space to set forth the law on a given subject than do text-books, and even
then they may not do this with satisfactory completeness,” and that “it will
not seem surprising that a law school using treatises as the fundamental
basis of its instruction can cover the same field of legal knowledge in a
shorter time than schools which confine themselves to case-books.” Legal
realism threw wood onto this smoldering fire. For example, Jerome Frank,
drawing directly on his views concerning the limited explanatory or predictive
power of appellate opinions, argued that, if students were to learn the
law through the study of cases, they should at least be provided with full
information concerning the genesis of those controversies and the various
factors, both “rational” and “non-rational,” that shaped the conduct of the
parties, lawyers, juries, and judges. More broadly, he urged law schools to
recapture some of the good features of the old “legal apprenticeship system”
– for example, by requiring students to visit trial and appellate courts
and to participate in legal clinics, providing legal aid to the poor, to the government,
or to quasi-governmental agencies. Karl Llewellyn echoed many
of Frank’s arguments and in addition urged law schools to abandon their
misguided effort to separate pure law topics from the “background of social
and economic fact and policy.” History, philosophy, economics, and the like,
he contended, should be introduced into the law school curriculum, not by
creating courses offering interdisciplinary “perspectives” on doctrine, but
by integrating serious analysis of such matters into every course.
Sentiments of these sorts, widely shared at schools where realism was
well represented, generated some serious efforts to institute major pedagogic
reforms. The most serious of all came at Columbia, where, with the
encouragement of Dean Huger Jervey, ten faculty committees worked
for two years to refashion the curriculum along “functional” lines. The effort
bore some fruit – a few new courses, most pertaining to economics or trade
regulation; some unconventional casebooks; and the addition of business
experts, philosophers, and political scientists to the law school faculty. But
the reformers lost the faculty fight over the committees’ more sweeping
recommendations. After the appointment in 1928 of a new, more conservative
dean, the principal agitators resigned. William O. Douglas and
Underhill Moore left for Yale, and Herman Oliphant and Hessel Yntema
joined Walter Wheeler Cook in founding a new research center at Johns
Hopkins.
Influenced in part by the arrival of Douglas and Moore, Yale Law School at
the end of the decade experimented with its own curriculum on a more modest
scale. Dean Robert Hutchins was supportive, and some new empirically
oriented courses were developed. But increasing disillusionment concerning
the insights into law that could be gleaned from the social sciences and
Hutchins’ departure for the University of Chicago stunted the initiative.
The Johns Hopkins Institute, for its part, fell prey to economic pressure.
Disdaining the training of practitioners and focused exclusively on research,
it was financially dependent on donors. Funded for only five years, it could
not survive the philanthropic drought of the Depression.
Hegemony and Evolution
The main storyline in American legal education during the remainder of
the twentieth century was the continued spread and consolidation of the
Harvard model. The ABA and AALS, working increasingly collaboratively,
adopted ever stricter guidelines – intended to apply to all law schools – on
minimum numbers of faculty, maximum student/faculty ratios, the number
of years of undergraduate study that were required for admittance (first two,
then three, finally four), and the size of and funding for law libraries. For
many years, these guidelines were paper tigers. Students graduating from
nonconforming (and thus unaccredited) schools could still take state bar
examinations and thus enter the profession. But the rules gradually grew
teeth. Bar examiners acceded to pressure from the elite schools to adopt
questions that resembled the problem-based questions used in the course
examinations in the elite schools, and state legislatures began to make some
of the guidelines (for example, two years of college study before law school
and three years of law study) mandatory for admission to practice. California
continued to allow graduates of unaccredited schools to become lawyers,
but required them to take special tests from which students in accredited
schools were exempt.
Many of the unaccredited proprietary schools responded to these growing
pressures by conforming. A growing percentage adopted the case method.
Some added undergraduate programs to their curricula, enabling admitted
students to perform their obligatory years of pre-law study before beginning
their law school courses. Others hired new faculty and expanded their
libraries. But, just as the proponents of the new rules anticipated, many
of the lower tier schools were incapable of complying and went out of
business. The net result: the percentage of students enrolled in accredited
schools steadily rose.
Yet, even as its grip was tightening, the Harvard system of legal education
began to change – incrementally, to be sure, but ultimately in substantial
ways. Perhaps the most obvious area of adjustment concerned the subject
matter of the courses offered in the accredited schools. The absolute numbers
of course offerings increased steadily. Equally important, the proportion
focused exclusively on pure law topics slowly declined, whereas the proportion
overtly addressing “policy” issues or drawing on disciplines other than
law rose. This trend accelerated after 1970, reinforced by the addition of
courses concerned with (and typically promoting) means of dispute resolution
other than litigation – negotiation, mediation, and arbitration. The
impact of the major schools of legal thought traced in the first half of this essay on this axis of change was obvious: More and more courses addressed such
themes as the legal process, law and economics (in general or of particular
subjects), law and society, critical theory, and feminist legal theory.
Even by mid-century, the number of course offerings in most schools was
such that no student could take them all in three years. As a result, all schools
(even Harvard, nudged by a 1947 report from a curricular reform committee
chaired by Lon Fuller) reduced the number of courses students were obliged
to take, increasing their freedom to pick and choose in their second and third
years from a growing collection of electives. Another side effect was that the
average size of upper-level classes decreased steadily. That trend, plus the
proliferation of seminars, modeled loosely on those available in arts-and-sciences
graduate schools, afforded some students increased contact with
faculty members.
Another area of adjustment concerned the character of assigned readings.
Casebooks containing nothing but appellate opinions were gradually
displaced by collections of “cases and materials” – the “materials” typically
consisting of bits and pieces of philosophy, sociology, political science,
economics, and editorial commentary. The organization of the new books
commonly reflected the schools of legal thought that their authors found
most congenial. For example, Lon Fuller’s 1947 contracts casebook bears
many marks of legal realism. Most famously, the placement of the section
on remedies at the beginning rather than at the end of the book was clearly
motivated by the realist insistence that rights are derivative of remedies, not
the reverse – that a right exists only to the extent that effective procedures
are in place to enforce it. But it also showed the extent to which Fuller
thought he had transcended realism. Specifically, the inclusion of many
notes and references designed to elucidate the various policies underlying
each doctrine reflected Fuller’s faith that a mature judge or other decision
maker could, through careful weighing of those considerations in particular
contexts, resolve controversies among contracting parties in wise and
reasonably determinate ways.
Pedagogy changed too. Professors continued to question students. But
gradually, as the century wore on, the interrogations became less fierce,
less concerned with explicating cases, and more with exploring policy
issues. Professors tipped their hands more, humiliated students less, and
interspersed Socratic questioning increasingly often with mini-lectures.
Defenders of the new style argued that it was both more efficient and
more humane than the older approach. Traditionalists, like Roger Cramton,
lamented the resultant decline in “the kind of hard-nosed, analytical and
disciplined thinking on which the best law schools used to pride themselves”
and attributed the declension to growing “malaise” among law teachers –
“uncertainty about what they are teaching and why.” (Interestingly, critics
from the Left offered a similar diagnosis. Roberto Unger, for example,
closed his book on The Critical Legal Studies Movement with a harsh depiction
of the mainstream law teachers that the movement was seeking to discredit
and displace: “[T]hey were like a priesthood that had lost their faith and
kept their jobs. They stood in tedious embarrassment before cold altars.”)
Clinical legal education also rose in importance and popularity during the
second half of the twentieth century. Clinical instruction has a long pedigree.
The apprenticeship system by which most early nineteenth-century lawyers
were trained can fairly be described as a form of clinical teaching. Around the
turn of the century, a few law schools sought to recapture some of the benefits
of that system by establishing legal aid clinics in which students could
gain experience representing real (typically poor) clients. The University of
Pennsylvania did so in 1893, the University of Denver in 1904, Harvard
itself in 1912, and Yale in 1915. But several factors reduced the impact
of these early programs. With rare exceptions (such as at the University
of Southern California), students could not earn credit for participating
in them, the instructors who guided the students lacked both tenure and
prestige, and the most ambitious and competitive students usually avoided
them. As we have seen, some legal realists argued that these programs
should be radically expanded and made central to legal education, but their
agitation had little impact.
Beginning in the 1960s, three forces combined to boost clinical education
substantially. The first and probably most important was money.
Between 1959 and 1965, the National Council on Legal Clinics, supported
by the Ford Foundation, awarded grants totaling roughly $500,000 to nineteen
law schools to enable them to create or expand clinical programs. In
1968, the Ford Foundation increased this level of support dramatically. Over
the next decade, through the Council on Legal Education for Professional
Responsibility (CLEPR), it granted roughly $12 million to more than 100
law schools to help them increase their for-credit clinical offerings. The second
factor was social and political unrest. Starting in the 1960s, growing
numbers of students became dissatisfied with the apolitical or conservative
character of regular law school instruction and saw in the expanding clinics
opportunities to put their skills to progressive purposes even before graduating.
The growing set of clinical instructors, most of them drawn from
the public interest bar, were eager to satisfy this demand. Third, the organized
bar became increasingly convinced that the law schools were failing in
their responsibility to provide students practical lawyering skills – facility
in legal research, document drafting, counseling, initiating litigation, and
so forth – and urged the schools to fill the gap through increased clinical
instruction. One relatively late manifestation of this pressure was the "MacCrate Report" (named after Robert MacCrate, the chair of the ABA Task
Force from which it issued), which, among other things, urged the schools
to create more “opportunit[ies] for students to perform lawyering skills
with appropriate feedback.” By the turn of the century, the intersection of
these forces had prompted the large majority of law schools to offer their
students for-credit clinical instruction.
The final dimension along which the Harvard model changed was the
manner in which students were differentiated. In the 1920s and 1930s,
the elite schools sorted students, not at the doorstep, but after they were
admitted. Even Harvard demanded of applicants nothing more than an
undergraduate degree from an accredited college. But then more than half
of each entering class “flunked out” before graduation. Gradually, the elite
schools became ever more selective in determining which candidates they
would admit while reducing the percentages they discarded after admission.
This is not to suggest, however, that the law school experience for
admitted students became more egalitarian. On the contrary, the divisions
drawn among the students became ever sharper.
One of the principal vehicles of stratification was the law review – an
institution (puzzling to academics in other disciplines) in which students
select, edit, and publish most of the articles written by law professors. The
first law reviews were established in the late nineteenth century. Their number
increased slowly in the first quarter of the twentieth century and rapidly
thereafter. One of the reasons for their proliferation was that membership
on the editorial board of a law review, typically determined entirely on the
basis of first-year grades, came to function as a badge – a signal to prospective
employers, among other audiences, of students’ abilities and accomplishments.
(Another reason, as Karl Llewellyn acidly observed, is that law
review members in their second year of school could obtain from their comrades
in their third year effective, personalized instruction, including close
editing of their written work, that the law school faculty was unable or
unwilling to provide them.) By the middle of the century, competition for
such positions became fierce. Students’ job opportunities, self-images, and
friendship networks came to depend, to distressing degrees, on whether
they had “made” law review. Starting in the 1970s, the proliferation of
student-edited journals and the growing status of interdisciplinary work
eroded the accreditation power of the flagship law reviews, but at the end
of the century it was still formidable.
Diversity
Over the course of the twentieth century, the range of options open to people
other than white men who wished to obtain legal educations expanded
slowly and erratically. In 1900, no top school admitted women, although
some second-tier schools – Iowa, Michigan, Boston University, and Hastings
– had opened their doors to them. But the most prestigious institutions
– Harvard and Yale among them – expressly refused to do so. Some
proprietary schools sought to fill the resultant gap. For example, in 1908,
Arthur MacLean founded the Portia Law School in Boston, initially limiting
admission to women students. Over the course of the early twentieth century,
the top schools, one after another, relented. Harvard, to its shame, was
the last of the lot, waiting until 1950. As one might expect, the net result
was that, in the second half of the century, the percentage of women among
law students increased steadily. By 2000, women constituted a majority of
the graduates of several schools.
In the early twentieth century, the sharply limited opportunities available
to African Americans worsened even further. The campaign to “raise standards”
in legal education had the predictable effect (arguably, the purpose)
of constricting the number of African Americans who could gain access to
the profession. In part, this constriction resulted from the increase in the
number and height of the hurdles that one had to clear to be admitted,
disadvantaging African Americans who, on average, had more limited educations
and financial resources. And in part, it resulted from the adverse effect
of the campaign on schools that specialized in training African Americans.
In 1928, there were four such schools: Howard, Frelinghuysen, Simmons,
and Virginia Union. A decade later, only Howard was thriving.
Only after 1950 did the situation materially improve. Some law schools
(most of them in the South) were forced through litigation to abandon
admissions policies that overtly discriminated against African Americans.
Then, in the 1960s, other law schools adopted affirmative action admissions
policies that, in one way or another, granted preferential treatment to
African American, Hispanic, and Native American applicants.
In the late twentieth century, affirmative action was employed in the
United States in a wide variety of economic and social contexts in efforts
to remedy histories of invidious discrimination. In most of those settings,
it was highly controversial, and its application to law school admissions
was no exception. Some observers defended its use either as essential to
affording members of minority groups access to positions of power (many
of which required legal training) from which they had long been wrongly
excluded or as necessary to provide all law students a learning environment
in which could be found a range of views (on matters of all sorts) that
was representative of the opinion spectrum of the society at large. Other
observers criticized the practice either as unjust (to the whites disadvantaged
by it) or as corrosive of sound pedagogy. Richard Posner, for example,
traced the decreased use of Socratic questioning in the classroom in part
to affirmative action, “which, virtually by definition, entails the admission
of minority students less qualified on average than the law school’s nonminority
students, hence more likely to be embarrassed by the ‘cold call’
method of Socratic teaching.”
To some extent, the struggle over the legitimacy of affirmative action –
both in the context of law school admissions and in other settings – was a
legal question. When employed by public institutions (such as law schools
associated with state universities), it was challenged as violative of the Equal
Protection Clause of the Federal Constitution, and when employed by private
institutions, it was challenged as violative of civil rights statutes. Not
surprisingly, law school faculty frequently expressed views about the merits
of those challenges. In their arguments, the commentators often drew
explicitly on one or another of the then-popular schools of legal thought.
For example, Terrance Sandalow and John Ely both offered defenses of
affirmative action grounded in process theory, and Ronald Dworkin drew
overtly on his particular brand of Kantian liberalism in justifying the practice.
But the connection between scholars’ theoretical commitments and
their views on the issue was not tight; scholars within a given school of
thought sometimes disagreed. For instance, whereas some economists (like
Posner) criticized the practice, others (like Robert Cooter) argued that,
at least under some circumstances, it could be efficient. And while many
scholars affiliated either with critical legal studies or critical race theory
(such as Duncan Kennedy and Charles Lawrence) defended its use by law
schools, others (such as Richard Delgado) were much more skeptical of the
practice.
In the end, affirmative action survived (more or less) the legal attack on
it. In the 1978 Bakke case, a plurality of the Supreme Court, in an opinion
by Justice Powell, recognized that the promotion of diversity within its
student body was “a constitutionally permissible goal for an institution
of higher education.” Commenting specifically on its use by law schools,
Powell observed, “The law school, the proving ground for legal learning
and practice, cannot be effective in isolation from the individuals and institutions
with which the law interacts. Few students and no one who has
practiced law would choose to study in an academic vacuum, removed
from the interplay of ideas and the exchange of views with which the law
is concerned.” In the 2003 Grutter cases, the Court took much the same
position, upholding reference to race in admissions decisions, so long as
it was achieved not by mechanically adding points to applicants' scores to
reflect their racial identities, but by taking race into account when making
individualized admission decisions.
By most accounts, affirmative action, at least as employed in law school
admission decisions, has been an enormous success. For example, an empirical
study of the effects of the race-conscious admissions policies employed
by the University of Michigan Law School since the late 1960s concluded
as follows:
By any of our study’s measures Michigan’s minority alumni are, as a group, highly
successful in their careers. Although, as a group, they entered Michigan with
lower LSAT scores and lower UGPAs [undergraduate grade point averages] than
other students, in their jobs immediately after law school and in their jobs today,
Michigan’s minority alumni are professionals fully in the mainstream of the American
economy. They are well represented in all sectors of the legal profession. They
are successful financially, leaders in their communities, and generous donors of
their time to pro bono work and nonprofit organizations. Most are happy with
their careers, and minority alumni respond no differently than white alumni when
asked about overall career satisfaction. LSAT scores and UGPA scores, two factors
that figure prominently in admissions decisions, correlate with law school grades,
but they seem to have no relationship to success after law school, whether success
is measured by earned income, career satisfaction, or service contributions. If
admission to Michigan had been determined entirely by LSAT scores and UGPA,
most of the minority students who graduated from Michigan would not have been
admitted even though the measures that would have worked to exclude them seem
to have virtually no value as predictors of post-law school accomplishments and
success.
Criticisms (Round Two)
In the last two decades of the twentieth century, the dramatic increase in
the diversity of law school student bodies helped fuel another round of calls
for reform of the character and content of legal education. In the judgment
of the critics, the (reformed) Harvard model remained inexcusably sexist,
racist, and conservative. Three clusters of criticisms loomed largest.
First, many feminist scholars argued that American law schools were
inhospitable places for women students. To some extent, this was the result
of overtly sexist behavior by male students or by the overwhelmingly male
faculty. In class, women students were interrupted more often and were
called on less often. When judging moot court competitions, faculty judges
would comment on women students’ dress. Criminal law professors would
deliberately ask women to state the facts of rape cases. Male pronouns were
commonly employed to refer to judges, lawyers, and reasonable persons;
female pronouns were employed to refer to emotional or unstable persons.
Casebooks and syllabi omitted or deemphasized topics of particular interest
to women. The extent to which gender bias contributed to the origins or resolutions
of particular controversies or to the shape of particular doctrines was
typically ignored. And so forth. More fundamentally, various aspects of the
prevailing pedagogy, the critics argued, disadvantaged women. The ethos of
“rigor”; the privileging of general rules and arguments over context-specific
considerations; the hierarchical, authoritarian Socratic method; inattention
to the wisdom that can be gleaned from personal experiences – all these contributed
to an environment hostile to the “female voice” and intimidating
to women students.
Empirical studies lent support to these claims. The most comprehensive
was conducted at the University of Pennsylvania in the early 1990s. Its
principal findings were that the Socratic method made women students
there feel “strange, alienated, and ‘delegitimated’”; that, as a result, women
participated in classroom discussions less often than men; and that, by the
end of their first year of legal education, women students were only one-third as likely as men to rank in the top 10 percent of their class. In language
that echoed one branch of feminist legal theory, the authors of the study
concluded that even women who do well academically succeed in part by
transforming themselves: “For these women, learning to think like a lawyer
means learning to think and act like a man. As one male professor told a
first-year class, ‘to be a good lawyer, behave like a gentleman.’” A less formal
study conducted at Harvard in 2002 came to similar conclusions: female
students were less likely than males to talk in class or to graduate with honors
and more likely to describe the law school experience as “alienating” –
although they were more likely than men to occupy top-tier positions in
student-run journals.
Critical race theorists offered analogous criticisms. The curricula of most
law schools neglected racial issues, they argued, and the prevailing pedagogy
erected unnecessary barriers for members of minority groups. They
urged greater use in the classroom of such devices as narratives, simulations,
and “reflection pieces,” which would both empower minority students and
highlight the racial dimensions of legal controversies and doctrines. Of special
concern to many critical race theorists was the under-representation of
minorities in legal scholarship. Derrick Bell, Richard Delgado, and Mari
Matsuda, among others, argued that persons of color, largely because of
their experiences of racial oppression, had something distinctive to contribute
to scholarly debates, but had trouble finding publication outlets.
Partly for that reason, they urged law schools to employ affirmative action,
not just (as suggested above) when deciding which students to admit, but
also when hiring and promoting faculty (although at the same time they
warned of the hazards of “tokenism”). Many white professors and a few
minority professors (for example, Randall Kennedy and Stephen Carter)
contended, by contrast, that affirmative action was inappropriate in this
context; a genuinely meritocratic standard was sufficient.
The third cluster of criticisms came from scholars associated with
CLS. The most ambitious and influential essay in this genre was Duncan
Kennedy’s 1982 pamphlet, Legal Education and the Reproduction of Hierarchy.
Kennedy’s thesis was that, in myriad ways, law schools convey to students
that “it is natural, efficient, and fair for law firms, the bar as a whole, and
the society the bar services to be organized in their actual patterns of hierarchy
and domination.” Among the features that contribute to this message
are: the “patriarchal” Socratic method, still used often in first-year classes,
which inculcates ambivalence and conservatism; the technique of burying
emotional or outrageous cases within casebooks dominated by run-of-the-mill
cases, which pressures students to ignore their moral intuitions; the
failure to provide students training in practical skills of lawyering, leaving
them little choice but to seek employment after graduation in private law
firms, which replicate the controlled and supervised law school experience;
and a rigid grading system, which reinforces students’ senses of both the
inevitability and the justice of hierarchy. Only radical change in many of
these dimensions could make the schools effective training grounds for
lawyers interested in progressive social and political work.
As was true of the first round of criticisms, these attacks on the dominant
form of legal education had relatively little impact. Overtly sexist
behavior by faculty and students diminished. Some schools gave preferential
treatment to minorities and, less often, to women in faculty hiring
and promotion. And a few dedicated Left professors – such as Gerald
Lopez at Stanford – developed courses and clinical programs intended to be
more politically progressive. But, by the turn of the century, no school had
developed a “radically reconceived training regimen.” A chastened version
of the Harvard model still ruled the waves.
CONCLUSION
In retrospect, we can see that some innovations both in American legal
theory and in American legal education were shaped or provoked by developments
in other dimensions of American politics and culture. For example,
legal realism was inspired in part by Progressivism and was reinforced by
the New Deal. Likewise, the effort during the 1960s and 1970s to achieve
greater diversity in law school student bodies and the intense concern on the
part of several groups of legal theorists with the various meanings of "equality"
are traceable in large part to the civil rights movement. Other innovations
seem more connected to developments in other academic disciplines. For
example, to some extent legal realism echoed recent developments in psychology
and anthropology; neo-Kantian legal theory was inspired, as the
label suggests, by contemporary currents in philosophy; and CLS incorporated
aspects of structuralism and postmodernism. Still other innovations
seem at least partially serendipitous; a particular person with an idiosyncratic
set of ideas happened to occupy a position of influence at a particular
time, and much changed as a result. Examples of such figures would include
Langdell, whose educational philosophy so heavily colored late-nineteenth- and
twentieth-century pedagogy, and Richard Posner, whose limitless faith
in the power of economics and seemingly boundless energy were crucial in
launching and sustaining a variant of utilitarian analysis that continues to
infuse large sectors of legal scholarship and instruction.
The result of this confluence of forces is a highly distinctive legal culture
and system of legal education. Scholars and students from other countries
who come to law schools in the United States are often disoriented. Much,
initially, seems to them peculiar. Whether that distinctiveness will survive
the twenty-first century remains to be seen.
3
the american legal profession, 1870–2000
robert w. gordon
This chapter deals with two broad topics. One is the “legal profession,” the
formal institutions and organizations through which associations of lawyers
seek and exercise state authority to regulate training for and admission to
their guilds, to enforce their rules against members, and to protect their
privileges against outsiders. The other and much broader topic is that of
lawyers themselves, the people and occupational groups who make up the
profession, their work and social roles and their social standing, economic
condition, and political influence. In the United States all lawyers have since
the Revolution formally belonged to a single, unified profession, licensed
by the states where they practice. There are no official ranks or specialties of
lawyers, such as the English distinction between barristers (trial lawyers) and
solicitors; the French among avocats, avoués, conseils juridiques, and notaires;
or the German between the private profession of advocate and the public
professions of civil servant, prosecutor, and judge, each calling for a different
training, examination and career path. But in reality the legal profession is
many, not one: a collection of occupational groups that work at very diverse
practice tasks, enjoy very different levels of status and income, and play
very different roles in the economy, politics, and society.
The chapter begins with the founding of professional organizations and
institutions around 1870 and their growth and development up to 1970.
It describes the “professional project” of organizational entrepreneurs from
the elites of the urban bar, who launched several initiatives to create new
rules to govern and organize the profession of law and new institutions
to carry them into effect. Over the next hundred years these initiatives
would gradually transform the profession. They were part of more general
movements to organize middle-class occupations as professions – claiming
a cognitive basis in science, requiring university education and credentials
for entry, occupying a market niche protected from the competition of lay
providers, and conferring an exalted social status and cultural authority.
The lawyers who set these changes in motion believed that their society
was experiencing a social and political crisis of misgovernment and class
division. Their authority and status as professionals were undermined by
the entry of new ethnic immigrant groups, and their dominance of public
life by corrupt alliances between machine politicians and upstart corporate
wealth. The movement to professionalize the legal system entailed working
to create a corps and institutions – both private lawyers and public officers,
such as judges, prosecutors, and administrators – capable of restoring the
“rule of law,” meaning governance as the applied technique of an educated
elite trained and skilled in a specialized legal science and operating through
procedures of which they possessed distinctive mastery. This professional
reform movement required restoring respect for lawyers and the courts as
independent guardians of the constitutions and legal tradition. It required
excluding or disciplining ethnic newcomers. And it required replacing (in
some part) governance through alliances among political parties, ethnic-immigrant
urban machines, and new business interests with governance by
civically virtuous professionals.
In the first section I tell the story of the mixed motives powering these
professional reform movements and of their mixed achievements. As will
be seen, they slowly but ultimately succeeded in their aim of closing off the
profession to all but college and law school trained aspirants who passed a
bar exam. They failed, however, to keep out ethnic newcomers, though they
largely confined them (as well as the few African Americans and women
who gained entry) to the profession’s lower rungs. The ethics codes and
disciplinary machinery they set up proved ineffectual to police any but
the most egregious misconduct of non-elite practitioners. Their efforts to
mark off and control the market for their services were partly successful,
but limited by encroachments of competing professions and lay providers.
They reclaimed important public legal posts for “merit” appointments, but
left large enclaves such as state lower court judges and prosecutors under
the influence of patronage machines and special interest politics. Before the
1970s the bar associations helped set up some charitable legal aid offices in
major cities, but otherwise did little to make legal services available to the
poor; thereafter they became champions of publicly funded legal services,
but with meager results. Though virtually every field of law grew more
specialized and technically demanding, the legal elite’s aspirations to build
a corpus of legal science that would legitimate their authority, as scientific
medicine had done for physicians, were defeated by a lack of consensus over
the content of legal science and its uncertain relevance to lawyers’ work.
The next two sections shift the focus to “lawyers at work,” a subject
divided into two periods roughly tracking the emergence and growth of
distinctive types of legal practice. The first period, 1870–1930, sees the
rise of the corporate law firm, plaintiff’s personal injury practice, and public
interest lawyering. The second, 1930–1970, describes the new specialties
emerging from the statutory and administrative innovations of the New
Deal and the postwar political-economic order, as well as from the rights
revolution of the 1960s and 1970s. Throughout, the emphasis is on lawyers
in sectors whose practices underwent big changes, who themselves changed
the world around them in significant ways, and who attracted sufficient
attention from biographers and historians so that their stories can be told.
(These criteria tend to leave out the largest sector of the bar in the entire long
century, solo and small practitioners. Their practices did not change much
over time, and they tended to be the objects and casualties of larger social
forces rather than instigators of them. For both reasons, few materials exist
from which to write their history.) The elite corporate bar is given special
attention, because of its role in building state institutions, promoting legal
and constitutional ideologies, and mediating between business and the
state.
In the final section the two narratives of the “professional project” and of
“lawyers at work” are combined, in an account that I call “Expansion and
Upheaval,” which traces the major transformation since 1970 of the demographics,
institutions, and ideals of the profession and of lawyers’ practices.
During these years the profession tripled in size and admitted women and
minorities in significant numbers. Corporate law firms multiplied, grew
to enormous size, went national and international, and began to claim the
largest share of total legal business. Personal injury practice entered the
age of the mass-tort class action. Public interest “cause” lawyers added
new constituencies and began to play a regular role in governance. All these
changes in turn had a dramatic impact on the aspirations and institutions of
traditional professionalism. The ideal of a single unified profession receded
as social distance and income differentials widened between its upper and
lower tiers. The ideals of independence and public service virtually disappeared
among private lawyers, though they found a new home among
public interest lawyers and non-governmental organizations (NGOs).
I. THE ORGANIZED BAR AND ITS PROFESSIONAL PROJECTS:
1870–1970
Lawyers’ jobs and lives in 1870 were not very different from what they had
been at mid-century. Lawyers’ numbers (ca. 40,000 in 1870) in proportion
to population were about the same. Only fifteen states required any formal
preparation for admission to the bar, such as a cursory oral examination or
three years of apprenticeship. Only about 1,600 lawyers, or 3 percent of the
bar, had attended a law school, usually for one or two years at most. Nearly
all lawyers were in private practice, and they usually practiced alone or in
two- to three-person partnerships. The profession was highly stratified and
its incomes widely dispersed. At the top, lawyers grew wealthy from retainers
from merchants, manufacturers, banks, insurance companies, and especially
from railroads. But even elite lawyers were rarely specialists; they
still made their public reputations as trial lawyers, representing prominent
clients in divorce, will, and libel contests and acting for murderers in criminal
cases and for tort plaintiffs in civil suits against businesses. As they had
since 1800, a small corps of elite lawyers virtually monopolized practice
before the highest state and federal courts. Lawyers also dominated high elective and appointive office: two-thirds of U.S. presidents, senators, governors, and top executive appointees were lawyers, as of course was the entire judiciary above petty misdemeanor and probate courts. At the bottom
of the profession lawyers could not make a living at law alone; they scraped
by on a practice of miscellaneous small pickings from individual clients –
debt collection, real estate deals and disputes, writing and probating wills,
criminal cases – combined with non-legal business on the side.
Reform movements of small groups of elite urban lawyers would eventually
build the institutions that organized the modern legal profession. They
began with the formation of bar associations in major urban centers – New
York, Boston, Chicago, St. Louis, and Cleveland. New York’s experience
was copied most extensively and widely. In 1870 a group of elite lawyers
signed a “call” to form a city bar association, composed of the “best men”
of the bar – about 10 percent of the city’s lawyers at the outset. The immediate
provocation was a series of scandals. Boss Tweed’s Tammany machine
controlled the election of several state court judges, who at the machine’s
bidding immunized its associates from criminal prosecution, used their
patronage powers to hire its cronies as receivers and court officers, and were
suspected of taking bribes from litigants in the struggle between Jim Fisk
and Jay Gould on one side, and Cornelius Vanderbilt on the other, for
control of the Erie Railroad. The new bar association’s aims were to purge
the bench of corrupt judges, to take a leading role in reforming judicial
elections by nominating capable and honest judges and lengthening their
terms to make them more independent of party bosses, and to indict Boss
Tweed. Interestingly, however, many of the lawyers involved in organizing
the city bar association were themselves deeply implicated in the Tweed-
Erie scandals; they had represented Fisk, Gould and the Erie Railroad, and
similar clients. David Dudley Field, who barely escaped censure by the
new city bar association for his work for the Erie Railroad, was later one
of the organizers of the American Bar Association (1878), as well as a crusader
for removing corrupt judges from the bench. Evidently these lawyers
were trying to address the conditions of their own degradation by imposing
practice standards and conditions that would limit their clients’ and their
own opportunities for corruption.
The same lawyers formed the backbone of general Mugwump and Progressive
reform movements organized to enact anti-corruption legislation,
create a merit-based civil service removed from political patronage, support
non-partisan or “Fusion” candidates for state and local office, and seek
electoral and jurisdictional reforms that would reduce the influence of populist
and machine politics and restrict working-class voting. Their major
effort was to enhance the authority and jurisdiction of federal courts and of
state constitutional courts and to establish expert non-partisan administrative
commissions, such as Charles Francis Adams’s Railroad Commission
in Massachusetts and former Judge Thomas M. Cooley’s federal Interstate
Commerce Commission – all seen as sources of neutral law and administration
above special interest politics. Anglophile in cultural pretensions, the
reformers coveted the tight guild organization, social standing, and clubby
solidarity of English barristers, the elite professionalism of the English
civil service, and the exalted status of English judges. From the start the
professionalization movements had mixed motives – high-minded civic
reform combined with exclusion and scapegoating of ethnic newcomers,
especially Jews from Eastern Europe. As will be seen, they also had mixed
achievements.
Building new professional institutions was a part of this broader agenda
of civic and political reform. The most enduring institutions of the period
were the new bar associations and the new schools of law.
The first bar associations were little more than social clubs of “the best
men.” By 1916 there were more than 600 bar associations, and they had
developed a fairly consistent and uniform agenda. The central aim was to
restrict entry to the legal profession, first by requiring passage of a bar
examination and later by raising educational standards to graduation from
law school and at least a high-school degree before that. These were high
barriers: only 2 percent of Americans had a high-school degree in 1870
and only 8.6 percent in 1910; as late as 1940 only 6 percent had a college
degree. The bar associations also sought to close down alternative routes
to practice, such as apprenticeship, the diploma privilege (the admission of
graduates of a state’s law schools without examination), and especially the
part-time night law schools proliferating in the cities. The night schools
were the quickest route into practice for immigrant lawyers; by 1915 they
turned out almost half the total number of new lawyers.
The spearhead of the restrictive efforts was the very unrepresentative
American Bar Association – in 1910 a group of 3,700 mostly big-city
lawyers comprising just 3 percent of all lawyers (by 1920 still only
9 percent). The bar's elites looked enviously at the medical profession,
which in response to the Carnegie Foundation’s Flexner Report (issued
in 1910) had shut down all but the few American Medical Association-approved
schools as the path to a medical license. The ABA’s Root Report
of 1921 spelled out the same goal: to persuade licensing authorities in all
the states to restrict entry to applicants who satisfied the ABA’s law school
and pre-legal educational requirements.
The restrictionists partly succeeded and partly failed. In the long run
they won the battle for the bar exam and formal educational credentials.
They gradually drove out admission through the diploma privilege and
apprenticeship in almost every state; introduced written bar exams; and, in
states with large cities and immigrant populations, reduced bar pass rates.
By 1935 only nine states limited entry to graduates of ABA-approved law
schools, by 1937 twenty states did so, and by 1979 that number increased
to forty-six states. By 1949, 85 percent of new lawyers had some law school
training. But the elites failed to close down the part-time night schools,
which flourished and multiplied to the point of graduating over half of all
new lawyers until the Depression killed most of them off. Suffolk Law School
in Boston enrolled 4,000 students in 1928. By 1946 the ABA had induced
every state to exclude non-citizens, though this ban was eventually struck
down by the U.S. Supreme Court in In re Griffiths (1973). And the nativist
project to cleanse the bar of what Henry S. Drinker, a prominent legal
ethicist, called “Russian Jew boys . . . up out of the gutter . . . following the
methods their fathers had been using in selling shoe-strings and other merchandise,”
failed completely. New Jewish and Catholic immigrant lawyers
flooded into the profession. In New York City, most dramatically, Jewish
lawyers rose from 26 percent of new bar admissions between 1910 and
1920 to 80 percent between 1930 and 1934, stabilizing thereafter at
50 percent.
In a 1920 report on legal education for the Carnegie Foundation, Alfred Z.
Reed had recommended keeping the night schools open, but using them to
train the lower corps of a formally differentiated bar. The journeymen would
do personal injury work, divorces, debt collection, and the like. An upper bar
of university law-school educated counselors would retain practice in higher
courts and complex corporate transactions.1 Bar leaders like Elihu Root
and Harlan Fiske Stone indignantly rejected Reed’s report: they wanted
a unified bar, but without its polluting lower half. They succeeded only
in restricting the foreign born and their children to the lower rungs of
the profession, not in keeping them out. Yet over time, the requirement
for years of higher education – four of college, three of law school by the post-World War II period – coupled with steeply rising tuition costs after the 1970s, undoubtedly narrowed the class background of lawyers; in 1969 only one-fifth came from families in the bottom 70 percent of occupations.

1 Alfred Z. Reed, Training for the Public Profession of the Law (New York, 1921).
Though the bar could not control the ethnic composition of the profession,
its requirements of educational credentials and bar exams played some
part in restricting the number of new lawyers. The proportion of lawyers to
the U.S. population was remarkably stable for three-quarters of the century.
It rose between 1870 and 1900, from 1.07 per thousand to 1.5 per thousand
(from 40,000 lawyers to 115,000), fell back to 1.16 per thousand in 1920,
expanded again somewhat in 1920–30 to 1.35 per thousand, and then contracted
in the Depression and wartime to 1.21 per thousand in 1960. In
the 1970s, as we will see, controls on entry collapsed and the numbers of
new entrants exploded.
The elite bar worried enough about the threat to professional standards
and values from Jews and Catholics to try to limit their entry. Its own and
the dominant society’s gender and racial customs kept women and African
Americans down to derisory numbers until the 1970s. The bar in 1900 was
exclusively white and male with token exceptions: There were about 730
African American lawyers in the entire country and about 1,000 women
lawyers. The ABA refused to admit African Americans to membership until
1943; they formed their own professional organization, the National Bar
Association, in 1925. No Southern school after Redemption – with the
prominent exception of Howard University Law School in Washington,
D.C. – would admit African Americans before 1935; several states later
opened all-black law schools simply to forestall integration orders.
Women fought their own long battle for admission to practice. In the
most famous challenge to state laws excluding women from practice, Myra
Bradwell of Illinois argued that such laws abridged the privileges and immunities
of citizens to choose their professions. The Supreme Court rejected the
claim in 1873, upholding discriminatory licensing laws as a valid exercise
of the police power. Justice Joseph Bradley in a concurring opinion said that
the “paramount destiny and mission of women are to fulfill the noble and
benign offices of wife and mother” and they were thus “unfit . . . for many
of the occupations of civil life.”2 Some state courts disagreed, however, and
between 1869 and 1899, thirty-five states and territories, often under pressure
from lawsuits, admitted women to practice – even though in most of
them women could not vote or hold office. All but Delaware and Rhode
Island admitted women by 1918.

2 Bradwell v. Illinois, 83 U.S. (16 Wall) 130, 141–2 (1873).

Legal barriers to admission turned out to be the least of obstacles in the path of women to practice. Many schools refused to admit women as students: Harvard held out until 1950. From 1900 to 1930, the largest
numbers were graduated from schools founded specially for women, like
Portia Law School in Boston and the Washington College of Law in the
District of Columbia. Washington College trained women for entry into
government law jobs, which discriminated less than private employers, but
warned graduates they would probably have to start out as stenographers
even after admission to the bar. Male lawyers viewed women as intruders on
a masculine preserve, especially the courtroom. Women’s gentler natures
would be damaged by the rough and tumble of adversary combat and the
vulgar realities of crime and civil strife, or else they would damage the
cause of justice by undermining it with sentimentality or using feminine
wiles to seduce juries. Judges and lawyers treated women with undisguised
hostility.
Even when young Jewish men – and, much more rarely, women and African Americans – made their way onto the first rungs of the meritocratic
ladder – elite law schools, law review, and high class standing – almost
none of them, until the 1970s, were ever hired at major law firms. The stories
are legendary – the future Supreme Court justices Ruth Bader Ginsburg
and Sandra Day O’Connor, the African American lawyers Raymond Pace
Alexander and William T. Coleman, Jr., all stars of their respective law
school classes, were turned away from every law firm in the cities where
they first applied. Between 1890 and 1920, all African American lawyers
admitted to the Philadelphia bar practiced alone or in all-black firms or
worked for the government.
Indeed the exclusion of African Americans and women from the upper
bar was so taken for granted before the 1970s that it hardly raised an
eyebrow except among those who suffered from it. The most glaringly
visible exclusion, because the number of affected lawyers was so large, was
of Jewish men. A 1939 report on Jewish lawyers in New York practice
found that they made up more than half of the total number of lawyers
in the city, but few of them had jobs in corporate law firms, either at the
beginning or end of their careers. As late as 1960 Jerome Carlin’s study of
social stratification in New York City’s bar concluded that “a Jewish lawyer
who achieved high academic standing (that is, was selected for staff of law
review) in an Ivy League school has no better chance of being in a large firm
than a Protestant lawyer who did not ‘make law review’ and who attended
a non-Ivy League school.”3 The exceptions mostly held jobs in firms that
served specifically Jewish clienteles, such as banking houses and department
stores, and the occasional liberal firm founded by a mixture of Gentile and
Jewish partners, such as Paul Weiss Rifkind Wharton & Garrison.
3 Jerome Carlin, Lawyers’ Ethics: A Survey of the New York City Bar 30 (1966).
Blocked from the conventional pathway to success – big-firm transactional
practice on behalf of corporate clients – what did marginal lawyers
do to advance in their profession? Their motto might have been, “If you
can’t join ‘em, sue ‘em.” The usual choice of occupation was litigation representing
the other side from the elite bar’s corporate clients. This was
often simply routine personal injury work – tort plaintiffs’ suits against
railroads and streetcar companies or worker’s compensation claims – but
could also be fairly complex litigation, such as derivative suits or proxy
fights against corporations. Labor law was often a Jewish specialty as well,
attracting lawyers from immigrant socialist families to the workers’ cause.
Women were steered away from the courtroom: a survey of 1920 found
that most women thought their best opportunities were in office practices,
such as trusts and estates, domestic relations, real estate, and social welfare
law. In fact, they were mostly confined to general office practice. As will be
seen, Jewish, African American, and women lawyers also dominated what
we now call public interest and cause lawyering.
Ethics and Discipline
Prominent among the historical ambitions of the newly organized profession
was the development of ethical standards and disciplinary machinery to
improve the ethics of lawyers and judges and to police or expel the deviants.
At the start the reformers took a broad view of the offenders, targeting
their own kind as well as immigrant parvenus. Lawyers debating the first
Canons of Ethics chastised corporate lawyers for tutoring wealthy clients
in how to skirt or evade the law. The Boston Bar Association, for example,
drafted comments for the ABA’s 1908 Ethics Committee arguing that it
was “a scandal to the profession that unscrupulous businessmen can find
able lawyers who devise or perfect schemes for evading the law, for imposing
on investors, and for working injuries to the public; who assist in the work
of improperly influencing legislatures and city councils; and nevertheless,
contrive to maintain a high standing among their brethren. We think it
is the duty of the bar to hold to obloquy and contempt lawyers who thus
prostitute their calling."4

4 Lucien Alexander, Memorandum for Use of ABA's Committee to Draft Canons of Professional Ethics (Chicago, 1908), 123.

Bar association speakers and writers on ethics delivered hundreds of jeremiads between 1890 and 1920 lamenting the increasing commercialization of the bar and its growing dependence on corporate clienteles; they continued to hold out the ideal of the lawyer as an independent, objective advisor. As a practical matter, however, the new grievance committees of
the elite bar associations focused their crusades almost entirely on lower tier
attorneys, the personal injury plaintiffs’ bar. Elite lawyers always disdained
contingency fee arrangements as “mere ventures . . . no better than a lottery
ticket,”5 but could not regulate them without taking on the principle of
free contracting between lawyer and client. With increasing frequency in
the 1920s and 1930s, bar committees disciplined or disbarred lawyers for
“ambulance chasing,” soliciting clients by going to hospitals and funerals, or
using policeman and doctors to refer clients involved in accidents. Defense
practice in personal injury cases was actually quite as seamy. Companies sent
agents to homes and hospitals to get victims to sign releases for cheap settlements. Railroad
legal departments bribed witnesses or sent them out of town to defeat
lawsuits. Meanwhile, corporate lawyers solicited clients on golf courses and
in downtown city clubs. But high-end lawyers almost entirely escaped the
notice of disciplinary committees, whose mission seemed increasingly to
scapegoat low-end lawyers for the ethical failings of the profession.
As bar associations gradually became less gentlemen’s clubs and more
inclusive and heterogeneous, the bar’s disciplinary machinery, never very
effective, decayed into insignificance; by the 1970s, more than 90 percent
of complaints were dismissed with little or no investigation, and aberrant
lawyers were usually reprimanded, but rarely disbarred or suspended except
if convicted of a felony or the outright theft of client funds. Bar committees
virtually never went after major law firms or their partners, even after
egregious public scandals. By 1980, as will be seen, outside agencies were
playing more important roles than the bar in policing misbehavior.
Compared to other nations’ legal professions, the American legal profession
has always stressed lawyers’ duties to their clients over duties to the
courts, legal system, third parties or the public interest. As late as the 1980s,
lawyers’ rhetoric continued to celebrate the contrasting ideal of the lawyer
as a high-minded independent counselor as well as an adversary advocate or
hired gun who steers his client in the paths of legality and warns of adverse
consequences if the client strays. Yet as a practical matter the bar’s ethics
rules and informal norms aligned lawyers’ interests almost entirely with
those of clients and – most of all – other lawyers. Successive versions of the
bar’s ethics codes, such as the ABA’s Model Code of 1969 and Model Rules
of 1983, made fidelity to clients mandatory; lawyers should keep quiet even
if the client were about to commit a crime or fraud, unless they believed the
criminal act was “likely to result in imminent death or substantial bodily
harm.”6 Duties to the courts remained vague and mostly unenforced; duties
to the public were hortatory and optional.
5 Thomas M. Cooley, “The Contingent Fee Business,” Albany Law Journal 24 (1881), 26.
6 ABA Model Rules of Professional Conduct, Rule 1.6 (1983).
Judges and Prosecutors
Since their inception the bar associations sought to play a key role in the
selection of judges, going against political party machines that preferred to
keep judgeships as patronage rewards for loyal party service. Appointment
or (as in most state systems) election to judgeships remained a reward for
the politically well connected. Bar associations persistently tried to insert
themselves as official filters or endorsers of candidates in state and federal
procedures for nominating judges. In some states, starting with California
in 1934 and Missouri in 1940, they were able to get “merit” selection
systems adopted: a special commission would nominate a list of candidates
for judicial vacancies and the governor (or a commission) would make short-term
appointments from the list, after which the judge was subject to a
retention election. This system basically supplemented party politics with
bar politics, a contest among lawyers representing different client interest
groups. Whether appointed or elected, partisan, non-partisan or “merit,”
state court judges tended to stay in office for long terms. The ABA was
formally consulted on nominations for federal judgeships from 1952 until
2001, when President George W. Bush discontinued the practice. The bar's
influence tended to be conservative – prosecutors and corporate lawyers, for
example, rather than criminal defense, plaintiffs’ personal injury, or labor
lawyers were consistently favored for federal judgeships – but with probably
generally positive effects on competence and honesty.
But the bar’s influence was limited. In Chicago it was estimated that from
1900 to 1950 more than 60 percent of the party nominees for municipal
courts were not endorsed by the city bar associations. In any event, bar association
lawyers also needed the favor of sitting judges and could not be too
critical. Since the 1980s segments of the bar – usually trial lawyers representing
plaintiffs on one side, and lawyers representing corporate defendants
and insurers on the other – have turned some state judicial electoral contests
into the rawest kind of interest group politics, funneling campaign
contributions to candidates to purge the bench of judges who issue rulings
unfavorable to their clients and causes.
In the criminal process the important state official was not the judge,
but the public prosecutor. In cities run by political machines, the machine
picked the district attorney and his assistants as well as the judges. A
caustic 1929 study of the office found that prosecutors tended to be very
young men (in Missouri, between 25 and 29 years old) who took the job for
a few years to get the publicity, contacts, and experience to prepare them
for careers in private practice or politics. All lawyer members of Congress
in 1914, 1920, and 1926 and all lawyer governors in 1920 and 1924 were
former prosecutors. “The office itself is unprofitable and to remain in it long
is to create the unpleasant impression that the incumbent is unable to make
a living at the practice of law.”7
7 Raymond Moley, Politics and Criminal Prosecution (New York, 1929), 80.
Despite this unpromising process of selection for his office, the prosecutor
exercised enormous power. He had the discretion whether to accept the
results of police work, whether to charge arrested suspects, and with what
crimes to charge them. Grand juries, originally a check on the prosecutor’s
power to indict, had become his creatures. Since nearly all cases ended
in plea bargains, neither trial nor appellate judges reviewed his conduct.
The temptation to use this discretion for political ends was irresistible: to come
down hard on suspects when the crowd was clamoring for their blood, to
punish political enemies, and to go easy on the well connected. Some big-city
district attorneys, however, like Frank Hogan of New York, took advantage
of their long-term political favor to build professional prosecutors’ offices
with career bureaucracies, relatively insulated from immediate pressures of
public opinion. The prosecutor’s office continued for the rest of the century
to be one of the best springboards into public office, as the careers of Thomas
E. Dewey and Rudolph Giuliani attest, as well as into judgeships.
Market Control
Skeptics about professions claim that they are primarily economic cartels,
designed to create and protect a guild monopoly. The organized bar of
the twentieth century certainly did its share to prove those skeptics right.
State bars put in place an impressive array of arrangements that restricted
competition among lawyers and between lawyers and other professions. Proponents
invariably argued that such arrangements, like statutes forbidding
the “practice of law” by corporations or rules forbidding lawyers to advertise
their services or solicit clients, were needed to preserve law as a high-minded
profession against commercial money-grubbers who would turn it into a
mere business. Starting in the 1920s, “unauthorized practice of law” committees
fought fierce turf battles with encroaching neighbors – accountants
giving tax advice, “administrative” specialists handling cases before agencies,
collection agencies, trust companies writing wills and administering
trusts, title insurers and real estate brokers handling property sale closings,
unions and automobile clubs offering group legal services, and most
recently paralegals offering help in preparing legal documents. Like the
bar’s efforts to restrict access, these fights met with uneven success. Aside
from signing pleadings and appearing in court, there was never any agreement
on what constituted the “practice of law” that lawyers were entitled
to monopolize. Turf battles often led to treaties marking off boundaries of
practice or allowing peaceful coexistence. The bar enjoined or prosecuted
competition from lay providers of such services as divorce advice, even in
markets that lawyers did not serve. But court decisions favorable to civil
rights legal organizations and threats of antitrust enforcement persuaded
the bar to end its hostility to group legal service plans with “closed” panels
of lawyers (the legal equivalent of Health Maintenance Organizations).
Access to Legal Services
In theory justice is a universal public good: equality before the law requires
access to the law, which in turn requires access to lawyers. Another public
good, health care, began to move away from direct patient financing in
the 1930s: hospital services were provided through non-profit charitable
hospitals, doctors’ services were paid for by employer-sponsored health insurance,
and the federal government subsidized a growing share of total medical
costs (more than 50 percent by 2000) for veterans, the elderly, and the poor,
as well as for medical research and education. Yet lawyers in the United States
were and still are mostly paid out of pocket by clients. The non-profit
sector has always been tiny and the government contribution (to criminal
public defender programs and civil legal services) negligible (less than 1
percent of legal fees). From the late nineteenth century, lawyers for plaintiffs
with personal injury claims worked on contingency, financing the costs of
suit themselves, taking 30 to 40 percent of damage awards if they won
the case and nothing if they lost. For a few types of lawsuits, statutes
awarded attorney’s fees to successful parties. For most, the usual rule was
the “American Rule” that parties pay their own fees and costs. Most people
could not afford much of a lawyer’s time, and poor people, who were often
in the worst trouble with the legal system, could not afford any. For most of
the century extending access to legal services relied on the sporadic efforts of
a few maverick reformers such as the Boston lawyer Reginald Heber Smith,
whose pioneering Justice and the Poor (1919) became the Bible of the legal
aid movement. The organized bar fiercely resisted the delivery
of legal services through means other than fee-for-service or charity, and
the great mass of lawyers was indifferent.
Before the revolution in rights pioneered by Warren Court decisions
of the 1960s, the ordinary mass of people suspected, accused, and convicted
of crimes either pawned what possessions they had to buy a plea
bargaining agent or, if too poor (as most were) even for that, were made
objects of casual charity or simply became invisible to the legal profession
altogether. Throughout the century some states assigned court-appointed
counsel to defend criminal cases or to brief and argue appeals in forma pauperis;
but except in the rare localities where a professional service was created to
handle these cases, such assignments tended to fall on the most marginal
courthouse loiterers, unable to attract clients by other means. Free lawyers
for felony defendants were not required at all in many states until the
Supreme Court’s decision in Gideon v. Wainwright (1963), and even after
that criminal defender programs were funded stingily out of the fear that
suspects would manipulate the system to escape just punishment. In some
states defense lawyers would earn derisory fees even in capital cases. Severely
underfunded and overburdened, too busy to investigate cases or take them
to trial, defenders were reduced to high-volume plea bargaining. Though
the quality of representation was often abysmal, courts would not reverse
convictions for “ineffective assistance of counsel” even if the defense lawyer
was visibly incompetent, drunk, drugged, or even fast asleep for most of
the trial.
As with criminal, so with civil practice for the poor. Elite lawyers in major
cities founded legal aid societies in the early part of the century with the
usual Progressive mixture of philanthropic and social-control motives: to
help give access to justice to the poor, to discourage disfavored claims, and to
displace immigrant legal aid societies that were considered over-aggressive
in bringing personal injury suits. Legal aid programs traditionally refused to
take divorces, bankruptcies, or personal injury accident cases and insisted on
conciliatory approaches to eviction and debt collection. Yet while pursuing
cooperative instead of confrontational approaches to their clients’ cases,
legal aid leaders maintained their identities as lawyers – as distinct from
social workers or social reformers – addressing the strictly legal problems of
one client at a time, rather than the family and work situations or structural
conditions that had caused them.
Most lawyers simply took no interest in establishing or contributing to
legal aid. In 1950, only 9 percent of legal aid funding came from lawyers;
in 1963 only about 400 legal aid lawyers were available nationwide. Bar
regulations contributed to the problem of unequal access. The upper bar
resisted the contingent fee (still not allowed in criminal cases and divorces)
and, for most of the century, group legal services. As we have seen, the bar
prevented entry into markets monopolized by lawyers – even markets they
left unserved – by lay providers or paraprofessionals.
Other industrial societies by mid-century had enacted state-funded systems
of legal aid for indigent clients. Britain led with the Legal Aid and
Advice Act of 1949; by the 1970s, more than half of all British barristers’
income came from state-funded legal aid. In the United States, by
contrast, organized law, like organized medicine, battled fiercely against
government-funded services (fearing the controls that would come with
them) until 1965, when the ABA and local bars switched to strong and
effective support for the federally funded legal service programs initiated
as part of President Lyndon Johnson’s War on Poverty.
The profession did little better at representing unpopular clienteles.
Lawyers’ livings depended on cultivating good business relations with
clients and collegial relations with judges, regulators, and court personnel;
in small communities especially, lawyers could not afford to offend the
local power structure. Such dependencies set severe limits on accepting
even paying clients if their causes were unpopular or their interests adverse
to regular clienteles. Railroads would give free passes or pay retainers to
all the able lawyers in towns along their lines to inhibit representation of
clients injured in railroad accidents; they would insist on loyalty not only
to a particular railroad client but also to railroad interests generally. In most
of the Jim Crow South a white lawyer would only be available to represent
an African American criminal defendant if appointed by a court. Even then,
if the crime charged had a white victim, and especially if it were rape, his
defense had to be perfunctory. Southern white lawyers could not take on
civil rights cases for African Americans without risking the loss of all their
clients. During the Red Scare of the 1950s, several bar associations passed
resolutions discouraging or even forbidding members to represent communists.
Other associations affirmed the principle that every person, however
vile, deserved representation, but in practice did nothing to ensure lawyers
would be provided for communists; usually they were not.
Practitioners, Scholars, and Legal Science8
8 Chapter 2 in this volume tells the history of modern legal education. My narrower concern
here is with the schools’ relation to the bar and its professionalization projects.
The year 1870 was an annus mirabilis for new professional institutions – it
was the birth year of the revitalized law school and of new bar organizations.
Legal professions have always sought to justify their privileges, monopolies,
and aspirations to high social standing on the ground that law is a learned
mystery. In America elite lawyers from the Revolution forward sought
to persuade their fellow lawyers and the public that law was a “science”
demanding special training and lifetime study. Antebellum legal science
was a medley of humanistic learning in the classics and ancient and modern
history, technical learning in the common law reports and treatises (especially
the law of property and pleading), and – to train lawyer-statesmen
as well as attorneys – constitutional, comparative, and international law
and political science. After the Civil War the spectacular successes of natural
science, like the public health discoveries applied to eradicate age-old
diseases, gave lawyers along with many other occupations the platform to
argue that scientific practice required university-based training. In 1870
President Charles W. Eliot of Harvard appointed Christopher C. Langdell
as dean of Harvard Law School to carry out a program of scientific legal
education. Langdell was a refugee from New York City practice who had
conceived a “hearty disgust for the means and methods by which business,
place and reputation are . . . gained” in New York City.9 He retreated to
the higher ground of a New England university to institute a program of
long-term reformation of his fallen vocation.
Langdell’s idea of legal science was more up to date in some ways, but
also narrower and more parochial than the antebellum version. In his view, the mass
of Anglo-American law could be generalized into a system of harmonious
principles. “[L]aw is a science, and . . . all the materials of that science are
contained in printed books,” the common law reports of appellate cases.10
This was an exclusively private law curriculum: it expelled from the law
school both the humanistic liberal arts and the public law and lawyer-statesman
components of the old learning; it taught nothing of legislative
policy. Students would learn how to induce the principles by means of the
“case method,” a Socratic dialogue between teacher and students primed
with the close reading of cases. Under Langdell and his successor James Barr
Ames, Harvard instituted a three-year sequenced curriculum, progressively
stricter pre-legal education for admission (a B.A. by 1895, well before any
other law school), regular examinations, and a high flunk-out rate for those
who failed them. Langdell imagined the school would prepare graduates for
careers as “counselors,” something like the English barrister class, an elite
corps of lawyers specializing in appellate advocacy. Harvard’s idea of the law
professor’s job, however, looked to Germany rather than England. English
law teachers of that period had to struggle against an insular, ingrown,
intellectually conservative profession of judges and lawyers who put no value
on a scientific training in law. In the United States, the emerging bar groups
were too weak to dictate to the schoolmen, whose model was the German
full-time professoriate. Freed from the time demands and client pressures of
practice, American professors could do original research to uncover the true principles
underlying unruly masses of case law, produce monumental commentaries,
serve as authoritative advisers to judges and legislatures, draft model codes,
and gradually produce a truly national and even transatlantic common law.
9 James Coolidge Carter, letter to Charles W. Eliot, Dec. 20, 1869 (C. W. Eliot Papers,
Harvard University).
10 C. C. Langdell, “Harvard Celebration Speeches,” Law Quarterly Review 3 (1887), 124.
Parts of this program succeeded beyond its founders’ wildest dreams,
though in oddly mutant forms. The idea of teaching law as a science of
principles was abandoned quickly and casually, but the case method itself
flourished, now advertised as a practical method of teaching “how to think
like a lawyer.” Raising admissions requirements at first caused Harvard’s
enrollments to dive, but they recovered, and other schools (Pennsylvania in
1916 and Stanford, Yale, Columbia, and Western Reserve in 1921) followed
suit. The Harvard template – increasing pre-legal educational requirements,
the three-year private law curriculum, the case method, and the
full-time law professor – spread to other elite schools between 1895 and 1925 and
eventually to virtually every full-time university-based law school in the
country.
Yet, the cadres of “counselors” the new law schools were supposed to
train never materialized. As the Harvard model was gaining ground, few
graduates of elite schools tried or argued cases; they worked in the new corporate
law firms on complex deals such as mergers and reorganizations and
kept clients out of court. Law schools openly competed to attract students
who would be hired by these firms – not by teaching anything of direct
relevance to the new practice, but by certifying their graduates as culturally
suitable products of liberal arts colleges and survivors of a rigorous boot
camp of Socratic inquiry, class ranking by exam grades, high attrition rates,
and, beginning in the 1880s, service as law review editors, which demanded
sustained attention to tiny points of detail. Law firm partners responded by
subsidizing the schools and by hiring their graduates (at least the white
Protestant ones) as associates.
The schoolmen’s program and the bar’s largely converged in the first
decades (1870–1920), though rifts gradually opened between them. They
shared an interest in raising the profession’s intellectual standards and exclusiveness
by credentialing requirements and enhancing the social status and
cultural authority of the upper bar by alliance with university-based science.
The schools fought to maintain the diploma privilege, the bar to end it; the
bar mostly prevailed. The Association of American Law Schools, founded in
1900, fought side by side with the ABA to require college plus law school
for admission to the bar. Law professors never achieved the authority of
German professors as lawgivers and jurisconsults, but they took the leading
role in technical law reform, drafting uniform codes and other model
legislation like the Uniform Sales Act (1906; Samuel Williston of Harvard,
chief reporter), the Federal Rules of Civil Procedure (1938; Charles E. Clark,
Yale), the Uniform Commercial Code (1958–; Karl Llewellyn, Columbia),
and the Model Penal Code (1952; Herbert Wechsler, Columbia). The great
academic treatises – Williston on Contracts (1920), Wigmore on Evidence
(1923), and Scott on Trusts (1939) – and law review articles were marketed
to the bar in a deal whose implicit terms were that the writers would collect
all the cases and arrange them in helpful categories, and the lawyers
and judges in turn would rely on and propagate their interpretations. The
West Publishing Company’s National Reporter System, originating in 1876–
87, with its digests and organization of case law doctrines into law-finding
categories tagged with keywords such as “Saws, Cutters and Grinders,”
performed similar functions in more intellectually modest ways.
The high-water mark of academic-practitioner collaboration on private
law science came with the founding of the American Law Institute in
1923, an association of elite law professors, judges, and practitioners, with
the goal of reducing unnecessary “uncertainty” and “complexity” in the law.
Uncertainty resulted from lawyers’ lack of agreement on the fundamental
principles of the common law, “lack of precision in the use of legal terms,”
“conflicting and badly drawn statutory provisions,” “the great volume of
recorded decisions,” and “the number and nature of novel legal questions.”
Complexity resulted from the “lack of systematic development” of legal
science and variations among the many jurisdictions of the United States.11
The ALI’s mission was to promote the creation of a national private law
through Restatements of the main common law fields – contracts, agency,
trusts, torts, and so forth – distilling cases from every jurisdiction into
propositions embodying the “best” views of law, in the hope that state courts
would use them to direct the future of the law. For generations crowds of
dark-suited lawyers gathered in the Mayflower Hotel in Washington to
debate such questions as whether, if Uncle promised Johnny $5,000, and
Johnny spent $1,000 in reliance on the promise, Johnny could sue Uncle
for the full $5,000 or only the $1,000.
The school-bar alliances were always somewhat precarious because, as
each sector evolved, their ideas and interests often diverged. Harvard and its
epigones might try to expel public law from their purview, because legislation
and policy were not readily taught from casebooks by the case method.
However, judges, bar leaders, and government lawyers could not ignore it.
They were confronted with a rapidly growing volume of state and federal
regulation, administrative agencies to implement it, and constitutional challenges
to such regulation under state and federal (Fourteenth Amendment)
Due Process Clauses, and the federal Commerce Clause. Indeed many
elite business lawyers from the 1880s to the 1930s found an important
source of cultural capital and professional identity by identifying with the
conservative majority of the U.S. Supreme Court and the body of “classical”
constitutional law it had developed and relying on that body of law as their
chief bulwark against the New Deal’s revolution in government.
11 Report of the Committee on the Establishment of a Permanent Organization for the
Improvement of the Law Proposing the Establishment of an American Law Institute
(1923).
By the 1920s, however, most of the legal scholars interested in public law
and policy were drifting leftward of their profession. Affiliated with various
Progressive movements, they were developing alternatives to the classical
vision, such as the “sociological jurisprudence” of Roscoe Pound and
the “legal realism” and “legal-economic institutionalism” of scholars concentrated
at Columbia and Yale. These movements were highly critical of
classical legal science, both the private law doctrinal science of Harvard-influenced
schoolmen and the public law science of the conservative bench
and bar. The Progressives argued that legal science failed to describe how
courts actually decided cases, concealed an implicit and often reactionary
policy agenda, and ignored the effects and results of legal decisions (“law in
action” as opposed to “law on the books”). They recommended that lawyers
and legal scholars invest heavily in social science, the better to understand
the legal system and to formulate policy. Their influence reached a peak
in the New Deal, when Progressive scholars like Felix Frankfurter, James
M. Landis, Jerome Frank, William O. Douglas, and Thurman Arnold were
recruited en masse to draft new legislation and head federal agencies. Veterans
of the New Deal repopulated law faculties after the war, ensuring
that the legal academy remained somewhat more liberal than the business
bar that it continued to supply with graduates and relied on for financial
support.
The interests of academics and practitioners diverged still further in
the 1970s, when legal scholars again moved away from purely doctrinal
scholarship, incorporating theories and literatures from other disciplines,
such as economics, history, philosophy, political science, and literary and
cultural studies into their work. They also hired a few – but very conspicuous
– teachers who were sharply critical of the legal and social status quo.
Ironically one of these disciplinary turns paved the way for a partial rapprochement
of law schools and the conservative bar in the 1980s. A striking
exception to the general liberalism of the major law schools, the lawyer-economists
at the University of Chicago Law School had preached since the
1930s the gospels of unregulated markets and libertarian freedoms. President
Reagan’s election in 1980 brought them out of the wilderness and
into positions of power and influence. Several became federal judges, and
others were appointed to high administrative posts. Free-market foundations
like John M. Olin and Bradley subsidized the teaching of Chicago-brand
economic theories of law in law schools and, through summer seminars, to
law professors and judges. In some fields such as antitrust and regulated
industries, their theories became government policy. The Federalist Society,
an association of conservative judges, officials, practitioners, law teachers,
and students, founded in 1982, rapidly evolved from a debating society to
a powerful national network, with 25,000 members in 2000. In President
George W. Bush’s administration it effectively displaced the ABA’s role as
expert advisor in the selection of federal judges.
The revival of the late nineteenth century had sought to base the legal
profession’s power, prestige, and privileges on its association with legal
science, on the analogy of the medical profession’s alliance with natural
science. The ideal proved elusive. Law had no equivalent cognitive basis,
certainly none with the requisite claim to objectivity. Doctrinal legal science
had achieved real progress in imposing some rational order on legal
fields, but it was savagely mocked as sterile, abstract, indeterminate, and
politically biased by generations of Progressive-realist critics. The Progressives
had similar ambitions for making social sciences such as statistics and
institutional economics into the basis of law practice and administrative
expertise. However, the social sciences proved to be just as dispute-riddled,
politically contentious, and uncertain in their prescriptions. By the 1970s
legal economists were hoping to make their science the new cognitive basis
of the profession. They did succeed in securing its adoption as the lingua
franca of policy analysis in the regulatory bureaucracies and even to
a limited extent in the federal courts. But most practitioners and judges
resisted legal economics’ attempt to displace the traditional, eclectic, multivalued
discourses of legal argument. Law in every field of practice in the
late twentieth century became more technical, more specialized, and more
demanding of sustained investment in learning. But it was never plausibly
any kind of science. Indeed, by the 1970s, as we shall see, the entire project
of professionalization as the founding generation had conceived of it was
under severe pressure, as the material and ideological foundations of the
project eroded beyond recall.
II. LAWYERS AT WORK: 1870–1930
The history of lawyers is of course much more than the history of their
guilds and professional projects. Four major new developments dominated
this period: the founding of big-city law firms to service the nation’s large
corporations, the rise of a plaintiffs’ personal injury bar, the bar’s increasing
specialization and segmentation by clienteles as well as by subject matter,
and the emergence of public interest lawyering.
The Corporate Elite: Organization, Law Jobs, and Social Tasks
Before 1900 the lawyers ranked by the public and their peers at the top
of their profession were rarely exclusively or full-time “corporate lawyers.”
Certainly, a successful lawyer had important business clients: railroads,
financial institutions, insurance companies and industrial firms. However,
he was also a courtroom lawyer who tried murders, divorces, and will contests
as well as commercial cases; who argued appeals before the highest
federal and state courts; and who took time off from practice to serve in
high elective or appointive office. Typically he practiced in a small partnership,
outside the management hierarchies of his principal clients.
The first exceptions to the pattern were a few men who rose to prominence
as full-time general counsel for emerging giant corporations, beginning
with the railroads. These jobs held enough prestige and pay to persuade
even distinguished judges like William Joseph Robertson of the Virginia
Supreme Court and federal judge G. W. McCrary to leave the bench to
become railroad counsel. Railroad counsel in turn sometimes rose to become
presidents of their roads, as did Chauncey Depew of the New York Central,
who was also a U.S. Senator; Frederick Billings of the Northern Pacific; and
George H. Watrous of the New Haven. General counsel directed the efforts
of the hundreds of local lawyers retained by the railroad in the towns along
its lines, who searched real estate titles, obtained rights of way, and fought
off or settled suits for grade-crossing accidents or damage to livestock. He
was foreign minister and war minister for his client, negotiating deals with
smaller or competing lines to build consolidated systems and taking them
over if they resisted. He directed strategy against striking unions, obtaining
injunctions and hiring deputies to enforce them. He was active in the state
and federal capitals, arguing before utilities commissioners to set high rates
and before courts to reverse the commissions, and lobbying for liberalized
corporation laws and special favors.
By 1900, however, the pinnacle of success at the bar was being redefined
as partnership in an independent multispecialty firm that served exclusively
corporate clients. Paul Cravath’s New York City firm pioneered the model of
the new firms as meritocratic (though largely restricted to white Protestant
males) career hierarchies of associates recruited from high-ranking graduates
of elite law schools, who were paid a salary, competed with one another
for partnership, and as partners formed a lifetime membership in the firm.
Most new firms were in New York, but the model spread to other cities. By
1915 the five largest American cities had 29 firms with 7 or more lawyers;
by 1924 they had 101.
The big law firm, and with it the modern career of corporate lawyer, was
born of the Big Deals, Big Cases, and increasingly Big State of the industrial
era. The agreements to build giant consolidated enterprises, first railroads
and then other sectors such as oil and steel, required both highly skilled
and specialized legal work and massive bundles of routine tasks, such as
searching titles for oil leases and complying with the securities laws of all
the separate states. So too did the defense of such enterprises against lawsuits
challenging their very existence, like suits for patent infringements
and antitrust violations. Alongside big business arose the administrative
agencies of the modern state to regulate it, starting with regulation of the
railroads and public utilities. All of this created technical, specialized work
for lawyers and a demand for law offices with the numbers and expertise
to staff a railroad merger or bankruptcy reorganization, defense of a massive
antitrust action, or public utility rate-making hearing, as well as the voluminous
miscellaneous business of large industrial and financial clients. Over
the century, law firms experienced their biggest expansions during merger
movements producing Big Deals, during rises in litigation (especially with other
corporations and against the government), and, above all, during expansions of
the regulatory state. Most of this work was done in offices and boardrooms,
rather than in courts. The most prestigious and lucrative law firm work of
1900–40, for example, was in representing committees of bondholders and
stockholders to draft and negotiate plans for the reorganization of bankrupt
corporations.
Business lawyers did much more than furnish distinctively “legal” services,
such as representing clients in courts, predicting judicial decisions,
interpreting statutes and regulations, and drafting and planning to obtain
favorable and avoid unpleasant legal consequences. They were also brokers
and fixers. Lawyers served as the crucial intermediaries between finance
capital and entrepreneurs. They traveled the world on behalf of businesses
looking to sell bonds and shares in new American ventures and of American
investors such as investment banks looking for profitable foreign ventures.
A law firm usually had a bank for its anchor client: it would steer its manufacturing
or transport clients to the bank, and the bank to those clients.
In New York, law firms brokered deals between the great European and
American merchant and investment banking houses and expanding business
combines. In regional centers lawyers played the same role, linking local
banking and manufacturing clients with national networks of investors and
investments. Lawyers leveraged their positions as executors and trustees and
as directors of client companies, banks, and insurance companies to steer
capital into favored ventures. When lawyers finished brokering deals, they
did the legal work of putting them on paper.
They also leveraged their contacts with state officials. Business lawyers
liked to strike libertarian attitudes, comparing their jobs with the heroic
role of the criminal defense lawyer who protects the liberty of the individual
against the overreaching state. But in fact what most business clients
wanted lawyers to get from the state were favors: concessions, franchises, tax
exemptions, subsidies, regulatory loopholes, monopoly rights, and public
works contracts. Lawyers were natural intermediaries between clients and
the state, because they had the contacts. They had often held office themselves
or knew brothers at the bar in the legislature or administration; they
were more cosmopolitan than business managers who had spent their lives
inside an enterprise. They were among the few Americans of the period
who were widely traveled and spoke foreign languages. William Nelson
Cromwell, co-founder of the Sullivan & Cromwell firm of New York, on
behalf of his client, the (originally French) New Panama Canal Company,
intrigued in the U.S. Senate to defeat the rival Nicaraguan canal route in
favor of Panama; he then helped instigate Panama’s revolution from Colombia
in 1903 and the new republic’s transfer of control of the canal to the
United States.
Ad hoc deal-making expanded over time into the work of building stable
contractual structures among business entities and between them and the
state. The new giant enterprises made long-term investments in constructing
railroad lines or huge plants for assembly-line mass production. Facing
high fixed costs, they sought to stabilize their operating environments by
securing predictable relations with creditors, shareholders, suppliers, distributors,
customers, their labor forces, and governments. The function of
lawyers was to help design, negotiate, and craft the legal instruments to
minimize the risks of instability. Much of this was done through private
contracts that adapted old legal forms such as the real estate trust and mortgage
to securing corporate debt, giving managers the authority to spend
borrowed money flexibly and lenders the legal resources to monitor them.
Law firms were developers and the curators of the lengthy form documents
that together made up a large body of private legislation.
Similarly, lawyers sought long-term stable relations for their clients with
the state. As agents of a major structural transformation of the economy,
they were now concerned to redesign the basic legal framework to accommodate
the new forms of industrial and financial enterprise. This redesign
was less a matter of negotiating specific concessions for particular clients,
though of course that still continued, than of changing the general law so
as to legalize consolidations (as by legislation permitting holding companies),
and securing narrow executive and judicial interpretations of the
antitrust laws and antitrust exemptions for entire industries. Lawyers like
the legendary James B. Dill of New Jersey got corporate law changed to
facilitate centralized control in management: for example, by reducing common
law directors’ and officers’ liabilities, liberalizing standard state law
charter provisions to relax restrictions on corporate powers and capitalization,
and authorizing managers to exercise “business judgment” without
fear of shareholder suits. Lawyers enlisted the state to help suppress militant
labor by pressing the courts to recognize new forms of corporate “property”
in economic relations protectible by injunction and to validate use of the
antitrust laws to prosecute labor conspiracies.
As ad hoc deal-making expanded into stable structure-building, so
structure-building expanded into statesmanship. At the urging of or
through the medium of their lawyers, leading business firms often pursued
a corporatist politics. They pressed for (or acquiesced in) regulatory schemes
that would satiate populist clamor against monopoly while also enforcing
their price-fixing agreements and raising costs of entry and operation to
their small competitors. They sought cooperative relations with antitrust
enforcers who would grant prior clearance to merger plans and with public
utilities commissions that would prove captive and friendly regulators while
staving off pressure for public ownership of power companies and streetcar
companies. They supported Progressive measures like national labor standards
(outlawing child labor, setting minimum wages and maximum hours)
that would remove advantages of competing firms in anti-labor states, for
social wages financed out of general tax revenues that would improve labor
relations without their having to pay for it, and for workers’ compensation
plans that would quiet labor agitation over safety at acceptable cost. They
instituted cooperative accords with labor unions or set up company unions
that could help maintain discipline and contain militancy in return for job
security and high wages and benefits.
Lawyers played a critical part in both designing and staffing such institutional
arrangements, not only in their roles as counsel for particular firms
or trade associations or business policy groups but also as members of civic
associations such as the National Civic Federation, which brought together
business executives and conservative labor leaders, and as lawyers on leave
from practice as officials in city, state, and federal governments. In many
respects their roles in office were their private roles writ large – making
the state, nation, and world a congenial environment for American capitalism.
Eastern corporate lawyers – Elihu Root, Charles Evans Hughes,
James Coolidge Carter, Henry Stimson, Russell Leffingwell, William J.
Donovan, Dean Acheson, and John Foster Dulles – dominated high foreign
policy posts in the first half of the twentieth century. The policies of such
men generally reflected the interests of their business clienteles: a peaceful,
prosperous, and economically reconstructed Europe; the use of military and
diplomatic power to promote stable governments reliably committed to
promoting and protecting foreign direct investment and payment of foreign
debts; and a system of international treaties and arbitration to enforce
transnational contracts and settle international disputes. To be sure, such
lawyer-statesmen were much more than tools for clients: their vision was
often broader, more cosmopolitan, and more farsighted in anticipating that
compromises would have to be made for the sake of industrial peace. They
differed sharply among themselves about policy directions, however.
How did it come about that lawyers performed these state-building
functions? In Britain, Continental Europe, and Japan, state bureaucrats
negotiated the design and enforcement of regulations directly with corporate
managers, rather than through lawyers; private lawyers were rarely
conspicuous public intellectuals compared to economists, journalists, academics,
and literary figures; and they rarely became senior ministers of
state. Even in Germany, where law-trained officials dominated the bureaucracy,
they were lawyers who entered public careers, not “advocates,” who
in Europe tended to be confined to the narrow role of representing clients in
court. In the United States by contrast, the central state apparatus developed
late, only after (and partly as a response to) the emergence of giant corporations.
From the Revolution forward, suspicion and devaluation of public
careers had inhibited the development of an elite corps of civil servants
with high prestige, whereas Congressional government favored patronage
appointees and part-time amateurs in government agencies. By default,
private corporate lawyers did much of the design of the legal forms of state-business
relations that in Europe was done by central bureaucracies, in part
because so many such relations were administered through courts, where
lawyers held the monopoly of practice. Basic private law was court-made
common law; and the highest law was court-declared constitutional law. Lay
advocates competed with lawyers for representation before administrative
agencies, but ultimately all important administrative actions were reviewed
in courts. Courts administered the general body of corporate law and had
to ratify and enforce contracts between corporate shareholders, managers,
and creditors; courts in their equity jurisdiction managed bankrupt corporations
– which at one time included nearly every American railroad – as
receivers and approved or disapproved consolidations; courts even oversaw
much of the regulation of monopoly through the (awkward) forms of criminal
and civil antitrust lawsuits; and they governed labor-capital conflicts
through labor injunctions. Constitutional law – a familiar resource of business
lawyers because of its uses for challenging regulations – supplied the
basic public language for arguing about the distribution of government and
private, federal, and state power and the appropriate limits on government
action; in speaking this discourse, lawyers held the advantage over rival
professions.
This elite took on another major project: building an ideological framework
of legal order, a set of overall structuring, ordering principles (rationalizations,
justifications, inspirational guiding norms) of the legal system.
Lawyers contributed to this project as public intellectuals – judges, officeholders,
law reformers, civic activists, treatise-writers, and bar leaders –
from a variety of public pulpits.
There were actually at least two rival projects or visions of ideal legal
order. The older was the “classical” ideal being perfected as a legal science
of private law principles in the law schools and their articles and treatises
and in decisions of the courts. The private law principles called for strict
enforcement of all contracts, even the most one-sided, such as employment
contracts; tended to limit the liability of companies for industrial accidents;
and were hostile to most collective tactics of organized labor such as strikes
and boycotts. Classical public law, developed out of Due Process Clauses
in state constitutions and the federal Fourteenth Amendment, produced a
rich jurisprudence of constitutional limitations on legislators’ and administrators’
powers to change the ground rules of economic life, which inspired
courts to strike down some “Progressive” social legislation, such as minimum wage,
maximum hours, and federal anti-child labor laws; laws creating
worker’s compensation commissions to replace jury trials at common law;
and laws favoring union organizing.
The challenge to the classical vision came from the Progressive ideal
that began to emerge in the 1890s; was developed by Progressive reformers
and intellectuals; secured beachheads in legislatures, on regulatory commissions,
and even with some influential judges (Oliver Wendell Holmes, Jr.,
Louis Brandeis, Benjamin Cardozo, Learned Hand, and Julian Mack among
others); and achieved its institutional triumphs first at the state and local levels
and then at the federal level in the New Deal. The Progressives criticized
classical law as biased and inadequate to deal with social problems; they
proposed to substitute social-science-based expertise applied by administrative
commissions. Relatively older and more conservative lawyers of the
1890–1940 era, men like James Beck, John W. Davis, and William D.
Guthrie, favored the classical vision, as it gave them a basis in constitutional
principle for fighting legislation and regulation that disfavored their
clients and the sympathetic and conveniently final forum of the judiciary to
make their arguments. But as many leading lawyers denounced the famous
Supreme Court Lochner (1905) decision invalidating state maximum hours
laws for bakers and the New York Ives (1911) decision invalidating state
workers’ compensation laws as applauded them. As we have seen, business
interests and their lawyers were among the driving forces behind much of
the administrative state-building efforts of the early part of the century.
Business lawyers tended to switch back and forth between classical and
Progressive visions as political contexts and client interests changed.
Rise of the Tort Plaintiffs’ Bar
The most dramatic development in the legal practice sector serving individuals,
and certainly the one with the broadest and most controversial social
effects, was the rise and transformation of personal injury tort practice.
Tort practice hardly existed in the United States before the 1880s and
1890s. Injured persons rarely sued. The wounded soldiers of industry were
compensated, if at all, by small payments from mutual benefit societies
or paternalistic employers. The routine expectation of “total justice,” that
someone else could be blamed and should have to pay for accidental injuries,
was not yet widely embedded in the culture. Plaintiffs who did sue usually
either lost or recovered tiny damage awards. Corporate defendants, and
employers in particular, had many effective defenses, such as rules denying
compensation to workers who were injured by fellow employees, who could be
alleged to have “assumed the risk” of injury by taking on a dangerous job,
or who had contributed to the injury by their own negligence. The sudden rise in
tort claims was a response both to the enormous carnage of death and injury
caused by industrial technology – railroads, factory machinery and mining
operations, streetcars, and eventually automobiles – to workers, passengers,
and bystanders, and to the emergence of a mostly immigrant urban bar of attorneys
willing, for contingent fees of 30 to 50 percent of the amounts recovered,
to take on cases for the injured.
Personal injury practice was never for the ethically fastidious. Plaintiffs’
lawyers chased ambulances, hung around hospital rooms and funeral parlors,
hired “runners” and policemen and doctors to refer business, and bribed
witnesses. As we have seen, elite lawyers used their control of bar associations
to discipline the plaintiffs’ bar for “solicitation” and tried to limit contingent
fees and keep out of the profession graduates of the night schools where (until
mid-century) most plaintiffs’ lawyers were trained. Company lawyers also
developed tricks of their own. They sent agents into hospitals to get injured
plaintiffs to sign releases of liability in return for low settlements, smuggled
inconvenient witnesses out of town, and deployed a vast and versatile arsenal
of procedural weapons to delay cases, exhaust adversaries, and move cases
into more sympathetic legal forums than state jury trials: the appellate
courts and the federal courts.
Where accidents were common, the mutual interest of injurers and
injured in quick and predictable settlement resulted – after much conflict
and dissent – in the routinizing of claims processes. For industrial
injuries to workers, the tort system was displaced, in part through political
compromises negotiated between big businesses and unions and embraced
by Progressive reformers, by an administrative no-fault non-judicial system
– worker’s compensation, which spread to almost all states between
1910 and 1920. (This system was supposed to do away with the need for
lawyers as well as courts, but lawyers soon came back in to represent injured
workers, if only to argue about whether the injury was suffered on or off
the job.) Auto accidents, eventually by far the biggest class of injuries –
in 1930 more than 30,000 Americans died in auto accidents – remained
in the tort system; however, the great majority of cases were disposed of
without suit by insurance company claims adjusters and the rest by stables
of specialist defense lawyers working in-house or on retainer for insurance
companies.
Segmentation by Clienteles
As late as the 1870s, even the lawyers who appeared most often for railroad
clients in appellate cases appeared almost as often for individuals suing
railroads. Clarence Darrow was still general counsel for the Chicago and
Northwestern Railway while trying to get pardons for the Chicago Haymarket
defendants; he resigned from the railroad job to represent Eugene
Debs in his legal battles with the Pullman Company and the nation’s railroads,
but continued to do legal work for his railroad client part-time. But
by the 1880s lawyers for railroads no longer appeared for adverse interests.
Those who tried to retain their independence were overwhelmed by
the railroads’ insistence on an exclusive loyalty. If they wanted any railroad
work, they had to agree to represent the railroad exclusively. Often the most
able lawyers in towns along the line were paid retainers, not for actual legal
work, but to prevent them from appearing against not just the client railroad
but railroad interests generally. Railroad legal departments
organized lawyers as political as well as legal agents; they formed
trade associations, lobbied and paid for friendly legislation and friendly
commissions, and financed campaigns of friendly politicians. By 1900, a
lawyer who had railroads among his clients was expected to support and
be a spokesman for railroad interests generally. Some carried their loyalties
into public office. Richard Olney remained general counsel for the Boston
& Maine and Chicago, Burlington & Quincy Railroads while, as Cleveland’s
Attorney General, he was seeking and enforcing injunctions against Eugene
Debs’s strikes against railroad associations, including his clients.
Fatefully, the bar had begun to specialize careers by clienteles – one
bar for tort defense against personal injury suits and another,
usually immigrant in origins and ethnically distinct, for plaintiffs; and in
labor disputes eventually a bar for management and a bar for labor, whose
members almost never crossed over the line to represent the other side. To
the great distress of some lawyers – but, it must be said, to the apparent
satisfaction and enrichment of most – the most reputable segment of the
bar had become a dependency of business empires, and often very unpopular
empires at that. In 1910, the same Richard Olney who broke the Pullman
strike painted a vivid contrast between the private and public views of
lawyering. He deplored the new image of the lawyer who was represented

only as one variety of businessman; as an adjunct to business and its adventures
with functions as much a part of its routine as those of its wage earners and day
laborers; as using his “legal acumen and agility,” so far as he remains a lawyer at all,
in advising how nearly the extreme limits of the law can be approached without
being overstepped; as influencing legislation in favor of his clients’ interests; and
as dexterously manipulating the issue and sale of corporate securities. . . . [L]awyers
as members of a community absorbed in money-making, are themselves more or
less infected, so that it is not surprising that many, consciously or unconsciously,
come to regard money-making as the real aim and object of their career.12

12 Richard Olney, “To Uphold the Honor of the Profession of the Law,” Yale Law Journal 19 (1910), 341–44.
These alliances sometimes proved embarrassing to lawyers who sought
political office or judgeships, and several notable lawyers severed their ties
with clienteles altogether to support Progressive reform causes aimed at
regulating those clienteles.
Rise of Public Interest Lawyering
Progressive lawyers invented a new institutional form, the job of legal counsel
for the public interest group claiming to represent an amorphous and
diffuse constituency – Citizens for Good Government, or Public Franchise
League, or Committee of One Hundred for the Improvement of Education.
As representative of such an abstract “client,” the public interest lawyer
naturally had a good deal of discretion about how to deploy his influence.
The master of this form of public interest practice was Louis Brandeis,
a successful Boston corporate lawyer. Brandeis represented public
interest causes without fee and reimbursed his partnership for the diversion
of his time. In a famous address of 1905, Brandeis said that the public
standing of lawyers had declined because “[i]nstead of holding a position
of independence, between the wealthy and the people, prepared to curb
the excesses of either, able lawyers have . . . allowed themselves to become
adjuncts of great corporations. . . . We hear much of the ‘corporation lawyer’
and far too little of the ‘people’s lawyer.’”13
13 Louis D. Brandeis, “The Opportunity in the Law,” in Business – A Profession (Boston, 1927), 333–34.
Progressive policy entrepreneurs’ ultimate goal was usually to set up an
expert commission. They were experts at creating publicity. After a scandal
revealing some social horror – exploited child labor, tainted meat, railroad
bribery of legislators or kickbacks to preferred customers, prostitution
rings, or insurance fraud – reformers in collaboration with the muckraking
press would persuade legislatures to establish commissions with the
power to investigate, hold hearings, and make recommendations. These
commissions were mostly staffed by part-time amateur volunteers, usually
lawyers. Sometimes they turned into permanent administrative agencies.
The reformers also brought test-case litigation, not to get courts to declare
rights, but to persuade them to refrain from interfering with Progressive legislation. Brandeis
and the lawyer-reformers Florence Kelley and Felix Frankfurter brought
test cases on behalf of the National Consumers League (which supported
maximum hours and minimum wage legislation) and also testified before
legislatures in favor of legislative reforms and intervened in administrative
agency proceedings. After being appointed to the Supreme Court in 1916,
Brandeis continued to direct public interest crusades from behind the scenes
through Frankfurter, his agent and disciple.
The more traditional model of cause lawyering, dating back to the legal
tactics of the anti-slavery societies of the antebellum period, was to bring test
cases in constitutional courts to extend rights of liberty and equality to new
constituencies. The American Civil Liberties Union (ACLU) was founded in
the Red Scare of World War I, when several thousand people, most of them
connected to militant labor organizations, were prosecuted by the federal
government for impeding the war effort or deported as undesirable aliens.
Supported largely by membership subscriptions and the volunteered time
of lawyers, the ACLU built an impressive record of using the federal courts
to prevent persecution of political and religious dissenters by providing
them with free legal representation; in the process it helped produce the
United States’ extraordinarily libertarian (by world standards) regime of
judicial protection for free speech. The most amazing and dramatic use of
the model was by the National Association for the Advancement of Colored
People (NAACP) and its legal arm, the Legal Defense Fund. The NAACP
brought and won an important test case in its early years, Buchanan v.
Warley (1917), in which the Supreme Court struck down a racial zoning
ordinance in Louisville that forbade homeowners in white neighborhoods to
sell to African Americans. The Legal Defense Fund’s epic journey began in
1930 with a small foundation grant to study the conditions of educational
inequality in the South and culminated in 1954 with the Supreme Court’s
decision striking down legally mandated segregation.
The reformers who led these early rights-activist crusades were an interesting
alliance of establishment and marginal lawyers. Some were patricians,
like the Boston corporate lawyer Moorfield Storey who headed the
first NAACP legal committees. Others were highly educated professionals
confined to the margins of their profession by prejudice: Jewish, African
American, and women lawyers such as Morris Ernst, Osmond Fraenkel,
Crystal Eastman, Carol Weiss King, Pauli Murray, and Ruth Bader Ginsburg
(mainstays of the ACLU); and Charles Hamilton Houston, Thurgood Marshall,
Constance Motley, Robert Carter and Jack Greenberg of the NAACP
Legal Defense Fund. Cause lawyering was hardly the pathway to economic
success for marginals: it paid very badly and able lawyers made severe financial
sacrifices to undertake it. But it was a path upward in other ways, to
respect and status. The courtroom, especially in a high-profile case, was one
of the few places where an African American could appear on a plane of
equality with white Protestant males and where courtroom decorum would
ensure they would be treated with respect.
The noted African American lawyer Raymond Pace Alexander of
Philadelphia, though a Harvard Law School graduate, could not get a job
with any law firm or be admitted to any bar association except the association
for African American lawyers. In his early years of practice, even well-off
African American clients and businesses would not hire him, thinking they
would fare better in a white court system with a white lawyer. He had to
get by on a smattering of small clients and criminal cases. He could not
rent office space downtown or, when in a Southern city like Washington,
D.C., eat at the restaurant across from the courthouse or hang out at the bar
with the other trial lawyers. But in court, he was called Mr. Alexander and
treated by the judge and court personnel as equal to the white lawyer on the
other side; he could cross-examine white witnesses, display his talents, and
win cases. Thurgood Marshall was denied admission to the University of
Maryland Law School in 1930 because he was African American. In 1936,
soon after graduating from Howard, he had the satisfaction of winning the
case that desegregated the school that rejected him.14
Women similarly found a practice niche in supporting the causes and
concerns of women and other social underdogs. Leaders of the women’s
rights movements like Belva Lockwood of Washington, D.C., and Myra
Bradwell of Illinois also led the fights for admission of women to the bar.
Once admitted, and facing strenuous resistance to their presence in the
courtroom and in business law firms, many women lawyers played up their
comparative advantage as members of the gentler sex devoted to charity
and reform. One of the most impressive pioneers, Clara Shortridge Foltz of
California, took on poor as well as paying clients, and led the movement to
create the first state public defender system, though she also hard-headedly
commented that if a woman lawyer “prefers to engage in child welfare
work, takes up legal aid work, runs here, there and everywhere at the whim
of every ambitious clubwoman, omitting to charge for her services, she
cannot hope to win while her eyes are bright.”15 The pattern held for the
rest of the century. Women lawyers were prominent among the leaders
of Progressive reform, civil liberties, labor, and civil rights movements. By
2000, though admitted to the bar in numbers almost equal to those of men,
they were under-represented, relative to their share of the profession, in
private law firm partnerships but over-represented in legal services and public
defender offices, public interest firms, NGOs, and government.
14 Pearson v. Murray, 169 Md. 478, 182 A. 590 (1936).
15 Bureau of Vocational Information questionnaire 180, March 9, 1920 (BVI Records,
Schlesinger Library, Radcliffe College).
III. NEW DEAL, POSTWAR STABILITY, AND THE RIGHTS
REVOLUTION: 1930–1975
The Depression of the 1930s was as hard on lawyers as on other occupations.
The median income of lawyers fell by 8 percent between 1929 and 1933.
Younger lawyers suffered worst: in New Jersey, beginning lawyers’ income
fell 67 percent (1925–37) and those with fifteen years in practice by 53
percent (1928–38). Michigan lawyers reported to a 1940 survey that 38
percent of them had been unable to make a living in at least one year
between 1929 and 1934.
The New Deal and Postwar Order
The New Deal set in motion a revolution in government that would ultimately
yield substantial business for lawyers and a variety of new specialties
and functions. The New Deal itself was a vast employment program for
lawyers – by 1939 there were 5,368 lawyers in federal service – and not just
for government lawyers but lawyers for client groups and constituencies
needing to deal with the new government agencies. The New Deal’s hiring
policies mostly expanded – but also in some ways limited – social mobility
for marginals. A huge number of new positions opened up. New Deal agencies
managed to hire most lawyers outside civil service requirements (which
gave strong preferences for veterans and for geographical distribution) and to
bypass Congressional patronage for non-civil service appointments. For the
top positions, the New Dealers used much the same meritocratic criteria as
big firms, except that they discriminated much less against Jews, Catholics,
women, (occasional) African Americans, and lawyers with overtly left-wing
political views. The best record for a job applicant was a high grade average
from an elite Northeastern law school and the recommendation of a law professor,
preferably Felix Frankfurter. This was a great system for marginals
who had somehow made it to Harvard, Columbia or Yale, but would never
be hired by a Wall Street firm. It was not so good for lawyers without
elite credentials, protégés of Congressmen, most graduates of Washington,
D.C.-area law schools, graduates with only a year or two of college and with
law degrees from unaccredited law schools, and for women law graduates,
who were often admitted to the civil service in non-legal positions such as
stenographer, though they remained eligible to rise through the ranks.
For many lawyers and perhaps most, however, the main reason for joining
the government was not employment opportunities: it was the challenge
of the cause. About half of the leading lawyers of the New Deal came out
of corporate practice, taking a big pay cut to do so and often risking their
relationships with anti-New Deal business clients. Some of them were law
professors who had already left, or shunned, corporate practice. The New
Deal offered a chance to do something important, glamorous, and in tune
with political convictions. Many of these lawyers thought they were severing
their ties with the world of private business lawyering by crossing over to
the government side. But of course as the federal government’s functions and
agencies expanded, they created large new domains of practice for lawyers –
tax, antitrust, regulation of securities, public utilities, power, and labor
relations, among others. The New Deal lawyers found they had acquired
professional capital that they could convert back into private practice.
After the war, some of the principal New Deal lawyers, “young men with
their hair ablaze” like Tommy Corcoran and James Rowe, Thurman Arnold,
and Abe Fortas, became founders of Washington, D.C., firms, representing
corporate clients before agencies such as the SEC, created by legislation they
had written and that they had originally staffed.
Business lawyers were ambivalent about the New Deal. Even those who
were classical conservatives swallowed their doubts about the most constitutionally
dubious of the New Deal’s experiments, the National Industrial
Recovery Act, because their major clients initially supported it. They then
celebrated its invalidation by the Supreme Court after their clients had
turned against it. Many represented business clients who bitterly opposed
arrangements such as the New Deal’s schemes of securities, public utilities,
and especially labor regulation, or they supported them as long as
they thought they could control the regulators and went into opposition
only when they could not. Some lawyers were themselves ferociously opposed,
by ideological conviction, to any large federal government role in
the regulation of business. In the 1930s, two thousand lawyers headed by
such luminaries as John W. Davis, a former presidential candidate; James
M. Beck, former Solicitor General; and George W. Wickersham, former
Attorney General, formed the National Lawyers Committee of the American
Liberty League. The League counseled its industrial clients to civil
disobedience of the orders of the New Deal’s new Labor Board, in the certain
(and ultimately mistaken) conviction that the Supreme Court would
invalidate the National Labor Relations Act. With allies in the ABA, led by
the increasingly conservative Roscoe Pound, they waged a struggle, embodied
in the Walter-Logan Act of 1940 (vetoed by President Roosevelt), to
burden the administrative process with so much trial-type due process as
to bring it to a total halt. But other business lawyers, such as those who
signed the New York City Bar Association’s report opposing Walter-Logan,
did not wish to hamstring the administrative process, but to keep it informal
and flexible and negotiate cooperative deals with it on behalf of their
clients. By the 1950s most of the New Deal’s innovations had settled cozily
into the familiar pattern of tripartite deals between industries, their friends
in Congress, and regulatory agencies. Leading firms viewed them as an
at least tolerable and often very useful revised framework for a capitalist
economy.
Certainly, the New Deal was good for the law business. By virtue of
the federalization of regulation, all big-city law firms became specialists in
national law able to compete with the New York firms. Baker & Botts of
Houston is a good example. The Public Utility Holding Company Act of
1935, which broke up the nation’s utility systems, first gave the firm the job
of fighting the act; then, when the fight was lost, the firm took on the business
of reorganizing all its utility clients to comply with the act, which in
turn brought valuable contacts with New York financial houses and experience
working with government agencies. The Railway Labor Act of 1926
and the Wagner Act of 1935 delivered the business of helping defend hundreds
of labor cases before the new labor boards. A partner at the firm, looking
back on the era, commented, “Of course lawyers were as vociferous as their
clients in complaining about the New Deal legislation, but in retrospect one
may wonder how lawyers would have survived without the legislation.”16
The relative stability of large corporations in 1945–65 – oligopolies
within a legal-regulatory framework of business-friendly corporatism –
extended to their lawyers, who helped administer the framework from both
the private side and the public side. Younger lawyers often started their
careers with a brief term in government to learn the system from the inside.
Older firm lawyers were appointed to senior positions in the agencies. Large-firm
corporate practice became still more technical and specialized, much
less a matter of negotiating new conventions with the state than of administering
existing ones. Lawyers continued to cultivate relations with the
bureaucracy, but their main stock-in-trade became their expertise, rather
than their contacts. Business firms turned over their political action work
to specialists in lobbying and government relations. Practice conditions
were stabilized as well. Law firms were locked into long-term relations
with major corporate clients and handled all but the most routine of those
clients’ business. Younger lawyers entered the firm hoping to stay with it
for life. Companies rarely switched firms; partners rarely left them.
Labor Lawyers and Radicals
The New Deal also fostered the creation of a labor bar, something that previously
had scarcely existed. Through the 1930s the American Federation of
Labor, the umbrella organization of craft unions and the dominant spokesman
16 “Memorandum Prepared by John T. McCullough as Basis for Remarks . . . on November
27, 1979,” Baker & Botts Historical Collection.
of labor, pursued the goal of “voluntarism” (collective laissez-faire); its only
legal aim was negative – defense of its members against employers’ legal
campaigns against them. Unions still needed lawyers to fight court injunctions,
criminal contempt proceedings for defying injunctions, and antitrust
suits. They found them among left-leaning general practice and business
lawyers willing to suffer the stigma of association with organized labor.
Some of those lawyers, such as Clarence Darrow, Felix Frankfurter, Donald
Richberg, David Lilienthal, and Harold Ickes, went on to serve in high posts
in the New Deal. More radical lawyers of the period, mostly from socialist
immigrant Jewish households, were drawn to the growing industrial union
movement, like Maurice Sugar, general counsel of the United Automobile
Workers (1939–47) and Lee Pressman, general counsel of the Congress of
Industrial Organizations (1933–48).
After World War II, lawyers with commitments to social reform continued
to go into labor law, often after a stint on the National Labor Relations
Board staff in Washington. But labor law gradually lost some of its appeal for
reformers. Union officials, often Catholic blue-collar workers, tended to be
hostile to the Jewish intellectuals who did their legal work, however much
they needed them. The New Deal’s labor regime stabilized labor relations by
embedding them in legal procedures: this created a steady demand for labor
lawyers, but also routinized the work of representing unions and deprived
it of the romance of a cause. The labor movement lost some of its most
intensely committed lawyers when the Taft-Hartley Act (1947) required a
purge of Communists. Incorporated, albeit grudgingly, as a regular partner
with business in the postwar economic order, most unions grew more conservative
in their aims and ideology, more interested in bread-and-butter
bargaining gains than social transformation, and (in many locals) actively
hostile to the new claims of African Americans and women for jobs in
union-controlled workplaces. Others, like the Jewish labor lawyers, stayed
with the cause and went from the National Labor Relations Board into jobs
representing labor in unions or in labor-side law firms. But
even these relatively self-denying cause lawyers were propelled into professional
prominence as their cause of organized labor flourished in the 1950s
and 60s, when union general counsel had the status of foreign ministers
negotiating general policies affecting wage rates and working conditions
in vast industries, were routinely invited to join boards and commissions
setting national government policies, and in liberal administrations were
appointed to Cabinet posts and even (in the case of the labor lawyer Arthur
J. Goldberg) to the Supreme Court.
Some radicals (like Pressman) also joined the Communist Party. Most
joined the National Lawyers Guild, founded in 1937 as a broad coalition
organization of liberal and radical labor, civil rights, and civil liberties
lawyers aspiring to be more inclusive than the major bar associations (it
admitted Jews, women and African Americans) and to function as an organized
counterweight to the conservative politics of the ABA. The Guild
split apart and lost most of its non-Communist center-left members when
it rejected their pleas to expel Communists. Although harried by the Justice
Department and House Committee on Un-American Activities, Guild
lawyers continued to represent those involved in unpopular radical causes
and were sometimes the only lawyers that some radicals such as Communists
could rely on. The most radical lawyers joined the Communist-affiliated
International Labor Defense (ILD), whose most famous cause had been its
defense of the Scottsboro Boys, nine African American teenagers charged
with the gang rape of two white girls in 1931. ILD lawyers fought fiercely
to obtain legal victories in their cases, though their primary and sometimes
conflicting aim was to publicize and dramatize the injustice of capitalist
society.
Lawyers and the Rights Revolution
Beginning in the 1930s, the NAACP’s Legal Defense Fund, with a tiny staff
of low-paid lawyers headed by Charles Hamilton Houston and Thurgood
Marshall, embarked on a twenty-five-year campaign of test-case litigation
in the federal courts to try to establish, by gradual degrees, the principle
that state-mandated separation of the races in public institutions and
places violated the constitutional requirement that states give all persons
the “equal protection of the laws” and by so doing to dismantle the system
of legally established racial apartheid in the South. That campaign
climaxed in 1954 with a moral triumph – the Supreme Court’s declaration
in Brown v. Board of Education that state-mandated segregation of the races
in public schooling (and by implication in other public settings as well)
was unconstitutional. It would, however, take many more years of protest
movements, legal challenges, and federal legislative and executive action
before much was done to implement the principle.
The spectacular victory of civil rights lawyers in Brown inspired more
and more groups to follow the strategy of the civil rights movement. In the
1960s and 1970s these strategies met with astonishing success. The reformers
found surprisingly receptive allies in the – as often as not, Republican-appointed –
judges of the U.S. Supreme Court, under the leadership of
Chief Justice Earl Warren and Justice William J. Brennan, and of the lower
federal courts: middle-class men who could be provoked to outrage by
what test-case litigation revealed of the treatment of marginal and outcast
groups in American society. Federal judges embarrassed by the racism and
backwardness of the old South, for example, were so revolted by the conditions
exposed in Southern prisons – long run on the feudal model of slave
plantations and characterized by ferocious levels of filth, torture, and coerced
labor – that they stretched their legal authority to construct far-reaching
remedial orders that placed entire institutions for years under professional
reform administrators. Other judges were provoked to sweeping remedial
action by the obstruction and resistance of local authorities to court orders,
especially orders to compel school integration. Every act of defiance created
more judicial sympathy for rights activists, who now appeared as champions
of the rule of law against the lawlessness of regularly constituted authorities.
Client groups asserting rights to be free from arbitrary or contemptuous
treatment by government also found judges receptive to this traditional
libertarian strain. Rights litigators were sometimes able to recruit allies in
elite law firms to help their causes.
Rights activism was not radical in principle. It aimed simply to extend
accepted legal principles of equality and fair procedural treatment to groups
of persons who had been excluded from their coverage; it did not challenge
the principle, only the operation in practice, of distribution of social goods
by capitalist markets; and it wished only to open the chance to compete on
equal terms. This might seem a centrist or even conservative program, but
taken seriously and given elite judicial and political backing, it profoundly
disrupted existing patterns of hierarchy, authority, and inequality. Suits
brought to achieve declarations of new rights were rapidly followed by more
suits for judicial remedial orders and by lobbying for legislation and executive
action to enforce them. Claims of rights to equal opportunity and fair
treatment rapidly turned into claims for major redistribution of resources –
admission of women, African Americans, and other minorities to professions
and crafts; equalization of public school finances among rich and poor
districts; and drastic overhauling of institutions like schools, prisons,
mental asylums, and welfare administration. Such actions energized a major
political backlash against the rights revolution. The Republican Party engineered
a major electoral realignment based in large part on recruiting voters
angered by Warren Court and Democratic administration support for black
civil rights, especially school integration orders involving busing and affirmative
action plans designed to remedy employment discrimination, the
feminist campaign for equal rights for women and the constitutionalization
of the right to abortion, and expanded protections for criminal defendants. A
succession of Republican administrations under Presidents Nixon, Reagan,
and Bush gradually replaced the generation of liberal reform-minded federal
judges with conservatives committed to reversing, or at least not extending,
the proliferation and aggressive enforcement of rights. By the 1990s, liberal
lawyers who thirty years earlier had fought to get their cases into federal
courts now fought to stay out of them.
In some ways, rights activism was an elite reform strategy high above the
fray of ordinary politics. For some rights-activist lawyers the important goal
was more to vindicate a principle or implement a policy than to advance the
interests of a concrete group. Some lawyers seeking judicial recognition of
the rights of religious dissenters or people accused of crimes neither identified
with nor even met actual clients. This was not invariably so. To build
their test cases, Legal Defense Fund lawyers had to do the arduous and dangerous
work of recruiting plaintiffs and organizing suits in the rural South.
And though rights activists were often criticized for over-investing in judicial
rule change and paying too little attention to political mobilization and
bureaucratic implementation, in fact they rarely relied on litigation alone
to achieve their aims. Litigation was always one strategy among many others,
including lobbying, supporting candidates for elections, conducting
voting drives, mobilizing allies such as labor organizations, dramatizing
causes to the media, doing grassroots organizing, and staffing and monitoring
enforcement bureaucracies. For example, once a grassroots civil rights
movement had started, the LDF lawyers switched a large part of their efforts
from test-case litigation to advancing the goals of the movement and keeping
its members out of jail. Still, the natural home of rights-activist lawyers
was the courts, especially the upper federal courts.
An entirely new field of endeavor, poverty law, was opened up in the mid-
1960s. President Lyndon Johnson created a federally funded Legal Services
Program in the Office of Economic Opportunity (OEO) as part of his War
on Poverty. In 1965 the combined budgets of all legal aid societies in the
United States totaled $5,375,890, and their combined staffs numbered
400 full-time lawyers. By 1968 OEO Legal Services had an annual budget
of $40 million and had added 2,000 lawyers; by 1980 (before President
Reagan cut it by a third) the budget was $321 million, supporting 6,000
lawyers. OEO Legal Services also funded “backup centers” in fields such
as health and employment discrimination to serve as research centers and
information clearinghouses for poverty lawyers in the field. In the early
1970s foundations led by the Ford Foundation began making grants to
“public interest firms,” about half of which identified the poor as their
principal clientele; by 1975 foundation grants contributed 42 percent of
public interest law firm budgets.
The new poverty lawyers were a very mixed lot. Like labor, civil rights,
and civil liberties lawyers, some came from left-of-center families and backgrounds
in social activism. In its early years poverty law practice also
attracted high-ranking graduates of elite schools, many of them paid for
by OEO “Reggie” (Reginald Heber Smith) Fellowships. But just as often,
poverty lawyers came from solo practice or other low-paid “legal rights”
jobs like legal aid or public defender practice. Though turnover in Legal
Services was always high – few stayed more than four or five years – even
lawyers who left kept up their activist commitments in other jobs.
The poverty lawyers often disagreed about what their objectives should
be. Traditional legal aid lawyers and their supporters in the organized bar
thought the main mission was a service function, taking care of clients’
individual needs and not antagonizing local political or commercial power
structures. Others favored a model closer to the Progressive settlement house:
“storefront” offices located in poor neighborhoods, combining legal services with
other social work focused on enabling families to move up
and out of poverty. Most of the new lawyers had a more ambitious vision of
law as a means of broader social reform, which would work major structural
changes in the situation of the poor. An important group favored test-case
litigation directed at reforming the indifferent and repressive bureaucracies
that served the poor. Others saw litigation as one component of a strategy
directed at helping communities of poor people mobilize politically to
articulate their own needs and demands and to participate in making and
applying policies of the new anti-poverty agencies in the cities.
Poverty lawyers involved in reforming the welfare system (1965–73) tried
combining all of these strategies. They brought test cases to force welfare
bureaucracies to apply their own rules faithfully and fairly and eliminate
arbitrary paternalist regulations; then they helped organize a movement
(the National Welfare Rights Organization) of welfare recipients to insist
on their rights, in hopes that such claims would overwhelm the bureaucracy
and move the government toward a system of unconditional grants. They
also sought to repeat the successes of the civil rights movement: to define
the poor as a pariah group subject to unconstitutional discrimination, and
to constitutionalize a general substantive right to a guaranteed minimum
income. After initial successes on all fronts of its strategy, the movement
for welfare rights backfired. As welfare rolls burgeoned – partly because
of the lawyers’ successes in enrolling eligible families – state and federal
governments began to cut back on welfare spending and to impose new
requirements. The courts had granted procedural rights to fair hearings,
but refused to create substantive rights to welfare. The nascent political
organizations collapsed.
The poverty lawyers stirred up a hornets’ nest. Established legal aid programs,
local bar associations, charitable organizations, and local political
machines saw them as threats to their own turf and patronage relations and
tried to close them down and restrict their operations to routine individual
services. Several governors tried to abolish the programs in their states,
after Legal Services sued the states for violating their own laws and policies.
President Reagan tried to abolish the federal program and succeeded in crippling
it; it limped onward under many restrictions on its systemic reform
activities. The bar associations, however, switched sides and after 1975
became staunch supporters of Legal Services, in part because the programs
created jobs for lawyers, in part because the bar wanted the profession to
look good, and in part because lawyers instinctively resisted attempts to restrict
whom they might represent and by what means.
The Progressive model of lawyer as policy entrepreneur acting on behalf of
diffuse and unorganized constituencies was reinvented in this period. In the
late 1960s and early 1970s the model was developed into the role of public
interest representative in administrative proceedings. The muckraker and
consumer lawyer Ralph Nader, who organized cadres of college and law
student volunteers to investigate government programs and their failures,
became the best known and one of the most effective. The mission of the
public interest lawyers was to repair glaring defects in political pluralism –
to open up the administrative agencies that the Progressives and New Dealers
had created to the broad constituencies that they were supposed to serve.
Until the late 1960s, administrative agency decision procedures – such as
hearings on the construction of energy projects like nuclear power plants
or the granting or renewal of radio or TV licenses – were usually dominated
by representatives of industries they regulated. The new public interest
lawyers claimed that other, more diffuse constituencies – TV viewers,
lovers of wilderness and the environment, consumers, future generations –
also had interests in the decision. The lawyers claimed to represent those
interests. With the help of the federal courts, public interest lawyers were
increasingly permitted to intervene in agency proceedings and to challenge
agency decisions on judicial review. They established a regular place at
the table in administrative decision-making processes. In politically congenial
administrations, such as President Jimmy Carter’s, they were brought
in to staff important government posts.
The most successful public interest representatives turned their abstract
constituencies into real ones. The environmental movement, for example,
began with a few vanguard activists, who used their activism to create a
mass middle-class movement, aroused, well financed, and able to mobilize
politically around major initiatives or perceived threats to its core interests.
Other examples may be found in the movements for women’s rights,
disability rights, gay and lesbian rights, and animal rights. Many public
interest constituencies, however, limited their involvement to writing
checks to keep the movements alive: real decision-making power remained
with their representatives.
IV. EXPANSION AND UPHEAVAL: 1970–2000
A century after the founding of its major modern institutions, the legal
profession began to undergo momentous changes in virtually all sectors of
practice.
Size and Composition of the Profession
The bar’s project to limit admissions by raising pre-legal and legal educational
requirements and lowering bar exam pass rates – combined with
the collapse of part-time night schools in the Depression – had kept the
proportion of lawyers to the population reasonably stable since 1900. But
after 1970 the volume of new entrants soared. The number of approved law
schools increased and their student bodies grew rapidly (from 22,000
law students in 1950 to 132,500 in 1990), particularly after the arrival of
the baby boomers in the cohort of college graduates and the opening of
the profession to women. The total number of lawyers rose from 355,000
in 1970 to 542,000 in 1980 and by the end of the century had doubled
again to over a million: just over 3 lawyers per thousand of population. By
effectively handing over admission to the profession to the law schools, the
bar had surrendered its role as gatekeeper.
Beginning in the late 1960s, anti-discrimination laws and affirmative
action combined to produce a substantial increase in African American
enrollments in law schools, from 2,000 in 1969 to 6,000 in 1985. Thereafter,
however, there was a slight decline; and African American lawyers
remained strikingly underrepresented in law firms, making up 3.3 percent
of associates in 1996 and only 1.7 percent of partners. In 2000
4.2 percent of all lawyers were African American. The biggest change was in
the profession’s acceptance of women. Between 1967 and 1983, enrollment
of women at ABA-approved law schools rose 1,650 percent, from 4.5 to
37.7 percent of the total; at the end of the century it had stabilized at
almost 50 percent. In 1980 only 8 percent of lawyers were women; by
2000, 27 percent were women. However, some combination of continuing
discrimination and the brutal time demands of corporate practice continued
to keep law firm partner ranks predominantly male – around 85 percent or
more in most firms. By 2000 women were much better represented (around
25 percent) in prosecutors’, government, and house counsel offices and
among law teachers; they were often the majority in legal aid offices and public
interest firms. Hispanic-Americans in the profession rose slightly from
2.5 percent in 1980 to 3.4 percent in 2000, Asian Americans from 1.4 to
2.2 percent.
As striking as the higher numbers were the shifts in jobs among sectors.
The proportion of lawyers in private practice declined significantly in the
post-World War II years, from 89.2 percent in 1948 to about 68.3 percent
in 1988. In 2000, however, it was back up to 74 percent. In that category
the biggest decline was in solo practice, from 61.2 to 48 percent. Where did
the private practitioners go? Primarily to private employment, as in-house
employees of business – up from 3.2 percent in 1948 to 8 percent in 2000 – and
to governments. Federal government employment of lawyers, as a proportion
of all lawyers, fell from 5.6 percent in the 1950s and 60s to 3.5 percent
in 2000, but state government employment of lawyers increased (from
1.8 percent in 1950 to 6.7 percent in 2000). All government lawyers in
2000 accounted for 10.3 percent of the total; all legal aid and public defenders,
for 1 percent (down from 2 percent in 1980). A few more became law
teachers, up from 0.6 percent in 1951 to 1 percent in 2000.
Within private practice the big reallocation was from individual to corporate
practice. A study of Chicago lawyers found that in 1975 the share of
lawyers’ efforts going to corporate matters was 53 percent versus 21 percent
going to individual “personal plight” clients. In 1995 the share of effort
going to corporate clients increased to 61 percent, whereas effort to personal
plight clients was down to 16 percent.
The Corporate Sector
The most explosive transformations were in the corporate practice sector.
The demand for corporate lawyers multiplied with client demands for
lawyers to staff an exploding increase in transactions, government regulations,
and litigation. The main origins of the new phase were in the severe
shocks to the settled corporate-legal order delivered by international competition,
the new mobility of capital, and the new volatility of the market
for corporate control. The federal government lifted regulatory controls on
some industries (airlines, trucking, communications, banking) in the 1970s
and 80s, but created whole new fields of regulation to take their place –
bans on employment discrimination against African Americans, women,
the disabled, and the old; environmental controls on polluting, land use,
drilling, and grazing; consumer protection, toxic substance, and occupational
safety regulation – as well as several major changes in the federal tax
code.
In response, big business firms switched strategies. Instead of negotiating
cooperative compacts with government agencies and labor unions,
companies began to aggressively challenge regulation and labor agreements
they once accepted as the price of stability. Meanwhile they became more
prone to mergers or takeovers as targets or raiders and driven to constant
restructuring – acquiring new divisions, shedding or spinning off old ones,
and rearranging profits, losses, and debts on paper – to manage financial
appearances to the capital markets and taxing authorities. Before the 1970s
companies rarely sued for breach of contract; by the end of the century,
corporate contract suits accounted for the largest share of new lawsuits filed
in federal courts. Suits against companies rose as well, notably for mass
torts such as toxic waste emissions and defective products. Whole new
industries emerged such as the high-tech ventures of the Silicon Valley,
whose products called for invention of new fields of law. As deals with new
trading partners around the world replaced informal ties of businesses in
long-term continuing relationships, lawyers were called in to craft contracts
covering performance terms and reducing the business and legal risks of
new ventures. All this work required platoons of lawyers in many different
specialties: to manage major lawsuits with their warehouses full of documents
sought in discovery; to avoid, work around, resist, or simply comply
with complex regulations, taxes, and disclosure and reporting requirements
in many different states and foreign countries; to staff transactions such as
mergers or takeovers or initial public offerings of new companies; and to
do the paperwork for deals. As businesses expanded globally, firms hired
lawyers from many jurisdictions and nationalities to join the teams.
The most visible effect of these demands for more lawyers was a sharp
rise in the number, size, and geographic reach of law firms. In 1900 a “large
firm” – so large that contemporaries called it a “law factory” – was eleven
lawyers. Around 1960 only thirty-eight firms had more than fifty lawyers;
half of them were in New York City. In 1978, 15 firms had over 200 lawyers;
by 1987, there were 105. By 2005, 17 firms had over 1,000 lawyers, 30 over
800, 70 over 500, 196 over 200, and 405 over 100. Firms with more than
100 lawyers made up only 1 percent of American firms, but employed 14
percent of all lawyers in private practice and a tenth of the entire profession:
107,472 lawyers.
Some firms grew internally, others by merger. In the 1980s and 90s, firms
extended their reach by opening both domestic and foreign branch offices.
A sample of ten of the largest firms showed them operating between one and
six domestic branch offices and one to six foreign offices in 1983. In 1999
those firms had doubled their domestic branch offices and almost tripled
their foreign offices. Houston’s Vinson & Elkins was typical. Like other big
firms, Vinson & Elkins expanded geometrically in the boom legal market
of the 1970s and 80s. By the late 1970s the firm had 286 lawyers; by 1999,
more than 500; by 2002, 862 in eighty different practice specialties. More
and more business came in from increasing state and federal regulation,
and Vinson & Elkins lawyers began to specialize in energy, environmental,
patent, admiralty, and municipal bond law; in antitrust, securities, and mass
tort litigation; as well as in the firm’s old fields of oil, gas, banking, and insurance. It
opened branch offices in Dallas, Austin, Washington, New York, London,
Moscow, Tokyo, Beijing, and Dubai.
As they expanded, firms transformed the nature of legal practice by
competing aggressively with one another to attract clients and to hire senior
lawyers and associates. Confronted with escalating legal costs, companies
tried to keep these costs down by severing long-term ties with outside firms
and bringing substantial pieces of legal work in-house. The job of in-house
general counsel to a business, once a resting place for lawyers who had
failed to make partner in law firms, became newly prestigious and powerful
and – like railroads in the 1870s – attracted lawyers at the top of their
profession. The general counsel’s job was purchasing and managing all the
legal services for his or her company, auctioning off fragments of specialized
work – especially complex litigation – to many different outside firms. The
result was a whole new style of corporate practice – ruthlessly competitive,
powered pretty nearly exclusively by the drive for profits, so demanding as
to leave no time or energy for other commitments, and mostly indifferent
to social responsibility and public values.
The practice was very lucrative for lawyers and firms who succeeded but
also highly stressful because the specter of failure hovered so close by. Huge
firms, some new, some long established – Finley, Kumble and Lord Day &
Lord of New York; Gaston Snow and Hill & Barlow of Boston; Brobeck,
Phleger of San Francisco; and many others – collapsed from over-expansion
or over-caution.
The old stable institutional order of law firm practice dissolved. Lawyers
no longer expected a lifetime career in a single firm, but moved among
firms that bid for their services and from firms to house counsel’s offices,
investment banks, accounting firms, and business consulting services. Firms
raised associates’ salaries from 1986 onward to compete with pay in alternative
careers newly open to law graduates – by 2004, beginning lawyers
earned $125,000 or more. However, with more pay also came longer hours of
work (eighty hours or more a week in some firms) and much lower chances
of making partner or of obtaining secure tenure even after partnership.
Clients around the world wanted service night and day from their lawyers.
Compensation was tied to the ability to attract clients: “You eat what you
kill.” With the rise of the new practice, the old ethnic barriers fell. Blue-chip
white-shoe firms eagerly sought out the Jewish and Catholic lawyers
who had staffed formerly degraded specialties such as litigation and had
expertise in mergers and acquisitions. Firms also hired African American
and women lawyers, but were less likely to retain and promote them. Both
groups were less likely to have the business contacts to recruit the clients
necessary for survival in firms. Women still had the double shift at home,
which limited both their capacity and desire to spend every waking hour
at work or travel for weeks out of town to prepare a big case or close a big
deal.
Meanwhile, American firms and the American style of corporate law practice
spread to foreign countries, especially Europe. They encountered many
competitors: English solicitors’ firms, multinational accounting firms, and
new European multidisciplinary consortia. In 1999 only ten of the largest
twenty international firms (with between 700 and 2,500 lawyers each)
were American law firms. Six were firms of English (and one of Australian)
solicitors. The rest were giant accounting firms. Accounting firms dominated
legal services in Europe and even in the United States employed
more than 5,000 lawyers who gave advice on tax shelters and bankruptcy
reorganizations. American lawyers were also competing fiercely for a share
of the increasingly lucrative business of international arbitration, formerly
dominated by Europeans, and promising to bring the dubious blessing of
American-style litigation practice to the rest of the world. In competition
with European solicitors and accountants, American lawyers were building
the new legal frameworks for the transaction of international commerce.
Practice for Individual Clients
The profession’s individual practice sector also experienced seismic shocks.
In 1900 solo and small-firm lawyers serving individuals and small business
encompassed the entire profession save for a few big-city big-business firms.
In 2000 individual practice was still numerically the largest segment of
the private bar, but accounted for a rapidly diminishing share, relative to
corporate practice, of total lawyers’ effort and earnings. Over the century it
had included some very wealthy and famous members, such as tort plaintiffs’
lawyers who were richer than all but a few top corporate lawyers, and
celebrity trial lawyers, like Max Steuer, Louis Nizer, F. Lee Bailey, and
Edward Bennett Williams, who took on high-profile clients and cases. Its
staple business throughout the century remained much the same: claims for
simple debt and collections, personal injury suits, criminal defense, divorce
and other family work, real estate closings, wills and trusts, bankruptcies
and foreclosures, and miscellaneous problems of small businesses.
Specialization carved out large segments of general office practice. The
leaders of the bar in 1900 were still mostly generalists. Elihu Root and
Joseph Hodges Choate made their mark as trial lawyers who tried a medley
of civil and criminal cases – wills, divorces, libels, murders – argued
constitutional cases before the Supreme Court, and served
as general business advisers. The growth of the regulatory state with its
arcana of complex technical administrative rules doomed the generalist in
corporate practice: a lawyer could spend a lifetime mastering a few sections
of the corporate tax code or securities laws and keeping up with new amendments
and regulations. Fields such as prosecution and patents were already
specialized by 1900; labor, tax, patents, antitrust, oil and gas, and securities
were highly specialized by mid-century. In the late 1970s, 22 percent of
Chicago lawyers worked in only one field, and by the late 1980s, that figure
had risen to 32 percent. Criminal defense and personal injury had become
specialty fields. But many solo and small practitioners still engaged in a
general family practice.
At the bottom were solo and small-firm practitioners making a precarious
living on the cliff’s edge of unemployment. They were the most
vulnerable to business cycle downturns; to competition (since their staple
work was real estate closings, wills, debt collection, auto accidents, and
divorces) from non-lawyer organizations, such as trust departments, title
insurance companies, and accounting firms; to reforms reducing the need
for lawyers, such as no-fault auto accident and no-fault divorce laws; and to
do-it-yourself forms, manuals, and software programs. Incomes of partners
and associates in corporate practice rose sharply after 1970; those of solo
practitioners declined by 30 percent between 1970 and 1985, while their
numbers were increasing (by 34 percent from 1980–88). One response to
these precarious market conditions was the organization of franchised law
firms, which attempted to realize scale efficiencies and product standardization
through consolidation and rationalization of legal work for middle-class
individuals.
Personal Injury Practice
The most dramatic development in the individual practice sector, certainly
the one with the broadest and most controversial social effects, was the rise
of a mass-tort class action specialty within the personal injury bar.
The first mass-tort cases – involving large numbers of victims injured by
the same cause – were cases arising from accidents: fires, floods from bursting
dams, sinkings of boats. Litigation of such disaster claims had unpromising
beginnings. Victims were often poor, hired local counsel to fight experienced
company lawyers, and faced daunting jurisdictional requirements,
procedural obstacles, and hostile courts. Only one civil suit was brought in
the wake of the 1911 Triangle Shirtwaist Fire caused by unsafe tenement
conditions, in which 145 New York sweatshop laborers died. Plaintiffs
rested their case after only one day and lost it; the remaining civil suits
settled for $75 each. Few lawyers could risk the costs of taking on such
suits, given the risks of loss and low settlements.
The tort bar began to organize itself after World War II. An association
of workers’ compensation lawyers founded in 1946 added tort practitioners
in the 1960s and eventually became the Association of Trial Lawyers of
America (ATLA). In 1951 it had 2,000 members; by 1971 it had 25,000
and had become a clearinghouse for information, a means for recruiting
cadres of lawyers to take on and coordinate mass-tort litigation, and a
powerful political interest group with a massive war chest for lobbying
legislatures and influencing judicial elections.
As the tort bar organized, it developed specialties. The first was airplane
accident law, a desultory practice area before the 1940s. Stuart Speiser pioneered
the role of coordinator and general contractor of teams of plaintiffs’
attorneys who represented families of air accident victims from different
jurisdictions, helping the lawyers consolidate cases and prepare a common
litigation strategy. In the 1960s, 288 lawyers, representing in all about 75
percent of the 1,500 plaintiffs who sued the Merrill Company for harmful
side effects caused by its anti-cholesterol drug MER/29, combined into a
group that drastically cut the costs of litigation by centralizing research
and document discovery, deposing witnesses, and finding and preparing
scientific experts.
Meanwhile changes in substantive law and procedure transformed the
landscape of tort disputes. The courts opened the door to “strict products
liability” claims against manufacturers of products alleged to have caused
plaintiffs’ injuries, which did not require proof that the company was negligent,
only that the product was “defective.” The “asbestos era” of the
federal court system began in 1973, when a federal appeals court ruled that
asbestos manufacturers were strictly liable (Borel v. Fibreboard). By 1987,
around 50,000 asbestos claims were pending in the nation’s courts; by 1992
there were 200,000 claims; and 6,000 to 9,000 new claims were being filed
annually. In 1986 the federal courts began to allow the aggregation of
asbestos claims as class actions (Jenkins v. Raymark). Patterns established
in asbestos litigation rapidly spread to other mass torts litigation, such as
DES, Bendectin, the Dalkon Shield, Agent Orange, breast implants, and
most recently and profitably, tobacco.
Mass-tort practice as it evolved gravitated to an ever smaller
number of specialized firms headed by celebrity “Kings of Torts,” such as
the Peter Angelos firm of Baltimore, which in the 1990s represented more
than 10,000 asbestos plaintiffs. In 1995 a Forbes list of the twenty-five trial
lawyers with the highest incomes included nine who specialized in mass-tort
products or accident cases. The mass-tort lawyers’ successes in court and
their growing wealth and political influence made them very controversial.
Manufacturers anxious to limit exposure to products liability verdicts and
conservative politicians eager to deprive Democrats of a reliable funding
base led “tort reform” movements to induce legislatures and judges to make
product liability suits harder to bring and to win and to limit damage awards
and attorneys’ fees. Tort reformers accused plaintiffs’ lawyers of growing fat
on the fees of an out-of-control “litigation explosion” of groundless claims
based on “junk science,” brought only to induce settlements and, by making
companies fearful of huge punitive damages awards, tending to stifle innovation
and cripple the American economy. From the 1980s onward conservative
politicians made tort reform and the crusade against plaintiffs’
lawyers a centerpiece of their campaigns. In riposte, friends and allies of
the plaintiffs’ bar portrayed plaintiffs’ lawyers as populist heroes willing to
fight the system of callous corporate wrongdoing on behalf of little guys,
who needed the occasional big verdict to cover the high risks of litigation
and “send corporate America a message.”
More disinterested observers told a less Manichean but just as troubling
story. The most serious defect of the tort system was not that it encouraged
too many meritless claims, but that it produced too few meritorious ones. Most injured
parties simply absorbed their losses without complaint; of those who consulted
lawyers, many were turned away because their case was not worth
enough to generate a substantial contingent fee. Punitive damages were
rarely awarded; when awarded they were usually a low multiple of compensatory
damages and, if high, were invariably reduced on appeal. Evidence
that fear of product liability had bad macroeconomic effects was weak to
non-existent. Clearly some mass-tort claims (like the Bendectin and, more
disputably, the Agent Orange and breast implant cases) were indeed based
on dubious science. In others (like tobacco) the bad science was generated
by the corporate defendants.
The biggest problem with mass-tort actions turned out to be that some
of them ill served not corporations, but the victims themselves. Corporate
lawyers came to welcome class actions as a means to consolidate and dispose
of all the claims against their clients. Plaintiffs’ lawyers acquired a strong
interest in colluding with their opponents to settle cases quickly for low
total damage figures, so they could earn extravagant fees for themselves
without having to do much work. Trial judges went along with the collusion
and with plans to prevent individual plaintiffs from “opting out” of class
actions and bringing suits on their own, because such arrangements made
cases manageable and reduced pressures on dockets. The administrative
costs, including lawyers’ fees, of adversary procedure in tort cases were
always distressingly high, likely to consume at least half and often more of
the total recovery. This fact alone kept most small individual claims out of
the tort system, because lawyers could not afford to litigate them. Yet for
all its high costs, the personal injury lawyer working for a contingent fee
remained the only practical means by which an ordinary individual could
confront a powerful corporate entity and effectively seek redress for injuries.
Such a person, however, increasingly needed protection from abuse by some
of his champions as well as his injurer.
Cause and Public Interest Lawyering
In the 1970s conservative public interest law groups emerged as rivals to
longer established liberal and left-wing groups. The Virginia lawyer Lewis
F. Powell, Jr. wrote a famous manifesto to the Chamber of Commerce in
1971, just before his appointment to the U.S. Supreme Court, complaining
that leftist opinions hostile to the capitalist system dominated the
academy, the press and, by means of public interest litigation, the courts.
He urged business to finance a counter-offensive. Out of this project eventually
flowed the tidal wave of conservative foundations, think tanks, John M.
Olin Foundation scholarships, programs, research funding, professorships
in Law and Economics in the law schools, and a new generation of public
interest law firms. In the view of firms like the Washington Legal Foundation
and Pacific Legal Foundation, what the public interest required was
dismantling inefficient regulation, especially anti-discrimination law and
health, safety, and environmental regulation, and a return to free-market
principles that would genuinely serve consumers and create wealth. Well
funded by business interests, such firms borrowed all the techniques of the
liberal groups, from intervention in agency proceedings to seeking judicial
review of agency action to challenging economic regulation as violating
the Commerce and Takings Clauses and the First Amendment; they scored
victories as conservatives increasingly occupied the judiciary and (for most
of 1968–2004) the executive.
Liberal public interest law groups also continued to proliferate, though
their financial support was always uncertain. In the 1980s a new specialty
attracted lawyers’ organizations, in response to the growing influence
of international human rights laws and treaties and both public and
non-governmental bodies reporting on violations (Helsinki Watch and
Charter 77, Amnesty International, Human Rights Watch, the State
Department’s human rights reports) and organizations to sanction them (the
European and Inter-American Courts of Human Rights, the International
Criminal Court, the South African Truth and Reconciliation Commission,
various special UN Tribunals for the former Yugoslavia, Rwanda, etc.). As
many Communist and military dictatorships collapsed in the 1980s, the
United States funded programs to bring the blessings not only of Western-style
democracy but of the rule of law to the ex-Communist and developing
world; these programs were well funded by the World Bank, the U.S. Agency
for International Development, and the ABA. Lawyers signed up by the
hundreds to join such programs.
Professionalism Imperiled, 1970–2000
Professionalism as a strategy for organizing occupations and justifying occupational
privileges reached its high tide from 1880 to 1960. Recall how
promoters of the professions had framed the goals of their projects: basing
practice standards on scientific learning; raising standards of education
and admission to practice; regulating ethics, competence, and discipline;
seeking primary rewards in recognition among peers for learning, craft,
and quality of client service and disdaining commercialism; maintaining
independence from non-professional outside controls over the quality, conduct,
and conditions of work; and finally, promoting public goods – in the
legal profession’s case the integrity of the framework of laws and procedures,
the improvement of the legal system, and universal access to justice.
By the 1960s, the professional ideal – and the attendant privileges and
authority – were under attack from the right, the left, and within the
professions’ own ranks. Left-wing cultural critics attacked the professions
as elitist conspiracies to exclude, dominate, exploit, and paternalistically
control social inferiors by mystifying professional knowledge. Right-wing
critics and economists attacked them as cartels designed to restrict entry
and fix prices. Lawyers were especially vulnerable to such critiques. Their
moral standing had always been somewhat dubious because one of their
jobs had been to put the best face on even unattractive clients and causes
and because they were suspected of overselling their competence to profit
from the misery of others. Valid or not, the critiques had a corrosive effect
on attempts to defend professional values, good as well as bad, in terms of
civic virtue or social trusteeship. The left-wing solution was lay empowerment
of consumers, entry of lay providers, and redistribution of social
and economic power. The right-wing solution, which generally prevailed,
was deregulation, increasing competition, and faith in market forces. On
balance, lawyers’ own behavior undermined the plausibility of some of their
professional claims more effectively than any outside critic could have done.
The legal profession did succeed in raising admissions standards, at some
cost to the promise of law as an avenue of upward mobility. Its self-regulatory
enforcement record – lax, unresponsive, self-protective, and never directed
against the upper bar – was a conspicuous failure. Under pressure of scandals,
bar associations came increasingly to share control of discipline with external
regulators: judges, new full-time disciplinary bureaucracies, regulatory
agencies such as the Internal Revenue Service and Securities and Exchange
Commission (which regulated by conditioning the right to practice before
them), new legislative controls such as consumer protection laws requiring
standardized contract terms and disclosure to clients, malpractice actions,
and insurers against malpractice and other risks, who sought to reduce the risks
of lawyers’ incompetence and misconduct. Malpractice claims doubled in the
seven years between 1979 and 1986, and the average settlement increased
from $3,000 to $45,000. The practice of law, almost completely unregulated
in 1900, was in 2000 hedged about by thickets of rules, some with
effective sanctions behind them.
As with collective self-regulation, so with control over work. After the
1970s many types of lawyers, like doctors, lost much of their residual discretion
to determine the terms, pace, and quality of their work, as they were
reclassified as subordinates within bureaucratic hierarchies. An especially
harried group were insurance defense lawyers, who now had to process their
cases according to rigid standardized protocols dictated by their employers or
by detailed contract terms imposed by clients or insurers.
Even lawyers at the top of the hierarchy, partners in large firms, had to submit
to close monitoring by clients. Time billing, introduced in the 1940s as
an internal accounting device for allocating costs among cases and clients,
had become a Taylorist instrument for monitoring and increasing lawyer
work output within the firm; as a result, associates padded hourly billings
to increase their chances of partnership, and firms padded billings to clients.
In response clients began to impose budget caps and to dictate instructions
on how to travel (coach, increasingly), and how many and which associates
they might use on a case. In turn inside corporate lawyers who hired firms
had to justify their legal budgets to their chief financial officers. But even in
the lower tiers of practice, cost-cutting franchised law offices crowded many
lawyers out of practice and imposed a strict work discipline on those who
remained by standardizing forms, transactions, and caseloads and enforcing
strict time accounting.
Another casualty of the period was the professional ideal of independence
from clienteles. The reformers of the 1870s and after looked to professional
organizations and norms to open some distance between themselves and
the more corrupt and unscrupulous tactics of their own business clients, by
defining their jobs so as to strengthen their independence from illegitimate
client demands. While lawyers were supposed to be zealous advocates of
legitimate client claims, they also aspired to be objective independent counselors,
discouraging clients from actions that were legally or morally dubious
and that might invite retribution from popular or political backlash. They
also tried to preserve their capacity to argue for general legal reforms and
changes that clients might not support.17 In its most grandiose moments,
the bar leadership aspired to be independent guardians of constitutional and
common law principle and statesmen guiding legal and legislative reform
in the public interest, rising above party and faction and the local and particular
interests of clienteles. For reasons explored earlier in this chapter, the
emerging material bases of lawyers’ practices precluded most of them from
17 Lawyers in the New York State Bar Association’s tax section, for example, consistently
promoted legislation to close tax loopholes that benefited the reformers’ clients, and
municipal bond lawyers in the City’s bar association ensured the adoption of an ethics
rule prohibiting “pay to play” contributions to politicians who could be clients.
taking up these exalted roles. But the ideal of independence persisted in
professional rhetoric and sporadically in lawyers’ actions. As late as 1960, a
study of Wall Street lawyers confirmed that the bar’s elite still gave at least
lip service to the ideal of the independent counselor and lawyer-statesman.
Even this rhetorical commitment, however, mostly vanished in the intense
competition for clients in the 1980s. The last thing most lawyers wanted
to advertise was their superior scruples as monitors of client conduct or
as proponents of legal reforms their clients might not welcome. Ironically,
lawyers in the most lucrative and prestigious specialties had less autonomy
from client controls than general practitioners at the bottom.
Lawyers also undercut their traditional claims to pursue criteria of craft
and service above those of the marketplace. Some of the bar’s more dubious
rules for expressing anti-commercial values, its bans on advertising and
minimum fee schedules, were struck down by the Supreme Court as violations
of the antitrust laws and the First Amendment,18 though the rules
against direct solicitation of clients survived challenge. More important,
lawyers began openly to flaunt purely commercial criteria of success. A new
legal press, led by The American Lawyer (1979–) and National Law Journal
(1978–), broke down law firms’ long-standing genteel reluctance to discuss
salaries and fees in public and with the firms’ eager connivance began to rank
them by profits-per-partner. Firms hired business consultants to improve
their profitability and marketing consultants to market their services to clients; they
began to reward rain-makers (partners who pulled in new business) rather
than highly skilled advocates or specialists with the largest shares of profits.
While Paul Cravath’s firm had forbidden its partners to invest in clients or
sit on their boards, lest their objectivity be impaired, the new firms eagerly
bought stock in their clients in exchange for services.
When the older rhetoric of professionalism resurfaced in this period, it
was usually to repel threats of competing professions and lay “unauthorized”
providers. By the 1990s, the most formidable challenges to American corporate
lawyers’ practice turf came from accounting firms employing lawyers
and giving tax and business consulting advice, foreign law firms such as vast
English solicitors’ offices, and proposals to permit “multidisciplinary practices”
(combines of lawyers, accountants, financial consultants, and others).
In the individual practice sector, the flow of lay services in the form of
advice books, do-it-yourself manuals, form books, and computer software
programs became a deluge that no bar group could stop. In the face of such
encroachments, lawyers appealed to a morality above mere commerce to justify
their monopolies of practice fields. But in the wake of their unabashed
18 Goldfarb v. Virginia State Bar, 421 U.S. 773 (1975); Bates v. State Bar of Arizona, 433 U.S.
350 (1977).
embrace of business criteria of success the appeals rang hollow. Lawyers have
never ranked high in public opinion surveys of occupations. In the 1980s
and 90s their reputation sank still further.19
CONCLUSION
The century began with ambitious efforts to establish the legal profession as
a distinct and powerful institutional force in American society, to increase
lawyers’ prestige and cultural authority, and by augmenting their influence
to promote the rule of law – a legalist vision of governance enforced through
neutral principles, rules, and expert systems by cadres of professionals specially
trained and experienced in legal and administrative sciences and the
procedures to make them effective.
In some respects the project was stunningly successful. The spheres of
governance through law and legal procedures, and those where legal expertise
was required or useful, expanded and multiplied. American-style models
of transactional planning and lawyering, dispute settlement, legally
mediated regulation, and even rights-seeking and rights-protecting public
interest law were spreading through the globe. But these very successes
created backlashes and doubts about the professional project. Lawyers’ law
was expensive and thus priced beyond the reach of all but wealthy
users. Litigation was perceived by almost everyone as a colossally wasteful
mode of dispute settlement. Legal-rights-seeking as a means of producing
social justice was questioned as ineffective or counterproductive for its
beneficiaries. Proliferating regulation provoked widespread business and
libertarian revolts. Professionalism and professional ideals were perceived
on both right and left as camouflage for a narrow economic self-interest.
Business lawyers scrambled to join the ranks of financial services businesses,
and now, without a distinctive product to sell, faced intensifying
competition from non-lawyers in similar trades and regulation from outside
agencies. Since the elites consistently put self-interest and loyalty to
primary clienteles over maintaining their profession’s independence and
serving the Republic, nobody took seriously their aspirations to be spokesmen
for a vision of the rule of law above politics and faction; by the 1980s
private lawyers had mostly ceased to pay those aspirations even lip service.
19 To a poll question, “Please tell me how you would rate the honesty and ethical standards
of people in these different fields,” 26 percent of respondents rated lawyers “Very High”
or “High” in 1977. This number slid to 13 percent in 1999 and then rose slightly to
18 percent in 2001. Meanwhile the public standing of the other traditional professions
was rising (physicians, from 51 to 66 percent; college teachers, from 46 to 58 percent;
engineers, from 46 to 60 percent), except for bankers (from 39 to 34 percent) and
journalists (from 33 to 29 percent). CNN/USA Today Gallup Poll, Nov. 26–27, 2001.
The plaintiffs’ bar began its career as the populist champion of the injured
weak, but at its apex grew wealthy and powerful at the expense of its own
clienteles. The traditional general practice solo practitioner, like the family
doctor, was threatened with obsolescence.
Traces of the older ideals survived and continued to attract some students,
if only a small minority, to the profession of law; the ideals had migrated from
elite business lawyers to public interest and international human rights
lawyers and non-governmental organizations. The dream of a universal rule
of law, a world blanketed by legal controls on war, genocide, corruption,
environmental damage, ethnic strife, and racial and gender hierarchies, had
never had so many lawyers and institutions working energetically toward
its practical achievement, even as every day’s headlines testified to the huge
obstacles to its realization and to new horrors and injustices to overcome.
4
the courts, federalism, and the federal
constitution, 1920–2000
edward a. purcell, jr.
The history of American federalism in the twentieth century falls into three
distinct periods. The era of post-Reconstruction federalism, which began in
the late nineteenth century, ended in the years after 1929 when a shattering
series of domestic and international crises combined with the innovative
presidency of Franklin D. Roosevelt to reorient the nation’s laws, politics,
and institutions. The resulting “New Deal Order” lasted for almost five
decades before crumbling in the century’s last quarter when massive social,
cultural, economic, and political changes combined with the dramatizing
presidency of Ronald Reagan to begin reorienting the system once again.
At century’s end, the nature and course of that emerging era remained
unsettled.
I. THE NATURE AND DYNAMICS OF AMERICAN FEDERALISM
With a de facto default rule favoring decentralization, American federalism
is a governmental system based on the existence of independent political
power at both state and national levels. Its essence lies, first, in the institutional
tensions that the Constitution structured between the two levels
of government, and second, in the complex processes of decision making
that the Constitution established to maintain satisfactory relations between
the two levels. Those processes were complex because they involved, on the
national side, three distinct and counterpoised branches of government
and, on the state side, a growing multitude of equal, independent, and
often conflicting governing units. In theory, and sometimes in practice,
national power served to foster economic integration and efficiency, facilitate
the development and enforcement of desirable uniform standards,
enable the people to deal effectively with problems national and international
in scope, protect the security and general welfare of the nation as
a whole, and safeguard liberty by checking the potential tyranny of local
majorities. Conversely, also in theory and sometimes in practice, state power
served to foster economic innovation and efficiency, nourish social and cultural
diversity, encourage democratic participation, facilitate the adoption
of narrow solutions tailored to special local problems, and safeguard liberty
by checking the potential tyranny of national majorities.
As a matter of historical development, American federalism gave rise to
a dynamic and fluid political system in which competing groups and coalitions
struggled for control of the nation’s diverse centers of governmental
power and used constitutional arguments to place decision-making authority
over contested issues in the level and branch of government that seemed,
at any given time, most likely to support their values, interests, and aspirations.
The claim of “state sovereignty,” for example, which limited or denied
the authority of the national government, served a variety of diverse groups
over the centuries: Jeffersonian Democrats in the 1790s, New England
Federalists during the War of 1812, South Carolina nullifiers in the 1830s,
Northern anti-slavery civil libertarians before the Civil War, and then from
Reconstruction to the late twentieth century those who defended racial
segregation and disenfranchisement. The pressures generated by successive
waves of such diverse groups and coalitions – themselves the products
of relentless social and economic change – drove the system’s evolution.
Certain widely shared cultural commitments – to republican government,
the common law, religious freedom, private property, and individual liberty
– combined with the idea of a written Constitution and the reality
of institutionally divided powers to constrain and channel that evolution.
But the system’s operations and assumptions continued to shift as changing
cultural values, social conditions, economic innovations, institutional
practices, legal theories, judicial decisions, and constitutional amendments
blurred or redrew the lines of state and federal authority.
In that long and complex historical process, one issue repeatedly emerged
as pivotal: what institutions or procedures existed to settle disputes over the
respective spheres of state and federal authority? Americans debated that
issue vigorously for eight decades and then, in the Civil War and its three
constitutional amendments, settled it in part. The national government,
not the states, held dispositive authority. Neither the war nor its resulting
constitutional amendments, however, answered two further questions:
which branch or branches of the federal government held that authority?
And how was the authority to be exercised? Much of the history of American
federalism after the Civil War revolved around the contested answers given
to those two questions, as the three federal branches – each responding to
the values and interests that dominated it at any given time – adopted
diverse and sometimes conflicting policies that led them to defer to state
prerogatives on some occasions and trump them on others.
Indeed, as American life became increasingly centralized and homogenized
in the late nineteenth and twentieth centuries, many of the distinctive
and authentically “local” values and interests that had originally
given the federal system its embedded social meaning withered or became
suspect. Some blended into emerging and widely shared national values
and interests; others grew attenuated or disappeared entirely; a few – most
obviously, those involving racial oppression – were explicitly repudiated
by new national majorities and constitutional amendments. The result was
that the ingrained cultural understandings of the late eighteenth and early
nineteenth centuries gradually disintegrated, the lived social meaning of
American federalism grew more amorphous and contestable, and the distinctively
local values and interests that the system protected increasingly
appeared either narrow and parochial or vague and abstract. Over the course
of the twentieth century the idea of American federalism as a normative concept
– that the Constitution set out clear lines that defined and distinguished
state and federal powers – grew ever more amorphous and manipulable.
Thus, the history of American federalism cannot be understood by focusing
solely on constitutional provisions or theories of federalism. The Constitution
provided a sound framework of government and a shrewd system
of institutionalized checks and balances, but it did not draw bright or generally
determinative lines of authority between state and federal power nor
specify any particular “balance” between them. Similarly, theories of federalism
provided a range of normative baselines, but their specific injunctions
were invariably construed diversely and contested sharply. Indeed, conflicting
views of federalism existed from the nation’s beginning, and the passing
years produced a smorgasbord of new variations, each inspired by and suffused
with the emerging values, interests, expectations, and preconceptions
of its advocates. The federal structure helped sustain the nation’s commitment
to limited government, cultural diversity, and individual liberty, but
its history can be understood fully only by examining how and why its
practical operations evolved, its political significance shifted, its social consequences
unfolded, and its ideological contours periodically eroded and
reformed.
Since the early decades of the nineteenth century, the prevailing theory
held that the Constitution established a system of “dual federalism.”
The principles attributed to the system were few. The national government
was one of limited and delegated powers only; the states were independent
sovereigns with exclusive authority over local matters reserved to
them by the Tenth Amendment; and the powers of the two governments
were limited to “separate spheres” and intended to serve as checks on one
another.
Although the actual practice of American federalism was always more
complicated than the theory of dual federalism implied, during the late
nineteenth and early twentieth century five accelerating developments substantially
reshaped the system. First, spectacular revolutions in transportation
and communications together with the ongoing processes of industrialization,
urbanization, westward expansion, and economic centralization
remade American society. What in 1789 had been a collection of geographically
rooted, locally oriented, and culturally diverse island communities
had by 1920 become an increasingly mobile, nationally oriented, and economically
and culturally integrated nation. Ever widening areas of life were
coming to have national significance, and Americans from coast to coast
increasingly faced similar problems that flooded beyond the ability of individual
states to remedy.
Second, the powerful nineteenth-century belief that the primary function
of government was to protect private property and economic freedom
was weakening. Since the Civil War governments at all levels had become
increasingly active in attempting to deal with the massive social disruptions
that came with urbanization and industrialization. Repeatedly the states
increased taxes and expanded their activities, legislating over a widening
variety of social and economic problems and establishing administrative
agencies to regulate railroads, insurance companies, and many other types
of business. They raised their funding for local governments, for example,
from barely $50 million in 1902 to almost $600 million by 1927.
Third, the federal government was growing at an even more accelerated
rate. Although the states still employed several times as many workers and
spent more than twice as much money as the federal government, the balance
of power between the two was shifting. As economic and cultural centralization
proceeded, the political consensus that had tilted strongly toward
decentralization in the early nineteenth century was moving by century’s
end toward support of more and broader government action at the national
level. In 1887 the federal government began to use its authority over interstate
commerce to regulate the new national economy, and by the second
decade of the twentieth century it had asserted extensive national control
over interstate transportation and communications while subjecting other
interstate businesses to an expanding variety of new federal regulations.
Fourth, running against that nationalizing current, a vehement reaction
against Reconstruction among white Americans had severely constrained
the power of the federal government to protect the rights of African Americans.
Notwithstanding the Civil War amendments, an informal national
settlement in the century’s last decades had successfully redefined most matters
involving black civil and political rights as local issues that properly
fell within the exclusive authority of the states. Increasingly, the cries of
“states’ rights,” “state sovereignty,” and the “principles of federalism” were
identified with the establishment and preservation of racial segregation and
disenfranchisement.
Finally, the power of the federal judiciary was growing relative to that of
both Congress and the states, and by the early twentieth century the U.S.
Supreme Court had emerged as the ultimate – if still sharply contested –
authority on the law of both American federalism and the new national
economy. The nation’s commitment to law and the ideal of limited constitutional
government had led Americans gradually to embrace the Court –
“the Court” as they came to call it – and its umpiring role, while the structure
of the federal judiciary – like that of the executive branch but unlike
that of Congress – allowed the Court to act relatively quickly and decisively.
The Court determined the extent to which any government could regulate
business and property as well as the particular level of government that
could regulate them. On the former issue, it held that a narrow range of
economic activities “affected with a public interest” were subject to extensive
regulation, but that most business and property remained “private”
and subject only to minimal regulation. On the latter issue, it held that
specific economic activities found to be “closely” or “directly” related to
interstate commerce were national in scope and hence subject to federal
control under the Commerce Clause but that the bulk of such activities
remained local and subject to regulation only by the states. As a general
matter, the Court’s rulings gradually extended the powers of the federal
government while restricting the power of the states to intrude into the
workings of the burgeoning national market. To enforce its mandate, the
Court reshaped the jurisdiction of the lower federal courts to make them
more effective instruments of national judicial authority, turning them from
disputes between private parties over issues of local law to suits that challenged
government action or raised issues of national law. Increasingly, too,
the Court exercised its burgeoning power. In seventy-one years up to 1860
it had held only 2 federal and 60 state statutes unconstitutional, but in a
mere thirty-nine years from 1898 to 1937 it voided 50 federal and 400
state laws.
II. NATIONALIZATION AND THE DECLINE OF
POST-RECONSTRUCTION FEDERALISM: FROM
WORLD WAR TO THE GREAT DEPRESSION
When 1920 dawned, American federalism seemed on the verge of even more
substantial change. Pre-war Progressivism had focused American politics
on the national level, and constitutional amendments authorizing a federal
income tax and the popular election of senators had expanded federal power
enormously while curtailing the power of state legislatures. Both amendments
gave the American people a new and direct involvement in their
national government, while the income tax provision allowed the federal
government to raise virtually unlimited amounts of money, paving the way
for explosive growth in the future. The Supreme Court, too, had seemed
willing to approve some widening assertions of national power by stretching
the limiting categories of business “affected with a public interest” and
activities “closely” related to interstate commerce.
Most dramatic were the changes that followed American entry into World
War I. Relying principally on their war powers, Congress and Democratic
President Woodrow Wilson exercised unparalleled authority. They established
national conscription, took control of the nation’s transportation and
communications systems, imposed tight restrictions on the distribution of
food and fuel, asserted authority over relations between labor and management,
and expanded the federal income tax system drastically. In addition,
through the Espionage and Sedition Acts they prohibited a variety of activities
– including speech critical of the government – that might interfere
with the war effort. They criminalized, for example, “disloyal, profane,
scurrilous, or abusive language” directed at the Constitution, the armed
forces, the government, or the flag.1 Perhaps most arresting, by statute and
then by constitutional amendment Congress and the states prohibited the
manufacture, sale, and transportation of alcoholic beverages in the United
States. Ratified in 1919, the Eighteenth Amendment conferred on the federal
government authority to enforce nationwide Prohibition and expanded
its power into areas that had previously been considered both local and
private.
The war challenged the structure of post-Reconstruction federalism in
other ways as well. Politically, it led to the adoption of yet another nationalizing
constitutional amendment, the Nineteenth, which prohibited the
states from denying the vote to women and conferred on Congress the power
to enforce its mandate. Institutionally, the war induced the Supreme Court
to back away from its umpiring role and watch passively as Congress and
the president exercised sweeping war powers. Socially, the war’s proclaimed
goal of making “the world safe for democracy” even hinted at the possibility
of change in the nation’s racial status quo.
Although post-Reconstruction federalism trembled, it did not crumble.
The end of the war brought a series of bitter labor strikes, a brief but virulent
Red Scare, repeated outbreaks of anti-black violence, rapidly rising prices
followed by a short depression, and spreading resentment at the administration’s
continued use and abuse of its war powers. Those events destroyed
1 Act of May 16, 1918, ch. 75, 40 Stat. 553.
wartime unity, fragmented Progressivism, and generated a powerful desire
for a return to a more stable and tranquil order. In 1920 the reaction gave
the Republicans control of both Congress and the presidency. With the help
of returning prosperity, the Republicans maintained that hold for a decade,
ensuring a government of order, conservatism, business domination, and
minimal economic regulation. Under their rule, Republicans announced,
America was entering a “New Era” of sustained economic progress and
prosperity. For almost a decade their promise seemed golden.
The national turnaround in 1920 induced the Court to reassert its authority.
In cautious dicta it began to suggest judicially enforceable limits on
federal war powers, and in 1921 it invalidated on vagueness grounds the
statute that had authorized federal control over food during and after the war.
Then, within two years, Warren Harding, the new Republican president,
appointed four new justices – including ex-President William Howard Taft
as Chief Justice – who were more conservative and property conscious than
their predecessors. The stage was set for a period of conservative judicial
activism.
The new Taft Court moved quickly to ensure social stability, impose judicial
limitations on both state and federal governments, and protect business,
property, and the expanding national market. In less than a decade
it invalidated legislation – in most cases measures passed by the states –
in approximately 140 decisions, a rate far higher than that of any previous
Court. Its efforts were unwittingly enhanced by a seemingly technical
jurisdictional statute enacted in 1925. The so-called Judges’ Bill made the
Court’s appellate jurisdiction almost wholly discretionary, thereby enabling
it to decide freely not just how, but when and where, it would assert its
authority. After 1925 the Court’s role in American government continued
to expand, and its efforts became more purposeful, as shifting coalitions of
justices learned to use the Court’s new jurisdictional discretion to set their
own agendas.
Three of the Taft Court’s early decisions revealed its determination to
impose limits on government. Pennsylvania Coal Co. v. Mahon (1922) limited
both state and federal power over private property by holding that
regulatory actions that went “too far” constituted “takings” that, absent
compensation, were invalid under the Fifth and Fourteenth Amendments.2
Similarly, Adkins v. Children’s Hospital (1923) invalidated a minimum wage
law, a type of statute the Court’s conservative justices considered especially
obnoxious. Adkins proclaimed freedom of contract “the general rule” and
government regulation an “exception” confined to a few narrow categories
of specially “public” matters.3 As much as the two cases demonstrated the
2 260 U.S. 393, 415. 3 261 U.S. 525, 546.
Court’s determination to limit government regulation, however, they also
suggested the difficulty the justices faced in their task. In each, the Court
acknowledged that the limiting categories it used were incapable of precise
delineation, a confession that highlighted the extent to which the lines it
drew were the product, not simply of the Constitution, but of the dominant
attitudes of the era and the specific values of the justices themselves.
The third decision, Bailey v. Drexel Furniture Co. (1922), was directed
solely at the federal government and sought to infuse new life into the
idea of dual federalism. Only four years earlier the Court had struck down
the first federal Child Labor Law, ruling in Hammer v. Dagenhart (1918)
that the commerce power did not allow Congress to ban the products of
child labor from interstate commerce. Though seemingly inconsistent with
prior decisions, Hammer voided the child labor statute on the ground that it
was not a true effort to regulate interstate commerce, but rather a disguised
attempt to intrude into a “local” activity – the production of goods – that the
Tenth Amendment reserved to the states. Amid a popular outcry against
the decision, Congress responded with the Child Labor Tax Act, relying
on the federal taxing power to impose special charges on employers who
used child labor. Drexel Furniture declared the second federal child labor
act another subterfuge, one intended not to raise revenue but to regulate a
local matter. Following Hammer, it held the act invalid as a violation of the
Tenth Amendment. It was “the high duty of this court” to protect “local
self-government” from “national power” and to preserve the federal system
that, the justices declared, was “the ark of our covenant.” If it failed to block
the Child Labor Tax Law, Drexel Furniture warned, Congress could use its
taxing power “to take over to its control any one of the great number of
subjects of public interest” that the Constitution reserved to the states.4
Like earlier Courts, however, the Taft Court shaded its federalism decisions
to fit its social values. It ignored Hammer when Congress passed a
statute prohibiting the movement of stolen vehicles in interstate commerce,
avoided Drexel Furniture when Congress used its taxing power to control narcotics,
and construed the commerce power with exceptional breadth when
business invoked the federal antitrust laws to break a small union’s boycott
of local employers. The Court stretched national power in the first case to
protect private property, in the second to allow government to control what
the justices viewed as a moral and social evil, and in the third to check a
potentially powerful weapon of organized labor.
The particular social values that the Taft Court protected quickly generated
political controversy. Provoking strong opposition from Progressives
and organized labor, its decisions sparked a variety of proposals for “curbing”
4 259 U.S. 20, 37–38.
the Court by restricting its jurisdiction or requiring a supermajority vote of
six or seven justices to invalidate legislation. In 1924 Republican Senator
Robert M. LaFollette of Wisconsin helped organize a new Progressive Party
and ran for president on a platform that indicted the Court as an antiprogressive
and pro-business partisan. He proposed a constitutional amendment
that would authorize Congress to override any decision invalidating
one of its statutes. Rising to the Court’s defense, most Republicans and
Democrats castigated the proposal as a radical and destructive assault on
the foundations of American federalism. In the election LaFollette did well
for a third-party candidate, but he was overwhelmed in a Republican landslide.
While the election revealed widespread hostility to the Taft Court, it
also suggested that the great majority of Americans supported the Court’s
institutional role, even if many of them disliked some of its individual
decisions.
Responding to LaFollette and other critics, Charles Warren, the nation’s
preeminent historian of the Supreme Court, seemed to speak for most Americans
– even many Progressives – when he praised the Court for playing
an essential institutional role in the federal system. The “existence of the
American form of government – a federal republic with limited national
powers – implies and requires for its preservation the existence of a Supreme
Court,” he declared. “The retention of such a republic is inseparably bound
up with the retention of a Court having authority to enforce the limitation
of national powers.” Warren articulated a belief that had been spreading
since the mid-nineteenth century and that had become sacred writ among
conservatives by the early twentieth: the Supreme Court was the anchor of
American government, the paramount bulwark protecting the American
people and their liberties from the dangers posed by an otherwise uncontrollable
and centralizing national government. “It is, of course, possible to
have a republic without a Supreme Court,” Warren explained; “but it will
be a republic with a consolidated and autocratic government, a government
in which the States and the citizens will possess no right or power save such
as Congress, in its absolute discretion, sees fit to leave to them.”5
Although Taft and a majority of his Court shared both Warren’s suspicions
of Congress and his conclusions about the Court’s essential role, they
nevertheless sought to accommodate what they considered the reasonable
demands for more active government that flowed from the continuing centralization
of American social and economic life. Cautiously, they continued
the process of expanding federal power under the Commerce Clause and,
in a more innovative move, approved a broadened use of federal taxing
and spending powers. In Massachusetts v. Mellon (1923) the Court upheld
5 Charles Warren, Congress, the Constitution, and the Supreme Court (Boston, 1925), 4, 5.
a statute that provided federal funds for state infant and maternity care
programs. The decision in effect sanctioned the federal government’s power
to offer monetary grants to states conditioned on their acceptance of federal
use restrictions, and it thereby allowed Congress to legislate – albeit
indirectly – over matters that seemed entirely “local.” In the 1920s such
federal grants were few in number and small in scale, but during the next
half-century they would expand dramatically.
The Taft Court also extended federal judicial power over the states by
expanding the meaning of “liberty” in the Fourteenth Amendment. On one
front it voided state statutes that restricted the educational opportunities
of children. The Court held that the amendment protected certain personal
and familial rights, including the right of parents to rear and educate their
children as they wished. On a second front the Court began to consider
the claim that the First Amendment right of free speech also constrained
the states. Successful prosecutions under the Sedition and Espionage Acts
had provoked powerful dissents from Justices Oliver Wendell Holmes,
Jr. and Louis D. Brandeis; and, after the postwar hysteria had dissipated,
many Americans came to believe that governmental power to punish speech
should be limited more tightly. In Gitlow v. New York (1925) the Court
announced that the right of free speech recognized by the First Amendment
was part of the “liberty” protected by the Fourteenth Amendment and,
consequently, was binding on the states as well as the federal government.
Although the Court’s decisions in these areas were few, they created a rich
seedbed for the future.
Conversely, considering the rights of African Americans, the Taft Court
left post-Reconstruction federalism essentially unchanged. Refusing to
question racial segregation and disenfranchisement, it protected African
American rights only in the most outrageous and exceptional cases. In one,
where it granted habeas corpus relief to an African American sentenced
to death in a Southern state court, it could not ignore the fact that the
defendant had been convicted on unsubstantiated charges by an all-white
jury that had been surrounded and intimidated by an angry white mob. In
another, where it invalidated an “all-white” Texas primary election system,
it could not deny the explicitly racial nature of the legal discrimination or
its negation of the fundamental constitutional right of all citizens to vote.
In each case, however, the Court stressed the narrowness of its decision.
Federal habeas corpus was rarely available, it declared, and criminal matters
were ordinarily local issues for the states alone to resolve. Similarly,
the all-white primary was unconstitutional solely because its racially discriminatory
nature was explicitly written into state law. Indeed, a decade
later the Court unanimously approved a slightly more indirect version of
the all-white state primary, one that was equally effective in maintaining
black disenfranchisement but more cleverly designed as a matter of reigning
constitutional law.
For their part, the states in the 1920s continued to set policy not only in
matters concerning race but also in most other areas that affected daily life,
and they continued as well to provide most of the government services that
Americans received. During the 1920s the states accounted for almost three-quarters
of all public spending and two-thirds of the taxes collected. While
a few sought to sustain the tradition of pre-war reform, most conformed to
the conservative national mood that underwrote the Republicans’ New Era.
Largely abandoning efforts to regulate business and enact progressive social
legislation, they sought to trim government regulation and concentrated
much of their spending on highway construction to meet the exploding
demands created by the automobile. Indicative of the political mood, the
states raised most of their highway money through regressive gasoline taxes,
which by 1929 accounted for 25 percent of their total tax receipts. Indeed,
while thirteen states had enacted mildly progressive income tax laws in the
decade after 1911, during the New Era only one state, New Hampshire,
adopted such a tax. As a general matter, the governments of both states and
nation seemed in accord on the basic issues of social and economic policy.
Both seemed content, for the most part, to keep a low profile and give
business its head.
III. FROM THE GREAT ECONOMIC TO THE GREAT
PSYCHOLOGICAL DEPRESSION: NATIONALIZING AND
RECONCEPTUALIZING LIBERTY AND EQUALITY, 1930s–1970s
The year 1929 witnessed the onset of the decade-long and world-wide Great
Depression. Causing massive disruptions and hardships, the Depression
challenged the capacities of democratic governments throughout the world.
The resulting turmoil paved the way for Adolf Hitler to seize power in
Germany, energized the forces of international Communism, and ultimately
helped bring on a second and far more destructive world war. In the United
States it gave birth to the New Deal and, together with the war and Cold
War that followed, transformed American federalism.
The Great Depression and the Foundations of the New Deal Order
The ravages of unemployment, bankruptcies, foreclosures, bank failures,
lost savings, and crushed hopes savaged all classes and regions. Those identified
with the roseate New Era of the 1920s – primarily business, the Republican
Party, and the federal judiciary – quickly became objects of anger and
distrust. Governments at all levels tried to respond to the emergency. State
and local agencies, however, could provide neither the relief nor the structural
reforms that seemed necessary. By 1931 their resources were exhausted,
and the national and international scope of the ever-deepening crisis was
undeniable. The federal government under Republican President Herbert
Hoover became increasingly active, but it furnished far too little in the
way of either money or leadership. The experience taught Americans two
fundamental lessons: that a massive governmental response was necessary
and that only national action could possibly be adequate.
From 1930 to 1936 four successive elections repudiated the Republicans,
and after 1932 the Democrats firmly controlled both the legislative
and executive branches of the federal government. President Franklin D.
Roosevelt’s New Deal initiated a wide range of efforts to provide emergency
relief, restructure and stimulate the economy, and reform the nation’s financial
institutions. Although the administration worked closely with state
and local governments, political power shifted decisively to the federal
level. The National Industrial Recovery Act (NIRA) and the Agricultural
Adjustment Act (AAA), for example, the New Deal’s major initial efforts to
reorganize and revive the economy, imposed sweeping federal controls and
reached extensively into matters of industrial and agricultural production
that hitherto had seemed both local and private.
While the conservative orientation of the federal judiciary clouded
the future, it seemed possible that the New Deal might proceed without
encountering fatal constitutional obstacles. The Taft Court had been
split between six conservatives and three progressives, but that lineup had
changed in 1930 when Taft and one of his conservative colleagues died.
Charles Evans Hughes, a relatively progressive Republican, became Chief
Justice, and the moderate Republican, Owen J. Roberts, filled the second
opening. In the early 1930s the two new justices voted with the three
progressives in a number of critical cases, and they seemed to have tipped
the judicial balance. The Court applied the Fourteenth Amendment to safeguard
freedom of speech and provide some protection for African Americans
in Southern state courts, and it gave broad constructions to both the commerce
power and the category of business “affected with a public interest.”
Further, in two sharply divided 5–4 decisions – with both Hughes and
Roberts joining the Court’s three progressives – it recognized the need for
both state and federal governments to have emergency powers to combat
the depression.
If the Hughes Court was different from the Taft Court, however, it
nonetheless remained committed to enforcing limits on economic regulation
by both the states and the federal government. In early 1935 it invalidated
a part of the NIRA and then began a series of rulings – with Roberts
and sometimes Hughes joining the four conservatives – that checked state
and federal regulatory power and, in the process, declared both the AAA and
the remainder of the NIRA unconstitutional. Invoking the Tenth Amendment
to invalidate another New Deal measure, Roberts and the four conservatives
emphasized that “every addition to the national legislative power
to some extent detracts from or invades the power of the states.”6
While the anti-New Deal majority invoked the idea of federalism, the
dissenters often did the same. Illustrating the intrinsically double-edged
nature of the concept, Justice Brandeis, the Court’s leading progressive,
deployed it to undermine the conservative majority. Excessive centralization
could flow not only from Congress, he warned in 1932, but from the
federal judiciary as well. In voiding the reasonable social and economic
regulations that the states attempted, Brandeis declared, the Court was
not exercising “the function of judicial review, but the function of a super-legislature.”
Its anti-progressive decisions unwisely restricted the states and
improperly centralized American government. Moreover, he charged, the
Court’s decisions negated a signal virtue of American federalism. “It is one
of the happy incidents of the federal system that a single courageous State
may, if its citizens choose, serve as a laboratory,” Brandeis explained, “and
try novel social and economic experiments without risk to the rest of the
country.” Confronted by “an emergency more serious than war,” Americans
had the right to experiment with a variety of possible remedies, and
the nation’s federal system was designed to allow such diverse and creative
efforts.7 Turning the tables on the conservative majority, Brandeis used
his progressive theory of “experimentalist” federalism to indict the Court
itself as a centralizing force that was obstructing the federal system’s proper
operation.
Not surprisingly, the double-edged nature of American federalism provided
the Court’s anti-progressive majority with a ready response. The
states could “indulge in experimental legislation,” Justice George Sutherland
replied for the conservative majority, but they could not “transcend the
limitations imposed upon them by the federal Constitution.” National limits
existed and controlled, and the Court itself was the institution that identified
and applied those limits. “The principle is embedded in our constitutional
system,” he declared, “that there are certain essentials of liberty with
which the state is not entitled to dispense in the interest of experiments.”8
Thus, the Supreme Court – the ostensible bulwark of federalism – once
6 Carter v. Carter Coal Co., 298 U.S. 238, 294–95 (1936).
7 New State Ice Co. v. Liebmann, 285 U.S. 262, 280, 300, 306, 311 (1932) (Brandeis,
J., dissenting, joined by Stone, J.). Justice Cardozo, the third “progressive,” did not
participate in the decision.
8 New State Ice Co., 279, 280 (1932) (Sutherland, J.).
again served not as the defender of state autonomy but as an agent of
national power.
The Court’s anti-New Deal decisions set up one of the most famous
episodes in its history, the “Constitutional Revolution of 1937.” The standard
tale is familiar and the storyline dramatic. Overwhelmingly reelected
with crushing Democratic majorities in both Houses of Congress, Roosevelt
stunned the nation with his proposal to “pack” the Supreme Court by adding
one new justice, up to a total of six, for every member of the Court over
the age of seventy. Then, while Congress and the nation debated the plan,
the Court suddenly seemed to change its position. In a series of 5–4 decisions
– Hughes and Roberts joining the three progressives – it discarded
the doctrine of liberty of contract and drastically broadened federal power.
Over the next few years the Court’s four conservatives left the bench, and the
president replaced them with loyal New Dealers who extended the changes
the Court had begun in the spring of 1937.
The traditional story over-inflates the role of the Court-packing plan and
oversimplifies the processes of constitutional change. The label “revolution,”
moreover, obscures complexities. There was continuity as well as change
in the Court’s decisions, and many of the innovations that occurred had
roots in earlier periods and witnessed their full flowering only in later
ones. In spite of the qualifications necessary, however, the traditional story
highlights a basic fact: the New Deal years brought fundamental
and far-reaching changes to the federal system.
First, the New Deal altered the way the system functioned. Centralizing
many areas of American life, a dozen path-breaking measures asserted
new or expanded federal authority over the nation’s economy and financial
system. The National Labor Relations Act, for example, which the Court
upheld under a broadened commerce power, extended federal regulatory
authority to the employment relationship and guaranteed labor the right
to organize and bargain collectively. The result was the centralization of
government labor policy, the preemption of many state laws considered
hostile to workers, and the transformation of organized labor into a newly
powerful and nationalizing force in American politics. Similarly, the Social
Security Act, which the Court upheld under a broad construction of the
spending and taxing powers, established the institutional foundations for a
limited national welfare state. The act placed special taxes on workers and
employers, created a variety of federal social support programs, and used
conditional grants to enlist state participation and impose federal standards
on their operation.
In addition, the New Deal moved the federal government into a widening
range of previously local areas. It established agencies to insure individual
home mortgages and private bank accounts, for example, and it
funded a series of massive projects to construct local public facilities and
provide employment for millions. Using its power to tax and spend, it provided
grants to states for a variety of new programs and raised the amounts
involved into the billions of dollars. The grants extended federal involvement
into such previously local areas as employment counseling, health
care, public housing, conservation, slum clearance, social welfare, and child
care programs.
Numbers told much of the story. In 1913 state and local governments
had spent more than twice as much as the federal government, but by 1942
their spending amounted to barely a quarter of the national total. Federal
expenditures skyrocketed from less than 30 percent to almost 80 percent of
total government spending in the United States. Similarly, in 1929 federal
grants to state and local agencies had stood at less than $100 million, but
after 1935 they averaged more than a billion dollars a year.
Further, the New Deal altered the functioning relationship between federal
and state governments. As growing federal financing made national
direction seem increasingly appropriate, the federal government began to
expand its administrative capacities and enforce tighter and more detailed
controls over its grants. Some of the conditions it imposed began to regulate
not just spending but also the operations of the state and local government
agencies that administered the grant programs. Further, the rapid
expansion of federal-state grant programs began to alter the politics of
intergovernmental relations. It nourished larger bureaucracies at all levels
of government; intermixed the operations and interests of the federal,
state, and local officials who administered them; and began to create new
interest groups made up of program beneficiaries and their varied political
supporters. Still embryonic in the late 1930s, those institutional changes
would accelerate in the coming decades and increasingly reshape the de
facto operations of American federalism.
The New Deal, moreover, tipped the balance of the federal system even
more by expanding the institutional authority of the national executive.
Roosevelt broadened the power of the presidency by providing a charismatic
image of national leadership, assuming a major role in initiating
and securing passage of legislation, and by boldly exercising his authority
to issue executive orders. He also strengthened the institutional resources
of the presidency. Although Congress refused to adopt his sweeping plan
to reorganize the executive branch, in 1939 it established the Executive
Office of the President, providing an expanded staff and other resources that
allowed the president to exert greater control over the executive branch and
to project his policy decisions more effectively.
The second major change that the New Deal brought was to inspire substantial
changes in constitutional law that allowed governments at all levels
to assert expanded regulatory powers. Most obvious, the post-1937 Court
stretched federal legislative power far beyond its prior limits. In United
States v. Darby (1941) it overruled Hammer v. Dagenhart and renounced
the idea that the Tenth Amendment created a substantive barrier against
national power. The Tenth Amendment, it declared, could never block an
action that was otherwise within the constitutional powers of the national
government. Further, the Court broadened the commerce power to allow
far-reaching regulation of economic activities. In the late nineteenth century
it had held that the “production” of goods was not “commerce” but a
local activity immune from Congressional reach, and in the early decades of
the twentieth century it had maintained that distinction while expanding
the types of local activities that were sufficiently “close” to interstate commerce
to come within Congressional power. After 1937 it found an ever
wider range of activities falling within that power, and in 1942 it discarded
both the close relationship test and the distinction between “production”
and “commerce.” InWickard v. Filburn (1942) the Court held that Congress
could regulate any activity that – as part of the aggregate of all such activity –
was likely to have some practical effect on interstate commerce. Under that
construction the commerce power seemed capable of reaching almost anything.
Finally, going beyond Massachusetts v. Mellon, the Court construed the
Taxing, Spending, and General Welfare Clauses with exceptional breadth.
It held that they constituted independent grants of power, authorized taxing
and spending for the broadest purposes of national welfare, and allowed
the federal government to make grants to the states contingent on the
states’ acceptance of federal conditions and limitations. Such restrictions,
the Court ruled, neither coerced the states nor invaded any of their reserved
rights.
Similarly, as the international situation grew ominous in the late 1930s
and Roosevelt moved toward a more activist foreign policy, the Court
enhanced the powers of the president over the nation’s foreign affairs. It
ruled that the nation’s “powers of external sovereignty”9 lay in the executive
branch, existed independent of the Constitution, and operated free of restriction
from any reserved rights of the states. In a striking decision in 1937
it held that the president had authority to make “executive agreements”
without Senate approval and that such agreements trumped otherwise valid
state laws. Thus, as foreign policy emerged as a newly dominant concern in
the late 1930s, the expansion of presidential power accelerated even more
rapidly, bringing larger areas of American life under federal authority and,
in an increasingly vital area of national concern, edging the states toward
the periphery.
9 United States v. Curtiss-Wright Export Corp., 299 U.S. 304, 318 (1936).
While constitutional changes during the New Deal years substantially
expanded federal power, they also broadened state regulatory authority. The
Court narrowed its use of both federal preemption and the negative Commerce
Clause to allow states an expanded role in regulating economic activities,
made state rather than federal common law controlling in the national
courts on issues of state-created rights, and in a variety of cases instructed
the lower federal courts to defer to the proceedings of state courts and
administrative agencies. Further, when it abolished the doctrines of substantive
due process and liberty of contract, the Court freed state as well as
federal legislative power. In West Coast Hotel Co. v. Parrish (1937) it overruled
Adkins v. Children’s Hospital and upheld the authority of states to enact
minimum wage statutes for women, substantially enlarging their general
police powers. The states, moreover, were not shy about using their new powers, substantially extending their regulatory, service, and welfare activities.
In 1913 state and local governments had raised and spent approximately
$1.8 billion, but by the early 1940s the comparable number was five times
that amount. In addition, one of the most striking, if indirect, results of
the New Deal was the adoption in 1933 of the Twenty-First Amendment,
which repealed the Prohibition amendment, thereby eliminating a major
grant of federal authority and restoring power to the states.
The third major change that the New Deal brought was the transformation
of the federal judiciary. Roosevelt restaffed the lower courts with
appointees sympathetic to his policies, and between 1937 and 1943 he
reoriented the Supreme Court by filling seven of its seats with administration
loyalists. The new judges, in turn, began to reshape federal law in
line with the goals and values of the New Deal. Some maintained that they
were merely casting off crabbed doctrinal accretions from the late nineteenth
century and restoring the expansive constitutional principles that
the Founders had originally intended. Others began to articulate a new
attitude toward constitutional law. They advanced the idea that the Constitution
was a flexible, practical, and even “living” instrument. The Founders
had used broad and adaptive terms, they argued, so that Americans would
be able to respond effectively to future problems as the changing demands
of their well-being required.
Drawing on those ideas and their New Deal sympathies, federal judges
began to infuse new meanings into the constitutional ideals of liberty and
equality. They began to give increased protection to the kinds of “personal”
liberties that they believed all individuals should enjoy in a democratic
society while downgrading the economic liberties that accrued, as a practical
matter, primarily to the benefit of large corporations and the economically
powerful. Further, they sought to move beyond mere formal legal equality
and nourish a greater practical equality by showing, often though surely
not invariably, a special solicitude to individuals and groups that were weak
or disadvantaged – African Americans, workers, consumers, labor unions,
political dissenters, victims of industrial injury, and unpopular ethnic and
religious minorities.
Haltingly and somewhat erratically, the post-1937 Court floated a variety
of constitutional theories to justify its shifting social orientation, including
the idea that the Constitution required it to provide special protection for
rights that were “vital to the maintenance of democratic institutions” or
that were so “fundamental” as to be “implicit in the concept of ordered liberty.”
10 Although the Court did not consistently apply any single theory,
one of those it suggested would – decades later and in the wake of the Warren
Court – become particularly influential. When normal democratic political
processes were working and citizens had fair opportunities to influence their
governments, five justices declared in United States v. Carolene Products Co.
(1938), the Court should defer to decisions of the political branches. Conversely,
when normal democratic processes were blocked or when they led
to systemic abuses against helpless minorities, the Court should intervene
to remedy the situation. Translating theory into doctrine, Carolene Products
suggested that judicial review should operate on two tracks. When the
Court reviewed ordinary economic regulations that resulted from normal
political competition and compromise, it would apply a “rational basis”
test, upholding government action if the action bore a reasonable relation
to some legitimate government end. When, however, it reviewed cases
involving the denial of fundamental non-economic rights or discrimination
against “discrete and insular minorities” – situations in which ordinary
democratic processes had failed to work properly – the Court would apply
a “stricter scrutiny,” an inquiry that would validate government actions
only on a showing that the actions were narrowly tailored and essential to
achieve a compelling governmental goal.11

10 Schneider v. Irvington, 308 U.S. 147, 161 (1939); Palko v. Connecticut, 302 U.S. 319, 325 (1937).
11 United States v. Carolene Products Co., 304 U.S. 144, 152–53 n.4 (1938).
Regardless of its varied justifications and sometimes contradictory rulings,
the post-1937 Court was proposing itself as the protector of abused
individuals and minorities, and, in so doing, it was also turning away
from its earlier role as umpire of the federal system. On the ground that fair
democratic politics should ordinarily prevail and that the legislative branch
represented the states as well as the people, it accepted the principle that
Congress was ordinarily the proper institution to determine whether and to
what extent federal power should be exercised. Similarly, on the ground that
the president had vast authority and discretion in the conduct of foreign
relations, it increasingly deferred to executive decisions that implicated foreign
policy concerns. The altered role the Court sketched would help define
the triple tracks of governmental centralization that marked the years after
1937. In economic matters Congress would exercise sweeping national legislative
authority; in foreign policy matters the president would exercise
an ever-growing and often unchecked executive discretion; and in certain
areas involving non-economic social and political rights the Court would
come to assert an expanding national judicial authority.
War, Cold War, and Civil Rights: The High Years of the New Deal Order
World War II and the dominating events that followed – the birth of the
nuclear age, the onset of the Cold War, and the emergence of the United
States as the undisputed leader of “the free world” – reinforced the nationalizing
trend that the Depression, the New Deal, and the nation’s long-accelerating
economic and cultural centralization had forged. The war led
to massive expansions in the federal bureaucracy, sweeping national controls
over the domestic economy, and the induction of more than 16 million men
and women into the armed forces. The Cold War that followed sustained
the national mobilization, generated a pervasive anti-Communism that further
homogenized and centralized political debate, and provided a national
security justification for growing federal intrusions into areas previously left
to the states. Turning the nation from its traditional and relatively aloof
foreign policy, the war and Cold War transformed the United States into
a global military and economic superpower at least potentially interested
in even the smallest and most distant regions of the world. The power and
activities of the federal government grew apace, and the role of the presidency,
in particular, continued to swell. The National Security Act of 1947
established both the National Security Council and the Central Intelligence
Agency as powerful and well-funded agencies of the executive branch, and
the White House staff, which numbered 64 people at the end of World
War II, jumped to 399 by 1957 and then to 485 only six years later. All
extended the president’s ability to control and enforce national policy and to
shape the contours of the nation’s domestic political debates. The escalating
foreign policy challenges, moreover, induced the Court to adopt a highly
deferential attitude toward both Congress and the president, temporarily
checking its proclaimed new commitment to protect civil liberties. During
the war the Court refused to challenge the army’s decision to place more
than a hundred thousand Japanese-Americans in concentration camps, and
into the 1950s it failed to protect the civil liberties of many of those who
ran afoul of the second Red Scare that erupted in the early years of the
Cold War.
Although postwar politics grew more conservative, the major achievements
of the New Deal remained largely in place. Harsh memories of the
Great Depression, the unprecedented efforts of the Roosevelt administration
to alleviate the nation’s ills, and the stunning and sustained economic boom
that followed wartime mobilization combined to inspire a broad new consensus.
Americans had come to believe that many of the pressing difficulties
they faced were “social” in nature, not “individual,” and that government
could and should take a more active role in resolving them. Indeed, their
acceptance of the idea that a newly muscular federal government was necessary
to protect national security in the Cold War strengthened their belief
that the same national government could also act as an effective instrument
of rational, democratic problem solving at home. Increasingly, they
looked to government at all levels for an expanding variety of services. Most
immediately, they had come to believe that anything affecting the American
economy was properly a national issue for which the federal government
should take responsibility. Sustaining economic growth and ensuring full
employment became domestic goals of the highest priority, and Americans
assumed that one of the primary duties of the federal government was to
underwrite the nation’s continuing economic welfare. Accordingly, government
at all levels grew, and the federal government expanded most rapidly.
With its unparalleled capacity for raising funds through the national income
tax, and the distinct advantages its members realized from dispensing public
money, Congress proved increasingly ready to finance new programs and
expand old ones. Funds allocated to regular domestic grant programs, for
example, doubled in only the first two years after the war.
Although the Republicans controlled one or both Houses of Congress
as well as the presidency for much of the period from 1946 to 1960, they
gradually acceded to most New Deal reforms and even joined in expanding
the activities of the federal government. Congress passed new public housing,
urban redevelopment, and minimum wage legislation, and it expanded
federal spending programs to enlarge Social Security, guarantee opportunities
for returning veterans, and provide funding for education, conservation,
hospital construction, scientific research, and rural electrification. During
the presidency of Republican Dwight D. Eisenhower from 1953 to 1961,
federal aid to states on a per capita basis more than doubled. The system
of “dual federalism” had passed away, replaced by one of “cooperative
federalism” in which governments at all levels participated in a widening
variety of joint programs and dealt with national problems by blending
federal funding and direction with state and local administration. Illustrating
both the spread of cooperative federalism and the ways in which
Cold War national defense concerns fostered the expansion of the national
government, Republicans and Democrats joined forces in 1956 to pass the
Interstate Highway Act. The measure provided massive federal funding for
the construction of a 40,000-mile interstate highway system that promised
to benefit a wide range of groups and interests across the nation. The states
supported it enthusiastically, and Congress easily justified it as necessary
for national defense.
Indeed, the extent to which the federal system, and normative theories
about it, had evolved became apparent rather quickly. Between 1947
and 1959 Republicans and other supporters of states’-rights ideas initiated
four major efforts to study the federal system and find ways to check
and reverse the trend toward centralization. None had a noticeable impact.
During his presidency, Eisenhower sponsored two such efforts. In 1957, for
example, he urged the creation of a special government task force designed
“to designate functions which the States are ready and willing to assume
and finance that are now performed or financed wholly or in part by the
Federal Government.”12 To accomplish that end, he cooperated with the
National Governors Conference in establishing a Joint Federal-State Action
Committee composed of officials from the highest ranks of state and federal
government. After an elaborate and well-financed study, the committee was
able to identify only two programs – vocational education and municipal
waste treatment – that should be transferred from federal to state control.
Together, the two programs accounted for a barely noticeable 2 percent
of total federal grants to state and local governments. While a variety of
political and economic factors conspired to trivialize the committee’s conclusions,
its much-heralded effort revealed one overpowering fact. By the
1950s a complex system of nationally directed and funded cooperative federalism
had been firmly established and was becoming widely accepted in
both theory and practice.

12 Dwight D. Eisenhower, “Excessive Concentration of Power in Government Is Dangerous: Power and Responsibilities of State Government Must Be Preserved,” Vital Speeches of the Day 23 (July 15, 1957), 578, 580.
While some conservatives still hoped to restore a more decentralized
system, liberals worked to shape the operations of the new order to their
purposes. If national power had been drastically expanded and federalism
transformed into a “cooperative” system, they reasoned, then the Supreme
Court required a new institutional role adapted to those new conditions.
The horrifying brutalities of Nazi and Soviet totalitarianism inspired an
intensified commitment to the idea of the rule of law, and the tumultuous
Cold War campaigns against Communism heightened their belief that the
nation needed a strong judiciary to protect individual liberties. Further,
the growing conservatism of the states in economic matters, their enthusiasm
for fighting Communism by restricting civil liberties, and – most
crucially – the adamant determination of those in the South to preserve
racial segregation combined to cast a new and unflattering light on the idea
that the states were democratic laboratories that should be free to conduct
social experiments. Indeed, in the postwar years the very term “social experiment”
raised images not of beneficent progressive reforms but of Nazi death
chambers and Stalinist labor camps. Increasingly, Democrats and liberals
turned to the reoriented post-New Deal federal judiciary as the government
institution most likely to enforce national rules that would serve their new
values, interests, and aspirations.
One of the most thoughtful, and eventually influential, formulations
of those liberal attitudes came from Herbert Wechsler, a prominent legal
scholar and old New Dealer. The normative constitutional problem that
postwar liberals faced, Wechsler explained, was to find a principled way
to “defend a judicial veto” when used to protect “personal freedom,” but
to “condemn it” when used to block government actions “necessary for
the decent humanization of American capitalism.”13 In 1954 Wechsler
suggested an elegant solution. The Constitution itself guaranteed state
sovereignty by providing the states “a role of great importance in the composition
and selection of the central government.” Those “political safeguards
of federalism” included equal state representation in the Senate,
control over many aspects of voting and districting for the House, and a
key role in electing the president through the system of electoral votes.
Thus, the very structure of the Constitution meant that Congress and the
president would “be responsive to local values that have large support within
the states.” Consequently, there was no need for the Court to protect the
states or to serve as the umpire of federalism. Instead, the constitutional
structure suggested that the Court should focus its efforts elsewhere. First,
because the federal government had no part in composing the state governments,
it was the federal government, not the states, that needed the Court’s
protection. Thus, the Court should ensure “the maintenance of national
supremacy against nullification or usurpation by the individual states.”
Second, because the Constitution’s majoritarian “political processes” would
not remedy popular and democratic abuses against disfavored minorities,
the Court should enforce “those constitutional restraints on Congress or
the states that are designed to safeguard individuals.”14 Thus, post-New
Deal liberalism began to develop the idea that Carolene Products had voiced:
the Constitution underwrote the principle that the Court should protect
abused individuals and helpless minorities, not the already powerful states
or the well-entrenched federal system.

13 Norman Silber and Geoffrey Miller, “Toward ‘Neutral Principles’ in the Law: Selections from the Oral History of Herbert Wechsler,” Columbia Law Review 93 (1993), 854, 924.
14 Herbert Wechsler, “The Political Safeguards of Federalism: The Role of the States in the Composition and Selection of the National Government,” Columbia Law Review 54 (1954), 543, 554, 559, 560, n. 59.
In the postwar years the most systematically disadvantaged minority in
the United States was African Americans, and a variety of factors pushed
the Court to take action on their behalf. Some were internal: a few useful
precedents, the spread of post-New Deal liberal values, the justification
provided by the Carolene Products idea, and key changes in the Court’s personnel
– especially the appointment in 1953 of Earl Warren as Chief Justice.
Others were external. The African American community had been leaving
the South, developing a strong middle class, increasing in organization
and militancy, and gaining political influence in the North. Further, the
atrocities of Nazi Germany had discredited racist ideas, and the Cold War
made repudiation of racism necessary to counter Soviet efforts to undermine
American influence in the Third World. The Democratic Party, too, had
been transformed since the New Deal. Increasingly urban, northern, liberal,
and reliant on African American votes, it was ready to support meaningful
efforts to end racial oppression. Finally, the National Association for the
Advancement of Colored People was pressing a methodical legal campaign
against racial segregation, and its efforts presented a series of well-designed
constitutional challenges that allowed the Court to chip away at legalized
racial segregation. Together, the changes highlighted the discordant nature
of Southern racial practices, led increasing numbers of Americans to reject
them, and helped install in the federal courts judges sympathetic to the
cause of racial equality.
The judicial turning point came when the Court ruled in Brown v. Board of Education (1954) that racial segregation in the public schools
violated the Equal Protection Clause and then, over the next few years,
extended its ruling to a variety of other public institutions and facilities.
Exemplifying and dramatizing the idea of the federal judiciary as the protector
of both fundamental non-economic rights and “discrete and insular
minorities,” the decisions asserted national authority over the states in a
crucial area of social policy, one that had been labeled “local” since the end
of Reconstruction. When Southern state governments and private citizens’
groups pledged massive resistance to Brown, the Court responded in 1958
with an extraordinary assertion of national judicial supremacy signed by all
nine justices. “[T]he federal judiciary is supreme in the exposition of the
law of the Constitution,” they proclaimed in Cooper v. Aaron, and “the interpretation
of the Fourteenth Amendment enunciated by this Court in the
Brown case is the supreme law of the land.”15 The decisions strengthened
a galvanizing civil rights movement, but they also provoked bitter and
sometimes violent opposition. By themselves they were unable to end
racial segregation in the South. That had to await events of the following
decade.

15 Cooper v. Aaron, 358 U.S. 1, 18 (1958).
Brown and the civil rights struggle helped fire the tumultuous era known
as “the sixties,” a politico-cultural phenomenon that began sometime after
1957, became self-conscious in the early 1960s, peaked between 1965 and
1972, and expired rapidly after 1974. Underlying social developments – a
sustained economic boom, rapid expansion and luxurious federal support
of higher education, the emergence of experimental “youth cultures” and
radical “liberation” movements, and the popularization of social theories
that challenged traditional ideas across the board – combined to spur major
changes in American attitudes and values. Melding with escalating and
disruptive protests against an ever widening and seemingly futile war in
Vietnam, the changes generated a volatile era of turmoil and transformation,
of vaulting hopes and intensifying hates.
With respect to the federal system, the sixties initially accelerated
the trend toward centralization. Democratic President John F. Kennedy
inspired a new enthusiasm for liberal activism after his election in 1960,
and his successor Lyndon B. Johnson strove to build a “Great Society,” one in
which the federal government would achieve the social and economic goals
of the New Deal and ensure that all Americans shared in their benefits. The
Supreme Court became increasingly active in imposing liberal national
standards on the states, and after an overwhelming Democratic victory in
1964, Congress responded with a series of major domestic reforms. Further,
between 1961 and 1971 the nation ratified four constitutional amendments,
three of which protected the right of Americans to vote, limiting state
authority and giving Congress power to enforce their mandates.
Of most enduring importance, the federal government as a whole finally
committed itself to the cause of black civil rights. Kennedy and Johnson
increasingly embraced the issue, and between 1964 and 1968 Congress
passed three monumental civil rights acts. Two broadly prohibited racial
and other types of discrimination in housing, education, employment, and
“public accommodations.” The third negated a wide range of legal and practical
obstacles that Southern states deployed to deny African Americans the
franchise. Equally important, the statutes created effective remedies for violations
and made the federal government an active and continuous agent of
enforcement. Illustrating the relatively consistent purpose that animated
the entire federal government in the late 1960s, the executive branch immediately
initiated or expanded a variety of programs to enforce the new civil
rights statutes, while the Supreme Court quickly upheld their constitutionality.
It approved the sharply challenged public accommodations provision
by applying the sweeping interpretation of the Commerce Clause advanced
in Wickard v. Filburn, and it validated federal control over voting rights on
the ground that Section 5 of the Fourteenth Amendment gave Congress the
broadest possible power necessary to enforce the amendment’s rights. By
the end of the 1960s legalized segregation was crumbling, and the constitutional
pillar of post-Reconstruction federalism that had survived the New
Deal – the principle that racial matters were local – had been obliterated.
Congress expanded federal authority in other areas as well. Johnson’s
Great Society reached into the backwaters of American life, identifying the
very existence of poverty and inequality as problems of national importance.
Like the theory of Carolene Products and the concerted attack on racial discrimination,
his War on Poverty sought to assist the nation’s poorest groups
and remedy fundamental structural inequalities. Congress authorized ever
more generous grants to state and local governments for a seemingly limitless
variety of “categorical” purposes, including welfare, housing, child
care, mass transit, job training, education, urban renewal, medical insurance,
and legal services for the poor. Similarly, the federal government began
a concerted effort to deal with issues of environmental pollution and the
conservation of natural resources. Increasingly, moreover, the new programs
were intended not merely to help state and local governments deal with their
problems but to implement national policies designed to achieve national
objectives.
A report of the federal Advisory Commission on Intergovernmental Relations
published in 1967 charted the steady and accelerating expansion of
federal funding programs. Before 1930 the national government offered
funding to state and local governments in only ten areas of activity. The
New Deal brought federal funding to seventeen more areas, and the early
postwar years added another twenty-nine to the list. The period from 1961
to 1966, however, witnessed the most explosive growth. New programs
extended federal funding to another thirty-nine areas of state and local government
activity – an increase of almost 70 percent in only six years. Thus,
by 1967 the federal government was funding state and local government
activities in 95 areas and doing so through 379 separate categorical grant
programs. In a decade, total federal aid to state and local governments
tripled, rising from $4.9 billion in 1958 to $15.2 billion in 1967.
The political momentum carried into the next decade. Even under
Republican President Richard M. Nixon, who talked about a “new federalism”
that would return power to the states, national activism continued.
Indeed, in the first two years of his administration federal funding to state
and local governments jumped by more than a third, reaching $25 billion
in 1970. Through a variety of changes within the executive branch, Nixon
enhanced presidential power to manage both the federal bureaucracy and
the distribution of funds to the states. He sought not so much to limit
federal power and government activism as to make all government agencies
more streamlined and efficient. Moreover, stressing the problem of “crime
in the streets” and the need for “law and order,” he accelerated the use of the
national government to fight crime, particularly “organized” crime and narcotics
trafficking. New legislation expanded the scope of the federal criminal
law, turned a multiplying number of state-law crimes into federal violations,
and in the Racketeer Influenced and Corrupt Organizations Act (1970) gave
the national government muscular new tools to investigate and prosecute
transgressors. Similarly, the decade brought major federal initiatives
aimed at protecting the environment and expanding government welfare
services. Although some social programs, particularly those involving
Johnson’s War on Poverty, were crimped or terminated, many others took
their place. During the decade total federal spending on welfare programs
more than doubled. By 1979 Congress had established more than five hundred
grant programs that accounted for a third of the federal budget and
furnished state and local governments with approximately 30 percent of
their total revenue. Moreover, although Republicans criticized many aspects
of the civil rights movement, especially school busing, affirmative action,
and some aspects of anti-discrimination law, the party – or at least its Northern
wing – accepted many of the changes the movement had brought.
As federal funding gushed forth, the national government’s control over
its programs continued to tighten. Although Nixon sought to minimize federal
restrictions through unconditional “revenue sharing” and less restrictive
“block grants,” his efforts were only minimally successful. Federal agencies
swelled in number and responsibilities, while the scope and complexity of
their regulations multiplied geometrically. Expanding and reorganizing the
federal bureaucracy, for example, Congress established the Departments of
Housing and Urban Development (1965), Transportation (1966), Energy
(1977), and Education (1979), as well as the Environmental Protection
Agency (1970), to help administer some of its new programs. The agencies
spawned a growing body of regulations that ranged from detailed
rules controlling individual categorical programs to broad across-the-board
rules covering many or all grant programs. Increasingly, moreover, federal
regulations sought to serve a variety of national policies – ending discrimination,
protecting the environment, expanding opportunities for the
disadvantaged – unrelated to specific grant programs themselves. During
the 1970s the total number of federal regulations more than doubled, and
Congress and the federal bureaucracy were increasingly regulating not just
the distribution of funds but the policies and operations of state and local
governments themselves.
The continued growth of federal activism was driven in large part by
three fundamental changes in the political system. One was the increasing
centralization that marked all areas of American public life and transformed
ever larger numbers of issues into matters of national concern. The accelerating
nationalization and internationalization of economic enterprise, the
dramatic and unifying power of ever more pervasive mass media, the growing
ease and speed of travel, and the frequency with which Americans moved
their homes from state to state and region to region combined to homogenize
American life and culture, and the attitudinal changes that resulted increasingly
made most problems seem national in scope and resolvable only with
national solutions. Moreover, the ever-tightening tyranny of money in the
political process magnified the influence of those private organizations –
almost always national in operation and concern – that were capable of
providing the huge campaign donations that the political parties required.
Those organizations – corporations, labor unions, industrial and professional
associations, and swelling varieties of ideological advocacy groups – almost
invariably sought, in return for their support, national policy decisions that
would provide them with advantages national in scope.
The second change lay in the new and stronger sets of interlocking local,
state, and national interests that resulted from the massive federal spending
programs of the prior decades. The programs were attractive to members of
Congress who found them ideal ways to shape policy while assisting their
favored interest groups, funneling money to their districts, and improving
their chances of reelection. Further, the programs developed their own
powerful constituencies: grant recipients and the interest groups who supported
them; professionals who designed and administered the programs;
and innumerable officials at all levels of government who for reasons of
public policy, bureaucratic influence, and personal advancement found the
programs highly desirable. As federal spending grew, so did the power of
those interlocking interests, and they continued to drive expanded federal
spending in the 1970s even as the animating values of post-New Deal
liberalism were withering.
The third change was rooted in the altered role of the presidency in
an age of mass communications and cultural centralization. Dominating
national politics and the public agenda, presidents – and all serious candidates
for the office – found it essential to propose national solutions for
almost every problem that drew national attention. By the late twentieth
century American presidents were expected to act not only as chief executives
and commanders-in-chief but also as legislative leaders and all-purpose
national problem solvers. The nation’s seemingly limitless demands on the
office magnified its irresistibly centripetal force.
While Congress, the executive, and concentrating social pressures were
extending federal power, the Supreme Court was doing the same. Beginning
in the early 1960s, the Warren Court launched a new and broader
phase of liberal activism. Shifted leftward by the retirement of two conservatives
– including Justice Felix Frankfurter, the Court’s leading advocate
of “judicial restraint” and deference to the states – and galvanized by the
reformist nationalism of Warren and Justice William J. Brennan, a new
majority coalesced in almost perfect harmony with the decade’s vibrant
liberal politics. Between 1962 and 1969 the Court expanded its efforts
far beyond civil rights and announced a breathtaking series of decisions
that imposed federal limitations on the states in a variety of areas. Perhaps
of greatest institutional importance, the Court asserted national authority
over the districting and apportionment of state and local legislative bodies.
Rejecting earlier decisions, it ruled that the Equal Protection Clause
required that electoral districts have closely comparable populations based
on the egalitarian standard of “one person, one vote.”16

16 Gray v. Sanders, 372 U.S. 368, 381 (1963).
Similarly, the Court substantially expanded the reach of the First Amendment.
Construing the amendment’s religion clauses, it prohibited a variety
of government-sponsored religious practices, ruling that states could not
require officeholders to declare their belief in God, sponsor Bible reading
as part of the public school curriculum, or compel schoolchildren to recite prescribed prayers. Construing the Free Speech Clause, it ruled that the
states could punish advocacy only if a person’s words were specifically calculated
to incite imminent unlawful actions, and it held that the right of
free speech created a qualified privilege against state defamation suits, a
decision that not only limited state law but opened the way for particularly
vigorous criticism of state and local officials. Perhaps most innovative, in
Griswold v. Connecticut (1965) it held that the First Amendment, in conjunction
with other amendments, created a constitutional right of privacy that
barred states from prohibiting residents from using or conveying information
about contraceptives.
Equally controversial, the Warren Court applied most of the rest of the
Bill of Rights to the states. Again reversing prior doctrine, it held that the
central provisions of the Fourth, Fifth, Sixth, and Eighth Amendments were
“incorporated” in the Due Process Clause of the Fourteenth Amendment.
Moreover, it repeatedly broadened the protections that the clauses offered.
In what was probably its most controversial decision in the area, Miranda
v. Arizona (1966), it required law enforcement agents to inform arrestees
about their constitutional rights and to respect their decision to exercise
those rights. To enforce its rulings, the Court expanded the availability of
federal habeas corpus for state prisoners, enabling the lower federal judiciary
to review state court criminal convictions more frequently. The decisions
created, in effect, an expanding federal code of criminal procedure that
bound the states, restrained police behavior across the nation, and provoked
bitter and widespread criticism.
As Congressional activism continued into the 1970s, so did the Court’s.
Although Chief Justice Warren retired in 1969 and Nixon appointed four new justices, including the new chief justice, Warren E. Burger, the Court
changed less than many expected. Indeed, in several areas it continued to
extend federal power, making the early Burger Court seem almost a third,
if somewhat ambivalent, phase of the Warren Court. During the 1970s
the Burger Court gave constitutional sanction to some types of affirmative
action, confirmed the broad power of Congress under the Fourteenth
Amendment, and upheld a substantial, if limited, remedial authority in
the federal courts to order local officials to integrate previously segregated
public school districts. In addition, it provided due process protections
for welfare recipients faced with termination of benefits and continued the
Warren Court’s efforts to expand the relief that injured individuals could
obtain under a variety of federal regulatory statutes.
In three areas the Burger Court’s decisions seemed particularly liberal,
activist, and nationalist. First, it held that the Equal Protection Clause
applied to gender classifications. Congress had begun to address gender
inequality in the 1960s, and in 1971 the Court ruled in Reed v. Reed that
a state statute disfavoring women violated the Constitution. Second, reaffirming
and broadening the constitutional right of privacy that the Warren
Court had pioneered in Griswold, it held that the right barred states from
prohibiting the sale of contraceptives to unmarried persons and, far more
innovative and controversial, announced in Roe v. Wade (1973) that it guaranteed
women the right to an abortion. The Burger Court thus confirmed
that a new and vibrant “public/private” distinction had entered American
constitutional law. Unlike the pre-New Deal Court, which had used
the distinction to protect property and economic liberty from government
regulation, however, the Warren and Burger Courts infused new meaning
into the dichotomy, using it to protect intimate matters involving sex and
procreation from such interference. Finally, the Burger Court extended the
reach of the Eighth Amendment, mandating minimum federal standards
on both capital punishment and prison conditions. Its rulings prevented
the states from executing hundreds of condemned prisoners, forced them
to make substantial revisions in their criminal laws, and compelled them
to institute a variety of reforms in the administration of their corrections
systems. By the 1980s more than 200 state prisons and 450 local jails in
forty-three states were operating under federal court orders.
The growing control that the federal courts exercised over the nation’s
prisons was only one of the more visible areas in which federal judicial supervision
cabined the power of state and local officials. After Brown the federal
courts had gradually taken over hundreds of schools in their efforts to ensure
that the Court’s mandate was enforced. Inspired by their role in combating
racial segregation and energized by a burgeoning faith in the judiciary’s
power to redress social wrongs, the federal courts grew increasingly willing
to take on broader and more complex social problems. Moreover, the explosion
of Congressional legislation compelled them in the same direction.
Numerous statutes created new and sometimes vague rights under many
of the cooperative programs that the federal government funded, and those
provisions spurred a rapidly expanding range of suits in the national courts
against state and local governments. Increasingly, federal judges became
active managers of ongoing litigations that sought to reform the structures
and procedures of those governments, and they often issued detailed
orders establishing federal rules over many areas that Congressional funding
had brought within the indirect, but nevertheless effective, control of the
national government.
Although national law and national standards had become pervasive by
the 1970s, the states nevertheless remained vital centers of power. For the
most part, their laws still controlled many of the most basic areas of American
life: marriage, family, education, criminal justice, commercial transactions,
zoning and land usage, estate planning and inheritance, the use of
automobiles and the highways, and most of the broad common law fields
of tort, contract, and property. Indeed, in lawsuits where state law properly
controlled, federal constitutional law continued to bind the national
courts to follow and apply it. State and local governments, moreover, were
heavily involved in providing most services in such basic areas as education,
transportation, social welfare, police and public protection, housing
and developmental planning, natural resource conservation and usage, and
labor relations and employment practices. While from 1950 to 1975 the
number of federal civilian employees edged up from 2.1 to 2.9 million,
the number of state and local government employees jumped from 4.2 to
12 million, almost 60 percent of whom were concentrated in the fields of
education and health services.
Further, stimulated by the federal government’s expanded activism,
local reformers pushed to modernize state governments and enhance their
administrative capacities. Liberals sought to strengthen their ability to provide
greater ranges of social services, while many conservatives hoped that
stronger state governments would help check the increasing nationalization
that marked the post-New Deal decades. From the 1940s through
the 1970s the states increased their use of professional administrators and
drafted expert commissions to frame constitutional amendments and other
structural reforms that would strengthen the institutions of state government.
In 1962 only twenty states held annual legislative sessions, for example,
but by the mid-1970s forty-two did so. Influenced by the growing
emphasis on executive leadership that marked the national model, sixteen
states extended gubernatorial terms to four years, and a dozen eliminated
long-established restrictions to allow their governors to serve a second successive
term. Further, nineteen states restructured their entire executive
branches, expanding gubernatorial powers over a variety of budgetary matters
and giving their governors greater administrative control over a wide
range of state and local agencies. Moreover, state employment, revenues,
and expenditures generally expanded relative to those of local government
entities, and most states centralized their administrations by imposing
a growing number of requirements and restrictions on local government
institutions.
Finally, states and localities were able to protect their positions in the
federal system by exerting persistent and effective pressures on the national
government. They marshaled their power by establishing a variety of organizations
– including the National Governors’ Association, the National
Conference of State Legislatures, the National League of Cities, the U.S.
Conference of Mayors, and the National Association of Counties – to influence
federal policy and ensure that national programs were tailored to
local needs and interests. Further, by administering many cooperative statefederal
programs, they were able to help shape their operations and impact.
The states, too, retained substantial independence in their actions because
their officials continued to be elected directly by their citizens and derived
neither office nor authority from the national government. While the states
helped elect federal officials, the federal government had no such role in
state electoral processes.
IV. RESHAPING FEDERALISM IN AN AGE OF
FRAGMENTATION AND REALIGNMENT: VECTORS
OF AN UNFOLDING ERA, 1970s–2000
The 1960s ended badly for post-New Deal liberalism. Escalating militancy
in the civil rights and antiwar movements brought mass protests and
civil disobedience to the center of American politics, while the appearance
of communes, youth cultures, feminism, sexual freedom, gay liberation,
black nationalism, and varieties of political radicalism fueled a growing
backlash among older and more conservative Americans. Three stunning
political assassinations – President Kennedy; his brother, Robert, a senator
and Democratic presidential candidate; and Dr. Martin Luther King, Jr.,
the revered and despised leader of the civil rights movement – compounded
a growing sense of turmoil, division, and crisis.
The events fragmented post-New Deal liberalism. On the level of ideas,
the fundamental assumptions that underwrote the regulatory state – faith
in science, expertise, and administrative neutrality – seemed increasingly
dubious and misconceived. On the level of politics, the war in Vietnam
pitted Johnson’s Great Society against a rising tide of antiwar sentiment that
increasingly enlisted the support of women, students, liberals, intellectuals,
and racial minorities. Those core elements of the Democratic coalition came
to view the war as a political betrayal, and an outspoken radical minority
transformed the very word “liberal” into a term of derision. At the same
time, other key elements of the coalition veered off in the opposite direction.
Many white Americans, including urban workers and ethnic Catholics,
grew increasingly angry at civil rights advances, antiwar activism, and
what they regarded as the social and cultural outrages that exploded in
the decade’s second half. To make matters worse, organized labor, a central
pillar of the Democratic coalition, began shrinking in both membership and
influence.
The result was rupture and defeat. In 1968 the antiwar movement drove
Johnson from office, and disaffected Democrats – some by voting Republican
and others by abstaining in protest – helped elect Nixon president.
Campaigning against crime, radicalism, affirmative action, and the Warren
Court itself, Nixon joined leftist radicals in blaming liberalism for the
nation’s problems. Although the election was close, it marked the beginning
of the end of the New Deal order.
If the 1960s had been strife-torn but optimistic, the 1970s were strife-torn
and pessimistic. Dominated by the party’s left wing, the Democrats lost
disastrously in 1972, and the Republicans suffered an equally humiliating
blow two years later when the Watergate scandal forced Nixon into the first
presidential resignation in the nation’s history. The civil rights movement
fragmented over both goals and tactics, while white resentments stoked a
burning opposition that focused on school busing and affirmative action.
The war in Vietnam, moreover, came to an excruciating end when the United
States withdrew its forces in 1973 and then watched as the Communist
North conquered the South, the fanatic Khmer Rouge seized control of
neighboring Cambodia, and literally millions of Southeast Asians – many
of whom had loyally supported the United States during the war – were
murdered, starved to death, or drowned trying to escape. Further, Roe v.
Wade began to unite moral traditionalists, Evangelical Protestants, and the
Catholic Church in a passionate anti-abortion movement that widened what
seemed an unbridgeable moral divide among Americans. At the same time
the Yom Kippur War in the Mideast triggered an Arab oil embargo and
drastic price increases that created a severe energy crisis. The result was a
steep recession and a debilitating inflation that lingered into the 1980s.
Fundamental economic problems – severe inflation, sharply rising interest
rates, high levels of unemployment, and persistent economic stagnation –
compounded the national downswing. Increasingly, American industry lost
out to foreign competition, and in 1971 the nation witnessed its first trade
deficit in almost a century, a deficit that multiplied more than tenfold
by 1981. Finally, a grisly national humiliation capped the decade. Iran, a
critical Cold War ally, fell to a violently anti-American Islamic movement
that seized the United States embassy and held seventy-six Americans as
hostages. Daily television coverage carried anti-American denunciations
across the world; and, when a rescue mission failed in early 1980, the
nation watched in horror as Iranian radicals gloated over the burnt remains
of dead American soldiers and their crashed helicopters.
Those events combined to destroy the New Deal order, but they failed
to generate a successor regime that was equally stable and well defined.
The economic depression of the 1930s had confronted the nation with a
single and overwhelming challenge, one that focused attention and interests
on a national effort to revive and reform the economy. In contrast, the
psychological depression of the 1970s enveloped the nation in a web of
amorphous anxieties and multi-cornered conflicts. If the earlier depression
had pitted business and the wealthy against the unemployed and the middle
class, the later one tended to divide Americans into a splintered multitude of
groups identified not only by economic and class position but also by race,
age, region, gender, religion, ethnicity, sexual orientation, and political
ideology. The Carolene Products idea of “discrete and insular minorities”
seemed to have become the “big bang” of a new and fragmenting politico-cultural
universe.
One result was that both liberals and conservatives showed a chastened
sense of limits. Liberals enjoyed their major successes in opposing the war
and cultivating a growing concern with the environment. The former was
premised on the limits of American power and the latter on the limits of
industrial society. Conservatives enjoyed their greatest triumphs in bringing
traditional religious ideas and neoclassical economic thinking into the
political mainstream. The former was based on the mandate of a transcendent
God and the latter on the iron laws of the market. All reflected a
declining faith in the power of reason, science, and government to bend the
future to the nation’s wishes.
While the psychological depression deepened, other forces were beginning
to nudge Americans in new directions. One was a complex but profound
set of attitudinal changes: escalating distrust of government, resentment
against minorities, hostility toward welfare programs, rejection of
“liberalism” and its regulatory tradition, and a festering anger directed
against challenges to traditional religious and moral ideas – particularly
feminism, abortion rights, and gay liberation. A second factor was a long-brewing
revitalization of market economics. Together with the general
assault on government and scientific expertise, the spreading market ideology
helped turn the nation toward deregulation, privatization, and a
renewed faith in the power of private enterprise and the virtue of becoming
rich. A third factor was the formation of what appeared to be a new Republican
majority based on the merger of the party’s traditional supporters –
especially business, the well-to-do, rural America, and the old Anglo-Saxon
middle class – with new social groups, such as Catholics, ethnic whites, disaffected
members of the working class, the culturally conservative “solid
South,” and the growing forces of Evangelical Protestantism.
Drawing the new Republican coalition together was a cultural synthesis
that implicitly reversed the values of Carolene Products and post-New Deal
liberalism. Disillusioned intellectuals began to articulate a new conservative
ideology that called for a return to “authority” and to a social order
built solely on “merit.” Market theorists developed the idea that politicians
responded only to organized interest groups that sought to use government
to gain special favors contrary to the common good – “rent seeking,” as they
called it. Traditional conservatives and Evangelical groups maintained that
secular liberalism and the welfare state were undermining the nation’s moral
fiber, family values, and religious foundations. Business interests sought to
minimize their legal liabilities and avoid regulatory requirements by claiming
that their productivity was at the mercy of “frivolous” lawsuits brought
by dishonest or deluded claimants seeking undeserved windfalls. Property
owners and other groups, squeezed by recession and angered at government
spending on social welfare programs, organized “taxpayer revolts” designed
to secure substantial reductions in local, state, and national taxation. Finally,
those who harbored resentments against racial and ethnic minorities were
angered by the “preferential treatment” that the civil rights laws gave to
those whom they considered unable to succeed on their own. Subtly and
only half-consciously, those varied attitudes blended into a new social persuasion,
one that saw the weak, disadvantaged, non-conformist, and ill-treated as morally unworthy and judged their attempts to secure governmental
assistance as trickery and exploitation. Simply put, the ideology of
the new Republican coalition transmuted “discrete and insular minorities”
into “rent-seeking interest groups,” the systemically disadvantaged into
the morally unworthy. Conversely, the ideology elevated business and the
economically successful into exemplars of merit and paladins of the common
good. Those groups were not special interests but pillars of economic
growth, national might, and moral rectitude. Thus, it was appropriate for
government to foster business with deregulation and favor the prosperous
with tax cuts.
As New Deal liberalism had done, the new conservatism generated and
popularized its own supporting constitutional theories. Rejecting what they
considered unlimited Congressional power over the economy and improper
judicial activism by the Warren Court, conservative thinkers sought to
discredit the former with revived ideas of state sovereignty and the latter
with restrictive ideas about separation of powers. Although they advanced
a variety of arguments, often supported by reasoning drawn from market
economics, they rallied around the unifying claim that post-New Deal liberalism
had distorted the Constitution and abandoned its “original” meaning.
Rejecting the idea of a “living” Constitution, they maintained that the
document’s meaning was fixed and unchanging. Those not biased by liberal
nationalism, they charged, could identify the Constitution’s authentic
meaning by focusing on its text, the “original intent” or “understanding” of
its drafters and ratifiers, and the social and moral context that surrounded
its adoption.
Edwin Meese III, who served as attorney general under Republican President
Ronald Reagan in the 1980s, emerged as the most prominent national
proponent of the new conservative constitutional theory. The federal judiciary
was designed to protect federalism and limited government, Meese
insisted, and “the literal provisions of the Constitution” and “the original
intentions of those who framed it” provided the clear and correct “judicial
standard” for interpreting its meaning. Castigating the “radical egalitarianism
and expansive civil libertarianism of the Warren Court,” he charged
that liberal judicial decisions were “ad hoc” and even “bizarre,” often “more
policy choices than articulations of constitutional principle.” To preserve
limited constitutional government and construe the Constitution properly,
the Court must return to the original intentions of the Founders, “the
only reliable guide for judgment.” Such a return, Meese promised, “would
produce defensible principles of government that would not be tainted by
ideological predilection.” Thus, he announced, it “has been and will continue
to be the policy of this administration to press for a Jurisprudence of
Original Intention.”17

17 Edwin Meese III, address to the American Bar Association, July 9, 1985, reprinted in The Federalist Society, The Great Debate: Interpreting Our Written Constitution (Washington, DC, 1986), 1, 9, 10.

Although the idea of “original intent” was an old one and, like the theory
of Carolene Products, had some merit, it suddenly began to command attention
and inspire devotion because it was – again like Carolene Products – a
highly serviceable tool of constitutional politics. For the new conservatives,
the idea of original intent provided theoretical grounds for discrediting
much of the constitutional law of the preceding half-century, and it justified
both attacks on the Warren Court and the demand for justices who would
overturn its decisions and restore the “authentic” Constitution. Indeed, the
concept of a normative original intent was inherently an instrument of
doctrinal disruption and change. Asserting the existence of a “true” constitutional
meaning established in a distant past, the idea provided theoretical
justification for casting off constitutional interpretations that had evolved
over the subsequent centuries and for rejecting judicial decisions rendered
in more recent periods. Equally important, by making eighteenth- and
nineteenth-century attitudes the touchstone of constitutional meaning, the
idea promised to strengthen the legal and historical arguments that conservatives
advanced against the political adversaries they opposed most
intensely – those supporting gay rights, abortion, gun control, affirmative
action, restrictions on the death penalty, more expansive tort liability, rigid
separation of church and state, institutional reform litigation, and broad
federal anti-discrimination laws.
Influenced by Nixon’s four appointees, the Burger Court began to reflect
those spreading attitudes. Trumpeting a new concern with what it called
“Our Federalism,” it increasingly sought to counter liberal nationalism
by limiting the reach of federal law into the operations of state and local
government. It expanded the immunity of government officials from civil
rights suits, curtailed remedies for those injured by violations of federal
statutes, and narrowed the scope of the Fourteenth Amendment. Similarly,
it cabined many of the Warren Court’s criminal law decisions, narrowing
both the Fourth Amendment exclusionary rule and the Fifth Amendment
right to counsel. Although it did not overrule Miranda v. Arizona,
it repeatedly found ways to shrink its reach. Most commonly, the Court
targeted the institutional power of the lower federal courts, developing
a variety of procedural restrictions to limit their opportunities for liberal
activism. It required them to abstain more frequently in favor of state
forums, limited their power to issue writs of habeas corpus to state officials
and to order remedies in school desegregation suits, and used the Eleventh
Amendment to deny them jurisdiction over suits against states for money
damages.
Although it employed the rhetoric of federalism, the Burger Court
seemed increasingly committed to a substantively conservative political
agenda, especially after the appointment of Justice Sandra Day O’Connor
in 1981. Its decisions, for example, commonly deployed the rhetoric of federalism
to close the federal courts to groups that the new Republican coalition
had targeted – tort plaintiffs, civil rights claimants, and state criminal
defendants. Indeed, when deference to the states led to unpalatable results,
the Court often balked. In Michigan v. Long (1983), for example, deference
to state decision making would have meant upholding the constitutional
claim of a criminal defendant. The Court’s majority would allow no such
result. Instead, it broadened its own jurisdiction to review decisions of state
courts and thereby extended the reach of federal authority to overturn state
court rulings.
Most fundamental to the federal system, in a 5–4 decision in National
League of Cities v. Usery (1976) the Burger Court sought to strike directly
at the New Deal legacy by reviving the Tenth Amendment. Overruling a
decision of the Warren Court, it held that the Fair Labor Standards Act of
1938 (FLSA) could not be applied to state employees and, for the first time
since 1937, voided a Congressional statute enacted under the commerce
power. Citing the Tenth Amendment, National League declared that there
were “definite limits upon the authority of Congress to regulate the activities
of the States as States by means of the commerce power.”18 The Court,
National League reasserted, was responsible for protecting the states from
national legislative power. For three liberal dissenters, Brennan rejected the
majority’s holding and invoked the post-New Deal theory of the “political
safeguards of federalism.” The “fundamental tenet of our federalism,” he
insisted, is “that the extent of federal intervention into the States’ affairs”
was properly determined not by the Court but “by the States’ exercise of
political power through their representatives in Congress.”19
Indicative of its transitional nature as both a third Warren Court and
the ur-Rehnquist Court, the Burger Court – actually, a single justice –
changed its mind nine years later. Overruling National League in another
5–4 decision, Garcia v. San Antonio Metropolitan Transit Authority (1985),
it upheld an application of the FLSA to a municipal transit system on
two closely related constitutional grounds. One was that the Constitution
offered “no guidance about where the frontier between state and federal
power lies” and, hence, gave the justices “no license to employ freestanding
conceptions of state sovereignty when measuring congressional authority
under the Commerce Clause.” The other ground was a liberal version of
original intent, a broad theory of the Framers’ design: “the principal means
chosen by the Framers to ensure the role of the States in the federal system
lies in the structure of the Federal government itself.”20 In explicit terms
the Court adopted the reigning liberal theory that the federal system was
properly protected not by the Court but by the “political safeguards” that
the Framers had built into the constitutional system.
18 426 U.S. 833, 852.
19 426 U.S. at 876–77 (Brennan, J., dissenting).
20 469 U.S. 528, 550.
Reviving the pre-New Deal views of William Howard Taft and Charles
Warren, four Republican appointees dissented vigorously. Justice Lewis
F. Powell rejected the “political safeguards” theory as both functionally
inadequate and constitutionally unfounded, and he insisted that “judicial
enforcement of the Tenth Amendment is essential to maintaining the federal
system.” Casting a hopeful eye to the future, Justice William H. Rehnquist,
Nixon’s last appointee and the author of National League, agreed. The principle
of state sovereignty, he declared defiantly, “will, I am confident, in
time again command the support of a majority of this Court.”21 Little more
than a year later Ronald Reagan appointed Rehnquist Chief Justice.
Elected president in 1980, Reagan did far more than that. He helped
reorient American politics, lead the nation out of the psychological depression
of the 1970s, and inspire a crystallizing Republican majority in its
drive for national dominance. That coalition reelected Reagan in 1984, put
two other Republicans – George Bush in 1988 and George W. Bush in
2000 – in the presidency, and forced Democrat Bill Clinton to move his
party substantially to the right in order to scratch together two presidential
victories in the 1990s. Equally important, the new Republican coalition
steadily increased the party’s strength in Congress, which the Democrats
had dominated since the Great Depression. After 1980 the Republicans frequently
controlled the Senate, and in 1994 they won control of the House,
a position they retained to century’s end.
Reagan established both the rhetoric and direction of the new era.
“[G]overnment is not the solution to our problem,” he announced. “Government
is the problem.”22 His greatest success came in reshaping the
parameters of public debate and establishing the values of the new Republican
coalition – religious traditionalism, suspicion of government, faith in
business and the free market, and opposition to welfare, abortion, homosexuality,
and affirmative action – at the center of American politics. His
administration pursued four principal policies: business deregulation, tax
cuts weighted in favor of the wealthy, heavy increases in military spending,
and a balanced budget. In large part it delivered on the first three and, likely
by design, failed on the fourth – a result that led to skyrocketing federal
deficits and, consequently, to intensifying pressures to cut federal domestic
spending on welfare and other social programs. Further, Reagan, who had
opposed both the Civil Rights Act of 1964 and the Voting Rights Act of
1965, altered the position of the federal government on civil rights issues.
His administration opposed affirmative action and school busing, and it
slackened substantially federal efforts to enforce the national civil rights
laws.

21 469 U.S. 570 (Powell, J., dissenting); id. at 580 (Rehnquist, J., dissenting).

22 Ronald Reagan, “Inaugural Address,” Jan. 20, 1981, in Public Papers of the Presidents of the United States, 1981 (Washington, DC, 1982), 1.
Proclaiming another “New Federalism,” Reagan sought to restructure
the system far more substantially than Nixon had attempted. Nixon’s “new
federalism” had embraced the idea of active government. Accepting the
need for massive federal spending, it had attempted to make government
more responsive and efficient by decentralizing management. Its primary
method was to abandon highly restrictive categorical grants in favor of block
grants and general revenue sharing, thereby maintaining the flow of funds
to state and local governments but with far fewer federal use restrictions. In
contrast, Reagan rejected revenue sharing and, more important, sought to
minimize or terminate federal financing and supervision in as many areas
as possible. His goal was to shrink government at all levels. Although his
most ambitious federalism proposals failed, he succeeded in ending revenue
sharing and reducing federal grants to state and local governments. During
the 1980s funding for welfare programs fell, and federal grants to state
and local government dropped by 25 percent. Along similar lines, Reagan
substantially reduced federal supervision over state and local governments.
His administration adopted administrative procedures to slow the growth of
federal rule making and altered many existing regulations to allow the states
greater discretion and to relieve them of costly reporting requirements.
It consolidated seventy-seven categorical programs into nine broad block
grants, for example, condensing and simplifying a wide range of rules and
restrictions. In social terms, the weak and disadvantaged, both the working
and non-working poor, bore the hardships and deprivations of his federalism
reforms.
In spite of its commitment to decentralization, however, the Reagan
administration readily embraced federal power when necessary to advance
its political objectives. While in most cases – welfare spending and civil
rights enforcement, for example – curtailing federal activism served its
social purposes, there were exceptions. When business interests advocated
both uniform national standards to open more miles of highway to larger
trucks and a national product liability law restricting consumer rights, Reagan
supported the proposals in spite of the fact that they required federal
preemption of state laws in areas of traditional state control. Similarly, his
administration readily advocated national standards in its effort to impose
workfare requirements on state welfare programs, extend federal criminal
law to fight a variety of social evils, and defeat the affirmative action programs
that dozens of state and local governments had established.
Indeed, although Republican administrations from Nixon to the second
George Bush formally upheld the banner of federalism, all contributed to
the further centralization of American government. In domestic matters
they joined Democrats in expanding national involvement in such traditional
state areas as education and family relations, and they pushed –
against determined Democratic opposition – to nationalize elements of tort
law in order to restrict suits against business and government. Further, they
helped federalize ever larger realms of the criminal law. Indeed, by 1996
more than 40 percent of all federal criminal statutes had been enacted since
Nixon’s election in 1968. Similarly, the Republicans steadily reinforced
the expansion of presidential power and the prioritization of military and
foreign policy concerns. That persistent emphasis impinged on the states
by centralizing issues of paramount public concern, expanding the de facto
scope of federal authority, and diverting resources from domestic programs
that the states helped control to the military and national security institutions
that operated under exclusive federal authority. Ironically, the end of
the Cold War between 1989 and 1991 seemed to lead only to rapid international
destabilization, further magnification of foreign policy anxieties,
and an ever greater concentration of power and discretion in the federal
executive.
By the end of the 1980s the successive achievements of post-New Deal
liberalism and the decentralization efforts that began after 1969 had combined
to alter and in some ways strengthen the nation’s federal system.
The former accomplished three critical results. First, compelling the states
to redistrict their legislatures, post-New Deal liberalism increased urban
representation in many states and helped create new legislative coalitions
that began to address the pressing problems that earlier rural-dominated
legislatures had ignored. Second, it brought the franchise to African Americans
in the South and forced broad non-discrimination policies on all
states. The result was to ensure fairer treatment for minority groups and to
begin mitigating abuses that had long tarnished the claim of states’ rights.
Third, federal matching grants stimulated new social programs and spurred
many states to modernize and professionalize their governmental structures.
Between 1965 and 1980, for example, twenty-two states redesigned their
executive branches; the number of state employees who worked under merit
systems rose from 50 to 75 percent. Similarly, thirty-four states reorganized
and expanded their court systems, and all fifty established offices of court
administration to address caseload burdens and increase judicial efficiency.
Those achievements substantially enhanced the ability of the states to
handle the consequences of the new decentralization that began in the
1970s. On one level, the decentralization effort made the national government
more responsive to state complaints about bureaucratic waste and
unnecessary administrative burdens. The result was the elimination or simplification
of many federal regulatory procedures and a greater flexibility
at the state and local levels in shaping government programs. On a second
level, decentralization allowed states to take greater control over the programs
they administered and encouraged them to modernize their administrative
structures and use their enhanced capacities to initiate new programs
and approaches of their own. Beginning in the 1970s the states embarked
on a range of new initiatives to expand social services, improve financial
capabilities, attract outside investment, develop energy and conservation
programs, and reform their public education and criminal justice systems.
On a third level, the decentralization movement revived the idea of the
states as laboratories that could attempt valuable social experiments. The
states began to look to one another – rather than to the federal government –
for new ideas and techniques, and with increasing frequency they borrowed
from the approaches that their sister states had tried and found effective.
Wisconsin exemplified both the era’s new state activism and its growing
social conservatism. In the century’s early decades Wisconsin had pioneered
many progressive social measures, and in the 1990s it emerged once more as
an innovative force, this time in developing restrictive “workfare” programs
designed to reduce taxes, curtail welfare coverage and benefits, and compel
recipients quickly to find private employment. Its approach encouraged
conservative attacks on the federal welfare system and not only influenced
other states but also had an impact at the national level. In 1996 Wisconsin
again stood as a paragon of laboratory federalism when the federal government
invoked its experience in substantially revamping the nation’s welfare
law. A monumental federal welfare reform act encouraged the wider use of
workfare requirements, eliminated some national programs, expanded the
use of block grants, and allowed the states greater leeway in shaping their
own systems.
In spite of the decentralization efforts, however, governmental power at
the national level remained decisive. That fact was nowhere more apparent
than in the movement to replace welfare with workfare. Although Wisconsin
illustrated a renewed vitality in state governments, the welfare reform
law that Congress enacted in 1996 demonstrated that the federal government
remained the paramount force in establishing national welfare policy.
The act not only required the adoption of workfare policies, but it also
compelled the states to comply with a number of other rigorous federal
mandates, including the imposition of time limits on eligibility, reduction
or withholding of benefits for certain classes of recipients, reporting
procedures involving the paternity and immigration status of underage
beneficiaries, and the development of various centralized procedures for
administering key elements of state welfare programs.
Contemporaneous developments in the state courts suggested similar
conclusions about the continuing dominance of national standards. Those
courts had authority to construe their own state constitutions, and they were
free in most cases to establish broader individual rights and liberties than
the U.S. Supreme Court recognized under the Federal Constitution. Not
surprisingly, then, in the 1970s liberals reacted to the narrowing constitutional
decisions of the Burger Court by urging the state courts to use their
independent authority to counteract its decisions by expanding individual
rights under their separate state constitutions. Some responded, and a number
of state judges invoked their authority to establish rights broader than
those recognized in federal law. The liberal appeal to state judicial power,
however, brought only limited and scattered results. For the most part state
courts spurned their opportunities and in the overwhelming majority of
relevant cases chose either to rely on federal constitutional law directly or
to conform state constitutional law to the contours of federal law. Indeed,
when the courts of California and Florida refused to follow decisions of the
Burger Court, they were abruptly reined in. Both states responded with
constitutional amendments that required their state courts to bring their
interpretations of certain state constitutional provisions into conformity
with the decisions of the U.S. Supreme Court.
The relatively conformist behavior of the state courts suggested several
interrelated conclusions about American federalism in the late twentieth
century. One was that underlying social, cultural, and economic forces were
continuing relentlessly to centralize national affairs. In spite of the swelling
paeans to federalism, Americans were ever more commonly advancing their
values and policies as properly “national” in scope. Although they frequently
and sometimes bitterly disputed the nature of the values that were proper,
they nevertheless insisted ever more stridently that their own values –
whatever they were – be given national recognition. The second conclusion
was that the U.S. Supreme Court was playing an ever more prominent
and important role in public affairs. To a growing number of Americans it
was the truly “supreme” authority that could and should rule on all major
issues that faced the nation. Americans were beginning to view the Court,
in other words, as they had come to view the presidency – as an institution
that should address not only problems that were properly “national” in
some antecedent and technical constitutional sense but also all issues that
had become, as a practical fact of everyday life, important to the nation as a
whole. A third conclusion was that the concept of “federalism” had lost most
of its substantive meaning as an independent normative guide to the distribution
of governmental powers. While theories of federalism continued
to proliferate and activists of all stripes persisted in invoking the concept’s
authority, little remained of the idea that could not readily be turned to
partisan use by able and designing hands. The fourth and last conclusion
was that a politically conservative and socially ungenerous mood had come
to pervade political attitudes across the nation. The state courts properly
followed the U.S. Supreme Court, many Americans seemed to believe, not
just because it was the authoritative voice of the national Constitution but
also because it was – with a few glaring exceptions – moving that law, for
the time at least, in the general directions they considered desirable.
Although the Court increasingly reflected the values of the new Republican
coalition, Reagan and his successors failed to transform the Supreme
Court as quickly or completely as the New Deal had done. Between 1933
and 1969 the Democrats had controlled the presidency for twenty-eight of
thirty-six years, the Senate for all but four of those years, and both together
for twenty-four years. Conversely, in the decades after 1968 the Republicans
controlled both the presidency and the Senate simultaneously for
only six years, 1981 through 1987, a period in which only two vacancies
occurred. Thus, Republican nominations were commonly subject to Democratic
check. Then, further diluting their drive for control, during the 1990s
Clinton was able to add two moderate liberals to the Court.
Even though Republican presidents were responsible for ten of the twelve
justices placed on the Court after 1968, their new appointees failed to form
a consistently united bloc. Indeed, only three of them pushed aggressively
and relentlessly to implement the values of the new Republican coalition.
In contrast, three others edged into the Court’s moderate-to-liberal wing,
and the remaining four were often cautious and respectful of precedent,
rather than ideological and ardent for change. As both conservatives and
opponents of judicial activism, the moderate four may have felt themselves
bound to honor the principle of stare decisis and to remain for the most part
within existing constitutional channels. Thus, a combination of external
checks, internal barriers of role and doctrine, and differing jurisprudential
orientations prevented abrupt change in many areas.
Although a variety of obstacles slowed Republican efforts to remake the
federal judiciary, the party’s determined drive nevertheless began to bring
increasingly substantial results by the late 1980s. Methodically appointing
ideologically sympathetic judges, Reagan and Bush increasingly turned the
lower federal judiciary toward the values of the new Republican coalition.
Far more visibly, they did the same to the Supreme Court. Reagan markedly
changed its direction when he elevated Rehnquist to the center chair in
1986 and then added conservative Justices Antonin Scalia and Anthony
Kennedy to the bench. Then, when Bush replaced liberal Justice Thurgood
Marshall, the last survivor of the Warren Court, with the rigidly conservative
Justice Clarence Thomas in 1991, he established a relatively firm five-justice
conservative bloc that began to act with increasing boldness.
In the name of federalism the new majority took particular aim at the
powers of Congress, and in the century’s last eight years it voided at least
ten Congressional statutes on federalism grounds. In United States v. Lopez
(1995), the five-justice bloc voided the Gun-Free School Zones Act, which
made it a crime knowingly to possess a gun near a school. The decision
seemed to limit the Commerce Clause to formally “economic” activities
that Congress could show were directly related to interstate commerce. Five
years later in United States v. Morrison (2000) the same five justices relied on
Lopez to void a provision of the Violence Against Women Act that created a
federal remedy for victims of gender-motivated violence. Such violence, the
Court explained, was “not, in any sense of the phrase, economic activity.”23
Similarly, the Court deployed the judicially created doctrine of standing to
trump Congressional power to enforce federal environmental laws through
private lawsuits, and it even suggested doctrinal grounds for possible future
use in enforcing limits on the spending power.
More pointedly, reacting against national regulation of state and local
governments, the Court severely constrained federal power over the states
themselves. First, in 1996 it held that the Eleventh Amendment barred
Congress from using its commerce power to create claims against states,
and three years later it extended that holding to all of Congress’s Article I
powers. Second, it narrowed the Fourteenth Amendment for the same purpose.
Although the Court did not challenge the principle that Congress
could abrogate state sovereign immunity when legislating under Section 5
of the Fourteenth Amendment, it created severe limitations on the power
and invalidated a series of Congressional statutes that imposed liabilities
on states for violating federal civil rights statutes. Finally, the Court
further insulated the states from federal power by developing an “anti-commandeering”
principle that forbade Congress from requiring states or
their officials to assist in implementing federal regulatory programs.
Although the Rehnquist Court revived the Tenth Amendment, it did
not use it to remove a broad category of “local” activities from federal
authority as the Taft Court had done in Drexel Furniture. Rather, in the
spirit of National League, it employed the amendment more narrowly and
seemed primarily interested in protecting the operations and institutions
of the state governments themselves. Its decisions restricting the lower
federal judiciary paralleled its decisions limiting Congressional power. The
Rehnquist Court curtailed federal habeas corpus, shrank remedial authority
over institutional reform suits, and narrowed substantive liabilities under
federal statutory and constitutional provisions in order to minimize federal
judicial intervention in the operations of state and local governments.
Beyond insulating state governments, the Rehnquist Court’s decisions
limiting Congressional power seemed targeted primarily at civil rights legislation.
Its Commerce Clause decisions limited Congressional authority
to activities that were primarily “economic”; its Section 5 decisions struck
directly at the principal Congressional power specifically designed to protect
disadvantaged social groups. Politically, then, the Court’s efforts to
constrain Congress seemed to reflect the social and cultural strains of the
new Republican coalition more than its free market and business-oriented
aspects.

23 529 U.S. 598, 613.
The Rehnquist Court’s lack of sympathy with the federal civil rights
laws was apparent. Immediately after the last Reagan appointee took his
seat in 1988, it issued a stunning series of decisions that methodically
narrowed the civil rights laws and restricted the remedies available for their
violation. Its decisions struck most ruthlessly at affirmative action programs
and employment discrimination law. Revealingly, when the Court dealt
with affirmative action, it readily set aside its goal of insulating the states
and imposed federal constitutional restrictions on their power to establish
such programs.
The political significance of the Court’s civil rights decisions was clear.
Since 1968 Republicans had deployed the language of federalism to shape
a “Southern strategy” that sought white votes by opposing civil rights
activism and, in particular, affirmative action programs. The Reagan administration
had followed the same course, intensifying the rhetoric, limiting
enforcement of the civil rights laws, and – for the first time since Brown –
bringing the federal government into court to oppose civil rights claims.
Then, in 1988 Reagan’s vice president, George Bush, was elected president
after a campaign that promised “law and order” and featured a notorious
television advertisement that was widely perceived to be racist. When the
Democratic Congress attempted to pass legislation to counter the Rehnquist
Court’s civil rights decisions, Bush vetoed one bill and then compelled
Congress to weaken another before signing it. The Rehnquist Court’s civil
rights decisions fit snugly with the Republican program.
Not surprisingly, the Rehnquist Court also followed the Reagan and
Bush administrations in asserting national authority to enforce other values
of the Republican coalition. Joining the effort to restrict tort claims
against business, it readily displaced state law when federal rules served the
purpose. Similarly, it expanded federal power under the Due Process and
Takings Clauses, limited state power to enforce environmental regulations,
and applied a broad First Amendment right of association to allow large
private organizations to exclude homosexuals. Indeed, in decisions protecting
private property, it again set state authority aside by imposing a federal
constitutional duty on states to provide tax refunds in certain cases and,
further, suggested that the Takings Clause might override state sovereign
immunity and allow federal courts to order states to pay just compensation
for certain regulatory actions.
Equally revealing, however, the Rehnquist Court also asserted federal
authority for other purposes as well. It enforced First Amendment limits
on governments at all levels, and it used the negative Commerce Clause
and the doctrine of implied preemption to displace state law and expand
the reach of much federal legislation. Indeed, during the last decade of the
twentieth century the Rehnquist Court voided actions taken by states in
54.7 percent of the relevant cases it decided (111 of 203), an invalidation
rate that was slightly higher than the Warren Court’s rate of 53.6 percent in
such cases during its sixteen years of existence (128 of 239). Most arresting,
on occasion it even asserted national power in ways that conflicted with
the values of the Republican coalition – though only over scathing dissents
from the justices most fervently committed to those values. A slim, moderate
majority, for example, preserved the federal constitutional right to an
abortion and used the Fourteenth Amendment on occasion to protect both
women and homosexuals.
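Those two invalidation rates follow directly from the case counts just given; as a quick arithmetic check:

$$\frac{111}{203} \approx 0.547 = 54.7\%, \qquad \frac{128}{239} \approx 0.536 = 53.6\%.$$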
Thus, in spite of its rhetoric, the Rehnquist Court did not simply defer
to the states or check national power in all areas. Nor, of course, did it
invariably honor the values of the Republican coalition. Rather, it did what
its predecessors had done: it enforced its own peculiar version of federalism as
determined by shifting coalitions among its justices, each of whom sought to
meet the new and unexpected challenges that were generated by a changing
and dynamic society. Like the liberal Courts that followed the New Deal,
it reflected the variations and inconsistencies of its nine justices as well
as the characteristic values that marked the shared jurisprudential ideas
of its generally dominant majority. Indeed, as its frequent willingness to
assert a muscular federal judicial power evidenced, the Rehnquist Court
seemed driven as much by three substantive social goals as by any principled
concern for the states. It sought to limit government regulatory authority,
particularly in the areas of civil rights and environmental protection; it
sought to restrict lawsuits against both business and governments; and it
sought to shrink the rights of criminal defendants and prison inmates.
Beyond the specific social policies it served, the Rehnquist Court stood
at century’s end on three fundamental propositions about American federalism.
One was that the power of Congress had become all encompassing and
that limited constitutional government required the imposition of some
kind of effective limits. The second was that the power of the national government
over the states themselves had to be circumscribed severely. The
last was that the “political safeguards” of federalism, whatever their efficacy
in prior times, were no longer adequate to check federal power and protect
state independence. All three propositions pointed to the same conclusion:
the Court itself must enforce limits on national power.
However sound the Court’s premises and conclusion, at century’s end
the fundamental – and operational – questions remained as they had been
ever since 1789: What specific vision of federalism should be adopted?
What specific limits should be enforced? Which governments – and which
branches of government – should be subject to federalism’s limitations? For
what purposes, and in whose interests?
CONCLUSION: AMERICAN FEDERALISM AT CENTURY’S END
The twentieth century ended, almost literally, with Bush v. Gore (2000).
There, the five-justice Rehnquist majority asserted a questionable jurisdiction
to determine who would win the presidential election of 2000 and
then, on sharply contested grounds, ruled in favor of Republican George
W. Bush.
In the most dramatic manner possible the decision revealed two fundamental
characteristics of American federalism. First, it demonstrated the
extent to which the Supreme Court had moved to a position of institutional
centrality in American government. In troubled elections in 1800 and
1824 the House of Representatives had followed constitutional provisions
in determining who would be the next president. In the bitterly disputed
election of 1876 a special extra-constitutional commission composed of
five representatives each from the Senate, House, and Supreme Court had
convened to resolve the same issue. Notwithstanding prior practice, constitutional
clauses, and statutory provisions that suggested Congress or the
state legislature as the authoritative institution, the Court stepped into the
disputed election of 2000 and decided the outcome. Alone. No branch of
Congress sought to intervene or participate, and no branch of state government
moved to oppose. Deeply and closely divided, the nation accepted the
Court’s decisive role as practically necessary and constitutionally proper.
Bush v. Gore capped the Rehnquist Court’s basic institutional achievement:
confirming the evolution of the role and authority of the federal
judiciary – and, particularly, the Supreme Court itself – that had occurred
over the previous century or longer. That evolution had elevated the Court,
with the lower judiciary as its wide-reaching arms, to a position of sweeping
institutional authority. Repeatedly, the Rehnquist Court insisted that it
was the final arbiter of the Constitution, and it brought new vitality to the
Warren Court’s pronouncement of judicial authority in Cooper v. Aaron. “It
is the responsibility of this Court, not Congress, to define the substance of
constitutional guarantees,”24 it declared in shrinking Congressional power
and asserting its own primacy under the Fourteenth Amendment. Not
surprisingly, the Rehnquist Court exceeded the Warren Court in the rate
at which it held federal as well as state actions unconstitutional.

24 Board of Trustees of the University of Alabama v. Garrett, 531 U.S. 356, 365 (2001).
Second, Bush v. Gore exemplified the shifting, contested, and instrumentalist
nature of American federalism. Although some of the legal issues were
novel, the decisive constitutional issue was stark: did authority to settle the
matter reside at the state or national level? Unlike the many cases in which
the ideology of the new Republican coalition coincided with deference to
the states, in Bush v. Gore the two conflicted. The five-justice majority
bloc rushed to trump state sovereignty with national power. “[T]he federal
government is not bad but good,” one of the majority justices had told
a conservative audience some two decades earlier before ascending to the
Court. “The trick is to use it wisely.”25 As the twentieth century ended,
Bush v. Gore stood as a monument to the dynamics of American federalism,
the system’s paradigmatic case.
Thus, in spite of the many changes that reshaped the system and restructured
its operations, American federalism closed the twentieth century
much as it had begun it, as a somewhat disjointed and malleable, but nevertheless
stable and democratic, system of government with the capacity
to confront new problems and adapt to new conditions. A variety of social
and cultural factors sustained its working order: a strikingly diverse population
that enjoyed prosperity, education, and freedom; a variety of formal
and informal checks that helped counter concentrated power; the ingrained
social values, cultural habits, and institutional practices that constituted
the nation’s vital, if inherently human, rule of law; and a sustaining popular
faith that the nation was committed, ultimately if quite imperfectly, to the
lofty ideals it formally proclaimed. American federalism maintained itself
in the twentieth century not because the Constitution set forth bright lines
that defined state and federal power or because the Court articulated its own
consistent and unchanging rules but because the system’s complex operations
were shaped and constrained by that social, cultural, and institutional
base.
25 Antonin Scalia, “The Two Faces of Federalism,” Harvard Journal of Law and Public Policy
6 (1982), 19, 22.
5
the litigation revolution
lawrence m. friedman
This chapter examines myths and realities in the recent history of litigation
in the United States. It looks at the actual figures – how many people are
suing, and where are they suing; and are they suing more or suing less than
they did in the past? It looks at the differences between federal and state
litigation. It looks at litigation qualitatively as well as quantitatively: are
the kinds of lawsuit changing, and in what ways? It examines the disputed
question of the impact of litigation on society. It also examines alternatives
to litigation and their popularity.
Litigation is controversial and has been controversial for more than a
century. To say that a person or a society is “litigious” is not complimentary.
This is true not only in this society, but in other societies as well. It is an
interesting question why this should be the case. After all, the right to
a “day in court” is one of the hallmarks of an open, democratic society.
Modern societies insist that people must have access to justice (though they
often fall far short of this ideal); why then is it considered bad if people take
advantage of this right?
There is no easy answer. Lawsuits, however, are costly, take time, and
exact a toll. They may be inefficient and overly technical. Charles Dickens’
notorious description of a lawsuit in equity – the fictional Jarndyce v. Jarndyce
in Bleak House – was an exaggeration, but it tapped into widespread feelings
of dismay at the pathologies of litigation. On the whole, litigation is procedural
and rule-bound, and the lay public has trouble grasping the reasons
for some of the rules. In litigation, too, there are winners and losers, and
the losers usually feel they have been wronged. Also, as we shall see, litigation
has the capacity to upset powerful interests, and they can retaliate,
and do retaliate, with influence and propaganda. For the last few decades
of the twentieth century, a resourceful anti-litigation movement has been
gathering strength, and it has achieved a measure of success.
Here are some of the main points of this chapter. First: Because Americans
are accused of litigating too much, it is important to know the facts: how
much litigation is there? But litigation rates are extremely difficult to
measure. This is so in part because it is hard to define litigation in a way
that can command general agreement. But surely “litigation” implies some
sort of dispute that is settled in court. If we take this as a rough definition,
then – despite what many people think, including lawyers and judges –
there is little hard evidence that litigation rates are rising or that people in
the United States are more litigious than they have been at various periods
in the past.
Second: There are many different types of lawsuits. Some types are more
common than they were, and sometimes it is easy to see why. Without civil
rights laws, for example, there would be little or no civil rights litigation;
there would be little or no environmental litigation without environmental
law. Yet at the same time, some types of litigation are, in fact, diminishing –
debt collection, for example. There are probably more massive, giant lawsuits
than before – huge antitrust cases or humongous clusters of lawsuits
against asbestos companies – and this is part of what gives the impression
of an “explosion” of litigation. And, in fact, the amount of money that
businesses and individuals spend on legal services, including litigation, has
risen quite sharply in the late twentieth century.
Third: Though evidence of a “litigation explosion” is slim, there is plenty
of evidence of what we might call a “liability explosion,” particularly in
tort law. Old doctrinal barriers to lawsuits against doctors and hospitals,
against manufacturers and municipalities, broke down in the twentieth
century. Clearly, too, in such fields as civil rights and environmental law,
as we have just noted, new legislation and new doctrines allowed or even
encouraged litigation.
Fourth: Businesses – but not only businesses – have resented the liability
explosion and have poured money into campaigns to curb what they
consider excesses. Much of the population, as we have said, finds litigation
odious. Some scholars have argued that litigation is hurting the country,
economically and otherwise – although this is in fact a difficult case to
prove. Politicians, particularly on the right, campaign against the “plague”
of lawsuits. Many states have passed laws to try to curb litigation or to put
a ceiling on the amounts plaintiffs can collect. And judges, scholars, and
policymakers have led a search for detours around the court system. ADR
(alternative dispute resolution) has flourished in the late twentieth century.
The idea is to save time and money and to avoid litigation. Whether that
has been the result, however, is unclear.
I. THE SO-CALLED LITIGATION EXPLOSION
One of those “facts” that the general public seems very sure about is that
there is an explosion of litigation in this country. Americans love to sue, it
is said, and they are suing each other in droves. What this would mean, first
of all, is that in absolute terms there is a great deal of litigation, perhaps too
much litigation, although there is no easy way to decide how much is too
much. More specifically, to talk about an “explosion” implies change, and
dramatic change at that: a vast increase in the number of lawsuits filed, in
proportion to the population. The hypothesis would be, therefore, that in
some period – the last half of the twentieth century, for example – litigation
climbed at a very striking rate, compared to earlier times.
In fact, evidence for such an explosion is rather hard to come by. There are
two basic problems. The first is the definition of “litigation.” The second
is counting the cases – figuring out the actual numbers. As to the first: not
all the cases filed in court are necessarily litigation, if by litigation we mean
actual disputes between two or more parties. Thousands and thousands of
cases filed in court are not disputes at all. Petitions to change a name, to
adopt a child, to open an estate, to end a marriage, to collect a debt, or
to evict a tenant usually do not involve a dispute at all. To be sure, any of
these situations can generate a dispute: a bitter custody battle, a contested
will; a tenant can fight back against a landlord, a debtor against a creditor, a
birth mother can oppose an adoption. But these are exceptions. So although
a huge increase in uncontested divorces can give the appearance of a rise
in litigation rates, it would be misleading to count such cases, matters
where no one disagrees and which never result in a trial, as evidence for
an explosion of lawsuits.
Gathering national data, that is, data for all the states, is particularly
difficult. What courts should we measure? Do we include traffic courts?
Small claims courts? If we restrict ourselves to trial courts above the level
of traffic courts, small claims courts and the like – that is, to courts of
general jurisdiction – there are issues of comparability: each state defines
jurisdiction rather differently. There is, of course, no doubt that courts are
heavily used in the United States (though, as we have said, not necessarily
for litigation). The actual number of cases filed in all courts is impressive.
There is little evidence, however, for any recent increase. In the state courts,
according to one count, filings in 1984 totaled 85,796,447; in 2000 the
total was 91,954,001. This represents an increase on the order of 7 percent,
which, in the light of population growth, is hardly an increase at all. The
bulk of these cases were traffic cases (no fewer than 55,742,240 in 2000).
“Domestic” cases (mostly uncontested divorces) rose from 2,890,546 in
1984 to more than 5,000,000 in 2000. Other civil cases showed no growth
at all. According to figures provided by the National Center for State Courts
there were just over 15,000,000 filings in 1991 and just under 15,000,000
in 2000.
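The “on the order of 7 percent” estimate can be verified from the two totals just reported:

$$\frac{91{,}954{,}001 - 85{,}796{,}447}{85{,}796{,}447} \approx 0.072, \quad \text{or about 7 percent.}$$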
If we try to look at a broader time span, we run into serious data problems.
Nationally, historical statistics hardly exist. Research even on individual
jurisdictions is rather thin. Robert Percival and I studied two trial courts in
California between 1870 and 1970, one rural (San Benito County), and one
urban (Alameda County); we found no evidence for a litigation explosion, at
least up to 1970. Both counties had higher rates of filing in 1970 compared
to 1890, but the Alameda rate was in fact higher in 1910 than in 1970,
and the rate in San Benito County actually declined between 1950 and
1970. Marc Galanter, reviewing the literature on litigation rates up to the
early 1980s, came to the same general conclusion; so did Wayne McIntosh,
who studied St. Louis data. John Stookey’s analysis of the flow of litigation
in Arizona in the twentieth century found great fluctuations, which were
associated with the business cycle.
Figures from some states do suggest that civil court filings have risen in
absolute numbers over various time periods. In Texas, for example, some
65,000 filings in 1938 in the basic trial courts had grown to about 235,000
in 1971. Of these, some 86,000 were divorce cases. In the year ending
August 31, 1996, 373,000 civil cases were filed; 118,000 of these were
divorce cases, and 90,000 were “other family law matters.” In the twenty-five-year period from 1971 to 1996 tort claims had more than doubled, from
about 17,000 to 43,000. On the other hand, some categories had declined
(workers’ compensation, for example), either absolutely or in proportion to
the population, and on the whole the evidence from state courts is mixed.
Moreover, there have clearly been periods in American history in which
citizens resorted to the courts more often than they do today. In the colonial
period, courts were cheap and ubiquitous and handled a wide variety of
matters – administrative as well as judicial. The names of most adult citizens
in a locality would appear in its court records each year for one reason or
another – something that is certainly not true today. In rural areas in the
nineteenth century, too, there is reason to think that courts were more
generally used than they are today.
Of course, use is not the same as litigation. The litigation-explosion hypothesis
is not just a quantitative one; it also assumes an attitude, an aspect of legal
culture, a psychological bent toward claims consciousness. It also assumes
that institutions and doctrines exist that foster and stimulate this culture
of suing. These qualitative matters, of course, are almost impossible to
document and to test.
Some facts about American litigation are beyond dispute. For example,
the distribution of litigation, between state and federal courts, changed
over the course of the twentieth century. It remained true in 2000, as in
1900, that the overwhelming majority of lawsuits were filed in state courts.
But the federal courts have grown in importance in the last hundred years.
Their caseload spiraled upward steadily in the course of the twentieth century.
In 1900, 12,230 cases were filed in federal district (trial) courts; in
1941, there were 38,477 such cases. By 1970, this number had grown to
87,321. Since then, the rise has been even more dramatic. Between 1950
and 1986, civil filings in federal court rose by 367 percent.
In 1992, 224,747 cases were filed in the district courts; in 2001, 254,523.
The increase is much greater than the increase in the number of people in
the United States.
The figures on federal litigation should come as no surprise. The federal
courts have always had jurisdiction over certain cases – for example, admiralty
(maritime) cases – and they have always been open to certain “diversity”
cases as well (cases in which, for example, a resident of Wyoming sues a
resident of Maine). But most of the matters that concern the state courts –
ordinary contract cases, tort cases, divorce, and family cases – were largely
absent from federal courts, except in diversity situations. In the twentieth
century, however, the federal government grew exponentially. It did more,
regulated more, and monitored more behavior than ever before. Consider,
for example, the increase in civil (and criminal) litigation due to the income
tax law, which came into being by act of Congress in 1913. The Prohibition
Amendment jammed federal courts and federal jails in the 1920s; the drug
laws did the same later in the century. The New Deal of the 1930s added a
whole host of regulatory statutes – for example, the Securities and Exchange
Act and the National Labor Relations Act – all of which added somewhat
to the stock of litigation. The great Civil Rights Law of 1964 and then the
Voting Rights Law (1965) produced a rich harvest of lawsuits in federal
court about sex and race discrimination. Before these laws, there were only
a handful of civil rights cases. In 1971, however, 4,621 cases under these
laws were filed, and in 1986, 17,776; by the end of the twentieth century,
victims or alleged victims of age discrimination and discrimination against
the handicapped added their numbers to the roster of plaintiffs in civil
rights cases.
Bankruptcy filings are a special instance of growth. The Constitution gave
Congress specific power to enact bankruptcy laws. At times, in the nineteenth
century, Congress did so, but those laws were short-lived. Bankruptcy
law really dates from a law of 1898 – frequently amended, added to, and
tinkered with ever since (very notably in 2005, for example). In the first
decade of the twentieth century, there were about 20,000 petitions for
bankruptcy. During the depths of the Depression, the numbers rose greatly
to more than 60,000, but World War II and prosperity drove the numbers
down dramatically: there were 12,862 petitions in the year ending June 30,
1945. Then the figures started climbing again. By 1990, there were well
over a million bankruptcy petitions. Of course, these range from little people
in over their heads in credit card debt to giant corporations that collapse
like beached whales. All of them have to go through some sort of process
in federal court, but few of these proceedings qualify as litigation in the
sense that there are two sides or three sides or many sides at legal war with
each other. Bankruptcy figures tell us something about the economy and,
even more so, about legal culture. Clearly, bankruptcy (like divorce) has
lost a great deal of stigma. It is also an index of a vigorous entrepreneurial
culture, and a consumer culture in which deferring gratification plays little
or no part.
One of the most striking aspects of modern American litigation is the
use of the class action: a lawsuit brought on behalf of a whole group or class
of people. The legal basis of the class action, in the federal courts, is Rule
23 of the Federal Rules of Civil Procedure, which in turn rested on an
earlier Equity Rule (Rule 38). However, Rule 23 is much broader and more
powerful than the older rule. The Federal Rules were adopted in 1938,
and many of the states either swallowed them whole or revamped their
own rules along similar lines. Amendments in 1966 to the Federal Rules
strengthened the class action; and the class action has become more and more
important since then. Its utility in certain types of case is obvious. If an
airline overcharges a million customers $1 each, obviously none of them will
bring a lawsuit; but a class action, aggregating all their claims, is another
matter. The giant tort cases – for example, against asbestos companies –
are sometimes aggregated as class actions. Some important civil rights
cases are also pursued as class actions, such as the claim that some huge
corporation discriminated against women. While federal cases are overrepresented
among class action suits, the majority of such cases (almost 60
percent) appear to be filed in state courts. There is also some evidence that
the number of such cases has been growing in recent years.
Class actions are hard cases legally and are also hard to manage and decide.
First, there is the issue of defining the class. Then there is the issue of keeping
the members or potential members informed. People have to have the right
to opt in or opt out. For the lawyers who have learned how to try these
cases, they can be a gold mine. The members of the class might collect a
few dollars each or some sort of certificate entitling them to a discount on
the defendant’s products, but the lawyers, who get a cut of the proceeds,
can reap a succulent fee that might run into the millions. In some cases,
the lawyers received more, in fees and expenses, than all of the members
of the class put together. Businesses, on the whole, detest class actions and
feel that many of them are scraped together by unscrupulous lawyers. The
lawyers surely play an important role, and many class actions would be
unthinkable without lawyer-entrepreneurs, who put the class together and
run the show. But the lawyers are in many ways simply taking advantage
of claims-conscious aspects of contemporary society.
The term "litigation" conjures up, in most people's minds, the image of
a trial: a judge and jury, a courtroom with an American flag, rows of seats
crowded with onlookers. Above all, it conveys the idea of a trial – a procedure
in court. In fact, trials went into a steep decline in the late twentieth century,
so much so, that Marc Galanter and other scholars began to talk about the
“vanishing trial.” Federal civil cases filed in 1962 ended up as actual trials
in only 11.5 percent of the cases – which seems like very little; yet by
2002, the portion that ended up in an actual trial was astonishingly low –
1.8 percent. The same decline seems to be taking place in state courts.
Most cases, then, do not fit the popular image of the trial. The trial, in
fact, has been vanishing for a long time. Jury trials have been declining for
more than 150 years. Certain categories of case never went before a jury –
cases involving family trusts, for example, or maritime cases – and even
where there is a right to a jury, the parties can, if they wish, choose to
waive a jury and let a judge handle the case by herself. Moreover, it would
surprise people to know how little time litigation lawyers actually spend in
a courtroom arguing a case. Mainly this is because, as we shall see, most cases
settle, so that what litigation lawyers do has been described as "litigotiation,"
that is, a process of bargaining and dickering outside of court.
But it is also because the center of gravity of litigation, even in cases that do
not settle, has shifted dramatically to the pre-trial phase. Many of the witnesses
are “deposed”; that is, their testimony is taken and recorded in a kind
of mini-trial outside the courtroom. Also important is the rise of “discovery.”
Under Rule 34 of the Federal Rules, either side, for “good cause,” can
get a court order to “discover” any “documents, papers, books, accounts,
letters, photographs, objects” from the other side, if they “constitute or contain
evidence.” Discovery, at its worst, permits wild and expensive fishing
expeditions; at its best, it makes for more efficient trials, avoiding surprises
and wasted energy. All of this pre-trial activity, however, by now perhaps
outweighs the actual trial as an element in the life-course of litigation.
II. ADVERSARIAL LEGALISM
In the last section, we learned that there is little or no evidence of an
explosion in litigation rates. Since the 1950s, however, the amount of money
spent on litigation has probably ratcheted upward. At any rate, the amounts
spent for legal services have grown tremendously – from 7 billion dollars
in 1970, to 46 billion in 1985, to 82 billion in 1990, and 125 billion
in 1999. Of course, the dollar has fallen in value, but in constant dollars
current expenditures are about four times as great as they were in 1970.
“Legal services” is a term far broader than litigation, but there is no doubt
that litigation has shared in this harvest of expense.
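A rough deflation makes the comparison concrete (the price-index values here are approximate consumer price index figures, assumed for illustration and not taken from the chapter's sources):

\[
\$7\ \text{billion} \times \frac{167}{39} \approx \$30\ \text{billion in 1999 dollars}; \qquad \frac{\$125\ \text{billion}}{\$30\ \text{billion}} \approx 4.
\]

On those assumptions, real spending on legal services in 1999 was indeed on the order of four times its 1970 level.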
Americans may not be litigious, on the whole, but no doubt there are
some Americans who actually are litigious; and there are undoubtedly areas
in which a fair number of Americans do not feel inhibited in the least
from filing a lawsuit. The role of litigation in society may be more crucial
than the naked figures suggest. Robert A. Kagan, for example, claims
that there is something quite distinctive about the American way of law,
which he calls “adversarial legalism.” He defines this to mean a system of
“policymaking, policy implementation, and dispute resolution by means
of lawyer-dominated litigation.” The United States, he claims, relies much
more on adversarial legalism than other developed countries; other countries
lean more heavily on “bureaucratic administration, or on discretionary
judgment by experts or political authority.” Moreover, litigation in the
United States is different from litigation in most other societies; it is not
“judge-dominated,” but “lawyer-dominated.”
Kagan is particularly harsh on the ways in which litigation can make
a shambles of the administrative process. Litigation can make building
an airport, or dredging a harbor, or constructing any major public work
slow, costly, and sometimes perhaps even impossible. In some countries,
administrative agencies have practically unfettered discretion. They make
their decision, and that’s the end of it. But American law grants much less
discretion to the agencies. Partly because their discretion is limited, the
agencies tend to rely much more on formal rules than their counterparts
overseas. Also, the law allows private litigants, in a wide range of cases,
to go to court in opposition to the work of the agencies. They can attack
particular administrative decisions or the agency’s rules themselves. The
good news is that American regulatory law is more open to the public and
more responsive to all sorts of interests than it is in other countries. The
bad news is that it often proceeds at a snail's pace and is frequently tangled
in endless webs of litigation.
III. THE LIABILITY EXPLOSION
In many ways, the subject matter of litigation has altered quite dramatically
in the course of the twentieth century. There is much more evidence of a
liability explosion than of a litigation explosion. The most obvious case is
in the law of torts. Tort law is a ragbag of causes of action for damages arising
out of "civil wrongs," including such things as trespass to real estate, libel and slander,
and invasion of privacy. But what has exploded is that segment of tort law
that deals with personal injuries. Personal injuries – and personal injury
law – first became a significant social and legal problem with the dawn of the
Industrial Revolution. It takes machines, factories, locomotives, and the like
to wreck the human body on a wholesale scale. In nineteenth-century tort
law, however, a cluster of doctrines tended to protect “tortfeasors” (mostly
corporations) from liability in whole classes of case. Most notorious was the
so-called fellow servant rule. A worker could not recover for an injury on
the job if the injury was due to the carelessness of a fellow worker. This rule
effectively prevented most workers from collecting any damages for work
accidents. Moreover, well into the twentieth century, juries were stingy
with awards in tort cases (despite mythology to the contrary). Randolph
Bergstrom’s study of New York City in 1910 showed that plaintiffs won less
than half of their personal injury cases and they collected, on the average,
only $958.
The twentieth century proceeded to dismantle the restrictive rules of the
nineteenth century, one by one. It opened the door to a huge expansion of
liability for personal injuries. Products liability is itself largely a product of
the twentieth century. A key case was MacPherson v. Buick, decided by the
New York Court of Appeals in 1916. Benjamin Cardozo wrote the majority
opinion. The plaintiff bought a Buick car; a wooden wheel crumbled
while he was driving, and MacPherson was injured. He sued the Buick
Motor Company. What stood in his way was an old doctrine, the doctrine
of “privity,” which insisted that a plaintiff generally had to sue whoever sold
him the product, not the ultimate manufacturer. Cardozo’s opinion undermined
the rule totally – though he never said so directly – and MacPherson
won his case. Clearly, the privity doctrine made no sense in the age of
mass-produced, advertised goods. Hence, it is no surprise that other courts
followed Cardozo’s lead and buried the privity doctrine once and for all.
The fellow servant rule also came to an inglorious end. The Federal
Employers’ Liability Act (1906) got rid of it for railroad workers. The
Supreme Court struck down this statute, but Congress passed a new version
in 1908, and this one the Court upheld. In 1920, maritime workers got the
same protection. In the states, however, what replaced the fellow servant
rule was the system of workers’ compensation. Basically, the compensation laws
abolished virtually all tort actions for industrial accidents and replaced them
with a guaranteed (but limited) scheme of payment. Fault, negligence, and
other considerations were brushed aside. If you were injured on the job,
if you had an accident at work, you had the right to claim compensation.
By 1920, almost all of the states had a workers’ compensation law; the last
straggler, Mississippi, joined the other states in 1948. The fifty state statutes
cover the overwhelming majority of the working people of the country.
Workers’ compensation laws were supposed to get rid of the massive
amount of litigation over industrial accidents. They were supposed to
change the orientation of tort law dramatically, from concern with individual
fault to a more social theory of causation and, in the process, to create a
more efficient and fairer system. Beyond a doubt, the laws were successful
in getting rid of at least some potential litigation. But this branch of the
law has had its own version of the liability explosion. The typical workers’
compensation law purports to cover accidents and injuries “arising out of
and in the course of employment” (these words were borrowed from an earlier
British statute of 1897). The core meaning of the phrase seems obvious:
the injury has to be something that happened during work hours and has
to be connected somehow with the job. But the courts have expanded the
meaning of these terms enormously. Behind the passage of the statutes was
a concrete, specific social problem, the classic industrial accident: thousands
of lives were sacrificed on the altar of production every year, and the harvest
of broken bones, lost limbs, blinded eyes, and wrecked lives ran into the
tens of thousands every year. The underlying image was the factory, railroad
yard, or mine; the loud noise of heavy machinery; and dangerous, dirty
work in dirty and dangerous environments. Indeed, the original Oklahoma
compensation law (enacted in 1915) was confined to workers in “hazardous
occupations”; the law specifically excluded white collar workers, among
others. And most statutes, at first, were about accidents quite literally: if the
job made the worker sick, or exposed her to harmful chemicals, or if she
simply wore out over the years because of work, there was no claim and no
coverage.
Courts and legislatures moved hand in hand to expand the scope of the
compensation laws. Recovery for occupational diseases was added to most
statutes; New Jersey, for example, in 1949, amended its law to include
“diseases” that “are due to causes and conditions . . . characteristic of . . . a
particular trade, occupation, process, or employment.” Workers’ compensation
litigation did its part. Thus, a secretary who slipped and fell in the
company restroom or its cafeteria, a worker injured at a company picnic, a
traveling salesman burned in a fire at a motel – courts held that plaintiffs of
this sort were entitled to recover. Dozens of cases wrestled with the issue of
heart attacks on the job. Then came an epidemic of psychological claims –
claims that the job had thrown A into deep depression, or had driven B
crazy, or that getting fired pushed C’s mental condition over the brink.
These claims, toward the end of the twentieth century, became so numerous
and costly that businesses panicked and demanded relief. A number of
states, including California, passed laws radically cutting back on psychological
claims. Under the new California statute, no worker could recover for
“psychiatric injury” if it was caused by a “lawful, nondiscriminatory, good
faith personnel action." The statute seemed to make a difference. Workers'
compensation, which cost $11 billion in California in 1993, dropped to
$8 billion two years later. But “reform” of workers’ compensation was still
on the agenda at the beginning of the twenty-first century in California. It
was still considered too lenient, too worker-friendly, too hostile to business.
Labor of course disagreed.
The courts and the legislatures have modified other doctrines that stood
in the way of tort claims. In the nineteenth century, one of the most powerful
was the doctrine of contributory negligence. The plaintiff not only had
to prove the defendant was negligent; she also had to be totally free of negligence
herself. If she had displayed even the tiniest degree of carelessness,
there could be no recovery at all. In 1908, the Federal Employers’ Liability
Act abolished the doctrine for railroad workers. A jury was entitled to
“diminish” damages “in proportion to the amount of [the worker’s] negligence,”
but the claim remained valid. This was essentially what came to be
called comparative negligence. The idea made its way slowly in the states;
it was still a minority view in the 1960s, but by the end of the century,
almost all of the states had some version or other of this new rule. In some, a
plaintiff who is, say, 80 percent at fault can still sue a negligent defendant,
collecting 20 percent of the damages. In other states, the plaintiff wins if
he was not as negligent as the defendant. In either case, the traditional rule
has lost most of its bite.
The main engine of the liability explosion, doctrinally speaking, was
the expansion of the concept of negligence. "Strict" liability – that is,
holding a defendant liable without the necessity of showing fault – has
struggled for a place in the doctrinal world. It is most clearly recognized
for “abnormally dangerous activities,” like storing or using dynamite or
other explosives in residential areas. But on the operating level, in the
course of ordinary litigation, the basic change that expanded the law of
torts was less a change in doctrine than a change in attitudes. Judges and
juries seemed more willing to listen sympathetically to the stories plaintiffs
(and their lawyers) told. The law reflected, as usual, popular ideas of right
and wrong, and these were (apparently) changing. The law also reflected
the growth of insurance and the feeling of judges and juries that, in most
cases, the insurance company would actually do the paying, not the nominal
defendant. This was particularly important in cases of automobile accidents.
After all, jurors and judges were drivers themselves and knew or thought
they knew that almost everybody carried accident insurance.
Medical malpractice was another growth area in tort law. It had, of
course, always been the case that a doctor (like anybody else) was liable
for his acts of negligence. But malpractice cases were never common in
the nineteenth century or well into the twentieth century. In Randolph
Bergstrom’s study of New York courts in 1910, only about 1 percent of the
tort cases were cases of malpractice. After 1950, the number of such cases
grew rather strikingly: in San Francisco County, between 1959 and 1980,
they amounted to 7 percent of all the civil trials. Most plaintiffs lost their
cases. Neil Vidmar’s study of malpractice cases in North Carolina in the
1980s found that half the malpractice cases were settled, another 40 percent
were dropped or otherwise terminated, and doctors won about 80 percent
of the cases that were tried. In some places plaintiffs did better, but in the nation as a whole,
the win rate for plaintiffs in malpractice cases was less than three cases
in ten.
Nevertheless, enough plaintiffs won and there was enough bad publicity
to frighten the country’s doctors half to death. Doctors claimed the plague
of lawsuits forced them to practice “defensive medicine” (or drove them,
through higher insurance premiums, out of the business altogether). The
doctrine of “informed consent,” which dates from the late 1950s, made
matters worse (for the doctors). If a doctor did not tell the patient enough
about the risks and side effects of some medical procedure, then the patient
had not really given her “informed consent” to the procedure and could sue
the doctor if something went wrong, or if the risk or side effect turned up.
A small but exceedingly important group of lawsuits has arisen out of
what has been called mass toxic torts. In the typical auto accident case there
are one, two, or a handful of victims. Even train wrecks and plane crashes
have limited numbers of victims. But there are incidents and situations in
which the number of victims can run into the thousands, or millions. This
has been the case with asbestos litigation. The first case against the asbestos
companies was decided in the early 1970s. By the middle 1980s, there were
more than 30,000 claims, brought by men and women who had sickened
or died after exposure to asbestos. The number of claims soon rose to over
100,000 and even higher, and the amounts involved were so great that the
asbestos industry essentially ceased to exist.
Some of these mass toxic torts cases were class actions, and they were
exceedingly complex, factually, legally, and procedurally. In some of them,
there was a serious question whether the companies should have been liable
at all. Agent Orange was a herbicide sprayed over Vietnam to clear the
jungle. Vietnam veterans by the thousands blamed Agent Orange for birth
defects in their children, cancer, and many other harms. The scientific evidence
was, to say the least, cloudy. Lawsuits against the A. H. Robins Company,
which manufactured an intrauterine device, the Dalkon Shield, drove
that company into the sheltering arms of the bankruptcy court. Tobacco and
firearms companies are recent objects of mega-lawsuits, so far with indifferent
success, but the potential impact is enormous. Fast-food restaurants
that sell junk food may be next in line.
A liability explosion, of course, is not the same as a litigation explosion;
the older studies found little evidence, as we have seen, of a litigation
explosion, including in the law of torts; and this continues to be the case. In
a study of ten states published in 2001, Jeff Yates and associates found a
modest growth between 1975 and 1995 in tort filings; but in three of the
states there had actually been a decline. Nor have all forms of litigation
shared in the explosion of liability. Some issues of family law have become,
apparently, more common and more heavily contested. “No-fault” removed
the issue of divorce (and the argument over grounds for divorce) from the
scene, but custody and property disputes remain. They are probably more
common sources of litigation than earlier in the twentieth century. As we
noted, civil rights litigation has been a growth area. All of the great civil
rights laws of the last half of the twentieth century allow private citizens,
under certain circumstances, to bring lawsuits. The numbers are fairly large,
as we have seen, but, as in the case of tort law, it is not the quantity but the
subject matter and the scale and scope of the cases that have been controversial.
Civil rights laws have put new burdens on business and branches
of government, have required more red tape and record-keeping, and have,
in many cases, engendered a fair amount of resentment. This is also true
of some environmental litigation. The number of lawsuits filed each year
that try to use environmental protection laws to fight plans or policies
is not large, but these can be very significant cases. Some of them are
attempts to scuttle major projects; some raise issues that are politically and
economically sensitive. Filing a lawsuit to stop a huge dam on behalf of some
tiny fish, or a lawsuit that puts the jobs of lumberjacks at risk, for the sake
of an endangered owl, is asking for trouble and for headlines, passion, and
countermoves.
In fact, there are many kinds of litigation that have not shared at all in
the richness of the twentieth-century docket. Some, such as disputes over
title to real estate, have been in deep decline in the twentieth century. Debt
collection cases, which in the nineteenth century made up an appreciable
percentage of filed cases in trial courts, have also lost ground in the twentieth
century, as Robert Kagan has shown. In the St. Louis Circuit Court, such
cases at one time in the nineteenth century were a large part of the docket,
but by 1970, they amounted to less than 5 percent. Kagan feels that the
dominant position of "large, bureaucratized, legally sophisticated institutions,"
such as banks and department stores, helps account for the decline.
Title to land is now regularized and governed by title insurance companies;
there are fewer reasons to fight over land titles. In the nineteenth century,
when banks and money supplies were wobbly and unreliable, people often
paid their debts with personal instruments, and they tried to get wealthy or
reputable people to endorse these instruments and guarantee their credit.
These endorsements produced a lot of lawsuits when endorsers tried to wriggle
out of payment, but this kind of case is now exceedingly rare. No-fault
divorce put an end to contested divorces (though not, of course, to disputes
over children and property). The docket, in short, shifts over the years, as
old problems move off the stage and new ones come forward.
IV. THE CULTURE OF LITIGATION
It is widely believed that Americans are litigious. They are supposed to be
“claims-conscious” or perhaps even quarrelsome. This may be a matter of
structure – Robert Kagan’s adversarial legalism depends heavily on such
structural features as federalism, decentralization, and the common law
tradition. But it is also felt to be a statement about culture or personality.
This kind of idea is very hard to examine rigorously. It is true, though,
that people think of Americans as litigious. And the perceived litigation rates
are perhaps almost as important as the (unknown) real rates. There is only
weak evidence of an explosion in litigation rates, as we have said, but most
people are unaware of the facts. Millions of people – including lawyers
and judges – are firmly convinced that Americans sue at the drop of a hat
and that our society is incurably addicted to lawsuits. This belief is fed
by the urban legends and newspaper horror stories that describe, in gory
detail, the excesses of litigation and the hunger of litigants for money they
do not deserve. This belief has consequences. It has fueled a political and
legal backlash that I describe shortly. The “victims” of litigation – doctors,
big businesses, municipalities – are only too willing to take advantage of
a popular mood for reform. To be sure, some debates and disputes about
litigation are technical and professional; the public hardly knows anything
about the pathologies, such as they are, of class action cases or the assumed
need to control “fishing expeditions” (abuse of the discovery process) that
cost litigants money and time. But public opinion does seem to provide
general support for the movement to curb or control litigation.
In fact, the vast majority of Americans have not sued anybody and have no
plans to do so. Many Americans agree that litigation is messy and undesirable.
Many Americans think badly of people who do litigate – this was even
the case before the tort reform movement. David Engel, for example, studied
attitudes in a rural county in Illinois. He found that many people resented
tort litigants. The people in this county considered tort plaintiffs greedy
and dishonest, people who wanted something for nothing. Shrill newspaper
accounts of wild, irrational lawsuits are taken as evidence that Americans
really are litigious. One famous example was the old woman who spilled
hot coffee on herself and sued the living daylights out of McDonald’s; she
collected an enormous sum of money because of what most readers assumed
was her own careless behavior. Another bogeyman was the (fictional) burglar
who supposedly had the gall to sue after he suffered an injury during the
course of his burgling. In fact, these horror stories are evidence of something
that is almost the opposite of litigiousness: a pervasive American belief that
litigation is sometimes or often a racket.
"Litigious," if it means anything, must mean that Americans are claims-conscious
or prone to litigate, not in absolute terms but comparatively:
either compared to the past or compared to other countries and cultures.
But it is very difficult to make these comparisons. The historical comparisons
are especially troublesome, because the figures are simply not available.
Undoubtedly, however, there have been cultural changes that do have an
impact on litigation. In the nineteenth century, there was no welfare state,
very little insurance against liability except for marine insurance and (business)
fire insurance, and life was precarious in terms of both health and
finances. In the twentieth century, modern medicine, the welfare state, and
widespread use of insurance may have led to what I have called a culture
of “total justice.” This is the expectation that compensation is due and by
rights ought to come from some source or other, when calamity occurs. If
other sources fail, there is always litigation.
There is no question that litigation is an important social phenomenon in
the United States, quite apart from whether people are or are not litigious.
Whether or not rates of litigation rise, it may well be that the cases that are
litigated are more important or that a small subset of lawsuits has enormous
social or economic importance. In fact, more and more money is spent in
the United States on lawyers and lawyering; businesses, governments, and
large institutions shell out billions of dollars, and a good deal of this goes
into litigation. Corporations are more and more the main litigators, and
they tend to win their cases.
The litigation habit, whether myth or reality, has in any event been the
target of enormous criticism. Litigation has been accused of many sins.
One of them is harming the economy, and this accusation has particularly
fueled the reform campaign against tort litigation. But the true impact of
litigation on American society is almost impossible to measure. Economists
have attempted to assess the costs of litigation – not only the amounts spent
on lawyers and lawsuits, but the net loss to the economy from “excess”
litigation. The benefit side is much more difficult to measure. Nobody
would choose, in a Utopian society, to use litigation as a tool for social
reform. But under certain circumstances, there is no alternative. Consider,
for example, the civil rights movement. African Americans were effectively
shut out of political power in the Southern states. They did not vote, hold
office, or serve on juries; there were no African American sheriffs, police,
or judges and almost no African American lawyers. Southern senators were
all-powerful in the Senate and blocked any hope of even the mildest civil
rights legislation. The federal courts came to seem like the only hope. The
NAACP and other civil rights organizations, therefore, pursued a litigation
strategy – and the strategy seemed to pay off.
This is by no means the only example in which litigation seemed to
produce social change, although the actual impact is often problematic and
can be (and is) debated. On the surface, the case sometimes seems fairly clear.
To take one example, lawsuits in the federal courts accused some states of
running archaic and brutal prison systems; some of these lawsuits ended
up with strong court orders, telling the states to produce reforms. Lawsuits
shook up the world of school finance. Lawsuits have blocked or delayed
or killed many projects or proposals, from airports to dams to logging
operations. Litigation is a way to probe weaknesses in the structure and
scale of the welfare-regulatory state. Litigation is like a siege engine, which
exploits soft spots in a wall of resistance. Kagan, who is unsparing in his
criticism of some aspects of “adversarial legalism,” is quite explicit that the
system does produce some benefits.
Also, there are places where litigation seems to be badly needed. Kagan
points out that, in the 1970s and 1980s, Dutch workers had five to ten
times the rate of asbestos-related diseases of American workers. But fewer
than ten lawsuits were filed in the Netherlands. By 1991 the number of
lawsuits filed in the United States for asbestos-based torts was, as we have
seen, incomparably greater – perhaps almost 200,000. Why the difference?
Because, Kagan argues, disabled Dutch workers will receive medical care
and generous benefits for life. Welfare laws already provided the Dutch a
level of care that, in the United States, only a lawsuit could secure. The
Dutch system was clearly more efficient than the American non-system. In
the United States, the tort system filled the gap; but it was painfully slow
and incredibly wasteful. Each lawsuit reached its own idiosyncratic result.
The results of class action cases varied legally and financially. And up to
two-thirds of the money recovered – in settlements and trials – ended up
in the hands of lawyers and other professionals, rather than in the hands of
victims and their families.
For all the reasons mentioned – because the results of tort litigation were
chaotic, wasteful, and inconsistent and because they seemed so damaging
to important interests – a strong campaign emerged, particularly in the last
third of the twentieth century, to put limits on litigation. It was directed
mostly against tort litigation.
There had been earlier campaigns. In the first part of the twentieth
century, there were campaigns against “ambulance chasers”: personal injury
lawyers and their “runners,” who, it was said, raced to the scene of an
accident, or the hospital, or the home of the victim to sign the victim up
as a client. In 1928, there was a major investigation of ambulance chasing
in New York City. Jury trials for tort cases had risen dramatically; and the
increase was blamed on the personal injury lawyers. The investigation made
headlines and ended up recommending disciplinary proceedings against
some seventy-four lawyers who were guilty of turning legal practice into
a dirty and disreputable business. Nothing was said about the fact that
businesses also chased ambulances; that is, they sent claims adjusters out in
a race with the lawyers, trying to induce victims to sign releases in exchange
for small settlements. In the end, only a few lawyers were disciplined.
Filings, however, dropped dramatically in New York, though probably
less because of the investigation than because of a huge increase in filing
fees.
In the 1970s, businesses, stung by fears and threats of litigation, mustered
as much political muscle as they could in the battle to limit tort lawsuits.
(Another focus has been stockholder suits against corporations.) The anti-tort
campaign has made its mark on legislation. The Republican Party, in
particular, made tort reform one of its promises. Litigation, especially tort
litigation, was supposed to be damaging the economy. Japan and Europe –
it was said – were getting an edge over the United States. Lawyers were
parasites and trouble-makers whose activities were sapping the strength of
the country, costing money and jobs.
In the 1970s, doctors were important figures in the movement to do something
about the “flood” of litigation. Many doctors faced sharply increased
rates for insurance against malpractice. This created a sense of crisis in the
profession. Many in the profession put the blame squarely on tort litigation.
From 1975 to 1978 there was a wave of legislation aimed at alleviating the
malpractice “crisis” and other problems thought to come from tort litigation.
No fewer than twenty states put limits on contingent fees; some
fourteen states put caps on money damages. In the mid-1980s, a new wave
of reform – also stimulated by increases in liability insurance – led to caps
in sixteen states on “pain and suffering”; more than half the states also put
limits on punitive damages. Almost every state, in fact, passed some sort of
legislation with the aim of reforming tort litigation and curbing the assumed
avalanche of lawsuits.
What has been the impact of this wave of reform? Litigation rates, in tort
suits, do seem to be dropping. Is this because of the new laws against tort
cases? To a degree, perhaps. More powerful, according to some studies, has
been the indirect impact. The powerful campaign against tort litigation has
had an impact on the general public. It has affected, in other words, the pool
of people from which juries are chosen. And if juries become tougher, then
insurance companies also get tough in bargaining and negotiation because
they are less fearful of what juries might do if the case were to go to trial.
And personal injury lawyers, whose income is on the line, will take fewer
marginal cases and settle other cases for much less money than before in this
kind of litigation climate. Stephen Daniels and Joanne Martin examined
the evidence for Texas and found this to be the case. Tort filings in Texas
trial courts dropped by almost 25 percent between 1995 and 2000: tougher
juries led to tougher insurance companies, leading in turn to changes in the
behavior of the lawyers. And the income of at least some litigation lawyers
had also been in decline.
V. ALTERNATIVE DISPUTE RESOLUTION
No study of litigation would be complete or realistic without attention to
some of the alternatives to litigation. Litigation is obviously a last resort. Of
the thousands and thousands of problems that might give rise to a lawsuit,
only a tiny fraction actually do. Lawsuits are expensive and troublesome.
Individuals and businesses, for the most part, try to avoid them.
There have been a few attempts to investigate the actual life-cycle of
disputes. The most notable has been the Wisconsin study of civil litigation
carried out in the 1970s. An accident occurs; a woman slips on the sidewalk
and breaks a bone. The woman might, for example, blame only herself.
Where she does blame somebody else, the blame turns into a claim in only
a fraction of cases. And many claims – most claims – never
go very far; they are dropped or settled. Few of these claims turn into a
dispute, and fewer still of these disputes turn into lawsuits. The process can
be pictured as a kind of pyramid: incidents are at the base, and the pyramid
gets narrower and narrower toward the top; the surviving claims that end
up in court are relatively rare events. For the population sampled, only
fifty court filings resulted from every one thousand grievances. There was,
however, considerable variation, depending on the type of case: only 38 of
every 1,000 tort grievances and a tiny 8 of 1,000 discrimination grievances
resulted in the filing of a lawsuit; but “post-divorce” grievances (disputes
over child custody or over the question of dividing up property) were much
more litigation-prone: almost half of them went the whole route and ended
up in court. A study of aviation accident litigation showed an even more
dramatic recourse to courts. In 1970–76, 64 percent of the claims arising out
of aviation accidents (and a claim was filed for virtually every victim) turned
into a lawsuit, and for fatal cases between 1979 and 1982, an astonishing
84 percent went to court.
It is more common to settle a case out of court than to pursue it to the
bitter end. Most settlements are informal, arranged by the parties or their
lawyers. Mediation is a more formal method of settling a dispute. A mediator
is a third party who works with the parties in dispute and tries to help them
find a way to settle it. The mediator has no power to impose a solution; if
the parties decide to give some third party that power – the authority to
make an actual decision and to make it stick – we call that third party an
arbitrator. Arbitration and mediation are old substitutes for litigation, and
they continued to be very popular in the twentieth century and into the
twenty-first. With regard to arbitration, there are many legal questions:
what, for example, is the legal impact of an arbitrator’s decision? Under
what conditions is it absolutely binding? Must a court follow it, if the
losing party tries to shift the dispute into the courtroom? More and more
contracts contain an arbitration clause – the parties promise to arbitrate
and not to go to court. Is such a promise enforceable? These clauses are
not politically and economically neutral; they are favorites of the business
community, but not so popular with individuals, especially workers.
There has also been a movement, some of it coming from within the
legal profession itself, to develop methods and procedures of alternative
dispute resolution (ADR). As a movement, ADR rose to prominence in the
1970s. A conference in St. Paul, Minnesota, in 1976, focused on “Popular
Dissatisfaction with the Administration of Justice” (echoing the title of a
famous talk given by Roscoe Pound seventy years earlier). The conference
promoted the idea of quicker, more efficient, “alternative” justice. The Civil
Justice Reform Act of 1990 put a kind of federal stamp of approval on ADR;
this law asked all federal district courts to adopt a “civil justice expense and
delay reduction plan,” including methods of ADR. The Administrative
Dispute Resolution Act, passed by Congress in 1996, was intended to
encourage ADR in the handling of administrative disputes. The Act begins
with certain “findings”: that “administrative proceedings” were too “formal,
costly, and lengthy” and that ADR could offer a “prompt, expert, and
inexpensive means of resolving disputes as an alternative to litigation.” The
Act applies to all federal agencies, and it imposes on these agencies the duty
to “adopt a policy that addresses the use of alternative means of dispute
resolution and case management.” There have been parallel developments
in the various states.
The ADR idea, like arbitration, is quite attractive. Nobody likes the
idea of litigation, except for the people (trial lawyers, mostly) who make
their living out of litigation. For almost everybody else, it is undesirable –
sometimes a necessary evil, sometimes very beneficial to society, but still it
is regrettable if litigation is the only way to achieve some desirable goal.
Business people find litigation particularly obnoxious for all sorts of reasons
– cost and disruption among them. Business people appreciate procedures
that are quick, simple, and private. A fair number of private companies
are geared up to provide ADR, for a price. In California, under a program
nicknamed “Rent-A-Judge,” parties can choose to have their own, private
trial, with a private “judge,” who often is (in fact) a retired judge.
Roughly, formal alternatives fall into four categories – mediation, arbitration,
private ADR, and court-annexed ADR, that is, ADR run and managed
by the regular courts. Whether all of these ADR methods have the virtues
claimed for them is another story. There is some evidence that at least some
of these methods save very little in the way of time and effort.
Litigation, even when replaced by alternatives, is never totally irrelevant.
Bargaining and negotiation often turn on how litigation would turn
out – or, at any rate, on the way in which lawyers and potential litigators
assess the probabilities and how they read the law. This is what Robert
Mnookin and Lewis Kornhauser have called “bargaining in the shadow of
the law.” They coined this phrase in the course of an article about negotiation
in divorce cases. Somewhat earlier, H. Laurence Ross had described the
same process in his study of settlements in auto accident cases. Litigation
no doubt casts a long shadow. But, as Ross makes clear, the “shadow” of
the law is often a distorted one. The law itself is complex and subtle and
the outcome of litigation never quite predictable. Negotiation, then, often
proceeds on the basis of guesses or hopes, leavened by social norms that may
or may not coincide with the operating norms of the legal system. Moreover,
it is not really the shadow of the law that concerns the parties so much as
the shadow of adjudication – the costs and troubles of going to court.
CONCLUSION
This chapter has examined the ebb and flow of litigation in the twentieth
century. Reality is complex and, in a way, almost contradictory. The country
spends a lot more time and money on legal services, including litigation,
than at the beginning of the century. But it appears that the actual rate of
litigation has not exploded the way most people think. Nor is there strong
evidence that Americans are, by nature, litigious.
But the clouds of smoke that pour out of the debates over lawsuits do
suggest that somewhere, somehow, there is a fire. Indeed, in the world of
litigation, there have been important qualitative changes. New forms of
action have arisen. Liability in tort law has indeed exploded, so much so as
to generate a backlash. “Adversarial legalism” is a reality in administrative
law. Courts are powerful and exercise their power when they wish to and
when litigants press them to. The early twentieth century never dreamed
of so-called mass toxic torts.
Litigation does not mean, necessarily, trials, which have, on the whole,
decreased in the latter part of the twentieth century. More and more of the
work of settling disputes gets done outside of court – in the settlement
process, in the back-and-forth dance of discovery and other forms of pre-trial
process, and through diversion to the various forms of ADR. Litigation
will never disappear, but it will continue, no doubt, to evolve.
6
criminal justice in the united states
michael willrich
Anyone vaguely familiar with the career of American criminal justice in
the twentieth century knows this story does not have a happy ending.
A liberal democracy that incarcerates more of its people per capita than
any other nation on the planet cannot take pride in its crime policies; nor
have those policies, like a revolver in the nightstand, made Americans feel
particularly secure. Criminal justice – like crime itself – is often assumed to
be an intractable social problem, timeless and impervious to reform. Who
among us can imagine modern society without the prison? But criminal
justice is inescapably the product of history. The historical trajectory of
American criminal justice no longer seems as certain or progressive as it
once did. When American criminology was still in its infancy as a scientific
discipline, around the turn of the twentieth century, practitioners told the
story of punishment in uniformly whiggish terms: a steady march toward
ever more humane, modern methods. The past century in criminal justice
now looks far otherwise: a record of abrupt shifts, stark continuities, and
stunning reversals.
The twentieth century opened with a dramatic transformation in the
ideas and institutions of American criminal justice. The United States was
in the throes of industrial expansion, an era of rapid urbanization and mass
immigration that had already turned a predominantly agrarian country into
the world’s most productive industrial economy. To regulate the harshest
human consequences of industrial capitalism, social reformers, lawmakers,
and innovative government officials began to put together the pieces of
an administrative-welfare state. They built this “modern” interventionist
state from old and new materials, ensuring that long-established criminal
justice institutions – codes, police, courts, jails, prisons – would not be
displaced by the new-fangled administrative agencies and welfare bureaus.
In fact, criminal justice reform was at the cutting edge of institutional
change. In the industrial cities, which seemed overrun with poverty and
crime, a broad cross-section of people – social activists, social scientists,
legal academics, lawyers, lawmakers, judges, and criminologists – strove
to redefine criminal justice. They rejected traditional retributivism and to
some degree punishment itself, as barbaric, and they regarded the whole
notion of individual responsibility with a distinctly modern skepticism.
For these thinkers and reformers, the rising social science disciplines
confirmed what right-thinking Americans already knew from experience:
in an urban industrial nation, the traditional concept of the individual as a
“moral free agent” no longer made much sense. Crime had its causal origins
not in the moral free will of the autonomous individual, but in “social”
conditions that determined human behavior: bad heredity, poverty, broken
homes, and the urban environment. The reformers sought to remake criminal
justice institutions into instruments for the therapeutic treatment of
criminals, the production of useful social knowledge, and the governance
of society as a whole. The new social conception of crime inspired institutional
innovations – the juvenile court, the indeterminate sentence and
parole, probation, even eugenics-inspired laws to sterilize “mental defectives.”
All of these experiments aimed to prevent crime and to reduce the
centrality of the prison to the American way of justice.
A century later, those Progressive era ideas and institutions were being
deemed a failure and slated for destruction. The last quarter of the twentieth
century brought domestic deindustrialization, sharply rising economic
inequality, and sustained political attacks on the welfare state. Retributivism
and a rhetoric of “personal responsibility” returned. Politicians
declared a “war on crime.” Lawmakers enacted mandatory minimum sentencing
laws that reined in judicial discretion and ensured that convicts
spent much more time behind bars. The prison system, whose future seemed
uncertain a century before, experienced a population explosion, surging
from some 500,000 prisoners in the system on any given day in 1980 to
1.8 million prisoners in 2000. (The entire European Union had some
300,000 people behind bars that year.) In many American states, spending
on corrections grew faster than any other item in the budget, and private
corporations won lucrative contracts to build and run prisons. Progressive
era reformers and 1960s liberals had viewed criminal justice institutions
as means for rehabilitating offenders of all nationalities and races. At the
end of the century that optimism had withered. In 1996 African Americans
were incarcerated at a rate eight times that for whites.1

1 Figures are from John Irwin et al., "America's One Million Nonviolent Prisoners," Social Justice 27 (2000), 135–47.

In this chapter I offer an interpretive history of American criminal justice
since 1920. Two central themes run through it. First, in criminal justice, as
in other areas of American political development, institutions and political
structures matter a great deal more than historians usually give them credit
for. The distinctive institutional complexity and decentralized constitutional
structure of the American polity are crucial to the story of criminal
justice. In fact, to a significant degree, they are the story. Second, criminal
justice institutions, which occupy little more than a sidebar in most
U.S. history textbooks, belong at the center of American historical analysis.
Defending society from crime and bringing criminals to justice – vast
enterprises in their own right – are not the full measure of criminal justice.
Criminal justice is an extremely complex and far-reaching field of political
ideas and governmental practices that has profoundly affected the development
of law, constitutionalism, liberalism, and the modern state. To be
sure, the historical development of criminal justice institutions has reflected
broader social, cultural, and political changes. But it has also shaped them.
A few preliminary words about each of these themes. Compared to those
in other Western nations, criminal justice institutions in the United States
are sprawling and decentralized. They constitute a system only in a tenuous
sense. The widespread usage of that familiar term – “criminal justice system”
– was one of the achievements of 1960s liberals, who supported the
efforts of the U.S. Supreme Court to impose greater national uniformity in
procedural rights and advocated a greater role for the federal government in
crime prevention. In reality, the United States has nothing to compare with
Great Britain’s Home Office – a central authority charged with setting crime
policy and overseeing criminal justice institutions for the entire nation. In
America, the public institutions of lawmaking bodies, police forces, prosecutors
offices, courts, jails, and prisons operate at the local, state, and federal
levels. The shifting boundaries of federalism – rooted in the Constitution,
legislation, and judicial rulings – determine the scope of authority and
power in each domain. This distinctively American lack of centralized policy
coordination has important consequences. It leaves the nation’s criminal
justice institutions exceptionally vulnerable to the influences of partisan
politics, reform movements, local customs and norms, and policymaking
through litigation. These institutional arrangements help explain the contradictory
trends in criminal justice that have confused students of history
and frustrated Americans on both the political left and right. During the
1970s, for example, litigation in the federal courts greatly expanded the
rights of prisoners, even as the pressure of popular partisan politics spurred
lawmakers to enact new “tough on crime” policies. All of this institutional
complexity makes telling the story of criminal justice in modern America
a daunting task. But without an appreciation of institutional arrangements
that story is incomprehensible.
Throughout the twentieth century, criminal justice was a central – at
times, the central – problem of liberal governance in the United States.
Its historical trajectory shaped and was shaped by the critical issues of liberal
politics and state formation: the growth of the administrative-welfare
state, the changing conceptions of individual freedom and responsibility
that redefined liberalism, the related struggles over the legitimate scope of
government intervention, and the politics of social and cultural pluralism.
Conceiving of criminal justice in these explicitly political terms puts criminal
justice institutions at the heart of central issues in twentieth-century
history. Changing conceptions of criminal responsibility informed public
perceptions of what the state could and should do to alleviate poverty and
inequality. Criminal justice institutions such as juvenile courts were proving
grounds for emerging techniques of bureaucratic governance. Since
1920, criminal justice, traditionally the province of state and local authority,
has exemplified the increasing centralization of economic, cultural, and
governmental power in America. Even after the recent decades of devolution
of public authority back to the states – particularly in the area of social welfare
programs – criminal justice’s status as an issue of national concern and
governance continues to grow. Despite these significant trends, the history
of criminal justice since 1920 has been marked by continuities and retreats
as well as change: the persistence of localism, the survival and revival of old
ideas of individual responsibility and deterrence, and the recent decline of
the venerable Enlightenment idea that criminal justice must have a rational
purpose beyond punishing individual offenders.
The chapter is organized into three chronological sections. Part I examines
the decades between the world wars when Americans grappled with
the progressive legacy of “socialized” criminal justice in an era of political
conservatism and “crime wave” scares. With Prohibition and the growth
of the Federal Bureau of Investigation, the nation took its first serious (but
tentative) steps toward nationalizing crime policy, and in a new reckoning
of racial and economic inequalities in the administration of local criminal
justice, the U.S. Supreme Court tried to impose uniform procedural
standards on local courts.
Part II traces the trends in the post-World War II era that culminated
in the liberal moment of criminal justice policy in the 1960s. The Model
Penal Code, a distinctive product of Cold War legal culture, promoted an
unprecedented level of uniformity in the substantive criminal law for the
states and attempted to resolve long-standing tensions between sociological
and legal understandings of criminal responsibility. Under Chief Justice Earl
Warren, the U.S. Supreme Court launched a “due process revolution” that
established new procedural rights for accused criminals and empowered
federal courts to police state and local criminal justice institutions. In the
1960s crime became a litmus-test issue in national politics, signaled by
the 1967 release of a report commissioned by President Lyndon Johnson,
The Challenge of Crime in a Free Society. In retrospect, the report’s publication
was the high-water mark of liberal crime policy in America.
Finally, Part III examines the “severity revolution” that transformed
American criminal justice in the last quarter of the century. In an era of
rising political conservatism, lawmakers enacted “get tough” crime measures
and drug laws that flooded prisons and had their greatest impact on
urban minority communities. With the collapse of the progressive tradition
in American liberalism, the social problems of poverty and inequality lost
much of their moral claim on the state. The notion that society bore some
collective responsibility for crime was perhaps the most battered legacy of
the old tradition.
I. CRIMINAL JUSTICE BETWEEN THE WARS
Historical memory of American criminal justice between the world wars
is dominated by a single, failed experiment in social control: national Prohibition.
By any rough quantitative measure – institutions built, policies
introduced, hearts and minds won – the decades following ratification of
the Eighteenth Amendment in 1919 would seem fallow compared to the
Progressive era. After all, it was during those two previous decades of institutional
reform that the modern criminal justice system took shape. Urban
police administration, long the prize of political factions, began to resemble
a profession. The defining beliefs of progressive criminology – that
crime had social causes and criminals could be rehabilitated by a therapeutic
regime of individual treatment – won many converts and enjoyed
institutional success. Rehabilitative practices took root in state and local
criminal justice institutions: indeterminate sentences and parole for felons;
court-monitored probation for juvenile delinquents and first-time adult
offenders; and “socialized” criminal courts, staffed with social workers and
psychological experts, for juvenile delinquents, bad parents, and prostitutes.
Progressive ideas reached even into the deteriorating corridors of jails
and prisons, where wardens classified and sorted convicts according to the
latest scientific behavioral categories and penological experts ministered
to the souls of convicts, like the jailhouse chaplains of the past. Nothing
that happened in the 1920s and 1930s could quite match that earlier era of
experimentation and reform. And yet, the interwar years were consequential
ones, marked by ideological controversies, new crime-fighting initiatives,
an unprecedented expansion of federal power, and the first critical academic
surveys to take stock of the vast American archipelago of criminal justice
institutions as a national system.
For American criminal justice, the Twenties began in 1919. The year that
the war in Europe ended was an exceptionally terrifying one in the United
States, punctuated by huge general strikes, deadly urban race riots, a string
of anarchist bomb plots, and a nationwide crackdown on political crime.
America’s first Red Scare, which peaked in the winter of 1919–20, was in
one sense the last act of World War I – a shockingly repressive crackdown
triggered by fears of revolutionary insurgencies abroad and immigrant and
worker radicalism at home. But the brief nationwide campaign to round
up and, in the cases of many immigrants, forcibly deport suspected radicals
also foretold a long-term build-up in the domestic surveillance capacities
of the federal government.
In the Red Scare’s most famous episode, the Palmer Raids of January
1920, more than five thousand alleged radicals, mostly members of left-wing
immigrant groups or labor organizations, were arrested in thirty-three
American cities. The U.S. Department of Justice, under Attorney General
A. Mitchell Palmer, directed the round-up. The administrative capacities
to carry out such an action had been built up during the late war, when
Congress enacted three statutes – the Espionage Act (1917), the Sedition
Act (1918), and the Immigration Act (1918) – that gave the federal government
new authority to police or deport individuals who aimed to impede
the war effort or advocated violent overthrow of the government. After the
war, the Bureau of Investigation, which had been established in 1908 to
help the Department of Justice enforce antitrust laws, closely monitored
suspected radicals. Under the zealous young J. Edgar Hoover, the Bureau’s
General Intelligence Division amassed files on civil rights leaders, pacifists,
and other radical individuals and organizations. The federal government had
no monopoly on anti-radical activities. Thirty-two states enacted their own
sedition and criminal syndicalism laws to check subversive activities, and
local police departments created Bomb Squads and Red Squads. Local and
federal authorities ignored procedural niceties. Suspects were arrested without
warrants, hundreds were summarily deported, and many were detained
for long periods without access to lawyers.
The growth of government surveillance mobilized an emerging network
of civil liberties activists and lawyers. As the Red Scare subsided,
they developed a formidable constitutional defense of political speech. The
group included former Progressives who had applauded the expansion of
federal power during the war, but had grown concerned about the government’s
readiness to trample fundamental rights in the name of security. The
American Civil Liberties Union, established in 1920, soon won important
legal victories (and half-victories) that laid a foundation for civil rights and
civil liberties cases of the 1930s and beyond. In Gitlow v. New York (1925),
the ACLU represented a Communist party leader convicted under New
York’s Criminal Anarchy Act of 1902 for publishing a pamphlet called
The Left-Wing Manifesto. The U.S. Supreme Court upheld the New York
statute and affirmed Benjamin Gitlow’s conviction. But Justice Edward T.
Sanford’s majority opinion created a crucial precedent for First Amendment
rights. Sanford initiated the long process, realized in the next four decades,
of “incorporating” the Bill of Rights into the Fourteenth Amendment’s
Due Process Clause – that is, using the clause to apply the Bill’s provisions
against the states. Sanford wrote that “for the present purposes we may and
do assume that freedom of speech and of the press – which are protected by
the First Amendment from abridgment by Congress – are among the fundamental
personal rights and ‘liberties’ protected by the due process clause
of the Fourteenth Amendment from impairment by the states.”2 Though
of little immediate aid to Gitlow, this language held great promise for civil
liberties, particularly in the field of criminal justice. If the Due Process
Clause carried the First Amendment to the states, why couldn’t it do the
same for the Fourth through Eighth Amendments, which protected the
rights of suspects, defendants, and convicts? For the time being, though,
the Court was in no hurry to impose national standards on the local realm
of criminal justice.
Since the creation of the republic, criminal justice had been almost exclusively
a local affair, an expression of those broad police powers reserved for
the states by the Tenth Amendment. In the aftermath of Reconstruction, the
1878 Posse Comitatus Act threw an additional harness around the federal
government by forbidding the use of federal troops in civil law enforcement.
Until the 1890s, the federal government did not even have its own
prison system; state penitentiaries housed federal prisoners (in exchange
for boarding fees and the right to seek a return from the prisoners’ labor).
As the population of federal prisoners rose in the 1880s and 1890s, and
the prevailing leasing system of state penitentiary labor fell under political
assault from the labor movement, Congress authorized the creation of the
first federal prisons (in Leavenworth, Kansas; Atlanta, Georgia; and McNeil
Island, Washington). By 1930, the federal government owned seven prisons.
The federal criminal code, which covered mostly interstate crimes and
conspiracies, remained a shadow of the state codes. But as early as the
Comstock Law of 1873, which made it a federal crime to send “obscene”
materials through the mails, Congress had proved willing to throw federal
power behind morality crusades. In 1910, at the height of the progressive
legal assault on prostitution, Congress enacted the Mann (White Slave) Act,
which made it a federal crime to transport a woman across state lines for
“immoral” purposes. Still, nothing quite prepared the federal government –
or the American public – for the extraordinary expansion of federal authority
required to put into force the commands of the Eighteenth Amendment.
2 Gitlow v. New York, 268 U.S. 652, 666 (1925).
Local alcohol regulations dated back to the colonial era. In a burst of
temperance reform between 1851 and 1855, twelve of the thirty-one states
followed Maine’s example and enacted statewide laws outlawing the manufacture
and sale of liquor. With the notable exception of New York’s 1855
prohibition statute, which was struck down by that state’s highest court
in an early enunciation of substantive due process doctrine, most of these
state measures survived constitutional challenge. By the 1880s and 1890s,
as the American economy grew ever more national in scope, federalism had
become a major source of frustration for prohibitionists. Interstate commerce
doctrines, promulgated in the federal courts, made it legal for liquor
dealers based in wet states to import their goods into dry states. The federal
liquor excise tax conferred a measure of legitimacy on the liquor industry
and, by making a stream of federal revenue dependent on the free flow of
liquor, made prohibition reform less politically appealing. By 1900, only
five states (three of them in New England) retained statutory or constitutional
prohibition.
The passage of national prohibition just nineteen years later was a remarkable
achievement that revealed the changing character of American politics,
particularly the growing power of interest groups and the centralization
of government authority. With the founding of the Anti-Saloon League
in 1895, temperance advocates boasted a national coalition with a strong
base in Protestant congregations. Like the most effective women’s political
organizations of the day, the League organized across the federal system.
The prohibition crusade illustrates how effectively early twentieth-century
activists linked criminal justice issues – in this case, the control of drinking
– to pressing social and political issues: family dependency, woman’s suffrage,
the corruption of government by business interests. The League took
advantage of the era’s reforms of political procedure. The direct primary
laws enabled it to press both parties to put forward dry candidates. Dry
state legislatures submitted the saloon question to the voters in referenda.
By 1916, twenty-one states banned saloons.
The national elections of that year produced a Congress ready to make
prohibition federal law. In 1917 Congress submitted the prohibition
amendment to the states. National Prohibition so threatened the tradition
of federalism that even many temperance advocates initially opposed
the idea. But American entry into World War I eased the passage of federal
prohibition laws, justified as war measures to conserve both grain and
the morals of American servicemen. With the ratification of the Eighteenth
Amendment in 1919, the Constitution now permanently banned
the “manufacture, sale, or transportation of intoxicating liquors.” Enacted
over President Woodrow Wilson’s veto, the 1919 National Prohibition
Enforcement Act (the “Volstead Act”) outlawed beverages that contained
more than 0.5 percent alcohol by volume, set fines and prison terms for
violations, and entrusted enforcement authority to the Bureau of Internal
Revenue, a branch of the Treasury Department.
From a law enforcement perspective, national Prohibition had a
belt-and-suspenders redundancy built into it. Like many other social policies
adopted by the federal government during the 1920s and 1930s (including
child labor controls and public assistance to single mothers), Prohibition
effectively nationalized existing state policies. Since the state prohibition
laws did not fall off the books, Prohibition had the virtues and accountability
problems of concurrent state and federal enforcement. Concurrent
jurisdiction diffused responsibility for enforcement and emboldened states
to attach their own standards to the controversial law. The 0.5 percent alcohol
standard, adopted from an older Bureau of Internal Revenue standard
for taxable alcoholic beverages, was much scorned in some states. In 1920,
Massachusetts, New Jersey, and New York enacted laws authorizing the
manufacture and sale of low-alcohol beer and wine above the federal limit.
That same year, the U.S. Supreme Court forced the states into line with
federal law.
For national Prohibition to work, the U.S. government needed to marshal
vast political will and financial resources to the cause. But fiscal conservatism
ruled in the 1920s. With little encouragement from the Republican administrations
of Presidents Warren Harding and Calvin Coolidge, Congress
never gave the Treasury Department a budget large enough to fund an effective
nationwide enforcement effort. A great rural-urban enforcement gap
revealed that no single constitutional amendment could easily overcome the
resilient localism of American legal cultures. Many rural communities had
little need for national Prohibition, having voted themselves dry early in the
twentieth century. But in urban centers like Chicago, Prohibition opened
an enormously profitable field of enterprise to urban ethnic entrepreneurs,
including Al Capone, who turned the illicit manufacture, distribution, and
sale of alcoholic beverages into a big business. Wet mayors, like New York’s
Jimmy Walker and Chicago’s William Hale Thompson, strengthened their
political bases by openly flouting the dry laws. Mayor William Dever of
Chicago (1923–27), motivated by a sense of personal duty and the many
letters of petition he received from poor immigrant women, actually tried
to enforce the law, ordering his police to shutter hundreds of businesses.
His actions triggered a deadly run of gangland violence in which more
than 115 people died. Dever was voted out of office after only one term.
In contrast to the dramatic clashes of Treasury Department officials and
bootleggers memorialized by Hollywood, much of the actual resistance
to Prohibition enforcement was decidedly mundane. Local criminal courts
were overrun with Prohibition cases. Lacking funds to hire more clerks and
judges for the job, many courts instituted docket-clearing “bargain days,”
inviting masses of defendants to barter guilty pleas for light fines.
It took a while for Americans to get around to the unprecedented act of
repealing an amendment to the U.S. Constitution. In the 1928 presidential
election, the voters passed over the wet Democratic candidate Al Smith for
the dry Republican Herbert Hoover, who called Prohibition “a great social
and economic experiment, noble in motive and far-reaching in purpose.” As
president, Hoover authorized construction of six new federal prisons. But
many Americans already favored modifying the Volstead Act to legalize
light wines and beers. During the Depression, the Association Against the
Prohibition Amendment, a group led by manufacturing interests, claimed
that repeal would lift the economy by creating jobs and restoring federal
tax revenues. Repeal also had the support of the Women’s Organization
for National Prohibition Reform, which argued that Prohibition violated
women’s individual freedom. The bold public arguments of these wealthy
women for repeal attested to the dramatic cultural transformation in morals
and manners that had occurred during the 1920s. Most urban ethnic voters,
whose communities bore the brunt of dry law enforcement, readily supported
repeal. When the Democratic Party persuaded Franklin Roosevelt
to run as a wet in 1932, the end of Prohibition was assured. The Democratic
landslide returned an overwhelmingly wet Congress, which
got around the state legislatures, many of which were still controlled by the
drys, by submitting the Twenty-first Amendment directly to state ratifying
conventions. The strategy worked. National Prohibition was repealed
in 1933, returning control of liquor regulation to the states.
For all of its limitations, Prohibition had a large impact on American
society, culture, and politics. The day-to-day criminal justice functions
of the federal government expanded; by 1930 the federal prison system
held more than 12,000 inmates, more than one-third of them convicted
under the Volstead Act. Prohibition was also apparently effective in curbing
the consumption of alcohol, particularly among wage earners, who were
hardest hit by the steep cost of bootleg booze and beer. Arrests for public
drunkenness dropped. Medical treatments for some alcohol-related diseases
declined. Per capita consumption of alcohol in America did not return to
pre-Prohibition levels until 1970.
But it was in its failures – actual and perceived – that Prohibition had its
greatest impact on the public life of criminal justice. By the mid-1920s, the
violence and lawlessness associated with the bootlegging industry aroused
public fears that a “crime wave” had overtaken America. A rage for crime
control dominated the politics of criminal justice for the next decade, as
self-appointed urban crime commissions, county prosecutors, and state lawmakers
launched, in the words of the editor of the New York Times, the
nation’s first “country-wide war on crime.”3 In 1926, the New York legislature
enacted crime-fighting measures collectively known as the “Baumes
laws” (after their sponsor, Republican State Senator Caleb Baumes). The
laws introduced new criminal offenses, narrowed the procedural rights of
accused criminals, and mandated that anyone convicted of a fourth felony be
sentenced to life in prison. The Baumes laws (which anticipated the “three
strikes and you’re out” laws of the 1990s) served as a model for similar legislation
in California, Michigan, and West Virginia. But they wreaked havoc
on New York’s own sentencing and parole system, helped trigger prison
riots, and added fuel to a prison population boom that caused Governor
Franklin Roosevelt to launch construction of five new state penitentiaries
(including the ill-fated Attica).
Prohibition and the war on crime were both context and cause for an
extended public debate over the progressive legacy of socialized criminal
justice. At the heart of the progressive reforms was the protean concept of
social responsibility for crime: since social conditions had a large hand in
causing criminal behavior, society bore collective responsibility for alleviating
those conditions and rehabilitating the individuals driven to commit
crime. A burst of interest in criminal jurisprudence, criminology, and
criminal justice administration during the 1920s strengthened the hold of
progressive ideas in the professional disciplines of law and social science. As
one social scientist observed in the American Bar Association Journal, “The
traditional views of human nature and conduct in which all of the older and
most of the younger men in the legal profession were brought up have been
seriously challenged for the past thirty years and all but demolished during
the past decade.”4 Lawmakers and crime-fighters, however, were operating
under an entirely different set of premises: individual responsibility,
deterrence, and social defense.
In popular culture, a traditional moral view of crime prevailed. This view
was evident in national magazines, in the unfavorable public response to the
psychiatric testimony put into evidence by Clarence Darrow in the 1924
trial of the teen murderers Leopold and Loeb, and, most conspicuously,
in the figure of the public enemy. A product of urban culture – ripped,
as Hollywood script writers liked to say, from the headlines of the big-city
newspapers – the celluloid gangsters in movies like Scarface seemed
to mock the whole notion of social responsibility. Having triumphed over
their own humble beginnings as urban immigrants, the public enemies,
like the crime commissioners their real-life counterparts often foiled, were
successful businessmen.
3 Quoted in Rebecca McLennan, “Punishment’s ‘Square Deal’: Prisoners and Their Keepers in 1920s New York,” Journal of Urban History 29 (2003), 609.
4 Nathaniel Cantor, “Law and the Social Sciences,” American Bar Association Journal 16 (1930), 387.
The riddle of responsibility was not merely theoretical. Many judges
in criminal trial courts felt compelled to address the tension between the
deterministic implications of social science knowledge, which by the 1920s
was a staple of higher education and intellectual discourse, and the formal
assumptions of individual freedom and responsibility that lay at the foundation
of the criminal law. For those who took the problem seriously, it was
an immensely complicated question. The institutional logic of American
judicial administration provided some answers.
Judicial recognition of social causation had made its greatest inroads
in the handling of the great mass of criminal offenses below the grade of
felony – cases that did not involve life-threatening violence or substantial
amounts of property. Social responsibility was also most plausible with
regard to certain classes of offenders who were assumed, even in the best
of circumstances, to lack the reason and strength of adult men. Since the
creation of the first juvenile courts in Cook County (Chicago) and Colorado
in 1899, socialized criminal justice techniques had been adopted most
widely in criminal courts that handled misdemeanor offenses or crimes
involving women or children (juvenile offenses, prostitution, desertion,
and non-support). Following the model of the Municipal Court of Chicago,
the first bureaucratic big-city court system, many local communities had
created special “socialized courts” – staffed with social workers, psychologists,
nurses, and probation officers – to handle such cases. Judges used
the personal data gathered by the court’s experts to devise individual treatments
for each offender, ranging from probation to eugenical sterilization,
designed to remove, cure, or incapacitate the root causes of their deviant
behavior.
In the administration of felony cases, which typically involved adult
men, social responsibility and individual treatment were a harder
sell. By the 1920s, judges had established a kind of working compromise in
dealing with felonies. The compromise enabled them to maintain the formal
legal concepts of individual political liberty and free will (and the related
criminal law concepts of intent and culpability) while extending some
recognition to the social and personal factors that impinged on individual
choice. Again, the compromise was derived in practice from the institutional
structure in which judges worked. During the guilt-assessment phase – the
trial itself – the old common law presumptions of free will and responsibility
would remain undiluted. In the sentencing phase, judges could legitimately
take notice of mitigating social facts, tailoring the sentence to their own
subjective calculus of social versus individual responsibility in the case
at hand. The compromise did not put the matter to rest, of course. By
the 1960s, whether to reckon crime as a social or individual problem had
become one of the defining domestic issues in American politics.
The rhetoric of the urban businessmen, criminal justice officials, and
other reformers who led the crime-control movement of the 1920s and
early 1930s may have rejected the behavioral premises that underlay much
modern social science. But the reformers were eager to harness the cultural
authority and explanatory power of social science to their purpose of producing
a more efficient system for preventing crime and punishing criminals.
The well-funded crime commissions established in major American cities
and states during the period styled themselves as non-partisan, scientific
bodies that would investigate and reform the administration of criminal
justice. In a series of well-publicized local and state “crime surveys,” the
crime commissions produced thickly documented analyses of working criminal
justice systems. The national model was the Cleveland Crime Survey
of 1922. Funded by the Cleveland Foundation, directed by professor Felix
Frankfurter and Dean Roscoe Pound of Harvard Law School, and based
on research conducted by thirty-five legal experts and social scientists, the
survey was the first scientific investigation of an entire criminal justice
system. Reformers had long suspected that urban
criminal justice was corrupted by political influence and unwarranted official
discretion. But surveys revealed the extraordinary discretion and hidden
administrative strategies that prosecutors and judges used to cut through
overwhelming caseloads. The major discovery of the Cleveland Survey, for
example, was the prevalence of botched cases and plea bargaining. The surveyors
learned that almost 60 percent of felony cases were either discharged
or reduced to less serious charges. The Cleveland Survey was followed by
similarly thorough and damning investigations of the entire state systems
in Missouri (1926) and Illinois (1929). Funded largely by private business
interests or community foundations, the social scientific studies discounted
social conditions and highlighted political influence and systemic inefficiencies
in the policing, prosecution, and punishment of crime.
In 1931, the crime survey went national, signaling one of Prohibition’s
most enduring legacies: the consolidation of crime and law enforcement
as plausible subjects of national politics and federal policy. In 1929 President
Hoover had appointed a National Commission on Law Observance
and Enforcement to report to him on the problems facing law enforcement
under the “noble experiment.” Headed by former Attorney General
George W. Wickersham and composed of prominent lawyers and legal academics,
including Pound, the Commission did confirm, as expected, that
Prohibition was widely flouted and inadequately enforced. Even so, the
Wickersham Commission argued against repeal. The Commission did much
more than study Prohibition, however. In fourteen thick reports, published
in 1931, the Commission produced the first systematic national study of
American criminal justice.
Although the Great Depression diminished their public impact, the
Wickersham Commission reports added up to a powerful indictment. Edith
Abbott, dean of the Graduate School of Social Service Administration at the
University of Chicago, contributed an exhaustive report on crime and the
foreign-born. It plainly refuted the popular belief, which had been exploited
by the eugenicist supporters of federal immigration exclusion legislation
in the early 1920s, that immigrants caused much of the nation’s crime
problem. Abbott’s data showed that the foreign-born constituted a disproportionately
small share of criminals in America. A Commission report
on prisons and parole revealed the authors’ frustration with the dramatic
local variation among the nation’s penal institutions – “an unwieldy, unorganized,
hit-or-miss system” of more than three thousand jails, prisons,
reformatories, farms, workhouses, and chain gangs.5
The Commission’s most shocking report documented the pervasive brutality
of local police forces – the frontline troops of the era’s crime war.
The provocatively titled Report on Lawlessness in Law Enforcement was written
by three civil liberties advocates recommended to the Commission by the
ACLU: Zechariah Chafee, Jr., of the Harvard Law School, and New York
lawyers Walter H. Pollak and Carl S. Stern. Copiously documented from
local investigations and the records of sixty-seven appellate court cases,
the report concluded from the “naked, ugly facts” that “the third degree –
that is, the use of physical brutality or other forms of cruelty to obtain
involuntary confessions or admissions – is widespread.” The interrogation
tactics of the New York police included “[p]unching in the face, especially
with a hard slap on the jaw; hitting with a billy; whipping with a rubber
hose; kicking in the abdomen; tightening the necktie almost up to the
choking point; squeezing the testicles.” In Chicago, police clubbed interrogation
subjects with the city phonebook, heavy enough to “stun a man
without leaving a mark.” The report teemed with examples of Southern
police beating confessions out of African Americans. In one Birmingham
case, “a confession of miscegenation was extorted by a city detective from
an aged Negro at the point of a pistol.”6 Samuel Walker wrote in his history
of the ACLU that the report “created a national sensation.” “[T]he ACLU
drafted a model statute requiring the immediate arraignment of all arrested
persons, detention by an agency other than the police, and the right of all
suspects to consult a lawyer. Eventually, the courts embraced most of these
ideas.”
5 U.S. National Commission on Law Observance and Enforcement, Report on Penal Institutions, Probation, and Parole (Washington, 1931), 5.
6 Report on Lawlessness in Law Enforcement (Washington, 1931), 6, 4, 92, 126, 70.
The contention that the local administration of criminal justice must
answer to national constitutional standards – an issue of special importance
to African Americans in the South – was beginning to have its day in court.
During the early 1930s, the U.S. Supreme Court took steps toward making
local justice institutions heed the Bill of Rights. The first two cases arose
from the Scottsboro Cases, a cause célèbre of the American left. In 1931,
nine poor black boys were falsely accused of raping two white girls. In a
single day, all nine defendants were convicted in a rural Alabama court and
sentenced to death. In Powell v. Alabama (1932), the Supreme Court ruled
that an indigent defendant charged with a capital crime had the right,
protected by the Due Process Clause of the Fourteenth Amendment, to
have an attorney appointed by the state at his trial. In Norris v. Alabama
(1935), the Court held that systematic exclusion of African Americans from
jury service violated the amendment’s Equal Protection Clause. One year
later, in Brown v. Mississippi, the Court overturned the convictions of three
African American men who had confessed to murder after being brutally
whipped. Extracting confessions by torture – the “third degree” that the
Wickersham Commission had found to be such an entrenched part of local
law enforcement – now clearly violated national standards of due process.
The rulings were a significant step in the Supreme Court’s jurisprudence,
toward the more general incorporation of Bill of Rights protections
achieved by the Warren Court. But the 1930s cases had little immediate
impact. According to Michael Klarman, “[S]outhern blacks continued to
experience nearly universal exclusion from juries, to endure beatings aimed
at coercing them into confessing crimes, and to suffer convictions for capital
offenses after sham trials in which court-appointed lawyers barely went
through the motions of providing a defense.” The limits of federal protections
for Southern African Americans were evident, too, in the NAACP’s
failed campaigns for anti-lynching legislation. The House of Representatives
passed anti-lynching bills three times – in 1922, 1937, and 1940 – but
actual or threatened filibusters by Southern Democrats killed the legislation
in the Senate. President Roosevelt, who needed Southern support for his
New Deal programs, refused to publicly support anti-lynching legislation.
The Roosevelt administration is best remembered for its economic regulations
and welfare programs; in criminal justice, the administration is
credited with laying national Prohibition to rest. Indeed, New Deal liberalism
has long been praised for stripping away, once and for all, the moral
excess of the progressive political tradition. The historical record, however,
contradicts this conventional wisdom. Even as the New Dealers helped
end Prohibition, their policies greatly expanded federal criminal justice
authority in other areas, and Congress continued to legislate morality.
It is a little-noted fact, for example, that the New Deal economic regulations,
such as the codes promulgated by the National Recovery
Administration, carried criminal penalties and were enforced in federal
trial courts. Roosevelt had established respectable crime-fighting credentials
as governor of New York, when he called for a massive program of
prison construction. As president he demanded “immediate suppression”
of crime in his 1934 State of the Union Address. His attorney general called
crime “a war that threatens the safety of our country.”7 The same New Deal
Congresses that built the modern administrative state created a passel of
new federal crimes and appropriated funds for a burgeoning federal law
enforcement bureaucracy. In 1932, following the sensational kidnapping
and murder of the son of Anne and Charles Lindbergh, Congress made kidnapping
a federal crime. Amidst heightened public fascination with rural
gangsters, Congress passed a series of laws that authorized the use of federal
law enforcement where local and state enforcement efforts consistently came
up short. The Fugitive Felon Law made it a federal crime to flee prosecution
by crossing state lines. The Interstate Theft Act gave federal authorities
jurisdiction over stolen goods worth more than $5,000 transported between
states. In 1934, with Roosevelt’s support, Congress enacted the National
Firearms Act, the first major piece of federal gun control legislation. The
statute imposed high taxes and background checks on purchases of sawed-off
shotguns, silencers, and other hardware associated with the crime war. In
1937, Congress responded to a moral panic over marijuana (a panic colored
by racist perceptions of Mexican migrant workers and urban African Americans)
by passing the Marijuana Tax Act, which set stiff penalties for the
possession or sale of marijuana.
Collectively, these statutes greatly increased the criminal jurisdiction of
the federal government and especially the Bureau of Investigation, which
was officially renamed the Federal Bureau of Investigation in 1934. Beginning
in the early 1930s, FBI director J. Edgar Hoover took control of the
federal fingerprint network and the Uniform Crime Reports system, under
which all American law enforcement agencies were required to submit data
on major crimes to the bureau. Hoover put pressure on Hollywood to make
films that glamorized the bureau’s work. The expansion and bureaucratization
of federal crime fighting extended to corrections. The 1930 Bureau of
Prisons Act gave the Federal Bureau of Prisons centralized administrative
control over federal penitentiaries, which previously were run according to
the procedures set by their local wardens. In 1934, the Bureau opened the
island prison of Alcatraz, a maximum security facility to house criminals of
the “vicious and irredeemable type.”
7 Quotes are from Samuel Walker, Popular Justice: A History of American Criminal Justice (2nd ed., New York, 1998), 160.
The federal government never seriously threatened to seize control of
criminal justice from the state and local governments. Even in the 1990s,
when federal drug policies and sentencing guidelines greatly increased the
federal prison population, local and state governments outspent the national
government on criminal justice functions by nearly six to one. In the year
2000, state felony convictions totaled roughly fourteen times the federal
count.8 Given the American constitutional framework of federalism and the
persistent distrust of centralized power, the federal government is unlikely
to eclipse the local and state role in law enforcement any time soon. But
the interwar period of the 1920s and 1930s – the era of the Red Scare,
Prohibition, the birth of the modern FBI, and New Deal crime policies –
did represent a historic departure of lasting significance. New policies, measures,
and institutions modified the long tradition of localism in American
law enforcement and launched the federal government into the business
of fighting crime. And the actions of the U.S. Supreme Court, however
tentative and ineffectual in the short term, laid important groundwork for
the creation of viable national procedural standards in American criminal
justice.
II. CRIMINAL JUSTICE IN POSTWAR AMERICA:
THE LIBERAL MOMENT
The 1960s marked the high tide of liberal optimism in American criminal
justice. Not since the Progressive era had there been such a sustained surge
of public concern, academic interest, political debate, and calls for government
action to prevent crime and reform criminal justice institutions. The
liberal agenda, which had been in the making for a decade or more, drew
on old and new sources. In its expansive social rhetoric and its concrete
proposals for penal reform, criminal justice liberalism owed a large debt to
the progressive tradition of scientism, social responsibility, and individual
treatment. In its decidedly national orientation and its aim to create a more
unified criminal justice system, the agenda built on the foundation of federal
social intervention laid during the New Deal and expanded in President
Johnson’s Great Society programs. And in its heightened concern for civil
liberties and civil rights, anticipated by the Supreme Court decisions of the
1920s and 1930s, criminal justice liberalism bent to both the grassroots
mobilizations of the civil rights movement and the Cold War imperative of
fortifying America’s international image as a beacon of liberty and equality
before the law.
8 U.S. Department of Justice, Bureau of Justice Statistics (hereafter BJS), “Key Crime and Justice Facts at a Glance,” http://www.ojp.usdoj.gov/bjs/glance.htm, accessed 8/18/2004.
Three clusters of events – an influential private reform initiative, a wave
of federal court decisions, and political skirmishes on the national stage –
defined the postwar decades in American criminal justice. The Model Penal
Code, a lawyers’ reform project launched in 1952, aimed to clarify and unify
the substantive criminal law by proposing a template for revising the fifty
state criminal codes. A landmark in criminal law theory, the Code strived to
reconcile the progressive tenets of scientism, determinism, and treatment
with a renewed concern for the formal legal principles of culpability and
deterrence. During the 1960s, the Warren Court handed down a series
of decisions that strengthened the procedural rights of accused persons,
defendants, and prisoners. This “due process revolution” imposed a new level
of constitutional uniformity on local and state criminal justice institutions.
With less fanfare, lower federal courts brought state prisons under federal
judicial oversight. In the mid-1960s rising crime rates became a pressing
subject of national politics. President Johnson appointed a blue-ribbon
commission and charged it to “deepen our understanding of the causes of
crime and of how society should respond to the challenge of the present
levels of crime.”9 The commission’s report was an eye-opening analysis of
American criminal justice and the definitive liberal statement on the causes
and cure for crime.
The Model Penal Code was commissioned by the American Law Institute,
a Philadelphia-based organization founded in 1923. The ALI project
was to get the nation’s best legal minds together and produce authoritative
“restatements” of common law principles, in areas such as torts and contracts,
and in this way to make American law more unified, rational, and
scientific. In criminal law, the ALI lawyers concluded, a restatement was
not enough. For all of the progressive innovations in the administration
of justice, there had been little effort to reform the substantive criminal
law: the hodge-podge principles of criminal liability, the catalogues of
crimes and penalties, and the rules of punishment contained in the copious
state codes and common law precedents. Under the direction of Herbert
Wechsler, a Columbia University law professor, an ALI committee labored
on the Model Penal Code from 1952 to 1962, producing thirteen drafts for
review and comment. The final result resembled an actual criminal code: its
articles, sections, and subsections glossed penal principles, defined crimes,
outlined the proper organization of a correctional system, and specified how
convicted criminals must be treated. But the Code was deceptive both in
name and appearance. Much more than a prototype for state reform, the
Code was the most important American treatise on the criminal law since
the nineteenth century.
9 President’s Commission on Law Enforcement and Administration of Justice, The Challenge of Crime in a Free Society (Washington, DC, 1967), 2.
The Code bore the unmistakable impress of the 1950s and that decade’s
distinctive postwar legal culture. The horrors of totalitarianism – whether
in its fascist, Nazi, or Stalinist guise – reverberated throughout American
intellectual life. In diverse academic disciplines, scholars turned from
inherently political questions of substance and value to matters of form,
technique, and process. Law scholars associated with the ascendant “legal
process” school drew categorical distinctions between law and politics and
between procedural and substantive justice. The legal realists of the 1920s and 1930s
had challenged those old formal dualisms with devastating intellectual
force. But times had changed. In a world visibly threatened by absolutist
regimes, legal process scholars insisted, official discretion must be restrained
through an almost religious adherence to the constitutional processes and
institutional arrangements – the neutral rules of the game – that made the
American rule of law exceptional. Herbert Wechsler, who had served as
a legal adviser at the Nuremberg trials, took this argument to its logical
extreme in his infamous critique of the Warren Court’s decisions banning
state policies of racial discrimination. The decisions, he argued, rested on
political or moral judgments, rather than “neutral principles.” Though this
position was controversial, Wechsler’s legal faith fit the times. For many
Americans (including many civil rights activists) the concept of a rule of
law, founded on formal equality and individual justice, distinguished the
United States from the USSR as the powers vied for the allegiance of nations
around the globe.
In this Cold War context, it seemed more important than ever to ensure
that American criminal justice rested on time-honored legal principles,
rather than political fiat or administrative discretion. This concern permeates
the 1952 Harvard Law Review article in which Wechsler made his
case to America’s legal elite for a model penal code. Given its immense
importance – its power to protect and to destroy – the criminal law was
in a disgraceful condition. Criminal law was a poor relation in the legal
academy and profession. State codes were mindlessly imitative and full of
uncertainty. Wechsler was especially concerned by the widening range of
criminal offenses in which the courts applied a standard of strict liability:
proving the act itself, without clear evidence of a guilty mind (mens rea),
was sufficient for penal sanction. The vagueness and harshness of state codes
encouraged judges and prosecutors to cut plea bargains with defendants.
Wechsler worried that such informal administrative techniques had “so
largely come to dominate the field,” eclipsing traditional legal concerns
like mens rea. Echoing Roscoe Pound’s famous condemnation of the growth
of the administrative process during the New Deal, Wechsler warned that
“to a large extent we have, in this important sense, abandoned law – and
this within an area where our fundamental teaching calls most strongly for
its vigorous supremacy.”10
In criminal justice institutions, the procedural problem of administrative
discretion was closely tied to the substantive riddle of criminal responsibility.
The prospect of “abandoning” the law had first been raised by
Pound’s Progressive generation, as they struggled to bring jurisprudence
and the administration of justice into line with the new scientific knowledge
of society. Central to this first encounter between law and the social
and behavioral sciences was the problem of culpability. The criminal law
presumed that people were moral free agents, and for a criminal code to
deter potential criminals one had to assume that people were rational actors,
capable of choosing to obey the law. But from the perspective of disciplines
like psychology and psychiatry, such unfettered individual free will was an
indefensible concept; human behavior, including criminal acts, was largely
determined by socioeconomic circumstances, heredity, and mental disorders.
As these disciplines rose in cultural authority, the tension between
the new common sense of educated Americans and the old common sense
presumed by the criminal law grew ever more acute. By the 1950s, the consensus
in the social and behavioral sciences was, as Wechsler put it, “that the
penal law is ineffective, inhumane, and thoroughly unscientific.” Free will
was a fiction, the penal law’s actual function “nothing more than vengeance
in disguise.” Wechsler was sympathetic to these complaints, which came,
he said, from “important groups seeking to further public interest.” So he
made it a central purpose of the code project “to explore the merits of such
criticism in the context of a reconsideration of the law.”11
The institutional challenge was how to make criminal law run in accord
with the recognition that the individual will was neither the sole cause
of crime nor the sole object of penal control, without abandoning the law
for an entirely administrative penal regime. For Pound’s Progressive generation,
the solution had been to “socialize” the criminal courts: bring
in the social experts, and make the courts run more like bureaucracies,
without entirely loosening the old common law restraints on discretion.
By the 1950s, techniques of individual treatment such as juvenile courts,
probation, and parole were almost universal in the judicial and correctional
systems of the states. Still, in the eyes of many social and behavioral
scientists of the era, the treatment programs remained a thin veneer for a
system founded on retribution. For Wechsler, the challenge was to synthesize
the scientists’ renewed calls for therapeutic treatment of offenders with
a more traditional commitment to culpability and deterrence.
10 Herbert Wechsler, “The Challenge of a Model Penal Code,” Harvard Law Review 65 (1952), 1102.
11 Wechsler, “Model Penal Code,” 1103.
The Model Penal Code appeared in 1962; official commentary filling six
volumes came out by 1985. The drafters did much to clarify state law. They
created an integrated law of theft to replace the long menu of crimes –
embezzlement, larceny, false pretenses, larceny by trick, and so forth – that
state codes had imported from the common law. The Code’s central theme,
though, was its reaffirmation of mens rea. In place of the confusing array
of terms that the common law used to define mens rea, the Code specified
“four modes of acting with respect to the material elements of offenses –
purposely, knowingly, recklessly, and negligently.” One of these had to be
present to establish criminal liability. The Code took a modest stand against
strict liability crimes, which in the regulatory environment of twentieth-century
America had grown to cover acts ranging from traffic violations to
statutory rape to felony murder. The Code insisted that penal law properly
dealt only with blameworthy behavior. Minor strict liability offenses were
not called crimes at all; they were redefined as violations, punishable by
fines only.
The eminent postwar criminal law scholar Herbert L. Packer praised the
Code as a triumph of “principled pragmatism”; the drafters had shrewdly
accommodated their reform principles to “existing institutions.”12 In fact,
it was the existing institutions that held the whole enterprise together, making
it possible for the Code drafters to reconcile their own contradictory
principles: legal authority and scientific knowledge, individual responsibility
and therapeutic treatment.
12 Herbert L. Packer, “The Model Penal Code and Beyond,” Columbia Law Review 63 (1963), 594.
Wechsler and his colleagues assumed that there was a definite societal
consensus about the proper purpose of the penal law: to prevent culpable
behavior that harmed the interests of society. They enshrined this principle
in the Code’s emphasis on individual culpability and deterrence (which
implied the moral free agency and rationality of criminals). But when it
came to dealing with convicted criminals, the Code emphasized therapeutic
treatment (which implied that offenders were abnormal individuals who
should be restored to society only after undergoing treatment to rehabilitate
the curable and incapacitate those beyond cure). This was no small
contradiction. After all, correctional treatment programs like parole and
psychiatric confinement involved the sort of broad administrative discretion
that postwar legal academics professed to abhor. Like criminal court
judges in the 1920s, the Code drafters found a solution to this problem
in the legal process. At the front end of the criminal justice system – in the
definition of specific offenses, in the judicial determinations that proved a
defendant’s guilt or innocence – the neutral standards of mens rea and individual
culpability reigned supreme. But at the back end of the system – in
the handling of convicted criminals – treatment of deviant personalities,
rather than simple retribution or punishment, best served the end of crime
control. Accordingly, the Code set generous sentencing ranges. These gave
correctional officials wide authority to determine the actual time served
according to their assessment of the offender’s personal history, character, or
mental condition. Such vast discretion was safely granted, the logic of the
Code implied, because treatment was reserved for people who had demonstrated
a high level of legal culpability.
During the next two decades thirty-four states revised their criminal
codes in ways that reflected the Model Penal Code’s influence. The Code also
sparked interest in reforming federal criminal law, which, Charles McClain
has noted, “was in a sorrier condition than that of most states.” For many
years after its publication, the Code had a strong (some would say stultifying)
influence on legal scholarship and education. Above all, the Code
helped judges, lawyers, lawmakers, and perhaps the public envision American
criminal justice as a system, founded on unified, consensual principles
that could reasonably reconcile the purposes of deterrence and treatment within a
framework of protecting society from blameworthy conduct.
Compared to the progressive reform discourse on criminal justice, however,
the Code articulated a narrowly procedural notion of justice. Even as
it reaffirmed the Progressives’ commitment to rehabilitation, the Code said
little about society’s responsibility to address structural inequalities that
caused crime. In this sense, too, the Code was a product of the postwar
years, when a moderately liberal law professor like Herbert Wechsler could
profess to find no “neutral principles” on which white racial discrimination
against blacks could be legitimately condemned by the courts. The quest
for substantive justice – racial or economic – was too political to have a
place in the neutral legal process.
All the while the legal process was changing. As the ALI Code authors
circulated their drafts, the U.S. Supreme Court, under Chief Justice Earl
Warren, embarked on one of the most consequential eras in its history. In
one landmark decision after another, the Court remapped the boundaries
of governmental power and individual rights in America. Collectively, the
Court’s decisions greatly enhanced the power of the federal government
(including the Court itself) as the guarantor of civil liberties and civil
rights. Although history has shown that these decisions were not enough to
guarantee economic, racial, or gender justice, this train of rulings gave the
phrase “equality before the law” a substance it had never before possessed
in American history.
The roots of the due process revolution reached back to the post-World
War I First Amendment decisions and extended through the 1920s and
1930s, when the Court, for the first time since Reconstruction, acted to
restrain the practices that made due process apply for whites only in Southern
courts. In 1938, the Court gave notice, in Justice Harlan Fiske Stone’s
famous Carolene Products footnote, that henceforward the Court would apply
a higher level of scrutiny to state laws that interfered with civil liberties or
civil rights. Despite these important precursors, the due process revolution
was clearly a product of the post-World War II era. The urgency came from
many sources: the recent global experiences with totalitarianism, African
Americans’ struggle for civil rights, and the Cold War imperative to square
the American creed of liberty and equality with the realities of racism and
police violence. It is no surprise that one leading edge of this revolution in
rights was the constitutional law governing criminal justice institutions,
for it was there that the coercive power of the state to destroy human liberty
was most explicit. Long-tolerated local practices like the third-degree
suddenly carried global implications.
The Supreme Court’s actions defied the long tradition of localism in criminal
justice, a tradition in which Earl Warren himself was exceptionally well
versed. From 1926 to 1938 he served as the crime-fighting district attorney
of Alameda County (Oakland), California. He prosecuted suspected radicals
under the state’s criminal syndicalism statute. During his subsequent tenure
as California attorney general, prosecutors working under him were known
to build cases on warrantless wiretaps and coerced testimony. In the decades
before President Dwight Eisenhower appointed him to the Supreme Court,
a local or state law enforcement officer like Warren could rest assured that
there were virtually no federal constitutional restraints on how he went
about enforcing the state law and very little risk that any conviction he
won would be undone by a federal court. The protections that the Bill of
Rights extended to the accused – freedom from unreasonable search and
seizure, the right to an attorney, freedom from self-incrimination, freedom
from cruel and unusual punishment – were understood (correctly or not)
to restrain only the federal government. There were important precedents
arising from the Scottsboro cases in the 1930s. Even so, the Warren Court’s
decisions were virtually unprecedented. And for many Americans, there
was something undemocratic about nine appointed federal judges, tenured
for life, striking down laws enacted by elected state lawmakers and telling
local communities how to fight crime.
Thanks partly to the journalist Anthony Lewis’s 1964 best-seller, Gideon’s
Trumpet, which explained and celebrated the achievements of legal liberalism
for a national audience, the Warren Court’s criminal justice decisions are
among the most familiar in American constitutional history. The Warren
Court effected a wholesale constitutional transformation of criminal procedure
by using the Fourteenth Amendment’s Equal Protection and Due
Process Clauses as a textual basis to “incorporate” the Bill of Rights protections
and apply them against the states. In Mapp v. Ohio (1961), the
Court applied the exclusionary rule, established in 1914 for federal cases,
against the states. No longer would evidence turned up using faulty warrants
be admissible in state cases. The Court recognized that the Fourth
Amendment’s protection against unreasonable search and seizure was virtually
meaningless if prosecutors could introduce evidence gathered by
such means. In Gideon v. Wainwright (1963), the Court applied the Sixth
Amendment’s guarantee of counsel in felony cases to the states. In Cooper
v. Pate (1964), the Court declared that state prisoners (in this case, a black
Muslim in Illinois’s Stateville prison) had a First Amendment right to free
exercise of religion. In Miranda v. Arizona (1966), the Court required local
and state police to alert criminal suspects, before interrogation, to their
Fifth Amendment right not to incriminate themselves. The Court struck
against the procedural informality of progressive criminal justice in the
case In re Gault (1967), ruling that juvenile defendants must have at least
partial procedural rights, including the rights to counsel and to confront
one’s accusers. In Furman v. Georgia (1972), the Court ruled that the death
penalty, as applied in Georgia, was arbitrary and thus violated the Eighth
Amendment’s ban on cruel and unusual punishments.
Many of the rights newly guaranteed by the Court in cases like Mapp and
Gideon were in fact already protected by the statutes or constitutions of many
states – but not all. That was what made the cases significant. Together they
imposed an unprecedented level of national constitutional uniformity on
the practices of local and state police, criminal courts, prisons, and jails.
As with many Supreme Court decisions, the Court’s decisions protecting
the rights of defendants and the accused depended to a large degree on the
willingness of public officials to abide by them – and such cooperation was
often refused. But there is no question that the cases opened up new protections
for individuals and imposed a new level of constitutional uniformity
on the states. Like the Model Penal Code, the due process revolution helped
turn American criminal justice institutions into something more closely
resembling a national system.
The lower federal courts were also taking action to impose new norms
of liberal constitutionalism on state and local criminal justice institutions.
Before 1965, no federal court had ever presumed to tell a state prison or
local jail to reform its practices or improve its conditions. The “hands-off
doctrine” prevailed. As one federal appeals court put it, “The Government
of the United States is not concerned with, nor has it power to control or
regulate the internal discipline of the penal institutions of its constituent
states.”13 Meddling by a federal court in the internal affairs of a state prison,
judges had long reasoned, would be tantamount to making prison policy,
violating long-settled principles of federalism, separation of powers, and
the rule of law itself.
The old constitutional restraints began to unravel in 1965. The U.S.
District Court for the Eastern District of Arkansas ruled that conditions at
the Cummins Farm State Prison violated the Eighth Amendment’s prohibition
on cruel and unusual punishment. With that decision, the federal
district courts and appellate courts asserted jurisdiction over state prisons
and local jails. Individual prisoners and prisoners’ rights organizations
filed a steady stream of suits. The ensuing train of federal prison cases far
outlasted the Warren Court. By 1995, write Malcolm Feeley and Edward
Rubin in their exhaustive study, “The ACLU estimated that prisons in a
total of forty-one states, as well as the District of Columbia, Puerto Rico, and
the Virgin Islands, had at one time or another been under comprehensive
court orders, as had the entire correctional systems of at least ten states.”
Federal courts told state officials how large cells must be, how often a prisoner
must be allowed to shower, even what nutritional value prison meals
must have.
This thirty-year wave of “judicial policymaking,” as Feeley and Rubin
have shown, reflected a broader transformation in the fabric of American
governance during the late twentieth century. The hundreds of individual
federal judges who participated in these discrete prison decisions were
not radicals; they were “middle-of-the-road, upper-middle-class Americans,
largely white and male, appointed by Republican and Democratic presidents.”
Their decisions were not guided from above by the Supreme Court.
Their institutional base – federal courts – spanned the nation. Decades of
institutional and intellectual change had made federal judges willing to lay
the hands-off doctrine to rest. The growth and apparent permanence of the
administrative state had eroded the “conceptual power” of the formal legal
doctrines – dual federalism, separation of powers, and the rule of law ideal –
that had once justified the hands-off doctrine.
In retrospect, federal judges and liberal Democratic politicians picked a
tough time to pursue sweeping institutional reforms and policies that critics
could plausibly denounce as soft on crime. By the mid-1960s, the nation
was in the throes of an extended, historically exceptional surge in reported
crimes.
13 Siegel v. Ragen, 180 F.2d 785, 788 (1950).
As early as the 1964 presidential election, the Republican candidate,
Senator Barry Goldwater, issued a stern warning about escalating “violence
in our streets.”14 Goldwater cited the FBI’s Uniform Crime Reports, data-rich
publications whose very existence invited Americans to think of crime as a
national problem that Congress and the president must address. Reported
crimes in the United States rose from 1,861,000 in 1960 to 2,780,000 in
1965 to 5,568,000 in 1970. By 1975, that figure doubled again. In popular
culture, the crime fears of the moment crystallized in the racial stereotype
of the young black male “mugger.” Goldwater’s speech was a turning point.
Explaining the crime epidemic and identifying the true culprits were now
major issues in national politics, for liberals and conservatives alike.
Even if Goldwater had not raised the issue during the 1964 campaign,
it seems unlikely that President Johnson could have avoided addressing
the surging crime rates or the race riots – typically triggered by an incident
with the police – that took place in more than forty cities between
1964 and 1967. Still, Johnson went much further than he had to. In 1965,
he became the first president to address Congress on crime, announcing,
“Crime is no longer merely a local problem.” Like Hoover before him,
Johnson created a crime commission, the President’s Commission on Law
Enforcement and the Administration of Justice. Chaired by Attorney General
Nicholas Katzenbach and directed by Harvard law professor James
Vorenberg, it conducted the most extensive survey of American criminal
justice since the Wickersham Commission. The sixty-three-member staff
included police officers, sociologists, correctional personnel, prosecutors,
lawyers, and psychologists. They took ride-alongs with city police, visited
courtrooms, and toured urban slums. The Commission’s 1967 report, The
Challenge of Crime in a Free Society, was a powerful statement of Great Society
liberalism. It called for a massive federal government effort to reform criminal
justice institutions and to fight crime by reducing poverty and racial
discrimination.
It says something about how far American politics has traveled in the past
three and a half decades that the report’s language now sounds so radical:
America must translate its well-founded alarm about crime into social action that
will prevent crime. [The Commission] has no doubt whatever that the most significant
action that can be taken against crime is action designed to eliminate slums
and ghettos, to improve education, to provide jobs, to make sure that every American
is given the opportunities and the freedoms that will enable him to assume
his responsibilities.15
14 Barry Goldwater’s Acceptance Speech at the Twenty-Eighth Republican National Convention,
1964, at http://www.washingtonpost.com, accessed 7/29/03.
15 President’s Commission, Challenge of Crime, 15.
The report attributed the escalating crime rates to urban poverty, institutional
racism, and the economic process of deindustrialization, which
drained industrial jobs from the cities just as the baby boomer generation
entered their late teen years. Bristling with more than two hundred recommendations,
the report called for an extensive federal effort to finance
and coordinate local and state law enforcement. The commission urged that
prisons be reserved for the most dangerous offenders; the rest should receive
treatment in the community. At least for its supporters, the report served as
a powerful argument for expanding the Great Society’s poverty programs.
In February 1967, shortly after the report’s release, Johnson sent Congress
legislation to provide major federal funding in direct grants to the cities and
states to start implementing the commission’s recommendations. Congress
debated the act in 1968. It was a year of riots, demonstrations against
Johnson’s faltering war in Vietnam, and a presidential campaign in which
George Wallace and Richard Nixon appealed to voters with promises of
“law and order.” Congress eventually passed an Omnibus Crime Control
and Safe Streets Act, which Johnson signed with serious misgivings. The
law contained several provisions offensive to the administration: it gave
law enforcement officials enlarged powers to engage in wiretapping and
included other provisions aimed at limiting the reach of the Warren Court’s due process
decisions. The statute also required that federal monies be distributed
to the states in block grants – rather than direct grants – giving states
wide discretion over how the money would be spent. But the statute
also launched the Law Enforcement Assistance Administration (LEAA), an
agency within the Justice Department that would administer federal grants
to state and local law enforcement agencies, educational institutions, and
private organizations.
Weakened by Vietnam, Johnson did not seek his party’s nomination in
1968. The Republican Platform repudiated the Crime Commission report:
“We must re-establish the principle that men are accountable for what they
do, that criminals are responsible for their crimes, that while the youth’s
environment may help to explain the man’s crime, it does not excuse that
crime.” Nixon appealed to die-hard Republicans and white working-class
voters with his demands for “law and order,” and he denounced the Supreme
Court’s due process decisions for handcuffing the police.
Johnson and Nixon had one thing in common, though. Both invited rising
public expectations that the federal government must fight crime. During
Nixon’s first term, the federal law enforcement budget tripled; aid to the
states exploded from $60 million to nearly $700 million. Since 1968 crime
has become a seemingly permanent addition to national political discourse,
and American criminal justice has become increasingly punitive. Yet, liberalism
survived as an institutional influence in American criminal justice
long after 1968. Rehabilitation programs and community-based services
expanded. From 1965 to 1975, the number of adults on probation rose from
144,000 to 923,000; the parole population grew from 63,000 to 156,000.
The new Law Enforcement Assistance Administration also sustained a liberal
perspective on crime causation by bankrolling social science research.
Liberalism survived in criminal justice because legal activists, politicians,
and litigants helped keep it alive. It persisted also because many of the
achievements of liberal reform – the nationalization of criminal justice,
the constitutionalization of criminal procedure, federal judicial oversight
of prisons – were grounded in broader changes in American governance.
On the flip side, America’s second “war on crime,” just getting underway
during Nixon’s presidency, would have been unthinkable before the 1960s,
when crime became a national issue. Unfortunately, this unintended legacy
of the liberal moment outlived many of its other achievements. America’s
second war on crime made the first look like a schoolyard scuffle.
III. THE SEVERITY REVOLUTION
Whether we ought to reckon crime as a matter of personal choice or social
conditions may be an ultimately irresolvable question. But criminal justice
is decidedly the product of human political decisions. In a representative
democracy, the people share responsibility with their political leaders for
how crimes are defined, communities policed, and criminals punished. This
is especially true in the United States. The nation’s exceptionally decentralized
government institutions and fiercely competitive party politics render
criminal justice policymaking at the local, state, and federal levels particularly
vulnerable to popular pressure, media representations, interest group
demands, and the demagoguery of ambitious politicians.
In the last quarter of the twentieth century, as a rising tide of conservatism
transformed American politics, government officials responded aggressively
to rising public concerns about crime. Collectively, their innumerable policy
choices, made in the nation’s thousands of legislative, judicial, and administrative
arenas, launched what Albert Alschuler called a “severity revolution”
in criminal justice. As lawmakers rolled out “get tough” measures –
stiff mandatory sentences for drug offenders, huge appropriations for prison
construction, and a revival of chain gangs and public shaming – the widening
color gap behind bars demonstrated the enduring significance of racial
inequality in a nation premised on equal justice for all. The number of people
of color, particularly young African American men, under state restraint
and police surveillance vastly exceeded their proportion of the general population.
By century’s end, the world’s most powerful liberal democracy
incarcerated nearly two million of its members in the West’s harshest penal
regime. The vast U.S. penal system – the last in the industrialized West
to retain capital punishment – drew uncomfortable (and, by some measures,
unfavorable) comparisons to apartheid-era South Africa, Communist
China, and Taliban-ruled Afghanistan. A headline in the British Economist
expressed the growing disapproval of America’s Western allies: “Crime in
America: violent and irrational – and that’s just the policy.”16
America’s severity revolution reflected and reinforced the dramatic social,
economic, and political changes underway in the nation from 1970 to 2000.
The historic achievements of the civil rights movement were undermined
by widening economic inequalities and new forms of racial politics. Deindustrialization
of the Northern cities – the movement of manufacturing
jobs from old industrial centers like Detroit to suburbs, the Sunbelt, and
increasingly offshore – tightened urban job markets. African Americans,
who migrated to Northern manufacturing centers by the tens of thousands
during and after World War II, disproportionately suffered as the
industrial job base in those communities shrunk. The suburbanization of
America during the postwar decades encouraged a political realignment
in the nation. Predominantly white suburbanites of both parties rallied to
protect their property values and children from all threats, especially crime
and the perceived threat of state-enforced school integration. The shifting
suburban political base fostered the rise of a new conservatism in American
politics that gave the Reagan and Bush Republicans control of Washington
during the 1980s. Conservatism transformed Democratic politics too,
culminating in the election of “New Democrat” Bill Clinton on campaign
promises of free trade, welfare reform, and tough crime policies. For many
Americans, the collapse of the USSR provided powerful confirmation for
a set of assumptions widely shared on both sides of the political aisle by
Clinton’s election. “Big government” welfare programs and regulatory policies
must be scaled back. Many of the government’s social functions could
be better managed by private firms and charitable associations. The old
“liberal” ethics of social responsibility needed to be updated or discarded
altogether.
Born of a political movement against big government, America’s second
war on crime triggered the greatest build-up in the coercive power of the
state in the nation’s history. During the last quarter of the twentieth century
a population explosion of Malthusian proportions took place in America.
It happened behind bars. According to data compiled by the Bureau of
Justice Statistics, a branch of the U.S. Department of Justice, in 1980, the
total population of jails and prisons in the United States stood at 504,000
persons. By 1990, that population more than doubled, reaching 1,149,000.
16 Economist, June 8, 1996, 23–25.
By 2000, it had climbed to 1,937,000.17 By contrast, in 2000 the European
Union – which had a population of some 370 million, compared with 274
million Americans – incarcerated about 300,000 people.18 The raw numbers
cannot fail to impress: Imagine if the general population doubled every
decade! But a more meaningful measure is the incarceration rate: how the
swelling ranks of the incarcerated tracked with the nation’s overall population
growth. For this the Bureau of Justice Statistics offers up a different
indicator: the number of “sentenced inmates under State and Federal jurisdiction”
per 100,000 U.S. residents. That indicator climbed sharply from
139 in 1980 to 297 in 1990 to 478 in 2000, when the incarceration rate
showed signs of leveling off.19 “America’s per capita incarceration is now
the highest in the world,” James Whitman noted in 2003, “approaching,
and in some regions exceeding, ten times the rate in Western Europe.”
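For readers who want to trace the arithmetic behind such an indicator, the brief Python sketch below shows how a per-100,000 rate is computed. The sentenced-inmate counts and resident populations it uses are illustrative assumptions, back-derived to reproduce the rates quoted above; they are not figures taken from the BJS tables cited here.

def incarceration_rate(sentenced_inmates, residents):
    # Sentenced inmates under state and federal jurisdiction per 100,000 residents.
    return sentenced_inmates / residents * 100_000

# Assumed, illustrative figures chosen so the computed rates match the text.
assumed = {
    1980: (315_000, 226_500_000),    # yields ~139 per 100,000
    1990: (740_000, 249_000_000),    # yields ~297 per 100,000
    2000: (1_320_000, 276_000_000),  # yields ~478 per 100,000
}

for year, (inmates, residents) in sorted(assumed.items()):
    print(year, round(incarceration_rate(inmates, residents)))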
Plummeting crime rates failed to slow the severity revolution. From
1991 to 2000, America’s homicide rate fell 44 percent, burglaries dropped
42 percent, and robberies declined 47 percent. Criminologists called it “the
Crime Drop.” Politicians and law enforcement officials rushed to take credit
for the good news; surely tougher policing and stiff sentences had deterred or
incapacitated criminals. Criminologists were not so sure. Their hypotheses
revealed how complex and divisive the study of crime causation had become.
Some experts attributed the Crime Drop to the aging of the population,
others to law enforcement strategies such as community policing initiatives
and crackdowns on “lifestyle” offenses, others to the decade’s economic
boom, and still others to the decline of the “crack” cocaine trade. The most
controversial theory (because of its eugenicist implications) chalked it up
to Roe v. Wade. The availability of legal abortion, researchers suggested,
prevented the births of thousands of unwanted, impoverished children,
whose diminished life chances would have put them at risk for careers
of crime. In the midst of all of this speculation, a few experts conceded
that perhaps the mystery of the Great American Crime Drop could not be
solved.
How could the incarceration rate continue to rise if reported crime was
falling?
A simple answer: Convicts were spending more time in prison. At the
heart of the severity revolution was a legislative movement for longer, determinate
sentences. From the beginning this movement was tightly linked to
the War on Drugs, which started in the waning years of the Vietnam War.
17 BJS, “Correctional Populations,” http://www.ojp.usdoj.gov/, accessed June 7, 2004.
18 Irwin et al., “Nonviolent Prisoners.”
19 BJS, “Incarceration Rate, 1980–2002,” http://www.ojp.usdoj.gov/, accessed June 7,
2004.
In 1973, Governor Nelson Rockefeller of New York pushed through his
famous anti-narcotics law, which curtailed judicial discretion and imposed
hefty mandatory sentences for even relatively small offenses. In the 1980s,
federal and state lawmakers bound judges to mandatory minimum sentences
for an ever-widening range of offenses. As crime levels dropped during the
1990s, Congress and many state legislatures further flexed their muscles,
mandating “enhanced” sentences for drug dealing, use of a firearm during a
crime, and, most notoriously, for a third felony conviction (under the “three
strikes and you’re out” laws).
Drug offenses accounted for much of the rising prisoner head count. Consider
another Bureau of Justice Statistics indicator: the number of persons
in the custody of state correctional systems, classified by the most serious
offense they committed. By this measure, in 1980 the states had custody of
19,000 drug offenders; in 1990, that figure hit 148,600; and by 2000, it
reached 251,100 – more than thirteen times the 1980 figure. By comparison,
during the same twenty-year period the number of violent offenders
in state custody grew from 173,300 to 589,100 (3.4 times); the number of
property offenders rose from 89,300 to 238,500 (2.7 times); and the number
of public order offenders climbed from 12,400 to 124,600 (10 times).20 The
rising tide of drug and public order offenders behind bars has had a curious
cumulative effect. As a recent report observed, while the public worries
most about violent crime, “[m]ost of the growth in America’s prisons since
1978 is accounted for by nonviolent offenders.”21
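The growth multiples given in this paragraph follow directly from the custody counts quoted in the text; a minimal Python sketch that checks them:

# Custody counts by most serious offense, 1980 and 2000, as quoted above.
custody_by_offense = {
    "drug": (19_000, 251_100),
    "violent": (173_300, 589_100),
    "property": (89_300, 238_500),
    "public order": (12_400, 124_600),
}

for offense, (in_1980, in_2000) in custody_by_offense.items():
    print(f"{offense}: {in_2000 / in_1980:.1f} times the 1980 figure")
# Prints roughly 13.2, 3.4, 2.7, and 10.0 – the multiples reported above.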
America’s severity revolution hit hardest in metropolitan minority communities.
Nearly half of all people incarcerated in the 1990s were African
American, though blacks comprised only 13 percent of the population.22
On any given day, nearly a quarter of all African American men in their
twenties were “under some form of criminal restraint – prison, jail, probation
or parole.” Although American criminal justice has a long, violent
history of racism, the glaring racial disparity in the nation’s prison systems
intensified during the twentieth century. In the 1930s, when the FBI
first began compiling Uniform Crime Reports, 75 percent of the people sentenced
to state and federal prisons were white, in rough proportion to the
demographic composition of the United States. By 2000, racial minorities
accounted for 70 percent of new prison admissions and more than half of
all American prisoners. The contrast in per capita prison admissions by
population group was stark. In 1996, American prisons held 193 white
Americans per 100,000 whites, 688 Hispanics per 100,000 Hispanics,
and 1,571 African Americans per 100,000 African Americans.
20 BJS, “Number of Persons in Custody of State Correctional Authorities by Most Serious Offense, 1980–2000,” http://www.ojp.usdoj.gov/, accessed June 8, 2004.
21 Irwin et al., “Nonviolent Prisoners,” 135.
22 “More Than Any Other Democracy,” Economist, March 20, 1999, 30–31.
The glaring
eight-to-one disparity between rates of incarceration for blacks and whites
was to a great extent due to the war on drugs.23
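The “eight-to-one disparity” can be confirmed from the 1996 per capita admission figures quoted above, as the following Python sketch shows; the rates are those given in the text.

# Prison admissions per 100,000 members of each group, 1996, from the text.
admissions_per_100k = {
    "white": 193,
    "Hispanic": 688,
    "African American": 1_571,
}

white_rate = admissions_per_100k["white"]
for group, rate in admissions_per_100k.items():
    print(f"{group}: {rate} per 100,000 ({rate / white_rate:.1f} times the white rate)")
# African American vs. white: 1,571 / 193 is about 8.1 – the eight-to-one disparity.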
In an era of fiscal conservatism, America’s severity revolution has been
enormously expensive. In 1982, local, state, and federal governments spent
a total of $35.9 billion on criminal justice functions. By 1990, that figure
had more than doubled. By 1999, Americans were spending $146.6 billion
per year in their war on crime.24
Politicians had good reason to conclude this was the sort of criminal justice
regime Americans wanted. Since 1985, the Bureau of Justice Statistics
has been tracking American attitudes about crime and criminal justice. As
the criminal justice system got tougher and tougher during the next decade,
roughly 85 percent of Americans interviewed told pollsters that their local
courts treated criminals “not harshly enough.” Party affiliation made little
difference in how Americans graded the courts’ severity; neither did race.
As to capital punishment, the same polling data indicated how far public
opinion has moved since the 1960s. In 1965, only 38 percent of Americans
said they “believe[d] in” the death penalty, compared to 47 percent who
were opposed. (The remaining 15 percent were “not sure” or “refused” to
answer.) By 1976, 67 percent of Americans were believers, with 25 percent
opposed (and 8 percent not sure/refused). Support continued to climb. By
1997, 75 percent of Americans supported the death penalty, against only
22 percent opposed (with only 3 percent not taking a position).25
All of which helps explain why the sort of liberal concern for social
root causes associated with President Johnson’s Crime Commission’s 1967
report all but vanished from the political discourse of either of the nation’s
two major parties. A telling example was the way Democratic presidential
candidate William Clinton, then governor of Arkansas, established his
tough-on-crime credentials during the 1992 campaign. In a well-publicized
move, Clinton returned to Arkansas shortly before the New Hampshire
primary to affirm the execution of Ricky Ray Rector, an African American
man convicted of murdering a police officer. The Rector case was controversial
because Rector was severely brain damaged; in an attempted suicide
following his crime, he had shot himself in the head.
23 Irwin et al., “Nonviolent Prisoners.”
24 BJS, “Direct Expenditure by Level of Government, 1982–2001,” http://www.ojp.usdoj.gov/, accessed June 8, 2004.
25 In 1994, for example, 88 percent of Republican respondents, compared with 85 percent of Democrats, said the courts weren’t harsh enough. See BJS, Sourcebook of Criminal Justice Statistics 2002, pages 140, 141, 143, http://www.albany.edu/sourcebook, accessed June 7, 2004.
After Rector’s last
meal, a guard asked him why he had not eaten his dessert. Rector reportedly
answered that he was saving it for later. He appeared to have no idea
that he was headed to his own execution. Had Clinton halted Rector’s execution
he might have appeared soft on crime – a potentially fatal reputation
for any ambitious politician to have in late twentieth-century America.
The severity revolution also registered in a profound ideological shift.
Public concern about the social causes of crime did not vanish, but there
was rapidly diminishing support for a criminal justice system that aimed
to recognize and rehabilitate offenders as distinct individuals. Both the
treatmentist consensus expressed in the Model Penal Code and the liberal,
sociological perspective on crime reflected in the 1967 Crime Commission
Report lost their currency in political discourse and public action. Emphasis
shifted from treating the criminal to punishing the crime; as rehabilitation
declined, retributivism and deterrence came back with a, well, vengeance.
These developments had support from the political right, but also from civil
liberties advocates on the left, who had long argued that the progressive
ideal of individual treatment had too little regard for either the autonomy
of the individual or due process of law.
The retreat from rehabilitation transformed criminal justice. After 1970,
the states imposed new limits on (or, in some cases, abolished altogether)
the institutions of probation, the indeterminate sentence, and parole. When
lawmakers established mandatory minimum sentences for specific crimes,
they aimed both to deter crime with the threat of more severe penalties and
to eliminate judges’ discretion to consider offenders’ personal background
(other than criminal record). (In the process, the new sentencing statutes
effectively shifted much of the old sentencing discretion to plea-bargaining
prosecutors.) Prison rehabilitation programs, including prisoner education,
suffered from declining resources and support. Between 1984 and 1997
nearly thirty states built “supermax” prisons, in which inmates typically
spent 23 hours a day alone in their cells and there was little pretense of
reforming anybody.
The juvenile court was the greatest institutional casualty in this backlash.
The idea that malleable young offenders were entitled to judicial paternalism
and therapeutic intervention in a court of their own, rather than an
adversarial trial and punishment as fully competent adults, was one of the
most successful products of the Progressive era emphasis on social responsibility
for crime. Since the 1960s the institution had weathered criticism
from the right (for being too lenient on young criminals) and the left (for
subjecting vulnerable young offenders to the arbitrary discretion of judges).
But the most serious assault on juvenile justice did not occur until the late
1980s and early 1990s. America was hit by a wave of juvenile violent crime.
Between 1985 and 1991, homicides committed by boys aged fifteen to nineteen
jumped 154 percent. As criminologists warned of a rising generation
of young male “superpredators,” the states “got tough.” Most states enacted
automatic transfer laws. Juveniles charged with any one of a growing list
of felonies – ranging from murder to car-jacking to dealing drugs near a
school – were transferred automatically to an adult criminal court. When
advocates marked the hundredth anniversary of the Cook County Juvenile
Court in 1999, the institution there and across the nation had lost much of
its public support and many of its young wards.
Other innovations in criminal justice heralded a new penal communitarianism:
“victims’ rights” laws, the revival of public shaming punishments
in local communities, and sex offender notification laws. The notification
legislation was called “Megan’s Law,” in memory of a seven-year-old New
Jersey girl raped and murdered in 1994 by “a twice-convicted sex offender
who lived across the street.” A model notification statute was enacted by
Congress in 1996 and then adopted, with modifications, by all fifty states.
The statutes required sex offenders to register with the police, who then notified
the public. The requirements affected some 386,000 past offenders –
46,000 in California alone. Many states extended their registration requirements
to people convicted before the notification laws took effect, and many
states posted their sex offender registries on the Internet. Despite their popularity,
Megan’s laws were a nightmare for civil libertarians, who insisted
that applying such a law to people convicted before its passage violated
constitutional protections against double jeopardy and ex post facto laws.
But the U.S. Supreme Court upheld Alaska’s notification law against
such challenges, declaring that the statute’s registration and community
notification requirements created a “civil, nonpunitive regime.” The Court
used similar reasoning to uphold other controversial practices, including
the seizure of drug dealers’ property. The Court insisted that such measures
constituted civil remedies, not criminal penalties, and were therefore
immune from ex post facto and double jeopardy claims.26
Criminal justice cases contributed to the rise of conservative constitutionalism
in late twentieth-century America. The U.S. Supreme Court took
steps to scale back the due process revolution. In Gregg v. Georgia (1976),
the Supreme Court cleared the constitutional hurdles to the death penalty,
leading to its reinstatement and rapid spread in the states. In 1999, ninety-eight
people were executed in the United States, the largest number since
1951; as of the year 2000, 3,601 Americans awaited the ultimate penalty
on death row.27
26 Smith v. Doe, 538 U.S. 84, 96 (2003). Linda Greenhouse, “Justices Reject Challenges to Megan’s Laws,” New York Times, March 6, 2003, 29.
In numerous other decisions, the Court proved a major
player in the severity revolution, mostly by restricting individual rights.
The Court created new exceptions to the exclusionary rule and the Miranda
warning requirement,28 upheld the constitutionality of preventive detention
laws (“tough” statutes that gave judges greater authority to refuse
bail to defendants believed to be dangerous),29 upheld California’s model
“Three Strikes and You’re Out” law,30 and handed down many opinions
limiting death row appeals. The Warren Court had given federal district
court judges wide authority to reform state institutions, including criminal
justice institutions, in order to protect civil rights. The Rehnquist Court
scaled back that authority. In 1996, Congress acted to curtail the prisoner
litigation that had placed so many state prison systems and local jails under
the effective control of federal judges. In two separate statutes, Congress
curtailed prisoners’ habeas corpus suits and limited the authority of federal
district courts to interfere with the operations of state prison systems. The
actions of the Court and the Congress seemed in synch with popular attitudes
toward prisoners’ constitutional rights during a period when states
passed or strengthened laws to disenfranchise convicted felons.
Few artifacts reveal so much about the changing character of American
liberalism at the twilight of the twentieth century as the private prison.
The idea that a liberal regime might contract out its monopoly on the legitimate
exercise of violence to profit-seeking manufacturing firms dates back
at least to Jeremy Bentham’s eighteenth-century penological manifesto,
“Panopticon.” But in America the late nineteenth-century labor movement
had fought, with great success, for the curtailment of convict leasing and
other forms of private convict labor. And from the Progressive era through
the New Deal and beyond, the sphere of governmental action had expanded
to include many areas of social life once largely left to the market or to private
initiative. The rising conservatism of the late twentieth century aimed
to roll back the sphere of public action and social responsibility, and a whole
host of public functions and responsibilities were dismantled or contracted
out to private firms.
Private corporations entered the field of prison and jail management
during the 1980s, as lawmakers lauded the superior efficiency of private
enterprise and the prison population explosion placed enormous stress on
the penal infrastructure.
27 BJS, “Prisoners on Death Row,” http://www.ojp.usdoj.gov, accessed June 11, 2004. BJS, “Executions,” http://www.ojp.usdoj.gov, accessed June 11, 2004.
28 U.S. v. Leon, 468 U.S. 897 (1984). New York v. Quarles, 467 U.S. 649 (1984).
29 U.S. v. Salerno, 481 U.S. 739 (1987).
30 Ewing v. California, 538 U.S. 11 (2003). Lockyer v. Andrade, 538 U.S. 63 (2003).
By 2000, private detention facilities held more
than 87,000 state and federal prisoners – more than 6 percent of the total.
Shares of prison corporation stock traded freely on Wall Street. For critics not
wholly persuaded by the neo-liberal promise of market efficiencies, there was
something deeply disturbing, and perhaps unconstitutional, about statutes
that delegated such vital government functions to profit-seeking firms. But
others argued that private prisons offered a cheaper alternative to state-owned
prisons. Moreover, private prison operators had to answer to stockholders
as well as the governments with which they contracted, and they
were liable for torts and civil rights violations.
Privatization of the power to punish has not been limited to the ownership
and management of prisons. The private market in prison labor has been
reinvented for a post-industrial, globalizing economy. In 1979, Congress
opened the gates when it effectively repealed its 1929 ban on interstate
commerce in goods manufactured in prisons. More than thirty states have
since passed laws authorizing private businesses to employ convict laborers,
who now do everything from telemarketing to making computer parts.
To date, private firms have gained control over only a small portion of
the American prison system. But like welfare reform, prison privatization
speaks to a broader theme in recent American history: the diminishing sense
of public responsibility for the nation’s most marginalized populations.
One area of policy innovation in criminal justice seemed to push back
against the severity trend. In 1989, the first American “drug court” – a
therapeutic court for drug offenders – was established in Dade County,
Florida. America had reached a stalemate in the drug war. Narcotics cases
put enormous pressure on criminal court dockets. Judges chafed under the
statutory regimes that gave them little sentencing discretion. And recidivism
rates indicated it was time for fresh thinking about drug policy. Blending
judicial paternalism with therapeutic intervention, the new specialized
tribunals resembled the “socialized” criminal courts of the early twentieth
century. To become a “client” of a drug court and avoid a normal criminal
trial, narcotics offenders had to accept the basic contract of the institution.
In exchange for participating in an intensive regimen of court-supervised
treatment – drug treatment, counseling, twelve-step programs, urinalysis
testing, and regular appearances in court – the offender stayed out of jail and
might eventually have his or her charges dismissed. Backsliding or noncompliance
triggered sanctions, including short periods in jail. Supported by
$40 million in Clinton Administration seed money, the institution spread
rapidly. By 2003, nearly eleven hundred drug courts were up and running
with four hundred more in the pipeline. The institutional success of the
drug courts has provided a model for the creation of other specialized
“problem-solving courts” at the local level to deal with domestic violence
cases, mental health cases, and other cases where judges seek the assistance
of social service workers and therapeutic experts to end a cycle of crime or
violence.
At first glance, the drug courts and other problem-solving courts seem a
curious liberal throwback in a conservative age. In fact, the tribunals appear
to have spread so rapidly because there is something for everyone in the
drug court model. Conservatives find merit in the courts’ stern emphasis
on personal responsibility. Liberals applaud the courts’ basic recognition
that drug addiction is not only a criminal act but a disease with social and
personal root causes. For all of their limitations, these powerful new courts
have created a space that had been lacking in the late twentieth-century
American criminal justice system: a space where policymakers, judges, and
the public can seriously consider alternatives to the relentless incarceration
of the past quarter-century.
CONCLUSION
Nothing about the current prison crisis in the United States was foreordained.
What politics wrought, politics might undo. At the turn of the
twenty-first century, after several years of declining crime rates, many Americans
seemed ready to rethink the wisdom of mass incarceration. Proliferating
drug courts and other “problem-solving” criminal courts even had
some hallmarks of a progressive-liberal revival. It seemed a promising way
to launch a revival – through a practical rethinking of the purpose and practices
of criminal justice institutions. Unfortunately, just as these reforms
were getting underway, world events launched a new, largely unprecedented
phase of American crime control. The “war against terror,” whose history is
just now unfolding, has already raised a host of new questions about national
security, federalism, and civil liberties in the world’s most powerful – and,
in many respects, its most punitive – nation.
During the course of the twentieth century, Americans had a remarkable
series of conversations about the changing nature of criminal responsibility,
the purposes of criminal justice, and the related problem of social inequality
in a liberal democracy. If in the twenty-first century the United States is
to arrive at a more just and effective system for defining crime and dealing
with offenders, those conversations must continue. And they must be, to a
very large extent, conversations about history.
7
law and medicine
leslie j. reagan
Both law and medicine possess considerable social significance and power.
The two professions and their institutions, practices, and ethics speak to and
engage each other continuously. Interestingly, however, “law and medicine”
is an underdeveloped field of history. No doubt the relative inattention that
law and medicine have received from historians is related to the way in
which the fields of legal history and medical history initially developed.
Both grew out of the professions themselves and within law and medical
schools, each producing an emphasis on a single profession, its interests,
activities, and heroes. Medical jurisprudence, a specialized product of two
professions with specialized knowledge and practitioners, provided a point
of intersection. The history of medical jurisprudence includes the intellectual
relationship between the legal and medical professions around specific
scientific and medical questions that arose in the legal arena, as well as
the professional relationship between physicians and attorneys (especially
regarding malpractice). Yet, the traditional subjects of medical jurisprudence
are only part of the history of medicine, law, and society.
Here, rather than sticking to a narrow formulation of the legal history
of medicine focused on medical jurisprudence, I expand the definition of
the field and recast it to include public health, health-related legislation,
and the regulatory apparatuses of administrative law. An enlarged field
of analysis allows us to examine public health and its relationship to the
state and to criminal law and then to take those insights and look again at
individual medical practices. Analysis across areas of law and medicine typically
thought of as separate makes visible links that are otherwise concealed
and presumed nonexistent. In particular, the ways in which medicine has
become a key component of state systems of surveillance in the twentieth
century, as well as the ways in which that role has been contested, become
apparent. What became customary practices in public health were transferred
to individual clinical practices and hospital policy in order to assist
the state in its criminal justice investigations. As the police powers of public
health moved into the clinical arena and assisted in criminal investigations,
the questions of civil liberties and constitutional rights that public health
traditionally raised became more widespread and acute.
When we bring medical jurisprudence and public health law together,
it becomes evident that the public’s use of private litigation to resist the
power and practices of health authorities and medical practitioners is not a
recent phenomenon but a continuation of enduring individual and collective
struggles for recognition of bodily integrity, patient autonomy, and due
process rights. When social movements were lacking or legislative concern
weak, the courts were the only system available for regulating medical and
public health power and practices; at various moments, private litigation
won improvements in medical practice, public health, and safety. This use of
the legal system underlines the ways in which ordinary Americans insisted
on rights as individual patients and citizens (often with the support of
their peers on juries). Those individual cases provided the path for later
“consumer” rights as patients in hospitals and clinics.
Although jokes and cocktail conversation suggest an enmity between the
legal and medical professions, in fact, they enjoy a long history of mutual
respect. Historically, the judiciary and lawmakers granted the medical profession
a great deal of autonomy and deferred to physicians’ judgment
and rights in medico-legal matters. For most of the nineteenth century, the
American medical professions were free of licensing requirements. Although
elite, highly educated medical men resented this state of affairs – the American
Medical Association (AMA), formed in 1847, sought state regulation
of medical practice in order to delegitimate its competitors – these physicians
nonetheless retained the respect of powerful men in law and politics.
In the twentieth century, battles over national health insurance programs
and malpractice took place in public courtrooms and political spaces, yet
the alliance between medicine and the law strengthened in the less visible
venues of hospitals, immigration checkpoints, and police stations.
Both the state and the medical profession have enhanced their power
through their mutual alliance. The results for ordinary people could be quite
intrusive; as law and medicine reinforced one another they appeared to be
overpowering social forces. Under the police powers intrinsic to state-level
governance, public health officials could arrest, hold, and treat individuals.
And though Americans regularly protested – both violently and in court –
the police and the judiciary alike generally upheld the powers of state
health officials. With habeas corpus suspended for reasons of public health,
citizens found there was little they could do to challenge the actions of
health officials.
Over the twentieth century, medicine was drawn into an increasingly
close relationship with the law and its agents as physicians and hospital staff
became accustomed to collecting information and data for law enforcement
and other state officials. All too often, medical abuses became “normal,”
bureaucratized, and invisible to those in power. It was left to outsiders and
people below – lone individuals, the subordinated, the “deviant” poor –
to bring the abuse of patients and citizens to light and into public consciousness.
Eventually, the medical profession came itself to understand the
medical misuse of power and mistreatment of patients, and began to learn
respect for patient rights from those it had marginalized.
Because medicine is intimately involved with life and death and involves,
by definition, touching and invading the body, it has been a primary area in
which battles over individual civil liberties, autonomy, and bodily integrity
have taken place. The struggles over an array of medico-legal issues were
not confined to the pages of professional journals or courtrooms, but have
claimed popular and political attention as well. Indeed, medicine and law
is an especially useful arena for investigating the development and workings
of power. Analysis of the actions of ordinary Americans, as well as of
elites in hospitals, police stations, courtrooms, and public health offices,
is important for understanding the frameworks of law and medicine that
people negotiated, used, challenged, and remade. Here, too, we can examine
how law and medicine (together and sometimes in opposition) create,
enforce, or dismantle class, race, gender, sexualities, hierarchical medical
arrangements, and corporate power.
The state’s long-standing interest in controlling reproduction and sexuality
is perhaps the most revealing prism through which the law-medicine
nexus may be viewed. The definitions of citizenship in the nation, the
inheritance of enslaved status (based on biological reproduction through
the mother), the laws regarding marriage and child custody, and the legal
interest in and state intervention in pregnancy and even childbirth itself
all point to the significance of reproduction to the state. This history too
indicates both the power of specialization and the ongoing struggles to guarantee
legal and social deference to experts. Indeed, the medical profession’s
struggles to gain social and legal authority were often launched by focusing
on reproduction and sexuality. For physicians, writing their will into law
was achieved earliest and most easily in the reproductive arena. Reproduction
and sexuality, then, have never been peripheral, but have mattered
enormously in the construction of American law, medicine, society, and
the state. Historical shifts in state surveillance and legal recognition of the
autonomy of pregnant women have great relevance for patients in general.
Changes in the relationships among medicine, law, and patient-citizens
have often developed in this arena first.
In this chapter I have adopted a chronological and thematic framework to
highlight the ways in which traditional medico-legal issues, public health,
and criminal law come together. Starting with medical jurisprudence in the
nineteenth century, the chapter moves to the turn-of-the-twentieth-century
courtroom where the jousting of medical experts became public spectacle.
From popular interest in cases centered on insanity and the female body, I
turn to analysis of the nation’s historical legal interest in reproduction and
sexuality as expressed through legislation, law enforcement, and regulation
of individual medical practices. The chapter then shifts to a focus on public
health and the growing importance of administrative law. Through litigation
and social movements, Americans demanded that the government act
and provide services and that it protect the rights of individual citizens and
patients. Public expectations of services, protection, and rights contributed
to the development and reach of numerous federal agencies that worked to
protect and improve the public’s health.
Finally, I return to analyses of medico-legal issues. As we will see, the
focus of jurisprudence at the turn of the twenty-first century shifted from
medical expertise in the courtroom to decision making and procedures in
the hospital. In addition to the medico-legal questions regarding end-of-life decisions, patient rights, and privacy that became a focus of law school
textbooks in the late twentieth century, I consider the incorporation of the
medical system into the state’s policing systems over the century as a whole.
Policing practices reliant on medical cooperation and expertise that began
in public health – often as part of patrolling the borders between citizens
and strangers – have, I suggest, increasingly merged with criminal law and
increasingly take place in the hospital. These policing practices tend to
focus first and most on stigmatized populations and to divide the “good”
citizen-patient from the “bad,” usually marked by color and class. Yet,
the habit of policing has expanded so that everyone is now subject to state
surveillance through medicine.
I. MEDICAL JURISPRUDENCE IN NINETEENTH-CENTURY
AMERICA
In the nation’s earliest years, educated, elite leaders of the regular medical
profession enjoyed a great deal of respect from the upper echelons of
the legal profession and lawmakers. In matters of medical and scientific
expertise, legal leaders deferred to the knowledge of elite physicians. An
excellent example is the New York state legislature, which, in the mid-
1820s, decided to address its mish-mash of common law, colonial law, and
state law by rewriting and codifying the state’s law. The three-man drafting
committee invited John B. Beck, the foremost international expert in
the field of medical jurisprudence, to write codes relevant to medicine and
public health. This pattern persisted across the new nation as it created its
own American legal culture. Despite the apparent hostility, derision, and
division that developed between the professions later, lawyerly deference
to professional medical knowledge continued into the twentieth century.
Lawyers and doctors saw each other as professionals who had special knowledge,
who served the public good, and who should be trusted to make
decisions and judgments on behalf of the public. When the American Law
Institute (founded in 1923) envisioned the reform and standardization of
American law, it solicited, listened to, and followed the advice of the corresponding
elite leaders of medicine.
Medical jurisprudence as a field of medicine originated in Scotland and
France. The early-nineteenth-century American men who took up medical
jurisprudence dreamed that a system of state medicine like that in France,
with its system of medical police and close ties between physicians and
the state, would be created in the United States. From the 1820s to the 1840s
in the United States, medical jurisprudence gained a growing reputation,
journals developed, and medical schools all offered training in the field.
The physicians saw their medico-legal expertise and work – of determining
insanity, performing autopsies at inquests, distinguishing between infanticide
and stillbirth or between murder by poison and death by natural
causes – as public services. As important as their knowledge was, however,
physicians received little respect and no payment for these services.
Medico-legal experts hoped to change this situation.
That some of the earliest medical researchers worked to answer questions
that had no diagnostic or therapeutic relevance but were of legal significance
indicates the desire among the medical elite to enhance their own status
by making medicine useful to the law. The emphasis in the medical-legal
relationship was on how medicine might serve the medical needs of the
state in criminal cases, public health, and the protection of property rights.
Equally telling, the actual research questions pursued underscore the
centrality of reproduction and family to both medicine and law. Many
addressed paternity and inheritance. For instance, was the widow’s newborn
really that of her deceased spouse? The scientific answer to this question
could determine the distribution of inherited wealth. Embedded in
such medico-legal problems were gendered norms regarding sexual behavior,
marriage, monogamy, and patriarchal control and possession of women
and children. Physicians investigated the length of human gestation in
order to answer questions about posthumous births as well as false claims
of pregnancy. This research contributed to new scientific understandings
of pregnancy and helped erode traditional ideas about the importance and
meaning of quickening – when the pregnant woman felt fetal movement,
at approximately the fourth or fifth month – at least among (some) regular
(educated, orthodox) physicians, if not among the general public. Beck
himself focused on infanticide and investigated birth weight and the processes
of suffocation in an effort to determine whether an infant’s death was
due to natural or criminal causes. Others, knowing the battles that too often
ensued when wills surprised expectant heirs, investigated the precise definition
of insanity as death approached and claimed that physicians should
determine whether the deceased had been clearheaded or demented when
he authored his will.
Just as the field of medical jurisprudence was booming in the medical
schools, a sudden rise in malpractice cases in the 1840s and 1850s produced
a new hostility between lawyers and doctors. Historians explain the rise in
suits as a result of regular physicians’ growing expertise, the development
of a medical literature that set standards, and rising public expectations
fueled by physicians’ claims. For instance, regular doctors’ new ability to
set bones in compound fractures was an improvement over the old method of
amputation; yet, the limbs were often crooked and imperfect. Those crooked
arms and legs were a disappointment that led to suits; juries sympathized
more with the deformed than the doctor. Medical-legal experts campaigned
to eliminate the emerging system that treated medical knowledge as a
debate between opposing experts and urged the criminal justice system to
create a board of medical experts to advise judges in medical matters. With such a board in place, the judge in each case would listen to a panel of experts and act on their careful and reasoned expertise rather than leaving medical knowledge, diagnosis, and appropriate therapeutics to be decided by a lay jury on the basis of conflicting testimony.
Close analysis of the testimony and complaints brought by ordinary people
and their families in malpractice and injury suits, as well as juries’
decisions in tort cases, offers insights into the world of law and medicine.
In these venues, ordinary Americans expressed their sense of rights and
expectations. Through official briefs and transcripts we can learn of popular
beliefs and “unwritten laws” about the body, sickness and health, life and
death, and social responsibility in the face of disease, death, or tragedy.
In the thousands of suits brought by injured travelers against railroad and
streetcar companies in the nineteenth century, it is clear that Americans –
as injured travelers and as jurors – believed they had the right to expect that
corporations would take care to prevent injury and death and that they had
the right to damages when accidents occurred. Furthermore, they expected
the state to act as a mediator and resolve the damages to bodily integrity,
both in the courts and through state regulation of industry. Although the
courts upheld the idea of the free man who was responsible for himself and
who could be found negligent (and thus denied financial compensation),
under the onslaught of suits, they also legitimated the payment of damages
to injured white women – and even, when pressured by repeated jury findings, to an African American man injured when forced to jump
from a moving train. Similarly, a series of suits brought against physicians
whose performance of illegal abortion injured or killed their patients indicates
that American families expected physicians to perform safe procedures
(regardless of their legality) and, if and when there were injuries, to take
responsibility for their mistakes by providing medical services and by paying
for emergency medical care, hospitalization, and even funeral costs.
Juries agreed. So did half of the state supreme courts that ruled on the
question.
In the early years of the Republic, elite educated physicians had won
licensing requirements through state legislatures. In the 1830s and 1840s,
however, the legal recognition and protection secured by the medical profession
disappeared in the face of anti-elitist and democratic impulses. Instead
of recognizing the aspirations of physicians and ensuring that medical practice
was the exclusive right of an exclusive profession, the nation chose to
protect the rights of all citizens to practice various forms of medicine and
to choose among competing practitioners. State licensing of practitioners
was quickly dismantled, not to be reinstated until the end of the nineteenth
century. The lack of licensure laws and the laissez-faire attitude toward the
education of doctors produced a diverse and competitive medical climate.
The sick and injured could purchase guides and medications to practice
their own medicine at home or seek out midwives, specialists in water cure,
homeopathic doctors, or regular physicians. To the chagrin of the highly
educated, more socially conservative regular physicians, all could claim the
title “doctor,” and all were equal in the eyes of the law and the eyes of many
Americans.
With the failure of licensing and the proliferation of practitioners, elite
physicians looked for other ways to constitute their authority and to form
ties with the state. Regular physicians staked their claim to social authority
and medical expertise not only on their expert knowledge but also on
their claim to moral superiority. The creation of new criminal abortion
laws in every state is an important example of regular medicine’s drive for
social power. The laws were rewritten according to the perspective of an
elite group of specialists in obstetrics, a specific procedure was reserved
to doctors only, and the process cultivated a respectful association between
the state and the leaders of the regular medical profession. Under common
law, early abortion had been permitted; only abortion after quickening
was illegal. The AMA-led campaign of the 1860s and 1870s to criminalize
abortion in early pregnancy dramatically changed the law governing
pregnancy. In rewriting the law, regular physicians marked themselves off
from the irregulars and midwives whom they blamed for abortion and also
proclaimed their own purity in contrast to the Protestant ministry, which
accepted quickening and did not join in the anti-abortion campaign. In
securing laws that simultaneously criminalized early abortion and granted
doctors the authority to make exceptions when they determined abortion
to be medically necessary, regular physicians won exclusive power to determine
the morality, medical necessity, and legality of abortion in specific
cases.
Regular medicine’s social power also rose in tandem with the great scientific
discoveries in bacteriology in the late nineteenth century. The bacteriological
discoveries of Robert Koch and Louis Pasteur in the 1870s and
1880s and the ensuing development of vaccines impressed journalists who
glorified these men and their stories. Modern science and its achievements
became part of the newspaper-reading public’s daily fare. The discoveries
of the laboratory and the decline in infectious diseases that were attributed
wholesale to the germ theory (the effects of improving nutrition and public
health measures attracted much less notice) made it seem that disease
could be conquered decisively. Science became sacred, and medicine claimed
the power of bacteriology. By the 1890s, medical students sat in front of
microscopes and physicians donned white lab coats.
II. THE NEWSPAPERS, MEDICINE, AND THE COURTROOM
The horrors of medicine also captured press and popular attention. Such
stories could be found in the courtroom. The press looked for, publicized,
and helped produce these stories for public consumption. The identification,
treatment, and punishment of the insane were of great interest at the turn
of the century, as were the female body, sexuality, and gynecological surgery.
Newspapers, courtrooms, and medical specialties – particularly psychiatry,
surgery, and gynecology – worked together and against each other to develop
narratives, to sell papers, to protect and destroy reputations, as well as
to address and create social differentiation, norms, and deviance. Social
hostilities and dilemmas around gender, class, race, and sexuality got worked
out in the intertwined arenas of the courtroom and the newspaper.
While the legal system’s adversarial format may have helped determine
the guilt or innocence of the accused, negligence or not, for the practitioners
of medicine the attorneys’ battle to find “the truth” by questioning and
undermining all opinions that did not fit their argument did not clarify
the truth of medical diagnosis, therapeutics, theory, or practice. Instead,
the questioning of expert witnesses degraded the reputation of the entire
profession. When divergent medical testimony reached the newspapers,
the problems within medicine were deepened and broadcast widely. Many
cases required graphic descriptions of the body, surgical instruments, and
techniques, and actual body parts were often passed around the courtroom.
Tumors, uteri, and other body parts preserved in jars were displayed by
attorneys, identified by witnesses, and seen and handled by jurors. The
medical profession would have preferred that matters medical – in both
senses of the phrase – be contained within professional discourse and spaces.
As the medical diagnosis of insanity entered the courtroom, its causation,
diagnosis, and definition moved out of medical control to become an
object of contention among attorneys and expert witnesses, to be decided by
judges, juries, and journalists. Because defendants could be found innocent
by reason of insanity, the insanity defense was especially attractive to those
accused of murder. In insanity cases, the “M’Naghten Rule,” the rigid rule
followed since the mid-nineteenth century, held that if a defendant knew
that his or her act was forbidden by law, then he or she was legally sane,
regardless of other behaviors. If the person did not realize at the time of the
act that it was wrong, then the person was deemed insane and not responsible.
The 1881 assassination of President James Garfield and the subsequent
prosecution of his killer, Charles Guiteau, provided an extended moment
during which the public, the psychiatric profession, and the legal system
observed, debated, and judged insanity and sanity. Arguing that Guiteau
was insane, his attorney brought in new German-trained neurologists who
testified to the hereditary nature of the condition. The prosecution argued
that Guiteau was sane and presented insane asylum superintendents as
expert witnesses. The trial gave the public the treat of observing the assassin
defend himself by insisting that he followed God’s orders. To the cheers
of the attending crowds, the jury found Guiteau sane and guilty. The
M’Naghten Rule and the public’s desire for a hanging won out over the new
scientific understanding of insanity, which emphasized heredity. The spectacle
of Guiteau continued as the dead man’s body was autopsied and
scrutinized. On finding lesions, medical men and medical journals changed
their view and declared Guiteau had been insane after all.
Ten years later, another murder and the question of insanity gripped the
nation’s attention; this case found its way into the definitions of psychiatry
and sexology. The murderer was a woman, as was her victim. Alice Mitchell,
a white, middle-class daughter of Memphis, slashed the neck of the girl she
intended to marry. Mitchell avoided a murder prosecution by agreeing to a
lunacy inquisition. If the defendant was found insane, execution could be
avoided, but lifetime incarceration was virtually guaranteed. The evidence
of insanity in Mitchell’s case was, as in Guiteau’s, composed of both evidence
of hereditary insanity and a lifetime of strange and inappropriate behavior.
The odd behavior and the marks on the body that the defense offered as proof
of insanity were defined by the norms of gender and heterosexuality. The
behavior to which family and friends testified began with Alice’s interest in
boys’ games as a child and her refusal to dance with young men and ended
with her special friendship with Freda Ward. Alice was determined to marry
her and had given Freda an engraved gold band. For the psychiatric experts,
Alice Mitchell’s desire to marry a beloved woman proved her delusional
state. Same-sex desire was understood as sickness; gender-bending was proof
of insanity.
The Alice Mitchell case was one of a series of turn-of-the-century cases
that brought together medicine, crime, and sex and captured the imagination
of the press, the public, and the courts. All of them – Alice Mitchell’s
crime, the contemporaneous investigation of Chicago’s abortion business,
the “Jack the Ripper” murders in London (which invoked evil gynecological
surgeons), and the trials of Brooklyn gynecologist Dr. Mary Dixon
Jones – underline the importance of female sexuality and medical matters
to the New Journalism of the 1880s and 1890s. The front-page newspaper
coverage of Brooklyn gynecologist Dr. Mary Dixon Jones’s questionable
surgical practices and subsequent manslaughter and libel trials in 1888–89 and 1892 spotlighted the gendered expectations of medical demeanor
and courtroom deportment. The Dixon Jones trials, like the contemporaneous
insanity trials, served as a lightning rod for divergent views about the
direction of medicine. Dixon Jones violated the expectations of the female
physician. Instead of being wary of surgery and science like most women in
the profession, she embraced both; instead of displaying personal modesty,
she was ambitious. Instead of feminine sympathy, she coerced women into
gynecological surgery. Although prominent physicians and sixty patients
testified on her behalf, fifty other women testified to their discomfort with Dixon Jones, to her insistent demands for money, and to surgeries they
never agreed to. Although Dixon Jones was acquitted of manslaughter and
homicide charges, she lost her libel case and her hospital license was revoked.
III. FROM DRAMATIC CASES TO THE TRIALS OF DAILY LIFE:
REPRODUCTION AND THE LAW
Big cases, like those of Dixon Jones, Mitchell, or Guiteau, which captured
the attention of the press and the public, offer the historian opportunities
to analyze legal precedents and medical theory, as well as American values,
anxieties, social relations, and social structures. Smaller, less visible
cases rarely reported in legal treatises or newspapers offer similar and other
opportunities: the ability to see how the most ordinary investigations and
trials were conducted on a daily basis and to analyze the more typical treatment
of working-class witnesses and defendants. Even the most routine,
most trivial of interactions from the perspective of police or attorneys were
important moments for the men and women caught in them. Those routine
questions, investigations, and processes taught people the power of the law,
the operations of justice, and the law’s capacity to punish. Enforcement of
the states’ criminal abortion laws provides an example.
Raids and trials of accused abortionists sometimes dominated the newspapers,
but relying on press reports alone would distort our understanding
of the law in practice because newspapers emphasized the unusual –
abortion-related deaths of unmarried women – and produced a terrifying
picture of a deadly, criminal underworld. The widespread availability of
abortion services and the relatively small number of convictions for criminal
abortion may suggest the failure of the criminal abortion laws, but
to conclude that the laws mattered little would also be incorrect. Data on
prosecution and incarceration do not tell the true story of the state’s ability
to punish. To see how law worked in practice requires analysis of the routines
of enforcement. Local police regularly investigated criminal abortion,
collected evidence, interrogated witnesses, and arrested suspected abortion
providers. Because juries often refused to convict in abortion cases, prosecutors
learned to concentrate on cases where a woman died. As important,
the state’s police officers and prosecutors did not enforce the laws alone,
but relied on the assistance of private entities, particularly the medical
profession.
The process of collecting evidence against accused abortionists punished
women for their efforts to end a pregnancy. To obtain dying declarations,
police and physicians questioned women on their deathbeds, threatened to
withhold medical care, and required them to identify their abortionists and
sign documents stating their belief that they were about to die. In the process,
women, their lovers, husbands, relatives, and friends learned first-hand
that the law condemned them and their actions. For unknown thousands
of women, these were the final events of their lives. For many others who
survived their abortions as well as those who were questioned relentlessly
after a miscarriage, it was a humiliating, frightening, and punitive experience.
Women did not need to be arrested, prosecuted, or incarcerated to feel
punished. The use of the hospital to identify crimes and the interrogation of
vulnerable patients were standard components of the state’s criminal investigation
practices and continued until the decriminalization of abortion
nationwide in 1973.
The state’s reliance on medical policing of patients in abortion cases began
at the turn of the century. A few physicians urged their colleagues to help
coroners and police in the repression of abortion, but many doctors resisted.
The state needed medical cooperation, however, and obtained it by threatening
to prosecute physicians or damage their reputations. Coerced, physicians
learned to comply; notifying the police and interrogating women became
standard hospital routine. In the post-World War II period, physicians
themselves developed new methods, namely therapeutic abortion review
committees of physicians, to monitor their colleagues’ abortion practices,
to define legal and illegal, and to restrict access to (legal, therapeutic) abortions
in hospitals. The intentions of these committees were not clear-cut;
they both limited abortion and legitimated abortions in order to allow
physicians to provide them. Overall, however, the committees reduced the
number of abortions performed in hospitals by dissuading physicians and
patients from seeking committee-endorsed, therapeutic abortions and thus
pushed many into the world of illegal abortion. There, despite the law and organized medicine’s opposition to abortion, the medical profession was always heavily involved in providing illegal abortions. And in the
late 1950s, a small group of physicians who found the profession’s review
processes and the injury to women intolerable and unjust initiated the earliest
efforts to reform the nation’s criminal abortion laws. By the late 1960s
a few brave physicians broke the laws openly in order to challenge and, they
hoped, change them.
The legal history and practice of contraception and abortion have long
been connected. The 1860s and 1870s saw a frenzy of lawmaking to criminalize
the avoidance of childbearing; to prosecute practitioners, pharmacists,
and others who provided contraceptives and abortions; and to censor discussion
of sexuality, pregnancy, contraception, and abortion. Congress enacted
the Comstock Law in 1873, which banned publication about and the provision
of contraceptives and abortion and equated both with “obscenity.”
The Comstock Law and the criminalization of early abortion underlined the
nation’s interest in controlling sexuality and reproduction, enforced maternity
as a marital duty, indicated support for censorship, and re-entrenched
the notion that sexuality was shameful. In the 1870s YMCA anti-vice
activist Anthony Comstock advanced his career with an attack on the free lover and feminist Victoria Woodhull; appointed Special Agent of the U.S. Post Office, he revived his reputation at the end of his life when he seized Margaret Sanger’s publication, The Woman Rebel, in 1914 and shut down one of the
first birth control clinics in New York City two years later.
Comstock’s activities energized an emerging new pro-sex movement that
demanded the legalization of contraception. Sanger and her socialist supporters
used Comstock’s raids to develop a new movement for freedom of
speech and a movement willing to open birth control clinics in defiance
of the law. Sanger turned to winning legislation granting physicians the
legal right to prescribe contraceptives, but the AMA continued to strenuously
oppose both birth control and abortion. During the Depression, birth
control and abortion boomed as families found it essential to prevent the
birth of children. In the 1930s a series of federal cases – Young’s Rubber Co. v. C. I. Lee and Co. (1930), Davis v. United States (1933), and United States v. One Package of Japanese Pessaries (1936) – found that contraceptives were
not necessarily “obscene” and that physicians could legitimately purchase
and prescribe contraceptives. By the late 1930s, the American Birth Control
Association sponsored more than 300 birth control clinics; mail-order
firms, pharmacies, and door-to-door saleswomen sold contraceptive devices
and medications to the general public. Police raided and shut down birth
control clinics in the 1910s and 1920s when clinics first opened as a political
project and were publicly advertised; abortion clinics were similarly raided
as they became open and visible during the Depression. Both remained
criminal until the 1960s and early 1970s, though both contraception and
abortion were widely practiced by ordinary Americans.
The federal government quietly provided funds for contraceptives beginning
in the Depression years, but it was not until 1965 that the U.S.
Supreme Court recognized the right of married couples to use contraceptives.
Griswold v. Connecticut (1965) recognized that a “zone of privacy”
existed in which the married couple in their private home had the essential
right to make decisions about procreation and family. In 1972, Eisenstadt
v. Baird found the same right existed for unmarried heterosexual couples
(again recognizing reality). In this period, a dozen states reformed their
abortion laws, and several states legalized abortion, most importantly New
York in 1970. In Roe v. Wade and Doe v. Bolton (1973) the Supreme Court
overturned the nation’s criminal abortion laws in recognition of the right of
women to make decisions about their bodies and reproduction and, at least
as important, in recognition of physicians’ rights to carry out their medical
judgment without interference.
The poor women who died – many of them African American and Latina –
and the thousands of women injured every year as a consequence of the
criminalization of contraception and abortion should not be overlooked. Nor
should the benefits of legalization. Maternal mortality fell by half following
the legalization of abortion. In countries where abortion is still illegal,
by comparison, the procedure accounts for 25 to 50 percent of all maternal
mortality. Legalization of contraception and abortion significantly improved
women’s health and life chances and recognized their rights as citizens
to bodily integrity and self-determination. After Roe v. Wade, the pro-life
movement’s strenuous work to undermine the legality and availability of
medically provided abortions gained it significant media attention. Less
noticed was the anti-abortion movement’s effort to suppress the legitimacy
and use of contraceptives – and not just in America but worldwide.
The state’s emerging reliance on medical experts in criminal law was not
unique to abortion at the turn of the century. It was an innovative feature
of Progressive era judicial efforts to address crime in general as a product
of larger social problems. The Municipal Court of Chicago led the nation
in the new strategy of “socialization” of the law, creating new courts for
special areas – a family court, a morals court, a boys court – and new agencies
to investigate the underlying social causes for criminal violations. The courts
then tailored their response to the particular individual’s pathology.
In the 1910s and 1920s, eugenics strongly influenced the criminal justice
system. Elite eugenic theory converged with intellectual and popular anxiety
about crime and colored the criminal justice system’s actions. The crime
problem, in eugenic thinking, was a problem of reproduction. The propensity
to crime, deviance, hypersexuality, and more was inherited: the criminal
classes (re)produced criminals, and the “feeble-minded” prostitute gave
birth to feeble-minded criminals. As Progressive era judges, social workers,
and reformers adopted these views, often alongside environmental beliefs
that poverty produced criminal behavior, many concluded that crime control
required preventing crime before it occurred, both by holding the
“mentally defective” preemptively and by preventing the reproduction of
the feeble-minded and criminal. The Chicago Municipal Court system
included its own Psychopathic Laboratory to which local judges sent criminal
defendants and others for IQ and psychological testing. What began
as a way to sensitize the law to an individual’s social circumstances and to
help defendants reform themselves quickly became a method for identifying
potential criminals to be ordered to institutions for the feeble-minded
and held indefinitely. This individualized assessment system tested and
incarcerated thousands of working-class citizens annually – in mental institutions,
not prisons, and without due process. Across the country, many of
the inmates of these institutions were also subjected to surgery as part of
the state’s crime control program.
The history of sterilization reveals how medicine and law collaborated
in the past to control specific populations, to shape society in their own
image through involuntary and undemocratic means, and to reinforce and
maintain historical inequalities and hierarchies. The targets of involuntary
sterilization changed with the historical context and the changing sense of
who threatened the social order and the public good. The law did not take
the lead; rather, it followed medical practice by authorizing the practice of
involuntary sterilization through statutes endorsed in the notorious U.S.
Supreme Court decision, Buck v. Bell (1927).
Systematic, involuntary sterilization began as a private program of sterilizing
male prisoners, a program initiated by Dr. Harry Sharp at the Indiana
State Reformatory. Dr. Sharp soon advertised his success; by 1909 he
reported he had sterilized more than 200 male inmates. Eugenicists and
physicians endorsed the program, and the states soon officially sanctioned
the involuntary sterilization of the “feeble-minded,” “defective,” “delinquent,”
“criminal,” and infected. By 1913, twelve states had passed such legislation.
Individuals and civil rights organizations fought these involuntary
procedures and often won, but ultimately the U.S. Supreme Court endorsed
the practice. Buck v. Bell showed legal deference to accepted medical and
scientific practice and thought. It also indicated how the practice of sterilization
had changed; the person to be sterilized was not a “defective” or
“criminal” man, but a young white woman in Virginia who, along with
her mother and infant daughter, was thought to be “feebleminded.” Sterilization
had shifted to women, the sex that literally produced the next
generation.
The state did not officially approve every sterilization or require informed
consent or review procedures, but instead assumed that social policy and
public morals were in good hands when entrusted to physicians. As mass
sterilization became standardized in the states and tens of thousands were
sterilized through official state programs, private hospitals and private
physicians also carried out their own sterilization policies for their vision
of the public good. By the 1950s and 1960s, abusive sterilization practices
were widespread and routine. Individual physicians and hospital committees
regularly pushed or insisted on female sterilization in exchange for
prenatal care, obstetrical services, or therapeutic abortions. If patients did
not cooperate, some physicians threatened to have state officials cut off welfare
payments or take away their children. Still others neither asked nor
threatened, but simply performed sterilization procedures during delivery
without the knowledge or permission of the woman. The belief of some individual
physicians in their right to act as judges who punished and issued
edicts is remarkable. It must also be said, however, that some physicians
never subscribed to such ideas, broke their profession’s rules by providing
reproductive services requested by women, advocated reproductive rights,
and bravely sought to end patient abuse by bringing abuses to professional,
public, and media attention.
By the 1950s and 1960s involuntary sterilization was increasingly used
to punish single women and single mothers of every race and class for
their sexual activity. Low-income women of color, however, were especially
vulnerable to compulsory sterilization. In the Southwest, Mexican
and Mexican-American women were targeted by physicians in public hospitals
(who treated speaking Spanish as an indicator of welfare abuse, illegal immigration, and overpopulation in order to justify coercive sterilization practices).
In the South and in Northern industrial cities, individual physicians and
state programs targeted low-income African American women, represented
as unwed mothers and resented by whites for their claims on public funds
through AFDC (Aid to Families with Dependent Children). The women
coerced into “consenting” to a sterilization procedure in order to obtain a
safe, legal therapeutic abortion performed in a hospital were likely to be
unmarried white women of the middle class. Sexually active single women,
pregnant or mothers already, bore the brunt of anger at the changes in
heterosexuality, social mores, and gender then taking place. In a fervently
pro-natalist period, these women would have their ability to reproduce permanently
taken away. Although the majority of the involuntarily sterilized
were apparently single, neither married women nor men were protected.
In Puerto Rico and in Los Angeles, married Latinas were pressed to agree
to sterilization as part of population programs or simply sterilized without
being told. In California, one judge pressured a Latino father into “agreeing”
to sterilization in order to avoid imprisonment. Federal programs promoted
and paid for sterilization procedures on American Indian reservations. At
the same time, patients who wanted sterilization procedures were routinely
refused. Affluent whites, married couples, and African Americans all ran
into roadblocks and denial when they requested sterilization.
Sterilization abuse reached a crescendo in the 1960s, yet in-depth studies
of specific regions complicate generalizations about the ability of eugenicists
to exert control. For instance, in the face of racist policies in North
Carolina, some poor African American women obtained sterilizations they
wanted for their own reasons through the system designed to prevent the
birth of “undesirables.” Collective efforts could change the law. Even when
suits brought against physicians and hospitals for sterilization abuse failed
(as in Los Angeles), organizers could still win. Chicana feminists, with allied
physicians, attorneys, and media attention, won protections against sterilization
abuse in the Los Angeles County Medical Center and wrote California’s
state regulations. In 1978, the federal government (through the Department
of Health, Education and Welfare) adopted guidelines called for by
reproductive and civil rights activists. Protections included waiting periods,
informed consent, prohibition of threats regarding welfare services, and
the availability of Spanish-language materials. As feminist health and civil
rights groups learned, however, new laws and rules designed to protect poor
and minority women would be ignored without continuous monitoring.
IV. PUBLIC HEALTH
The myriad local, state, and federal institutions and regulatory agencies that
acted to protect public health and safety or provide health care all deserve
attention as sites of law and medicine and as sites of legal conflict. The
police powers wielded by local municipalities and the states that undergirded
public health measures in the eighteenth and nineteenth centuries
gave way to a greater emphasis on administrative law by the turn of the
twentieth century together with a growing emphasis on federal responsibility
for the health of the nation. After the turn of the century, the powers
granted bureaucratic agencies worried many; administrative law did not
follow standard judicial practices regarding habeas corpus, due process, or
evidence and was generally not subject to judicial oversight and review.
Congress and the courts tempered agency powers by mid-century, but certain
arenas – notably immigration – continued to be exempt from adhering
to due process procedures. Areas of public health law were also treated as
inherently in the public interest and left significantly free from judicial
review. For well over a half-century, for example, Americans had been discovering
through personal suits that there were few restraints on health
authorities’ powers over the public or over “inmates” of state-sponsored
health institutions. Indeed, health authorities’ commonplace usage of the
term inmates rather than patients or citizens implicitly equated disease with
crime, and institutionalization with incarceration.
The power and public funds granted health-related institutions and agencies
indicate that government action on behalf of health is a long-held American
value. The legal powers enjoyed by health authorities, however, were
predicated not only on a commitment to health but also on social stigma
and inequality. Social support for controlling undesirable and subordinated
groups, racism, nativism, and sexism made up the foundation on which the
power of health officials stood. The common association of infectious diseases
with downtrodden groups – whether foreign-born, non-white, poor,
criminal, sexually deviant, or alcoholic – helped enable the state to
enforce quarantines. The diseased were blamed for their diseases; the public
tended to be more concerned about keeping away from the sick than caring
for them. Diseases often provided the occasion to create racial and class
difference, to discriminate, and to exclude those whom the majority feared,
whether Chinese bachelors in San Francisco blamed for bubonic plague,
African American domestic workers in Atlanta identified as the spreaders
of tuberculosis, or Mexican, Asian, or European immigrants at the nation’s
borders who were searched for trachoma, tuberculosis, and a host of other
diseases and disabilities.
Boards of health benefited from epidemics: in fear of disease, the public,
lawmakers, and the courts granted them greater powers. Fear of epidemics
prompted quarantine, official surveillance, and civic action; much
more common causes of death – tuberculosis and childbirth – produced
little panic or action. In the face of a global cholera epidemic in 1866, for
instance, New York City health officers assisted by city police inspected
businesses and private residences and issued orders requiring premises to
be cleaned, basements drained, privies emptied, and pigs moved. Public
health requirements overrode property rights claims: some businesses were
ordered to cease operation altogether. Following New York City’s success
in avoiding a deadly epidemic, cities around the country established boards
of health.
Although the state has long had the legal power to quarantine, businesses
and individuals often challenged that power and health departments
sometimes lost it. Worried about lost profits, businesses involved in trade
or tourism disputed the quarantine of ships, sailors, travelers, and products.
Involuntary quarantine and/or treatment sometimes provoked violent
protest. In 1894, Milwaukee saw rock throwing and death threats in
response to the traditional public health measures taken during a developing
smallpox epidemic. City health officials had quarantined the sick at
home, isolated others at the city hospital, and offered vaccinations to the
public. When health officials attempted to forcibly remove children from
parents to take them to the hospital (where immigrant mothers were sure
their children would die), the individual agony of mothers turned into a
neighborhood riot. Battles broke out among the German immigrant population,
health officers, and police as the health commissioner insisted he
was simply enforcing the law. The health commissioner’s insensitivity to
Milwaukee’s German population produced a public health failure: smallpox
became widespread in German neighborhoods, and the city council revoked
the commissioner’s legal authority to quarantine without consent. Public
health authorities learned that seeking cooperation through public education
and the encouragement of civic-mindedness was a better strategy than
coercion.
State-mandated vaccination to protect the public from smallpox also
raised constitutional questions. When cities or states required vaccination,
some people objected and took their cases to court on the grounds of personal
liberty and religious freedom. In the face of a smallpox epidemic in
Cambridge, Massachusetts, the board of health “ordered, that all inhabitants
of the city be vaccinated.” When Henning Jacobson refused to be
vaccinated, he was arrested and fined. He took his case all the way to the
U.S. Supreme Court, arguing that the compulsory vaccination law was arbitrary
and that the free citizen had the right to make decisions about his own
health. In Jacobson v. Massachusetts (1905), the Supreme Court upheld local
and state laws that mandated vaccination for the good of the larger public’s
health and at the same time allowed individuals to reject vaccination. The
state could not forcibly vaccinate, but it could fine or incarcerate those who
refused vaccination and could require vaccination of those wishing to attend
school.
The late nineteenth century and early twentieth century saw a marked
shift in the reach of police powers to protect the public’s health as health
authorities increasingly focused on individuals. The transformation in scientific
knowledge of disease undergirded the expansion of administrative
law in health matters. As scientists and health officials embraced germ theory,
public health authorities focused on infectious disease and insisted that
monitoring and correcting individual behavior were the keys to protecting
the public’s health. In this new context, the contributions of poverty,
ill health, malnutrition, environmental toxins, and the like tended to be
discounted. Health officials pinned blame for the spread of deadly infectious
disease on individuals (and entire groups); they often regarded the diseased
as akin to criminals.
The treatment of one early-twentieth-century New York City woman,
Mary Mallon, who became known as “Typhoid Mary,” exemplifies these
changes and the importance of social stigma. Bacteriologists had theorized
the notion of a “healthy carrier,” an individual who showed no symptoms
of disease, yet was contagious. Mallon was the first person in the United
States to be identified as a healthy carrier of typhoid fever. Although it
was not clear that city health authorities had the legal authority to isolate
a healthy individual, in March 1907 New York City Health Department
officials and police officers seized Mallon and took her to the city’s hospital
for contagious cases where her feces and urine were collected and analyzed
against her will. She was soon placed in isolation. Not until two years later
did a court hear her case in a habeas corpus hearing.
As public health focused its force on individuals, it increasingly provoked
individual lawsuits over civil rights. In Mallon’s case, despite the ambiguity
of the law, conflicting laboratory evidence, and the clear possibility that
hundreds of healthy citizens could be held by the state without recourse (as
Mallon’s attorney pointed out), the court allowed the health department
to isolate Mallon. The court’s willingness to overlook the unequal application
of health regulations underlines the convergence of public health and
administrative processes. That is, as was true of federal immigration law in
the early twentieth century, local courts granted public health officials enormous
authority to apply health regulations and exempted administrative
decision making from the usual due process standards. Health authorities
well knew that thousands of healthy typhoid carriers walked freely in the
city, but few were pursued. Indeed, in keeping with a socialized and highly
gendered view of the law, officials helped healthy male carriers of typhoid
who had families to support, instead of holding these men in quarantine.
For Mallon alone the health department required isolation and continuous
collection and examination of her bodily wastes for evidence of infection.
The complex historical record suggests that Mallon’s unique status as the first healthy carrier, along with her demeaned social status as a single, Irish, working woman, conspired to make her an example of the state’s power
over uncooperative citizens. If New York health authorities had tried to
isolate the thousands of other healthy typhoid carriers – most of whom
would have been hard-working, respectable citizens – it is likely that the
authorities would have faced substantial resistance from the public and
politicians.
Yet, there were instances when health authorities did hold thousands
of women and men in order to prevent the spread of infectious diseases
without generating public disapproval. The people held did not come from
all classes nor were they perceived as respectable. Instead, they came from
stigmatized groups – namely, female sex workers and male alcoholics –
whom the respectable majority already disliked and wanted off the streets. In pursuing specific categories of people, that is, public health punished
the socially and sexually deviant. Few were concerned about their rights
in disease-control efforts. The judiciary’s hands-off attitude toward public
health law further isolated these stigmatized people and left them without
legal means to challenge their treatment by authorities. The criminalization
of health law bred distrust of both public health and the law; vulnerable
citizens who feared health authorities’ links to police and prison avoided
health officials, physicians, and medical institutions.
Venereal Diseases
Federal policy toward the spread of sexually transmitted infectious diseases,
or venereal diseases in the language of the time, first developed in the military.
During the Civil and Spanish-American Wars, the military assumed
that soldiers would visit prostitutes and so officially regulated brothels, regularly
examined women, and required soldiers to undergo chemical treatments
to prevent disease or be disciplined. By World War I, the British
battle over the Contagious Diseases Acts and a new sexual politics informed
American policies. Now allied with social purity forces, the military took
a different approach to the spread of venereal diseases among the troops.
Within the ranks, the military stressed sexual purity and attempted to
replace sexual activity with sports, movies, and books. Outside, meanwhile,
it enforced “pure zones” with a radius of at least five miles around military bases.
Military and local officials emptied these zones of all women believed to be
prostitutes. Prostitute was defined broadly to include all women suspected
of being such, including women who walked alone on city streets, women
out at night, and young women who dated military men or had sexual
relationships with them, as well as women who worked in brothels and
exchanged sex for money.
The local and federal policies pursued during World War I against venereal
diseases treated disease as a crime and sexually suspect women as criminals.
Only one sex and only one type of person, the prostitute, was seen
as infectious and responsible for venereal diseases. Suspicious women were
arrested, subjected to forcible gynecological examination, and held by local
boards of health until declared free of disease. Men were exempted. The U.S.
Attorney General promoted the detention and compulsory examination of
women by declaring it “the constitutional right of the community” to hold
those suspected of disease in order to prevent its spread. With the encouragement
of the Attorney General and the military, cities and states passed
legislation requiring the examination, isolation, and treatment of women
suspected of having venereal diseases. When women complained of violations
of habeas corpus for being held without charges or trials and without
any end in sight for their incarceration, their cases were dismissed. The
federal government financed the construction of eighteen institutions to
hold the detained women, and between 1918 and 1920, more than 35,000
women were arrested and more than 18,000 incarcerated, in many cases
for a year or longer. Chicago’s Morals Court had pioneered the method of
requiring physical examination of prostitutes for syphilis and then “offering”
them free medical care instead of fines. With the federal government
endorsing the incarceration of prostitutes as part of the war effort,
Chicago’s Morals Court changed its methods. It refused bail to all women
brought in, required them to undergo a police-enforced mandatory medical
examination for venereal disease, then ordered them held under quarantine
until non-infectious, often for months. Due process did not apply to public
health law.
The women trapped in this punitive system suffered violation of their
bodily integrity and their civil rights and were blamed for harming the
troops and the nation. They had few places to turn for protection. The full
impact of this history on working-class women, primarily in the South
where the army camps were located as well as in cities like Chicago, has yet
to be unraveled. The founder of the American Civil Liberties Union (ACLU)
later called the tens of thousands of incarcerated American women “prisoners
of war.”1 The policies of this period perpetuated the sexual double standard
and showed that blame for infectious diseases could easily be shifted to
women. Working-class women learned of the power of public health officials
and learned to associate them with the military and the police. As health
departments turned to other efforts to improve the public’s health (prenatal
care or tuberculosis programs, for example), those with personal experience
may have resisted all measures, knowing how easily public health could
turn into prison.
At the end of the twentieth century, in the face of another stigmatized epidemic, HIV/AIDS, some politicians drew on the same historical assumptions and practices to write laws requiring that suspected or convicted prostitutes be tested for HIV/AIDS (again, female sex workers were identified as the criminal source of fatal disease, not as its potential victims). Public health professionals generally opposed mandated testing and quarantine for HIV, knowing that such programs would drive people away from public health and enlarge the epidemic.
1 Quotation of Roger Baldwin in David J. Pivar, Purity and Hygiene: Women, Prostitution, and the “American Plan,” 1900–1930 (Westport, CT, 2002), 217.
Tuberculosis
The difficulty of protecting the rights of stigmatized and sick populations
in the mid-twentieth century can be seen in the example of Seattle’s Firland,
one of the nation’s largest tuberculosis sanatoriums. The state identified the
typical tubercular, infectious person as a homeless alcoholic man living
on the streets; its handling of the disease was correspondingly coercive,
restraining tuberculosis patients in isolation against their will. Patients,
including the down-and-out alcoholics of Seattle’s Skid Road, questioned
the fairness and constitutionality of the state’s policies, but winning attention
to patient complaints was an arduous task. In the mid-1950s patients
pointed to practices they considered particularly egregious violations of
their civil rights and of the rules governing quarantine: holding patients
for long periods of time and in the “lockdown ward” without hearings or
opportunities for appeal, treating patients identified as alcoholics differently
from other tuberculosis patients, and using institutionalization and
isolation as punishment for behavior. A handful of patients and one unusual
former health worker of the institution wrote numerous letters to successive
governors of Washington State, to health officials, and to newspapers. Most
of these letters were dismissed or returned to the head of the institution,
who promptly censored the mail.
The former staff member finally caught the attention of the Washington
State ACLU when he produced a 51-page report of complaints. Although
the ACLU confirmed that the public tuberculosis sanatorium was violating
the civil liberties of its patients, public health officials refused to make any
changes and the ACLU dropped the matter. The lack of progress in this
case points to the power of public health. The state gave police powers to
public health officials with few restrictions and then trusted them. Once
the sick were placed in the hands of public health officials in order to
protect the health of others, they had little redress. The sick were not
convicted criminals, but with the adoption of locked wards and mandatory
institutionalization and treatment for months, they could be treated as
such. In the early 1960s, in light of growing legal and social concern for the
rights and treatment of inmates in prisons and health institutions, Firland
created a hearings process headed by a local judge. Yet, the judge continued
to allow the use of isolation for punishment (for drinking or escaping the
institution) and permitted patients who were not infectious – and thus not a
threat to the public’s health and not validly subject to quarantine – to be held against
their will. Furthermore, the differential treatment of alcoholics, who were
held for a full year rather than the three to six months needed to make a patient
non-infectious, persisted.
Like the history of the public response to sexually transmitted diseases,
the Firland case demonstrates that public health law and criminal law were
not distinct but intertwined. As alcoholism came to be understood as a disease,
Washington State judges sent men brought in for public drunkenness
to the TB sanatorium rather than jail. In other states, these men were still
sent to jail, and their quarantine and treatment for tuberculosis occurred
there. The Firland institution itself was a mixture of hospital and prison:
the institution was located in a former naval hospital, and the new locked
ward was the old navy brig. Originally built for punishment, the locked
ward was used that way again as the staff turned to it to manage a large
and sometimes difficult and argumentative population. State law guaranteed
health officials’ right to quarantine people with tuberculosis without
providing the due process rights required in criminal law.
V. FEDERAL AGENCIES AND THE HEALTH OF AMERICANS
At the beginning of the twenty-first century, the United States, unlike the
rest of the industrialized Western world, does not have a national health care
system. Yet, lawmakers from small towns up to the federal level constantly
pass health-related laws, form agencies with their own administrative laws
and systems, and spend significant shares of tax monies on health and
medicine. The U.S. system is deliberately piecemeal because it has been
constructed against the idea of a universal health care system and in the
name of a private physician-patient relationship. It is a system
sensitive to political clout. Instead of a universal system, U.S. government-supported
health services are awarded to a narrow set of those deemed
ideologically “worthy.”
The new political power of the AMA in the twentieth century can be seen
in its influence on federal health legislation. The AMA vigorously fought
early-twentieth-century reform efforts to win universal health insurance as
achieved in Europe. Attacks on all things German during World War I
and the rise of hysteria over socialism and the “Communist menace” after
the war ensured that universal health care was defeated. Instead, Congress
awarded health benefits to specific groups of deserving citizens: mothers
and soldiers. In 1920, under the threat of losing office as a result of the new
voting bloc of women created by the Nineteenth Amendment, Congress
passed and President Wilson signed the Sheppard-Towner Act for maternal
and infant health. The Sheppard-Towner Act was a major, early act of federal
social welfare legislation. With shared funding from the federal government
and the states, it was intended to reduce maternal and infant mortality (a
source of national shame as the highest rate in the industrialized, “civilized,”
world) through the education of mothers, midwives, and doctors. As much
as this was a victory for the women’s suffrage and health reform movements,
the AMA limited the act’s reach. The AMA succeeded in ensuring, first,
that the Children’s Bureau focused on education, not on the provision of
needed medical services, and, second, that all educational messages urged
pregnant women and mothers to see private doctors, preferably specialists
in obstetrics. Nonetheless, the AMA led the fight that ended the program in
1929 by associating the Sheppard-Towner Act with “socialized medicine”
and “mannish” women. As to soldiers, in 1921 Congress appropriated more
than $18 million to build Veterans Administration Hospitals around the
country to provide care to veterans exclusively.
When President Harry Truman endorsed universal health insurance legislation
in the wake of World War II, the AMA again shot it down by
charging Communism. Postwar federal funding went instead to biomedical
research and to the construction of hospitals to provide space in which
private physicians could practice. Both contributed to the increasingly technological
and expensive approach of American medicine. Congressional
legislation to finance care for specific patient populations – such as those
needing dialysis – or to support specific research agendas – such as cancer –
passed in response to the lobbying efforts of voluntary organizations,
patients, and their families. Not until 1965 did federal funding for patient
care through a compulsory insurance program finally pass, but, again, for
specific groups rather than for the entire citizenry: Medicaid for the poor and
Medicare for the elderly. At the start of the twenty-first century, Medicare
continues to be funded (even if threatened regularly by anti-welfare administrations)
because senior citizens act as a voting bloc. In contrast, ever since
the demise of President Lyndon B. Johnson’s War on Poverty, health services
for the poor – a group known to be unlikely to flex their muscle as voters and
represented as undeserving, criminal, cheating, and African American (thus
playing into and sustaining racism) – have been perpetually subject to cuts.
At the start of the new century, approximately one-third of all Americans
lacked health insurance.
During the course of the twentieth century, the U.S. Public Health Service
(USPHS), which originated in the need to provide for the care of sailors
at the end of the eighteenth century, came to encompass most of the federal
agencies with public health or medical responsibilities. One infamous
early-twentieth-century Public Health Service program became the impetus for
new regulations to protect patients and research subjects. In the Tuskegee
Syphilis Study, the health service tracked venereal diseases not in order to
treat or quarantine infectious individuals, but in order to not treat them.
Although this study and the deception and abuse of poor African American
men for which it is remembered never went to trial and never resulted
in a court opinion, the class action suit to which it gave rise nonetheless
raised awareness about the medical rights of patients, highlighted the need
for regulation to protect patients and human subjects, and contributed to
changing the laws under which biomedical research could be conducted.
The Tuskegee Syphilis Study observed “untreated syphilis in the Negro
male” for forty years, from 1932 to 1972. The “study” was based on an
assumption of biological racial differences, which it was intended to prove
through pathology. Several hundred African American men from Macon
County, Alabama, all extremely poor sharecroppers and tenant farmers,
were recruited for a study of “bad blood.” They were lured by the seeming
provision of health care and with promises that their funerals would be
paid for by the government. No explanations of the experiment were ever
offered nor informed consents obtained despite the medical understanding
since at least the nineteenth century that research on human subjects
required their consent. The economic and medical poverty of the men, the
“ignorance” attributed to them, and racism all justified the Public Health
Service’s failure to provide care.
In 1972, the Associated Press exposed the project, thanks to the continuing
efforts of a young, low-level employee in the health service who
pressed his superiors to see the wrongs committed and to treat the men. In
the context of powerful civil rights and health movements as well as contemporaneous
scandals that revealed the paternalistic and racist attitudes
toward and abuse of patients in both public health and medical settings
(such as sterilization abuse and the dangers of the pill and IUDs), news of the
Tuskegee Study quickly generated national attention. When the subjects
themselves learned from the national news of their use in a racist experiment,
they turned to the most prominent African American attorney in
Alabama. Fred Gray had represented Rosa Parks during the Montgomery
Bus Boycott, and on behalf of the study’s subjects, Gray filed a class action
suit against the Public Health Service and the state of Alabama for failing
to obtain informed consent. The federal government finally agreed to pay
$10 million to the subjects or their survivors, to provide free health care to
the subjects and their families, and to provide the long-promised burials.
Exposure of the Tuskegee study resulted in the writing of new federal guidelines
to prevent future abuses of human subjects in biomedical research. All
federally funded research on human subjects was made subject to approval
by cross-disciplinary Institutional Review Boards (IRBs). Despite the new
regulations, concern remained that IRBs might be inadequate for the detection
of abuses, especially of patients and subjects who are poor, immigrants,
non-English speaking and/or non-white.
By the late twentieth century, myriad federal agencies had responsibilities
for American health and welfare; each developed its own regulations and
administrative law processes. Such federal agencies included the Food and
Drug Administration (FDA), the U.S. Department of Agriculture (USDA),
the Occupational Safety and Health Administration (OSHA), the Environmental
Protection Agency (EPA), the Indian Health Service (IHS), the National
Institutes of Health (NIH), the Centers for Disease Control (CDC), and the
Department of Health and Human Services (HHS), among many others.
The enormous number of agencies and their accompanying administrative
law machinery indicate the continuing interest in public responsibility for
medical and health matters; their number also points to the difficulty of
reaching any unified policy, priority, or program. Furthermore, the work
of federal agencies was always vulnerable to legislative de-funding or other
forms of political interference from Congress, the President, or outside
business or political interests.
At times, industry practices so blatantly threatened the public’s health
that reformers succeeded in translating public outrage and fear into greater
power for regulatory agencies. The FDA was formed in 1907 in reaction
to Upton Sinclair’s exposé of the meat industry. In 1938, Congress granted
the FDA expanded powers to regulate drugs before they reached the market
after more than a hundred people died due to poisoning by a sulfa drug
mixed with a toxic, sweet-tasting solvent. In 1962, Congress and President Kennedy
again expanded the FDA’s powers following the thalidomide tragedy, which
damaged thousands of newborn children worldwide. The EPA (formed in
1970) has banned a number of chemicals because of the threat they pose to
human health. When the EPA attempted to take pesticides off the market,
however, the chemical industry challenged administrative law and hit the
agency with grinding, lengthy, and expensive suits. As a result, regulators
learned to choose cases that would be most successful in the courtroom and
in Congress.
In 1990, people long regarded as patients or potential criminals, institutionalized
because of diminished intellectual capacity or physical disability,
won recognition as rights-bearing citizens with the passage of the
Americans with Disabilities Act (ADA). The ADA forbade discrimination
against people with disabilities; mandated that workplaces, schools,
city streets and public services accommodate the disabled; and required
the Equal Employment Opportunity Commission to issue guidelines and
pursue complaints. The ADA showed the power of an organized social
movement to win legislation; the subsequent erosion of the ADA indicates
the larger power of business interests.
VI. PATIENT AUTONOMY, PRIVACY, AND SURVEILLANCE
In the last thirty years of the twentieth century, the intersections of law
and medicine within the United States proliferated endlessly, particularly
around questions of patient autonomy, privacy, and civil liberties. Over
the same period, the impact of U.S. funding, regulation, and involvement
in health policies, services, and biomedical research was increasingly felt
around the world. The growing intersections between law and medicine
in both domestic and international contexts require considerable research;
the historical global reach and power of American medical foreign policy
and regulation in particular have not been studied or integrated sufficiently
into U.S. legal and medical history.
Here my analysis concentrates on just two areas of controversy that illuminate
the developments of the later twentieth century: decisions at the end
of life, and decisions around reproduction. In both cases, American rights to
“privacy,” whether we mean privacy of information, decisions about medical
treatment or non-treatment, or the privacy of reproductive and sexual
practices, have increased due to the advocacy of social movements. Backlash
movements with highly organized legal and political arms have also eroded
those rights.
End of Life
Since the 1970s, the legal system has been directly involved in scrutinizing
medical practices at the end of life. Standard interpretation finds that the law
intruded into medical practice and the relationship between physician and
patient, but in fact the judiciary was invited in by physicians and hospitals
who imagined their vulnerability to prosecution. As death and dying moved
out of the home and into the hospital and as new technologies extended the
process of dying through artificial ventilation and feeding systems, growing
numbers of Americans began to fear the process of death and the prospect
of “turning into a vegetable” and spending years in a hospital or nursing home
bed. In unknown numbers of instances, physicians and families decided
together to turn off the machines that kept the dying breathing, but did
not cure or bring the person back to an active, conscious life. They allowed
people to die. (No doubt in most of these cases the families tended to be
privileged with medical insurance and a relationship with physicians rooted
in a common racial or class background. For others, the problem was not
discontinuing unwanted treatments, but obtaining adequate medical care in
the first place.) In other cases, physicians or hospital administrators refused
and some families sued. In 1975, press and television coverage of the case of
Karen Ann Quinlan, a 21-year-old New Jersey woman who was comatose, in
a persistent vegetative state, and attached to a respirator, first brought these
problems to national attention and debate. For reasons that were unclear,
Quinlan had stopped breathing and lapsed into a coma; after several months
of daily visiting and hope for improvement, Quinlan’s family met with her
doctors and asked that the ventilator be removed so that she could return
to her “natural state” and be allowed to die. (Interestingly, the Catholic
family had turned to their priest for comfort and had been assured that
allowing a natural death did not violate Catholic teachings.) The doctors and
hospital agreed; Quinlan’s father signed paperwork to protect the medical
professionals. The next day the doctor refused to carry out the family’s
wishes. When the case eventually went to court, the doctor and hospital
opposed allowing the father to serve as Karen Quinlan’s guardian, argued
that removing a ventilator violated medical standards, and equated doing
so with homicide. The New Jersey Supreme Court, In the Matter of Quinlan
(1976), affirmed Joseph Quinlan’s appointment as Karen Quinlan’s guardian
and also affirmed that removing a ventilator and feeding tube – when she
could not be cured or returned to cognitive life and the treatments were
only “prolong[ing] her inevitable, slow deterioration and death” – was
protected by the constitutional right of privacy found in Griswold and other cases.
On request of the patient’s family and guardian and with the agreement of a
hospital ethics committee that Quinlan could not emerge from her comatose
state, physicians and hospital could withdraw life support without fear of
civil or criminal penalties. After further delay and resistance, the hospital
finally removed the ventilator after “weaning” Quinlan from the system.
She survived unassisted. According to her mother’s account, Karen Quinlan
and her family suffered another ten years until her eventual death.
The physician and hospital in the Quinlan case feared being held criminally
liable if they acted on the Quinlan family’s request; in their fear, they
provoked a suit. The hospital and doctors wanted advance approval from
prosecutors to do what physicians and hospitals had long done outside the
legal eye. Fearing legal trouble, medicine brought greater legal scrutiny on
itself. Quinlan established that a trusted family member or guardian could
make decisions in the best interest of the patient and that, for medical
professionals, there was a difference between “curing the ill and comforting
and easing the dying.” The case also prompted other legal/medical reforms.
Living wills, in which people explicitly declared their wish to avoid life
support systems if they were in a persistent vegetative state, and registered
“do not resuscitate” orders were developed in order to produce a clear record
of the testator’s wishes in advance of these difficult situations. In light of the
New Jersey court’s expectation that hospitals would have ethics committees
for consultation, hospitals set up such committees.
A Missouri case rigidified the requirements permitting the withdrawal of
life support. In a case similar to Quinlan, the parents of Nancy Beth Cruzan,
who remained in a persistent vegetative state after a 1983 car accident and
resuscitation by paramedics, asked the hospital to end artificial feeding.
When the hospital refused, the parents went to court and won a ruling to
the effect that a person in Nancy Cruzan’s state had a constitutional right
to refuse or end “death prolonging procedures.” On appeal, however, the
Missouri Supreme Court reversed the decision, a reversal affirmed by the
U.S. Supreme Court in Cruzan v. Director, Missouri Department of Health (1990). The divided
U.S. Supreme Court held that the state had passed legislation requiring that
“the incompetent’s wishes as to the withdrawal of treatment be proved by
clear and convincing evidence” and that this was constitutional. In the
Cruzan case, the majority of the court found that a “serious” conversation
with a friend was insufficient to establish a patient’s values and wishes on
which a guardian could act. The state, Cruzan declared, could “decline to
make judgments about the ‘quality’ of life . . . and simply assert an unqualified
interest in the preservation of human life.” The burden of proof on
those seeking to withdraw life support was greater. The dissenting justices
pointed to the rights of privacy, the rights of individuals to assert their
wishes to avoid medical care, and their right to expect that those wishes
would be respected, all of which were undermined by the decision. The family’s
battle continued in Missouri courts, where additional witnesses to their
daughter’s expressed preferences finally convinced the court, and, after
several years of legal battles, “life support” systems were removed.
Cruzan died in 1990.
Cruzan undermined people’s ability to avoid a prolonged dying, but the
case also inspired Congress and the states to provide legal mechanisms to
ensure that such cases need not occur. The federal government required
hospitals to inform patients of their right to make advance directives; states
passed medical proxy laws, so that people could choose who would make
decisions for them if incapacitated, and permitted living wills. As a consequence
of the Cruzan case, those who feared prolonged dying were strongly
encouraged to sign living wills and appoint health care powers of attorney.
Yet only a tiny minority of Americans have taken such steps (approximately
10 percent have living wills).
When advance directives are lacking, guardianship for making decisions
about medical care goes first to the spouse, then adult children, and then
parents. The political effort to prevent people from refusing artificial life
support in order to die continued dramatically in the fall of 2003 in Florida.
The Florida legislature and Governor Jeb Bush undermined the end-of-life
decision-making process, and the legal processes for adjudicating family
disagreements, by intervening in the hotly contested case of Theresa Marie
“Terri” Schiavo. In 1990, Terri Schiavo suffered respiratory and cardiac failure
leading to severe brain damage and dependence upon a feeding tube.
After Schiavo had spent ten years in a persistent vegetative state, the Pinellas
County (Florida) Circuit Court ordered the removal of the feeding tube in
response to the petition of her husband Michael, and according to her
verbally expressed wish. Schiavo’s parents and the right-to-life movement
fought these decisions, demonstrated, and prayed. Republican legislators
passed a law allowing Governor Bush to overturn court orders and require
physicians to reinstall the feeding tube into Schiavo’s body. The governor’s
orders, and the President’s endorsement of them, were widely perceived to be part
of President George W. Bush’s re-election strategy. In the spring of 2005, after
the Florida Supreme Court struck down “Terri’s law” and state and federal
appeals courts upheld the previous decision to remove the feeding tube,
President Bush and Congress intervened to pass legislation giving a federal
court jurisdiction over this one special case with the opportunity to overrule
the decisions of numerous courts. The diagnosis of Schiavo’s condition by
politicians in Washington, D.C., and the refusal to accept the legal system’s
decisions revealed a disregard for medical privacy, patient rights, and professional
ethics, as well as the rule of law and the separation of powers. Every
court that considered the case, however, including the U.S. Supreme Court
on repeated occasions, rejected these political and religious efforts to evade
the legal process and returned the case to the original county court that had
reviewed all of the evidence. That court again ordered the removal of the
feeding tube as requested by Michael Schiavo, and on March 31, 2005, Terri
Schiavo died. Many Americans found the Schiavo case riveting and upsetting.
Many also, conservatives and liberals both, were deeply concerned at
the manipulation of family grief and division at a time of severe medical
crisis for political purposes and to undermine the Constitution. Congress
and the President trampled not only on the rule of law but also on the
founding idea that American democracy included and respected people of
different values and religions. The long-term repercussions of the Schiavo
case are yet to be seen.
A related problem for many patients not only at the end of life but also
following accidents that produce long-term disability concerns the medico-legal
assumptions about who should make decisions with doctors on behalf of an
incapacitated patient. The immediate assumption when patients are incapacitated
and unable to voice their own decisions is that either a spouse or
a parent is in charge, but for many adults this is inappropriate. Unmarried
individuals – notably gay women and men whose relationships are legally
unrecognized and often resented by homophobic family members, hospital
staff, and/or judges – have had their most intimate and trusted partners
excluded from hospital rooms and overruled by hospitals and/or the courts.
A prominent example is the Sharon Kowalski case. From the time of a car
accident in 1983 and over the next ten years, Kowalski of Minnesota and
her partner, Karen Thompson, struggled in the courts and through public
protest to gain recognition of Kowalski’s right to have her lesbian partner
visit and act as her medical advocate and Kowalski’s own right to make decisions
for herself as a disabled woman, even if she could not speak. When
Kowalski’s father was made her legal guardian, he prohibited visits by
Thompson. Five years after the accident, the courts recognized that Kowalski
could speak her mind by typing, but it was several more years before
legal guardianship for her care was granted to her partner. Civil rights, gay
and lesbian rights, and disability rights groups all celebrated the Kowalski-
Thompson case for securing recognition of the rights of the disabled and
lesbian or gay couples in the medical arena. A health care power of attorney in
advance of unexpected accidents, comas, and major medical decisions would
ensure that, when people suddenly become incapacitated patients, their self-selected,
entrusted advocates would be listened to by the medical system. Most
people, however, have not completed such legal documents. When they
do, cases like Kowalski’s suggest they will still need social movements and
attorneys to back up their wishes.
It is striking that the most highly publicized cases of intense struggles
among parents, partners, hospitals, and the courts over decision-making
power all involve young, injured white women. One may speculate whether
American culture is peculiarly attached to young white women whom it
imagines as “sleeping beauties” – a phrase used repeatedly in the Quinlan
media coverage – princesses, and daughters whose lives are threatened and
who need to be rescued.
The formal commitment to patient rights of autonomy and privacy
strengthened in the last thirty years of the twentieth century. In their exercise,
however, the rights were highly contested. Posters on hospital walls
announced patient rights (and responsibilities), patients signed detailed
informed consent forms repeatedly, and federal laws assured patient privacy.
In response to public anxiety about insurance companies’ information
sharing and denial of coverage, the U.S. Department of Health and Human
Services developed privacy protections as part of HIPAA (Health Insurance
Portability and Accountability Act, 1996). But here, in fact, is an
example of administrative law with enormous and unanticipated effects.
In effect as of April 2003, federal privacy standards promised to prevent
unknown individuals, insurance companies, or other businesses from gaining
unauthorized access to patients’ personal medical records. However,
unclear about the precise intentions and requirements of the new standards,
hospitals cited federal privacy standards in refusing to allow visitors
or to send medical information to third parties as specifically requested
by patients. Such actions revealed the medical system’s commitment to
hospital control of patient information and distrust of patients’ desires
and decisions about their own medical care and medical information. (For
instance, the domestic adoption process has been one site for information
obstructions of this nature, both when adoptive parents attempt to provide
medical information to agencies and when birth mothers seek to give
adoptive-parents-to-be their own and their child’s medical information in
order to provide appropriate pediatric care).
HIPAA also restricts access to archival and historical records. In the name
of patient privacy, medical materials and information are being removed
and made inaccessible and may be suppressed depending on individual
archivists’ and legal departments’ interpretation of the law, commitment
to historical inquiry, and the funding and political support (or distrust) of
the archive. HIPAA may make it difficult for historians to research many
of the issues discussed in this essay. It is at least debatable whether the
“privacy” being protected in some cases is that of patients or of powerful
institutions and state agencies against the interests of patients and citizens.
Less than a year after HIPAA went into effect, the nation saw unprecedented
federal intrusion into physician practices and patient privacy.
Attorney General John Ashcroft issued demands for patient records from
Planned Parenthood Clinics, hospitals, and physicians for investigation
into possible violations of the new “partial-birth abortion” ban. In 2003,
Congress and the President of the United States had taken the unprecedented
step of prohibiting a specific medical procedure, an action that overturned
the long history of respect for medical autonomy in diagnosis and
therapeutics. That this federal involvement in medical practice occurred in
the arena of reproductive rights is both in keeping with a long history of
state interest in reproduction and a product of a thirty-year-old backlash
against feminism, patients’ rights, and the recognition of constitutional
rights to privacy in sexuality and reproduction.
Reproduction
Attorney General Ashcroft’s 2004 demands for patient records capped years
of legal intrusions in clinical practice and the surveillance of patients, especially
in the reproductive arena. Since the 1980s, hospital personnel and
policies have played a central role in criminalizing patients and in inserting
criminal law into the medical system. Physicians and hospitals have gone
to court to force patients to comply with medical advice or have initiated
state investigation of the patients in their care. Medical, prosecutorial,
and judicial actions have revealed the degree to which many medical and
legal professionals distrust pregnant women and regard them as malevolent
adversaries to the infants to whom they give birth. Individual doctors,
nurses, judges, and police officers have appointed themselves child protectors
in disregard of the pregnant woman/mother’s rights to bodily integrity
and due process and in denial of the truth that mothers – even the most
destitute or addicted – attempt to improve their own health and behaviors
on behalf of their future children.
The women who have been most subject to medical policing during
pregnancy in the late twentieth century tend to be African American, poor,
and drug-addicted – or suspected of illegal drug use because of their color.
No woman, however, has been immune, once she deviates from medical
expectations; even the most elite, white women with education, money,
and health insurance have been threatened with investigation by child
protective services when they rejected recommended tests, examinations,
and procedures for either themselves or their newborns. Most have given
in. The legal power granted public health authorities to protect the public’s
health has been extended to and claimed by individual physicians in clinical
practice; doctors may call in police or agency officials when pregnant women,
mothers, and families attempt to overrule the dominant medical system.
Doctors have shocked pregnant women and their families by turning to
the law to transform medical advice into court orders. Obstetricians have
won court orders to force pregnant women to undergo cesarean sections
against their wishes. In cases like these, the physician’s belief that the baby’s
life is threatened and the tradition of judicial respect for medical judgment
have overruled the woman’s decisions about her body and her pregnancy. (In
several cases women fled and delivered healthy babies elsewhere; in others,
surgery was forcibly performed and both the woman and baby died). The
American College of Obstetricians and Gynecologists has officially rejected the turn to court-ordered
surgeries, yet threats and court orders have persisted. Despite the
medical profession’s complaints about patients’ propensity to sue and the
law’s intrusion into their practices, physicians have often been the ones to call
the law in against their patients. In doing so, physicians and hospitals have
insisted on their power to make decisions for patients and point to potential
malpractice suits while betraying their lack of anxiety about patients suing
for assault or for violations of their autonomy and bodily integrity.
Beginning in the 1980s, state and federal prosecutors broke new ground
as they prosecuted pregnant women for their behaviors and decisions during
pregnancy. Pregnant women who refused a recommended cesarean section
and then delivered a stillborn child have been prosecuted for manslaughter;
others who used alcohol and drugs have been prosecuted for the “delivery”
of illegal narcotics to an “unborn child.” Male reproductive and societal
contributions to ill health, low birth weight, and infant death are denied
and obscured by the focus on the supposed wrongs committed by poor
pregnant women. The most systematic effort to capture and punish pregnant
women occurred in South Carolina at the behest of medical institutions. The
hospital of the Medical University of South Carolina in Charleston contacted
a local prosecutor about what it might do to “assist” in the prosecution of
mothers who used cocaine. In 1989, prosecutors, police, and hospital staff
devised a plan to test pregnant women who entered the hospital for the
presence of illegal drugs in their urine; the hospital would report positive
tests to the prosecutor, and the woman would be arrested for delivery of an
illegal substance to a minor and for child neglect. During delivery, patients
were handcuffed to their beds as criminal suspects and afterward taken
away in chains. All of the women were low-income and all but one of those
arrested were African American. The collection of information did not occur
in a legal setting nor was it done by police or other state officials. Instead,
lab tests ostensibly performed for the patient’s health were collected by health care
providers as evidence. The state then used that evidence to threaten, arrest,
incarcerate, and prosecute pregnant women and to terminate their parental
rights. All told, more than two hundred women tested positive, thirty were
arrested, and two were sentenced to prison. Statewide, seventy to eighty
pregnant women were arrested on these grounds.
The Charleston case may be the most egregious example of the criminalization
of pregnant women and the ways in which legal and medical
authorities together have created racist policies that do not attend to the
health needs of women or children, but instead demean, discriminate, criminalize,
and punish. In 1993, attorneys Lynn Paltrow and Susan Dunn with
the ACLU filed suits on behalf of ten women who had been arrested. The
lower courts upheld the policy and found that it was not discriminatory
in application, but in 2001 the U.S. Supreme Court found it unconstitutional
because it violated the Fourth Amendment’s protections against
unreasonable searches and seizures.
CONCLUSION
The historical relationship between law and medicine in the United States
has been both collaborative and combative. By the opening of the twenty-first
century, the medical profession had achieved a form of the close relationship
with the state that its early-nineteenth-century forerunners had
desired. The police power that authorized public health measures ultimately
contributed to a refashioning of the relationship between doctors
and the law. Through the nineteenth century, much public health regulation
had focused on business practices, property maintenance, or the construction
of public works for sanitation and clean water; individuals became the
objects of scrutiny and quarantine only sporadically during epidemics. From
the late nineteenth century on, as local and federal public health officers
increasingly focused on the individual, individual physicians and medical
institutions were increasingly involved in enforcing public health measures
and identifying those viewed as a danger to the public’s health.
The criminal justice system’s use of the hospital as a site for locating and
identifying suspects and its vision of the body as a source of evidence of
criminal activity simultaneously created a new medico-legal relationship
that put medicine into the service of the criminal justice system, rather
than in the service of the public and their patients. The state’s reliance on
the medical profession and its institutions for the collection of evidence,
information gathering, and investigation for public health and criminal
justice purposes brought both power and problems. The medical profession
enjoyed great political and economic clout, but also saw its own autonomy
undermined and the trust of patients eroded. As medicine became enmeshed
within a state system of medical policing, it became subject to policing
itself.
Medical professionals in their practices often stand at the intersection
of constitutional rights. Because medicine is a site of great personal significance
where the body is invaded and where life and death hang in the
balance, and because the hospital has been used by the public health and
criminal justice systems, the hospital and clinical practices are intimately
tied up with constitutional rights regarding confidentiality, privacy, bodily
integrity, equality, and due process. Understandably, most physicians and
other health care workers do not see themselves as defenders of civil liberties
and patient rights, for those are the responsibilities of attorneys and courts
or activists and social movements. The job of health care providers is to
practice medicine and make people well. Yet, contests over civil liberties
issues cannot be escaped in the medical arena; health care professionals are
not innocent bystanders in the struggles for justice. Unfortunately, some
assume a policing attitude toward their patients as part of their practice of
medicine.
The field of law and medicine deserves our attention for an array of
theoretical reasons – it provides rich avenues for critical analysis of how
the law works in practice and for examination of the complex development
and deployment of power in many guises. The problems and oppressions
that many have experienced at the hands of public health officers or in
hospitals and other health institutions have also been the source of major
achievements in civil liberties. In moments of crisis and pain, ordinary
people have pushed to change the law and the practices of medicine, public
health, and crime control and have articulated the rights of patients to
autonomy and self-determination. They have done so through collective
protests, law breaking, political campaigns, and social movements as well
as individual private suits. As we have seen, litigation has often been the
only path that has brought recognition of the damage done by medical
and health authorities to the rights of patients. The late twentieth century
has seen organized medicine and some elected officials campaign to impose
limits on the rights of patients and citizens to litigate. In the light of
historical evidence underlining the importance of litigation in securing
patients’ rights, their success would clearly place severe limitations on the
public’s capacity to restrain the abuse of power and to force changes in
medical and state practices.
The historical efforts to recognize the rights of patients and citizens in
medical and public health systems and to bring medicine into line with
democratic and egalitarian ideas often included principled and independent
bureaucrats, students, attorneys, physicians, low-level workers, and
others who worked within the institutions and professions of medicine and
law. More research on the people who recognized abuse, discrimination,
and inequality and acted on behalf of civil liberties and patient rights in
conjunction with their work on behalf of health is needed. They provide
examples of medicine and law at their best.
8
the great depression and the new deal
barry cushman
The New Deal era was the principal watershed in twentieth-century American
constitutional development. The profound economic crisis that gripped
the nation during the Great Depression inspired a period of extraordinary
legislative ferment, generating a series of strikingly wide-ranging and far-reaching
changes in the American legal and constitutional order. On the
eve of World War II, Congress would wield an unprecedented degree of
authority over the nation’s peacetime economy. The national legislature
underwrote a voluminous array of spending programs to relieve distress
and to stimulate economic growth, while at the same time it enacted a
remarkable succession of regulatory programs designed to restore health
to the economy. The administration of these new programs called for the
creation of new federal agencies and the significant expansion of existing
ones, resulting in an explosive growth in the size and power of the federal
bureaucracy and the full flowering of the administrative state.
At the apex of this burgeoning fiscal and administrative apparatus stood
the office of the presidency, supported by a significantly expanded staff
and invested with enhanced authority over agencies within the executive
branch. Just as the chief executive emerged from the Depression with greater
authority over the administration of domestic affairs, so the commander-in-chief
would enter World War II with greater discretion over the conduct of
American foreign policy. Meanwhile, the federal judiciary receded from its
traditional role as the umpire of the federal system and the guardian of vested
rights, but would become increasingly assertive in the vindication of civil
rights and civil liberties, such as freedom of speech and rights of the accused.
In upholding new programs of redistributive and protective legislation
that might once have been condemned as “special” or “partial” legislation,
the Supreme Court’s evolving jurisprudence cleared the way for a style of
national politics frankly centered on a model of interest-group pluralism.
The signature transformation of the New Deal era was the dramatic
growth in the size, power, and responsibility of the federal government. A
deepening conviction that only the national government could effectively
ameliorate the protracted distress provided a powerful impetus to the centripetal
forces of regulatory and fiscal centralization. President Franklin
Roosevelt’s New Deal embodied a striking expansion of both the scope of
federal authority and the ambition of its exercise. Federal spending programs
would undertake to provide economic security for all citizens; Congress
would extend its regulatory influence over areas previously controlled principally,
when at all, by state governments. Banking, securities markets,
agriculture, energy, industrial labor relations, and much more would fall
under the authority of federal officers.
Though the preemption of state regulation in such critical areas significantly
reduced the formal authority of state governments, the states were
by no means subsumed into a unitary national state. Local officials were
often granted substantial discretion in the administration of federal grants-in-aid,
while states retained most of their traditional authority over the
content of vast domains of law regulating such subjects as property, contracts
and commercial transactions, business associations, torts, crime, and
the family. Similarly, the Supreme Court lifted long-standing impediments
to state regulation as it retired economic substantive due process and relaxed
restraints imposed by the Contract and Dormant Commerce Clauses of the
Federal Constitution.
Nevertheless, the concentration of an unprecedented degree of authority
and responsibility in a national regulatory and welfare state constituted a
revolution in the American federal system. The breathtakingly novel reach
of federal economic regulation, its effect on vested property rights, and
the scope of discretionary authority confided to the executive branch in its
administration each stretched established understandings of constitutional
limitation, sometimes to the breaking point. To be sure, existing constitutional
doctrine provided a comfortable foundation for the many New Deal
spending programs designed to relieve poverty. Yet a number of Congressional
measures rested on understandings of federal regulatory powers that
were unprecedented in their breadth, while several state and federal statutes
curtailed private economic rights in a manner raising serious constitutional
questions under the Contract Clause and the Due Process Clauses of the
Fifth and Fourteenth Amendments. The fate of state and federal legislation
addressed to the economic havoc wrought by the Depression accordingly
turned on two critical variables: the capacity of lawmakers to accommodate
transformative statutory initiatives within the structure of contemporary
doctrine and the inclination of Supreme Court justices to relax or abandon
constitutional constraints on federal and state regulatory power.
The mechanisms through which the New Deal order ultimately secured
the Court’s constitutional sanction are readily discernible. The conditions of
the Great Depression and the inadequacy of Republican efforts to deal with
them cemented the electoral influence of a political coalition that would
entrust the presidency and both Houses of Congress to the Democratic Party
from 1933 forward. The sustained dominance of that coalition ensured that
the demand for national action to grapple with the crisis would be both
powerful and persistent. That persistence would in turn have two important
ramifications. First, in those instances in which the justices held that an
initial legislative attempt to address a particular problem did not pass constitutional
muster, the New Deal Congress would have the opportunity to
reformulate the program to achieve the desired end through means consistent
with prevailing constitutional doctrine. Throughout the 1930s, New
Dealers would repeatedly employ this adaptive strategy with remarkable
success.
The second consequence grew out of Franklin Roosevelt’s repeated reelection
to the presidency. Facing a federal judiciary bearing the imprint of
twelve years of Republican ascendancy in presidential politics, throughout
his tenure Roosevelt steadily filled lower court vacancies with loyal Democrats.
Yet, neither death nor resignation provided a frustrated Roosevelt
with an opportunity to appoint a Supreme Court justice during his first
term. Though President Hoover’s three appointments to the Court had created
a majority seemingly more receptive to government regulation than
the Taft Court had been, that majority was fragile and by no means fully
committed to the constitutional views of the administration. Between 1937
and 1941, however, President Roosevelt would elevate seven New Dealers
to life tenure on the nation’s highest court. Fully reflecting the constitutional
sensibilities undergirding the New Deal vision of government,
these appointees would in turn transform the nation’s constitutional law to
accommodate regulatory innovations that their judicial predecessors could
not have approved. The continued electoral success of Democrats even after
Roosevelt’s death would enable the party further to entrench its position in
the federal judiciary, so that New Deal constitutionalism would remain a
powerful orthodoxy even as its sponsoring political coalition began to fray.
The balance of this chapter consists of five topical parts and a conclusion.
Part I describes the economic conditions of the Great Depression and
details the executive and legislative responses produced under the Hoover
and Roosevelt administrations. Part II examines contemporary controversies
over the growth of federal executive authority and the elaboration of
the administrative state. Part III documents the relaxation of constraints on
economic regulation imposed by the Fifth and Fourteenth Amendments and
the Contract Clause. Part IV analyzes various manifestations of the revolution
in constitutional federalism. Part V explores the growth of protections
for civil rights, civil liberties, and democratic processes.
I. THE GREAT DEPRESSION: CONDITIONS AND RESPONSES
At the close of a decade celebrated for its prosperity, the American economy
underwent a profound contraction whose baleful effects were remarkable
both for their duration and their intensity. Though the Depression
would linger throughout the 1930s until dispatched by the stimulus of
wartime production, the precipitous economic decline of its first four years
was particularly staggering. Between 1929 and 1933 national income was
cut in half. Manufacturing output, retail sales volume, and wholesale and
commodity prices all suffered devastating reductions. In 1930 alone a
record 26,355 businesses failed, while 1931 recorded some 65,000 cases
in bankruptcy. Between September 1929 and March 1933 the aggregate
value of all domestic stocks listed on the New York Stock Exchange declined
by 80 percent, from approximately $80 billion to about $16 billion. During
the same period, farm values declined by a third, and foreign trade
was dramatically curtailed, with both exports and imports decreasing by
nearly 70 percent. By 1933 the ranks of the unemployed had increased
to nearly thirteen million workers, leaving one-quarter of the American
workforce idle. Even those who survived the epidemic of layoffs saw their
wages decline and their working hours reduced. At the same time more
than 5,000 banks collapsed – nearly 2,300 in 1931 alone – decimating
more than nine million savings accounts. Though the business cycle had
produced recurrent periods of boom and bust throughout American history,
such punishing economic collapse was unprecedented.
The Hoover administration was not entirely inert in the face of this crisis.
Throughout the 1920s Congress had grappled unsuccessfully with the
seemingly intractable problem of depressed prices resulting from the overproduction
of farm commodities. Early in his term President Hoover called
Congress into special session to enact the Agricultural Marketing Act of
1929. This statute established a Federal Farm Board, which was authorized
to make loans from a $500 million revolving fund to farmer-owned commodity
stabilization corporations and agricultural marketing associations.
It was hoped that, by using the funds to purchase and store surplus farm
produce and to regulate its flow to terminal markets, these private entities
might increase the demand for agricultural commodities and thereby raise
the prices at which they traded. Similarly, Hoover oversaw the creation of
the federal Reconstruction Finance Corporation, a temporary agency authorized
to extend billions of dollars in loans to prevent the economic collapse
of railroads, insurance companies, banks, and other financial institutions.
Yet, Hoover’s valorization of individual initiative, his preference for economic
solutions grounded in voluntary cooperation in the private sector
rather than government regulation, and his aversion to concentrations of
political and economic power led him to resist far-reaching proposals for
federal intervention of the sort embraced by his successor. For example,
his program contained no proposal for legislative reform of the national
securities markets. He opposed the delegation of government power to
private interests, rejecting proposals from business interests calling for a
suspension of antitrust laws that would enable them to establish federally
administered cartels. In 1931, Hoover vetoed a bill that would have authorized
government-owned electricity and nitrogen plants built at Muscle
Shoals during World War I to sell power and fertilizer in competition with
privately owned concerns. And although he supported such initiatives as
the expansion of credit, tax relief, and modest appropriations to support
public works, President Hoover was reluctant to heed requests for federal
relief expenditures to aid the millions of the unemployed.
Hoover’s limited and ineffectual responses to the crisis left him vulnerable
to his Democratic challenger in 1932, the affable governor of New
York. Franklin Roosevelt’s platform did call for greater federal relief to
the unemployed, but in most other respects it differed little from the economic
policy espoused in its Republican counterpart. Though one may in
retrospect detect germinal hints of portions of the New Deal in some of
Roosevelt’s campaign speeches, for the most part he was content to rely on
vague references to the need for bold experimentation, “imaginative and
purposeful planning,” and greater solicitude for “the forgotten man.” In
November Roosevelt and the Democrats coasted to victory in a landslide.
Shortly following his inauguration in March 1933, campaign generalities
began to take shape as specific policy proposals. By the time Roosevelt
assumed the presidency, officials in thirty-eight states had closed their banks
in the face of a growing spate of bank failures; banking operations had been
curtailed in the remaining ten states. Meanwhile, the monetary system
was increasingly roiled by nervous hoarding of gold and currency and a
troubling flight of gold to foreign markets. The president immediately
initiated a series of emergency measures to staunch the hemorrhaging,
proclaiming a nationwide “bank holiday” and proscribing further exports
of gold. At the same time Roosevelt called into special session the new
Congress, which quickly ratified his actions with the Emergency Banking
Act of 1933. The statute provided for the reopening of solvent banks under
executive supervision, confirmed presidential control over transactions in
gold, and required that those holding gold bullion, gold coin, and gold
certificates surrender them to the Treasury in exchange for new Federal
Reserve notes. Once federal control over the nation’s gold supply had been
rendered more secure, Roosevelt would undertake to arrest the deflationary
spiral by significantly reducing the gold content of the dollar.
Having eased the banking crisis, the Roosevelt administration next
placed before the Congress an ambitious program of prescriptions for relief,
recovery, and reform. Relief measures took a variety of forms. The Federal
Emergency Relief Administration distributed direct public assistance
through state and local agencies. Other federal programs – such as the shortlived
Civil Works Administration and later the Works Progress Administration
and the Public Works Administration – employed the jobless in a
variety of public works and improvement projects. Similarly, the Civilian
Conservation Corps put unemployed youth to work on reforestation projects
in the national forests. Meanwhile, the Farm Security Administration
offered low-interest loans to distressed tenant farmers and sharecroppers,
just as the Farm Credit Administration and the Home Owners’ Loan
Corporation underwrote a massive refinancing of defaulting farm and home
mortgages. The swelling federal budget tells the story of this remarkable
proliferation of federal programs and grants-in-aid to support relief and
public employment: between 1929 and 1939 federal expenditures mushroomed
from $2.6 billion to $9 billion. The relief of persistent poverty
by the federal government proved enormously popular with the voters and
contributed in no small part to the remarkable electoral successes of the
New Deal coalition.
Roosevelt’s recovery program could not lay claim to comparable laurels.
Its two principal pillars were the Agricultural Adjustment Act (AAA) of
1933 and the National Industrial Recovery Act (NIRA). The AAA sought
to raise farm prices not by purchasing the agricultural surplus and either
storing it or dumping it abroad, but instead by decreasing production. It
imposed a tax on the processors of specified agricultural commodities, the
proceeds of which were used to pay farmers who contracted with the Department
of Agriculture to reduce their production of those commodities. The
NIRA similarly sought to stabilize plummeting prices by limiting industrial
output. Suspending enforcement of the antitrust laws, the program
provided for an unprecedented degree of industrial self-regulation. Acting
under the sanction of the newly created National Recovery Administration
(NRA), business representatives were authorized to promulgate elaborate
and legally enforceable “codes of fair competition” to govern their respective
industries, including the prescription of minimum wages and maximum
working hours. Section 7(a) of the NIRA guaranteed the rights of workers
to organize and bargain collectively with their employers. Dogged by
vocal criticism and widespread unpopularity, each of these short-lived programs
would be declared unconstitutional before Roosevelt’s first term was
finished.
More enduring were the New Deal’s contributions to economic reform.
Restoration of confidence in the nation’s troubled financial sector ranked
high among the Roosevelt administration’s priorities. The Glass-Steagall
Banking Act of 1933 mandated the separation of commercial banking
from investment banking, thereby preventing bankers from using ordinary
deposits to underwrite securities issues or to speculate in securities markets.
This act also created the Federal Deposit Insurance Corporation, which
helped revive flagging faith in the banking system by providing insurance
on small bank accounts. The Securities Act of 1933 required that all new
securities offered to the public through either the facilities of interstate
commerce or the postal service first be registered with the Federal Trade
Commission (FTC). All registration statements were required to disclose
detailed financial information concerning the securities and to be certified
by an independent accountant. Those failing truthfully to disclose the
required information were subjected to civil and criminal penalties, and
the FTC was granted considerable regulatory authority over the issuance of
new securities. The Securities Exchange Act of 1934 created the Securities
and Exchange Commission (SEC) and transferred to the SEC the authority
confided to the FTC under the 1933 Act. The 1934 act extended the disclosure
requirements of the 1933 act to all companies listing securities on a
national exchange, requiring them to file detailed annual financial reports
with the SEC. The 1934 act further authorized the SEC to regulate the
stock exchanges, to police abuses such as stock market manipulation, and
to prohibit fraud in connection with secondary market transactions. Finally,
the Public Utility Holding Company Act of 1935 required holding companies
conducting interstate business to register with the SEC and subjected
the utilities to the agency’s supervision in matters pertaining to corporate
structure and dissolution. The Federal Power Commission was authorized
to regulate the rates and business practices of such companies.
The Social Security Act of 1935 brought two major innovations in social
insurance. One title of the act created a federal program of compulsory, contributory
old age and survivors insurance financed by payroll taxes levied
on employers and employees, with the first benefits payable in 1942. A second
title established a cooperative federal-state system of unemployment
insurance: employers paying into a qualifying state unemployment compensation
fund would receive a credit against a tax otherwise payable to a
comparable federal fund. By mid-1937 every state and territory had adopted
a qualifying statute. Other provisions of the act authorized federal grants-in-
aid to states for aid to the blind and disabled, to dependent children,
and to those among the needy aged – such as domestic and agricultural
workers – who were ineligible to participate in the old age insurance program.
The Social Security Act further authorized grants to state programs
for the promotion of public health, for maternal and child health and welfare,
and for vocational rehabilitation. A Social Security Board was vested
with authority to administer these programs.
The New Deal similarly produced two enduring reforms in the field of
labor relations. The National Labor Relations Act of 1935 (NLRA) sought
to prevent or resolve labor disputes threatening to disrupt interstate commerce.
It guaranteed to selected industrial workers the rights to organize and
to bargain collectively with their employers through their elected representatives
and prohibited employer interference with those rights. A National
Labor Relations Board modeled on the Federal Trade Commission was created
to administer the act’s organization and representation provisions; it
was empowered to issue orders enforceable in federal court to cease and desist
in the commission of “unfair labor practices,” such as cultivating employer-sponsored
“company unions” and discouraging union membership through
discrimination in hiring, discharge, or terms of employment. Three years
later, the Fair Labor Standards Act of 1938 prohibited the interstate shipment
of selected goods manufactured by children or by workers employed
at wages below or for hours exceeding federally prescribed standards.
The reform efforts of the New Deal Congress did not stop there. Legislation
establishing the Tennessee Valley Authority, for example, went well
beyond the earlier Muscle Shoals bills envisioning government manufacture
and sale of electricity and fertilizer. The act further authorized a massive
regional development project involving the construction of new dams and
public power plants, as well as programs for flood control, reforestation, and
the prevention of soil erosion. By 1941 the Rural Electrification Administration
had increased from ten to forty the percentage of American farms
with electric power. The U.S. Housing Authority and the Federal Housing
Administration underwrote the construction and rehabilitation of low- and
middle-income housing. The year 1938 saw significant revisions to
existing federal legislation concerning both bankruptcy and food and drug
regulation. And under the leadership of Commissioner of Indian Affairs
John Collier, the Roosevelt administration inaugurated an “Indian New
Deal” for Native Americans. Collier discontinued existing federal policies
restricting enjoyment of civil liberties, such as the freedoms of speech and
religion and the right to travel, and extended criminal procedure protections
of the Bill of Rights to proceedings in Courts of Indian Offenses.
In 1934 Collier persuaded Congress to enact the Indian Reorganization
Act, which abolished the policy of land allotment, authorized a substantial
measure of tribal self-government, and established funds to support the
education of Native Americans and to promote economic development on
Indian reservations.
The New Deal was thus breathtaking in scope and freewheeling in style.
The product of pressure from disparate elements within the Democratic
coalition, the Roosevelt administration’s program was grounded in no single
coherent or systemic theory. Roosevelt himself was a pragmatist who once
elusively described his ideological commitments as those of “a Christian
and a Democrat,” and his administration produced policies that occasionally
conflicted in their objectives and effects. Whereas the relief program sought
to alleviate widespread conditions of want, for example, the AAA aimed
to raise the price of food through enforced scarcity. Similarly, the administration’s
recovery efforts chafed against the Social Security Act’s withdrawal
of capital from the economy through payroll taxes. Yet, Democratic
control of the White House and both houses of Congress offered a much-anticipated
chance to implement a long-frustrated progressive agenda for
reform, while the exigencies of the moment nurtured an experimental temperament
congenial to young lawyers reared on sociological jurisprudence
and legal realism at institutions like Harvard, Yale, and Columbia.
Those young lawyers would be central to significant developments in the
practice of law and in the composition of the American bar. Although the
economic contraction reduced the demand for lawyers engaged in private
practice, the New Deal offered fresh and exciting possibilities for public
service in Washington. Many graduates of elite law schools who might
earlier have heeded the call of Wall Street were now drawn to the nation’s
capital by the appeal of Roosevelt’s crusade and the attractions of power
and a steady paycheck. Jewish and Catholic lawyers facing discrimination
in private employment were particular beneficiaries of the expanded legal
opportunities centered in the Justice Department and the alphabet agencies.
At the same time the profuse generation of new federal law created
significant new opportunities for private sector specialization in burgeoning
areas, such as administrative law and labor law – opportunities of which
many New Deal lawyers would soon avail themselves.
Several of the New Deal’s legal innovations presented serious issues concerning
the scope of federal power, the separation of powers, and constitutional
protections for property rights. Although many New Deal initiatives
would never face judicial review, challenges to some of the central features
of the programs for recovery and reform would produce landmark decisions
in the Supreme Court. Several of these decisions were unanimous in
upholding or invalidating Congressional or executive action. Other important
cases were decided by a closely divided Court. It is always hazardous
to offer general characterizations of a justice’s jurisprudence, as the complexity
of a jurist’s record so often confounds stereotypic assessments. Yet,
of the Taft Court veterans the justices most likely to regard elements of the
New Deal as constitutionally problematic were the so-called conservative
Four Horsemen: Willis Van Devanter, James Clark McReynolds, George
Sutherland, and Pierce Butler. Their more liberal Taft Court colleagues –
Oliver Wendell Holmes, Louis Brandeis, and Harlan Fiske Stone – by contrast,
had shown somewhat greater receptivity to state and federal economic
regulation.
Between 1930 and 1932 President Hoover appointed three justices to
the Court. In 1932 he replaced Holmes with Benjamin Cardozo, who shared
many of the jurisprudential views of his predecessor. The outcomes in several
closely divided cases would therefore be determined by the votes of
Hoover’s 1930 appointments of Charles Evans Hughes to replace Taft and
Owen Roberts to take the seat previously occupied by Edward Terry Sanford.
When New Dealers could draft statutes, select test cases, and craft arguments
securing the support of these two constitutional moderates, they were
virtually certain of victory. It would take some time and bitter experience,
however, before reformers in Congress and the administration were able to
formulate and execute consistently successful legal strategies.
II. EXECUTIVE AUTHORITY AND THE ADMINISTRATIVE STATE
The persistent economic crisis besetting the country in the 1930s consolidated
the popular conviction that an unregulated free market guided solely
by the invisible hand of private interest could lead only to grief. The Roosevelt
administration insisted that the countervailing power of government,
administered by disinterested expert regulators, was necessary to discipline
the market and stabilize an economy that “economic royalists” had left in
tatters. The result was a stunning expansion of administrative authority
both within and independent of the executive branch.
Agency government was by no means a novelty in 1933. Congress had
established the Interstate Commerce Commission (ICC) in 1887, and during
the half-century that followed the federal legislature had enacted a
series of regulatory statutes authorizing administrative bodies to superintend
a variety of activities and enterprises. Some of these statutes were
administered by independent regulatory agencies; others delegated
new responsibilities to specific cabinet departments. The explosion of federal
administrative authority inaugurated by the Roosevelt administration
and the New Deal Congress was nevertheless unprecedented both for the
number of agencies created and the scope of regulatory authority conferred.
The Depression decade witnessed the creation of several new independent
commissions: the Securities and Exchange Commission, the Federal Communications
Commission, the National Labor Relations Board, the U.S.
Maritime Commission, and the Civil Aeronautics Authority (transferred in
1940 to the Department of Commerce). To regulate prices and trade practices
in the troubled coal industry, statutes enacted in 1935 and 1937 each
created a National Bituminous Coal Commission, whose brief and turbulent
life ended in 1939 when an executive order assigned its functions to the
Department of the Interior. Still other existing commissions saw their jurisdictions
enlarged or their powers enhanced. The Federal Power Commission,
which had been reorganized in 1930, was given expanded responsibilities
under the Federal Power Act of 1935 and the Natural Gas Act of 1938. The
Motor Carrier Act of 1935 gave the ICC regulatory authority over the interstate
trucking industry. The New Deal created a vast new federal bureaucracy
with extensive administrative authority over a multitude of activities
that had previously been regulated by state and local government or not
at all.
This dramatic expansion of federal administrative authority promised
to raise numerous constitutional controversies centered on questions of
due process, federalism, and the separation of powers. With respect to the
last of these issues, agency authority received a warmer reception from
the justices than many might have anticipated. In 1937 the President’s
Committee on Administrative Management, also known as the Brownlow
Committee, would denounce independent federal commissions as comprising
“a headless ‘fourth branch’ of the Government, a haphazard deposit of
irresponsible agencies and uncoordinated powers” doing “violence to the
basic theory of the American Constitution that there should be three major
branches of the Government and only three.”1 By contrast, advocates for
the administrative state such as former SEC Chairman and Harvard Law
School Dean James Landis valorized the specialized expertise and political
independence of agency officials, dismissing “the traditional tripartite
theory of government organization” as talismanic “political conceptualism.”2
Yet on the eve of the New Deal the Court had fortuitously secured
the constitutional footing of the administrative state with its decision in
Crowell v. Benson (1932). In upholding the Congressional delegation to
a deputy commissioner of authority to adjudicate workers’ compensation
claims filed by maritime employees, the Court approved conferral of broad
fact-finding and adjudicative authority on administrative agencies as consistent
with the requirements of both due process and the separation of
powers. Administrative agencies, the Hughes Court justices recognized,
were necessitated “by the increasing complexities of our modern business
and political affairs.”3 Though judicial review of agency action remained
essential to preserve constitutional limitations and to safeguard constitutional
liberties, ordinary administrative findings would enjoy the deference
traditionally accorded to jury verdicts. And although a narrow majority of
the Court would continue to insist that Congress could not make agency
findings of “jurisdictional” facts final, Justice Van Devanter’s retirement at
the conclusion of the 1936 term would herald the triumph of the minority’s
more deferential position.
1 President’s Committee on Administrative Management, Administrative Management in
the Government of the United States (Washington, DC, 1937), 36.
2 James M. Landis, The Administrative Process (New Haven, CT, 1938), 12.
3 Jones v. Securities & Exchange Commission, 298 U.S. 1, 24 (1936).
Perhaps the most significant safeguard of the political independence of
the regulatory commissions came in an opinion that was widely perceived
as a reproach to the president. Humphrey’s Executor v. United States (1935)
arose from President Roosevelt’s removal of a Hoover appointee to the FTC without
cause, notwithstanding provisions of the Federal Trade Commission
Act limiting presidential removal of commissioners to instances of inefficiency,
neglect of duty, or malfeasance in office. The Court affirmed the view
announced in Myers v. United States (1926) that the president enjoyed sole
and illimitable power to remove “purely executive officers” such as postmasters.
Yet notwithstanding obiter dicta in Myers that appeared to suggest the
contrary, a unanimous Court held that, with respect to independent agencies
exercising legislative and judicial functions, Congress might constitutionally
restrict the president’s removal power as it had in the Federal Trade
Commission Act.
At the same time, however, particular exercises of agency authority could
still provoke strong judicial reactions. While sustaining the registration
requirements imposed by the Securities Act of 1933 and the Public Utility
Holding Company Act of 1935, for example, the Court nevertheless
denounced the SEC’s refusal to permit withdrawal of a registration statement
allegedly containing material misrepresentations and quashed its subpoena
of the withdrawing registrant’s testimony and business records. The
majority castigated the Commission for unauthorized appropriation and
arbitrary, autocratic exercise of power, encroaching on fundamental liberties
in a manner reminiscent of “the intolerable abuses of the Star Chamber.”
The three dissenting justices, who found “hyperbole in the sanguinary
simile,” maintained that the majority’s ruling would “invite the cunning
and unscrupulous to gamble with detection,” knowing that they could
evade investigation and punishment by the simple expedient of a timely
withdrawal. Thus, wrote Justice Cardozo, might the act and its sanctions
“become the sport of clever knaves.”4
The Court was by no means alone in its anxiety over excessive agency
discretion. The American Bar Association’s Special Committee on Administrative
Law, chaired by former Harvard Law dean Roscoe Pound, was a
persistent critic of what Pound viewed as growing “administrative absolutism.”5
The 1938 Pound Report’s allusion to the looming totalitarian
threat across the Atlantic found a receptive ear in Congress, which the
following year took up a bill to promulgate a uniform code of procedure
for federal agencies, formalizing their internal processes; separating their
legislative, prosecutorial, and adjudicative functions; and expanding judicial
review of their decisions. Though passed by both houses of Congress
in 1940, the Walter-Logan Bill was vetoed by President Roosevelt. Yet the
bill’s ambition to constrain administrative discretion would persist. It was
embraced in moderated form in the “minority bill” proposed in 1941 by
the Attorney General’s Committee on Administrative Procedure, which in
turn provided the blueprint for the Administrative Procedure Act of 1946.
4 298 U.S. 1, at 28, 32–33.
5 Report of the Special Committee on Administrative Law, 1938 A.B.A. Annual Report 331, 343 (1938).
Though members of Congress were anxious to see federal agencies subjected
to greater control, they were uncomfortable entrusting that task
to the president. Roosevelt’s veto of the Walter-Logan Bill followed on
the heels of a bruising political battle over his proposal to reorganize the
executive department. In 1937 Roosevelt requested that Congress embody
in legislation the Brownlow Committee’s recommendation that the president
be granted authority to bring under greater presidential control more
than one hundred federal administrative bodies, including independent
regulatory commissions, by consolidating and merging them into existing
executive departments. Roosevelt publicly denied charges of attempted
executive aggrandizement, asserting that the measure was necessary for
effective management and coordination of the activities of bodies charged
by Congress with the administration of federal law. This admonition went
unheeded in the House, which rebuffed the President’s request in 1938.
Congress did enact an executive reorganization bill granting the president
much weaker authority in 1939, but at the same time sought to restrain
the power of the executive branch by restricting the political activities of
its employees. Concerns among Republicans and conservative Democrats
that federal relief officials had improperly used their positions to influence
voting behavior prompted Congress to enact the Hatch Act of 1939, which
prohibited lower-level executive employees from taking an active part in
any political campaign.
The central separation-of-powers issues confronting the Hughes Court
concerned the scope of Congressional power to delegate legislative authority.
Previous decisions had identified limits on the authority of Congress
to confer legislative power on the executive branch, but never before had
the Court held that a statute failed to satisfy those limiting criteria. That
would change in early 1935, when two oil companies challenged the constitutionality
of section 9(c) of the National Industrial Recovery Act in
Panama Refining Co. v. Ryan (1935). In response to price destabilization in
the petroleum industry brought on by a frenzy of wildcat drilling in the
East Texas oil fields, Congress had authorized the president to prohibit the
interstate shipment of “contraband” or “hot” oil produced in violation of
quotas imposed by the state of production. The president had announced
such a prohibition by executive order, delegated to the Secretary of the Interior
authority to promulgate appropriate rules and regulations, and approved
a Code of Fair Competition for the petroleum industry. An 8–1 majority
found that section 9(c) transgressed previously latent limitations on Congressional
delegation. That section, objected Chief Justice Hughes, offered
the president no guidance concerning the circumstances under which he
was to prohibit interstate transportation of hot oil. Rather than establishing
a policy or standard to govern the president’s course, Congress had instead
conferred on him an unlimited legislative authority.
The Panama Refining decision cast a pall of doubt over the constitutionality
of the broader recovery program, and the Court let the other shoe drop
in Schechter Poultry Corp. v. United States (1935), the famous “sick chicken”
case. Schechter involved the conviction of a kosher slaughtering concern in
Brooklyn for violation of various provisions of the Live Poultry Code promulgated
pursuant to section 3 of the NIRA. That section authorized the
president to prescribe codes of fair competition to govern various trades
and industries and to approve codes proposed by trade and industry representatives.
The president was further authorized to provide exceptions and
exemptions from the provisions of the codes where in his sole discretion
he deemed it necessary to accomplish the policy of promoting industrial
recovery.
A unanimous Court condemned this unprecedented delegation of legislative
authority to the executive. Section 3, wrote Chief Justice Hughes,
prescribed neither rules of conduct nor any meaningful standard to guide the
exercise of the president’s “virtually unfettered” discretion to prescribe and
approve codes. Congress might authorize the executive branch to promulgate
subordinate legal rules, so long as the legislation established standards
sufficient to guide and confine the discretion of the executive in carrying out
the declared legislative policy. But Congress could not alienate the essential
legislative functions with which it was vested. Even Justice Cardozo,
who had dissented alone in Panama Refining, would not defend section 3.
Its delegation of legislative power, he observed, was “not canalized within
banks that keep it from overflowing. It is unconfined and vagrant.” The
president had been granted “a roving commission to inquire into evils and
upon discovery correct them.” This, Cardozo exclaimed, was “delegation
running riot.”6
6 295 U.S. 495, at 542, 551, 553.
Following the Schechter decision Congress enacted the Bituminous Coal
Conservation Act of 1935, also known as the Guffey Coal Act. Seeking to
impose order on a chaotic industry plagued by cutthroat competition, the
act created a National Bituminous Coal Commission, which it authorized to
regulate the price at which bituminous coal moved in interstate commerce.
A further provision created a labor board to adjudicate labor disputes in the
industry, and it safeguarded the right of coal company employees to organize
and bargain collectively. This act provoked numerous constitutional objections,
among them that one of its provisions unlawfully delegated to a
majority of coal producers the power to fix the hours and wages of the
employees of other coal producers. In Carter v. Carter Coal Co. (1936) the
Court held that this delegation of legislative power, not to a government
official, but to private parties having interests possibly and often actually
adverse to the competitors over whom they would wield such power, was
“clearly arbitrary” and thus a denial of the rights safeguarded by the Fifth
Amendment’s Due Process Clause.
The practical significance of these decisions should not be overestimated.
Few mourned the death of the NIRA, which had been greeted
with widespread noncompliance and weak enforcement. Consumer prices
and unemployment had risen during its tenure, while workers’ wages (especially
those of African American workers) had remained low, as employers
flouted with impunity the wage, hour, and collective bargaining regulations
of the codes. The code-making authorities had been dominated by the
representatives of larger business enterprises, whose efforts to reduce competition
and to restrict production ill served their smaller competitors. The
NIRA’s two-year charter was set to expire within three weeks of the Schechter
decision, and difficulties with the unpopular statute’s administration had
already made any extension doubtful. Moreover, Congress had no difficulty
placing its oil and coal programs on a sound constitutional footing. Within
six weeks of the Panama Refining decision Congress enacted the Connally
Act, which solved the delegation problem by simply prohibiting the interstate
shipment of hot oil. The statute was uniformly sustained in the lower
courts and unanimously upheld by the Court in 1939. Similarly, Congress
enacted a revised Bituminous Coal Conservation Act in early 1937, stripping
out the provisions that had not withstood constitutional scrutiny.
With the objectionable delegation to private producers now removed, the
Court had no difficulty upholding the revised act in 1940. In two cases
decided in 1939 the Agricultural Marketing Agreement Act would again
provoke Roberts, Butler, and McReynolds to press delegation objections.
By the late 1930s, however, a series of Roosevelt appointments to the Court
had consigned the delegation views of these justices to the minority. The
non-delegation doctrine was never a serious obstacle to the accomplishment
of the administration’s domestic policy objectives.
Nor did scruples over legislative delegation impede the president’s conduct
of foreign affairs. In 1934 Congress passed a joint resolution authorizing
the president to prohibit arms sales to Paraguay and Bolivia, except
under such limitations and exceptions as the president might prescribe,
should he find that such a prohibition might contribute to the cessation
of ongoing hostilities between those neighboring countries. Had the
resolution pertained to the internal, domestic affairs of the nation rather
than to international relations, one might have expected the Court to brand
it an unconstitutional delegation of legislative authority insufficiently
confined by a standard. Yet in United States v. Curtiss-Wright Export Corp.
(1936), only Justice McReynolds dissented from Justice Sutherland’s opinion
upholding the president’s action under the resolution. The federal government’s
power to conduct foreign relations, the Court held, was an inherent
feature of sovereignty rather than an enumerated grant. The president
was the sole representative of the nation in the field of international relations,
and the requirements for the successful conduct of those complicated
and delicate relations justified Congress in conferring on the chief executive
a degree of discretion that would be impermissible in the domestic context.
The implications of Curtiss-Wright were elaborated the following year
in United States v. Belmont (1937). The case involved the validity of an
assignment to the United States of Soviet claims against American nationals
by the so-called Litvinov Agreement, a bilateral compact entered into
coincident with the establishment of diplomatic relations between the two
countries in 1933. The Litvinov Agreement took the form of an executive
agreement rather than a treaty and was accordingly never presented to
the Senate for its advice and consent. The Court confirmed that entry into
the Litvinov Agreement was within the competence of the president and
that the agreement, like a treaty, was entitled to the dignity accorded the
supreme law of the land. The principles established in Curtiss-Wright and
Belmont would soon underwrite an array of Congressional authorizations
and executive measures undertaken following the outbreak of hostilities
in Europe. As the continental powers lurched toward global conflict, the
Court’s decisions consolidated the triumph of executive discretion in the
conduct of American foreign relations.
It is perhaps not surprising that Congressional delegation to the judicial
branch received a warm reception at the Supreme Court. Throughout
the nineteenth and early twentieth centuries, legal actions in the federal
trial courts had been governed by the Process and Conformity Acts, which
instructed federal judges to follow the forms of civil procedure employed
by the courts of the state in which the federal court sat. The federal courts
also continued to employ the traditionally distinct forms of procedure for
actions at law and cases in equity, long after many states had merged the
two into a single system. The bar’s growing dissatisfaction with the resulting
lack of uniformity in federal procedure provoked Congress to enact the
Rules Enabling Act of 1934, authorizing the Supreme Court to prescribe
uniform rules of pleading, practice, and procedure for civil actions in the
federal trial courts. The act further authorized the Court to unify the procedure
for actions at law and cases in equity brought in the federal courts by
establishing a single set of rules to govern both. The Court in turn appointed
an Advisory Committee to draft the rules and, after modification of the draft
in response to comment from the legal profession, approved the new Federal
Rules of Civil Procedure in 1938. As approved, the rules merged law and
equity, simplified and relaxed rules of pleading, and expanded procedures
for pre-trial discovery. In Sibbach v. Wilson (1941), the justices treated the
delegation of legislative authority under which they had promulgated the
Rules as constitutionally unproblematic. At the decade’s close it appeared
that, if any constitutional limitations on the power of Congress to delegate
legislative authority still remained, it would require an effort to transgress
them.
III. THE REVOLUTION IN DUE PROCESS JURISPRUDENCE
Liberty of Contract, Rate Regulation, and the Minimum Wage
The idea that the Due Process Clause of the Fourteenth Amendment might
limit the power of state and local governments to regulate prices had
emerged before Reconstruction’s close. In Munn v. Illinois (1877), the Court
had held that prices charged could be fixed by law only if the business in
question were “affected with a public interest.”7 “Private” businesses were
not amenable to such regulation. Over the course of the next half-century,
the Court upheld price regulation of such “public” enterprises as railroads,
grain elevators, water utilities, and public stockyards, yet forbade regulation
of prices charged by theater ticket brokers, employment agencies, and,
in early 1929, by the Standard Oil Company for gasoline. Decisions concerning
price regulation in the District of Columbia revealed that federal
regulatory power was similarly constrained by the Due Process Clause of
the Fifth Amendment. On the eve of the Great Depression, governmental
authority to regulate prices was tightly circumscribed.
7 94 U.S. 113, 126.
This distinction between public and private enterprise similarly informed
the Court’s views on wage regulation. In Adkins v. Children’s Hospital (1923),
the Court invalidated a Congressional statute authorizing the prescription
of minimum wages for women working in the District of Columbia. Analogizing
wage regulation to price regulation, the Court observed that such
legislation could be constitutionally applied to those engaged in public
employment and to those working in businesses affected with a public
interest. As applied to those employed in a private business, however,
wage regulation was not an appropriate exercise of the police power. It
deprived the parties of their “liberty of contract” and took the property of
the employer without due process of law.
Adkins’ declaration that wages might be regulated in businesses affected
with a public interest was reaffirmed in 1930 when a unanimous Court
upheld federal regulation of fees charged by commission men on sales of
livestock in major stockyards and again in 1931 when a narrowly divided
Court sustained a New Jersey statute regulating commissions paid to agents
selling fire insurance. Yet, prospects for more systemic wage regulation, and
for the minimum wage in particular, remained dim so long as the category
of businesses affected with a public interest remained narrowly defined.
That constitutional obstacle was removed in 1934, when a sharply divided
Court upheld state regulation of minimum retail milk prices in Nebbia v.
New York (1934). Rejecting as impertinent the contention that the milk
business was not “affected with a public interest,” the majority opinion
insisted that the guarantee of due process required “only that the law shall
not be unreasonable, arbitrary, or capricious, and that the means shall have
a real and substantial relation to the object sought to be attained.” “There
is no closed class or category of business affected with a public interest,”
wrote Justice Roberts. The term meant “no more than that an industry, for
adequate reason, is subject to control for the public good.”8
The dissenting Four Horsemen were not alone in recognizing that the
principles advanced in the majority opinion “would support general prescription
of prices for . . . labor, when some legislature finds and declares
such action advisable and for the public good.”9 (Shortly after the decision
was announced, Justice McReynolds wrote his old friend, former Solicitor
General James Beck, lamenting “the end of the constitution as you and
I regarded it. An alien influence has prevailed.”10) Commentators arrayed
across the political spectrum recognized that Nebbia could underwrite the
constitutionality of ambitious programs of state and federal price regulation
and virtually guaranteed the demise of Adkins. This latter promise was fulfilled
in West Coast Hotel v. Parrish (1937), in which the justices comprising
the Nebbia majority narrowly upheld Washington state’s minimum wage
statute for women, thereby pronouncing last rites for what Justice Holmes
had once called “the dogma, Liberty of Contract.”11
8 291 U.S. 502, 511, 515, 516.
9 Nebbia v. New York, 291 U.S. at 523 (McReynolds, J., dissenting).
10 J. C. McReynolds to James M. Beck, April 10, 1934, quoted in Morton Keller, In Defense
of Yesterday: James M. Beck and the Politics of Conservatism, 1861–1936 (New York, 1958),
254.
11 Adkins v. Children’s Hospital, 261 U.S. 525, 568 (1923) (Holmes, J., dissenting).
The preceding year, however, Justice Roberts had confounded observers
by joining the majority in a 5–4 decision invalidating a comparable New
York statute on the authority of Adkins. This has prompted speculation concerning
the cause of Justice Roberts’ contrasting performance in Parrish.
Some wags have described it as “the switch in time that saved the Nine,”
suggesting that Roberts was capitulating to the pressure brought to bear
by the president’s scheme to “pack” the Court. Yet this cannot be the case.
Roosevelt’s proposal to add a new justice to the Court for each justice who
had not retired within six months following his seventieth birthday was
introduced on February 5, 1937. The vote to uphold the Washington minimum
wage statute was taken in conference on December 19, 1936, more
than six weeks before the plan, known only to a handful of the president’s
most intimate advisors, was unveiled. Others have speculated that Roberts
might have been responding to Roosevelt’s landslide victory in the November
1936 elections, yet this hypothesis is also problematic. The New Deal
had won an enormous vote of confidence with the Congressional Democrats’
historic triumphs in the 1934 mid-term elections. Yet Justice Roberts and
his colleagues had appeared completely unfazed by this popular endorsement,
proceeding over the next two years to invalidate a bevy of major
federal programs for recovery and reform. Moreover, the results of the 1936
presidential election could convey no independent information concerning
popular support for the minimum wage, as both the Republican platform
and party standard-bearer Alf Landon explicitly endorsed such legislation.
On his retirement in 1945, Roberts acceded to Felix Frankfurter’s request
that he prepare a memorandum explaining his behavior in the minimum
wage cases. In that memorandum Roberts recalled that counsel for the state
of New York had not requested that Adkins be overruled, but had instead
sought to distinguish the statute from the law invalidated in Adkins. Roberts
had been unable to see any constitutionally significant distinction and had
accordingly been unwilling to rest a decision upholding the statute on that
ground. Justices Brandeis, Stone, and Cardozo had been willing to overrule
Adkins, but Chief Justice Hughes had written separately insisting that the
New York statute could be upheld without impairing the authority of
Adkins. In both Schechter Poultry and Carter Coal Co. (two cases decided after
Nebbia but before the 1936 minimum wage case), the Court had declined to
invoke liberty of contract as a rationale for invalidating federal regulation
of wages – presumably because Roberts, the author of Nebbia, had refused
to join the Four Horsemen to make a majority for this view. Although it is
possible that better communication among the justices might have altered
the result, it appears that Roberts’ unwillingness to uphold the New York
statute unless at least four of his colleagues were prepared to confront and
overrule Adkins, combined with Hughes’s insistence that the precedent be
distinguished rather than overruled, conspired to produce the anomalous
1936 minimum wage decision. In Parrish, by contrast, where Hughes was
prepared to confront and overrule Adkins, Roberts would join him to form
a new majority to sustain the minimum wage.
Decisions in the early 1940s solidified this revolution in due process
jurisprudence. United States v. Darby Lumber Co. (1941) confirmed that
the minimum wage provisions of the Fair Labor Standards Act of 1938
did not violate the Fifth Amendment. Olsen v. Nebraska (1941) reaffirmed
Nebbia’s abandonment of the “affected with a public interest” limitation in
upholding state regulation of fees charged by employment agencies. Contemporaneous
decisions receded from decades of precedent under which the
Court had rigorously scrutinized public utility rate regulation to ensure a
fair return to investors. Governmental powers to regulate wages and prices
had emerged from the Great Depression virtually unconstrained by the
Constitution’s Due Process Clauses.
Liberty of Contract and Collective Bargaining
When the United States entered World War II in 1941, the rights of American
workers to organize and bargain collectively were more robust than at
any time in the past. This was made possible by the eradication of due
process constraints that had previously limited legislative efforts to secure
those rights. In Adair v. United States (1908), the Court had invalidated
provisions of the 1898 Erdman Act prohibiting interstate carriers from discharging
or discriminating against any worker because of his membership
in a labor union or requiring him to agree as a condition of his employment
not to join a union. Similarly, in 1915 the ruling in Coppage v. Kansas had
invalidated a Kansas statute outlawing such “yellow dog” contracts. In each
instance, the Court had held that such legislation deprived the employer
of his liberty of contract. Legal support for efforts to organize had reached
its nadir on the eve of the nation’s engagement in World War I in Europe.
In the 1917 case of Hitchman Coal & Coke Co. v. Mitchell, the Court had
enjoined an effort by the United Mine Workers to organize a non-union
mine as an unlawful attempt to induce the company’s employees to breach
their yellow dog employment contracts.
All of this was to change over the ensuing two decades. In 1926, Congress
enacted the Railway Labor Act, which safeguarded railroad workers’ rights
of organization and collective bargaining from employer interference. A
unanimous Court upheld the act in Texas & N. O. R. Co. v. Brotherhood
of Railway and Steamship Clerks (1930). Emphasizing employee rights of
association and downplaying the employer’s claimed injury to its liberty
of contract, the Court affirmed the order of a lower court requiring the
railroad to reinstate employees it had discharged for engaging in lawful
union activities. This decision inspired Congress to insert into the Norris-
LaGuardia Act a provision declaring yellow dog contracts contrary to public
policy and unenforceable in federal courts. Finally, the 1934 amendments
to the Railway Labor Act requiring carriers to negotiate exclusively and in
good faith with the selected representatives of their employees were upheld
by a unanimous Court in March of 1937.
This set the stage for the Court’s decision the following month upholding
the NLRA. The National Labor Relations Board initiated a series of test cases
in which employers had fired employees for engaging in activity protected
under the statute. The Court unanimously sustained the act as applied to
an interstate bus company that, as a common carrier, was a classic business
affected with a public interest. The justices narrowly divided on the due
process issue in the three cases involving manufacturing concerns, however.
For the majority, the issue had been effectively settled in the 1930 decision
upholding the Railway Labor Act’s protection of the “fundamental right”
of self-organization. In the view of the dissenting Four Horsemen, however,
that principle applied only to businesses affected with a public interest, not
to “private” enterprises. Here again, the issue that divided the justices was
the one that a fractured Court had settled three years earlier in Nebbia.
Subsequent construction of the statute would make clear that the justices
had not abandoned all solicitude for employers’ rights of property and
contract. For example, the Court read the act to authorize struck employers
to hire permanent replacement workers, but not to protect from discharge
aggrieved employees staging sit-down strikes. By the spring of 1941, however,
with each of the Four Horsemen having retired, there was no one left
to dissent from the assertion that “the course of decisions in this Court since
Adair v. United States and Coppage v. Kansas have completely sapped those
cases of their authority.”12 The Court had consolidated the constitutional
revolution in labor law.
12 Phelps Dodge Corp. v. NLRB, 313 U.S. 177, 187 (1941).
The Contract Clause and Due Process: Debt Relief
In the early 1930s, widespread unemployment, a wave of bank failures, and
a powerful deflationary spiral placed profound stress on relations between
debtors and their creditors. Prices and wages fell nearly 25 percent between
1929 and 1933, and millions of workers lost their jobs and remained chronically
unemployed. However, although the contraction of the money supply
diminished the prices and wages that businessmen, farmers, workers, and
other debtors could command in the marketplace, it did not alter the face
amount of obligations undertaken before the economic collapse had so devastated
their earning capacity. In the winter of 1932–33, frustration over
the inability to service mortgage debt boiled over into riots protesting the
epidemic of residential and farm foreclosures in several Midwestern states. A
number of state legislatures responded by enacting various forms of debtor
relief legislation.
In Minnesota, where more than half of the owner-operated farms were
mortgaged, the state legislature passed a mortgage moratorium law in
April 1933. The statute empowered the state courts to extend the period
of redemption up to two years beyond the one year provided by prior law,
provided the defaulting mortgagor in possession paid the reasonable rental
value of the mortgaged property during the extended period. Though much
of the existing precedent suggested that such a legislative modification of
the debtor’s obligation would violate the Contract Clause, a sharply divided
Court upheld the law as a valid and reasonable exercise of the state’s police
power in Home Bldg. & Loan Assn. v. Blaisdell (1934). Under such conditions
of economic emergency, wrote Chief Justice Hughes, the statute’s temporary,
conditional, and limited alteration of the mortgagor’s undertaking did not
impair the underlying obligation of the contract.
At the federal level, the government sought to ease the debt crisis by
reinflating the currency. A critical feature of the administration’s monetary
plan depended on the power of the federal government to abrogate
a provision routinely inserted in long-term debt contracts. This so-called
gold clause required the obligor to repay in gold coin of a specified weight
and fineness or in an equivalent amount of paper money as measured by the
gold content of the dollar on the date of the contract. Congress had therefore
enacted a joint resolution declaring all gold clauses against public policy
and forbidding their enforcement even with respect to existing contractual
obligations. The constitutionality of this prohibition was contested in the
Gold Clause Cases.
In Norman v. B. & O. R. R. Co. (1935), the Court upheld the abrogation
of the gold clause in private contracts by a vote of 5–4. Such action,
wrote Chief Justice Hughes for the majority, was a necessary and proper
means of exercising Congress’s power to establish and regulate the value
of a uniform national currency. The administration, fearing an avalanche
of bankruptcies were the nation’s debtors required to repay obligations at
$1.69 on the newly devalued dollar – the January 1934 devaluation had raised
the official price of gold from $20.67 to $35 per ounce, making each gold-clause
dollar worth roughly $35/$20.67, or about $1.69, in new currency – breathed an
enormous sigh of relief.
The opinion in Perry v. United States (1935), by contrast, held unconstitutional
the abrogation of the gold clause in federal government bonds. Yet,
the administration’s expectation that the rise in the price of gold resulting
from government purchase and its devaluation of the dollar would be
accompanied by an immediate general increase in domestic prices had not
been realized. Chief Justice Hughes accordingly maintained for the majority
that payment to Perry in the uniform devalued currency had left him
with no less purchasing power than he would have enjoyed had no devaluation
occurred. Accordingly, he had suffered no injury and was entitled to
no relief. Congress subsequently withdrew the government’s consent to suit
on monetary claims as of January 1, 1936, thereby depriving bondholders
of the opportunity to show actual damages.
Both Blaisdell and the Gold Clause Cases provoked impassioned dissents
from the Four Horsemen. “Few questions of greater moment than that
just decided have been submitted for judicial inquiry during this generation,”
wrote Justice Sutherland dissenting in Blaisdell. “He simply closes
his eyes to the necessary implications of the decision who fails to see in it
the potentiality of . . . serious and dangerous inroads upon the limitations of
the Constitution which are almost certain to ensue.”13 Justice McReynolds
condemned the monetary program as embracing “a debased standard,
adopted with the definite purpose to destroy obligations.” Such “arbitrary
and oppressive action” violated the Fifth Amendment. “Just men regard
repudiation and spoliation of citizens by their sovereign with abhorrence,”
he remonstrated. “Loss of reputation for honorable dealing will bring us
unending humiliation; the impending legal and moral chaos is appalling.”14
When delivering his dissent from the bench, he extemporaneously gave
voice to sentiments he had earlier expressed privately over Nebbia and
Blaisdell. “This is Nero at his worst,” he thundered. “The Constitution is
gone.”15
13 290 U.S. 398, 448 (Sutherland, J., dissenting).
14 Perry v. United States, 294 U.S. 330, 372, 362, 381 (McReynolds, J., dissenting).
15 Elliott Thurston, “Biggest Barrier to U.S. Monetary Program is Removed,” Washington Post, Feb. 19, 1935, at 1.
These reports of the Constitution’s demise turned out to be greatly
exaggerated. In the twenty-five months following the announcement of
the Blaisdell decision, the Court heard three cases involving challenges to
state debtor relief legislation under the Contract Clause. In each case, the
Court invalidated the legislation by a unanimous vote. W. B. Worthen Co. v.
Thomas (1934) struck down an Arkansas statute absolutely and retroactively
exempting the proceeds of certain insurance policies from liability for debts
and seizure under judicial process. W. B. Worthen Co. v. Kavanaugh (1935)
disapproved another Arkansas debtor-relief package as “an oppressive and
unnecessary destruction of nearly all the incidents that give attractiveness
and value to collateral security.” “With studied indifference to the interests
of the mortgagee or to his appropriate protection,” wrote Justice Cardozo,
the legislature had “taken from the mortgagee the quality of an acceptable
investment for a rational investor.”16 And in Treigle v. Acme Homestead Assn.
(1936), the justices found that a Louisiana statute diminishing the rights of
withdrawing members of building and loan associations was neither temporary
nor conditional, but instead arbitrary and oppressive. Nor did the
celebrated events of 1937 mark the end of judicial enforcement of the Contract
Clause. Over the objection of some Roosevelt appointees, the Court
would find fault with statutes impairing the obligation of contract in 1938
and again in 1941.17 Though the Contract Clause would slumber for more
than three decades after Hughes retired in the summer of 1941, throughout
his tenure as Chief Justice, context-specific judgments of reasonableness continued
to constrain state legislative regulation of contractual obligations.
16 295 U.S. 56, 60, 62.
17 Indiana ex rel. Anderson v. Brand, 303 U.S. 95 (1938); Wood v. Lovett, 313 U.S. 362 (1941).
The same was true at the federal level. The Frazier-Lemke Farm Debt
Relief Act of 1934 permitted distressed farmers to stay foreclosure proceedings
for a period of five years, during which time they could take title to the
mortgaged property free and clear by paying its appraised value rather than
the amount of the debt. In Louisville Joint Stock Land Bank v. Radford (1935),
the Court unanimously held that the act unconstitutionally impaired the
vested rights of mortgage creditors. Yet, Justice Brandeis’s opinion for the
Court offered Congress guidance on how the statute might be reformulated
so as to conform to the requirements of the Fifth Amendment. Congress
accepted the advice and quickly redrafted the measure accordingly. When
the inevitable challenge came before the Court in Wright v. Vinton Branch
Bank (1937), the opinion upholding the revised statute was again unanimous.
It was not a change in constitutional doctrine, but instead a change
in legislative means that enabled Congress to attain its desired objective.
The Persistence of the Old School and the Significance of the Roosevelt Appointments
The Court upheld state and federal regulatory legislation more frequently
in the late 1930s than it had earlier in the decade. This was due in no small
part to greater efforts by legislative draftsmen, such as those who rewrote the
Frazier-Lemke Act, to comply with constraints imposed by contemporary
constitutional doctrine. At the same time, a good deal of this increased
success resulted from transformations in constitutional doctrine brought
about by changes in Court personnel. Just as decisions such as Nebbia and its
progeny were the result of Hoover’s appointments of Hughes, Roberts, and
Cardozo, later decisions relaxing the restraints of the Fifth and Fourteenth
Amendments on federal and state regulatory power were the consequence
of Roosevelt’s nominations from 1937 to 1943, which placed Hugo Black,
Stanley Reed, Felix Frankfurter, William O. Douglas, Frank Murphy, James
F. Byrnes, Robert H. Jackson, and Wiley Rutledge on the Court. The voting
patterns of Hughes and especially Roberts in cases decided between 1938
and 1940 belie the notion that they “switched” in 1937 to the view that
those amendments did not constrain governmental regulatory authority.
In fact, the decisions illustrate the remarkable persistence of these centrist
justices’ moderate constitutional views.
In the 1935 case of Railroad Retirement Board v. Alton, Justice Roberts
wrote for a narrowly divided Court that the Railroad Retirement Act of
1934 was unconstitutional, first because several of its provisions violated
the Due Process Clause of the Fifth Amendment, and second because the
establishment of a pension system for railroad workers exceeded Congress’s
power to regulate interstate commerce. Though Alton marked no retreat
from Nebbia’s dramatic abandonment of the “affected with a public interest”
limitation, it did indicate that Nebbia’s requirement that regulatory
legislation “not be unreasonable, arbitrary, or capricious” was not entirely
toothless. Chief Justice Hughes, whose opinion for the four dissenters
agreed that one of the statute’s provisions violated due process, was one
among many observers who believed that the Commerce Clause holding
doomed any comparable pension legislation, even if redrafted to address
the Court’s due process objections. Yet, astute members of Congress realized
that such a pension program funded from general revenue rather than
from an earmarked source might be immunized from constitutional attack
under the taxpayer standing doctrine announced in Frothingham v. Mellon
(1923). The pension payments could be made directly from the general
treasury rather than from a segregated fund, with the necessary revenue
derived from a special tax on interstate carriers. President Roosevelt persuaded
representatives of the major railway unions and railway companies to
join Congress and the administration in hammering out the details of such a
program, which were then set forth in the Railroad Retirement and Carrier
Taxing Acts of 1937. Representatives of the unions and the companies
also kept their promises not to challenge the program’s constitutionality,
and their pension system, with some modifications, remains in place to
this day.
Though this turn of events precluded relitigation of the precise issues
that had been before the Court in Alton, Roberts’ subsequent treatment of
the precedent testifies that his views had not changed. In the 1938 decision
United States v. Carolene Products Co., Justice Stone famously declared that
“regulatory legislation affecting ordinary commercial transactions is not to
be pronounced unconstitutional unless in light of the facts made known or
generally assumed it is of such a character as to preclude the assumption that
it rests upon some rational basis within the knowledge and experience of
the legislators.”18 Yet, New Dealer Justice Black refused to join this portion
of the opinion, for Stone then proceeded to qualify that pronouncement in
a passage citing Roberts’ Alton opinion as authority. As every other justice
joining that portion of Stone’s opinion had dissented in Alton, the citation
is explicable only as an accommodation to Roberts. Roberts again expressed
his conviction that his 1935 decision had been correct when United States
v. Lowden (1939) effectively overruled Alton. The reconstituted “Roosevelt
Court’s” decision was unanimous, but only because Roberts suppressed the
dissenting vote he had cast in conference.
Still other decisions from the mid- and late 1930s illustrate how Nebbia
and West Coast Hotel could coexist with a rationality standard that stopped
short of complete deference to the legislature. For example, in 1935 Hughes
and Roberts joined the 6–3 decision in Colgate v. Harvey holding that a provision
of a Vermont tax law violated the Equal Protection and Privileges or
Immunities Clauses of the Fourteenth Amendment. Colgate would be overruled
in Madden v. Kentucky in 1940, but only over the dissent of Justice
Roberts. Similarly, in the 1932 case of New State Ice v. Liebmann, Hughes
and Roberts had joined the opinion holding unconstitutional an Oklahoma
statute designed to exclude aspiring entrants to the ice business. In the
1936 decision of Mayflower Farms, Inc. v. Ten Eyck, these justices again voted
with the majority, this time to strike down a New York milk industry regulation
that operated to freeze out potential market entrants. And in United
States v. Rock Royal Cooperative (1939), this time in dissent, Hughes and
Roberts voted to invalidate a federal milk regulation they believed placed
smaller milk dealers at an unconstitutional disadvantage in competition
with their larger rivals. Hughes and Roberts continued throughout their
careers to maintain that the Constitution safeguarded the right to pursue a
lawful calling on terms of legal equality with all others. These features of
constitutional doctrine changed not because Hughes and Roberts revised
their long-standing views, but because President Roosevelt repopulated the
Court with justices harboring different commitments.
When Justice Roberts retired in 1945, then-Chief Justice Stone prepared
a draft of the customary farewell letter from the remaining members of the
Court. Stone’s draft contained the encomium, “You have made fidelity to
principle your guide to decision.”19 Justices Black and Douglas, partisans
of the “switch-in-time” narrative, refused to sign any letter containing such
an assertion, while Justices Frankfurter and Jackson refused to join any
18 304 U.S. 144, 152.
19 “Memorandum for the Court,” from Harlan Fiske Stone (undated), Frankfurter Papers,
Harvard, Series III, Reel 4.
letter from which the sentence was omitted. This impasse resulted in no
letter being sent. Yet it now appears that Stone, Frankfurter, and Jackson
had correctly come to see in Roberts’ jurisprudence a principled character
that Black, Douglas, and many others could not or would not recognize.
IV. THE REVOLUTION IN FEDERALISM JURISPRUDENCE
The Commerce Power
In 1929, the power of the national government to regulate the economy was
qualified not only by the restraints of the Due Process Clause but also by
those of constitutional federalism. By the end of 1942, both of those limitations
had dissolved, and federal regulatory power over economic matters
was virtually plenary. The principal means through which Congress exerted
control over “ordinary commercial transactions” was through exercises of
its power to regulate commerce among the several states.
Since 1895, Commerce Clause jurisprudence had been organized around
two fundamental distinctions. Each of those distinctions was drawn from
the Court’s Dormant Commerce Clause jurisprudence, a branch of constitutional
doctrine that articulated implied limitations on the power of state
and local governments to tax or regulate interstate commerce. Those distinctions
were between production and commerce and between direct and
indirect effects on commerce. The Court recognized broad federal authority
to regulate interstate transportation and interstate sales. Yet, a series of cases
involving the Sherman Antitrust Act had established the principle that the
power to regulate commerce did not as a general matter include the power
to regulate activities of production, such as agriculture, mining, and manufacturing.
The Sherman Act might reach such “local” activities as corporate
mergers and labor strikes were it shown that they were intended to restrain
interstate commerce. In such instances commerce was affected “directly.”
Without proof of such intent, however, the effect on commerce – irrespective
of its magnitude – was merely “indirect” or “incidental,” leaving the
activity in question beyond the reach of federal authority.
One of the principal qualifications to this general framework found
expression in the “stream of commerce” doctrine. A series of Fuller and
Taft Court decisions had held that the activities of a “local” enterprise
might nevertheless be subjected to federal regulation if they occurred in
a “current” or “flow” of interstate commerce that began outside the state
and later continued beyond its borders. In Stafford v. Wallace (1922), for
example, the Court upheld federal regulation of “local” transactions in the
Chicago stockyards. The livestock came from the western states to Chicago,
where they were housed, fed, watered, sold, and often slaughtered. They
then continued their interstate journeys to other states in the Midwest or
East for ultimate consumption. The stockyards were the “throat” through
which this current of interstate commerce flowed, the transactions therein
being essential to its interstate movement.
The scope of the stream of commerce doctrine was constrained, however,
by the due process requirement that the “local” business regulated be
affected with a public interest. Only these businesses had the capacity to
exact exorbitant charges and thereby affect the flow of interstate commerce
“directly.” So long as that category of business remained small and select,
the stream of commerce promised to cut a narrow channel. With Nebbia’s
abandonment of the public interest limitation in due process jurisprudence,
however, the potential range of application of the stream of commerce doctrine
was enlarged dramatically. Now any business located in a current of
interstate commerce was amenable to federal regulation.
Yet it remained necessary that the local activity be situated within the
current of interstate commerce, rather than at one of its terminals. Just as
the Court’s Dormant Commerce Clause decisions continued to maintain
that activities that took place before interstate commerce had begun or after
it had ceased remained subject to state and local powers to tax and regulate,
so the Court’s affirmative Commerce Clause jurisprudence adhered to the
view that such activities lay outside federal regulatory competence. Thus,
the stream of commerce doctrine was inapposite in Schechter Poultry Corp. v.
United States. The chickens butchered at the Schechters’ slaughterhouse had
“come to a permanent rest”20 in New York and were sold locally rather than
in interstate trade. Because interstate commerce in the poultry had come to
an end, the NIRA’s Live Poultry Code regulated local activity that affected
interstate commerce only indirectly. Such activity was subject to exclusive
state jurisdiction. Though President Roosevelt denounced the decision as
adopting a “horse and buggy”21 era conception of interstate commerce, the
judgment was unanimous. Indeed, Justice Department lawyers and other
Roosevelt advisors had regarded Schechter as an extraordinarily weak case
for the government and had tried to prepare the president for an adverse
outcome.
Administration lawyers similarly were convinced that the provisions of
the Guffey Coal bill regulating labor relations at the mine were unconstitutional
in view of the Court’s opinion in Schechter, and Attorney General
Cummings refused to offer to a subcommittee of the House Ways and Means
Committee an opinion on the bill’s constitutionality. Instead, he urged the
20 295 U.S. 495, 543.
21 Samuel I. Rosenman, ed., The Public Papers and Addresses of Franklin D. Roosevelt, 4 (New
York, 1938), 200, 221.
Cambridge Histories Online © Cambridge University Press, 2008
296 Barry Cushman
representatives to “push it through and leave the question to the courts.”22
President Roosevelt similarly cajoled the subcommittee’s chairman, Sam
B. Hill, not to “permit doubts as to constitutionality, however reasonable,”
to block the bill’s enactment.23 Majorities of both the subcommittee and
the full committee considered the bill unconstitutional, and it was only
through such vigorous prodding from the administration and the resulting
abstention of dissenting Democrats that the bill was even reported to the
House floor. Though Democrats vastly outnumbered Republicans in both
the House and the Senate, the bill was passed in each by uncharacteristically
narrow margins.
The constitutional doubts of the bill’s critics were vindicated in Carter v.
Carter Coal Co. Justice Sutherland’s majority opinion echoed what Congressional
opponents had pointed out: the stream of commerce doctrine could
have no application where the interstate flow had not yet begun. The Guffey
Coal Act presented the same difficulty the Court identified in Schechter,
though from the opposite end of the stream. The coal mine in question lay
at the source rather than amid the flow of the current of commerce. The
act therefore regulated labor relations in the local activity of production,
which affected interstate commerce only indirectly. The majority accordingly
invalidated the Guffey Coal Act’s labor provisions. Moreover, despite
the act’s severability clause, the majority held that the labor provisions were
inseparable from its price regulation provisions, thereby vitiating the entire
statute. Chief Justice Hughes wrote separately, agreeing with the majority
that the labor provisions were unconstitutional. Yet, Hughes maintained
that those provisions were separable from the price regulation provisions,
which were constitutional in light of Nebbia. In dissent, Justices Cardozo,
Brandeis, and Stone insisted that the price regulation provisions were constitutional,
that they were separable from the labor provisions, and that
the constitutional challenge to the labor provisions was premature. Significantly,
none of the justices contended that the labor provisions were within
the scope of the Commerce Power.
A number of observers read the Schechter and Carter decisions as casting
doubt on the constitutionality of the NLRA. Government lawyers preparing
cases to test the act’s constitutionality before the Court disagreed. In
selecting those test cases they had shrewdly pursued instances involving
labor disputes at steel, trailer, and clothing plants that imported raw materials
from other states and then shipped their products across state lines for
22 New York Times, July 6, 1935, p. 2, quoted in Ralph Baker, The National Bituminous Coal
Commission (Baltimore, 1941), 50.
23 Franklin D. Roosevelt to Samuel B. Hill, July 5, 1935, reprinted at 79 Cong. Rec. 13449
(74–1).
subsequent purchase. Labor Board lawyers contended that these factories,
like the Chicago stockyards in Stafford v. Wallace, were located in a stream
of interstate commerce whose flow would be disrupted by work stoppages
produced by labor strife. The power to enact regulations designed to prevent
or curtail such disruptions was therefore comprehended by Congress’s
power to regulate interstate commerce. The Court upheld application of
the NLRA to such manufacturing enterprises in the Labor Board Cases,24
with Justices Brandeis, Stone, Roberts, and Cardozo joining Chief Justice
Hughes’ opinions for the majority. Hughes maintained that it was unnecessary
to decide whether the factories in question lay in a stream of interstate
commerce, for that doctrine was merely one illustration of a principle also
immanent in the Court’s Sherman Act and railroad regulation precedents:
any activity whose “close and substantial relation” to interstate commerce
made its regulation necessary to protect such commerce from burdens and
obstructions was subject to appropriate Congressional control.25 Hughes
cautioned readers against interpreting this formulation too broadly, and
both the dissenting opinion of the Four Horsemen and subsequent remarks
by Justice Roberts strongly suggested, as many contemporary observers
recognized, that the government’s stream of commerce analogy had in fact
provided the basis for Roberts’ crucial fifth vote to sustain the Act.
Others, however, have suggested that Roberts’ behavior – and that of
Hughes as well – was prompted by entirely different considerations. President
Roosevelt’s proposed Judicial Reorganization Act – known colloquially
as the “Court-packing plan” – remained pending in Congress even as the
justices were deciding and announcing the result in the Labor Board Cases.
Because six of the sitting justices had already celebrated their seventieth
birthdays, the bill would have empowered Roosevelt to expand the personnel
of the Court from nine to fifteen immediately. Roosevelt claimed that
the measure was necessary because the aged justices – the “Nine Old Men,”
as a popular book of the day26 called them – were unable to keep pace
with the demands of the Court’s docket. Yet, it was generally understood,
as Roosevelt essentially admitted in a fireside chat in early March, that the
bill’s objective was to secure a Court majority sympathetic to the New Deal.
Though neither house of Congress would approve the president’s proposal,
there has long been speculation that Hughes and Roberts voted to
uphold the NLRA in order to blunt Roosevelt’s attack on the Court and
thereby defeat the Court-packing plan. The question of motivation cannot
24 NLRB v. Friedman-Harry Marks Clothing Co., 301 U.S. 58 (1937); NLRB v. Fruehauf
Trailer Co., 301 U.S. 49 (1937); NLRB v. Jones & Laughlin Steel Corp., 301 U.S. 1 (1937).
25 NLRB v. Jones & Laughlin Steel Corp., 301 U.S. at 37.
26 Drew Pearson & Robert S. Allen, The Nine Old Men (New York, 1936).
be resolved with absolute certainty, but there are reasons to be skeptical of
this view. Although acknowledging the strain the justices felt during the
crisis, both Hughes and Roberts denied that the pending bill had affected
their votes. One would of course not expect a contrary admission, but there is
abundant evidence to corroborate their claims. First, the justices had ample
reason to doubt that Congress would enact the president’s proposal. The bill
provoked vigorous opposition from the moment it was introduced, from
powerful forces both inside and outside Congress. It was denounced in the
press, by leaders in higher education, and by a variety of civic organizations
including the American Bar Association. It was conspicuously criticized by
prominent liberals and former members of Roosevelt’s own administration.
While organized labor offered the proposal only faint praise, farm organizations
launched public campaigns of opposition. Congressmen found that
constituent correspondence ran heavily against the Plan, and contemporary
public opinion polls registered both consistent opposition to Court packing
and a steady decline in Roosevelt’s popularity.
The signals from Congress were similarly ominous. The president’s failure
to consult with Congressional leaders before unveiling his proposal created
hard feelings on Capitol Hill. Hatton Sumners, chair of the House Judiciary
Committee, responded with two measures. First, he quickly pushed
a judicial retirement bill through Congress with the aim of persuading his
colleagues that the problem of judicial opposition to the New Deal could
be solved simply by offering elderly conservative justices a financial incentive
to leave the bench. In this Sumners was successful. Two of the justices
were anxious to retire, and had remained on the Court only because the
Economy Bill of 1933 had left the provisions for retired federal judges at
unacceptably parsimonious levels. Justice Van Devanter would announce
his retirement within two months of the retirement act’s passage; Justice
Sutherland would step down the following January, and but for the pendency
of the Court-packing bill would have retired with Van Devanter
the previous spring. Second, Sumners lined up a comfortable majority of
his committee against the president’s plan, assuring the opposition control
of the hearings and the power to bottle the bill up in committee for an
indefinite period.
Because of the opposition of Sumners and his colleagues on the House
committee, the administration took the unusual step of introducing the bill
instead in the Senate. There as well, however, the plan faced stiff resistance.
All of the Senate Republicans and many Senate Democrats, led by liberal
Burton Wheeler of Montana, announced their opposition. By mid-February
Henry Morgenthau, Roosevelt’s Secretary of the Treasury, assessed the bill’s
chances as even at best. Two events later in March prompted observers to
revise this estimate downward. On March 29, the Court took the wind out of
the plan’s sails when it announced its decision upholding the minimum
wage in West Coast Hotel v. Parrish. A week earlier, on March 22, Senator
Wheeler had read before the Senate Judiciary Committee a letter he had
solicited from Chief Justice Hughes with the approval of Justices Brandeis
and Van Devanter. The letter rebutted point by point each of the president’s
allegations concerning the Court’s efficiency. Hughes insisted that the Court
was fully abreast of its calendar, was granting all meritorious petitions for
review, and that the addition of new justices would frustrate rather than
enhance the Court’s efficient operation. The impact of Wheeler’s recitation
prompted Vice-President Garner to telephone Roosevelt to tell him, “We’re
licked.”27
At least two weeks earlier, it had become clear that the opposition
intended to filibuster the bill on the floor of the Senate and appeared to have
at least enough votes to prevent cloture, if not to defeat the bill in an up-or-down
vote. Even had the bill’s proponents succeeded in cutting off debate
in the Senate, however, the obstacles raised by Sumners and his colleagues
remained looming in the House. Yet the continuing deterioration of the
bill’s fortunes throughout the spring prevented it from getting even that
far. By early May the opposition held a clear majority in the Senate; at mid-month
the Senate Judiciary Committee issued its adverse report on the bill.
In early June Roosevelt finally relented to Democratic leaders and agreed to
support a compromise measure that would have permitted him to appoint
a smaller number of additional justices over a longer period of time. Efforts
to revive the plan again foundered in the Senate, however, and the bill was
recommitted with instructions to excise its Court-packing provisions.
Moreover, the voting patterns of the justices are difficult to reconcile
with the claim that they were influenced by the president’s proposal. Several
Congressional bills to constrain the Court had been introduced in 1935
and 1936, yet none of them appeared to have any effect on its decisions.
The Court upheld New York’s unemployment compensation statute over
due process objections ten weeks before the announcement of the plan; West
Coast Hotel was decided in conference six weeks before the justices could have
known of the president’s intentions, and the Social Security Act was upheld
after it appeared that the plan was doomed. Moreover, Hughes and Roberts
continued to vote to uphold state and federal regulatory statutes – and occasionally
to invalidate them – long after the Court-packing plan was dead and
buried. And while some or all of the Four Horsemen occasionally voted to
uphold such programs after 1936, their votes to invalidate important New
Deal measures in the spring of 1937 demonstrate the failure of Roosevelt’s
effort to pressure them into compliance. Improvements in Congressional
27 Burton Wheeler, Yankee from the West (Garden City, NY, 1962), 333.
draftsmanship and administration lawyering, rather than raw power politics,
best account for the Court’s greater receptivity to the NLRA and other
New Deal initiatives.
Subsequent NLRA decisions underscored the continuity in doctrinal
development. Santa Cruz Fruit Packing Co. v. NLRB (1938) upheld the
application of the NLRA to employees engaged not in production, but in the
initial stages of interstate transportation. Consolidated Edison Co. v. NLRB
(1938) concerned a company whose activities were indispensable to the
operation of a vast network of interstate communication and transportation,
and it upheld NLRB jurisdiction only on this narrow ground. NLRB v.
Fainblatt (1939) involved a clothing manufacturer situated in a stream of
interstate commerce, receiving raw materials from outside the state and
shipping its products across state lines.
Nor did contemporaneous decisions upholding federal regulation of the
coal industry and agriculture push back the frontiers of Commerce Clause
jurisprudence. In a series of statutes enacted between 1935 and 1938,
Congress had revised its strategy for stabilizing prices in these troubled
sectors of the economy. Rather than controlling the conditions or quantities
of local production in these enterprises, Congress now regulated the interstate
marketing of their products. This price stabilization was to be accomplished
either directly, through price regulation, or indirectly, through
limitation of the amount of the item that could be marketed in interstate
commerce. Before Nebbia, the Fifth Amendment’s Due Process Clause would
have prohibited price regulation with respect to such ordinary commodities;
after Nebbia, federal regulation of the price at which goods moved in
interstate commerce was constitutionally unproblematic. Yet, members of
Congress recognized that coal and agricultural produce sold in intrastate
commerce competed with such items sold in interstate commerce. If price
stabilization in interstate commerce were to be effective, these intrastate
transactions would have to be comprehended within the federal regulatory
scheme.
Congressional sponsors found authority for such intrastate regulation
in the Shreveport Rate Cases (1914). There the Court had held that
Congress could regulate the intrastate rates charged by interstate carriers
where necessary to make federal regulation of rates for competing interstate
carriage effective. This authority to regulate intrastate rates was thus derivative
of Congress’s authority to regulate interstate rates. Before Nebbia, the
Due Process Clause had confined that authority to businesses affected with
a public interest. Indeed, before 1934, every decision following the Shreveport
doctrine had involved regulation of that paradigmatic business affected
with a public interest, rail carriage. After Nebbia, however, the potential
application of the Shreveport doctrine expanded dramatically. Now Congress
could regulate the price at which coal and agricultural commodities were
sold in interstate commerce, and as intrastate sales of coal and agricultural
commodities competed with interstate sales of these items, Shreveport
authorized federal price regulation of these local transactions as well.
Thus the Bituminous Coal Act of 1937, unlike its predecessor
struck down in Carter Coal, did not purport to regulate labor relations
and conditions in the coal industry. Instead, its sponsors invoked Nebbia
and Shreveport in support of the act’s regulation of the price at which coal
was sold in both interstate and intrastate commerce. In Sunshine Anthracite
Coal Co. v. Adkins (1940) the Court upheld the act over the lone dissent of
Justice McReynolds, the sole remaining Nebbia dissenter. The Agricultural
Adjustment Act (AAA) of 1938 similarly did not seek to prescribe the
amount of specified commodities that farmers could produce. Its proponents
instead again pointed to Shreveport in support of the act’s restrictions
on the quantities of such commodities producers could market in either
interstate or intrastate commerce. Justice Roberts, joined by Chief Justice
Hughes, wrote the opinion upholding the act in Mulford v. Smith (1939).
The sponsors of what became the Agricultural Marketing Agreement Act
of 1937 maintained that Nebbia and Shreveport supported the act’s provisions
authorizing the Secretary of Agriculture to set minimum prices for interstate
and intrastate sales of certain agricultural commodities. The Court
accepted this contention, upholding the act in a series of decisions between
1939 and 1942. By regulating marketing rather than production, Congress
could address the problems that had plagued the energy and agricultural
sectors with programs that could withstand judicial scrutiny, and all of this
could be accomplished, as Justice Cardozo put it, “within rulings the most
orthodox.”28
Thus, although Nebbia’s transformation of due process doctrine allowed
existing Commerce Clause precedents far greater scope than they had previously
enjoyed, developments in Commerce Clause doctrine itself remained
relatively modest in the late 1930s. This helps to explain Justice Brandeis’s
landmark 1938 opinion in Erie Railroad Co. v. Tompkins. Ever since the decision
of Swift v. Tyson29 in 1842, the Court had interpreted section 34 of the
Judiciary Act of 1789 to require federal courts sitting in diversity to apply
the statutory but not the common law of the states in cases coming before
them. Where the relevant state legislature had enacted no statute covering
the issue in question, the federal courts were to apply the “general common
law,” the content of which many came to criticize as unduly favoring
corporate litigants. In Erie, the Court held that this long-standing interpretation
of section 34 was not only incorrect but also unconstitutional. Federal
28 Carter v. Carter Coal, 298 U.S. at 329.
29 41 U.S. 1 (1842).
courts henceforth would be required to apply state common law rules as
rules of decision in diversity cases. “There is no federal general common
law,” declared Justice Brandeis. “Congress has no power to declare substantive
rules of common law applicable in a State whether they be local in their
nature or ‘general,’ be they commercial law or part of the law of torts.”30
The federal courts had no power to declare general rules of commercial and
tort law for the states, Brandeis maintained, in part because Congress had
no power to do so. As Chief Justice Hughes had written in the Labor Board
Cases, the reach of the Commerce Power “must be considered in light of our
dual system of government and may not be extended so as to embrace effects
upon interstate commerce so indirect and remote that to embrace them, in
view of our complex society, would effectually obliterate the distinction
between what is national and what is local and create a completely centralized
government.”31 The justices made it clear throughout the 1930s that
the Commerce Power remained subject to judicially enforceable constraints
of constitutional federalism.
That would change in the early 1940s. By February of 1941 all but three
of the sitting justices were Roosevelt appointees. None of the remaining justices
had participated in the notorious case of Hammer v. Dagenhart (1918),
in which a 5–4 majority had invalidated the Keating-Owen Child Labor
Act’s prohibition of interstate shipment of goods made by child labor. The
statute, the majority had held, was a pretextual use of the Commerce Power
to regulate manufacturing, a matter reserved to the states. Hammer and a
subsequent decision invalidating an excise tax on firms employing child
workers had inspired a movement to amend the Constitution to empower
Congress to regulate the practice. The Child Labor Amendment received
the requisite endorsement of Congress in 1924, but fell eight states short of
ratification. In 1938, however, Congress again asserted its authority under
the Commerce Power to regulate employment of children, as well as the
wages and hours of adult employees, in the Fair Labor Standards Act. In
United States v. Darby (1941), the Roosevelt Court unanimously overruled
Hammer in upholding provisions of the act prohibiting interstate shipment
of goods made by employees working under substandard labor conditions.
The Court also unanimously sustained provisions of the Fair Labor Standards
Act prohibiting employment of workers engaged in “production for
interstate commerce” at substandard wages or for excessive hours, though
internal Court records reveal that Chief Justice Hughes was deeply troubled
by this latter extension of the Commerce Power. Although in cases following
Hughes’s retirement the Roosevelt appointees would find that Congress had
30 Erie Railroad Co. v. Tompkins, 304 U.S. 64, at 77–78.
31 301 U.S. 1, at 37.
not intended by this language to include every employee working for firms
engaged in production for interstate commerce, only Roberts would insist
that Congress was powerless to reach “purely local” activities.
Wickard v. Filburn (1942) cast grave doubt on whether there were any
activities left in that category. The Secretary of Agriculture had penalized
Roscoe Filburn for growing wheat in excess of his annual allotment under
the amended AAA of 1938. Filburn maintained that the surplus wheat
was intended solely for personal use and for consumption rather than for
sale and that its production was therefore a purely local activity beyond the
reach of federal authority. This extension of the Commerce Power sought by
the government troubled many of the justices, including several Roosevelt
appointees, and when the case was initially argued in the spring of 1942
there was not a majority to uphold it. After reargument in the fall, however,
the Court unanimously sustained the penalty. In an opinion that did not
even cite the Labor Board Cases, Justice Jackson reasoned that if many farmers
satisfied their own needs by growing for personal use, they would reduce
the total demand for the crops marketed and thus the price at which those
crops were sold in interstate commerce. The aggregate effect of such activity
on interstate commerce might be “substantial.” Congress’s regulation of
such activity was therefore a necessary and proper means of regulating the
interstate price of agricultural commodities.
Roberts’ opinion in Mulford had taken pains to demonstrate that the AAA
of 1938 regulated “marketing” rather than “production.” At the Darby conference,
Chief Justice Hughes had voiced concern over the Fair Labor Standards
Act’s regulation of all “production for commerce.” These justices had
regarded “production” as presumptively immune from federal regulation.
In NLRA decisions such as Santa Cruz Fruit and Consolidated Edison, moreover,
Hughes had continued to employ the older vocabulary of “direct” and
“indirect” effects. The Wickard opinion, by contrast, expressly repudiated
the notion that such nomenclature was even useful, much less controlling.
Though initially expressing doubts about the government’s position in just
such traditional terms, Jackson had come to believe that the Court never
had succeeded – and never could succeed – in developing a workable legal
standard for determining which economic effects made federal regulation
appropriate and which did not. Neither the direct/indirect test nor the “close
and substantial” test provided an adequate legal criterion for judicial evaluation
of Congressional policy judgments. Indeed, Jackson could no longer
conceive of an activity whose relation to commerce was so attenuated as to
make its regulation by Congress inappropriate. He consequently despaired
of the enterprise and turned instead to a conceptualization of Commerce
Power issues as presenting political rather than judicial questions. The
national political process would allocate regulatory authority between the
state and federal governments, and the Court would defer to those political
judgments. Whereas Hughes and Roberts had insisted that the Court was
responsible for policing the line beyond which exercises of the Commerce
Power usurped state regulatory authority, the Wickard Court concluded that
“effective restraints” on the power’s exercise “must proceed from political
rather than from judicial processes.”32
The Dormant Commerce Clause
Just as political process theory helped to rationalize the Court’s withdrawal
from enforcing federalism limitations on Congressional exercises of the
Commerce Power, so it also explained the persistence of judicial enforcement
of the Dormant Commerce Clause. State interests were theoretically
represented in the national Congress, but as Justice Stone observed in South
Carolina State Highway Department v. Barnwell Bros. (1938), out-of-state interests
were often not adequately represented in state legislatures. “[W]hen the
regulation is of such a character that its burden falls principally upon those
without the state,” Stone explained, “legislative action is not likely to be
subjected to those political constraints which are normally exerted on legislation
where it affects adversely some interests within the state.”33 So,
for example, in 1941 the Court invalidated a California statute that prohibited
the transportation of indigent non-residents into the state, as the excluded
persons were “deprived of the opportunity to exert political pressure upon
the California legislature in order to obtain a change in policy.”34
Yet, this persistence of enforcement was accompanied by significant doctrinal
change. Since the 1870s, Dormant Commerce Clause doctrine had
been organized around the same categories that had structured affirmative
Commerce Clause jurisprudence. State or local regulation that affected
interstate commerce “directly” was forbidden; regulation that affected such
commerce only “incidentally or indirectly” was permitted. In the late 1930s
the Court continued to employ these categories in analyzing questions of
both state and federal power. With their abandonment in Wickard, however,
their persistence in Dormant Commerce Clause doctrine became anomalous.
Adjectives inadequate for describing the effects of activities on interstate
commerce were surely equally inadequate to describe the effect of state
and local regulations on such commerce. Moreover, understood through
the older vocabulary, Wickard suggested that virtually every local activity
affected interstate commerce sufficiently “directly” to warrant its federal
regulation. If it now followed that state and local regulation of those local
32 Wickard v. Filburn, 317 U.S. 111, at 120.
33 303 U.S. 177, 184–85, n.2.
34 Edwards v. California, 314 U.S. 160, 174 (1941).
activities now also affected interstate commerce “directly,” then all such regulation
would violate the Dormant Commerce Clause. The expansion of federal
power thus threatened to destroy traditional state and local regulatory
authority by implication. The Court’s solution to this difficulty, adopted
in Parker v. Brown (1943) later in the same term that Wickard was decided,
was to decouple Dormant Commerce Clause doctrine from its affirmative
counterpart, to abandon the categories each had shared, and to treat regulatory
authority over local activities as presumptively concurrent. Henceforth,
in the absence of Congressional preemption, non-discriminatory state and
local regulation would be evaluated by “comparing the relative weights of
the conflicting local and national interests involved.”35
In one area of the law, the uncertainties created by this doctrinal reorientation
were sufficient to provoke Congressional intervention. Since 1869
the Court had consistently maintained that the business of writing contracts
of insurance was not itself interstate commerce and that state regulation of
the insurance industry therefore did not violate the Dormant Commerce
Clause. In view of the prevailing symmetrical relationship between affirmative
and Dormant Commerce Clause doctrine, each of the branches of
the federal government had treated the regulation of insurance as a matter
for the states rather than the federal government. The Court disrupted this
understanding in 1944, however, holding in two cases that the activities of
certain insurance companies bore sufficient relation to interstate commerce
to bring them within the reach of the Sherman Act and the NLRA. These
decisions created anxiety over how much state insurance regulation would
now be treated as implicitly preempted by federal statute or the Dormant
Commerce Clause. Congress quickly responded by enacting the McCarran-
Ferguson Act of 1945, providing that neither Congressional silence nor
federal legislation should be construed to displace such state regulation by
implication. Only statutes specifically relating to the business of insurance
would trump state law.
Taxing and Spending
This expansion of the realm of concurrent jurisdiction in Commerce Clause
jurisprudence found its complement in the erosion of intergovernmental tax
immunities. In 1939, Graves v. New York ex rel. O’Keefe overruled Collector v.
Day (1871) and its more recent progeny in announcing that the salaries
of federal officers and employees would no longer enjoy constitutional
immunity from state taxation, nor would the compensation of state officials
be exempted from the federal income tax. Similarly, in United States v. Bekins
35 Parker v. Brown, 317 U.S. 341, at 367.
(1938) the reconstituted Court upheld a federal municipal bankruptcy act
comparable to one narrowly invalidated two years before under related principles
of intergovernmental immunity. Moreover, while “essential” functions
and instrumentalities of the national and state governments would
retain their traditional implied immunity from taxation by the other
sovereign, that category came to be understood more narrowly than it had
been previously. Nevertheless, the doctrine continued to be complicated by
lines of distinction that Justice Jackson would later characterize as “drawn
by an unsteady hand.”36
The Court’s Tenth Amendment jurisprudence similarly constrained Congressional
power to impose regulatory taxes on activities whose control had
been reserved to the states. In early decisions such as McCray v. United
States (1904) and United States v. Doremus (1919), the majority opinions
for divided benches had come near to suggesting that Congress enjoyed
unfettered authority to levy substantial excises on disfavored activities.
Yet, the Court had corrected that impression in Bailey v. Drexel Furniture
Co. (1922). There Chief Justice Taft wrote for an 8–1 majority including
Justices Holmes and Brandeis that an excise of 10 percent on the net profits
of companies employing child labor was a penalty rather than a tax. As the
penalty was imposed only on mines and manufacturing establishments failing
to comply with the statute’s prescribed regime of employment relations,
it transcended the limitations of the Tenth Amendment.
The Hughes Court’s regulatory taxation decisions suggested some differences
among the justices concerning these Tenth Amendment limitations
on Congressional power, but did not openly question the underlying structure
of the doctrine. In fact, the divisions among the justices concerning
the Congressional authority to regulate through the use of fiscal powers
emerged most openly in a case involving the Spending Power. United States
v. Butler (1936) involved a constitutional challenge to the AAA of 1933. A
food processor challenged the tax by which the acreage-reduction benefit
was funded as a step in a federal scheme to regulate the local activity of
agricultural production and thus a usurpation of the powers reserved to the
states by the Tenth Amendment. By a vote of 6–3, the Court invalidated
the act.
The differences between the majority justices and the dissenters boiled
down to a question of the perspective from which the benefit payment
should be viewed. Assuming non-compliance with the federal scheme and
thus non-receipt of the benefit payment as the baseline, the dissenters saw
payment of the benefit as a reward for compliance with the terms of a
contract the farmer was free to reject. “Threat of loss, not hope of gain, is
36 United States v. Allegheny County, 322 U.S. 174, 176 (1944).
the essence of economic coercion,” wrote Justice Stone.37 By contrast, the
majority justices assumed compliance with the scheme and thus receipt of
the payment as the baseline – and, indeed, the vast majority of American
farmers did comply and receive the payment – and therefore regarded the
withholding of the benefit as a regulatory tax on non-compliance. In Bailey,
a manufacturer remained free to employ child labor, but only by paying a
tax that would presumably place him at a competitive disadvantage with
competitors who complied with the federal regulation. Similarly, in Butler,
a farmer remained free to produce in excess of the Secretary’s target quota
for his farm, but only at the cost of forgoing a benefit payment that his
compliant competitors were receiving. In each instance, an enumerated
fiscal power was employed to induce compliance with a federal effort to
regulate local production.
The Butler Court did not, however, adopt the Madisonian understanding
of the spending power. While that power might not be employed to usurp
regulatory prerogatives confided to the states by the Tenth Amendment, it
was not limited to carrying into effect exercises of other powers enumerated
in Article I, section 8. Instead, and for the first time, the Court explicitly
endorsed the Hamiltonian view of the power to spend as an independent
grant of power not so limited. This commitment provided the foundation
for Justice Cardozo’s 1937 opinions for the Court upholding the old age
pension and unemployment compensation titles of the Social Security Act
in Helvering v. Davis and Steward Machine Co. v. Davis. The vote in Helvering
was 7–2, with Justices Van Devanter and Sutherland joining the majority.
The vote in Steward Machine was 5–4, but Van Devanter and Sutherland’s
dissent voiced general approval of the statute, objecting only to certain easily
correctable administrative provisions. Similarly, when the Court upheld the
Alabama state unemployment compensation law in Carmichael v. Southern Coal
& Coke Co. (1937), Van Devanter, Sutherland, and Butler agreed
that the statute’s objective was constitutional and took issue only with the
particular means selected by the state legislature. Their dissent detailed how
the statute might be revised so as to pass constitutional muster, pointing
to the Wisconsin statute as an exemplar of constitutionality. Even at the
height of the Court-packing struggle, these conservative justices had set a
face of flint to minimum wage legislation and federal regulation of local
employment relations. Yet, they clearly shared the majority’s view that no
constitutional revolution was necessary to sustain state and federal programs
of social security.
Even the significance of the Court’s embrace of the Hamiltonian understanding
of the spending power should not be overestimated. Though the
37 United States v. Butler, 297 U.S. 1, at 81.
proper understanding of that power’s scope had long been the subject of
controversy in Congress and elsewhere, Frothingham v. Mellon’s 1923 taxpayer
standing doctrine had operated to confine the debate to extra-judicial
fora. The taxpayer standing doctrine so thoroughly insulated federal expenditures
from judicial review that the constitutionality of a wide array of
New Deal spending initiatives financed from general revenue was never
challenged. Among these were the Civilian Conservation Corps, the Farm
Credit Act, the Reconstruction Finance Corporation, the Rural Electrification
Administration, and the Emergency Relief Appropriation Act of
1936. Moreover, the Supreme Court and the lower federal courts repeatedly
invoked the Mellon doctrine in rejecting constitutional challenges to loans
and grants made by the Public Works Administration.
Indeed, the taxpayer standing doctrine played a central role in the subsequent
history of the administration’s farm program. After the Butler decision
invalidated the AAA’s tax on food processors, the government continued to
pay the benefit payments it had promised to individual farmers, but now
in unchallengeable fashion from general revenue. Within two months of the
decision, Congress had replaced the AAA with the Soil Conservation and
Domestic Allotment Act of 1936. This act authorized the Secretary of Agriculture
to pay farmers to shift acreage from overproduced “soil-depleting”
crops to “soil-conserving” crops. The bill’s sponsors refused to support a
companion taxing measure designed to produce the revenue necessary to
finance these expenditures and thereby successfully inoculated the measure
against constitutional attack.
While instances involving the taxpayer standing doctrine were the most
important examples of the manner in which justiciability doctrine shielded
the New Deal from judicial review, they were not alone. A series of lower
court decisions refused to consider constitutional challenges to various New
Deal initiatives on the ground that the plaintiff had not suffered a legally
cognizable injury, and it was on this basis that the justices rebuffed constitutional
attacks on the Tennessee Valley Authority. Throughout the 1930s,
the “passive virtues” served as a significant, self-imposed restraint on judicial
superintendence of the political branches.
V. THE EMERGENCE OF MODERN CIVIL RIGHTS
At the same time that the justices were sustaining state and federal economic
reforms designed to secure “positive” liberties for working men and
women, the Court’s decisions also increasingly evinced heightened concern
for certain “negative” liberties of American citizens. With one eye on
the alarming rise of repressive totalitarian states in Europe, the Hughes
Court affirmed and elaborated American constitutional commitments to
civil rights, civil liberties, and democratic processes at a time when
many Western intellectuals were questioning the future of democracy. In
Carolene Products, the Court had declared that legislation regulating “ordinary
commercial transactions” would enjoy a robust “presumption of constitutionality.”
But if constitutional law had become increasingly agnostic
on matters of economic policy, it nevertheless remained and would become
more strongly committed to certain core political values. In the famous
“Footnote Four” of his Carolene Products opinion, Justice Stone identified
three types of statutes that would be subjected to “more exacting judicial
scrutiny”: legislation appearing to conflict with “a specific prohibition of
the Constitution, such as those of the first ten amendments”; “legislation
which restricts those political processes which can ordinarily be expected
to bring about repeal of undesirable legislation”; and statutes directed at
“discrete and insular” “religious, national, or racial minorities,” prejudice
against whom tended “seriously to curtail the operation of those political
processes ordinarily to be relied upon to protect minorities.”38 There
was often considerable overlap among these categories: a law directed at a
discrete and insular minority might itself restrict the political process or
implicate a provision of the Bill of Rights, and a law implicating the Bill
of Rights might itself restrict the political process. Nonetheless, the categories
provide a useful heuristic. And though subsequent decisions would
both enlarge the scope and strengthen the content of these three categories
of prohibition, none of them was without recent precedent in the Court’s
jurisprudence.
In the decades since Reconstruction the Court had rejected contentions that
the Fourteenth Amendment incorporated various of the criminal procedure
protections of the Bill of Rights. In the 1937 case of Palko v. Connecticut,
Justice Cardozo’s opinion for the Court reaffirmed these precedents, holding
that the protection against double jeopardy was not so “implicit in
the concept of ordered liberty” that its observance was a requirement of
due process. Only principles of justice “so rooted in the traditions and
conscience of our people as to be ranked as fundamental,” those “fundamental
principles of liberty and justice which lie at the base of all our civil
and political institutions,” were so comprehended.39 Yet, two significant
Hughes Court decisions, although they did not incorporate the corresponding
provision of the Bill of Rights, read the Due Process Clause to afford
criminal defendants comparable protections. In Brown v. Mississippi, decided
in 1936, the Court overturned the murder conviction of an African American
man who had denied commission of the offense until subjected to a
38 United States v. Carolene Products Co., 304 U.S. 144, at 152, 153.
39 302 U.S. 319, 325, 328.
severe beating by police. The unanimous Court held that the brutal extortion
of this confession, which constituted the principal basis for the conviction,
was “revolting to the sense of justice.” The states were not bound by
the Fifth Amendment’s prohibition against compulsory self-incrimination,
wrote Chief Justice Hughes, but “[t]he rack and torture chamber may not
be substituted for the witness stand.”40 In 1940 Chambers v. Florida would
extend this principle, unanimously overturning murder convictions secured
on the basis of confessions elicited from four African American defendants
through the sorts of third-degree methods of interrogation condemned by
former Attorney General George W. Wickersham’s Committee on Official
Lawlessness nearly a decade earlier.
The decade similarly witnessed significant development of the right to
counsel in criminal cases. Powell v. Alabama (1932) involved the first trial
of the “Scottsboro Boys,” nine African Americans charged with raping two
white girls. There the Court overturned the capital convictions due to the
failure of the trial court either to provide the illiterate defendants adequate
opportunity to secure counsel or to appoint effective counsel to act on their
behalf. Effective assistance of counsel in a capital case was a necessary component
of the hearing to which a defendant was entitled as a matter of
due process. Powell found a more expansive federal counterpart in Johnson v.
Zerbst (1938), decided the same term as Carolene Products. There the Court
held for the first time that in federal criminal prosecutions the Sixth Amendment
did not merely overturn the older English rule severely limiting the
assistance felony defendants could receive from their counsel. Instead, the
right to assistance of counsel ensured by the Amendment imposed an affirmative
obligation to provide an attorney to federal defendants who were
unable to obtain representation. Not for another quarter-century, however,
would the Court fully guarantee this right to defendants in state criminal
prosecutions.
Enforcement of national Prohibition by federal authorities had also presented
the Court with a series of cases implicating the search and seizure
provisions of the Fourth Amendment. Though the Eighteenth Amendment
and the Volstead Act were successful in reducing the consumption of alcohol
in the United States, by the late 1920s they had come to be regarded
with widespread public disaffection and even disregard. As public enthusiasm
for the “Noble Experiment” waned, the Court routinely excluded
evidence obtained by warrantless searches without probable cause, evidence
obtained by searches beyond the scope authorized by the warrant,
40 Brown v. Mississippi, 297 U.S. at 285–86.
and evidence obtained illegally by state officials cooperating with federal
officials. Powered by demands to stimulate legitimate job growth and to
redirect the resources of federal law enforcement, by the desire for the excise
revenue that legalization could afford, and by Congressional reapportionment
that enhanced the clout of more urban, ethnic constituencies, the
Democrat-led movement for repeal sailed to victory in 1933 with the ratification
of the Twenty-First Amendment. The first constitutional amendment
ever to repeal another was also the only amendment for which Congress has
required ratification by popularly elected ratifying conventions rather than
by state legislatures.
Though many of the decade’s leading criminal procedure decisions
involved the discrete and insular minority of African American defendants,
the Court’s opposition to racial bias in the administration of criminal justice
emerged most explicitly in cases involving discriminatory practices
in the selection of grand and petit juries. The Hughes Court consistently
overturned such convictions, two of which involved subsequent trials of
the Scottsboro Boys. Meanwhile, the Court fired its opening salvo in support
of the NAACP’s incipient campaign to desegregate public education
in the 1938 case of Missouri ex rel. Gaines v. Canada. Rather than admitting
blacks to its state law school or providing separate legal education
to its black citizens within the state, Missouri officials paid the tuition of
black Missourians admitted to law schools in adjacent states. The Court
held that furnishing legal education within the state to whites while not
doing so for its black citizens denied them equal protection. Missouri must
either admit its qualified African American residents to its existing state
law school or establish within the state “substantially equal” facilities for
their legal education.41
By 1938 the justices could also claim to have invalidated numerous
statutes restricting the operation of the political process. Critical to the proper
functioning of that process was the freedom of the press. In the 1931 case
of Near v. Minnesota, the Court struck down a statute authorizing the imposition
of prior restraints on publication of any malicious, scandalous, or
defamatory matter, even if true. Such a restriction on the power of the press
to report and criticize the actions of public officials, wrote Chief Justice
Hughes, was “the essence of censorship.”42
While the White and Taft Courts had developed the modern “clear and
present danger” framework for analyzing questions of free speech, it was
41 Missouri ex rel. Gaines v. Canada, 305 U.S. 337, at 351.
42 Near v. Minnesota, 283 U.S. 697, at 713. See also Grosjean v. American Press Co., 297 U.S.
233 (1936).
the Hughes Court that deployed the doctrine to shield political dissenters
from prosecution. Though the Court upheld denial of citizenship to foreign
pacifists – two of them women – for refusing to pledge armed defense of
the United States, the justices repeatedly vindicated the speech and assembly
rights of leftist citizens. In Stromberg v. California, decided in 1931,
the Court overturned the conviction of a summer camp counselor for violating
a California statute prohibiting the display of a red flag “as a sign,
symbol or emblem of opposition to organized government.” Stromberg had
supervised campers in a daily ceremony in which the children raised a reproduction
of the flag of the Communist Party of the United States. During
the ceremony the children stood, saluted, and recited a pledge of allegiance
“to the workers’ red flag, and to the cause for which it stands; one aim
throughout our lives, freedom for the working class.” “The maintenance of
the opportunity for free political discussion to the end that government may
be responsive to the will of the people and that changes may be obtained
by lawful means,” wrote Chief Justice Hughes, “is a fundamental principle
of our constitutional system.” Insofar as the statute was “so vague and
indefinite” that it might be construed to punish protected expressions of
“peaceful and orderly opposition to government by legal means and within
constitutional limitations,” the Court held, it offended this fundamental
principle.43
In 1937 the Court again invoked this fundamental principle, unanimously
reversing Dirk De Jonge’s conviction under Oregon’s criminal syndicalism
statute for his participation in a peaceable assembly of the Communist
Party at which no unlawful conduct was advocated. “[P]eaceable
assembly for lawful discussion cannot be made a crime,” Hughes insisted.
“The holding of meetings for peaceable political action cannot be proscribed.”44
Later that year the Court again vindicated this fundamental
principle. Angelo Herndon, an African American and a paid organizer for
the American Communist Party, had been convicted for violating a Georgia
statute prohibiting any attempt, “by persuasion or otherwise, to induce others
to join in any combined resistance to the lawful authority of the State.”
Herndon had held meetings seeking to recruit members for the Party and
was found in possession of Party literature advocating mass action and revolutionary
struggle against the ruling white bourgeoisie. But there was no
evidence that Herndon had read or distributed any of the literature, nor that
he had himself advocated or incited the forcible subversion of governmental
authority. A divided Court held that to construe the statute to prohibit
Herndon’s actions deprived him of his rights of free speech and peaceable
assembly. Unlike the power of the state to regulate ordinary commercial
43 283 U.S. 359, at 361, 362, 369. 44 De Jonge v. Oregon, 299 U.S. 356, 365.
transactions, the power to abridge freedom of speech and assembly was
“the exception rather than the rule” and “must find its justification in a
reasonable apprehension of danger to organized government.”45
Among the chief beneficiaries of this New Deal for free speech was organized
labor. In Hague v. CIO, decided in 1939, the Court held that the Fourteenth
Amendment protected the freedom of labor organizers to assemble
peaceably to disseminate and discuss information concerning the provisions
of the NLRA. A municipal ordinance prohibiting exercise of those rights on
public streets and in public parks unless authorized by municipal officials
enjoying unfettered discretion was accordingly facially void, as was an ordinance
absolutely prohibiting distribution of handbills. The following year,
in Thornhill v. Alabama (1940), the justices relied explicitly on Footnote
Four to make clear that peaceful labor picketing was also protected speech.
Exercise of the right to picket was subject to reasonable regulation to preserve
order, and acts of violent intimidation and defamation lay outside
the scope of constitutional immunity. But state statutes and common law
policies prohibiting peaceable persuasion and communication of grievances
impaired “those opportunities for public education that are essential to
effective exercise of the power of correcting error through the processes of
popular government.”46
The Court’s solicitude for rights of expression ran to religious speech
as well. In the late 1930s and early 1940s the Jehovah’s Witnesses, often
assisted by the American Civil Liberties Union, conducted a vigorous and
remarkably successful litigation campaign to vindicate the rights of their
members to proselytize for their faith. Between 1938 and 1940 the Court
invalidated several ordinances prohibiting the distribution of literature on
public streets. During this time the justices similarly struck down as prior
restraints on expression ordinances punishing the distribution of literature
and solicitation of contributions on the public streets or door-to-door
without first obtaining a permit that might be conferred or withheld in
the discretion of local authorities. Such regulations, the Court maintained,
restricted “appropriate means through which, in a free society, the processes
of popular rule may effectively function.”47 This string of victories
under the Free Speech and Free Exercise Clauses was brought to an abrupt
if temporary halt in Minersville School District v. Gobitis (1940), in which
the Court upheld a state regulation requiring public school students to
participate in a daily ceremony saluting the flag and reciting the Pledge
of Allegiance. Rendered in the midst of Hitler’s devastating conquests in
45 Herndon v. Lowry, 301 U.S. 243, 258 (1937).
46 Thornhill v. Alabama, 310 U.S. 88, 95.
47 Minersville School District v. Gobitis, 310 U.S. 586, 599 n.6 (1940).
Western Europe in the spring of 1940, the decision held that the scriptural
injunction against bowing down before graven images must yield to the
public interest in promoting sentiments of patriotism and national unity.
The Free Exercise Clause did not exempt individuals from the commands
of generally applicable laws that did not target the religious commitments
of particular sects. Justice Stone, dissenting alone, affirmed the values and
elaborated the theory he had articulated in Footnote Four. Asserting that
the Constitution required more than “that democratic processes must be
preserved at all costs,” Stone maintained that the free exercise rights of “this
small and helpless” “discrete and insular minority,” which were “admittedly
within the scope of the protection of the Bill of Rights,” must be secured
through a more “searching judicial inquiry into legislative judgment” than
that afforded by the majority.48 A properly functioning democracy afforded
protection of such minority rights.
Stone’s position would command a majority within three years. In Jones
v. Opelika (1942), the Court upheld the application of non-discriminatory
municipal license taxes on itinerant sales agents to Jehovah’s Witnesses
selling religious literature. Stone again dissented, insisting that the freedoms
of speech and religion – two of the “Four Freedoms” identified by
President Roosevelt in his 1941 State of the Union address – occupied
“a preferred position.”49 Those freedoms could thus be afforded no less
protection from burdensome taxation than the Court had given ordinary
commercial transactions in interstate commerce. This time, however, Stone
was joined in dissent by three members of the Gobitis majority – Justices
Black, Douglas, and Murphy. Apparently influenced in part by the outpouring
of unfavorable commentary on the decision and reports of widespread
and often violent private and official persecution of Witnesses that followed
in its wake, these justices took the extraordinary step of confessing error
in voting to uphold the compulsory flag salute. By the following term
this dissenting bloc had become the core of a new majority to renounce
both Jones and Gobitis. Now non-discriminatory license taxes could not be
imposed on the privilege of selling religious literature, the door-to-door
distribution of such literature could not be prohibited, nor could the flag
salute be made compulsory. “If there is any fixed star in our constitutional
constellation,” wrote Justice Jackson overruling Gobitis in West Virginia
Board of Education v. Barnette (1943), “it is that no official, high or petty,
can prescribe what shall be orthodox in politics, nationalism, religion, or
other matters of opinion. . . . Authority here is to be controlled by public
opinion, not public opinion by authority.” Alluding to “the fast failing
efforts of our present totalitarian enemies,” Jackson cautioned that “[t]hose
48 310 U.S. 586, at 606–07. 49 Jones v. Opelika, 316 U.S. 584, 608.
who begin coercive elimination of dissent soon find themselves eliminating
dissenters. Compulsory unification of opinion achieves only the unanimity
of the graveyard.”50
Cases involving voting rights illustrated both concern for the proper
functioning of the political process and the doctrinal limitations on that
commitment. In the 1915 decision of Guinn v. United States, the Court had
unanimously invalidated an Oklahoma suffrage regulation exempting from
its literacy requirement anyone lineally descended from a person qualified
to vote in 1866. This “grandfather clause,” obviously designed to exempt
whites but not blacks from the literacy test, violated the Fifteenth Amendment’s
prohibition against racial discrimination regarding the right to vote.
A special session of the state legislature had responded by enacting a new
election law bestowing permanent registration status on anyone who had
voted in 1914 under the now-invalidated election law and granting all other
qualified electors only twelve days within which to register or be permanently
disfranchised. The effect of this transparent attempt to prolong the
discriminatory impact of the grandfather clause was not as great as one
might surmise: African Americans were permitted to register and vote in
most counties despite the statute. When an Oklahoma citizen disfranchised
under the statute brought a constitutional challenge in the 1939 case of
Lane v. Wilson, he won the support of a unanimous Court.
In other voting rights cases, the results were mixed. In Nixon v. Herndon,
decided in 1927, a unanimous Court had held that a Texas statute excluding
its black citizens from participation in the primary elections of the
Democratic Party denied them equal protection. The Texas legislature had
responded by repealing this statute and enacting another simply authorizing
the Executive Committee of each of the state’s political parties to
prescribe qualifications for membership and participation in its primary
elections. The Executive Committee of the state Democratic Party had in
turn adopted a resolution excluding blacks from voting in its primaries.
In Nixon v. Condon, handed down in 1932, a narrowly divided Court held
that, as the authority to prescribe the qualification was derived from the
statute, the action of the Executive Committee constituted impermissible
discriminatory state action. Three weeks after the decision in Condon, the
state Democratic convention adopted a resolution limiting membership in
the party to white voters. This time, however, a unanimous Court invoked
the state action limitation in rejecting the black petitioner’s equal protection
challenge. In 1935 Grovey v. Townsend held that the Texas Democratic
Party was a voluntary, private association, unconstrained by the requirements
of the Fourteenth Amendment. And in Breedlove v. Suttles, rendered
50 319 U.S. 624, 641–42.
in 1937, a unanimous Court upheld a provision of the Georgia constitution
treating payment of a poll tax as a prerequisite to exercise of the elective
franchise. Though proposed constitutional amendments to abolish the poll tax
as a prerequisite to voting in federal elections would be introduced
in Congress regularly over the next twenty-five years, it was not until
1964 that the goal was achieved through ratification of the Twenty-Fourth
Amendment, nor until 1966 that the Court would invalidate poll taxes for
state elections as well.
The white primary’s lease on life would prove short by comparison. As the
culminating step in a more general reorganization of the Justice Department
in the 1930s, Attorney General Frank Murphy created the Civil Liberties
Unit (later renamed the Civil Rights Section) of the Criminal Division in
early 1939. Encouraged by the success of prosecutions under federal statutes
prohibiting peonage and involuntary servitude in the mid- and late-1930s,
the Unit initiated a series of actions under Reconstruction-era civil rights
statutes in cases involving both official and private infringements of civil
rights and liberties. Among these was the 1941 case of United States v. Classic,
in which the justices sustained convictions under the Enforcement Act of
1870 of Louisiana Commissioners of Elections who had fraudulently tabulated
the results of a Congressional Democratic primary election. Qualified
voters had the right to participate in Congressional primary elections that
were either integral to the selection process or that effectively determined
the ultimate electoral outcome, and Congress could protect that right by
appropriate legislation. Three years later, in Smith v. Allwright (1944), the
Court relied on Classic in overruling Grovey v. Townsend. Because a series
of state actions had made the Democratic primary integral to the electoral
process, party determinations of eligibility to participate constituted state
action within the meaning of the Fifteenth Amendment. The invalidation
of the white primary, bolstered by shifting white attitudes and the enforcement
efforts of the NAACP and the Justice Department, contributed to a
dramatic increase in Southern black voter registration: from 3 percent in
1940 to 20 percent in 1952.
Despite the obvious significance of these decisions as articulations of official
constitutional ideology, their actual impact on law and social practice
should not be overstated. Poverty and lack of access to adequate legal services
conspired with improvised strategies of official evasion and private intimidation
to diminish the significance of incipient constitutional protections
for those accused of crime, and the Warren Court revolution in criminal
procedure doctrine still lay two decades in the future. Widespread disfranchisement
of Southern blacks would persist until enactment of the Voting
Rights Act of 1965. The Court would soon sustain the administration’s disgraceful
wartime treatment of Japanese Americans, the Cold War’s severe