Guest Essayist: Joerg Knipprath
Signing of the Constitution - Independence Hall in Philadelphia on September 17, 1787, painting by Howard Chandler Christy, on display in the east grand stairway, House wing, United States Capitol.

The United States Constitution is the product of a process which attempted to address perceived inadequacies of the Articles of Confederation in dealing with practical problems of governance. Its writers sought to provide practical solutions, shaped by their experiences. On that matter, it was irrelevant whether the Philadelphia Convention technically acted outside its charge from the states and the Confederation Congress and produced a revolutionary new charter, which argument James Madison disputed in The Federalist, No. 40, or whether the Constitution was a mere extension of the Articles and “consist[ed] much less in the addition of NEW POWERS to the union, than in the invigoration of its ORIGINAL POWERS,” as he averred in essay No. 45.

There are numerous devices in the Constitution to frustrate utopian schemes. Most of them are structural. The drafters understood that utopian schemes were more likely to succeed in smaller and more homogeneous communities. Madison, in The Federalist, No. 10, identified the problem as one of faction, in which members of a community unite under a common passion or interest to gain power. Derived from the natural inequalities among human beings, factions are a foreseeable part of society. While democracies are most susceptible to control by an entrenched faction, small republics are not immune. The danger abates somewhat across a state and is least likely to arise in the nation as a whole and within its general government. As he explained vividly:

The influence of factious leaders may kindle a flame within their particular states, but will be unable to spread a general conflagration through the other states: a religious sect may degenerate into a political faction in a part of the confederacy; but the variety of sects dispersed over the entire face of it, must secure the national councils against any danger from that source: a rage for paper money, for an abolition of debts, for an equal division of property, or for any other improper or wicked project, will be less apt to pervade the whole body of the union than a particular member of it; in the same proportion as such a malady is more likely to taint a particular county or district, than an entire state.

Madison continued in the same vein in essay No. 51: “In the extended republic of the United States, and among the great variety of interests, parties, and sects, which it embraces, a coalition of the majority of the whole society could seldom take place upon any other principles, than those of justice and the general good …. And happily for the republican cause, the practicable sphere may be carried to a very great extent, by a judicious modification and mixture of the federal principle.” [Emphasis in the original.]

In Madison’s view, the emergence of a permanent majority faction was more concerning, as minority factions would be controlled through majority voting. Fortunately, the diversity of religious, economic, ethnic, and customary influences creates shifting alliances among various factions, none of which would become an established majority at the national level. This creates a protective moat against radical policies which one faction might seek to impose on the country. In addition, the structural balance of formal constitutional powers between the national government and the states prevents any utopian faction in one state from readily spreading to another. This “federal” structure is enhanced by what Madison considered to be the adoption within the Constitution of the principle of subsidiarity, that is, that most political matters would be handled at the lowest level of political units, rather than by Congress.

In essay No. 51, Madison also explained another protection against a radical utopian faction gaining hold of the national government, the separation of powers. That separation consists of two parts in the Constitution, namely, provisions which guarantee to each of the branches a degree of immunity and independence from the others, as well as provisions which create a blending and overlapping of functions and require the different branches to collaborate to create policy. Examples of the first group of provisions are the control each chamber of Congress has over its membership and the immunity of its members from prosecution for debates in Congress; the President’s privilege to withhold information from the other branches, protected under, among other sources, the “executive power” clause; and the Supreme Court’s tenure during “good Behavior.” Examples of the second are Congress’s power of the purse, the requirement that both chambers agree to the same legislation, the President’s qualified veto over legislation, and the Court’s power of judicial review. These protections help guard against rash policies and, as Alexander Hamilton phrased it in The Federalist, No. 78, “dangerous innovations in the government, and serious oppressions of the minor party in the community.” Moreover, many state constitutions incorporate similar principles of separate, yet overlapping, powers.

Leaving aside the unelected judiciary, the selection process for these offices supports the protection against radical utopian factions. Much of the operation of the political system under the Constitution, as distinct from its substantive powers, is ultimately founded in the federal system of states. Madison addressed the complex interrelationship between national and federal characteristics of the Constitution in The Federalist, No. 39. The people elect the House of Representatives, and representation is apportioned among the states on the basis of population, which are “national” characteristics. However, the states respectively determine the qualifications of the voters through their control over the electoral franchise for their own legislatures. Moreover, each state is guaranteed at least one seat, so even the population basis of the House is qualified by the existence of the states. The Senate is organized on the basis of the equality of the states in their corporate political capacities, a “federal” characteristic, and members originally were elected by the state legislatures. The president is selected by a body whose electoral votes are allocated among the states through a combination of population and state equality. Moreover, these electors are selected in a manner determined by the state legislatures. As Madison explained in essay No. 39, the eventual election of the president was expected to be made by the House of Representatives, but on those occasions voting by state delegations.

The state-centric nature of these operative aspects of the constitutional structure helps diffuse power among various constituencies within a state and among different states. House members today are typically elected in single-member districts, whose constituents might be quite diverse from district to district. As originally envisioned, presidential candidates were selected by electors through a national, or at least regional, frame of reference. With the advent of modern political parties and the demographic changes over the past century, the president today is elected by a national constituency. Still, the need to gain the endorsement of one of the two major political parties by appealing to different types of constituencies complicates the efforts of a radical faction’s candidate to gain sufficient power to orient the nation’s policies in a utopian direction. Political pragmatism and compromise are the inertia within the system.

One might add to these constitutional rules others of a more institutional origin. One such device which protects against utopian projects by a majority faction is the Senate’s “filibuster” rule. Another is the collection of arcane parliamentary procedures in the houses of Congress which can be used to derail or moderate legislation. Yet another is the committee structure and, at least in the past, the seniority system for chairmanships when powerful committee chairmen could frustrate the demands of the majority.

The problem with this presentation of a machine-like system operating under clear constitutional rules, rules that create a carefully calibrated balance among various political actors while allowing government to function, protecting minority rights, and guarding against dangerous utopian tendencies, is that it is too flattering to be realistic. Seeing the political system only through the technical functioning of its rules is slanted and presents what one might call a “utopian” view. In fact, a hard look at the current system is needed to see how differently it operates.

At the level of national versus state governments, both consume a vastly greater percentage of Gross Domestic Product than a century ago, never mind two centuries ago. The national government’s share in particular has increased many times over. The national debt is at a record peacetime high in relation to GDP. The current use of debt by all levels of government would make the schemers in the state governments of the 1780s blanch. Congress today uses its legislative powers over interstate commerce, taxing, and spending to intrude into the most local and personal activities. Madison’s explanation in essay No. 39 of The Federalist that the national government’s jurisdiction extends to only a few enumerated ends, while the states have “a residual and inviolable sovereignty over all other objects,” seems quaint, even quizzical. Indeed, the very concept of residual state sovereignty has been neutered through Congress’s use of its taxing and spending powers, just as the Anti-federalists predicted and Hamilton attempted to refute in essays No. 32 and 33 of The Federalist. Prodigious government grants of money are a lifeline for much academic research, and those funds are readily applied to advance utopian projects by their recipients. As to legislation at the state and local levels, the ubiquity of laws far surpasses that of the earlier time, a product perhaps of a more complex society or of the fact that legislating has become a full-time occupation for many politicians today.

As to the separation of powers, the contrast between the Constitution’s text and the operation of the system is, if anything, starker. The “alphabet agencies” that have proliferated, unencumbered by doctrines of separated functions, make rules, enforce them, and adjudicate violations of those rules in formally civil, but functionally criminal, proceedings. Those rules, adopted by generally unaccountable “independent” commissioners, administered by career functionaries, and virtually immune from judicial challenge, constitute the vast majority of the American corpus juris. There has been significant research into the political tactic of “regulatory capture,” whereby private entities, be they businesses, unions, or ideological “NGOs” (non-governmental organizations), effectively take control of regulatory agencies. The danger with the last of these is that they often pursue utopian agendas behind the label “public interest,” rather than the more prosaic economic benefits to which the first two usually direct themselves.

There has been a concomitant expansion of executive power. The growth of the White House budget for various in-house offices, agencies, and directors which often parallel the domains of the formal constitutional departments, yet are independent of them, is one measure. As well, vast delegations of authority by Congress to the executive branch occurred as early as the Woodrow Wilson administration. The Supreme Court took some desultory steps against such delegations during the 1930s. Justice Felix Frankfurter warned about the expansion of executive power in his concurrence in Youngstown Sheet & Tube v. Sawyer, the famous Steel Seizure Case in 1952. Yet the Supreme Court has not struck down such a delegation in nearly a century. Some of this delegation, as well as broad ritualistic claims of inherent executive authority, arose in connection with war or other emergencies. Unsurprisingly, those powers continued during peace. A claim of discretionary power to act in emergencies inevitably produces more claims of emergencies. As shown by quite recent history, similar displays of broad executive power and uncontrolled administrative governance are part and parcel of state and local systems, as well.

As to constitutional barriers to utopianism provided by the electoral structure, or institutional barriers through the filibuster, one must wonder about their continued efficacy. Gerrymandering of districts has produced many “safe” partisan districts, where primary elections control the eventual outcome. Primary elections—or local party caucuses—attract the most ideologically committed participants. Such gerrymandering has been blamed for the election of candidates committed to ideologically pure, but practically harmful, utopian policies.

The mobility of American society and the advances in communications technology and entertainment have challenged Madison’s basic assumption about the diversity of interest groups rooted in different geographical areas. The electorate has become much more homogeneous and “national,” so that a nation-wide electoral majority might degenerate into an ideological faction similar to what Madison described as the danger in a local democracy. Candidates, too, are less dependent on the moderating influences of party organization. One need only consider the emergence of billionaire-politicians and celebrity-politicians who can use their money or status to capture a party’s nomination and, subsequently, the office, without the support of the party’s established apparatus. Institutional restraints, such as the filibuster, have been weakened and are threatened with elimination, which would further undermine protections against a bare majority faction in Congress imposing utopian projects on the country.

Madison dismissed the dangers of a minority faction controlling Congress because of the “republican principle” of majority vote. But a minority faction driven by utopian fervor is much more likely to coalesce than a majority, and Madison’s faith in the vote overlooks that danger. It has long been established that an ideologically committed, organized minority can control an unorganized majority, in politics or otherwise. The large economic or psychological benefits of a policy to the members of the minority outweigh the proportionally smaller costs to each member of the majority. With the increased and hidden power of unelected entities described earlier, the danger becomes more acute. One example should suffice: Before his inauguration, President-elect Donald Trump challenged leaked, unsubstantiated claims by American intelligence agencies that Russia had hacked the 2016 election. Senator Charles Schumer then warned, “You take on the intelligence community—they have six ways from Sunday at getting back at you.” Schumer was not alone in that prognosis. The specter of a hidden intelligence apparatus undermining the president in pursuit of an ideological objective has been raised many times over the past decades and is in direct conflict with the constitutional order.

In similar manner, the doctrine of judicial review has increasingly been used to advance constitutional novelties. The Constitution provides a formal amendment process, based on broad super-majoritarian approval that is, in Madison’s description, partly federal and partly national. It requires broad consensus in Congress and among the people or legislatures of the states. There has also developed an informal amendment process which retains elements of popular approval and consensus. For example, when Congress passes a law, the president signs it, and there is no successful constitutional challenge brought to the law in the courts, continued and open adherence to that law by the people over time makes that law’s political essence part of the constitutional fabric. A similar development occurs if a significant number of states pass laws respecting a particular matter of state control, if those laws do not conflict with a clear constitutional provision. A constitutional challenge to such well-established laws years later ordinarily should be rejected, because, as Hamilton stated, the purpose of judicial review is to prevent sudden popular passions from passing laws which violate established constitutional rules and threaten individual rights.

In that sense, judicial review is “conservative.” Judicial review is not intended to have five unelected judges decree a novel constitutional order by overturning long-established laws. That is the function of lawmakers in legislatures or constitutional conventions. Yet, the Supreme Court at times has taken on that function by discovering fanciful, previously unheard-of constitutional meaning in ambiguous clauses. These discoveries typically reflect the views of a narrow socio-political elite more than those of the citizenry at large. An ideologically committed minority faction is thus able to impose its utopian vision on the majority.

One can easily come up with more examples of current functional weaknesses and dysfunctions in the constitutional system described by the writers of The Federalist. The Anti-federalists broadly predicted many of the current developments, although it is to be doubted that their proposals, to the extent they had any, would have worked better to preserve the republican nature of the original order. Nor is it to be understood that all changes are necessarily bad. One might well agree with the social benefit of some of the constitutional innovations from the Supreme Court, yet be concerned about the way in which those changes came about. One might accept that some of the actions of the unelected agencies have been for the public good, yet worry about the threat to republican self-government posed by the bureaucratic state of self-declared “experts.” One might favor certain policies enacted into law by Congress, yet question the desirability of a system which increasingly micromanages life from thousands of miles away.

There are many ominous signs which suggest that we have lost our republican form of government as envisioned by the Framers. What we have left, it often appears, are certain trappings and rituals, much as happened with the Roman senate and other republican institutions during the Roman Empire and beyond. Perhaps the classic expositors of republics were right, that such a form of government cannot exist over a large area with many diverse groups of people. Perhaps Madison’s faith in the representative system was shaped by an implicit acceptance of the Aristotelian assumption that self-government was possible only in a community small enough that one could speak of “friendship.” There was much debate among classic writers about the size limits of community. One person did not make a polis. With 100,000, one no longer had a polis. At the time of the Philadelphia Convention, the largest state, Virginia, had a population over 800,000, including slaves. The next largest, Pennsylvania, had under 500,000. The debate over proper-sized districts for the House of Representatives, the most “republican” part of the government, settled the number at 30,000 residents per representative. The Anti-federalists challenged this ratio as too high and unrepublican by pointing out that in Pennsylvania’s state legislature, the ratio was one representative for each four to five thousand residents. Madison replied in essay No. 55 that the House of Representatives would only deal with national matters which do not require particular knowledge of local affairs or connection to specific local sentiments. Today, each congressional district approaches the then-population of Virginia, and the Congress regularly passes laws which have profound local effects. Whether or not Aristotle was correct about the precise limits of “community,” surely it beggars belief to say that today’s congressional districts are republican in anything but name.

Long tenures in office were another danger, according to republicans. The Articles of Confederation limited the number of terms a member of Congress could serve. The Constitution does not. Hence it is common for representatives to spend decades in office, which results in part from the difficulty of ousting incumbents in large districts gerrymandered to protect them. It is problematic to claim that such effective life tenures are “republican.”

Another important role in republican systems is played by various non-governmental social associations, such as the family, religious institutions, unions, and charitable groups. The 18th-century Anglo-Irish philosopher and politician Edmund Burke centered his theory of constitutional stability on the vitality of such institutions, which represent tradition and continuity and thereby guard against radicalism and turmoil. Burke was quite familiar to Americans for his vindications of their political claims before and during the Revolutionary War. He contrasted the stability of the English constitutional system with the situation in France, and he was horrified by the violence of the French Revolution which grew from its utopian radicalism. It is inevitably the object of totalitarian governments to destroy or subjugate such intermediary institutions, which threaten the power of the state over the people.

One must consider, then, some uncomfortable topics. To what extent has the American family structure been undermined by divorce, single parenthood, and various incentives created through the welfare state? How significant are religious institutions in the life of Americans today compared with preceding generations? With the exception of public employee unions, how significant are labor unions today? The same question must be asked about the vitality of local business associations and related service clubs, which played such significant roles in communities in the past.

The great Northwest Ordinance of 1787 declared, “Religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.” This goal reflects the inculcation of private virtue which the different groups of American republicans agreed was a necessary basis for the preservation of republican government, even if some argued that it was not a sufficient one. Are educational institutions fulfilling their task of teaching the heritage, morals, and substantive knowledge upon which the founding generation staked the success of their republic, or has the radicals’ long march through those institutions corrupted that mission?

Is the current dynamic of identity politics leading us to a dangerous tribalism which tears the social bonds necessary for a stable and peaceful community? If factions are the bane of republican systems, will the stress of this anarchic impetus ultimately lead to a collapse into the tyranny which the Anti-federalists feared?

If freedom of the press is needed for a “republican form of government,” are the media providing useful information to the public, or at least performing their self-appointed task of bravely and indiscriminately “speaking truth to power”? Or have they become so ideologically blinded, so convinced of the righteousness of their quest to indoctrinate the public, that they have vindicated Thomas Jefferson’s indictment of the press in his 1807 letter to John Norvell: “Nothing can now be believed which is seen in a newspaper. Truth itself becomes suspicious by being put into that polluted vehicle…. The man who never looks into a newspaper is better informed than he who reads them, inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods and errors.”

Many of these dysfunctions were spawned by utopian schemers who without thought or hesitation cast aside rules and institutions forged in human experience. They failed to heed G.K. Chesterton’s warning in his parable of the fence built across a road not to tear it down until one clearly understands why it was erected in the first place.

As explored over these 90 sessions, the Constitution’s drafters constructed a framework of republican government and the means to preserve it. The structural components of the system, functioning as intended, assist that task. However, the Framers understood their own fallibility and the fragility of their creation. The Constitution is just a parchment. To give it life requires the attention of a civically militant citizenry committed to the preservation and functioning of its parts. That is the politics of the true “living constitution.” And, as has been said, politics is downstream from culture. The French philosopher Joseph de Maistre pungently observed, “Every nation gets the government it deserves.” Although his comment was about Russia, it would have particular relevance for a republic. Likewise, in his famous aphorism, Benjamin Franklin did not just say to Mrs. Elizabeth Powel that the convention had produced a republic. He added, wittily but ominously, “if you can keep it.” The question is whether the American people continue to be up to the challenge.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




Utopianism appears to be inbred in the human brain—the desire for the perfect life, however a person might define that. Parents tell children to “follow their dreams.” Adults, too, often follow suit. Examples abound, from the ‘49ers of the California Gold Rush, to the “drop-out” hippie communes of the 1960s, to current athletes and entertainers. From print publications to electronic media, the protagonists of many stories—fictional or true—are those who “followed the beat of their own drum.”

This human trait is admirable and marks us as more intellectually complex than brute animals. The Aristotelian understanding of “happiness”—eudaimonia—is that quest for a fulfilled and flourishing life, to be “truly human.” One might never fully attain that state, or, Aristotle advises, one might not fully comprehend it until one is close to death. Even the failure of such a quest, though, can teach valuable lessons. A person might abandon the journey to become a singer once she realizes that the agitation among the neighborhood’s cats stems from the sounds she emits. Instead, perhaps a new dream to become a talent agent forms to motivate her. Looked at another way, even if her utopian vision fails completely, it likely affects only her and perhaps a few around her.

By contrast, utopianism at the level of societies is much more dangerous to human flourishing. At that scale, failure, such as the collapse of a polity, affects multitudes in a profoundly existential manner. The ship of state requires a calm hand at the wheel. Phronesis, the classical virtue of practical wisdom, must control, not utopian passion. The statesman must have the clear ability to make the moral and practical choices which conduce best to the well-being of the community.

Still, there lurks the unsatisfied yearning to achieve, or to return to, the perfect society. It is the psychological desire to return to a Garden of Eden and a state of perfect innocence. From a Neo-Platonic perspective, which influenced the writings of St. Augustine and other early Christians, this yearning might reflect the human soul’s longing to attain union with the ultimate Good, or God.

Writers since ancient times have dabbled in philosophic creation projects of ideal societies. Plato’s 4th-century B.C. Politeia (The Republic), his prescription for a government run by a “guardian class” of philosopher-kings, is an early example. Thomas More’s 1516 book Utopia, about an ideal society dwelling on an idyllic island, is another. More recently, Karl Marx’s writings about the process of historical transition which ultimately would end class strife through the establishment of a classless, communist society dazzled many acolytes. Common to these three particular works, it should be noted, was opposition, in some manner or another, to private property. Another commonality was a degree of hostility to the traditional nuclear family structure.

At least the first two of these works are not necessarily to be taken at face value. The revolutionary changes which would be necessary to establish Plato’s ideal republic conflict with fundamental philosophic views he expressed in other writings. Moreover, he was quite clear about the inevitability that the project would fail due to the passions which are part of human nature. His work is a warning at least as much as it is a blueprint.

More’s work is satirical through and through, from the book’s title (a play on two similar-sounding Greek words meaning “no place”—Utopia—and “good place”—Eutopia), to the names of various places and persons within the work, to the customs of his island’s denizens. It was a satire of English society, but also a warning about societies unmoored from Christian ethics.

Along with utopian philosophies have come utopian projects. The Plymouth Colony of the Pilgrim Fathers, founded in 1620, was organized initially along communist principles of land cultivation. The disastrous economic consequences of that brief, two-year experiment threatened the very existence of the colony. Fortunately, the misstep was soon corrected. A similar fate awaited Robert Owen’s utopian “socialist” colony at New Harmony, Indiana, which turned from a prosperous religious settlement when sold to Owen in 1825 into an economic shambles by 1828. The religious predecessor had also held property in common, but within a tightly-knit religious community. Owen’s associates lacked any strong bonds of community. As one contemporary commentator noted, “There are only two ways of governing such an institution as a Community; it must be done either by law or by grace. Owen got a company together and abolished law, but did not establish grace; and so, necessarily, failed.” He might have added one additional approach, the use of relentless force.

Often, these utopian communities are driven by a fervent vision of a new type of society founded on religious principles. They seek to create an earthly community close to God. Besides the Pilgrims, the Shakers and other charismatic groups come to mind. Others, like the Owenite socialists, are motivated by more secular ideologies. Sometimes, an odd brew of messianic zeal and political ideology is blended, as in the “apostolic socialism” of Jim Jones’s People’s Temple in Guyana. These groups eventually adapt their dogma to the complexities of human nature and the real-world challenges of social living, as the Pilgrims and the Latter Day Saints did. Or they disintegrate, as was the fate of the Owenites and the Shakers. Tragically, some come to a violent end under the thrall of a toxic “prophet,” as did the unfortunates of the People’s Temple.

Another factor which contributes to the instability of utopian projects is the scale of the venture. The communities previously mentioned were comparatively small. The Aristotelian ranking of associations from the family to the clan to the polis encompasses ever greater numbers. As those numbers increase and the members’ relationships to each other become more distant, the bonds become looser. Human nature is, essentially, selfish. Self-interest is not necessarily bad. Killing an attacker to save one’s own life has long been recognized as the most fundamental of natural rights. However, another human characteristic, more developed than in lesser species, is altruism.

Altruism, one’s willingness to incur burdens for the benefit of another, is most pronounced with regard to those whom we “know.” The bonds of love are strongest towards immediate family members. They are also present, but less intensely, towards the extended family. Beyond that lie the still significant bonds of friendship, about which Plato and Aristotle mused at length. Aristotle considered the highest form of friendship to be that which is maintained not for what one might get out of it, but for what one does for the benefit of the other. He also considered friendship the key measure of proper self-government in the polis. At some point, however, the number of residents within the community might grow too big for the mutual interactions required to maintain friendship. As that number grows, the psychological tension between self-interest and true altruism inevitably favors the former.

For example, a “communist” approach to work and reward can succeed within a family, perhaps even an extended one of longstanding relationships. Trouble arises when the relationships are not familial. To eliminate this inequality of sentiment, utopian societies seek to undermine or abolish the family and other voluntary affinity groups, which itself is doomed to fail and simply accelerates the group’s collapse. A large utopian society, whose members are not bound together by religion or by rules derived from long-established customs which reflect the traditional ordering within stable communities, requires increasingly brutal force to maintain commitment to the utopian project. Pol Pot’s devilish regime in Cambodia nearly half a century ago is a notorious example of this, as memorialized in the chilling movie The Killing Fields.

No matter how intellectually promising and rationally organized the effort is, human nature and passions will derail the utopian project. Plato laid the problem at the feet of eros—passionate love and desire—which upends the controlled marriage and mating program his ultra-rational utopia required. Among the rulers, nepotism and greed manifest themselves. It is hardly shocking that Fidel Castro amassed a personal fortune estimated at nearly $1 billion, all the while exhorting the unfortunate subjects of his impoverished nation to sacrifice for la Revolución. The inevitable failings of the system set off a hunt for scapegoats, those wreckers who do not show the requisite zeal and who harbor counterrevolutionary or other heretical views.

Within societies which are not openly pursuing some political or religious utopia, there may nevertheless be strong currents of utopianism. In our time and place, the extreme emphasis on risk avoidance is a utopian quest. It has resulted in a bloated legal and administrative apparatus as smaller and more remote and dubious risks are targeted. Economic and social costs are ignored as a health and safety security state takes shape. Those who dissent from the secular millenarian orthodoxy are liable to be marginalized or cast aside like religious heretics. Individual rights of association, religion, providing for oneself and one’s family, and bodily autonomy are subject to the guesses and whims of unelected credentialed “experts.” Yet these measures, when pursued robotically for some ideal beyond what practical wisdom would advise, fail or produce only marginal benefits, often at great cost. Even if they are abandoned, the damage has occurred.

In a related manner, there has been a decades-long quixotic quest to create emotional placidity. While not socially harmful if done on an individual, voluntary basis, compelled “treatments” have been a favorite of ideologues to deal with dissenters. The Soviet Union was infamous for its psychological analyses steeped in Marxist utopianism and its treatment of political dissent as a “red flag” of psychological “deviance.” But the problem festers closer to home, as well. From state-applied electric shock therapy and lobotomies in the past, to the modern approach of psychotropic drugs, a therapeutic totalitarianism has been spreading. Those who dissent, especially parents who balk at such drug use or at school “safe zone” counseling done behind their backs, are liable to find themselves ridiculed or worse.

The delegates to the Philadelphia Convention were educated in classic writings and western history. They were not naïfs about human nature or politics. They understood lessons from the failures of regimes and the dangers of utopian projects, as did their opponents in the debate over ratification. Moreover, their own experience from the Revolutionary War, the Articles of Confederation, and service in their state governments had immunized them against utopian speculations. Illustrative of that skepticism is a letter Alexander Hamilton wrote in 1781, even as the struggle for independence still hung in the balance: “there have been many false steps, many chimerical projects and utopian speculations.” He noted that the most experienced politicians were Loyalists. He was registering his complaint about the lack of political sophistication among his co-revolutionaries in the conduct of the war, the adoption of the Articles, and the drafting of state constitutions.

That is not to say that the supporters and opponents of the United States Constitution lacked political and philosophic bearings. Most had a sense of what they wished to achieve, set within a coherent broader philosophic framework. The historian Forrest McDonald, in his far-reaching and detailed analysis of the framing of the Constitution, classifies the delegates into two groups, “court-party nationalists” and “country-party republicans,” analogous to the British Tory and Whig parties, respectively. Among the best-known such nationalists were Washington, Hamilton, Benjamin Franklin, James Wilson, Gouverneur Morris, and Robert Morris. Among the notable republicans were Elbridge Gerry, George Mason, Luther Martin, and Edmund Randolph. Others were more difficult to label. McDonald places James Madison in between the two groups and somewhat harshly judges him “an ideologue in search of an ideology.” He claims that Madison by temperament thought matters through in detail and preferred “the untried but theoretically appealing, as opposed to the imperfections of reality.” Yet, he also concedes Madison’s willingness to abandon politically untenable positions as needed.

A third group, whom McDonald considers arch-republican ideologues, did not attend for varied reasons. They included Thomas Jefferson, John Adams, Sam Adams, Richard Henry Lee, and Patrick Henry. Some of these outsiders and other opponents of the Constitution presented more consistently “principled” arguments, but it is always easier to attack someone’s work than to provide a comprehensive and workable alternative.

None of the groups at the convention had a majority. Moreover, they were not ideological in the modern sense of positing a single abstract moving cause for all human action in the private and public realms. The closest might be the idea that humans act from self-interest. But there was nothing like Marxist economic determinism or Freudian psychoanalysis or current Marxism-derived Critical Race Theory. The various broader theories of government the delegates favored still resulted in differences which at times must have seemed intractable. Some delegates left out of frustration that their ideas about the proper constitutional order were not sufficiently realized.

But most held on and difficult compromises were eventually reached. Even the matter which deadlocked the convention for weeks and threatened more than once to tear it apart, namely the structure of Congress and the mode of representation, ultimately was resolved mostly in favor of the small states through Roger Sherman’s Connecticut Compromise. So, too, was the controversy over Congress’s powers. The small-state proposal of an enumeration of specific powers supplemented by an enabling clause was adopted over the more national position favored by Madison, under which Congress would have had power to address all issues affecting the nation where individual states would be “incompetent to act.” The slavery question was generally avoided. The concept was simply euphemized, rather than expressed. Specific issues, such as the fugitive slave clause and the three-fifths clause to apportion representatives and direct taxes, were borrowed from the Northwest Ordinance of 1787 and a failed amendment to the Articles of Confederation. Whatever might have been the hearts’ desires of various philosophically committed members, compromise prevailed. The result was a system which was partly federal and partly national, as Madison laid out the particulars in The Federalist, No. 39.

As remarked in previous essays, the authors of The Federalist emphasized the influence of experience, not idealism, on the convention’s deliberations, and the process of compromise, not purity, which resulted in a plan suited to the practical demands of governing. Aside from Hamilton’s noted aphorism in The Federalist, No. 6, “Let experience, the least fallible guide of human opinions, be appealed to for an answer to these inquiries,” the authors repeatedly drew on experience under the Articles of Confederation, the state constitutions, and earlier European and ancient systems. That was, of course, also what the convention had done. In No. 38, Madison mocked the variety and inconsistency of objections and their often vague and general nature. While his sarcasm disparages the constructive and systematic efforts of opponents such as New York’s Robert Yates in the “Brutus” essays, Madison’s specific examples illustrate the spirit of pragmatism at the convention. He declared, “It is not necessary that the [Constitution] should be perfect: it is sufficient that the [Articles are] more imperfect.” In No. 41, he acknowledged, “…that the choice must always be made, if not of the lesser evil, at least of the GREATER, not the PERFECT, good; ….” [Emphasis in the original.]

Perhaps the best summation of the pragmatism which steered the delegates as they proceeded with their work was voiced by Benjamin Franklin. He rose on the day of the final vote and implored his colleagues, “Thus I consent, Sir, to this Constitution. Because I expect no better, and because I am not sure, that it is not the best. The opinions I have had of its errors, I sacrifice to the public good….I can not help expressing a wish that every member of the Convention who may still have objections to it, would with me, on this occasion doubt a little of his own infallibility, and to make manifest our unanimity, put his name to this instrument.”

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.



Guest Essayist: Joerg Knipprath

At the 1896 Democratic Party convention in Chicago, a former Congressman from Nebraska, William Jennings Bryan, gave a stirring oration in favor of the party’s “pro-silver” political platform. Filled with passion and a near-revolutionary fire, the speech concluded with a warning to those who wanted the United States to maintain a gold standard for the dollar, “You shall not press down upon the brow of labor this crown of thorns; you shall not crucify mankind upon a cross of gold.” Bryan underscored this patently religious analogy by posing at its conclusion with his arms outstretched like someone nailed to a cross. The convention erupted in pandemonium. The ecstatic reaction of the delegates resulted in the “Boy Orator of the Platte” receiving the party’s nomination for president of the United States at age 36, the youngest major party nominee ever. He became the Democrats’ presidential standard-bearer twice more, in 1900 and 1908, and lost all three elections.

In addition to the Democratic Party nomination, Bryan received that of the more radical, mostly rural-based Populist Party, which favored federal government intervention in the economy. The Populists split after the 1896 election, with most supporters voting for Democrats, while others, typically urban workers, drifted to the Socialist Party. Although historians have long debated the direct influence of the Populist Party on the Progressive movement of the turn of the 20th century, there are clearly identifiable connections between them with regard to economic and political reforms. One difference, however, is in their class identification. The Populist movement was working class and agrarian. The Progressive reformers were upper-middle class urbanites, many from the Midwest. Related to that difference was the greater support for Progressivism among intellectuals and “scribblers,” which produced a more coherent political program and a stronger ideological framework. Ultimately, this produced far greater political success for the Progressive agenda—and more lasting repercussions.

As that passage from the “Cross of Gold” speech suggests, Bryan had a strong evangelical and Calvinist bent. He had a religious conversion experience as a teenager and remained throughout his life a theological conservative who preached a social gospel. His resort to religious imagery and apocalyptic language was not uncommon among Progressives. Theodore Roosevelt could thunder to the assembled delegates at the Progressive Party convention in 1912, “We stand at Armageddon, and we battle for the Lord,” as his enraptured supporters marched around the hall, singing “Onward, Christian Soldiers” and similar spirited hymns.

Those Progressives who were more skeptical of religion nevertheless had similarly messianic visions of reform which would deliver the country from its ills and lead to the Promised Land. The forces for change would be a democratized political structure invigorated by mass participation; a concerted program to attack the root causes of societal dysfunctions, from poverty to alcohol, narcotics, gambling, and prostitution; laws to prevent exploitation of the large urban working class; and, most fundamental, a rational system of policy-making controlled by a strong executive and a stable bureaucracy of technological and scientific experts. As presidential nominee Woodrow Wilson announced in his campaign platform in 1912, “This is nothing short of a new social age, a new era of human relationships, a new stage-setting for the drama of life.” Certainly nothing picayune or transitory about that!

The first of those goals was accomplished over time with the popular election of Senators through the 17th Amendment, and through the adoption by many states of the initiative and referendum process, primary elections for nominations for public office, more expanded “home rule” for localities, and non-partisan elections for local offices. Further, the half of American women excluded from the franchise received it through the 19th Amendment, adopted in 1920. On the other hand, by the late 1920s, the Progressives’ nativism eliminated the previous practice in a number of states of letting non-citizen immigrants vote.

The second came in the form of state laws against vice. Lotteries became illegal. Prostitution, which was ubiquitous at the turn of the 20th century, typically in the form of brothels, was already against the law; those laws began to be enforced more vigorously. Another of America’s periodic movements to ban alcohol got under way. Because state laws often proved unable to control interstate markets of vice made possible through easier modes of transportation, the federal government became involved. Narcotics were regulated through taxation under the Harrison Narcotics Tax Act of 1914. The interstate transportation of lottery tickets was prohibited in 1895 through a federal law upheld by the Supreme Court in Champion v. Ames in 1903. The Mann Act, or White Slave Traffic Act of 1910, prohibited taking a woman across state lines for immoral purposes. That law was upheld by the Supreme Court in Hoke v. United States in 1913 and extended to non-prostitution private dalliances in Caminetti v. United States in 1917. After 27 states declared themselves “dry,” and others adopted “local options” to prohibit alcohol, temperance groups, especially those connected to upper-middle class women’s organizations, succeeded in having the 18th Amendment adopted in 1919. That national ban on production, sale, and transportation of alcohol for drinking was quickly followed by enabling legislation, the Volstead Act, that same year.

The third area of social reform was advanced through the adoption of maximum hour laws, minimum wage laws, unionization protections, and anti-child labor laws around the turn of the 20th century. Some such efforts, especially by Congress, initially came a cropper before the Supreme Court as violations of the United States Constitution. They fared better during the next wave of Progressivism under President Franklin Roosevelt in the 1930s.

The fourth, a government and society directed by an unelected technocratic elite of policy-making experts, lay at the heart of the Progressive movement. It proved to be a long-term project. To understand the “Progressive mindset” requires a closer examination of two men, Woodrow Wilson and Herbert Croly. There were other influential intellectuals, such as Walter Lippmann (who wrote A Preface to Politics in 1913, among many other works) and Brooks Adams (who was a grandson of President John Quincy Adams and wrote A Theory of Social Revolution that same year), but Wilson and Croly were preeminent.

Thomas Woodrow Wilson was dour, humorless, and convinced of the fallen nature of all but the elect few. For human progress to flourish, he postulated the need for strong leaders with proper principles who would provide the discipline and vision for the moral guidance of the weak at home and abroad. Calvinist in appearance, outlook, and family background (his father and grandfather having been Presbyterian ministers), he embodied the caricature of a Puritan divine. Those traits also made him a perfect Progressive.

Before becoming president of the United States, Wilson was a professor at Princeton University, later becoming its president. He also was elected governor of New Jersey. During his academic tenure, he wrote several influential books which set forth his criticisms of American constitutional structure. His proposed solutions cemented his bona fides as a Progressive.

Wilson was strongly influenced by 19th century German intellectual thought, especially Hegel’s views of the State as the evolutionary path of an Idea through history, and by contemporary adaptations of Darwinian theories to social science. Indeed, so enamored was Wilson of German philosophy and university research that his wife, Ellen, learned the language just to translate German works of political science for him.

Wilson enthusiastically embraced the nascent ideology of the State. He characterized that entity as organic and contrasted it with what he described as the mechanical nature of the Constitution with its structure of interacting and counterbalancing parts. As he wrote in Constitutional Government in 1908, “The trouble with [the Framers’ approach] is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton.”

The “organic” State tied to the people in some mystical union must not be shackled by a fusty piece of parchment with its artifice of checks and balances. An entirely new constitutional order must be created that reflects the inevitable ascendancy of the State in human affairs. If that was not a realistic option due to reactionary political forces or sentimental popular attachments, the parchment must be broadly amended. During Wilson’s first presidential term, constitutional amendments to authorize a federal income tax and to elect Senators by popular vote were approved.

Beyond formal amendment of the Constitution, the various components of the government had to be marshaled into the service of Progressivism. Thus, Congress must pass far-reaching laws that increase state power at the expense of laissez-faire individualism. The result was a series of federal regulatory laws in union-management affairs, antitrust, child labor, tax, and—through the creation of the Federal Reserve system—banking. That activism was replicated in many states. The era of big government had arrived.

As usual, the Supreme Court took longer. Though the Court upheld various particulars of Progressive legislation, the organic theory of the state was not embodied forthrightly in its decisions until the later New Deal years and the post-Second World War emergence of the “Living Constitution” jurisprudence. Adherents to the Progressive deification of the State, then and now, have sought to remake judicial doctrine by untethering it from formal constitutional structure in favor of ideological dogma. Their efforts have focused on an expansive interpretation of Congressional powers, disregard of the prohibition against excessive delegation of power to bureaucracies, and a transformation of the Equal Protection Clause into a contrivance for “positive” equality. On that last point, success has been slow in coming. But since every political entity necessarily has a constitution, for Progressives it is beyond cavil that their “organic state” requires a progressive living constitution, one that prioritizes social justice and secures equality of condition. Exempting, perhaps, the governing elite.

That left the Presidency. Wilson’s early work, Congressional Government from 1885, reflected his contempt for American separation of powers and urged constitutional change to a parliamentary-style system with centralized power and an expanded federal bureaucracy. He dismissed the president as a mere “clerk of the Congress.” Over the next two decades, his perceptions about the Presidency changed significantly. Wilson regarded the administrations of Grover Cleveland and Theodore Roosevelt as exemplary. His last major work, Constitutional Government, published in 1908, focused on the Presidency as the engine for change.

Wilson’s eventual views of the Presidency were thoroughly 20th century. He treated the formal constitutional powers of the office as minor matters and regarded its occupant as increasingly burdened by obligations as party leader and as executor of the laws and administrator of Congressional policies. That burden had become impossible for a single man, a refrain frequently heard before and since. This fact of political life would only become more pressing with the inevitable—and welcome—evolution to a more powerful and controlling State.

Therefore, a president will and must leave the performance of those duties increasingly in the hands of subordinates. The appointment of trusted officials was more important than the selection of wise men of different opinion to give him counsel, as George Washington did, or of leaders of prominent factions within the party coalition, as was the practice of, among others, Abraham Lincoln. Instead, as Wilson wrote, presidents must become “directors of affairs and leaders of the nation,—men of counsel and of the sort of action that makes for enlightenment.”

Theodore Roosevelt’s “bully pulpit” construct of the Presidency was the new model. The traditional chief executive dealt with the congressional chieftains to influence policy as it emerged within those chambers in response to the broadly-felt needs of the times. Instead, the modern president would bypass the ordinary channels of political power and appeal to the public to shape policy to his creative vision. Wilson wrote, “The President is at liberty, both in law and in conscience, to be as big a man as he can. His capacity will set the limit….” This Nietzschean conception of the Presidency as a vessel for its occupants to exercise their will to power is quintessentially fascist. The focus on the charismatic and messianic leader as the ideal of government and the vehicle for progress to a utopian just society is a hallmark of American progressivism to this day and has also characterized the more virulent forms of collectivism. There are telling appellations: Il Duce Mussolini, Der Fuehrer Hitler, Vozhd Stalin, El Líder Castro, and North Korea’s Kims (Great Leader, Dear Leader, and Respected Leader). All convey the same meaning. Personality cults inevitably accompany Progressive-style leaders.

Wilson’s descriptions of the Presidency and the reality of political practice had a core of truth; otherwise his prescriptions would not have been plausible. To get to those prescriptions, however, he set ablaze many constitutional straw men. Though he paid lip service to the sagacity of the Constitution’s framers, he understated their practical appreciation of the office. Alexander Hamilton wrote several Federalist Papers that extolled the need for energy and accountability in the Presidency, which he argued were furthered by the Constitution’s structure of the unitary executive. Through his Pacificus Letters, Hamilton became the foundational advocate of a theory of broad implied executive authority on which later presidents relied, including Wilson’s model, Theodore Roosevelt. George Washington shaped the contours of the Executive Branch by his actions within the purposely ambiguous boundaries of presidential powers under the Constitution. There were serious debates in the Washington administration about the nature of the president’s cabinet and the constitutional relationship between the president and the officers, debates that were generally resolved in favor of presidential control over those officers.

Wilson decried what he saw as a lack of accountability in the Constitution’s formal separation of powers. Yet it is his system in which the president stands “above the fray,” while little-known and uncontrolled subordinates carry out all manner of critical policies without, allegedly, his awareness. Events over the past two years have amply demonstrated the flaws of rule by credentialed, but unaccountable, “experts” at all levels of government. Their decrees, too often based on misunderstood or even fabricated “evidence” and produced in a closed culture implacably hostile to dissent, affected Americans in profound economic, psychological, and social ways. Long-cherished individual rights were brushed aside, selectively, by this pretended clerisy through appeals to the greater health of the society and the common good, appeals which were frequently shown not to affect the behavior of the elite elect. All the while, politicians sought to deflect responsibility onto those bureaucrats.

Herbert Croly was perhaps the most important intellectual of Progressivism, next to Wilson. That seems odd, given the tortuous language and convoluted emotive passages that characterize his work. The Promise of American Life, published in 1909, is Croly’s most significant contribution to public debate, one said to have so influenced Theodore Roosevelt that it became the catalyst for Roosevelt’s return to politics as a third-party “Bull Moose” presidential candidate in the 1912 election. Whereas Wilson dealt with constitutional structure and politics, Croly focused on political economy.

In Promise, Croly described himself and his vision as Hamiltonian. But the book painted as “Hamiltonian” something Alexander Hamilton would have forsworn. Croly argued for organization of the economy through coordination among large nationalized corporations, powerful and exclusive labor unions, and a strong and activist central government. This was the classic corporatist model of “rationalizing” the economy. It embraced the essence of fascist political economy and, with some tinkering, of socialist and Progressive systems. Whereas Hamilton proposed to use government incentives to unleash the entrepreneurial and inventive spirit of Americans to create wealth which ultimately would benefit all, Croly wanted the national government to throttle such entrepreneurial opportunity in favor of large entities, enhance the powers of the few, and use public policy to legislate a welfare state for the poor. However, haphazard social welfare legislation would be inadequate. As noted, the program had to encompass the whole of society. Independent small businesses, as elements within traditional American republicanism, were the bane of Progressive true believers in mass organization. Theirs would be a coalition of the wealthy few, an administrative elite, the working class, and the mass of poor against the broad middle.

Another book, Progressive Democracy from 1914, extended Croly’s Progressive canon. It rested on the usual Progressive premises, such as the omnipotent, all-caring, and morally perfect Hegelian God-state that would be the inevitable evolutionary end of Progressive politics. It posited the notion—so common in Progressive and other leftist theory—of stages of human social and political development that have been left behind and whose outdated institutions are an impediment to ultimate progress. Hence, Croly insisted, the Constitution’s structure of representative government and separation and division of powers needed to be, and would be, changed. Unlike the societal realities of the late 18th century which had produced American republicanism in the form of representative government within a federal structure, “In the twentieth century, however, these practical conditions of political association have again changed, and have changed in a manner which enables the mass of the people to assume some immediate control of their political destinies.”

The new political mechanism was direct democracy, the most authentic expression of popular will. It was beloved of leftists of all stripes. At least in theory. However, Croly considered reforms such as the initiative, referendum, primaries, and popular election of Senators to be misdirected and inauthentic if they were used only to restrict government power and to correct government abuses. As such, they were still shackled by old conceptions about the primacy of individual rights and by the suspicion of powerful government that had characterized the earlier period of Jeffersonian republicanism. “If the active political responsibilities which it [direct democracy] grants to the electorate are redeemed in the negative and suspicious spirit which characterized the attitude of the American democracy towards its official organization during its long and barren alliance with legalism [the Constitution as a formal system of checks and balances that controls the actions of the political majority], direct democracy will merely become a source of additional confusion and disorganization.”

There were, then, bad and good forms of direct democracy. The good form was one that produced the proper, Progressive social policy, and accepted the dominance of powerful state organs which could accomplish that policy: “Direct democracy…has little meaning except in a community which is resolutely pursuing a vigorous social program. It must become one of a group of political institutions, whose object is fundamentally to invigorate and socialize the action of American public opinion.” Note some key words: A political system must be measured by “meaning,” an idea echoed in the quintessentially Progressive “Politics of Meaning” long associated with manifestos of the American Left. “Vigor” and “action,” two words that were markers of Progressive ideology and rhetoric at the personal, as well as the political, level. Wilson, the two Roosevelts, and John and Robert Kennedy strove mightily to present themselves as embodying those very characteristics, often to hide physical limitations. Finally, “social” or “socialize,” as the antidote to the traditional American insistence on the rights of individuals that were derived from sources outside the State and which trumped the demands of the collective.

In that “good” form, popular participation was, in effect, a thermometer to measure the temperature of the public’s support for an activist political program. Croly advised, “A negative individualistic social policy implies a weak and irresponsible government. A positive comprehensive social policy implies a strong, efficient and responsible government….A social policy is concerned in the most intimate and comprehensive way with the lives of the people. In order to be successful, it must rest on the basis of abundant and cordial popular support.” Instead of a structure constrained by the text and the received traditions of fundamental law, government would be limited only by vague measures of its policies’ popularity.

Despite Croly’s perfunctory disclaimer, popular participation was to be little more than a plebiscite on actions to be taken by a legislature otherwise unrestrained by the formal structures of the “Law.” “The government must have the power to determine the Law instead of being circumscribed by the Law,” he wrote in Progressive Democracy. As Croly—and Wilson—recognized, legislatures would not be up to the task of supervising such an increasingly intrusive paternalistic State. Hence, a powerful administrative apparatus was required. That signature component of the modern regulatory state—the vast, unelected bureaucracy—was necessarily beyond the control of the people. True, it might be a dictatorship of the technocratic elite, but it would be a benevolent one, we are assured, always loyally and selflessly laboring for our weal.

But like H.G. Wells’ society of Eloi and Morlocks in The Time Machine, the Progressive state was not as benign as its propagandists depicted it on the surface. The Progressives had a strong Darwinian bent. If Woodrow Wilson identified the State as an organism governed by the biological laws of Darwin, those laws raised some uncomfortable topics. Evolution and change are the constants of such a system; evolution requires adaptation to change. But in the State, unlike nature, adaptation could not be left to chance but must be directed rationally. Where survival of the fittest was the rule, only the fittest could rule. That the government was not under more direct control of the people was due to what Croly euphemistically described as the small size of the fund of social reason.

In view of that scarcity of social reason, Croly explained, “[the] work of extracting the stores of reason from the bosom of society must be subordinated to the more fundamental object of augmenting the supply of social reason and improving its distribution.” This was a task critical to the success of government unconstrained by the old Constitutional structures. “The electorate must be required as the result of its own actual experience and unavoidable responsibilities to develop those very qualities of intelligence, character, faith and sympathy which are necessary for the success of the democratic experiment.”

While Croly proposed that education would provide the means of human progress and the nurturing of social reason among the mass of people, there were those who were unfit for such efforts. Croly, like Woodrow Wilson and unlike William Jennings Bryan, believed in the need for state regulation of marriage and reproduction to combat crime and insanity and to promote the propagation of the truly fittest. When he was governor of New Jersey, Wilson signed a law of just such tenor that targeted various “defectives” for sterilization. Therein is mirrored one of the traits commonly attributed to the progressive intellectual. He professes to idolize humanity and the principle of popular government, but he despises humans and distrusts individual autonomy and political choice.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.



Guest Essayist: Joerg Knipprath
Domenica del Corriere, Italian newspaper, drawing by Achille Beltrame depicting Gavrilo Princip assassination of Archduke Franz Ferdinand of Austria & his wife Sofie, in Sarajevo, Bosnia, June 28, 1914.

Supporters of the proposed United States Constitution of 1787 frequently warned that there was no mechanism under the Articles of Confederation to prevent what they saw as the inevitable commercial rivalries between the states from escalating into armed conflict. Such rivalries had begun to appear through protectionist trade laws enacted by various states. Another flashpoint was the dispute between Virginia and Maryland over fishing and navigation in Chesapeake Bay and the Potomac River. The end, the Federalists charged, would surely be the dissolution of the union into some number of quarreling confederations.

The Anti-federalists had several responses. First, Number IX of the Articles authorized Congress, on petition by any state, to provide for the appointment of a court to resolve any conflict between that state and another. Second, they pointed to the Mount Vernon Conference of 1785 which had settled those very divisive claims between Virginia and Maryland. Third, they declared that it was fanciful to claim that republics, especially those with commercial relations as close as those within the Confederation, would go to war with each other. The history of republics militated against such eventualities, they asserted. As William Grayson, a moderate opponent of the Constitution, put forth at length before the Virginia ratifying convention, the states were bound by mutually reinforcing commercial bonds and interests. He sarcastically described the Federalists’ panicky and hyperbolic claims as predicting that Pennsylvania and Maryland would attack like Goths and Vandals of old, and that “the Carolinians, from the south, (mounted on alligators, I presume), are to come and destroy our cornfields, and eat up our little children!” Such specters were “ludicrous in the extreme.” Others repeated Grayson’s contentions even more forcefully, often combined with sneering attacks on the writers of The Federalist.

Alexander Hamilton, among others, rejected Grayson’s dismissal of the danger. In essay No. 6 of The Federalist, he asserted that immediate national interests, including economic advantage, are more likely to precipitate war than more general and remote objects, such as justice or dominion. He asked rhetorically,

“Have republics in practice been less addicted to war than monarchies?…Are not popular assemblies frequently subject to the impulses of rage, resentment, jealousy, avarice, and of other irregular and violent propensities?…Has commerce hitherto done any thing more than change the objects of war? Is not the love of wealth as domineering and enterprising a passion as that of power and glory? Have there not been as many wars founded upon commercial motives, since that has become the prevailing system of nations, as were before occasioned by the cupidity of territory or dominion?”

To answer these questions, Hamilton invoked the guide of experience.

That experience he found in the history of Sparta, Athens, Rome, and Carthage. All of them he classified as republics, the last two as commercial republics. He detailed the numerous ruinous wars in which they engaged. Moving forward in time, he then indicted the commercial republic of Venice for its wars in Italy and the 17th-century commercial Dutch Republic for its wars with England and France. Britain came in for scorn as particularly bellicose for commercial advantage. Worse yet, Hamilton charged, the king was at times dragged into wars he did not want, by “the cries of the nation and importunities of their representatives,” so that there have been “almost as many popular as royal wars.” He singled out wars for commercial advantage between Britain and France and Britain and Spain. One of those wars between Britain and France overthrew a network of alliances which had been made two decades earlier. He acidly asked, “Is it not time to awake from the deceitful dream of a golden age, and to adopt as a practical maxim for the direction of our political conduct, that we, as well as the other inhabitants of the globe, are yet remote from the happy empire of perfect wisdom and perfect virtue?”

In addition to commercial incentives for war, Hamilton pointed to personal motives of rulers and other prominent individuals, or to intrigues hatched by influential advisers, as prompting wars between republics. Thus he blamed the Peloponnesian War, so disastrous to Athens, on the personal motives of the great statesman Pericles. England’s ill-advised war with France Hamilton assigned to the machinations of Henry VIII’s chief minister, Cardinal Wolsey, and his pursuit of political influence.

Whatever the merits of Hamilton’s predictably slanted analysis of specific historical events, his message was that political theory disproved by experience is not a sound basis for public policy. A more recent scenario which fit his skepticism about pacific republics was the Great War from 1914 to 1918, which led to the collapse of the 19th-century European political order and to revolutionary political and social change. The antagonists were the Central Powers of Germany, Austria-Hungary, and Ottoman Turkey against the Triple Entente of Britain, France, and Russia. The latter group was eventually joined by Italy, Japan, and the United States. Of the major participants, Germany, Britain, France, and the United States were commercial and industrial powerhouses. They were also outright republics or had sufficient political power vested in parliamentary bodies to qualify as quasi-republican constitutional monarchies. Each also had substantial overseas territories, Britain by far the most. Of the rest, Russia and Japan were rising industrial and commercial nations. In particular, Germany and Britain had considerable commercial interaction, but it likely was exactly that commercial and colonial competition which the British saw as a threat. The prewar German naval buildup did nothing to calm British nerves.

There was also a complicated system of alliances which emerged shortly before the war. This reshuffling of international arrangements changed the dynamics of the relatively stable post-Napoleonic international order in Europe which had even survived disruptive processes of unification in Germany and Italy and disunion in the old Austrian Empire. True, there had been revolutionary tremors and limited wars, such as between Prussia and Denmark, and Prussia and Austria, and the Franco-Prussian War of 1870-71. Skillful diplomacy, in particular by the German Chancellor Otto von Bismarck, had prevented any conflict of an existential nature from arising. Bismarck had isolated France after 1871 through alliances with Russia, Austria-Hungary, and Italy, first through the Three Emperors’ League, and then through the Triple Alliance of 1882 and the Reinsurance Treaty of 1887. Relations with Britain were preserved through family relationships and Britain’s preoccupation with her empire overseas. He had also smoothed frictions between the rival empires, Russia and Austria-Hungary, through the Congress of Berlin in 1878, and among various colonial powers through a conference in the same city in 1884.

Even after Bismarck was forced out of office, it appeared that strengthened international legal norms would prevent wars. International arbitrations settled disputes. Two Hague Conventions, the London Naval Conference of 1909, and the London Conference of 1912 convinced “the right kinds” of Europeans that large-scale war was anachronistic. The foreign offices of the various governments, staffed with forward-looking and educated internationalists, surely would extend the great-power stability of the 19th century’s Concert of Europe. Ignored was that these multinational conferences and conventions left some number of participants dissatisfied and nursing grudges. This was particularly true for the Balkan countries. While trying to establish their independence from the crumbling Ottoman Empire, they warred with the Turks, the Austro-Hungarians, and each other and resented their fates being controlled by larger powers. Over time, these perceived affronts to national honor during a time of heightened national consciousness overrode the rational self-interest served by commercial considerations. Moreover, various treaties and diplomatic agreements overlapped and indeed conflicted with each other. Alliances increasingly shifted around, which begot international uncertainty during an age of domestic demographic changes, increasing political militancy, and unequal industrial and technological prowess.

This new system of alliances had another potentially destabilizing element. It allowed the relatively weaker participants to act like big players on the international stage, counting on their more powerful allies to back them up. Instead, the bravado and exaggerated sense of national honor of less important states dragged the major powers into a disastrous conflict. Everything changed when a Bosnian Serb nationalist, supported by secret nationalist societies and Serbian military intelligence, assassinated the reform-minded presumptive heir to the Austrian throne, Archduke Franz Ferdinand, and his wife in Sarajevo, Bosnia, on June 28, 1914.

After some delay, during which it was hoped that the assassination might become just another deplorable act that would result in an appropriate punishment for the captured perpetrators, the Austrians responded. Having received some halting assurances from the German government that they would back Austria-Hungary’s response to Serbia, the Austrians sent an ultimatum to the Serbs. Serbia only partially accepted the Austrian demands, mobilized its army, and briefly sent troops into Austro-Hungarian territory. In quick response, Austria began partial mobilization of its army and, on July 28, 1914, declared war on Serbia.

At this stage, the conflict might yet have become another limited skirmish. But the Russian government, some of whose ministers had been informed of the plot ahead of time and whose military intelligence likely helped the plotters, had promised the Serbs that Russia would come to Serbia’s aid against any attack by Austria-Hungary. When Austria-Hungary began partial mobilization, Russia within two days ordered full mobilization of its forces. Fearing the large number of Russian troops, Austria-Hungary in turn mobilized fully. Germany, coming to her ally’s assistance, did likewise on July 31, 1914. At the same time, Germany issued an ultimatum to Russia demanding that it halt its mobilization. When Russia failed to acquiesce, a state of war existed on August 1. France, pursuant to a treaty with Russia from 1892, had rejected German demands for neutrality and had ordered a general mobilization the previous day. On August 3, 1914, Germany declared war on France. Britain, pursuant to her treaty obligations to France under the Triple Entente of 1907, declared war on Germany on August 4, 1914, after the latter ignored Britain’s demands for withdrawal from occupied Belgium. Italy, as was her wont during 20th-century wars, initially refused to stand by her treaty obligations to Germany and Austria-Hungary and eventually switched sides to the Entente.

The war took on a dynamic of its own. Occasional peace feelers went nowhere, in part because of objections by military leaders. There was, however, another equally significant hurdle, namely, political opposition based on the respective publics’ sentiments that their sacrifices demanded something more than a muddled armistice. It must be remembered that the war initially was very popular and welcomed with an almost giddy celebration of patriotic zeal by the citizenry of the combatants. Hamilton’s observation about monarchs having “continued wars, contrary to their inclinations, and sometimes contrary to the real interests of the state” due to public pressure was being realized.

The Great War, infelicitously dubbed “the war to end all wars,” ended in the collapse of the Ottoman, Russian, German, and Austro-Hungarian monarchies. It also severely damaged the British and French empires around the world. The revolutionary chaos it unleashed and the national resentments its end ignited soon produced totalitarian movements and another world war. The tens of millions killed in those wars and the even higher number murdered by those ideological totalitarian regimes during the 20th century are a grisly monument to man’s potential to do evil, often cheerfully. The war should have put paid to the conceit that the world of human self-interest and passion can be readily subordinated to a legal artifice designed by a cadre of internationalists. Such idealism sounds marvelous in a university faculty lounge or in a graduate seminar in international relations, but, as Margaret Thatcher observed, “The facts of life are conservative.”

As fundamental challenges to the post-World War II United States-led international order have arisen over the past two decades, much debate has erupted over what system will replace it. The current conflict in Europe has once again tested the notion that commercial relations will make war obsolete. Russia has been dissuaded neither by Western economic pressures and commercial ostracism nor by the military aid provided by NATO to Ukraine from taking a course of action which her government and people see, rightly or wrongly, as important to their national identity. One hopes that these broader fundamental geopolitical changes, such as the apparent emergence of a multi-polar international order, do not lead to the type of destruction World War I caused a century ago. But such hopes must rest on diplomacy based on experience, not on smug nostrums about pacific republics or the bonds of commerce.




Guest Essayist: Joerg Knipprath
John Adams, author of “A Defence of the Constitutions of Government of the United States of America.”

“Virtue” and “republic” have long been connected to each other among philosophers of politics. The connection was frequently asserted in the rhetoric of Americans during the founding. Indeed, it was while states were writing constitutions that these ideas were more rigorously investigated and an increasingly sophisticated understanding emerged. The most widely read source on the experiences of republics and the importance of virtue was Plutarch’s Lives, which contained the biographies of Greek and Roman statesmen. Many intellectuals also read primary sources, such as Aristotle, Cicero, and Polybius, and interpreters of those sources, such as Machiavelli, Montesquieu, and various 18th-century English political essayists. These investigations led to a political conundrum. Most Americans believed that mankind’s actions were driven by base desires, such as avarice, gluttony, and lust. Yet the success of republics had always been said to rest on public virtue, the requirement that the rulers and the people overcome their passion for personal gratification and act for the benefit of the community, “res publica.” Moreover, the wisdom received from ancient writers postulated that public virtue was derived from private virtue. The task became to reconcile this tension between private passions and republican virtue.

Three ideological theories of republicanism emerged, with attendant differences in their conceptions of private and public virtue and the connection between them. These three conceptualizations had significant geographic roots. One was an American version of classic republicanism, which might be called puritan republicanism. It is “positive” republicanism. The proponents looked to the firm hand of government to promote both aspects of virtue, private and public, and to ensure their continued interrelation. It was founded in the religious tradition and political experience of New England communities, although its influence was not confined there. One of the best exponents of that tradition and its republican significance was John Adams.

Another was agrarian republicanism, which coalesced somewhat later, and was rooted in the experience of the South, especially its largest and wealthiest state, Virginia. Agrarian republicans also accepted the need rigorously to inculcate private virtue, but they were less confident that private virtue assured public virtue. At the very least, they were skeptical that sufficient public virtue might be realized among those who would gain political influence. That skepticism was particularly acute when the matter became who would control the distant general government and thus be most removed from effective supervision by the people.

Best, then, not to rely on virtue among the rulers, but to look for other means to limit their ability to cause harm to the republic. If private virtue of the ruler or the people was inadequate to assure public virtue, the rulers’ self-interest must be channeled to serve the public good. James Madison worked out these ideas in his constitutional ideology, which found its way into basic structures in the United States Constitution. Madison was not alone, and he was not the most rigorous expositor of agrarian republicanism. That title goes to John Taylor of Caroline.

A third approach was national republicanism, represented by Alexander Hamilton as its most prominent ideological proponent and George Washington as its leading public figure. In many ways their views complemented those of the agrarians that private virtue was a necessary but also a regrettably flawed guardian of the success of republics. However, there was a crucial difference. Government would have a much more active role in using incentives to create conditions through which republican virtue of the public sort might be fostered. Moreover, republican virtue was not limited to those connected to the land, but extended to those engaged in commercial and even manufacturing enterprises. Hamilton, after all, was not part of the landed gentry, like Adams, or the Southern planter class, like Taylor. National republicanism was based in the emerging commercial centers, especially those of the mid-Atlantic states.

John Adams’s major work on constitutional government and republicanism was A Defence of the Constitutions of Government of the United States of America, a treatise on the emerging American constitutionalism with its emphasis on checks and balances of governmental powers. But Adams was also a prolific writer of letters to numerous correspondents. Many years before he wrote in his 1798 response to the Massachusetts militia, “Our Constitution was made only for a moral and religious people,” he wrote to the chronicler of the period Mercy Otis Warren that republican government could survive only if the people were conditioned “by pure Religion or Austere Morals. Public Virtue cannot exist in a Nation without private, and public Virtue is the only Foundation of Republics.” Sounding the theme of positive classic republicanism, he continued, “There must be a positive Passion for the public good, the public Interest, Honor, Power, and Glory, established in the Minds of the People, or there can be no Republican Government, nor any real liberty.” [Emphasis in the original.]

In light of man’s fallen nature and his helpless soul’s inclination to sin, a firm hand was needed. Hence, three New England states had an official church, the Congregational Church, heir to the Puritans. Moreover, a Stoic virtue of private simplicity and public duty was cultivated, not the least by intrusive sumptuary laws. Such laws, passed in the name of protecting the people’s morals and sometimes dressed up in broader cloaks of liberty and equality, restricted various luxuries and excessive expenditures on jewelry, clothing, victuals, and entertainment. Adams, in his 1776 pamphlet Thoughts on Government, touted the benefits of such laws, “[The] happiness of the people might be greatly promoted by them….Frugality is a great revenue, besides curing us of vanities, levities, and fopperies, which are real antidotes to all great, manly, and warlike virtues.”

The historian Forrest McDonald, in his invaluable book Novus Ordo Seclorum, provides details about the constitutional and statutory sources of such laws. For example, Article XVIII of the Massachusetts Bill of Rights urged a “constant adherence” to “piety, justice, moderation, temperance, industry and frugality [which] are absolutely necessary to preserve the advantages of liberty.” Legislators and magistrates must exercise “an exact and constant observance” of those principles “in the formation and execution of the laws.” None other than John Adams had drafted that document in the Massachusetts convention. Other states had similar provisions. At the Philadelphia Convention, George Mason of Virginia sought to grant Congress the power to enact sumptuary laws, but his proposal was defeated.

Adams also lauded laws that resulted in the division of landed estates, because he perceived such laws as promoting relative equality of property ownership. Adams termed it the “mediocrity of property” on which liberty depended. This sentiment, drawn from an ancient republican pedigree, put him in good company with American republicans of other stripes. Indeed, “agrarian republicans” were, if anything, even more militant than Adams in their adoration of land ownership as the bulwark of republican virtue and personal liberty. Thomas Jefferson spoke for most Americans in his 1785 book Notes on the State of Virginia, when he declared that “those who labor in the earth are the chosen people of God if ever He had a chosen people, in whose breasts He has made His peculiar deposit for substantial and genuine virtue.” He expressed similar views in other writings. During the later debate over the Louisiana Purchase during his administration, Jefferson was able to overcome his constitutional qualms with the satisfaction that the United States had acquired sufficient land to guarantee its existence as a republic of yeoman farmers and artisans for many generations hence.

As a theorist of agrarian republicanism, Jefferson was thin gruel compared to John Taylor, a Virginian planter, lawyer, and politician, who served off and on as a United States Senator. To distinguish his branch of the family, Taylor is usually referred to by his birthplace, Caroline County. The aphorism “That government is best which governs least” has often been attributed to Jefferson, although it appears nowhere in his writings; Henry David Thoreau famously invoked it in Civil Disobedience in 1849. If, however, one might at least grant Jefferson the same sentiment, this aphorism even better describes Taylor’s philosophy. In particular, his 1814 book An Inquiry into the Principles and Policy of the Government of the United States sets out a systematic philosophy for land as the basis for personal happiness and republican vitality. Land gives its owners sustenance and trains them to self-reliance, which produces independence, which, in turn, is the source of liberty. A key to maintaining that independence is the right to keep arms.

The (mostly) Southern agrarian republicans shared with their (mostly) New England classic republican compatriots a belief that widely shared land ownership is most conducive to private virtue. However, they parted ways on the connection between private and public virtue as crucial to the survival of republican government. Taylor wrote, “The more a nation depends for its liberty on the qualities of individuals, the less likely it is to retain it. By expecting publick good from private virtue, we expose ourselves to publick evils from private vices.” While a republican system, as a whole, is strongest when it rests on a broad base of a virtuous and civically militant citizenry, it is risky to rely only on that condition to produce virtuous politicians. Homo politicus is better known for seeking power for personal gain and influence over others than for personal sacrifice and care for the general welfare. As described by the modern school of “public choice” theory, politicians are self-interested actors whose actions are best explained by their overriding goal: re-election. In addition, the puritan approach of an intrusive government which would police private behaviors raised red flags for the agrarians.

Taylor and other agrarians distrusted government generally, but the more removed from direct and frequent popular control officials were, the greater the danger to the republican form. The good news was that sufficient public virtue could be produced even if, for whatever reason, private virtue was lacking in those who would govern. To that end, it became incumbent on those who framed constitutions to recognize the inherently self-interested nature of politicians and to harness that self-interest through constitutional structures which would simultaneously authorize and limit the power of government officials of all types. Politicians would “do the right thing” not because they were sufficiently trained to private virtue, but because it would serve their own self-interest in preserving their positions.

Taylor’s prescription was not novel. The Scottish philosopher David Hume began his 1742 essay, “Of the Independency of Parliament,” by declaring, “Political writers have established it as a maxim that, in contriving any system of government and fixing the several checks and controls of the constitution, every man ought to be supposed a knave and to have no other end, in all his actions, than private interest. By this interest we must govern him and, by means of it, make him, notwithstanding his insatiable avarice and ambition, cooperate to public good.” The works of the charismatic and often controversial Hume were well known to educated Americans.

James Madison expressed these sentiments in a famous passage in Number 51 of The Federalist:

Ambition must be made to counteract ambition. The interest of the man must be connected to the constitutional rights of the place…. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

Those “auxiliary precautions” were the structural checks and balances in the Constitution.

Various historians have noted the importance of Taylor’s contributions to American political theory, some even lauding him as, in certain respects, the finest political theorist America has produced. Although his vision was republican, it may better be characterized as a branch of classical liberalism, or liberal republicanism. The term “liberal” here lacks its current political connotation: the classical liberalism emerging during that period was tied directly to the individual’s liberty to live free from state-enforced mandates beyond the minimum needed for social stability.

Taylor was not the first skeptic of the classic Aristotelian and Ciceronian connection between private and public virtue reborn in the puritan republicanism of John Adams. The history of 18th-century Anglo-American ideas reveals influential predecessors, such as Bernard de Mandeville and, as mentioned earlier, David Hume. Mandeville published his satirical poem The Grumbling Hive in 1705, later expanded as The Fable of the Bees, a famous parody of English politics of the time. In the poem, he describes a thriving colony of bees, where each individual bee seeks a life of luxury and ease, a sentiment not disagreeable to Taylor’s Southern planter class. But this prosperous existence comes to an end when some of the bees begin to denounce the personal corruption caused by luxury and to call for an imposed life of simplicity and virtue. Many bees die, their hive becomes impoverished, and the survivors retreat to a hollow tree, “blest with Content and Honesty.” Mandeville concludes,

Bare Virtue can’t make Nations live,
In Splendor; they, that would revive
A Golden Age, must be as free,
For Acorns, as for Honesty.

In short, personal vices, such as greed and ambition, generate public virtue of industriousness and prosperity. Similar ideas also infused the writings of an important contemporary of the American founders, the political economist Adam Smith.

Even more than Taylor, it was the adherents of an emerging “national republicanism” who agreed with Mandeville, Hume, and Smith. Although all persons are driven by their passions, not all passions are the same. Some, especially those who already have material riches, might be gripped by a simple desire for fame or honor, or by love of country. Moreover, a properly constructed constitution, produced by those few motivated by such nobler passions, might harness the baser passions of lesser politicians towards the public good. The men who met in Philadelphia for the specific purpose of drafting the Constitution might qualify as men whose primary, if not sole, passions were fame and love of country. For most, no immediate financial gain or personal political success was to be gained. Indeed, contrary to the progressive theory advanced in the early 20th century by the historian Charles Beard that economic self-interest was the driving force behind the Constitution’s adoption, it is well-established that delegates voted in favor of proposals which would, if anything, hurt their financial interests.

Such “good” passions, although they manifested a self-interest, also produced the public virtue necessary for republican government. They yielded policies for the general welfare and in the interest of the public. The problem, of course, is that all politicians, and indeed bureaucrats of all kinds, routinely claim to be driven by a passion for public service, and that their policy proposals are in the public interest. A multitude of unelected non-governmental organizations and litigious law firms also claim the title “public interest.” Alas, as a look at who benefits from the pay-outs in a typical class-action lawsuit suggests, the reality rarely matches the professed public virtue. One never hears a politician say that a policy, no matter how nefarious or self-rewarding, is done for anything other than the noblest public purpose. Rare even is a politician as honest as the 19th-century New York Tammany Hall leader George Washington Plunkitt, who famously distinguished between “dishonest” and “honest” graft and was frank about his practice of the latter. Dishonest graft meant working solely for one’s own interests. Honest graft meant working for one’s own wealth while simultaneously furthering the interests of one’s party and state.

The big problem, then, for the national republicans was to constrain those politicians who would in fact hold political offices for a longer time and with less-defined objectives than those who drafted the Constitution. George Washington had long and carefully cultivated the public image of the man driven solely by a passion for honor. Whatever his motives in his private actions, such as, for example, acquiring huge tracts of land, Washington in his public life appears to have been driven by his concern about the public’s perception of him as a man of honor. Forrest McDonald and numerous other historians have painted the picture of a man who might be said to have “staged” his public life. Washington was deeply affected throughout his life by Joseph Addison’s play Cato about the Roman republican statesman Marcus Porcius Cato (“the Younger”). Cato, a committed Stoic, was famous for his unrelenting honesty.

But Washington was a rare specimen of homo politicus. The national republicans’ plan for more run-of-the-mill politicians was similar to that of the agrarians, to rely on one measure of citizen virtue and another measure of constitutional structure to produce public virtue from politicians driven by private passions. Unlike the agrarians, they were convinced that a strong national government must be a part of that structure. On that point Hamilton and at least the 1787 version of Madison could agree. Hamilton and the national republicans parted ways with Madison, and with Jefferson and the more resolute agrarian republicans such as Taylor, by enthusiastically embracing the role of manufacturing and banking in promoting public virtue.

Jefferson’s ideal republic of yeoman farmers and artisans, comprising a large middle class possessed of a rough equality of means, had little room for manufacturers, and none for bankers and other jobbers dealing in phantom “wealth.” Manufacturing, when combined with commerce, the fear went, would necessarily soon lead to two anti-republican results. One was a love for material luxury; the other was a life of drudgery for the impoverished masses. The history of the ancient Roman Republic was a vivid cautionary tale. Taylor and the agrarians accepted the benefit of commerce within their preferred system of political economy, because it facilitated the export of products from the agricultural South and the importation of manufactured goods from abroad. But, in a preview of the South Carolina Nullification Crisis of the 1820s and ’30s, this required free trade. Like most Southerners, Taylor was a committed free trader and suspicious of any national government regulation of economic matters, especially tariffs.

The agrarians’ fear of manufacturing tied into a general belief among political writers going back to antiquity that political systems evolve and, ultimately, decay. Entropy is inevitable in politics as much as physics. Agriculture may be the most desirable occupation, but, sooner or later, the limited productive land area is fully occupied, as New England was discovering. People would flock to cities where manufacturing would become their occupation. As Adam Smith described the effect on people, “the man whose whole life is spent in performing a few simple operations, of which the effects are, perhaps, always the same … generally becomes as stupid and ignorant as it is possible for a human creature to become ….” This fate stood in sharp contrast to that of the farmer, artisan, and merchant, who must possess broad knowledge and understanding of many activities. If this process was inexorable and made those human brutes unfit to practice private virtues, it also made the demise of the republic inevitable. Even Benjamin Franklin believed in the dangers from this progression, which puts his remark to his interlocutor, “A republic, madam, if you can keep it,” in yet another light. It also explains the urgency which Jefferson and other agrarian republicans felt about the westward expansion of territory and the opening of western land to agricultural settlement needed to forestall this threat to republican governance.

At the conclusion of the passage quoted above, Adam Smith extended a saving hand. After all, he was not opposed to either manufacturing or banking as sources of wealth. The evils of a poor and brutish urban working class would happen, “unless the government takes some pains to prevent it.” Smith had his views of what that might be. In any event, Hamilton, as an enthusiastic believer in Smith’s ideas, agreed that wealth was not fixed, and that even a personal profit motive can contribute to the public welfare. Investing in new processes and useful products and services is a public benefit. Thus, actions of the manufacturer and even the banker exemplify public virtue, whether or not they are driven by self-interest. He, like Adam Smith, believed that private wealth-producing activities qualified as private virtue. While others might not go that far, Hamilton successfully advocated the connection between such activity and the public virtue needed to maintain republican government.

Having established that manufacturing and banking could be “virtuous” in the public sense, there remained the need to foster them in order to ameliorate the conditions of poverty which would threaten republican government. After all, if enough wealth is created for all, “poverty” ceases to be objective and becomes relative. A rising tide lifts all boats. At least from a material standpoint, owning a car and various electric and electronic devices today, living in an abode with air conditioning, and having clean water, basic sustenance, and medical care are vastly better than the experiences of past generations.

Hamilton and his supporters believed that their strong national government was the best mechanism to adopt policies which would foster the growth of wealth. Hamilton’s later program in his four reports to Congress between 1790 and 1795 on the public debt, a national bank, and manufactures, laid out in considerable detail his plans to that end. These sophisticated reports were a monument to Hamilton’s intellect and experience applied to the economic problems of the early United States.  They had such potency, and were so hotly contested, that they precipitated the First American Party System of Federalists and Jeffersonian Democratic-Republicans and made Hamilton in effect the dominant figure of American politics in the 1790s.

It should be noted in conclusion that all republicans—classic puritan, agrarian, and national—opposed democracy. Even those delegates and political leaders who at one point had been most favorable towards broad public participation and involvement in politics were shaken by Shays’ Rebellion in Massachusetts. That 1786 uprising created much tumult and political chaos and was put down by an army raised by the state. It was very much on the minds of the attendees at the Philadelphia convention. Some of the most vociferous detractors of the Constitution as insufficiently “republican” were also the harshest critics of democracy. For them, Shays’ Rebellion exposed the danger of relying on private virtue to provide the public virtue necessary for republican self-government. James Madison spoke for them all when he opined in Number 10 of The Federalist about the inadequacy of democracies to promote public virtue:

[Such] democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security, or the rights of property; and have, in general, been as short in their lives, as they have been violent in their deaths. Theoretic politicians, who have patronised this species of government, have erroneously supposed, that, by reducing mankind to a perfect equality in their political rights, they would, at the same time, be perfectly equalized and assimilated in their possessions, their opinions, and their passions.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




During the Middle Ages, a distinct separation of church and state existed, at least in theory. The pope in Rome and his bishops and priests throughout Western Christendom took care to protect the souls of the people. The emperors, kings, and other secular nobles protected the physical safety of their subjects. The subjects would “repay to Caesar what belongs to Caesar and to God what belongs to God.”

In fact, matters were more ambiguous, because popes frequently called on secular rulers for protection, and the latter looked to the former to confirm the legitimacy of their rule in the eyes of God and their subjects. Moreover, the ecclesiastical rulers, including the pope, also exercised sovereign political control in various territories and sat in the political councils of others. The emperors and other nobles, in turn, frequently sought to control the appointment of ecclesiastical officials within their domains and, in the case of the king of France, to control the selection of the pope himself.

With the split in Western Christendom caused by the Reformation, and the emerging concentration of power in single political rulers in national kingdoms and lesser principalities, two significant changes occurred from the medieval order. Those changes are important for understanding what led to the Mayflower Compact.

First, in the struggle over who had supreme authority in the physical world, emperors or popes, kings or bishops, the balance shifted decisively in favor of the secular rulers. A secular ruler might become the head of a religious establishment, as happened in England beginning with Henry VIII. Less drastically, the ruler might ally with the bishops to control the authority of the pope in matters temporal or secular, as happened to the Church in France. Or, under the doctrine of cuius regio, eius religio (“whose realm, his religion”), the religion practiced by the prince became that of his subjects. The last was the situation in most German states after the Peace of Augsburg in 1555 ended the initial wave of religious wars between Lutherans and Catholics.

The second change was the renunciation among a number of Protestant dissenters of the episcopal structure of the Catholic, Anglican, and Lutheran churches. Whatever might have been the dissatisfaction of Anglican and Lutheran theologians with Catholic doctrine, practices, and administration, the dissenters viewed those established Protestants as merely paler imitations of the Church of Rome. Building on the teachings of the lawyer John Calvin in Geneva, they emphasized salvation through faith alone and living in a community of the faithful governed by themselves or by some elected elders.

In Scotland, these dissenters formed the Presbyterian Church. In England, the Calvinist dissenters became the “Puritans.” They sought to purify the Church of England from various Catholic practices and doctrines while continuing to associate their congregations with the official church. Their goals seemed within reach after the English Civil Wars in the 1640s. They were well represented in the Rump Parliament and among the military leaders, such as Oliver Cromwell and John Lambert. The Anglican majority proved too immovable, however, and, after the Restoration, many Puritan leaders left England. Another group, however, believed that the Anglican Church was hopelessly corrupt, and that the only available path to personal salvation was through separation. This group has become known as the “Pilgrim Fathers,” although they referred to themselves by other names, such as “Saints.”

Both groups of English dissenters established settlements in New England not far apart. Their theological differences, however, kept them separated for decades. Not until 1691 was the Pilgrims’ Plymouth colony absorbed into the much larger Massachusetts Bay Colony.

The two groups shared certain characteristics, which contributed to the development of American constitutional theory. It is part of American mythology that Europeans came to English North America in search of religious freedom, which they joyfully and readily extended to all who joined them. The matter is much more nuanced. While such toleration might well describe the Quaker colony of Pennsylvania and the Catholic colony of Maryland, both of which were formed later, the Pilgrims and Puritans had a different goal. Theirs was to establish their respective visions of a Christian commonwealth, the City of God in the New World. Having left England for a wilderness because of despair over the allegedly corrupt nature of the Anglican Church, never mind the Catholic one, neither group was inclined now to welcome adherents of such beliefs to live among them. Religious freedom, indeed, but for individuals of like beliefs in a community gathered together for mutual assistance in living life according to those beliefs. Conformity in community, not diversity of doctrine, was the goal. God’s revealed law controlled, and governance was put in the hands of those who could be trusted to govern in accordance with that law.

The two groups also shared another characteristic, alluded to above: voluntary community. The individual alone could find salvation through studying and following the Bible. As an inherently social creature, he could, of course, join with others in a community of believers. The basis of that community would be consent, individual will, not an ecclesiastical order based on apostolic succession. Some years after arriving in the New World, the Massachusetts Bay Puritans in the Cambridge Platform of 1648 declared that “a company of professed believers ecclesiastically confederate” is a church, with or without officers. This was the origin of the Congregational Church, founded on a clear separation from all forms of hierarchical church government.

The congregation would govern itself according to the dictates of its members’ consciences and the word of God, while in the secular realm it would be governed under man’s law. What would happen, if man’s law, and the teachings of the established church, conflicted with the word of God, as the believers understood it? What if, to resolve such conflicts, that religious community left the existing secular realm? A political commonwealth of some sort is inevitable, as most political theorists claim. That is where the experience of the Puritans and the Pilgrim separatists differed.

The Puritans formed their Massachusetts Bay Colony on the same basis as the Virginia Company had been formed to settle at Jamestown two decades earlier. It was a joint stock company, somewhat analogous to a modern business corporation, formed by investors in England. The company’s charter provided a plan of government, which included meetings of a General Court composed of the freemen of the Company. The charter failed to specify where these meetings were to occur. English custom was that such shareholder meetings took place where the charter was kept. Some historians have written that the charter was surreptitiously taken from the company’s offices and spirited to the New World, thereby making Boston the site of the General Court. That is a suitably romantic story of intrigue and adventure, indeed. More prosaic is that the change in locale occurred through the Cambridge Agreement of 1629 between the Company’s majority, composed of its members seeking to establish a religious community in Massachusetts, and the minority which was interested in the possibilities of commerce and profit. The majority was permitted to take the charter and thereby secure a de facto independence from English authorities for a half-century. The minority received certain trade monopolies with the colony.

The formation of the Massachusetts Bay Colony, like the Virginia Colony’s, was based on voluntary association and contract. Once the mercantile interest of the English investors was severed, the charter provided a political constitution for the colony’s governance. But the political consequence was the by-product of a commercial enterprise. The best example of an organic constitution created by consent of a community’s members for the express purpose of self-government was the Mayflower Compact concluded almost ten years earlier.

After vigorous attempts by King James I to suppress them for their separatist beliefs, many Pilgrims fled to the religiously more tolerant United Provinces of the Netherlands. Eventually, however, English pressure on the Dutch induced the Pilgrims to leave their temporary domicile in Leyden. Having procured a ship, picked up additional travelers in England, and obtained a license from the Virginia Company to settle on its land, a group of Pilgrims embarked on their journey westward to their future Zion.

Upon reaching the New World in late November 1620, at what today is Provincetown, Massachusetts, they discovered to their dismay that they had arrived a few hundred miles north of the Virginia Company’s boundary. Many of the 102 passengers aboard their small ship, the Mayflower, were ill, supplies were dwindling, and bad weather loomed. The group eventually decided to land on the inhospitable coast rather than continue to sail to their allotted land. Before they did so, however, 41 men signed the Mayflower Compact on November 21, 1620, under the new calendar. It must be noted that fewer than half of the signers were Pilgrims. Many were “adventurers,” a term of art for individuals sent over by the Company of Merchant Adventurers to assist the colony, including tradesmen and men such as the military leader Myles Standish. The Company had lent money to the settlers, and repayment of those loans depended on the colony’s success.

Not having the luxury of a drawn-out convention meeting under agreeable conditions, the settlers made the Mayflower Compact brief and to the point, but also rudimentary. In significant part, it declared, “Having undertaken for the glory of God, and advancement of the christian [sic] faith, and the honour of our King and country, a voyage to plant the first colony in the northern parts of Virginia; [we] …combine ourselves…into a civil body politick, for furtherance of the ends aforesaid ….” Framing “just and equal laws, ordinances, acts, constitutions, and officers, … as shall be thought most meet and convenient for the general good of the colony …,” was left to another day.

The Mayflower Compact is a political application of the voluntary-consent basis of religious congregation which the Pilgrims accepted. There was renewed interest in social contract theory as the ethical basis of the state, as an alternative to the medieval theory of a hierarchical political order created by God. Both approaches, it must be noted, were also used by defenders of royal absolutism in the 17th century. Sir Robert Filmer in his Patriarcha adapted Aristotle’s connection between the family and the state as social institutions, and Cicero’s correlation of monarchy and the Roman paterfamilias, to present the monarch as having whatever power he deems needed to promote the public welfare. To give his contention a more appealing, religious basis, Filmer wrote that God gave Adam absolute control over the family in Genesis, passed it thereafter to the three sons of Noah, and finally, as the nations grew, to monarchs.

Thomas Hobbes used contract theory in his work Leviathan to justify royal absolutism. Humans seek to escape the abysmal state of nature, where life is “solitary, poor, nasty, brutish, and short,” because a state of war exists of all against all. To gain physical and psychological peace, the desperate people enter into a covenant with a powerful ruler. In return for the ruler’s protection and a life of security, they agree to surrender whatever rights they may have had in the state of nature, as the ruler deems necessary. The one exception is the right to life.

One might view the Mayflower Compact as an iteration of the Hobbesian covenant. Indeed, the early governance of the colony at times seemed like a military regime, an understandable state of affairs considering the existential danger in which the residents found themselves over the first few years. Alternatively, one might consider the arrangement as simply a settlement within the existing English state, like any town in England. After all, the Pilgrims expressly avowed themselves to be “the loyal subjects of our dread sovereign Lord, King James,” declared that their voyage, in part, was for the “honour of our King and country,” and noted that they were signing during “the reign of our sovereign Lord, King James.”

One might, however, view the Mayflower Compact as a glimpse into the future, to the work of the social contract theorist John Locke a half-century later. The Pilgrims had removed themselves from an existing commonwealth whose laws they found oppressive. Their persecution over their religious faith was a profound breach of the Lockean social contract under which government was created as a useful tool better to protect a person’s personal security and estate. One remedy for such a breach was to leave political society. For Hobbes, this would have been impossible, because it would have placed the individual back in the intolerable state of nature. For Locke, however, the state of nature of human society was not as forbidding. Locke had more of that Whig confidence in man’s goodness. Government was just a way to deal with various inefficiencies of the state of nature in promoting human flourishing, rather than a Hobbesian precondition to such flourishing.

Solitary contemplation and Bible study allowed one to recognize the glory of God and to deepen one’s Christian faith, a journey made more joyful by joining a religious congregation of believers. In similar manner, joining together in a “civil body politick” as set forth in the Mayflower Compact aided in achieving those objectives. Dealing efficiently with quotidian matters of the physical world permitted more contemplation of the spiritual. Happily also, despite all the challenges the New World presented, it had sufficient bounty to give sustenance to the saints in the new Zion, to “lead the New Testament life, yet make a living,” as the historian Samuel Eliot Morison summarized it.

The singular importance of the Mayflower Compact was in the foundation it provided for a theory of organic generation of a government legitimized by the consent of the governed. Self-government became realized through a contract among and for those to be governed. Later American constitutional theories about the people as the source of legitimacy for government had to deal with the practical difficulty of having many thousands of people in each of the already existing political arrangements called “states.” American writers sought to get around that difficulty by having state conventions rather than ordinary legislatures approve the Constitution, a logically rather precarious substitution. Still, the Mayflower Compact set a readily understood paradigm.

A more troubling lesson drawn from the New England colonies is the unsettling connection between seeking religious freedom for oneself and prohibiting it for others. It requires confronting the tension between community and individuality, law and liberty. The right to associate must include the right not to associate. The right to worship in association with other believers must include the right to reject non-believers. To what extent might the rights of the majority to create their “civil body politick” as an embodiment of their City of God on Earth override the rights of others in that community to seek a different religious objective, or no religious objective at all? Massachusetts Bay provided one answer. New settlers were limited to those who belonged to the approved strain of Puritanism. Dissenters were expelled. Those who failed to get the message of conformity were subject to punishment, such as the four Quakers publicly executed between 1659 and 1661 after they repeatedly entered the colony and challenged the ruling authorities. The Pilgrims at Plymouth were more accommodating to others, if grudgingly so, because their original settlers had included a substantial number unaffiliated with their iteration of Christianity.

The framers of American constitutions had to face those issues, and tried to balance these interests through concepts such as free exercise of religion, establishment of religion, and secular government. The problem is that such terms are shapeshifters which allow users to project diverse meanings onto them. These difficulties have not disappeared.

Both the organic creative aspect of the Mayflower Compact and its theocratic imperative were found in other constitutional arrangements in New England. The “Fundamental Orders” of the Connecticut River towns in 1639, a basic written constitution, set as their purpose to “enter into…confederation together, to maintain and preserve the liberty and purity of the gospel of our Lord Jesus which we now profess, as also the discipline of the Churches, which according to the truth of the said gospel is now practiced among us ….” As in Massachusetts Bay, justice was to be administered according to the laws established by the new government, “and for want thereof according to the rule of the word of God.” The Governor must “be always a member of some approved congregation.”

The colonies of Providence and Portsmouth in today’s Rhode Island, established in the 1630s, had founding charters similar to the Mayflower Compact, because they, too, were formed in the wilderness. A distinctive aspect of those colonies was that they were founded by Puritan dissenters, Roger Williams and Anne Hutchinson, respectively, who had been expelled from Massachusetts Bay. Shaped by their founders’ experiences, these colonies allowed freedom of conscience and did not establish an official religion in the manner of other New England settlements.




Guest Essayist: Joerg Knipprath

There have been few times as crucial to the development of English constitutional practice as the 17th century. The period began with absolute monarchs ruling by the grace of God and ended with a new model of a constitutional monarchy under law created by Parliament. That story was well known to the Americans of the founding period.

The destructive civil wars between the houses of York and Lancaster, known as the Wars of the Roses, ended with the seizure of the throne by Henry VII of the Welsh house of Tudor in 1485. The shifting fortunes in those wars had shattered many prominent noble families. Over the ensuing century, the Tudor monarchs, most prominently Henry VIII and Elizabeth I, consolidated royal power. Potential rivals, such as the nobility and the religious leaders, were neutralized by property seizures, executions, and dependence on the monarch’s patronage and purse for status and livelihood. Economic and social change in the direction of a modern commercial nation-state and away from a feudal society where wealth and status were based on rights in land had already begun before those wars. This change was due to financial necessities and a nascent sense of nationalism arising from the Hundred Years’ War between the English Plantagenet kings and the French house of Valois. Under the Tudors, England’s transition to a distinctly modern polity with a clear national identity was completed.

When Elizabeth died childless, the Tudor line came to an end, and the throne went to James VI Stuart of Scotland, who became James I of England, styling himself for the first time, “King of Great Britain.” On the whole, James was a capable and serious monarch but had strong views about his role as king. His pugnaciousness brought him into conflict with an increasingly assertive Parliament and its allies among the magistrates, especially his Attorney General and Chief Justice, Sir Edward Coke. The need for revenues to pay off massive debts incurred by Elizabeth’s war with Spain was the catalyst for the friction. James was well educated in the classical humanities and had a moderate literary talent. He wrote poetry and various treatises. He also oversaw the production of the new English translation of the Bible. As a side note, I have found it amusing that, 400 years ago, James warned about the dangers of tobacco use.

It was James’s political writing, however, which irked Parliament. He was a skillful defender of royal prerogative and seemed to derive satisfaction from lecturing his opponents in that body about the inadequacies in their arguments. James was able to navigate relations with Parliament successfully on the whole, mostly by simply refusing to call it into session. But his defense and exercise of his prerogatives, his claim to rule as monarch by the grace of God, and his pedantic and irritating manner, coupled with the restlessness of Parliament after more than a century of strong monarchs, set the stage for confrontation once James departed this mortal coil.

Parliamentary authority had accreted over the centuries through a process best described as punctuated equilibrium, to borrow from evolutionary biology. Anglo-Saxon versions of assemblies of noble advisors to the king existed before the Norman Conquest, in accordance with the customs of other Germanic peoples. William the Conqueror similarly established a council of great secular and ecclesiastical nobles of the realm, whom kings might summon if they needed advice or political support before issuing laws or assessing taxes. This rudimentary consultative role was expanded when the council of English barons gathered at Runnymede in 1215 and forced King John to agree to a Magna Carta. A significant provision of that charter required the king to obtain the consent of his royal council for any new taxes except those connected to his existing feudal prerogatives. This was a major step in developing a legislative power which future parliaments guarded jealously.

In 1295, Edward I summoned his Great Council in what the 19th-century English historian Frederic William Maitland called the Model Parliament because of the precedent it set. This Great Council included not just 49 high nobles, but also 292 representatives from the community at large, later referred to as the “Commons,” composed of knights of the shire and burgesses from the towns. Edward formalized what had been the practice off and on for several decades at that point. Another constitutional innovation was Edward’s formal call for his subjects to submit petitions to this body to redress grievances they might have. This remains a vital constitutional right of the people in England and the United States.

The division of the Great Council into two chambers occurred in 1351, with the high nobility meeting in what later came to be known as the House of Lords and the knights and burgesses meeting in the House of Commons. Within the next few decades, parliaments increasingly insisted that they controlled not just taxation, but also the other side of the power over the purse, expenditures. They faced some hurdles, however. Parliaments had no right to meet, and kings might fail to summon such a gathering for years. Also, these bodies were in no sense democratic. The Lords were a numerically small elite. Due to property restrictions, the Commons, too, represented a thin layer of land-owning gentry and wealthy merchants. The degree to which bold claims of parliamentary power succeeded depended primarily on the political skill of the monarch. Strong monarchs, such as most of the Tudors, could either decline to call parliament into session or push needed authorization through by dint of their standing among powerful and respected members of those bodies. A politically adept king could secure those relationships through a judicious use of his patronage to appoint favorites to offices.

During the rule of James I, parliamentary opponents of the king increasingly expressed their displeasure through petitions to redress grievances. English parliaments also manipulated the process as a tool of political power against the king. While those petitions might in fact come directly from disgruntled constituents, they were often contrived by members of Parliament using constituents as straw men to initiate debate in a way which suggested popular opposition to the monarch on a matter. These were political theater, albeit sometimes politically effective. Even if such a petition were granted by Parliament when in session, relief would have to come through the king or his officials, an unlikely result.

After the death of James I, relations between king and Parliament deteriorated further under his son. More affable than his father, Charles I was also less politically astute. As adamant as his father had been about protection of royal prerogative, Charles made too many political missteps, such as arresting members of Parliament who opposed various policies. Much of his political trouble arose from England’s precarious financial situation, partly due to misbegotten and unpopular military campaigns precipitated by Charles’s foreign minister, the Duke of Buckingham. When Parliament proved uncooperative, he attempted to finance these ventures and various household expenses through technically legal, but constitutionally controversial, workarounds.

One constitutional theory held that taxes, especially direct taxes on wealth or persons, were not part of the king’s prerogative. Rather, such taxes were “gifts” from the people. As with other gifts, the king might ask but could not compel. The people could refuse. It was impractical to ask each person. Instead, the Commons collectively could vote to grant such a gift to the king. The king had the prerogative, however, to enforce feudal obligations, collect fees, or sell property to raise funds. When Parliament in 1626 refused to vote taxes to pay for the military expeditions, Charles instead imposed “forced loans” on various individuals. Although such loans were deemed legal by the courts, this constitutional legerdemain was exceedingly unpopular and failed to produce significant income. Worse for the king, Parliament adopted the Petition of Right in 1628, which, in part, reaffirmed Parliament’s sole power of taxation. Charles at first agreed, but soon reneged. He dismissed Parliament and reasserted his power at least to collect customs duties. The Petition would eventually prove significant for another reason as well: it asserted certain rights which the king could not invade.

Charles then ruled without Parliament. To pay for his expenses, he resorted to various arcane levies, fees, fines, rent assessments, and sales of monopoly licenses. Still, he ran out of funds by 1640. Needing money for a military campaign against the Scots, he called Parliament into session. The first session proved unproductive, but he summoned another Parliament, which met in various forms for most of the next twenty years and became known collectively as the Long Parliament. Friction between Charles and Parliament led to civil war, a military coup by General Oliver Cromwell and other officers of the New Model Army, the trial of Charles by a “Rump Parliament” purged of his supporters by the Puritan military, and the regicide in 1649.

Following the execution of Charles, the Rump Parliament abolished the monarchy and proclaimed England to be a “Commonwealth.” Deep political divisions remained. If anything, executing a man whom some historians consider one of the most popular English kings undermined the legitimacy of the Commonwealth with the people. Cromwell finally dismissed the Rump Parliament forcibly in 1653, after scorning its members with the splendidly pungent “In the name of God, go!” speech the likes of which would not be heard today.

The Protectorate established later that year did not smooth relations between Parliament and Cromwell. In essence, this was a military dictatorship, and even the absence of royalists in the Commons and the interim abolition of the House of Lords did not prevent opposition to him. The two Protectorate Parliaments also were dissolved by Cromwell when they proved insufficiently cooperative, especially in matters of taxation, and too radically republican for Cromwell’s taste, having dared to challenge the Lord Protector’s control over the military.

Although the Protectorate’s military government was an aberration in English history, it produced some notable constitutional developments. The Instrument of Government of 1653 and the Humble Petition and Advice of 1657 collectively are the closest England has come to a formal written constitution. They created a structure of checks and balances which captured the trend of the English system from an absolutist royal rule to a limited “constitutional” monarchy. Although these two documents eventually were jettisoned by the “Cavalier Parliament” after the Restoration, they became a model for resolution of a subsequent constitutional crisis.

The Instrument provided the basic structure of government for the Protectorate. It was drafted by the radical republican Puritan General John Lambert and adopted by the Army Council of Officers in 1653. It was based on proposals which had been offered in 1647 to settle the constitutional crisis with Charles I, but which the king had rejected. The Instrument set up a division of power among the Lord Protector, a Council of State, and a Parliament that was to meet at least every three years. The last had the sole power to tax and to pass laws. The Protector had a qualified veto over the Parliament’s bills. However, he had an absolute veto over laws which he deemed contrary to the Instrument itself. Moreover, Parliament could not amend the Instrument. Although these provisions put Cromwell in the position of final authority over this “constitution,” the proposition that Parliament was limited by a higher law contradicted principles of Parliamentary supremacy. It anticipated the later American conception of the relationship between a constitution and ordinary legislative bodies. The Humble Petition and Advice was adopted by Parliament in 1657. It proposed some amendments to the Instrument, among them making Cromwell “king” and creating the “Other House,” a second chamber of Parliament, composed of life-term peers. Cromwell rejected the first and accepted the second.

After Cromwell’s death in 1658, and the resignation of his son Richard as Lord Protector the following year, the Protectorate ended. This created a political vacuum and a danger of anarchy. In the end, one of Cromwell’s trusted leaders, General George Monck, led elements of the New Model Army to London to oversee the election of a new “Convention Parliament.” Though Monck had been personally loyal to both Cromwells, he was also a moderate Royalist. The new Parliament technically was not committed either to the Commonwealth or the monarchy. However, it was controlled by a Royalist majority, and popular sentiment was greatly in favor of abolishing the military government and restoring the monarchy. Monck sent a secret message to Charles II for the prince to issue a declaration of lenity and religious toleration. After Charles complied, Parliament invited him to return as king.

Although the new king also fervently believed in his divine right to rule and proceeded to undo the Protectorate’s laws and decrees through his friends in Parliament—which again included the restored House of Lords—he was savvy enough not to stir up the hornet’s nest of Stuart absolutism too vigorously. A period of relative constitutional calm ensued, although Whig exponents of radical theories of popular sovereignty and revolution could still find their works used against them as evidence of treason and plotting.

Upon Charles’s death in 1685, the crown went to his brother, James II, an enthusiastic convert to Catholicism. When he and his wife, Mary of Modena, had a son in 1688, it presented the clear possibility of a Catholic dynasty, a scenario which repelled the Anglican hierarchy. Even more objectionable were James’s efforts to blunt the Test Acts and other laws which discriminated against Catholics and Protestant dissenters from the established Anglican Church. The main tool was his dispensing power, a prerogative power to excuse conformance to a law. But, at the likely instigation of the Quaker William Penn, he also issued his Declaration for Liberty of Conscience in 1687, a major step towards freedom of worship. The Declaration suspended penal laws which required conformity to the Anglican Church.

James’s Anglican political supporters began to distance themselves from him, and seven Protestant nobles invited the Stadholder of the United Netherlands, William of Orange, to bring an army to England. The Glorious Revolution had begun. James initially planned to fight the Dutch invasion, but lost his nerve and tried to flee to France. He was captured and placed under the guard of the Dutch. William saw no upside to having to oversee the fate of James, who was his uncle and father-in-law. To rid himself of this annoyance, he let James escape to France.

With James gone, William refused the English crown unless it was offered to him by Parliament. At the behest of a hastily gathered assembly of peers and selected commoners, William summoned a “Convention Parliament.” The throne was declared vacant due to James’s abdication. The Convention Parliament drafted and adopted the Declaration of Right. The following day, February 13, 1689, they offered the crown to William and Mary together as King and Queen, with William alone to have the regal power during his life. After accepting the crown, William dismissed the Convention Parliament and summoned it to reconvene as a traditional parliament.

The Convention Parliament was another milestone in the development of Anglo-American constitutional theory and built on the earlier Protectorate’s Instrument of Government. The process instantiated the radical idea that forming a government is different from passing legislation, in that the former is, in the later phrasing of George Washington, “an explicit and authentic act of the people.” The opponents of the Stuarts had long claimed that all power was derived originally from the people. However, parliaments had challenged the king’s supremacy with the claim that they represented the estates of nobles and commons, and that the people had vested all constitutive power in them. But, if the people were truly the ultimate source of governmental legitimacy, how could they permanently surrender that power to another body? This debate was carried on among the Whig republican thinkers of the era, such as the radical Algernon Sidney and the moderate John Locke. It raised knotty and uncomfortable issues about revolution. Those very problems would occupy Americans for several decades from the 1760s on in the drive toward independence and the subsequent process of creating a government.

There was no concrete condition that William and Mary accept the Declaration, but the crown was offered on the assumption that the monarch would rule according to law. That law included the provisions of the Declaration, once the reconvened parliament passed it as the Bill of Rights in December, 1689. Until then, the Declaration had no force of law, not having been adopted by Parliament as a legislative body and not having received the Royal Assent. This has been the process of the unwritten English constitution. As with the various versions of the Magna Carta and other famous charters and proclamations, an act of Parliament is required to make even such fundamental arrangements of governance legally binding. The English Bill of Rights is, mostly, still a part of that unwritten constitution, although some provisions have been changed by subsequent enactments.

The English Bill of Rights built on the Petition of Right to Charles I in 1628 and the Habeas Corpus Act of 1679 in expressly guaranteeing certain rights. Among them were the right to petition for redress of grievances, the right of Protestants to have arms for self-defense, protections against cruel and unusual punishments and against excessive bail or fines, and the right to trial by jury. Moreover, it protected members of Parliament from prosecution for any speech or debate made in that body. Many of these same protections appeared in American colonial charters, early American state constitutions, the petitions of state conventions ratifying the Constitution, and the American Bill of Rights. At first glance, the failure to protect religious liberty seems to be a glaring omission. However, anti-Catholic feelings ran high, and, unlike James II, the Anglican majority was not in the mood for religious tolerance. As to Protestant Nonconformists, their religious liberty was recognized in the Toleration Act of 1689.

The Bill of Rights also made it clear that the monarch holds the crown under the laws of the realm, thereby rejecting the Tudor and Stuart claims of ruling by divine grace. This postulate was a crucial step in the evolution towards a “constitutional” monarchy. Following the approach of the Protectorate’s Instrument of Government, the Bill of Rights provided that laws must be passed by Parliament, although the monarch had an unqualified power to withhold consent. One must note, however, that this veto power has not been exercised since Queen Anne last withheld assent in 1708. An attempt to do so by a British monarch today might trigger a constitutional crisis.

As a reaction against the perceived Catholic sympathies of the Stuarts and, in James II’s case, his actual Catholicism, the Bill of Rights very carefully designated the line of succession if, as happened, William and Mary died childless. That line of succession was limited to reliably Protestant families. To make the point clearer, the Bill of Rights defiantly debarred anyone who “is … reconciled to, or shall hold communion with, the see or church of Rome, or shall profess the popish religion, or shall marry a papist …” from the throne. The last prohibition likely was due to the habit of the Stuart kings to marry devout Catholic princesses, and an understandable concern over the influence that such a spouse might have in spiritual matters. On that point, too, the English experience affected later American developments, with the protection of religious freedom in the Bill of Rights and the prohibition of religious test oaths in the Constitution.

In addition to the importance of these historical antecedents to American constitutional development, the English Civil War and the Glorious Revolution demonstrate an uncomfortable truth. When the ordinary means of resolving fundamental matters of governance prove unavailing, those matters will be resolved by violence. Constitutional means work during times of relative normalcy, but on occasion the contentions are infused with contradictions too profound for compromise. It is an axiom of politics that politicians will seek first to protect their privileges and second to expand them. The increased demands by parliamentarians for political power inevitably clashed with the monarchs’ hereditary claims. Both sides appealed to traditional English constitutional custom for legitimacy. With their assumptions about the source of political authority utterly at odds, compromise became increasingly elusive and fleeting. It was treating a gangrenous infection with a band-aid. Radical surgery became the way out. The American Revolution in the following century, and even the American Civil War of the century thereafter, showed evidence of a similar progression, with the two sides operating from fundamentally contradictory views of the nature of representative government and proper division of power between the general government and its constituent parts.

The Glorious Revolution resolved the contest over these conflicting views of legitimate authority and the proper constitutional order between king and Parliament. The earlier Commonwealth with its Protectorate was an abortive step in the same direction. It failed due to the political shortcomings of the military leaders in control. Although further adjustments would be made to the relationship between monarch and parliaments, the basic constitutional order of a limited monarchy reigning within a political structure of Parliamentary supremacy was set. The new constitutional arrangement became a model for political writers of the 18th century, such as the Baron de Montesquieu. American propagandists of the revolutionary period readily found fault with the British system. Once they turned to forming governments, however, Americans more dispassionately studied and learned from the mother country’s rocky path to a more balanced and “republican” government in the 17th century. Both sides in the debate over the Constitution regularly used the British system as a source of support for their position or to attack their opponents.




Guest Essayist: Joerg Knipprath

Two noted maxims of Roman constitutional law contained in the code of Justinian’s 6th-century Corpus Juris were, “What pleases the prince is law,” and, “The prince is not bound by the law.” These are classic expressions of sovereignty. They locate the ultimate power and authority to make and enforce law in one identifiable person. They reflect the full imperium of the Roman emperor and create a contrast with the earlier Roman republic, when a similarly complete dominance was exercised only outside the city, by proconsuls in the provinces.

Yet there was another maxim in the Corpus, “What touches all must be consented to by all.” This suggests that the ultimate authority rests not in the governor, but in the governed. In the Roman republic, actions were taken in the name of the Senate and People of Rome. That idea was symbolized by the SPQR (Senatus Populusque Romanus) which was prominently displayed even on the standards of the imperial Roman legions. There is an obvious tension between these maxims. One might locate in that tension the beginning in Western political thought of the lengthy and ongoing debate over the nature of sovereignty.

One of the most influential expositors of the concept was the 16th century French jurist Jean Bodin. In his Six Livres de la République (Six Books of the Commonwealth), published in 1576, Bodin defines sovereignty as the power to make law. Political society, like other human organizations, is hierarchical. Someone must make the rules. Thus, sovereignty must exist as a precondition for a state. Sovereignty, Bodin insists, must be indivisible. And it must be ultimate and absolute. While his preferred sovereign is a monarch, that is not requisite. As a student of the classics, he asserts that all political constitutions are monarchic, aristocratic, or democratic. As a man of the Renaissance, he believes in scientific epistemology. But, before one can effectively study a country’s laws, one must know the source of those laws, which is in one identifiable man or body of men.

The appeal of such a theory to a strong ruler is clear, and there were few rulers of the early modern period as absolute in power and self-assured of his sovereignty as Louis XIV of France. The “Sun King” ruled from 1643 to 1715, a reign said to be the longest recorded of any monarch in history, although during his minority France was governed under the regency of his mother, Queen Anne. He took over sole rule in 1661, after the death of his chief minister, the political and diplomatic virtuoso Cardinal Mazarin, who had been the de facto ruler of France for a couple of decades. Louis’s famous dictum, “L’état, c’est moi” (“I am the State”), may well be apocryphal, but it summarizes his view of government.

Louis certainly was not alone in that regard. The Early Modern Period saw the rise of the nation-state and, as an essential component, the absolute monarch ruling by divine right. By the reasoning of various defenders of the new order, an absolute monarch as sovereign was as natural as the rule by the paterfamilias over the family and the rule of the pope over the community of believers. While Martin Luther and other early Protestant leaders might challenge the second analogy, they had no problem with the bigger point. On its way out was the old divided feudal structure, based on personal covenants of fealty, with power divided between popes and emperors, emperors and nobles, and nobles and freeholders. The conflict between King John and the nobles at Runnymede, which culminated in the Magna Carta of 1215, was an anachronism. More representative of the new order of things was King Henry VIII’s campaign of arrest and execution of English noblemen and seizure of noble estates. In similar manner, the walk by Emperor Henry IV over the wintry Alps in 1077 to Canossa to beg forgiveness from Pope Gregory VII and have his excommunication lifted, would be seen as rather odd. Instead, there was that same King Henry VIII first making himself head of the Catholic Church in England and, soon thereafter, head of the new Church of England.

Historians have speculated about the many possible causes of the rise of the modern nation-state. It is difficult to pinpoint any one cause, or even to distinguish between causes and symptoms. Was it the increased sophistication of weaponry and the changed structure of military operations, which eroded the relative equality of power among various nobles because of the greater expense of the new technologies and the larger armies drawn from commoners? Was it the growing influence of commerce due initially to the greater affluence and stability of society in the 12th and 13th centuries and then, ironically, to the economic recovery in the 15th century after the prior century’s population collapse from pestilence and famine due to the colder climate of the Little Ice Age? Was it the result of the decimation of the nobility due to the many wars among nobles, such as that between the House of York and the House of Lancaster in the English Wars of the Roses in the 15th century? Was it the European expansion and exploration in the Age of Discovery, enabled by European technological superiority, the expense of which could only be undertaken by comparatively large states and which, in turn, brought great wealth to their rulers? Was it simply, as Niccolò Machiavelli might declare, due to Fortuna and the virtu of dynamic statesmen with which a particular political entity was favored?

Whatever the reason, every ruler, it seemed, wanted to be what Louis XIV became. Timing was not uniform. England under the Tudors became the domain of an absolute monarch a few generations before France did, but also lost that status well before France. The German princes operated on a smaller scale and were well behind France in their pretensions to absolute rule; indeed, the Holy Roman Empire never coalesced into a nation-state. But the common thread for these rulers, apart from those of various city-states and a few oddities such as the Holy Roman Empire, the Swiss Confederacy, and the United Provinces of the Netherlands, was that they claimed to exercise full sovereignty in fact.

The existence of the aforementioned oddities presented a problem for theorists such as Bodin. The confederated natures of such realms and their distributions of power among various political organs vexed him. His solution was simple: he either assigned such divided governments to one of the pure systems or declared them not to be true states. Thus, he characterized the intricate constitution of the Roman Republic as a democracy. The Holy Roman Empire, with its imperium in imperio, that is, a purported dual sovereignty, was not really a state, but a chimera of one.

Along with Bodin, another influential author of the doctrine of sovereignty was the 17th-century English philosopher Thomas Hobbes, whose major work on the topic was Leviathan. As Bodin had done, Hobbes declares sovereignty to be indivisible and absolute. But Hobbes goes further. His approach is more pragmatic and more rigorous than Bodin’s. Hobbes analyzes sovereignty less in terms of the authority to make law than in terms of the ruler’s power to coerce others. That is the essence of the old Roman imperium, to command. For Hobbes, the sovereign’s legitimacy arises from the consent of the governed rooted in the social contract. That contract results from the human psychological need for peace. Mankind’s desire for survival impels humans to escape the brutal Hobbesian state of nature with its war of all against all. Human nature is both rational and self-interested. Hence, humans seek the safety of the political commonwealth and the strength of its organized coercive power.

Hobbes’s view of the relationship between subject and ruler is best described as covenantal, and his reference to an Old Testament creature is not coincidental. There is no equality of bargaining or of relationship as in a typical contract. The subject agrees to obey unconditionally, and the ruler provides protection and peace. To do that, the ruler must have unquestioned power to bend all persons and all institutions to his rule. The sovereign can act in accordance with established law or contrary to it. Church-state divisions are no longer an issue. The secular sovereign controls the ecclesiastical bodies, as Henry VIII controlled the church. It need hardly be added that a divided state or a system of distributed powers would be an abomination for Hobbes, as it would undermine the commonwealth’s stability and raise the likelihood of a return to the state of nature.

The Bodinian and Hobbesian approbation of undivided sovereignty in an absolute ruler sits rather ill at ease with certain assumptions about the American system. The drafters of the United States Constitution deliberately sought to create a system of balanced powers divided between the general government and the states and among several branches of the general government. The supporters of the Constitution frequently discussed the division between the general government and the states in terms of sovereignty, particularly the residual sovereignty of the states, in their efforts to assuage the concerns and blunt the criticisms of their opponents during the ratification debates. James Madison and others even argued that the Constitution was in many ways just a novel and workable modification of the confederal structure of the Articles of Confederation.

The Anti-federalists were not persuaded and, like Bodin and Hobbes, insisted that sovereignty was indivisible and that, within a union, imperium in imperio was impossible. Either the states were the sovereigns, as under the Articles of Confederation, or the general government was. While the framers may have attempted to “split the atom of sovereignty,” in the vivid words of Justice Anthony Kennedy, the effort was bound to fail. Either the states would control the general government or the latter would control the former. For the Anti-federalists, the teleological direction of the Constitution was clear: The general government would inevitably diminish the states to mere administrative appendages and become a tyranny.

This controversy over the nature of sovereignty in the Constitution has continued. Is there, indeed, an identifiable sovereign at all under the Constitution, with the split in authority among the legislative, executive, and judicial branches, as well as between the House of Representatives and the Senate? This does not even consider the role of what is, in the evaluation of some, the true sovereign: the wholly extraconstitutional vast bureaucracy with its essentially unreviewable combined rule-making and rule-enforcing power.

That question also leads to another controversy. To counteract the criticism that the Constitution was a path to oligarchic rule at best, and outright dictatorship at worst, the Constitution’s supporters made frequent references to the power of the people to participate in various political processes. In a similar manner, there arose the claim that, in the United States, unlike even in Britain, “the people are sovereign.” In 1776, George Mason asserted in the Virginia Declaration of Rights, “That all power is vested in, and consequently derived from, the People; …” Although he also expressed caution about this principle, James Madison in Number 49 of The Federalist accepted Thomas Jefferson’s dictum that, “the people are the only legitimate fountain of power,” and acknowledged that, at least in certain unexplained extraordinary matters, the people should decide directly.

But how do “the people” exercise indivisible and ultimate authority and power? Leave aside various inconvenient facts, such as the usual exclusion of large groups of “the people” from the political system, the often low fraction of eligible voters who actually participate, the ability of unelected bureaucracies or courts to frustrate the political decisions reached, and the dubious premise that “the people” have acted when the vote is, say, 51% in favor and 49% opposed. As the experience of ancient Athens and Rome shows, it is not possible for “the people” to gather in one place. As an interesting side note, modern technology makes such an event less implausible, but even with the capacities of a premium Zoom version, it might be difficult to get a couple of hundred million of “the people” to participate in policy-making. It is a far cry from an 18th-century New England town meeting, and even there, a majority assumes a power over a minority.

Moreover, aside from the Constitution’s optimistic reference to “We, the people of the United States,” every part of that document is about entities other than the people making laws and coercing individuals to obey those laws. Indeed, “the people” did not adopt the Constitution. Nor can they amend it. Technically, there is not even a guaranteed right in the document for “the people” to vote, as the states control the qualifications for voting in the first instance. True, here or there across the American constitutional landscape, one might spot an exemplar of popular sovereignty. Some states provide for direct participation by voting on ballot initiatives and referenda to make law, and there remain in some localities the aforementioned town meetings. One might even point to jury nullification as another example. But all of these are well outside the norm.

This dissonance between declarations of popular sovereignty and the reality of government has led some writers to try to reconcile the two. Jean-Jacques Rousseau asserted that the people cannot act individually to legislate. Instead, their particular interests are collectivized and transformed rather mystically into the community’s “general will.” For Rousseau, the community is an actual, albeit incorporeal, entity with a will. That general will is expressed in laws through some legislative body. This seems to be a well-perfumed version of the Roman Empire’s old constitutional sleight of hand that the people are the ultimate source of political authority but have ceded their sovereignty to the emperor.

Rather than resolve these tensions, one might distinguish between “theoretical sovereignty” and “practical sovereignty.” In a system whose claimed legitimacy is based on consent of the governed and which purports to base the legitimacy of its actions on some degree of popular participation, one might indeed posit a theoretical grounding on “the people” as the unlimited sovereign. The then-future Supreme Court justice James Wilson, a prominent lawyer and intellectual who signed the Declaration of Independence and the Constitution, wrote in his law lectures that a constitution originates from the authority of the people. “In their hands, it is as clay in the hands of the potter: they have the right to mould, to preserve, to improve, to refine, and to finish it as they please.” But that is not how government operates in practice. It is certainly not how the Constitution was adopted and how it has actually been amended.

Just as the high-minded assertion in the Declaration of Independence that “All men are created equal” states a Christian view of us all as God’s children or perhaps a still-aspirational secular equality before the law, “popular sovereignty” or “consent of the people” is a useful philosophic device to communicate the difference between a government and a bandit. It establishes a conceptual basis, perhaps a noble lie, for political obligation, that is, why one is obligated to obey the commands and coercions of the former, but not the latter.

The more difficult and practically relevant investigation is where in our constitutional system practical sovereignty lies. Who really governs, makes the rules, and coerces obedience? There indeed is no clear Bodinian sovereign in the Constitution’s formal dispersal of power. Despite Alexander Hamilton’s expansive views of executive power in The Federalist and his subsequent Pacificus letters, the President’s constitutional powers fall well short of a monarch’s, as Hamilton himself wrote. Even Louis XIV, despite his pretensions, found out that his word was not everyone’s command. He did ultimately acknowledge on his deathbed, “I depart, but the State shall always remain.”

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




Guest Essayist: Joerg Knipprath

Among the common definitions one finds for “Machiavellian” are “unscrupulous,” “cunning,” “deceitful,” and “duplicitous,” words associated with disreputable character. The namesake for these malignant traits is Niccolo Machiavelli, a Florentine diplomat who lived from 1469 to 1527. He was a scion of an ancient Florentine family. His father, a lawyer, provided him with a classic education. That learning shows in Machiavelli’s various books about political science, warcraft, and history. In addition, Machiavelli wrote numerous letters and shorter essays and a satirical play, Mandragola, which was immensely popular at the time. Whether or not he intended it as such, this play has been described as an allegory about political events in 16th century Italy, a bawdy dramatization of the advice Machiavelli gave to the Medici family in his notorious work, The Prince (De Principatibus or Il Principe).

Machiavelli and his family were firmly associated with the republican factions in Florence. Through that connection, he held diplomatic offices in service to his city, traveling extensively to political centers and royal courts in Italy and the rest of Europe. In this capacity, he met a number of rulers, including the charismatic Cesare Borgia, after whom the protagonist in The Prince is supposedly styled. With the return to power of the anti-republican faction of the Medicis in 1512, Machiavelli’s political fortune cratered. The following year, he was accused of plotting against the regime, arrested, imprisoned, and tortured.

It has long been claimed that he wrote The Prince while in prison as a testimony that he was loyal to the regime and, indeed, should be permitted to serve in the new government. The fawning dedication to Lorenzo de Medici, Duke of Urbino, that Machiavelli wrote in the preface of the book lends credence to that claim. Whether or not Lorenzo or any other member of the family ever read the book, Machiavelli’s hope for a further diplomatic career remained unfulfilled. He retired to a life of contemplation and writing.

Around 1517, he wrote his other famous work on politics, The Discourses on the First Ten Books of Titus Livy, wherein he examined the politics of the early Roman Republic. From Rome he sought to learn the necessary conditions for a successful republic, an aspiration for his own city’s future. Although there are common threads, such as the judicious use of violence when needed to maintain the government, The Prince is different in tone and goal from The Discourses. This has led to much speculation about Machiavelli. Was he the amoral cynic, scornful of Christian ethics, whom the former book displays? Or was he the admirer of republican Rome, who emphasized the need for constant “rebirth” to maintain that best of all systems? In the latter work, he is alarmed that corruption of republican character will destroy the republic, unless something spurs its rebirth, preferably from reforms within the republic itself. John Adams, writing a quarter-millennium later in A Defence of the Constitutions of Government of the United States of America, agreed. But that is not The Prince.

In short, one must look at The Prince on its own terms. Readers then and since have been shocked—or piously professed to be shocked—by its content and tone. But why? The book makes no claim to promote virtue, either in the classic or Christian sense. He does not disparage Christianity or challenge Christian virtue in this or any other of his works. As one commentator has noted, “What should not be assumed is that whatever Machiavelli thinks about things in general is necessarily ‘Machiavellian.’ His view of politics is, but it simply does not follow that his view of everything is ‘Machiavellian.’” The Prince purports to deal with the world as it is, not as philosophy or religion would like it to be. It followed a long literary tradition called “the mirror of princes,” books whose lessons instructed future rulers about “proper” governance. It should come as no surprise that such instructions during the Middle Ages came with a heavy dose of Christian ethics to civilize the prince and habituate him to just and temperate rule. After all, as Thomas Aquinas noted, God gave the ruler care of the community for the general welfare, not a license to exploit the people for the ruler’s own benefit.

Machiavelli builds on that literary tradition but uproots it from its philosophical grounding. He tosses aside the Aristotelian conjoining of ethics and politics, the classic assumption that what defines a good person also defines a good ruler, where the private virtue is elevated to the public. It is an abandonment of the scholasticism of the High Middle Ages and its synthesis of philosophy and religion, of which Thomas was a prominent expounder. The Prince warns the ruler that, to be successful in politics, he must assume the worst of everyone, whereas the classical version of politics as ethics writ large held that a few people are virtuous, more are evil, and the great majority are in-between. It was for the last group that habituation to ethical behavior might move the needle.

Machiavelli is not interested in saving the prince’s soul, but in having him survive, a matter of particularly acute relevance in the chaotic and often murderous factional politics of the Italian states. He does not hold up his examples as paragons of morality, and his praise of virtu means a prince’s skill at the craft of statesmanship, not the ideal character of a Christian nobleman or the pursuit of personal excellence by a Roman Stoic sage. His advice is specific and based on assumptions about how human beings consistently respond to certain events and actions. These assumptions are drawn from hard-nosed examination of human behavior and contemporary events. Machiavelli engages in empirical psychology, no less valid because his analysis often also draws from historical sources made familiar through his classical education. Like the image of Janus, the Roman two-faced god of transitions, Machiavelli and his contemporaries looked ahead to a more secular world revealed through humanistic tools of discovery but still could not avert their gaze from the medieval world receding behind them.

The Prince is divided into several sections and chapters, dealing with the particular conditions of various principalities. There are secular and ecclesiastical princes. Among the secular are those who became rulers by conquest, by criminal acts, or by acclaim of the people. Just as all cars might have certain similar requirements for maintenance, yet need different manuals to address their particular components, so does the governance of people in different polities.

Starting with commonalities, there are certain common-sense postulates derived from experience. It is better to be feared than loved by the people, although Machiavelli acknowledges that it is best to be both respected and loved. A ruler who is loved is likely to return that love and act magnanimously and govern moderately. But love is unsteady. In human relations, lovers betray each other constantly, through deceit or worse. That behavior is the theme of much literature, dramatic as well as comedic, including Machiavelli’s own Mandragola. At the impersonal level of a state, love becomes even less stable, which Machiavelli’s own fate in a city riven with factionalism demonstrated all too well. No politician is loved by everyone, nor should one even try to be. Sic transit gloria mundi should be a warning for every politician, as the glory of today becomes the exile, or worse, of tomorrow. Fear, on the other hand, provides a more stable rule, because it always produces the same reaction from people, of obedience and, indeed, respect for the ruler’s decisive leadership.

True, some might feel so much hatred for a strict ruler that it overcomes their fear. Therefore, the ruler must apply the precautionary principle: treat everyone as a potential assassin, more practical advice to survive in 16th century Italian politics. From this, another general rule emerges. Feign affability, but never let down your guard by mistaking your disguise for reality.

Of particular relevance to the Medicis would be the advice for rulers of conquered lands. Upon victory, the new ruler might react in an understandable human way and be indiscriminately magnanimous to the conquered people. Big mistake. The ruler must put himself in the position of various groups among those people. First, there is the former ruler and his family, around whom those with loyalty to the prior regime might coalesce. To the extent possible, the prior ruler’s family must be exterminated to eliminate this mortal danger to the new prince.

Another group might be those who have invited the prince to invade as a result of factional strife within that domain. This group expects to be rewarded. It is safe to ignore them, as they have no one to support them against the new prince. Their own people consider them traitors, and their very existence depends on the prince’s success. He holds their reins, not they his.

A third group comprises the sizable portion of the people who have something to lose in wealth or position, but are not among the first two groups. They might be, for example, merchants, artisans, and bureaucrats. The advice: be generous to make them feel connected to the prince. Kill those with loyalties to the old regime, fine. But get it done quickly, and do it through a subordinate who can then be blamed for having been overly zealous. One might think of King Henry II of England and his cry to the nobles, “Will no one rid me of this meddlesome priest?” about killing Thomas Becket, the 12th century Archbishop of Canterbury. Better yet, kill the executioner, for there is no better way of showing that executions are over than hanging the hangman. The conquered people are afraid and cowed, uncertain of what will become of them, their families, and their property. They look for any sign of humanity in the conqueror and want to believe in the ruler’s good will. Such an approach will reassure them that they are safe and will be seen by them as one of generosity. After all, the condemned man is thankful for a pardon, even though it may have been the ruler whose prosecution put the man in the position of needing one. The reader might find it difficult to avoid the sense that this part may have been about Machiavelli and his own family’s situation while he wrote The Prince.

People, by nature, lack gratitude. Over time, the effect of not having been killed or stripped of their property wears off. Now the prince should reward them, but do so gradually and without raising taxes. The people may see through this, but will respect the prince for his fiscal discipline, which has benefited them financially. One other noteworthy point that Machiavelli makes is that this third group of people might accept their conqueror because they blame the prior ruler for their situation. They will believe that the prior ruler lost because of corruption of his moral or political bearings, with the latter due either to the ruler’s laziness in attending public affairs or to a rot of the political structure as a whole. In any case, the prior ruler proved unfit, which makes the new one worthy of respect and fealty.

The last group is the remainder of the population. One option is to rule with perpetual fear and to strangle their livelihoods with taxes to keep them struggling for survival rather than engaging in political scheming. But, sooner or later, the prince will need them as soldiers. It will not do to impoverish the people because, with nothing for them to lose, it will make them unable and unwilling to fight on his behalf.

This broaches the topic of war, one of Machiavelli’s favorites, not coincidentally also a frequent pursuit of the rulers of Italian states during his time. War, he declares, is ubiquitous and inevitable among states. The prince should embrace it, but be smart about how and when to fight. War must deliver benefits for his people, such as tribute or new lands. Internal politics are inevitably connected to foreign policy, an interrelation which a diplomat such as Machiavelli would be sure to emphasize. War also can be a useful distraction from domestic trouble by rallying the people to the prince.

The “how” of fighting the war is of particular significance and requires long-term choices. One might use one’s own forces, those of allies, or mercenaries. While some combination among them, particularly the first two, is possible, he addresses the benefits and drawbacks of each. If one relies on allies, one takes a risk. They may help you and fight with elan. However, they may want a division of the conquered territory. If you refuse, they may turn on you. Therefore, be hesitant about allying with more powerful entities, but at least make sure that there is not one predominant ally among the group.

Mercenaries are always a problem, during war or peace. Perhaps he based this on the experience Italian states had with their frequent use of mercenaries, particularly German and Swiss. He broadened the argument to include professional soldiers in general. They fight for money and often are on retainer during peacetime. Therefore, they want to avoid war and will counsel against or even frustrate the ruler’s political decision about war. If war happens, they feel a certain fraternity with those on the other side. They may know them and even may have fought alongside them in other wars. Mercenaries do not fight vigorously, because the soldier on the other side is “just doing a job,” just as they are. The mercenaries lack the necessary conviction for the cause, because, in the words of one commentator, they “no more hate those they fight than they love those whom they fight for.” Even if they win, they could turn on the prince. At the least, they might raise their fee, a demand it would behoove the prince not to ignore lest the mercenaries act against his interest.

Best, then, to rely on one’s own citizen militia. If there are military reverses, the citizens will fight most vigorously for their hearth and home. If they are victorious, they can be rewarded with a moderate degree of plunder. They might also be useful to colonize the new realm. However, this migration must be undertaken with the long view towards intertwining the conquerors with the original inhabitants. It must not produce a collection of isolated communities of occupiers. Assimilation works best if the conquerors and the conquered share language, religion, and customs. Otherwise, particular care must be taken to be sensitive to deeply-held customs of the conquered people to pacify them. This reflects a practical strategy employed successfully by the ancient Romans as they spread across alien lands.

Machiavelli’s commendation of citizen militias and his distrust of professional soldiers reflects his republican leanings. Such broad-based military service was at the heart of the classic Greek and Roman conception of citizenship. His views became a staple of classic republican argumentation. During the debates over the American Constitution in 1787 and 1788, the Anti-federalists vigorously objected to a standing army as a tool of tyranny that would doom the republic. Hamilton and Madison used several essays in an attempt to blunt those objections.

Another aspect of Machiavelli’s instruction was that the ruler must consider the role of luck in events, particularly in war. He uses Fortuna, the Roman goddess of luck and fate. She is capricious, moody, and willful. She must constantly be courted to stay on her good side. Her capriciousness cannot be tamed, but fortunately, if one may use that word, it may be calmed by the ruler’s virtu. Machiavelli is a Christian, so he does not believe in unalterable fate; man has free will. Moreover, the history of warfare shows not only the influence of luck, but of skill at warcraft, such as when a commander executes a deft maneuver that allows his army to escape a precarious situation. Hence it behooves a ruler to act decisively. Fortuna and virtu, working together, are irresistible.

Unlike succession under established constitutional rules, conquest by itself cannot bestow legitimacy on the new prince. Machiavelli’s prince is not Thomas Hobbes’s Leviathan. Machiavelli calls to mind Aristotle’s distinction between king and tyrant. The non-pejorative meaning of “tyrant” was someone who came to power outside the customary process. That said, a consistently “lucky” prince will be seen by the people as beyond ordinary men, which creates legitimacy in their eyes. It is a well-known psychological urge in people to “go with a winner.” One need note only the increased attendance at sporting events in our time when the team is on a winning streak that season. As in the case of the ancient Greek heroes favored by their deities, Fortuna smiles on the prince. The concrete evidence of the prince’s success bestows the legitimacy on him which medieval Christians believed occurred through God’s anointment of kings and emperors. A lot of this may be theater, where elaborate court pomp and ritual provides the stage to make it appear that the prince is powerful and favored by fortune. The medium becomes the message, as the phrasing goes. As in Plato’s parable of the cave, the appearance becomes the reality in the minds of the subjects, a metamorphosis to which citizens of modern republics certainly are not immune, either.

The requirement that a successful prince take account of Fortuna’s fickleness and need for constant attention and courting sounds very much like Plato’s and Polybius’s critiques of the “pure” forms of democracy. For them, the general citizenry was fickle and willful and craved constant flattery from would-be leaders. The extent to which the latter possessed the political virtu to manipulate the citizens would determine how much support such demagogues would get. One also is reminded of Hamilton’s concern in Number 68 of The Federalist that direct election of executives is undesirable, because it rewards men who offer nothing more than their “[t]alents for low intrigue, and the little arts of popularity.”

The Prince has often been compared—unfavorably—to the works of political theorists who followed Machiavelli within a few generations, preeminently Jean Bodin and Thomas Hobbes. The latter, critics have charged, produced much more sophisticated and internally consistent investigations of political systems. Bodin, a French academic and jurist who wrote in the 16th century, analyzed different forms of government and organized them around the concept of sovereignty. Hobbes, an Englishman writing a hundred years later, claimed his work to be a new science of politics. He provided a modern psychological basis for the origin of political society in the rational self-interest of mankind, foremost the desire for personal security and safety. Meeting that primal psychological need established for Hobbes the legitimacy of an absolute ruler such as his Leviathan.

These criticisms miss the purpose of The Prince. Like Bodin, Machiavelli favored centralized and effective power through his prince. He hoped for a strong leader to unify Italy, much as Bodin wrote in favor of the French monarchy which had mostly completed the unification of France. Like Hobbes, Machiavelli in The Prince rejects established ethical justifications for a ruler’s legitimacy and justifies a strong and energetic ruler based on that ruler’s success in governing. As was essentially the case for Hobbes, there is no universal moral order of natural law which actually limits the prince’s law-making. To borrow from Justinian’s Code, the prince is the law because there is no earthly sovereign above him. This had also been the position of certain medieval churchmen, especially William of Occam, in regard to the divine realm and God’s omnipotence. Machiavelli and Hobbes secularized those arguments. It is true that The Prince lacks the philosophical wholeness and complexity of other works, but Machiavelli was not aiming for that. His Discourses on Livy comes closer to it. With The Prince, he was writing a practical guide for a successful ruler, a guide drawn from experience and an exemplar of a new science of statecraft.

Machiavelli’s prince did not, then, fail as a political concept. Indeed, Machiavelli’s goal of Italian unification through a dynamic leader, possessed of virtu and smiled upon by Fortuna, was realized, albeit more than three centuries later. Rather, because so much depended on the political skills of each ruler, particular princes failed while others succeeded. This flux destroys the social stability which is needed for productive lives and is traditionally the goal of government. Machiavelli reveals the concurrent strengths and weaknesses of monarchy and other single-executive systems of government. Leaving aside the potential problems of standing armies and heavy taxation discussed earlier, The Prince provides many lessons for us and reveals parallels to how our system functions.

For one, Machiavelli’s methodology is strikingly similar to the approach in The Federalist. Alexander Hamilton declared in Number 6, “Let experience, the least fallible guide of human opinions, be appealed to for an answer to these inquiries.” Use of illustrative historical events and commentaries on human nature based on similar psychological investigations run throughout those essays. One goal of the authors of The Federalist was to explain to their readers how this republican system could be successful as a practical undertaking, regardless of its conformance to some ethical ideal, the virtue—or lack thereof—of its politicians, or the problematic legitimacy of its creation.

Machiavelli also recognized that the fate of the prince and the people ultimately are tied together. The prince’s wise practice of statecraft will bring prosperity, which the citizens will defend vigorously, if needed. This is an eminently pragmatic position, well supported by examining history. As James Madison wrote in Number 40 of The Federalist in response to criticisms that the Philadelphia convention had acted illegitimately and against existing constitutional rules, “[If] they had violated both their powers and their obligations, in proposing a constitution, this ought nevertheless be embraced, if it be calculated to accomplish the views and happiness of the people of America.”

Another lesson is the need to avoid dependence on the particular qualities of one leader. It has long and often been recognized that the Constitution creates a potential for strong executive government. Examples abound, from Alexander Hamilton’s broad claims of implied executive powers in his Pacificus essays from 1793, to Woodrow Wilson’s positively Machiavellian observation in his book Constitutional Government, “If he rightly interpret the national thought and boldly insist upon it, he is irresistible. . . . His office is anything he has the sagacity and force to make it.” Most telling are the numerous claims of far-reaching power to act in emergencies by presidents down to the present, emergency powers which then conjure more emergencies. While the political benefits from energy and decisiveness in the executive were duly noted, the framers of the Constitution intended the system of structural separation of powers to diminish the dangers from concentration of power in a single ruler.

Finally, there was the need to deal with the destructive factional politics that plagued Italian cities during Machiavelli’s time and beyond. The Prince proposes one remedy: the charismatic leader whose skill will prevent these factions from entrenching themselves. The Constitution recognizes the problem but proposes a different solution: to set the factions against one another in peaceful competition by multiplying their number and diversity so that none becomes entrenched.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.



Guest Essayist: Joerg Knipprath

Historians have usually described the government of the Netherlands in the two centuries between 1579 and the political system’s collapse in the late 18th century as a “republic.” Consistent with his commentary about the government of Venice, James Madison did not approve of this characterization. In Number 20 of The Federalist, he deemed the United Netherlands “a confederacy of republics, or rather of aristocracies, of a very remarkable texture.” While at times complimentary in his assessment, overall he saw in their government further evidence of what ailed, in his view, all confederations, including the United States under the Articles of Confederation.

Like the Articles, the Dutch system was forged in a war for independence, the first goal of which was to survive militarily. The Dutch referred to their Revolt of the Netherlands as the “Eighty Years’ War.” Fighting against Spain began in 1566; in 1579, the seven northern provinces of the Spanish Netherlands formally united in their common cause through the Union of Utrecht, a watershed step not unlike the agreements of mutual aid and action among the North American colonies in the years before 1776. The Dutch analogue to the American Declaration of Independence was the Act of Abjuration of 1581 against the king of Spain. There were some truces and cessations of hostilities in subsequent decades, but independence was not officially recognized until the Treaty of Westphalia in 1648, which ended the much broader European conflict known as the Thirty Years’ War. Still, the Dutch Republic had been functioning as an independent nation long before that status became official.

In the romanticized founding myths of the Dutch, the struggle was about religious toleration and national independence, precipitated by an inquisition launched by the Spanish crown in support of the Council of Trent, convened in 1545, and the Catholic Counter-Reformation. That may have been the motivator for some portion of the populace, and the assertion was useful in papering over the tensions which arose among the provinces during the war. The general reality was less lofty and more prosaic.

The Habsburg family ruled the Holy Roman Empire. They had received 17 provinces of the Duchy of Burgundy in 1482, which were allotted to the family’s Spanish branch in 1556. What happened next sounds familiar to the student of American history. The new Spanish king, Philip II, sought to centralize administration over these provinces located some distance from Spain, and to increase the efficiency of tax collecting. This would diminish the power that local bodies had previously exercised under the more hands-off approach of the Burgundians and the Emperor. The commercial towns in the southern provinces and the local nobles viewed this as an attack on their ancient privileges, secular and religious.

With resistance turning into rioting in 1566, the Spanish government sent an army, led by the Duke of Alba. Although a very capable military leader, said by some to be one of the greatest of all time, Alba was a harsh governor, referred to by the Dutch as the “Iron Duke.” His army was generally successful against the rebels, but his policy of mass executions, sackings of towns, and massacres united the population against the Spanish. The rebels received the support of a Catholic German-Dutch prince, William of the House of Orange-Nassau, the incumbent royal governor of several of the provinces. Colloquially—but unjustifiably—known as William the Silent for his supposed self-control in not erupting in anger, he was an effective political leader. As one of the richest Dutch nobles, he was also an important financial supporter of the rebels.

Although William had some successes against the Spanish army, the Duke of Alba eventually defeated his forces. William fled to his ancestral lands in Germany, from where he organized several mostly unsuccessful invasions. In 1573, Philip II relieved Alba of command and instituted a policy of reconciliation and acquiescence to greater local control. That split the rebels. The mostly Catholic southern provinces, which constitute Belgium today, returned to the Spanish fold. The seven increasingly Protestant provinces of the north remained in rebellion under William’s leadership. Dutch military fortunes brightened after the army of the United Provinces was formed following the Union of Utrecht. The army was placed under the command of William’s son, Maurice, after William was assassinated by a Spanish agent in 1584. Prince Maurice remained a prominent military and political leader for the next forty years.

One facet of the conflict in which the Dutch were consistently better than the Spanish was war at sea. The northern provinces had long been oriented to fishing and maritime trade. Their coastal trade surpassed that of England and France in the 16th century. By the 17th century, their horizon had expanded to oceanic trade and the acquisition of colonies and foreign trading concessions. Along with that experience came skills in naval warfare. Professor Scott Gordon, in his thorough work on checks and balances in older constitutions, Controlling the State, estimates that, in the middle of the 17th century, the United Provinces owned more shipping capacity than England, France, the German states, Spain, and Portugal—combined. Amsterdam became the leading financial center of the world until it was finally replaced by London a century and a half later. It was the Dutch bankers from whom John Adams sought help during the American Revolution, because that was where the money was. Amsterdam was also one of the largest cities in Europe in the 17th century, having grown from 100,000 to 200,000 inhabitants over the middle decades of that century.

Although the seven provinces were formally the main constituent parts of the “United Provinces of the Netherlands,” the towns were the actual foundation of the Dutch Republic’s political structure. The approximately 200 native Dutch noble families had status but limited power. There was not the same tradition of feudalism based on relationships of lord and vassal as in other European domains. In part, this was due to the closeness to the sea, with its sources of sustenance and wealth. In part it was due to the fact that for generations, land had been recovered by draining swamps or building dikes. These “polders” were claimed by commoners.

The towns were governed by the Regents, a wealthy subgroup of the merchant elite. The towns traced their charters and privileges to the medieval period. The Regents claimed to act for and represent the citizenry. However, their authority did not rest on broad political participation. From that perspective, the structure was not a republic, but an oligarchy. Meetings of the town councils controlled by the Regents were not open to the public. At the same time, the Regents did not constitute a class-conscious bourgeoisie in a Marxist sense. Rather, their actions seem to have been driven by local identity and preserving their local power. This town-centric system of governance remained until the reorganization of the Netherlands after the end of the Republic in the 1790s.

The towns built their own defense installations and levied taxes to maintain them, to preserve public order, and to provide for the poor. They also operated their own courts, enforced provincial laws, and administered provincial policies. The policy-making bodies, the town councils, generally had between 20 and 40 members. They elected various burgomasters annually from the Regent class to carry out executive and judicial functions.

The oligarchic character of the town governments was modulated somewhat through the militias, each a combination of military unit and social club. They were composed of troops of well-trained and heavily armed men. Because members had to supply their own weapons, the militias consisted of middle and upper-middle class volunteers. They were led by officers from Regent families appointed by the town councils and were expected to carry out the latter’s wishes in case of civil disturbances. According to sources cited by Professor Gordon, riots were a not-uncommon manner for the citizenry to provide feedback to the Regents about their policies. The militia sometimes stood back if they opposed those policies themselves. Such expressions of popular discontent would have been particularly potent because the towns were still rather small, with the homes of the Regent families in close proximity to those of the other residents.

Gordon considers the failure of the Dutch Republic to provide less destructive means of popular expression of opposition to the town councils as one of its defects. Perhaps. But such riots were not uncommon in the history of the American republic, with apparently a customary acceptance of a degree of violence before the militia would be summoned. Recent events show that still to be a characteristic of American society. Whether that shows a defect in the republican nature of the political structure created in the constitutions of the United States and the several states is an interesting speculation.

The level of government above the towns was the provinces, formally the constitutional heart of the Dutch Republic. They were governed by entities called the “provincial states,” another institution formed in the Middle Ages. This term is not to be confused with the American concept of “states” as distinct political domains; rather, it refers to the specific constitutional bodies which governed such domains. These were assemblies of delegates from the towns, selected by the town councils typically from the members of the Regent families. A town could send more than one delegate, but each town had only one vote, regardless of its population. Despite this formal equality, and although decisions were generally reached by compromise and consensus, a dominant town necessarily exercised greater influence; Amsterdam, as the largest and wealthiest town within the province of Holland, provides a telling example. A province’s nobility also had one vote.

The principal obligation of the provincial states was to maintain the province’s military forces and to provide a system of provincial courts to preside over trials for various crimes and for appeals from the local courts. These assemblies could also assess taxes, but were dependent on the towns to collect them. Not infrequently there might be tension between the provincial state and the stadholder, the province’s chief executive from the House of Orange. Those tensions were especially acute and frequent in Holland, due to the strong anti-Orangist sentiments of Amsterdam, with its bourgeois merchants, its growing tradition of secular and religious dissent, and its cosmopolitanism. At times, Holland, as well as other provinces, refused to elect a stadholder when the prior one died.

At the apex of the Republic’s constitutional structure was the States-General, a body of around 50 delegates from the provinces. It met at The Hague. Although a province might send more than one delegate, each province had one vote. This equality of sovereigns marked the constitutional nature of the Republic and underlay Madison’s characterization of it as a confederacy. As with the provincial states, this formal equality was tempered by the inequality of size and wealth among the provinces, in particular Holland. That province’s delegation’s willingness to provide—or withhold—needed funding gave it influence which better reflected its economic position. The terms of office of the delegates were determined by the provinces and could be at pleasure, for one or more years, or for life. The agenda of the States-General was set by its president, a position which rotated weekly among the provinces. Unanimity was required for action, although that requirement was sometimes ignored if a particular need arose. It had various working committees to formulate policy and a Council of State to carry out its executive functions. The Council of State was composed of the provincial stadholders and twelve other appointees of the provincial states.

Initially, the States-General was to deal with the military campaign for independence. Thereafter, its role continued to be about war in the various conflicts in which the republic found itself in the 17th century. Beyond that, the States-General had broad responsibilities over coinage, diplomacy and foreign commerce and, as the Dutch quickly entered the pursuit of overseas empire, colonial affairs. Although it had the potential to become a national legislative body, that potential remained inchoate. Aside from the overarching political jealousies of the provinces and towns to maintain their local privileges, there were more direct limitations on the powers of the States-General, as well. For one, that body could not generally impose taxes directly. It could tax the colonies, but that yielded rather little. It could make assessments on the provinces, but that depended on the willingness of their delegates to agree, especially the delegation from Holland, which typically had to bear at least half of the burden of an assessment. Any loans sought by the States-General for the benefit of the Republic had to be approved by the provinces. It becomes clear why Madison saw the Republic as a case study for the fate of the Articles of Confederation.

Finally, there were the stadholders of the provinces and the de facto stadholder of the United Provinces. The office was derived from the provincial governorships the Holy Roman Emperor had established. Each provincial state selected that province’s stadholder for life. More than one province could appoint the same person, a very common scenario. During the two centuries of the Republic after 1579, all provinces always appointed members of the House of Orange-Nassau. When the need arose, the province of Holland, as the most important of the union, always appointed the head of that family. Technically, there was no Stadholder of the United Provinces. However, by customary practice, the States-General always appointed the stadholder of Holland to be the Republic’s commander-in-chief. This made the head of the House of Orange the main political leader of the most populous and prosperous province and the commander-in-chief of the Republic’s armed forces. The stadholderships generally became hereditary in the mid-17th century.

The power of the Prince of Orange over the armed forces included the power to set up military tribunals and to appoint higher-level officers. He also met with foreign ambassadors and had some adjudicatory powers, such as settling disputes among the provinces. His influence was bolstered by two broad sources. First, at the level of the union, he sat on all working committees of the States-General and on the Council of State. Together with his life term, this gave him broad knowledge about political matters over a much longer time frame than the provincial delegations, analogous to the Venetian Doge’s position in relation to the Senate and Great Council. If knowledge is power, this made the prince powerful, indeed.

Second, being the stadholder of Holland and, usually, of several other provinces gave him significant control over provincial and even town affairs. The provincial stadholder was the head of the province’s highest court, could pardon criminals, and had significant patronage powers over the appointment of officials at all levels. He could appoint certain burgomasters, although those appointments had to be made from lists submitted by the Regent-controlled town councils. These roles, some formal, others by accepted practice, exercised at all levels of government, and extending to civil, military, and judicial matters, made the Prince of Orange in some ways the vortex around which Dutch politics swirled. In the end, however, given the vague constitutional dimensions of the office, it was the personality and talents of the particular stadholder which defined his powers.

A curious spectacle occasionally arose when various provinces left their stadholderships unoccupied. Even the province of Holland at one point in the 18th century left the position vacant for 45 years. In the 17th century, Holland also prohibited the House of Orange from holding the stadholderships, and soon thereafter its provincial state abolished the office altogether. That experiment lasted only five years; the acts were repealed in the face of an invasion by England and France. One modern commentator quoted by Professor Gordon described the princes of the House of Orange as having “a special status within the Dutch state, almost mystical … in its nature.”

The Republic’s constitution was weakened in the 18th century in part due to factional rivalries in Amsterdam, the largest and wealthiest city in the largest and wealthiest province. The monarchist pretensions of the House of Orange clashed with the increasingly militant anti-Orangist attitudes endemic among the urban bourgeoisie. With a hardening of factional positions, political accommodation became more difficult. As well, the financial burdens of the colonial empire and the military needed to support it began to overwhelm the capacities of what was, after all, a rather small country. Still, it took the military might of, first, the Prussian Army and, thereafter, Napoleon’s forces to end the Republic’s two centuries of successful government.

Madison, in Number 20 of The Federalist, disparaged the Dutch system, his stand-in for the Articles of Confederation, as marked by “Imbecility in the government; discord among the provinces; foreign influence and indignities; a precarious existence in peace; and peculiar calamities from war.” He seems to have derived his information from a book by Sir William Temple, a 17th-century British ambassador to the United Provinces. But Temple was hardly an unsympathetic observer of the Republic. Where Madison saw deadlock leading to eventual dissolution and anarchy, Temple saw a system which attracted large numbers of foreigners from polities less conducive to liberty. Certainly, the federal nature of the United Provinces stood in stark contrast to the centralization of power in national governments generally, and in monarchs particularly, which was ascendant in the Europe of the time.

If one uses classic designations of constitutions, the Dutch system at first blush most closely resembles an oligarchy. If one uses Madison’s definition in Number 10 of The Federalist, it was a closed system controlled by the wealthy Regent families and the Prince of Orange. It failed the test of broad public participation even by the limited standards of the early American polities. But if one evaluates a republic functionally, as a political structure which provides overall social stability, fosters the general well-being of the people, and promotes the liberty of individuals to follow their own paths to fulfilled lives, all by reining in its various political institutions through a functioning balance of powers, the constitution of the United Provinces qualifies. The mutual checks provided among the levels of government (town, province, union), among the provinces themselves, and between the stadholder on the one hand and the provincial states and States-General on the other created a system which protected the liberties of the people better than other contemporaneous countries did. More bluntly, as Professor Gordon explains, “[W]ith this political system, the Dutch not only fought Spain and France to a standstill and invaded England, but also made their little collection of swamps and polders into the richest, most civilized, nation in the early modern world.”



Guest Essayist: Joerg Knipprath

Much of the history of the Holy Roman Empire was one of conflict and intrigue: among emperors and popes, emperors and nobles, and nobles themselves. Periods shaped by forces that fostered centralization of power in the hands of strong and capable emperors were eclipsed by developments that threatened to tear apart the Empire due to personal weaknesses or military miscalculations by the holders of the imperial title. Several generations of extraordinarily wise and astute rulers were inevitably followed by the collapse of dynasties and periods of political turmoil and social misery.

The collapse of the Western Roman Empire in the 5th century A.D. led to the formation of various Germanic kingdoms throughout the former territory. The Visigoths and other invaders attempted to carry on the Roman civilization, but lacked the administrative capabilities, technological know-how, and economic wherewithal to do so. They, in turn, also collapsed within a few generations. For the inhabitants of the former Roman domain, there was continuing danger from Germanic tribes, other marauders that are said to have been successors to the Huns, and, beginning in the 7th century, Arab raiders and armies. The Byzantine emperor’s control over those lands was nominal. The Roman Catholic Church was organizationally weak and doctrinally disorganized.

In the 8th century, the situation improved. A new line of kings had been elected by the nobles of a Germanic people, the Franks. The most prominent was a warrior-king, Charles. He defeated other German tribes and pushed against the Muslims in Spain, whose advance into Frankish territory had been stopped by his grandfather, Charles Martel, “the Hammer.” Pope Leo III, eager to distance himself from the political and religious influences of the Orthodox Byzantine Empire, and hoping to spread the influence of the Catholic Church through the physical security offered by the Franks, crowned Charles emperor on Christmas Day, 800 A.D. Carolus Magnus, or Charlemagne, as he came to be known, was proclaimed the successor to the Roman Empire in the west. Indeed, from the imperial capital at Aachen, in present-day Germany, he governed, as “Emperor of the Romans,” an area of Europe larger than any realm seen since that empire.

Three decades after his death, Charlemagne’s realm was divided among his grandsons. Several centuries later, the western portion became the kingdom of France. The eastern portion became the German dominions. The end of the Carolingian dynasty in 911 resulted in the fracturing of the eastern portion. There were strong tribal loyalties within the various ancestral German domains, centered on several dukedoms and on the holdings of other, less powerful local strongmen.

In 936, Otto, the duke of the Saxons, a particularly warlike people who had been barely Christianized through force by Charlemagne a century earlier, was elected King of the Germans by the other nobles. A successful military campaigner who extended the eastern Frankish realm, Otto was given the imperial title in 962, after the Pope had appealed to him for military help. Referred to as Otto the Great, he established a new dynasty of emperors. His grandson, Otto III, revived the imperial seal of Charlemagne, which bore a Latin motto meaning “Renewal of the Roman Empire.” He understood this to be a clearly Christian empire, not merely a political unit as imperium romanum, as reflected in his designation of the realm as imperium christianum. The successors of Otto III were weak and saw themselves primarily as German kings who happened to have holdings in Italy, not as rulers of a multicultural and transcendent Christian empire.

Once political conditions in western Europe became relatively settled by the end of the 10th century, the era of the warrior-king was succeeded by the era of the great landholding magnates. High feudalism emerged as the dominant social and political structure. Wealth, social standing, and power were based on land ownership and formalized through personal obligations between lords and vassals. On the continent more so than in England, local great men were independent of the emperor, who was addressed at times as “King of Germany” or the “German Roman Emperor.” These nobles retained their ancestral privileges and often claimed new ones.

Nevertheless, the idea of Empire remained alive. This political tension of a universal empire, yet of a German people, led externally to frequent, and not always enthusiastic or well-received, involvement of the Germans in the affairs of Italian communities. Internally, it resulted in the strange federal structure of what formally became known in the 13th century as the Holy Roman Empire. The interactions between emperors and popes further underscored the claims to universality. Papal coronation bestowed God’s recognition of the emperor’s legitimacy as a secular ruler in Christendom. Refusal by a pope to grant that legitimacy, or removal of it later by issuing a ban on the emperor, endangered the emperor’s rule by absolving the people, particularly the nobility, of loyalty to their earthly lord and releasing them from any oath of fealty sworn to him. In a society vastly more religious than ours, within a feudal structure fundamentally based on mutual personal loyalties and obligations, such a development could prove fatal to the ruler.

After the end of the Saxon Ottonian line in 1024 and of its successors, the Frankish Salians, control over the Holy Roman Empire shifted in 1127 to a family from another part of the realm, the Hohenstaufen line from the Duchy of Swabia in southwest Germany. Under their best-known ruler, the charismatic and militarily and politically astute Emperor Frederick I Barbarossa (“Red beard”) from 1155 to 1190, the Empire achieved its greatest geographical expanse. Shortly after the rule of his similarly powerful grandson, Frederick II, the Hohenstaufen line ended, and the Great Interregnum brought considerable turmoil to the Empire and contests among various noble families for the imperial title. Rival emperors from different houses were chosen, and a general decline of the Empire’s territory and influence occurred. Not until the 16th century did the Empire regain a prominent position in Europe.

The struggle between emperor and nobles ebbed and flowed, depending significantly on the dynamism and capabilities of the emperors. These contests were endemic, with a parallel for several centuries in the conflict between the emperors and the popes. An example of the latter was the Investiture Controversy over the right to name local church leaders, which led to a half-century of civil strife in Germany in the late 11th and early 12th centuries and ended with the emperor’s powers reduced as against popes and local nobles. Even as strong an emperor as Frederick II had, out of political expediency, to confirm, in statutes of 1220 and 1232, previously merely customary privileges of the nobles, such as those over tolls, coinage, and fortifications.

In 1493, Maximilian I, of the Habsburg family, became Holy Roman Emperor. From that year, the Habsburg line provided an almost uninterrupted sequence of emperors until the Empire was abolished in 1806. A significant change in outlook under Maximilian was a turn to a more national identity and the stirrings of a nascent nation-state, in part due to the proposed Imperial Reform of the late 15th century, supported by the energetic Maximilian. As a consequence, the realm began to be known as the Holy Roman Empire of the German Nation.

The Imperial Reform of 1495 was an attempt to modernize the administration of the realm and to increase the power of the emperor through more centralized governance. Aside from some success in making aspects of legal administration uniform through the use of Roman Law, the reforms came to naught, ignored in the local principalities. There, the rulers generally strove to exercise the absolute powers of monarchs in England and France. As to the Empire, these local nobles guarded their privileges. Not to be outdone, the independent imperial “free” German cities, with their rising populations and increasingly powerful commercial bourgeoisie, were no less jealous of their privileges than the landed nobility.

The problem with the political structure of the Holy Roman Empire, in the eyes of the framers of the American Constitution of 1787, was the overall weakness of the emperor in relation to the nobles. The Empire was a federal system, but, in their view, an unsuccessful one. The criticism is, overall, a fair one. Alexander Hamilton and James Madison, writing in The Federalist, repeatedly identified the sources of weakness. Both emphasized the straitened financial circumstances in which the emperor frequently found himself when trying to fund the costs of imperial government or necessary military actions against foreign countries. That difficulty was due at least in part to the obstructions created by local rulers to the flow of commerce.

Hamilton mentioned in Federalist Number 12 the emperor’s inability to raise funds, despite the “great extent of fertile, cultivated, and populous territory, a large proportion of which is situated in mild and luxuriant climates. In some parts of this territory are to be found the best gold and silver mines in Europe. And yet, from the want of the fostering influence of commerce, that monarch can boast but slender revenues.” Along the same lines, quoting from the Encyclopedia, he wrote in Number 22, “The commerce of the German empire is in continual trammels, from the multiplicity of the duties which the several princes and states enact upon the merchandises passing through their territories; by means of which the fine streams and navigable rivers with which Germany is so happily watered, are rendered almost useless.” In Number 42, Madison seconded Hamilton’s point, “In Germany, it is a law of the empire, that the princes and states shall not lay tolls or customs on bridges, rivers, or passages, without the consent of the emperor and diet [the parliament]; though it appears from a quotation in an antecedent paper, that the practice in this, as in many other instances in that confederacy, has not followed the law, and has produced there the mischiefs which have been foreseen here.” Both writers painted this bleak picture as an omen of what would occur in the United States under the Articles of Confederation. The Constitution would prevent this problem because, there, Congress was given “a superintending authority over the reciprocal trade of [the] confederated states.”

More fundamentally, however, the problem of the Empire and, by analogy, the United States under the Articles of Confederation was in the structure itself, an imperium in imperio, a state exercising sovereignty within another state. In Number 19 of The Federalist, Madison presented a lengthy overview of the Empire’s history. He identified problems with the structure, such as the difficulty of meeting military emergencies or collecting requisitions. The emperor had no imperial holdings as such; he held lands only in his position as a hereditary sovereign in his ancestral territories or those acquired by marriage. Madison dismissed the Empire as a playground of foreign rulers because of the conflicts among the members of the Empire and between the emperor and the nobles large and small. This division allowed foreign rulers to split the allegiances of the nobles and to keep the Empire weak. The worst example of this was the Thirty Years’ War from 1618 to 1648. While there were limitations on the powers of the nobles, and while the emperor had various prerogatives, these were paper powers, not real. Ultimately, the problem was that the Empire was a community of sovereigns.

In support of Madison’s critique, one can look at one locus of power, the Reichstag, the name for the Imperial Diet or parliament. The Diet in some form already existed during Charlemagne’s time. Originally intended as a forum for discussions rather than as a modern legislative body, by the 11th century it had become a serious counterweight to the emperor and a source of power for the nobles in two ways. First, the Diet participated in the making of law, typically in collaboration with the emperor. Second, certain members of the Diet elected the emperor.

The Diet during the Middle Ages comprised two “colleges.” That number was eventually raised to three as feudalism gave way to a more commercial modern society, and the growing importance of the bourgeoisie in the cities required representation of their estate. Each member of those colleges in essence represented a sovereignty, and the Diet in that light was a “community of sovereigns.” When the Diet met, the colleges and the emperor attended together. All were seated in a carefully prescribed manner, respecting their rank, with the emperor front and center and raised at least three feet above all others. Voting might be either per individual or per collegium as an estate in a complicated arrangement, depending on the rank of that individual and group.

The most important of these groups was the college of electors, which represented another locus of power in the Empire. Not only did the prince-electors vote individually, rather than as an estate, but they had the important occasional task of electing the emperor, the third institution of power. There was a fourth locus of power in the Empire, that is, the pope. Papal influence precipitated many political crises in medieval Europe, because the emperor was not properly installed until crowned by the pope, a practice discontinued after Charles V in the 16th century. However, papal influence is not crucial to an examination of the Empire’s political constitution as that structure influenced the debates over the American Constitution of 1787.

The election of the emperors was derived from the ancient practice of German tribal councils to elect their leaders for life. The direct male heirs of a deceased ruler generally had the advantage in any succession claim, but heredity was never a guarantee. That practice was extended first to the election of the kings of Germany by the dukes of the largest tribes in the 10th century, and then to the election of the emperors in the 13th century. Initially, the number of electors was somewhat fluid, but eventually there were four fixed secular and three fixed ecclesiastical electors. Over time, the membership was increased to nine and, briefly, to ten electors. The ecclesiastical rulers from certain archbishoprics eventually were replaced by secular electors, and, in time, the secular rulers themselves might be replaced by others as power shifted among rulers of various local domains.

A critical moment came with the promulgation of the Golden Bull of 1356 by the Imperial Diet at Nuremberg. A “bull” in this usage is derived from the Latin word for a seal attached to a document. Because of such a decree’s significance, the imperial seal attached to this document was made of gold. This particular golden bull was the closest thing to a written constitution of the Empire. It was the result of the political instability caused by contested elections and succession controversies. It specified the number—seven—and identity—by secular or ecclesiastical domain—of the imperial electors. Procedures were set for the emperor’s election, the specific functions of the electors were prescribed, and an order of succession was provided if an elector died. For example, to prevent rival claims from lingering and dragging the realm into disunity and war, the deliberations of the electors had to result in a timely decision. Failure to decide on an emperor within 30 days in theory would result in the electors being given only bread and water as sustenance until they concluded their task.

Also significant was the Golden Bull’s undermining of the emperor’s power. Sometimes described as a German analogue to the Magna Charta of 1215 imposed by the English nobility on King John, it affirmed the privileges of the nobility against the emperor. Tolls and coinage were the right of the nobles in their domains. Crimes against them, including presumably through actions by the emperor, became treason against the empire itself. The rulings of their courts could not be appealed to the emperor. With a few notable episodic exceptions, such as the rule of Maximilian I and Charles V in the 16th century, this decree put the Empire on a gradual path to disintegration and reconfiguration as independent nation-states.

Voltaire is credited with the quip in his Essay on Customs in 1756 that the Empire was “neither Holy nor Roman nor an Empire.” Whatever might have been the veracity of his derision half a millennium earlier, when he wrote the essay his satire did not require much nuanced reflection on the part of his readers. The emperor in a basic sense was always the primus inter pares, and his power rested on the prestige of his title, the size and wealth of his own ancestral domain, and his skills as a political operator and military leader. Even with the emergence of the modern nation-state, the Holy Roman Empire remained just a confederation of de facto sovereignties, a matter underscored by the Treaty of Westphalia in 1648, which ended the Thirty Years’ War. The Habsburg ruler’s power was a far cry from the classic imperium of Octavian.

With the Reformation and the rise of the self-confident nation-state, the Roman and classic medieval idea of the universal Christian empire also became anachronistic. And it was no longer “Roman.” The conscious effort of Frederick I Barbarossa in the 12th century to demonstrate that the Empire was “Roman” stands in stark contrast with the 16th century, when emperors and the Diet emphasized its German character. As constituent German entities in the Empire, such as Prussia and Bavaria, grew more powerful, the struggles between emperor and nobles intensified and sharpened into outright wars as between independent nations. The imperial structure and its institutions, such as the Diet, became weaker and, indeed, irrelevant. Despite some belated and ineffectual efforts at reform and reorganization around the turn of the 19th century, the Empire, the thousand-year Reich, was dissolved a half-century after Voltaire’s remark, when Napoleon’s army crushed the emperor’s forces and effected the abdication of Francis II in 1806.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




In Number 39 of The Federalist, James Madison objects to the habit of political writers of referring to Venice as a republic. He asserts that Venice is a system “where absolute power over the great body of the people is exercised, in the most absolute manner, by a small body of hereditary nobles.” Later, in Number 48 of the same work, Madison raises the need of providing practical security for each branch of the government against the intrusion by others into its powers. He quotes Thomas Jefferson’s Notes on the State of Virginia. Jefferson, commenting about the formal separation of powers in the constitution of Virginia which he had been instrumental in creating, bemoaned the lack of effective barriers among the branches which would better preserve their respective independence. As a part of his critique, Jefferson opined that the concentration of legislative, executive, and judicial powers in one body would be “the definition of despotic government.” Further, it mattered not “that these powers would be exercised by a plurality of hands, and not by a single one. One hundred and seventy-three despots would surely be as oppressive as one. Let those who doubt it, turn their eyes on the republic of Venice.”

Leaving aside the historical veracity of Madison’s and Jefferson’s characterizations of Venice, their perceptions shaped their ideas of a proper “republican” political structure and how that would differ from Venice. Madison’s critique of a city governed absolutely by a small body of men made Venice an aristocracy or, more accurately, an oligarchy for him. It is ironic that opponents of the proposed Constitution launched that very calumny against the structure which Madison was defending. The Anti-federalists maintained a drumbeat of attacks about the supposed anti-republican, aristocratic Constitution. Some were thoughtful and substantive objections. Other writers opted for the popular appeal of satire, not of the nuanced and subtle kind, but of an entertaining burlesque style.

Two examples suffice. A writer styling himself “Aristocrotis” wrote a lengthy satire in a pamphlet published in Pennsylvania in 1788.

“For my own part, I was so smitten with the character of the members [of the Philadelphia Convention], that I had assented to their production, while it was yet in embryo. And I make no doubt but every good republican did so too. But how great was my surprise, when it appeared with such a venerable train of names annexed to its tail, to find some of the people under different signatures—such as Centinel, Old Whig, Brutus, etc.—daring to oppose it, and that too with barefaced arguments, obstinate reason and stubborn truth. This is certainly a piece of the most extravagant impudence to presume to contradict the collected wisdom of the United States; or to suppose a body, who engrossed the whole wisdom of the continent, was capable of erring. I expected the superior character of the convention would have secured it from profane sallies of a plebeian’s pen; and its inherent infallibility would have debarred the interference of impertinent reason or truth.”

With the tune of satire set, Aristocrotis applied it to a libretto of feigned aristocratic enthusiasm for a document which, according to him, set the few to rule over the many, in accord with the law of nature. Particularly useful for this aristocratic scheme was a powerful Senate and both direct and deviously hidden restrictions on the potentially dangerous House of Representatives. Establishing the latter was an unavoidable concession to the corrupt practices of the times, he acknowledged. However, providing for 2-year terms, instead of the annual elections common to republican state constitutions, in combination with Congress’s power to set the times, places, and manner of elections allowed that body’s membership to perpetuate itself. In addition, Congress had the power to tax so as to give itself independence over its own pay. Raising taxes on the people would have another salubrious effect: it will make them industrious. “They will then be obliged to labor for money to pay their taxes. There will be no trifling from time to time, as is done now….This will make the people attend to their own business, and not be dabbling in politics—things they are entirely ignorant of; nor is it proper they should understand.” If the people object, Congress had the power to make them comply by raising an army. This backhanded compliment reflected the deep republican antipathy to peacetime armies.

Another example of the style was an essay by “Montezuma,” which appeared in the Philadelphia Independent Gazetteer on October 17, 1787, a month after the constitutional convention adjourned. If anything, Montezuma was even more prone to literary absurdity and plot lines reminiscent of a Gilbert and Sullivan production a century later than was Aristocrotis. He begins, with all emphases in the original,

“We, the Aristocratic party of the United States, lamenting the many inconveniences to which the late confederation subjected the well-born, the better kind of people, bringing them down to the level of the rabble—and holding in utter detestation that frontispiece to every bill of rights, “that all men are created equal”—beg leave (for the purpose of drawing a line between such as we think were ordained to govern, and such as were made to bear the weight of government without having any share in its administration) to submit to our friends in the first class for their inspection, the following defense of our monarchical, aristocratic democracy.”

After this mockery of the Constitution’s preamble, Montezuma proceeds to a listing of provisions that animate his imagined constitution. Any semblance of republicanism in the actual proposal, such as the election of the House of Representatives, is a mirage. After all, the actions of the House can be overridden by the aristocratic Senate’s refusal to go along or by the monarchic President’s veto. Moreover, there is no limit to their re-election, so that the basic republican principle of “rotation of office” found in the Articles of Confederation is eliminated. This will result in perpetual re-election and soon make the representatives permanent members of the ruling elite. The Senate is the main home of this elite and is structured with long overlapping terms so that there is continuity in membership to acculturate any newcomers to the elite’s ways. The states are made subordinate to, and dependent on, the national government and will be “absorbed by our grand continental vortex, or dwindle into petty corporations, and have power over little else than yoaking hogs or determining the width of cart wheels.” The office of President is so named to fool the rubes with a republican title which hides his kingship. After all, “[W]e all know that Cromwell was a King, with the title of Protector.” He is the head of a standing army, which will start out small, ostensibly to defend the frontier. “Now a regiment and then a legion must be added quietly.” This allows the elite “to entrench ourselves so as to laugh at the cabals of the commonality.” There is no bill of rights, including the “great evil” of freedom of the press. The list goes on. Concluding his send-up of the Constitution through its closing phrase, Montezuma writes, “Signed by unanimous order of the lords spiritual and temporal,” a direct reference to the British House of Lords.

Montezuma and Aristocrotis recited the common themes of the Constitution’s opponents about the document’s insufficient republicanism: Long terms of office, no rotation in office through mandatory term limits, an aristocratic Senate, a president elected and re-elected for sequential lengthy terms, a standing army, consolidation of the formerly sovereign states into a massive national government, and lack of a bill of rights. There were other, more specific concerns raised by thoughtful opponents, but the foregoing resonated well with the citizenry.

If those themes defined a constitution’s non-republican character, Venice looked little different from what the Philadelphia Convention had produced. True, a formal nobility was prohibited under the Constitution, but there had been no formal nobility set in place in Venice until the previous constitutional structure was changed in 1297. Rather, wealth determined one’s status. Further, the commoners controlled the operations of the government through the bureaucracy. There were other important political institutions, such as the Senate with its important role to define public policy in Venice, but the ultimate power to make law was in the most populous branch, the Great Council, acting without fear of a veto by another branch of government. Unlike the proposed American system, membership in the Venetian Senate and the executive apparatus, with the exception of the Doge, was limited to annual or even shorter terms, as was the practice in the early state constitutions. While the President’s selection was filtered through electors chosen by the state legislatures, and the election might finally be determined by the House of Representatives, the selection of the Doge occurred through a process which had a strong component of what was classically viewed as a “democratic” tool, the drawing of lots of the names of those who would make that selection. The likelihood of a cabal controlling this convoluted process in order to install a puppet as the head of government was no more likely in Venice than under the Constitution. Moreover, the Doge had little formal power, unlike the President. Finally, Venice had no standing army, although it did have a large and powerful navy. In short, to an opponent of the Constitution, “aristocratic” Venice had at least as “republican” a character as the proposed American system, and Madison’s contemptuous dismissal of the city as a small group governing with absolute power sounded hollow.

The writers of The Federalist strove mightily to rebut these attacks. Madison’s narrowly formalistic definition of a republic in essay Number 10 that its distinguishing characteristic was its system of government by indirect representation, rather than direct action by the citizenry, was useful to establish a minimum of republicanism in the proposed system. But, by itself, it would hardly suffice to address the Anti-federalists’ multiple attacks. Madison understood this weakness and went on the attack, cleverly turning his opponents’ arguments against them in connection with the problem of “factions” and their threat to individual liberty and political stability.

Today, that essay is considered a brilliant insight into how political actors operate and how the framers were practical men who set up the constitutional machinery for our system of interest group politics later dubbed by the American political theorist Robert Dahl as Madisonian “polyarchy.” Yet, at the time of its publication, essay Number 10 aroused hardly a murmur. The reason likely was that few disputed his premises or his discussion about the existence, sources, and problems of factions in society seeking their own ends in contrast to the republican ideal of the general welfare. Alexander Hamilton, for one, had addressed the same point in essay Number 9. As well, no one really challenged his definition as a necessary characteristic of a republic. They disagreed about its sufficiency for a republic and, more profoundly, about whether the Constitution adequately balanced the self-interests of factions while at the same time preserving liberty.

As in so many other instances, the writers of The Federalist took to heart the maxim that “the best defense is a good offense.” Madison argued first that the republican principle of the vote, as qualified by the states themselves per the Constitution, would protect against extended dominance by some political minority. As to liberty, Madison asserted that the very variety of political factions spread across the country made the national council less likely to succumb to a dictatorship of an entrenched faction than would be the case in a smaller, culturally more homogeneous polity, whether democratic or republican in structure, such as a state or a city, including Venice. In a memorable paragraph, he wrote:

“The influence of factious leaders may kindle a flame within their particular states, but will be unable to spread a general conflagration through the other states: a religious sect may degenerate into a political faction in a part of the confederacy; but the variety of sects dispersed over the entire face of it, must secure the national councils against any danger from that source: a rage for paper money, for an abolition of debts, for an equal division of property, or for any other improper or wicked project, will be less apt to pervade the whole body of the union, than a particular member of it; in the same proportion as such a malady is more likely to taint a particular county or district, than an entire state.”

In other words, to prevent the deleterious effects of factions, the answer is, the more, the better, and the larger the domain, the more factions will exist. In at least the sense of guarding against a federal tyrant, diversity really is our strength. He repeated this defense of the general government in other essays, including one of the most renowned, Number 51.

Essay Number 51 also provides a thoroughgoing refutation of the charge that the states will be “consolidated” into the general government, and that the latter will degenerate into a tyranny. Madison relied on the formal structural separation of powers with its mutual checks and balances and on reflections about human nature. As to the first, he found common ground with his opponents:

“In order to lay a due foundation for that separate and distinct exercise of the different powers of government, which, to a certain extent, is admitted on all hands to be essential to the preservation of liberty, it is evident that each department should have a will of its own; and consequently should be so constituted, that the members of each should have as little agency as possible in the appointment of the members of the others….It is equally evident, that the members of each department should be as little dependent as possible on those of the others, for the emoluments annexed to their offices.” In the opinion of its supporters, the Constitution did that, and to exactly the correct degree.

As to the second, Madison tapped into the cynicism of some of his antagonists and the generally pessimistic views most Americans had about human nature in its fallen state. In another series of hard-hitting paragraphs, he urged:

“But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department, the necessary constitutional means, and personal motives, to resist encroachments of the others….Ambition must be made to counteract ambition. The interests of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government of men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.”

In short, government is a necessary evil commensurate with the fall of mankind. But, as a human creation, it, too, is naturally corrupt. To protect liberty, one cannot overly rely on the virtue of the citizenry, and certainly not on that of the rulers. Constitutions are made of parchment and need robust pragmatism to work. To do that, it is best to harness the natural self-interest of politicians to maintain and then expand their power, by setting them against each other in various independent centers of power: state, national, legislative, executive, and judicial. The scandalous and amoral proto-capitalist assertion by the early-18th-century economist Bernard de Mandeville in his satirical Fable of the Bees about how private vices, such as greed, lead to public benefits, such as economic growth, applies well in the political realm, it seems. Such a multiplicity of political institutions acting as checks on each other exists in the entire system of human affairs, private and public, according to Madison. An examination of the competition among political bodies and offices which characterized constitutions throughout Western history, from Athens and Sparta to Rome and Venice, bears him out.

It must be noted that, by engaging their opponents in a debate about the objects of government in a republic, not merely about its operational grounding in the particulars of the concept of representation, the writers of The Federalist were able to turn the contest to their advantage. Debates over annual versus biennial election of representatives, or four-year terms for the President versus three-year terms for the governor of New York, were playing small ball. Those issues had to be addressed, and they were, in various writings. Excepting the careful obfuscation of the institution of slavery, the big issues were given their proper due. The writers reassured the people incessantly that the federal government was of little consequence when compared to the reserved powers of the states; that the President had exactly the right degree of power to provide energy to government while also being checked by Congress’s or the Senate’s power over the purse, war, and treaties; that a standing army was necessary to protect the country’s security and that the possibility of that army becoming dangerous to liberty was remote in light of the vastly larger number of armed Americans organized into militia; and that a bill of rights was both unnecessary and would be proposed once the Constitution was adopted. Those were the republican principles which mattered, and it was there that Madison and others successfully advocated the Constitution’s republican bona fides.


In Number 10 of The Federalist, James Madison defines “republic” and distinguishes that term from “democracy.” The latter, in its “pure” form, is “a society consisting of a small number of citizens, who assemble and administer the government in person, ….” Think of the classic New England town meeting or the administration of justice through a jury drawn by lot from the local citizenry. A republic, by contrast, is “a government in which the scheme of representation takes place, ….” It is distinguished by “first, the delegation of the government … [given] to a small number of citizens elected by the rest; secondly, the greater number of citizens, and greater sphere of the country, over which [a republic] may be extended.” The last quality is due to the fact that direct participation by citizens means that the place of government cannot be too far from their homes, lest they must leave their livelihoods and families, whereas the indirect system of governance in a republic only requires that the comparatively small number of representatives be able to travel long distances from their homes. One argument by historians for the collapse of the Roman Republic and its popular assemblies is that eventually there were too many Roman citizens living long distances from the city to make the required direct participation in the assemblies possible.

Political theorists and Western expositors of constitutional structures have characterized various systems as republics more broadly than Madison’s functional and limited definition. Examples abound. Plato ascribed the title Politeia (“Republic”) to his principal work on government. His conception of the ideal system was one of balance among different groups in society, with the leaders to come from an elite “guardian” class bred and trained to govern. This has been called government by philosopher-kings, but it was an obvious aristocracy in the true meaning of the word, government by the best (aristoi). Such government would establish a realm of “justice,” the cardinal virtue of the individual and the political order, through trained reason. He analogized the system to a charioteer who, through his reason, guides the chariot safely along the path to the destination. The charioteer relies on the help of the strong obedient horse to control and direct the unruly horse which, driven by its appetites for physical satisfaction, wants to bolt off the path in search of immediate gratification of its desires. The charioteer is the guardian class, the strong horse the auxiliaries (disciplined and competent military officers and civil administrators), and the unruly horse the masses. The system allows all to achieve their proper status in society in reflection of their inherent natural inequalities, provides stability necessary for social harmony, and is guided by an ethical principle, justice; hence, it is a republic.

Aristotle in his Politika did not discount the role of the demos in Athens. Like Madison, Aristotle considered democracy to be unstable and dangerous. From an analytic perspective, as was the case for Plato, democracy was a corruption of politeia, which he considered the best practical government for a city. Man is a politikon zoon, a creature which by his nature is best suited to live in the community that was the Greek polis. Once more, preserving a stable society and governing system was the key to maximizing the flourishing of each resident in accordance with the natural inequalities of each. Aristotle saw that balance in the “mixed” government of Athens, neither pure democracy nor oligarchy, in which the formal powers of the demos in the assembly and the jury courts were balanced by the Council of 500 and the practice of deference to the ideas and policies advanced by the elite of the wealthy and of those who earned military or civic honor.

The government of Rome before at least the First Triumvirate in 59 B.C. of Caesar, Pompey, and Crassus has consistently been described over the centuries as a “republic.” Polybius explained mikte (mixed government), the political structure of the Roman Republic, differently than did Aristotle. But he, too, deemed Rome a republic because of the balance among the monarchic, aristocratic, and democratic elements of its constitution. As important, the practical functioning of the competing political institutions limited the power of each. Polybius related the political structure and its evolution to Roman character traits that reflected Rome’s history and contemporary culture, which had stressed the maintenance of civic virtue. Polybius also understood that Romans were not immune to human passions and vices. Like Madison writing nearly two millennia later, he warned that Rome’s republican structures were better than other forms of government but were not impregnable barriers against political failure.

Cicero also described Rome as a mixed government, although his declaration that the people were the foundation of political authority stood in tension with his approving description of the patrician Senate as preeminent. For Cicero, Rome’s system reflected the natural divisions of society, with leadership appropriately assigned to the best, the optimates. What made Rome a republic was that the mutual influences and overlapping authority of the various political institutions provided the stability for a successful community oriented to the thriving of all, the res publica. In the Ciceronian version, Rome was a republic, but an aristocratic one.

Closer to Madison’s time were the observations of Baron Montesquieu, an authority well-respected by the writers of The Federalist. Montesquieu’s The Spirit of the Laws has been criticized as contradictory and lacking systematic analysis. In a relevant portion which describes the English system, he calls the structure a mixed government, with separate roles for monarch, Lords, and Commons. He characterizes this as a republic, similar to the Rome of Polybius, because its elements embodied different interests and were able to check each other to prevent any one of them from exercising power arbitrarily. England was a republic in function, but a monarchy in form.

Today, one sees systems self-named as republics that are a far cry from the foregoing examples. North Korea as the Democratic People’s Republic of Korea, the People’s Republic of China, and the erstwhile Union of Soviet Socialist Republics appear to have at most a passing resemblance to the Rome of Polybius or the England described by Montesquieu. Their “republican” connection seems to be at best a theoretical nod to the concept of the people, in the form of the proletarian class, as the source of authority, with the ruler chosen for a long term, often for life, by a token assemblage of delegates in a closed political system.

What then made classical Venice a republic? Based on classical taxonomy of “pure” political systems, Venice was an aristocracy. Although Venice had been founded under Roman rule, the most revealing period was the half-millennium between the constitutional reforms of 1297 and the Republic’s end after the city’s occupation by Napoleon in 1797. Like Rome and other classical polities, Venice had no written formal constitution or judicially applied constitutional law. The political structure was the result of practical responses to certain developments, the demands of popular opinion, and, as in Rome, the deference to custom traceable to the “wise ancestors.”

In 1297, membership in the nobility became fixed in certain families, and the previous fluid manner of gaining access through the accumulation of wealth during a period of economic expansion was foreclosed (the “Serrata”). That said, the number of nobles was significant, with estimates that it amounted at times to 5% of the population. The nobility governed, and their foundational institution was the Great Council. All adult males of the nobility belonged to the Council and could vote in its weekly meetings. That body debated and enacted laws. It voted on the appointment of the city’s political officials, of which at times there were estimated to be more than 800. Since the officials’ terms of office were brief, and turnover frequent, this task occupied considerable time of the Council.

In addition, there was another powerful political body, the 300-member Senate, Venice’s main effective policy-making institution. Nobles at least 32 years old were eligible to be selected by one of two procedures, election by the Council or by lot drawn from nominations by retiring Senators. Their annual terms overlapped, with no uniform beginning and end. As well, senior civil and military officers were members. The Senate determined policy for the government, most particularly in foreign and financial affairs. However, the agenda of the Senate was set by the 26-member Collegio, a sort of steering committee. While the Collegio could control what matters were debated by the Senate, it could only offer opinions held by various of its members about an issue, not submit concrete proposals.

The administrative part of the Venetian government was particularly complex, as described by Professor Scott Gordon in his well-researched book, Controlling the State. Regarding Venice, he refers frequently to Gasparo Contarini’s classic work from 1543, De Magistratibus et Republica Venetorum. Selection to office involved a confusing combination of voting and selection by lot. Gordon provides a schematic of the selection of the doge, the city’s head, a procedure at once amusing and awe-inducing in its complexity. A simplified version is: 30L-9L-40E-12L-25E-9L-45E-11L-41E-Doge, where L stands for selection by lot and E for election. In other words, at a meeting of the Great Council, the names of 30 members were drawn by lot. From them, 9 were drawn by lot. Those nine voted for 40 members of the Great Council. From those, 12 were drawn by lot, and so on, until 41 nominators were selected who would elect the doge. This convoluted procedure had some anticipated benefits. Together with the prohibition of formal campaigning, the unpredictability of the eventual selecting body discouraged election rigging. Moreover, the time delays involved and the likely variation of opinions among the members of the Council encouraged debate in the Council and among the public about the qualifications of various potential candidates. Factionalism is unavoidable in large bodies, but its effects likely were somewhat blunted by this procedurally chaotic approach.
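The ten-stage schematic can be sketched as a short simulation. This is purely illustrative: the alternating sortition ("L") and election ("E") stages follow Gordon's notation, but the election stages are modeled here as random draws from the whole Great Council (since actual voting behavior is not simulated), and the council size is an invented figure.

```python
import random

# Stage sizes from the schematic 30L-9L-40E-12L-25E-9L-45E-11L-41E.
STAGES = [(30, "L"), (9, "L"), (40, "E"), (12, "L"),
          (25, "E"), (9, "L"), (45, "E"), (11, "L"), (41, "E")]

def select_nominators(great_council, rng=random):
    """Return the final 41-member body that elects the doge.

    'L' stages draw by lot from the current pool; 'E' stages are
    modeled (as a simplifying assumption) as a draw from the whole
    Great Council, since electors could choose any member.
    """
    pool = list(great_council)
    for size, kind in STAGES:
        if kind == "L":
            pool = rng.sample(pool, size)          # sortition from the pool
        else:
            pool = rng.sample(list(great_council), size)  # "election"
    return pool

council = [f"noble_{i}" for i in range(1200)]  # hypothetical council size
nominators = select_nominators(council)
print(len(nominators))  # 41
```

Even this toy version shows why rigging was hard: the composition of the final 41 depends on nine chained stages, most of which no faction could predict in advance.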

Although elected for life, the doge himself had little formal substantive power. He could do nothing official by himself. When he met visitors or engaged in correspondence, at least two members of the Ducal Council had to be present. The Ducal Council was composed of six members elected for eight-month terms by the Great Council, each representing a geographic district of the city. They were the doge’s advisors, but also his watchdogs, much as the ephori (magistrates) of Sparta shadowed their kings.

Upon election, the new doge had to swear an oath on a document which detailed the limitations imposed on his office. Those limitations could vary, depending on the political conditions and the identity of the person selected. To remind him, the oath was reread to him every two months. After the doge died, his conduct was subject to an inquiry by committees of the Great Council. If he was found to have engaged in illegality, his estate could be fined, a not unusual result.

The office had little formal power, but it was more than simply ceremonial. The doge presided over the meetings of the Great Council and the Senate, though he did so attended by the Ducal Council and the three chief judges of the criminal court. His power came from his long tenure and his participation in the processes and deliberations of all of the important organs of the city’s government.

There also were security and secret police organs, such as the shadowy Dieci (Council of Ten), elected by the Great Council to staggered one-year terms, and the three Inquisitors. The Dieci targeted acts of subversion. The usual legal rules did not apply to them, to allow them to move quickly and secretly. The Inquisitors were a counterintelligence entity, set up to prevent disclosure of state secrets. Like all such extraordinary bodies connected to national security, they represented a potential threat to the republican structure of Venice. Notably, there is no record of them attempting to subvert the republic and seize power.

The final and very significant components of the Venetian system were the bureaucracy, the craft guilds, and the service clubs. All of these were controlled by the non-noble citizens of Venice. The first, especially, was an ever-expanding part of the government. Excluded from the political operations, commoners sought power through the bureaucratic departments. Eventually, a sort of bureaucratic oligarchy developed, as prominent families came to dominate certain departments over the generations. These cittadini roughly equaled the nobles in number, and they had the advantage that, unlike the annual terms of noble officeholders, they held their offices for life.

Venice acquired the reputation among writers during the 15th through 17th centuries of an “ideal” republic, with a stable constitution able to survive even the catastrophic military defeat at Agnadello in 1509. The city was marked by good government and the protection of political and religious liberty. As noted by one modern commentator, Venice was “a Catholic state where the Protestant could share the security of the Greek and the Jew from persecution.” The system stood in contrast to the violent chaos and bouts of persecution that characterized the history of Florence and other Italian cities, and the economic backwardness and lack of social mobility of the emerging nation-states, such as France. It was a wealthy, capitalist society, which was easily able to raise more tax revenues than nation-states with several times its population. On the military side, although it had no regular army or militia, Venice had for several centuries the most powerful navy in the world, with bases around the eastern Mediterranean to protect its far-ranging commercial interests.

However, by the 18th century, the “myth of Venice” had become tarnished, as the city acquired a reputation for civic decay. Hamilton and Madison wrote disparagingly about it in The Federalist, the latter claiming that the city did not meet the definition of a republic. Thus, coming back to that earlier question, why was Venice’s constitution described as such by so many? Madison’s own definition in No. 39 of The Federalist, in which he rejects characterizing Venice as a republic, emphasizes that the governing authority in a republic must come directly or indirectly from the “great body of the people,” and the government must be administered by persons holding office during good behavior.

It is true that the organs of state in Venice were controlled by a noble elite of at most 5% of the population. Yet, the general exclusion of women, children, convicts, and slaves from governance in the American states, along with the impact on free male adults of the property qualifications imposed by many states on voting well into the 19th century, undercuts Madison’s claim that the American states were republics. Moreover, in Venice the cittadini carried out the ordinary operations of the government and were, therefore, a significant force in the execution of government policy. Looking at terms of office, with the exception of the doge’s life tenure, office holders in Venice were usually selected for annual terms, unlike the longer terms of office for President, Representatives, Senators, and judges in the United States. Indeed, it was the very length of the tenures of officers of the general government which the Anti-federalists decried as unrepublican, and which Madison defended.

That is not to say that Madison’s focus is misplaced. It is a necessary, but not sufficient, condition of a republic that there is a significant element of popular participation, albeit one not amenable to precise reckoning. As important, however, is that the government is not unlimited and power is not concentrated in a single person, class, or body of persons. The balance and separation of powers which Madison considers to be crucial in The Federalist Numbers 10 and 51, when he defends against the charge that the Constitution is a prescription for tyranny, is also clearly present in Venice’s, one might say Byzantine, structure of overlapping entities checking and supervising each other. It was a structure that, by Madison’s time, had served the city, with some alterations, for 500 years since the Serrata, and for another three centuries before that, since the city’s independence from Byzantium. It took Napoleon’s mass army, the military might of a large nation-state, to end Venice’s long-functioning, but by then obsolete, city-state constitution.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




Rome, the city-state on the Tiber River, like her counterparts in Greece, had no cohesive written constitution. There were the Twelve Tables from around 450 B.C., of which mere fragments remain, which are sometimes presented as the Roman Republic’s constitution. However, the tablets were more an attempt to codify certain principles of criminal and civil law than to lay the foundation for a political system. Nevertheless, they did begin the practice in Roman law of published codes enacted by a legislative body and accessible to all citizens, which remained a core characteristic of European legal systems influenced by Rome.

Much of Rome’s political constitution by contrast was the product of custom. That custom evolved through responses to changes in the society’s social structure, through the citizens’ tacit acceptance of political bodies that arose from critical events, and by incorporating founding legends. An example of the first was the change in sources of wealth and the nature of the aristocracy comprising the leading families. The second would include the expulsion of Rome’s last king, the Etruscan Tarquin the Proud, at the end of the 6th century B.C. That event resulted immediately in the preeminence of the established aristocratic council, the Senate, and, a half-century later, in the emergence of the assemblies as sources of political influence for the commoners. The last would be the creation of institutions (such as the Senate and the tribunes) and practices said to go back to the 8th century B.C., and the acts of Rome’s first king, the legendary Romulus, and his successor, the Sabine Numa Pompilius.

While the writings of historians such as Livy and Sallust and political leaders such as Cicero are instructive, the single most authoritative source for the Roman constitution is its earliest expositor, the great Greek historian and father of constitutional analysis, Polybius of Megalopolis. Born in 200 B.C., he became a prominent politician in the Achaean League, of which his city was a member. For some years, the League had had to tread a narrow path in its relations with Rome, by then in control of most of Greece. With some exceptions, the leaders of the Greek cities generally were less than thrilled about Roman control. Such lack of enthusiasm raised suspicions and put those politicians in potential danger.

After Rome in 168 B.C. defeated Macedon for the third and final time, the Senate decided to break up that kingdom into four tributary republics. Rome also “went Roman” on the Greeks allied with Macedon, destroying 70 towns in the region of Epirus and selling a reported 150,000 into slavery. Rome’s Greek “allies” fared better but were disciplined for their lack of commitment. Polybius was among the 1000 Achaean leaders suspected of “fence-sitting” who were deported. Most were sent to provincial towns away from Greece.

Polybius was allowed to stay in Rome itself, due to the intervention of two powerful Roman leaders, Scipio Aemilianus and his brother. The developing friendship between Scipio and Polybius gave the latter access to the Roman elite. His learning and gregarious and active personality further solidified those connections. Polybius, in turn, became a committed advocate for the city and its system of government. As well, his favored status gave him extensive freedom to travel. When the Senate authorized the Greeks to return to their cities, Polybius declined. Instead, he eventually accompanied his friend Scipio to North Africa when the latter was given the command of the army sent to destroy Carthage in the Third Punic War. Polybius was well acquainted with Rome, its history, and its institutions, and he wrote about them with affinity.

The Histories is Polybius’s major and most influential work. It was a massive undertaking of 40 books, although one needs to keep in mind that the physical limitations of papyrus scrolls meant that a “book” might be more like a quite lengthy chapter today, and the entire effort perhaps a couple of thousand pages. The first five books are fully available, with more or less extensive excerpts surviving from many others. Some are entirely lost. Most of the work covers Roman history from the Second Punic War (against Hannibal) to Polybius’s time. Most important to constitutional analysis is Book 6, the numerous preserved fragments of which cover, in the estimate of one authority, about two-thirds of the book. Missing is a thorough analysis of the Roman assemblies, in contrast to his discussion of other elements of the Roman constitution.

The constitution Polybius describes is that of his time, after Rome has finalized its drive for dominance of the Mediterranean world. The Punic Wars lie in the past, Carthage has been eradicated, and the destructive Social Wars and civil wars are in the future. Romans’ confidence in their institutions is high, and the republic which Polybius describes is at its political zenith. As was the habit of classic Greek observers of political systems, Polybius believed in a duality of good and bad forms of government, with an inexorable process of degeneration between those forms. But unlike Plato and Aristotle, for example, he claimed to see in the Roman constitution a system resistant to such degeneration. He also observed that states commonly moved through those forms sequentially and even attempted an anthropological explanation for the origins of government. Thus, he argued an archaic form of monarchy emerged when the physically dominant member of a primitive band of humans took command.

As societies become more sophisticated, that archaic form of tribal leadership proves inadequate. A more stable form of kingship emerges, one based on reason and excellence of judgment, which, in turn, fosters consent of the governed. Initially, such kings are elected for life. Eventually, the dynastic impulse of rulers to pass their office from father to son leads to kingship often becoming hereditary. Over time, such dynastic succession induces a sense of superiority and entitlement, which results in formal distinctions and ceremonies to set the royals apart from commoners. Worse, these royals begin to consider themselves exempt from rules and morals. As ordinary people begin to react with disgust at such licentiousness and arrogance, the ruler responds with anger and force. Thus, the inevitable outgrowth of kingship is tyranny.

The wealthy and talented members of respected families chafe at the tyrant’s rule the most. Conspiracies develop and the tyrant is replaced by a ruling class of high-minded men, the aristocracy. Recalling Plato’s criticism of oligarchy, Polybius saw the degeneration as the fault of the sons, not the fathers. As he wrote, the descendants “had no conception of hardship, and just as little of political equality or the right of any citizen to speak his mind, because all their lives they had been surrounded by their fathers’ powers and privileges.” Soon enough, the government controlled by supremely moral and wise men gives way to a self-interested oligarchy “dedicated … to rapaciousness and unscrupulous money-making, or to drinking and the non-stop partying that goes with it ….”

The general populace, encouraged in their passions by manipulative leaders, murders or banishes the oligarchs and itself takes on the responsibilities of government. Democracy, according to Polybius, is based on majority rule, but a majority tempered by “the traditional values of piety towards the gods, care of parents, respect for elders, and obedience to the laws.” This sounds strikingly like the admonition of republicans through the ages, that self-government requires self-restraint, focus on the common good and general welfare, and a strong moral and religious framework to promote republican virtue. John Adams’s observation that the American system was fit only for a moral and religious people is one example particularly relevant to the American experience. The exhortation in the third article of the great Northwest Ordinance of 1787, about “Religion, morality, and knowledge being necessary to good government and the happiness of mankind” is another.

Regrettably, such values prove to be in short supply, and the population of the democracy, now encouraged in their delusions by manipulative politicians, believes instead that it has “the right to follow every whim and inclination.” Those ambitious for power and wealth seek to get ahead by corrupting the people with money to obtain their support. The common people become greedy for such largesse, and democratic self-government degenerates into ochlocracy (“mob rule”). As Polybius described the fate of democracy, “For once people had grown accustomed to eating off others’ tables and expected their daily needs to be met, then, when they found someone to champion their cause … they instituted government by force: they banded together and set about murdering, banishing, and redistributing land, until they were reduced to a bestial state and once more gained a monarchic master.” This is the predictable and depressing lifecycle of political systems. Polybius would have nodded knowingly, had he been present at Benjamin Franklin’s reply to his interlocutor about the type of government produced at the Philadelphia Convention, “A republic, Madam; if you can keep it.”

Fortunately, such a cycle of corrupt and degenerate forms of government could be avoided, and Rome showed the way. Polybius exalted Rome as a “mixed” government, composed of essential elements of all taxonomic forms, monarchy, aristocracy, and democracy. Unlike Plato’s fictitious ideal republic, Rome’s was a functioning system which had proved its mettle for centuries. Unlike Aristotle’s description of the Athenian government as a workable, but uneasy, mixture of popular and oligarchic elements in the Assembly on one side and the Council of 500 and other institutions on the other, Rome succeeded because of its more developed balance of powers. In that, according to Polybius, Rome’s constitution resembled that of Sparta, although Rome’s developed by natural evolution rather than from a conscious decision by a wise lawgiver like the mythical Spartan Lycurgus. Polybius regarded Sparta’s system as particularly enlightened and wrote with great favor about it, although he recognized that the structure did not prevent Spartan hubris from engaging in ultimately disastrous foreign military adventures. In light of Sparta’s legal totalitarianism, it is ironic that Polybius ascribed to this mixed government a long history of liberty in Sparta. Perhaps by this he meant independence. In any event, his characterization of mixed government became the classic understanding of what today would be called a system of limited government.

The preeminent political institution of the Roman Republic was the Senate. Although eligibility changed over time as membership was opened to the more prominent plebeian class, the equites (knights), the Senate was primarily the institution of Rome’s aristocratic families, the patricians. The body had begun as a council composed of 100 men chosen by Romulus from the leading land-holding families as city fathers (“patres“). Initially, it was solely a hereditary body, but eventually the primary determinant, if one sought admission to the Senate, became landed wealth. The Senate had the power over appropriations. The civil functionaries had to obtain Senate consent for all expenditures, most importantly for the massive funds spent every few years on the repair and construction of public buildings. Major crimes, such as treason, conspiracy, and gang murder were under Senate jurisdiction. Foreign relations, colonial administration, and matters of war and peace were the domain of the Senate.

What was striking about the Senate was that it had no formal role except to act as an advisory council, the same as under the earlier monarchy. In reality, it was the single most powerful body in the republic, due to its class ties and consciousness, its continuous sessions, and its life membership. Moreover, the mos maiorum (the “custom of the ancestors”), the powerful force of tradition in the Roman constitution, sustained the legitimacy of the Senate. A senatus consultum was merely an advisory opinion by the Senate, but such an opinion was required for any law proposed for adoption by an assembly. Although a consultum could be overridden by the assembly or could be vetoed by a plebeian tribune, in reality an unfavorable consultum usually spelled the end of the proposed law or, if enacted, caused it not to be enforced by the magistrates. As Polybius noted, if one were to look solely at the Senate, one would believe that Rome was an aristocracy. Or, in the more jaundiced view of some historians who claimed that the Senate was actually controlled by a tightly knit small hereditary group of families, it was an oligarchy.

There was also, however, another long tradition in Rome’s constitution, “What touches all must be approved by all.” As Cicero put it in Republic, “res publica, res populi.” The consent of the people was given through the assemblies. Polybius described their role in assessing taxes, the ratification of treaties, actual declaration of war, and confirming the appointment of officials. Moreover, the people had a role in legal processes. All death penalties had to be approved by an assembly. The same held for more general criminal cases where a substantial fine would be imposed. He concluded that, from this perspective, one might declare Rome a democracy.

There were various assemblies over time, and Roman citizens could attend any. Histories does not have much discussion of them. This might be because Polybius was not a great admirer of those bodies or, more simply, because his discussion is in the chapters which have been lost. These explanations are not contradictory, and there is evidence for both. One such body was the Centuriate Assembly, the oldest. It can be traced to a 6th century B.C. king and was modeled on the centuriae, the military units of 100 infantry and 10 cavalry that each of the ten subunits of the three “tribes” of Rome had to provide. As in Athens, these tribes were not based on ethnicity but were simply geographic constituencies within the city.

As the city grew, so did the number of tribes and the size of the voting units. For a long time, there were 193 “centuries.” They were organized on the basis of land ownership, wealth, and age, which, in turn, was related to the type of military service and associated weaponry of the members. At the top were the equites (knights), who were wealthy enough to provide horses and served in the cavalry. They had 18 centuries. Next were 170 centuries for the infantry, divided further into five classes based on their members’ wealth and weaponry. Below them were five centuries for the proletarii (the poor), those who could not supply weapons and typically were assigned to the navy.

In contrast to the Athenian ekklesia, in the Roman system the citizens did not vote simply as individuals. Although they met in the same place, the actual voting took place within their respective centuries. Each century had one vote, determined by the majority vote of citizens assigned to that century. The Assembly’s approval depended on a majority vote of the centuries, not of the undifferentiated citizens. With 193 centuries, the votes of majorities in 97 of those centuries would be required to approve a measure. In fact, voting was heavily skewed in favor of the equites and the wealthiest layer of the others. Between them, they were assigned 98 centuries, on the reasoning that those who provided the most financial support and had the most to lose in military service should have the most influence. Moreover, voting was done in class order, with the centuries of the equites voting first, those of the wealthiest class of others voting next, followed by the next lower group, and so on. The poor voted last. As a result, the vote of the poor rarely mattered. Class solidarity, the number of centuries weighted towards the wealthy, and the staggered voting meant that most issues would be decided well before the smaller landowners or the poor voted. Even the reforms of the 3rd century B.C., which expanded the number of centuries for the landowning classes to 350, had little effect on the dominance of the wealthy.
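The arithmetic behind that dominance can be made concrete with a short sketch. The figures of 18 equestrian centuries, 170 infantry centuries, and 5 centuries for the proletarii come from the description above; the split of the infantry centuries into 80 for the first class and 90 for the rest is inferred from the combined wealthy bloc of 98 centuries, so treat it as a derived assumption.

```python
# Century counts for the Centuriate Assembly (193 total).
# The 80/90 infantry split is inferred from the 98-century wealthy bloc.
CENTURIES = {
    "equites": 18,        # cavalry centuries of the wealthiest citizens
    "first_class": 80,    # wealthiest infantry class
    "lower_classes": 90,  # infantry classes two through five
    "proletarii": 5,      # the poor, typically assigned to the navy
}

total = sum(CENTURIES.values())      # 193 centuries in all
majority = total // 2 + 1            # 97 century-votes carry a measure
wealthy_bloc = CENTURIES["equites"] + CENTURIES["first_class"]  # 98

# Voting in class order meant the wealthy bloc alone could settle
# the question before the lower classes were ever called to vote.
print(total, majority, wealthy_bloc, wealthy_bloc >= majority)  # 193 97 98 True
```

The one-vote margin (98 against the 97 needed) is the whole story: whenever the equites and the first class agreed, the staggered vote was over before anyone else spoke.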

The Assembly could only consider bills which were on the agenda set by the tribunes or the magistrates. The citizens could vote on the proposal but not debate the bill at issue or offer amendments. Finally, all voting was done in the city of Rome. As the city’s domain spread, it became more difficult for any but wealthy citizens to travel to Rome for the duration of the Assembly’s legislative or appointive tasks. Based on his analysis of the system, the historian Scott Gordon doubts that even one-tenth of the 400,000 Roman male citizens at the end of the 2nd century B.C. attended a voting assembly in their lifetimes. The formal powers of the Assembly eventually were transferred to the Senate by the Emperor Tiberius.

There was, however, one mechanism by which the public could express its views, the contio. After a bill was proposed by a tribune, there had to be a period of at least twenty-four days before the Assembly could vote on it. This allowed for informal discussion among citizens of the bill’s merits. Moreover, any tribune could call for a formal meeting, the contio, which all residents, including women, foreigners, and slaves, could attend. The only speakers permitted were those selected by the presiding tribune and usually were senators or various magistrates. Public comment was limited to shouts and other sounds indicating support or opposition.

The final component of the formally operating civil government was the body of judicial, executive, and administrative officials. Chief among them were those sought by ambitious Romans embarked on the cursus honorum, the “path of honors” along a sequence of offices, the apex of which was the consulship. All were initially open only to those of senatorial rank, but eligibility was expanded in the 4th century B.C. In practice, only scions of the wealthy families were likely to be elected, especially as consul. Nevertheless, Cicero, a non-patrician resident of a non-Roman town in Latium and a member of the knightly class, the highest of the plebeian classes, climbed this ladder of success quickly.

Election to these offices was by the Assembly for a one-year term, with minimum age requirements. The lowest office was that of the quaestor, who had to be 30 years old and have completed several years of military service. Quaestors were in charge of financial administration, a source of influence for further political advancement, and of record-keeping for the state archives. Above the quaestor was the aedile, in charge of public facilities and public festivals and celebrations. The next rung in the ladder was the praetor, a multi-function office. Praetors performed judicial functions but also could step into the executive role of consul if both of the consuls were absent from the city. As jurists, praetors had significant influence on the development of the body of Roman law. After his term ended, a praetor could also be awarded a foreign post as propraetor. This included military power, with full governing authority in the province. There was no term limit for that office.

At the end of the cursus honorum beckoned the consulship. The Assembly elected two consuls each year, at least one of whom was usually engaged in military campaigns in the provinces, the consul peregrinus. The one in Rome, the consul urbanus, had no real military function, because armed forces had to be kept some distance from Rome during peacetime, a constitutional limit broken, for example, by Julius Caesar when he crossed the Rubicon River. The consul’s position in the Republic was one of influence, not formal power. Any executive decision could be vetoed by the other consul and by any of the ten plebeian tribunes. Moreover, he could not override the actions of other magistrates. However, his status as a member of a leading family and his constant interaction with the Senate, plus the fact that he had survived the competition to reach the apex of the cursus honorum, gave his opinions and actions great constitutional legitimacy. After his one-year term ended, a consul could not be re-elected for at least ten years, until the general Marius destroyed that informal constitutional limit in the 1st century B.C. After his term, a consul could be elected as proconsul, the highest military and administrative position in the provinces, with no term limits. This usually arose from the extended military campaigns abroad, which necessitated continuity of command.

Finally, outside the formal cursus honorum were the tribuni plebis, ultimately ten in number, who originally represented the “tribes” or sections of the city. Tribunes spoke for the political interests of the plebeians. They were elected to one-year terms by the Assembly. In that capacity, they were responsible for assisting any plebeian who had been wronged by a magistrate. This included the power to overrule an unjust judicial order of punishment. The tribunes’ political power extended to vetoing any bills proposed to the Assembly by other magistrates and to vetoing consulta of the Senate deemed contrary to the plebeians’ interests. Eventually, they became members of the Senate and set the agenda for that body. While they formally represented the plebeian classes, they were, with some exceptions such as the famous Gracchi brothers, no radicals. They were typically drawn from the patricians and the knights, the high-status classes, and shared their interests. Moreover, their potentially significant power was impeded by the fact that any affirmative act of a tribune could be vetoed by any of his nine colleagues. In reality, tribunes could act as a shield for the commoners against the wealthy, but rarely as an effective sword to advance the interests of the lower classes in opposition to the wealthy.

One additional aspect of the Republic’s constitutional practices bears mention. Every system has to deal with the state of emergency that can arise over time, the most common of which is war, either foreign or civil. For a long time, in such exceptional circumstances the Roman Senate would formally appoint a dictator to rule by decree for six months. That practice was discontinued by the end of the 3rd century B.C. Instead, during later troubles, such as those of the civil wars of the 1st century B.C., such exceptional powers would be authorized under the terms of a senatus consultum ultimum, a “final act of the Senate” needed to protect the Republic.

Polybius saw in the structure of the magistracies, especially in the consuls, the monarchic element that was part of the “balance” in the Republic’s constitution. In the various interactions of Senate, Assembly, and tribunes, and in their mutual formal and practical limitations, he perceived a system of “checks” on the power of any of them. In some of the particulars, he was off the mark. For example, unlike the Spartan kings to which he compared the consuls, the latter served for only one year, not life. Moreover, the consuls lacked the formal powers one normally associates with kingship. On the whole, however, his assessment has merit.

Historians have long debated the causes of the Republic’s demise. There is certainly no reason to limit the matter to one such cause. Among them was the collapse of broadly-distributed land ownership which sustained a “middle class” in an agricultural republic. As the wealthy grew wealthier, whatever the source of their riches, they bought up more land. Land was a reflection of one’s status. Indeed, because commercial ventures were formally prohibited for Senators, one needed land to join that body. The demand raised the price of land and the taxes imposed on it. The growth of these large latifundia drove the previous smaller landowners into the city. There, they became part of the urban proletariat and competed for employment with the large and growing number of slaves acquired through foreign conquests and with other foreigners attracted to the increasingly imperial city.

Another cause was the opportunity for power and wealth afforded to successful generals operating as proconsuls in the provinces. With the troops often ill-paid by Rome, local taxes were extracted by these commanders and used to pay the troops directly. Loyalties became redirected from the city to the commander. The republican slogan SPQR (Senatus Populusque Romanus), “the Senate and the People of Rome,” which appeared on the standards of the legions, was supplanted by the reality that, “You take the king’s silver, you become the king’s man.” As those troops were increasingly formed from poor Roman volunteers or foreigners, especially after the military reforms of Gaius Marius around the turn of the 1st century B.C., it became easier for generals to use those professional troops—or threaten to do so—against the city itself and to rule by force. Marius himself, and his erstwhile protégé Sulla, set unfortunate examples.

Perhaps most significant was the fundamental change in the political and social conditions of Rome. Consistent with Polybius’s theory, the societal degeneration about which he had warned, the inevitable result of the democratization of politics and the consequent weakening of the population’s character, in fact occurred a couple of generations after his death. The impoverishment of a large portion of society, and the resultant dependency on public largesse for survival, made those citizens susceptible to the slogans and programs of the populares, such as Julius Caesar and other, more dangerous demagogues. The bloody competition among families of the oligarchic upper classes, as shown in the Social Wars and the proscriptions of the military commanders Marius and Sulla, contributed to the chaos which sent the Republic on the path to the monarchy of the Empire.

The same events that brought about that radical social transformation also manifested themselves in the essential incongruity of governing a huge multi-cultural empire through institutions designed for a small city-state on the Tiber River. The notion of “community,” with shared traditions, civic and religious, and an ethic of sacrifice necessary to sustain the civic engagement at the core of real self-government, is eroded in the chaos of ethnic, linguistic, religious, and cultural diversity and the impersonality of large numbers. Had the Roman elite been willing to open up its political institutions and to extend citizenship and formal participation in the political system to all parts of their domain sufficiently and in a timely manner, a republican structure of sorts might have survived. As it was, the city had become an empire in fact well before its political structure changed from Polybius’s republic to Octavian’s monarchy.

Joerg W. Knipprath is an expert on constitutional law, and member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.




Guest Essayist: Joerg Knipprath

It has been said that Stoic metaphysics was the state philosophy of ancient Rome. While perhaps an overstatement, the point is well taken. Rome did not achieve the prominence of the Greeks in original philosophy, but there were a number of outstanding expositors who adapted Stoic principles to Roman conditions. Seneca the Younger, a wealthy Roman statesman, dramatist, and tutor to the future emperor Nero; Epictetus, born as a slave, but freed by his wealthy master on reaching adulthood; and Marcus Aurelius, known as the last of the “Five Good Emperors” of Rome, were particularly influential Roman Stoics.

The absorption of the Greek city-states into the Macedonian Kingdom of Philip and his successors in the 4th century B.C. shocked the Greeks’ self-regard. Hellenic culture for centuries had emphasized the special status of citizenship in the polis, and its necessity for achieving eudaimonia, human flourishing. The polis was not just “political” in the modern sense. It was a “community” in all manner, political, yes, but also social, religious, and economic. Aristotle associated such community with a true form of friendship, wherein one acts for the friend’s benefit. Plato and Aristotle both concerned themselves at length with what constitutes such a community that is suitable for a fulfilled life. For Plato, the city was the individual writ large, which formed a key component of his description of the ideal government in his Republic. For Aristotle, politics was an extension of ethics. The moral and the political, the personal and the public, were joined. The teaching and practice of individual virtue (arete—the root word for aristocracy) were necessary for a just society, and a polis operating on that basis created the conditions for individual virtue to flourish. Those outside the polis, be they hermits, bandits, or barbarians, and no matter their wealth or military prowess, could not attain that level of full human development.

The Macedonian occupiers were not much different from the Greeks and some, such as Alexander, were hardly ignorant of Greek ideas or unsympathetic to Greek social and political arrangements. Moreover, the Greek poleis did not vanish, and ordinary daily life continued. Still, after unsuccessful attempts to rid themselves of their Macedonian overlords, it became clear that the Greeks were just one group competing with others for influence in a new empire. Politics being a branch of ethics, the ideal for the Greeks had been to do politics “right.” With the Macedonian success, it seemed that the foundation of the entire Greek project had collapsed.

The result was a refocusing of the meaning of life away from the ultimately outward-looking virtue ethics of Aristotle and the vigorous political atmosphere of the polis. In this psychological confusion and philosophic chaos arose several schools. One, the Skeptics, rejected the idea that either the senses or reason can give an accurate portrayal of reality. Everything is arbitrary and illusory, truth cannot arise from such illusions, no assertion can claim more intrinsic value than any other, and everything devolves into a matter of relative power: law, right, morality, speech, and art. Such a valueless relativism can expose weaknesses in the assumptions and assertions of metaphysical structures, but its nihilism is self-defeating in that it provides no ethical basis for a stable social order or workable guide for personal excellence.

Another group was the Cynics, who responded to the psychological shock of the collapse of the city-state by rejecting it. The correct life was to understand the illusory and changing nature of civilizational order and withdraw from it. Life must be lived according to the dictates of nature, through reason, freedom, and self-sufficiency. The good life is not a project of study and speculation, but practice (askesis). Live modestly through your own toil so that you may speak freely, unperturbed by the turmoil and illusions around you. One of the most prominent Cynics, Diogenes, allegedly lived in a rain barrel in the Athenian market and survived through gifts and by foraging and begging. Social arrangements and conventions are not necessarily inimical to this quest, but they often hide the way. Thus, it becomes the Cynic’s duty to light the way, as Diogenes sought to do with his lamp, by exposing and ridiculing such conventions. The Cynics saw themselves no longer as citizens of the polis, but as citizens of the world.

While principled, the Cynics’ grim lifestyle of “speaking truth to power” was not for most. An alternative school was founded by Epicurus in the late 4th century B.C. The Epicureans urged people to focus foremost on themselves to achieve the good life. The gods have turned away from the city, political decisions are made in royal capitals far away, and the only control one has is over one’s own actions. Thus, obeying rules, laws, and customs is practically useful but should not be a matter of concern. To live the good life was to obtain pleasure, the highest end. “Pleasure” is not to be understood as we often do, as some form of sensory stimulation. Rather, it was to achieve a state of tranquility (ataraxia) and absence of pain. This ultimate form of happiness would come through a life of domestic comfort, learning about the world around us, and limiting one’s desires. Crucially, Epicureans avoided the turbulence of politics, because such pursuits would conflict with the goal of achieving peace of mind. The best one could hope for in this life was good health, good food, and good friends.

Stoic philosophy was an eclectic approach, which borrowed from Plato, Aristotle, and competing contemporary investigations of ethics and epistemology. Its name came from a school established by Zeno, a native of Citium on Cyprus, who began teaching in Athens around 300 B.C. The “school” met on a covered colonnaded walkway, the stoa poikile, near the marketplace of Athens. Its 500 years of influence are usually divided into three eras (Early, Middle, and Late), which eras broadly correspond to changes from the austere fundamentalist teachings of its ascetic founder into a practical system of ethics accessible to more than wise and self-abnegating sages.

There were two key aspects to Stoicism. First, at an individual level, there was apatheia. It would be massively misleading to equate this with our term “apathy.” Apathy is negative, conveying passivity or indifference. Apatheia means a conscious effort to achieve a state of mind freed from the disturbance of the passions and instincts. It is equanimity in the face of life’s challenges. The Stoic sage would “suffer the slings and arrows of outrageous fortune” over which he has no control and focus instead on his own actions. Reason being man’s distinctive and most highly evolved innate feature, the Stoic must train himself to live life in accordance with nature and reason. He must control his passions and avoid luxuries and material distractions that would lead to disappointments and frustrations. His happiness is within himself. The virtuous life is a simple life, achieved through constant discipline “in accordance with rational insight into man’s essential nature.”

Second was universalism. Hellenic culture became Hellenistic culture, as Greek ideas and practices were adapted to the new world order, as the polis became the cosmopolis. A Stoic saw himself in two ways. In the political realm, he was a citizen of his city or state; in his self, he was a human. As Marcus Aurelius expressed it, “My city and country, so far as I am Antoninus [a title for emperor—ed.], is Rome, but so far as I am a man, it is the world.” Stoicism, unlike its Platonic and Aristotelian sources, insisted that the universe was governed by law which applied equally to all and raised all to equal status, a “universal brotherhood of man.” This revolutionary claim would profoundly influence Roman and Christian ideas thereafter.

Stoicism differed from Skepticism in that it rejected the latter’s nihilistic pessimism that life was simply a competition for power. It projected a vision of personal improvement and sought to construct a positive path towards happiness within a universal order of moral truth. It differed from the Cynics in that Stoicism did not reject the basic legitimacy of the state and its laws and conventions or urge withdrawal from the public sphere. Rather, the Stoics separated the universal moral order, by which each person’s individual conduct must be measured, from the reality of the political world and the obligation to obey the laws of the community. Stoics did not reject the secular authority or make a point to ridicule it. From a Christian perspective, it was not exactly “Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s.” But it was close enough, coming from a pagan philosophy.

Finally, the Stoics differed from the Epicureans. The latter’s goal of a tranquil private life through the pursuit of health, learning, good food, and good company was at odds with the former’s demands of a more disciplined private life of constant self-reflection and self-improvement, plus the continuing duty to shoulder one’s obligations under the civic law. Those differences made Stoicism much more attractive than Epicureanism to the average Roman. The Roman upper classes might well be drawn to the Epicurean vision, but Stoicism could appeal to more than the leisure class. Most significant, with its emphasis on self-reliance, simplicity, and service, Stoicism more closely reflected the Roman sense of self during a half-millennium of the Republic and the early Empire. The historian Will Durant observed, “A civilization is born stoic and dies epicurean.” By that he meant that civilizations degenerate. As he explained, “[C]ivilizations begin with religion and stoicism; they end with skepticism and unbelief, and the undisciplined pursuit of individual pleasure.” Though at times turbulent and seeming to veer into dissolution as the political edifice of the Roman Republic became Octavian’s principate, the Roman culture did not yet fundamentally change, due in part to the stability provided by Stoic philosophy.

Stoicism fit well the Roman character imagined by the Romans themselves and reflected in their laws and history. As the historian J.S. McClelland wrote, “The Greeks might be very good at talking about the connection between good character and good government, but the Romans did not have to bother much about talking about it because they were its living proof.” Not unlike Sparta, Rome had always had a strong martial component to its policies, which Romans took to be an essential part of their character. It was a masculine, male-dominated culture, and unabashedly so. At the root of virtus, that is, virtue or excellence, is vir, the word for adult male or hero. Stoicism “spoke” to Romans in a way that Epicureanism could not. That said, the Middle and Late Stoic writers from the second century B.C. on were willing to refine some of the school’s rough homespun aspects and accepted that a materially good life was not inconsistent with Stoicism. Self-discipline and self-reflection were key. Moderation, not excess, all in accord with nature and reason, sufficed. Self-deprivation and the ascetic life were not necessary.

American polemicists of the post-Revolutionary War period often associated the Stoic virtues with the Roman Republic and saw those virtues reflected in themselves. This required turning a blind eye to certain fundamental assumptions. For example, as noted, Stoicism separated the universal moral order’s control over private conduct from the need for unquestioning adherence to the state’s laws made for the welfare of the community. For the Americans, a distinction between private morality and virtue on the one hand, and public morality and law on the other, was not readily conceivable, at least as an idea. Though at times John Adams was quite doubtful about the capacity of Americans for self-government, in his message to the Massachusetts militia in 1798 he wrote, “Our Constitution was made only for a moral and religious People. It is wholly inadequate to the government of any other.” James Madison, writing in The Federalist, No. 55, noted that republican self-government, more so than any other form, requires sufficient virtue among the people.

There was another, profound, appeal Stoicism had for the Romans, which connected to their views of good government. Rome prided itself on its balanced republican government, a government meant for a cohesive community, that is, a city-state. “The Eternal City,” the poet Tibullus called it in the 1st century B.C., and so it became commonly known through the works of Virgil and Ovid during the reign of Octavian, long after it had ceased to be a mere city on the Tiber and become an empire in all but name. Indeed, Octavian styled himself princeps senatus, the highest ranked senator, avoided monarchical titles and insignia, and purported to “restore” the Roman Republic in 27 B.C. The trappings of the republican system were maintained, some for centuries.

As in the earlier Greek city-states, Roman citizens had the right and the duty to participate in their governance. Stoicism called on its adherents to involve themselves in res publica, public affairs, working for the benefit of the whole, not themselves, a commitment of personal sacrifice and service. This mirrored basic obligations of Roman citizenship, from military service to political engagement to contribution for public works. These burdens with their physical and economic sacrifices were to be borne with equanimity. Marcus Aurelius, the last great Stoic sage, spent a large portion of his reign on the frontier leading armies against invading German tribes. It is said that he wrote his famous inward-directed Meditations on Stoic ideas and practice during those campaigns.

An important component of the Roman political system was law, both as a collection of concrete commands and as an idea. As noted, Romans were not, by and large, known for original contributions to Western philosophy. For them, that was the role of the Greeks. They were, however, exceptional jurists. As they gained territory, the need to administer that territory required a system of law capable of adapting to foreign conditions. As they gained dominion over cultures beyond the Italian peninsula, and as Roman trade ventured to even farther corners of the world, the Roman law might differ in particulars from that of the local population. At the same time, there appeared to be certain commonalities to the Roman law and those of disparate communities. For the politicians, such commonalities could help unify the realm through a “common law” and support the legitimacy of Rome and its administrators. For the merchants, it could help make commercial dealings more predictable and lower their transaction costs. For the jurists, it raised the possibility of universal influences or elements in the concept of law itself.

The Stoics provided the framework for systematic exploration of that possibility. Stoicism, it may be recalled, had a cosmopolitan, indeed universal, outlook. The Stoic universe was an orderly place, governed by immutable, eternal, constant principles. In other words, an eternal law. At the center was the universal moral law. Law in general had its basis in nature, not in the arbitrary creative will of a human ruler or the cacophony of mutually cancelling irrationalities of the multitude. Humans have an inborn notion of right and wrong. Unlike Adam Smith’s theory of moral sentiments, which he based on our social nature, the Stoics ascribed this to our essential human nature, with each individual participating in this universal moral order. There was an essential equality to Stoicism that eliminated the lines between ruler and subject, man and woman, freeman and slave. Gone was Aristotle’s attempt to justify slavery with the claim that some were suited by nature to be slaves.

Of course, this only applied to one’s ability to achieve individual virtue through Stoic self-discipline in the personal realm. The outside world still maintained those distinctions in positive law. Many were slaves in Rome. While the Stoics could consider slaves their brethren as members of the human community within the moral law, they accepted the separate obligation imposed on them to obey the political world in its flawed, but real, condition. Epictetus, himself a former slave, blurred that duality when he declared slavery laws the laws of the dead, a crime. But for most, the reality of despotic and corrupt government, the suppression of freedom, and prevalence of slavery were the actions of others over which the Stoic had no control and the consequences of which he had to deal with as best he could through apatheia.

Still, the concept of eternal law, possessed of inherent rightness, and connected to human nature, had some profound implications for human governance and freedom. The universal order is right reason itself and exists within our nature, accessible to us through our own reason. The Apostle Paul addressed this from a Christian perspective in Romans 2:14 and 15: “For when the Gentiles who do not have the law, by nature observe the prescriptions of the law, they are a law for themselves even though they do not have the law. They show that the demands of the law are written on their hearts ….” Proper human law, in its essential principles, is a practical reflection of this higher moral law and necessary for good government. Despite the shortcomings of actual Roman politics, this set a standard.

Because the moral law is universal, eternal and beyond the control of human rulers, it implies a lawgiver of similar qualities. The character of the Stoic “god” was often unclear and differed among various Stoic philosophers. It was certainly not the gods of the Greek and Roman civic religions, with their all-too-human character failings and pathological urges to interfere, usually disastrously, in human lives. Nor was it the personal and loving Christian God of the Gospels, cognizant of each creature within His creation and particularly interested in the flourishing of those created in His image. Rather, the Stoic god is best viewed as a force which created and through its presence maintained the universal order. This force has been described variously as a creative fire, world soul, pneuma (breath), or logos (word). The last two are particularly interesting in relation to Christian writings. Logos not only meant “word” but also the reason, cause, or ultimate purpose or principle of something. The Stoic moral order was an expression of divine reason and accessible to us through the reason that is part of our nature.

One of the foremost Roman commentators and synthesizers of Stoic doctrine in law was Cicero, the great lawyer, philosopher, and statesman. Cicero claimed he was not a Stoic. He seemed to have seen himself as a follower of contemporary versions of Plato’s ideas. Indeed, his two major works on good government, The Republic and Laws, paralleled the titles of Plato’s major works on politics. However, his introduction of the ius naturale (natural law) to Roman jurisprudence, a fundamental step in human freedom, owes much to the Stoics. Note his justification for the right of self-defense: “This, therefore, is a law, O judges, not written, but born with us, which we have not learnt, or received by tradition, or read, but which we have taken and sucked in and imbibed from nature herself; a law which we were not taught, but to which we were made, which we were not trained in, but which is ingrained in us ….”

Or consider the following that vice and virtue are natural, not mere artifices: “[In] fact we can perceive the difference between good laws and bad by referring them to no other standard than Nature: indeed, it is not merely Justice and Injustice which are distinguished by Nature, but also and without exception things which are honorable and dishonorable. For since an intelligence common to us all makes things known to us and formulates them in our minds, honorable actions are ascribed by us to virtue, and dishonorable actions to vice; and only a madman would conclude that these judgments are matters of opinion, and not fixed by Nature.”

Perhaps most famous is this passage from The Republic: “True law is right reason in agreement with nature; it is of universal application, unchanging and everlasting; … It is a sin to try to alter this law, nor is it allowable to attempt to repeal any part of it, and it is impossible to abolish it entirely. We cannot be freed from its obligations by senate or people, and we need not look outside ourselves for an expounder or interpreter of it. And there will not be different laws at Rome and at Athens, or different laws now and in the future, but one eternal and unchangeable law will be valid for all nations and all times, and there will be one master and ruler, that is, God, [note the use of the singular, not the plural associated with the Roman pantheon—ed.] over us all, for he is the author of this law, its promulgator, and its enforcing judge. Whoever is disobedient is fleeing from himself and denying his human nature, and by reason of this very fact he will suffer the worst penalties, even if he escapes what is commonly considered punishment.”

From these recognitions, it is but a short step “self-evident [truths], that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” A short step conceptually, but centuries in time to realize fully.





Guest Essayist: Joerg Knipprath

In classical studies and terminology, a (political) constitution is a concept that describes how a particular political system operates. It is a descriptive term and refers to actual political entities. It is, therefore, unlike what Americans are accustomed to hearing when that term is used. Rather, we think of The Constitution, a formal founding document which not only describes the skeleton of our political system, but has also attained the status of a normative standard for what is intrinsically proper political action. Thus, we can talk about constitutional law and of rights recognized in that document in defining not just how things are done, but how they ought to be done.

In that, our Constitution is unusual. The ancient Greek cities lacked formal documents that self-consciously founded a new political order. However, there were analogous decrees and laws which shaped aspects of government. In that sense, we, too, might say that a statute that organizes a branch of government is “constitutional,” not in the sense that it is somehow a noble law, and not just that it is within the textual limits of the Constitution. Instead, the term conveys that such a law simply sets up basic procedures to run the government, procedures that people use and, thereby, at least tacitly accept as legitimate. An example might be a statute that establishes a specific system of federal courts.

Moreover, functional descriptions of constitutions must take into account not only formal written rules of government for that entity, but the unwritten customs and practices that shape, refine, or even negate those written rules. Even our formal written Constitution is subject to such informal influences, one prominent form of which is the collection of opinions of Supreme Court justices on the meaning of the words in that document. The ancients, too, were keenly aware of the importance of such long-adhered-to customs to influence the practice of politics and also to give—or deny—legitimacy to political actions. The Greek playwright Sophocles made the clash between a novel royal decree and custom in the form of the “immortal unrecorded laws of God” a central plot device in his play Antigone, a part of the tragic Oedipus Cycle. For the Roman Republic and the early Empire, one must look to the use of constitutional custom through the mos maiorum (the “custom of the ancients” or “practice of the forefathers”) to understand the political order.

As with our own polity, it would be foolish to describe the constitutions of the Greek poleis (city states) as unchanged over the centuries of their existence. Cultural perspectives and societal needs do not remain static. Thus, one must give an evolutionary overview, made more specific through a snapshot of a particular period. When Aristotle (or his students) wrote Athenaion Politeia (the Athenian Constitution), he did just that, providing a history and a contemporary description. As an aside, Aristotle is credited with analyzing 158 Greek constitutions, of which the Athenian is the only one to survive in substantial form. With that number, it is more likely that Aristotle’s students compiled these surveys, perhaps on behalf of their teacher’s research.

As the Greek city states evolved, so did their governments. The chieftain or kingly form of government under a basileus, limited often by powerful individual noble warriors, prominent in Homer’s Iliad, typically gave way to an aristocracy based on land ownership. In Athens, as later in Rome and in the history of Europe and North America, there were further pressures towards democratization, influenced by the growth of commerce and sea trade. Both Plato in Politeia (the “Republic”) and Aristotle in Politika (the “Politics”) wrote about these trends. Neither was a fan. Plato, especially, saw these developments as evidence of degeneration.

While much of this history remains murky, the major power of government in early Athens apparently lay in the Areopagus, a council of aristocratic elders with legislative and judicial powers. Significant constitutional changes in Athens began in 621-620 B.C. with the Code of Draco (who may have been an individual or a signifier for a priestly class), which solidified the powers of the holders of large estates in a legislative Council of 400. This body was selected by lot from the class of those who, according to the Code, could supply a certain level of military equipment.

Solon, regarded by many historians as the founder of Athenian democracy, undertook various political reforms in the early 6th century B.C. One was to deprive the Areopagus of much of its judicial power. Instead, jury courts took over that role, including the ability to adjudicate suits against public officials for unjust treatment. The most significant reform was to expand political participation based on size of land ownership. Four classes were created. All, even the landless laborers, could take part in the ekklesia (assembly) and the jury courts. However, only the top two classes could hold the significant public offices. Members of the third class could hold minor administrative positions. In effect, this diminished the role of the hereditary aristocracy and entrenched the wealthier oligarchy of large landowners. The Council of 400 controlled the agenda of the assembly, thereby ensuring more control by the landed elite.

The process of democratization continued with the reforms by the military leader Kleisthenes who came to political power in 507 B.C. He organized the citizens in Athens and the surrounding area into ten “tribes.” While Athens had many residents from other Greek cities and from non-Greek areas, these “metics” were not counted. Tribe is not to be understood as an ethnic concept, but merely as a convenient label for a geographic constituency, such as a community or district. Kleisthenes eliminated the Council of 400 and replaced it with the boule, a Council of 500. Each tribe would have 50 seats in that council, chosen annually by lot from male citizens over 30 years old. The Council was a powerful entity, in charge of fiscal administration. It also set the agenda for the Assembly. Council members could serve only twice in their lifetimes. Kleisthenes had his reforms approved by vote of the Assembly, which gave particular legitimacy to the rules and increased the Assembly’s constitutional significance. However, the nine archons, the senior civil officials, as well as other magistrate offices, such as judges, were still drawn from the nobility and the wealthy landowners.

During the 5th century B.C., further reforms occurred under Ephialtes and Pericles, resulting in what historians often call Athens’s “Golden Age of Pericles.” The Assembly was the focal point of Athenian democracy. It met on a hill near the central market. Sessions were held on four non-consecutive days each Athenian month, of which the administrative year had ten, each roughly thirty-six days long. A quorum was 6,000 of the estimated 40,000 Athenian male citizens. Anyone could speak on items placed before the Assembly by the Council. Laws generally were adopted by majority vote of hands, though some laws required approval also by a special body drawn by lot from the jury rolls.

This façade of radical democracy must not fool casual observers of Athenian politics. First, there was the matter of demographics. Of the estimated 300,000 residents of Athens and its environs, most were slaves, metics, women, or children. It is estimated that only about 15% were adult male citizens. Second, the members of the Assembly did have final authority to vote, but on proposals shaped by the Council. Finally, business could not have been carried on if thousands of people exercised their right to speak. Thus, informal customs were observed. Speeches on proposals were given by a small number of recognized leading members of the community. These speakers were the “demagogues” (demos means “people”; agogos means “leader”). Initially, the term had a neutral meaning. It soon took on the modern sense, as various individuals sought to gain favor and influence with the voters through inflammatory language, theatrics and emotionalism.

As happens not infrequently, many such spokesmen for the people were from noble families or wealthy businessmen seeking to advance their economic interests. Notorious among them were Alcibiades, known for his charm, wealth, good looks, and Spartan military training; Hyperbolus, namesake of a word that represents theatrical and emotional language, a frequent target of satire by Greek playwrights, and the last person to be “ostracized” (that is, required to leave Athens for ten years); and Cleon, a man who, centuries before William F. Buckley, declared that “states are better governed by the man in the streets than by intellectuals …who… want to appear wiser than the laws…and…often bring ruin on their country.” Such speakers could “demagogue” issues and exploit, exacerbate, and even create divisions within the Athenian populace. However, they also served a useful role in that they were usually well-informed and regular participants in the debates. They could explain the issues of the day to the more casual attendees unfamiliar with the intricacies of Athenian government and politics. Even those ordinary Athenians who did not make speeches were anything but shy about vocalizing their opinions of the various speakers through shouts, jeers, cheers, laughter, and a multitude of other sounds.

As noted, the Assembly’s power was not unrestricted. The Council of 500 controlled its agenda. More precisely, since a body of five hundred could not realistically expect to control the shaping of public policy and its administration, it was a standing committee of the Council that performed this work. The standing committee of 50 rotated monthly among the ten tribes which composed the Council.

Athens had no king or president. The archons were senior magistrates and judges. They were selected by lot and, in theory, by the 4th century B.C., any male citizen was eligible for the office. Archons served for one year and thereafter could not be re-selected. Strategoi were the military commanders of the army and navy. Since those positions required particular expertise in war and leadership capabilities, they were not selected by the chancy method of the lot. Rather, the Assembly elected them for one-year terms. Because wars operate on their own timetable, military commanders, unlike the civil magistrates, were typically re-elected. At the same time, the Assembly could revoke their commands at any time and for any reason. In addition, Athens had many junior bureaucrats who held their offices longer.

By the end of the fifth century B.C., the jury courts, well-established in the litigious Athenian society, had also taken on a political role. They were in charge of the confirmation process that each official had to undergo before taking office. If challenged on his qualifications, a jury would have to vote by majority to approve the selection. The courts and the Assembly also could hear “denunciations” brought by Athenian citizens against public officials and military commanders after an initial review by the Council. Finally, upon completing his term of office, a public official was subject to a review (euthynai) by an administrative board. If a citizen brought a complaint of mistreatment by the official, that complaint also would be heard by the courts after an initial review by a committee of the Council.

Despite its source in the demos, the Athenian system was not an unrestrained democracy. Such a system would have collapsed quickly, given the size and complexity of the Athenian state by the 6th century B.C. Athens was a “mixed” government (mikte). What brought it to eventual collapse was defeat in the Peloponnesian War at the hands of Sparta, the overextension of its colonial reach, the interference by foreign powers (from Persia to Sparta to Thebes to Macedon) in Athenian politics during the 5th and 4th centuries B.C., and the usual interest-group conflicts that plague societies (rich versus poor, landed versus commercial interests, creditors versus debtors, new elites versus old, traditionalists versus modernists). The social frictions and political instability caused by the violence of the successive factions that controlled Athens in the early 4th century B.C., based on support of, or opposition to, Spartan influence, undermined the system to the point that the city could not resist its eventual assimilation by the Kingdom of Macedon and its successor, the Alexandrian Empire. Both the oligarchic pro-Spartans, such as the Thirty Tyrants, and the democratic anti-Spartans seized the property of defeated political rivals and put to death people suspected of supporting those defeated rivals. It was the democratic faction, after all, that convicted Socrates and sentenced him to death on a trumped-up charge.

All of that said, one must not forget that between the initial democratic stirrings under Draco and the Macedonian occupation, the Athenian democracy functioned for three centuries. Even after the end of its independence as a city-state, the Athenian constitution continued, albeit in modified form and with less power abroad.

The Spartan system was superficially similar to the Athenian constitution yet was grounded in some fundamentally different social and political realities. Like some other thoroughly stratified and structured societies, Sparta was highly legalistic. The tight and intrusive control over life that is associated with the “Spartan way” was rooted in law, not tyrannical arbitrariness. Law, in turn, rested on tradition, not written statutes, allegedly due to a directive from its possibly fictional founder, Lycurgus.

Spartans attributed the origin of their system to their great “lawgiver,” Lycurgus, supposedly in the 9th century B.C. Because so little is known about Lycurgus, historians have questioned the timing and, indeed, his very existence as a real person. Still, this event lay at the base of Spartan claims that their democracy antedated that of Athens by a couple of centuries.

In some sense, it is curious to imagine Sparta as “democratic,” but there is a basis to that description. The apella was the Spartan Assembly, to which all adult male citizens authorized to bear arms belonged. Moreover, Spartan women were far more equal in status to men than were their Athenian counterparts. While they were not given formal political powers, Spartan women were expected to voice their opinions about public matters. Most important, they also, unlike Athenian women, had rights to their own property through dowry and inheritance.

At the same time, the real political power was exercised by two institutions, the gerousia (Council of Elders—gerontes) and the ephoroi (magistrates). The Assembly could only vote on proposals presented by the Council, not initiate them. There is dispute about whether the Assembly could even formally debate proposals, but it is likely that vigorous debates in fact took place. The Assembly was composed of Spartan warriors, after all. The Council consisted of the two Spartan kings and 28 citizens over the age of 60 who were elected by the Assembly for life. This made the Council the main legislative power in what might be considered a bicameral system. Cicero analogized the Council to the Roman Senate. While the Council was not composed of a hereditary “aristocracy,” as was the principal, but not sole, characteristic of the Roman Senate, its members were drawn from the most prominent and tradition-minded elements of Spartan men.

Political writers since ancient times often pointed to another feature of the Spartan constitution, the dual monarchy. The origins of that system are obscure. For example, historians have sought to locate that origin in an ancient dispute between two powerful noble families that was settled by making the leader of each a king. Others have seen this as the result of a union of various villages or tribes at the city’s founding, the chiefs of the two most powerful becoming the kings. In later years, a practice evolved whereby one king was responsible for domestic matters, mainly religious and judicial, while the other was typically away on military expeditions. The two kingships were not explicitly hereditary, and the kings were elected, another democratic feature. But they were elected for life and from those same two ancient families.

Whatever its origins or democratic bona fides, writers have often lauded the dual monarchy as an effective barrier to the centralization of power in a single tyrant. The force of tradition and the natural rivalries among powerful factions kept each king in check. However, given the largely ceremonial role of the kings outside military campaigns and the checks otherwise placed on them, this justification for the dual monarchy is less compelling.

The final piece of the formal Spartan political structure was the board of magistrates. The ephoroi were elected annually by the Assembly. Even the poorest citizen theoretically could be elected. There could be no re-election to a subsequent term. Initially, the ephoroi had limited powers, but as time passed, their offices gained substantive powers. When away on a military campaign, the king was accompanied by two ephoroi. Similarly, the kings lost the power to declare war and to control foreign policy to the ephoroi and the Council. Much of this might be traceable to security concerns that a king could make surreptitious deals with enemies of Sparta or get entangled in foreign schemes injurious to Spartan survival. Except while acting as generals, the kings over time became figureheads. But the ephoroi themselves also had significant limitations on their powers, chief among them their short tenures.

Polybius, often described as the founding light of constitutional and political studies, characterized the Spartan system as a truly balanced and mixed government. In the classic understanding, that meant it contained a mixture of monarchic, aristocratic, and democratic elements balanced in harmony to produce an effective government duly attentive to individual rights. It seems unpersuasive to describe the rigid and totalitarian Spartan society in that manner. In light of the functional dominance of the Council, with its life tenure and its selection from the upper levels of Spartan society, one might more readily classify Sparta as an oligarchic system.

The end of Spartan power was not due to any inherent defect in the constitutional structure. More likely causes were the combined factors of demographic collapse and overextension in foreign and military ventures. The near-constant warfare of the 5th and 4th centuries B.C. against Persians, then Athenians in the Peloponnesian Wars, then against the combination of Athens, Thebes, Corinth, and Persia in the Corinthian Wars, and, finally, against Thebes alone, depleted the Spartan hoplite infantry on which Spartan military success depended. The population of Spartan citizens shrank, and their rule over the helots, who made up 90% of the state’s residents, became increasingly precarious.

The rigid nature of Spartan society, the paranoia reflected in the Spartan security state, and the traditionalism of the Council, shown, for example, by its unwillingness to extend citizenship to the helots, may have contributed to the downfall of Spartan influence after the Battle of Leuctra in 371 B.C. Still, the city at that time had been a powerful actor in the Mediterranean world for three centuries. Moreover, the system continued to operate reasonably well within the Roman world for nearly another eight hundred years, until the city was sacked by Alaric and the Visigoths in 396 A.D.



Essay 72 – Guest Essayist: Joerg Knipprath

If one lived in Virginia during the first couple of centuries or so of European settlement, one could do much worse than being born into the Lee family. Founded in the New World by the first Richard Lee in 1639, the family built its wealth initially on tobacco. From that source, the family expanded, intermarried with other prominent Virginians, and established its prominence in the Old Dominion. Richard Henry Lee and his brother Francis Lightfoot Lee, both signatories of the Declaration of Independence and the Articles of Confederation, were scions of one branch of the family. Henry “Light-Horse Harry” Lee III was a son of Richard Henry Lee’s cousin. Henry III was a precocious officer in the Continental Army, major-general in the United States Army, governor of Virginia, and father of Confederate States Army General Robert E. Lee.

Despite this illustrious background, Richard Henry Lee was in relatively straitened financial circumstances, compared to others in his political circle. Though he was the son of a royal governor of Virginia and plantation owner, Lee inherited no wealth other than some land and slaves. He rented those assets out for support, but depended on government jobs to help maintain his participation in politics. Although Lee studied law in Virginia after returning from an educational interlude in England, it appears he never practiced law. Still, his training became useful when he was appointed Justice of the Peace in 1757 and elected to the House of Burgesses in 1758.

Once in politics, Lee quickly took on radical positions. In September, 1765, he protested the Stamp Act by staging a mock ritual hanging of the colony’s stamp distributor, George Mercer, and of George Grenville, the prime minister who introduced the Stamp Act. Soon it was discovered that Lee himself had applied for that distributor position, which proved rather awkward for his bona fides as a fire-breathing patriot. After a mea culpa speech delivered with the trademark Lee passion, he was absolved and, indeed, lauded for his honesty.

He escalated the protest in 1766 by writing the Westmoreland Resolves, which promised opposition to the Stamp Act “at every hazard, and, paying no regard to danger or to death.” Further, anyone who attempted to enforce it would face “immediate danger and disgrace.” The signatories, prominent citizens of Westmoreland County, Lee’s home, pledged that they would refuse to purchase British goods until the Stamp Act was repealed. Eight years later, this type of boycott was the impetus for the Continental Association, an early form of collective action by the colonies drafted by the First Continental Congress and signed by Lee to force the British to repeal the Coercive Acts.

On March 12, 1773, Lee was appointed to Virginia’s Committee of Correspondence. The first such committee was established in Massachusetts the previous fall under the leadership of Sam Adams to spread information and anti-British propaganda to all parts of the colony and to communicate with committees in other colonies. The trigger was the Gaspee affair. The British cutter Gaspee, enforcing customs duties off Rhode Island, ran aground on a sand bar. Locals attacked and burned the ship and beat the officer and crew. The government, keen on punishing the destruction of a military vessel and the assault on its men, threatened to have the culprits tried in England. The specter of trial away from one’s home was decried by the Americans as yet another violation of the fundamental rights of Englishmen. Other colonies soon followed suit and established their own committees. Letters exchanged between Lee and Adams expressed their mutual admiration and laid the foundation for a lifelong friendship between the two.

Amid deteriorating relations between Britain and her American colonies, Parliament raised the ante by adopting the Coercive or Intolerable Acts (Boston Port Act, Massachusetts Government and Administration of Justice Act, Quartering Act) against Massachusetts Bay. Virginia’s House of Burgesses responded with the Resolve of May 24, 1774, concocted by Lee, his brother Francis Lightfoot Lee, Thomas Jefferson, Patrick Henry, and George Mason, which called for a day of “Fasting, Humiliation, and Prayer” for June 1. Time being of the essence, the authors were not above a dash of plagiarism. They took the language from a similar resolution made by the House of Commons in the 1640s during their contest with King Charles I. The Resolve denounced the British actions as a “hostile invasion.” It called for the Reverend Thomas Gwatkin to preach a fitting sermon. The reverend declined the invitation, not eager to have his church drawn into what he viewed as a political dispute. The royal governor, the Earl of Dunmore, reacted by dissolving the Burgesses. Lee and other radicals thereupon gathered at Raleigh’s Tavern in Williamsburg on May 27. They adopted a more truculent resolution, which declared that “an attack made on one of our sister Colonies, to compel submission to arbitrary taxes, is an attack made on all British America.”

Lee’s visibility in the colony’s political controversies paid off, in that he was selected by Virginia as a delegate to the First Continental Congress and, the following year, to the Second Continental Congress. It was in that latter capacity that Lee made his name. In May, 1776, the Virginia convention instructed its delegates to vote for independence. On June 7, Lee introduced his “resolution for independancy [sic].” The motion’s first section, adopted from the speech by Edmund Pendleton to the Virginia convention, declared:

“That these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.”

Debate on the motion was delayed until July due to the inability or unwillingness of some delegations to consider the issue.

In the meantime, colonies were declaring themselves independent and adopting constitutions of their own. With events threatening to bypass Congress, a committee was selected to draft a declaration of independence. Lee was unavailable. He had hurried back to Virginia, apparently to attend to his wife who had fallen ill. That absence prevented him from participating in the debate on his resolution on July 2. He returned in time to sign the Declaration of Independence.

Lee’s terms in Congress demanded much from him. He was what today would be described as a “workaholic.” On several occasions, this led to illness and absence due to exhaustion. He served in numerous capacities, including as chairman of the committee charged with drafting a plan of union, though most of the work on that project was done by John Dickinson as the principal drafter of the eventual “Articles of Confederation.” Lee was one of sixteen delegates who signed both the Declaration and the Articles.

From 1780 to 1782, Lee put his position in Congress on hold to tend to political matters in Virginia. The state was in relatively sound financial shape and keeping up with its war debt obligations. Lee opposed making the highly-depreciated Continental Currency legal tender. He also took the unpopular position of denouncing the law to cancel debts owed by Virginians to British creditors. “Better to be honest slaves of Great Britain than to become dishonest freemen,” he declared.

On the topic of slaves, Lee inherited 50 from his father. Despite that, he had strong anti-slavery sentiments. In 1769, he proposed that a high tax be assessed against importation of slaves, in order to end the overseas slave trade. Some critics grumbled that he did this only to make his own slaves more valuable, the same charge made against those Virginians who supported the provision in the Constitution which ultimately ended the trade after 1808. His pronouncements on the moral evil of slavery continued. It is unclear if Lee ever manumitted his slaves. The charge of hypocrisy is readily leveled at someone like Lee. But this history also demonstrates the difficulty of extricating oneself from an economic system on which one’s livelihood depends.

One pressing problem at the time was the parlous state of Congress’s finances, made even more dire by the looming obligations of the war debt. Lee’s role in stabilizing the financial situation in Virginia added to his stature in Congress. His fellow-delegates elected him their president during the 1784-1785 session. He was the sixth to serve as “President of the United States in Congress Assembled” after approval of the Articles of Confederation in 1781. Despite the impressive-sounding title as used in official documents, the position was mainly ceremonial. However, a skillful politician such as Lee could use it to guide the debates and influence the agenda of Congress.

Lee opposed proposals to give Congress a power to tax, especially import duties. He also believed that borrowing from foreign lenders would corrupt. Instead, he aimed to discharge the war debts and fund Congress’s needs through sales of land in the newly-acquired western territory. With the end of British anti-migration policy, millions of acres were potentially open to settlers. He hoped that the Western Land Ordinance of 1785, with its price of $1 per acre of surveyed land, would raise the needed cash. Alas, poor sales soon dashed those hopes. Indian tribes and the pervasive problem of squatters, who simply occupied the land, mindful of the government’s lack of funds for troops to evict them, contributed to the uncertainty of land titles. With Lee’s prodding, Congress belatedly adopted the Land Ordinance of 1787, better known as the Northwest Ordinance. This law, reenacted by the Congress under the new Constitution of 1787, provided some needed stability, but it came too late to benefit the Confederation.

When Virginia accepted the call in Alexander Hamilton’s report on the Annapolis Convention of 1786 to send delegates to a convention to meet the following May in Philadelphia to consider proposals to amend the Articles of Confederation, Lee was elected as one of those delegates. Lee declined the position, as did his political ally Patrick Henry and a number of prominent men in other states. Henry summed up the views of many non-attendees. When asked why he did not accept, Henry, known as a man of many words over anything or nothing, stepped out of character and declared simply, “I smelt a rat in Philadelphia, tending toward the monarchy.”

Once the draft Constitution was approved, the Philadelphia convention sent it to the states for ratification as set out in Article VII. The delegates also sent a copy to the Confederation Congress, with a letter that requested that body to forward its approval of the proposed charter to the states. Lee now attempted a gambit, innocuous on its face, which he hoped would nevertheless undo the convention’s plan. He moved to have Congress add amendments before sending the Constitution to the states. Taking cues from his friend George Mason, the most influential delegate at the convention who refused to support its creation, Lee submitted proposals on free exercise of religion, a free press, jury trials, searches and seizures, frequent elections, a ban on a peace-time army, and excessive fines, among others. These particulars echoed portions of Mason’s Declaration of Rights which he had drafted for Virginia in 1776.

Lee’s strategy was that the states should ratify either the original version or a revised one with any or all of the proposed amendments. If no version gained approval, a second convention could be called which would draft a new document that took account of the states’ recommendations. One facet of this “poison pill” approach alone would have doomed the Constitution’s approval. As drafted, assent of only nine states’ conventions was needed for the new charter to go into effect among those states. For anything proposed by Congress, the Articles of Confederation required unanimous agreement by the state legislatures. Since support of a bill of rights, which the Constitution lacked, was a popular political position, it was likely that enough states would vote for proposed amendments to that end. In that event, the original Constitution would fall short of the nine-state requirement, and Lee’s approach would require a second convention. It was feared—or hoped, depending on one’s view of the proposed system—that this would doom the prospect of change to the structure of governing the United States.

The pro-Constitution faction had the majority among delegations to Congress. Lee’s clever maneuver was defeated. However, rather than conveying the “Report of the convention” to the states with its overt approval, Congress sent it on September 28, 1787, without taking a position.

In the Virginia ratifying convention, Henry and others continued on the path Lee had laid out, seeking to derail the process and force a second convention. Like many other Americans, Lee was not opposed to all of the new proposals, but he believed that, on the whole, the general government was given too much power. The new Constitution was a break with the revolutionary ethos that had sparked the drive to independence and was alien to the republicanism that was part of that ethos. The opponents’ conception of unitary sovereignty clashed with that of the Constitution’s advocates, who believed, as Madison asserted in The Federalist, that the new government would be partly national and partly confederate. To the former, such an imperium in imperio was a mirage. Sooner or later, the larger entity would obliterate the smaller; the general government would subdue the states. Likewise, in the entirety of human history, no political entity the size of the United States had ever survived in republican form. To the classic republicans rooted in the struggle for independence who now were organizing to oppose the Constitution, the very existence of an independent central government threatened the republic. Of course, if any version of such a government were to be instituted, a bill of rights was indispensable.

The writings of an influential Antifederalist essayist, The Federal Farmer, have often been attributed to Lee. As with the works of William Shakespeare, historians debate these essays’ authorship. The claim that Lee wrote them was first made nearly a century after these events; no contemporary sources, including Lee or his political associates, mention him as the writer. The essays, presented in the form of letters addressed to The Republican, were collected and published in New York in late 1787 to influence the state ratifying convention. The Republican was Governor George Clinton, a committed Antifederalist who presided over that convention and a powerful politician who remains the longest-serving governor in American history. Clinton himself is believed to have authored a number of important essays under the pseudonym Cato. Both Federal Farmer and Cato were so persuasive that they alarmed the Constitution’s supporters to the point that The Federalist addresses them by name to dispute their assertions.

Lee was in New York attending Congress during this time, and he was a prolific writer of letters, so it is possible he composed these, as well. Moreover, the arguments in the essays paralleled Lee’s objections about the threat the new system posed to the states and to American republicanism. The similarity extended even to the specific point that Lee made that the composition of the House of Representatives was far too small to represent adequately the variety of interests and classes across the United States.

However, Lee never wrote anything as systematic and analytically comprehensive as the Federal Farmer letters. What he intended for public consumption, such as his resolves, motions, and proclamations, was comparatively brief and, like his rhetoric, to the point and designed to appeal to emotions. John Adams wrote during the First Continental Congress, “The great orators here are Lee, Hooper and Patrick Henry.” St. George Tucker, a renowned attorney from Virginia and authority on American constitutional law, described Lee’s speeches: “The fine powers of language united with that harmonious voice, made me sometimes think that I was listening to some being inspired with more than mortal powers of embellishment.” Historian Gordon Wood has contrasted Lee’s passionate style with the moderate tone and thoughtfulness of the Federal Farmer letters and asserts that Lee did not write them.

If not Lee, who? More recent scholarship has claimed that Melancton Smith, a prominent New York lawyer who attended the state convention, wrote these essays. Smith eventually voted for the Constitution in the narrow 30-27 final vote, which might explain the essays’ moderation in their critiques of the Constitution. His background as a lawyer might account for the close analysis of the document’s provisions. That said, the case for Smith and against Lee is also based on conjecture.

Once the Constitution was adopted, Lee, like Patrick Henry, made his peace. Henry used his influence in the state legislature to take the “unusual liberty” of nominating Lee to become one of Virginia’s two initial United States Senators. In that position Lee supported the Bill of Rights, although he considered its language a weak version of what it was supposed to achieve. Soon, however, Lee parted ways with his old political ally Henry and sided with Hamilton’s expansionist vision of the national government and its financial and commercial policies.

Lee died, age 62, on June 19, 1794. Thus ended the life of a man whose advice still commands attention: “The first maxim of a man who loves liberty, should be never to grant to rulers an atom of power that is not most clearly and indispensably necessary for the safety and well being of society.”

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Podcast by Maureen Quinn.



Essay 67 – Guest Essayist: Joerg Knipprath

In his work E Pluribus Unum, the historian Forrest McDonald provides a succinct profile of Samuel Chase: “But for Samuel Chase, Maryland’s immediate postwar history would have been dull in the extreme….At the time, all that seemed to be happening—or most everything with salt and spice, anyway—appeared to revolve around Samuel Chase….

“Chase was a man of peculiar breed, perfectly consistent by his own standards but wildly inconsistent by any other….[W]henever he appeared in public life in the capacity of an elected official, he artfully duped the people, led them by demagoguery into destructive ways, and exploited them without mercy; and they loved him and sang his praises and repeatedly reelected him….

“But when he appeared in public life in a different capacity, the capacity of institution-maker or institution-preserver, he worked with sublime statesmanship to protect the people against themselves, which is to say, against the like of himself. Thus in 1776, as the principal architect of Maryland’s revolutionary constitution, he created a system so fraught with checks and balances, and with powers so distributed between aristocracy and people, that destructive radicalism seemed impossible. Less than a decade later, as a member of the state’s House of Delegates, he engineered a movement to subvert that very constitution, and did so for the most flagrantly corrupt reasons and with the enthusiastic support of ‘the people,’ in whose name he did it….

“As a rogue who exploited public trust, Chase pursued private gain, but he probably did so more because he enjoyed the role than because he really coveted its fruits. Whatever his motives, he led Maryland’s proud and pretentious aristocrats by the nose for nearly a decade, and in so doing executed a dazzling series of maneuvers that accounted for most of the state’s major policy decisions.”

A physically large man, “Old Baconface,” a sobriquet he was given as a young attorney for his ruddy complexion, was in many ways, then, a larger-than-life character in Maryland. And that all happened before Chase’s rise to high federal judicial office, and the vortex of controversy in which he placed himself once more, precipitating an existential institutional crisis for the Supreme Court.

In 1762, the young attorney Chase was expelled from a debating club for unspecified “extremely irregular and indecent behavior.” In 1765, he founded the local Sons of Liberty with his friend William Paca, a wealthy planter, future governor, and another eventual signer of the Declaration of Independence, who was himself no stranger to political corruption. After being elected to the Second Continental Congress, Chase made a failed attempt to corner the grain market through inside information. These incidents were the overture to the dynamic that marked the increasingly consequential relationship between Samuel Chase and the established political and social order.

Chase’s scheming then moved to the Maryland legislature, which, in the 1781-1782 session, adopted two laws favorable to Chase. The first was the creation of the office of Intendant of the Revenues, which placed in one office complete control over the state’s finances. The appointment went to a Chase associate, Daniel of St. Thomas Jenifer, a future signer of the U.S. Constitution. The second deprived Loyalists of their rights and confiscated their property with a value of more than 500,000 Pounds Sterling at the time. That property was to be sold at public auction. Chase and various associates placed their men in crucial administrative positions and manipulated the sales to their advantage. Among those associates was Luther Martin, an influential Antifederalist who began a long tenure as Maryland’s attorney general in 1778 through Chase’s influence. Another was Thomas Stone, who also had signed the Declaration of Independence.

The Chase syndicate acquired confiscated property valued between 100,000 and 200,000 Pounds Sterling, an amount far beyond what they could pay. Their solution was to choreograph the auction process with the help of Intendant Jenifer so as to cancel that sale through questionable legal technicalities and end up, in a second sale, with a price one-tenth that of the original auction. Even that amount was more than the syndicate had, so they undertook a years-long effort to delay payment and procure a law that would enable them to pay their obligation with an issue of depreciated Maryland paper currency.

Chase’s questionable dealings and political scheming caused him and his associates trouble at times. In the end, however, the scandals, investigations, and attendant calumnies did him no harm. The personal charm he could invoke when needed, the political demagoguery to which he freely resorted to portray himself as a tribune of the people and an opponent of aristocracy and Toryism, and the willingness to deflect attention from the negative consequences of a failed political scheme by fomenting another even more base and outrageous, served him well.

It is a cliché of a certain genre of entertainment that a plot featuring a lovable scoundrel or band of misfits needs a strait-laced, establishment foil. In the tale of Samuel Chase, that part was played by Charles Carroll of Carrollton. Carroll came from the leading family of Maryland Catholics. He was a wealthy planter, thought to have been the wealthiest person in the new nation, worth about $400 million in today’s money. He was also the most lettered of the generally well-educated signers of the Declaration of Independence. Carroll was an early pro-independence agitator. As the leader of the Maryland Senate during the 1780s, he jousted politically with Chase and his allies over Chase’s schemes. While Carroll was able to blunt some of those schemes, Chase, in turn, succeeded in painting Carroll as a Tory. This was a supreme irony, indeed, in light of Carroll’s bona fides as a patriot who had been advocating violent revolution against Britain when Chase was still urging discussions.

In 1791, Chase became chief justice of the Maryland General Court, where he stayed until he was appointed to the United States Supreme Court by President George Washington in 1796. Chase served in that capacity until his death in 1811.

As the political temperature in the country heated up after passage of the Alien and Sedition Acts in 1798, Chase was drawn into the rhetorical clashes between Federalists and Jeffersonians. With relish, Chase denounced Jefferson’s Democratic Republicans as the party of “mobocracy.” Drawing on his experience as a partisan brawler during his days in Maryland politics, he denounced Jefferson, the Republicans, and Jeffersonian policies with his accustomed sharp tongue. Crucially for the events to follow, he did so while performing his judicial duties.

The nature of his position as a supposedly impartial and nonpolitical jurist had no impact on him.

Examples were Chase’s ham-handed actions in the trials in 1800 of, respectively, Thomas Cooper and James Callender for publishing libelous materials about John Adams and Alexander Hamilton. While Cooper was a sympathetic figure, Callender was a scandalmonger whose fate in the courtroom probably would not have stirred anyone, had Chase not made him a political martyr. Callender’s attacks on Hamilton had impressed Jefferson, who was pleased with anyone willing to sling rhetorical mud at the Federalists. Jefferson encouraged and subsidized Callender’s efforts and later pardoned him for his conviction in Chase’s courtroom. However, Jefferson soon became much less enchanted with Callender when the latter demanded he be appointed to a federal office. Upon Jefferson’s refusal, Callender switched political allegiances and, as a Federalist Party newspaper editor, published scurrilous articles that claimed Jefferson’s paternity of children born to Sally Hemings, one of his slaves.

Chase, meanwhile, continued his political activism. Not content to campaign as a sitting judge for President Adams’s reelection, he harangued a Baltimore grand jury in 1803 with a long charge which criticized the Jeffersonians for having repealed an Adams-era judiciary statute that Chase favored, and which condemned the idea of universal suffrage as unrepublican. The last was particularly ironic in light of his public persona as a man of the people and opponent of Toryism in his earlier political career in Maryland.

Having made himself the lightning rod for the Jeffersonians’ fury at what they saw as the Federalists entrenching themselves in the judiciary after their election loss in 1800, Chase became the target of an impeachment effort in the House of Representatives. The grand jury charge in 1803 may have been the catalyst, but Jefferson’s distaste for his cousin, Chief Justice John Marshall, and outrage at Marshall’s lectures to the executive branch in Marbury v. Madison that same year helped produce the reaction. Indeed, it was broadly understood that a Chase impeachment was a dry run for a more consequential attempt to remove Marshall.

Led by another of Jefferson’s cousins, the flamboyant ultra-republican majority leader John Randolph of Roanoke, Virginia, the House voted out eight articles of impeachment on March 12, 1804. The first seven denounced Chase’s “oppressive conduct” in the Sedition Act trials. The eighth dealt with the “intemperate and inflammatory political harangue” in Baltimore which was intended to “excite the fears and resentment…of the good people of Maryland against their state government…[and] against the Government of the United States.” In short, the Jeffersonians accused Chase of the seditious speech they previously claimed Congress could not prohibit under the Sedition Act. With that statute no longer in effect, there was no criminal act on which the impeachment was based. More significantly, since the Republicans had claimed that a federal law that targets seditious speech violates the First Amendment, Chase’s remarks were not even potentially indictable offenses. The vote was a strict party-line matter, 73-32. If party discipline held in the Senate trial, where the Republicans enjoyed a 25-9 advantage, Chase’s judicial tenure was doomed.

The trial was held in February, 1805, supervised by Vice-President Aaron Burr, still under investigation for his killing of Alexander Hamilton in a duel. Chase’s lawyers, including his old political crony, close friend, and successful Supreme Court litigator, Luther Martin, argued that conviction required proof of an act that could be indicted under law. The House managers claimed that impeachment was not a criminal process. Since impeachment was the only way to remove federal judges, they asserted that “high Crimes and Misdemeanors” must include any willful misconduct or corrupt action that made the person unfit for judicial office. Their charges met that test, they averred, because Chase had acted as prosecutor as well as judge in the trials.

The effort failed. Even on the eighth charge, the Baltimore grand jury speech, six Republican Senators voted to acquit, leaving the prosecution four votes short of the necessary two-thirds vote for conviction. On the other, weaker, charges, the House fared worse. Chase’s acquittal diminished the threat which impeachment posed to the independence of the judiciary. Still, the two sides’ respective arguments over the purpose of impeachment and the meaning of the phrase “high Crimes and Misdemeanors” were replayed in subsequent such proceedings and continue to be contested today. After his trial, Chase stayed on the Court another six years. He remains the only Supreme Court justice to have been impeached.

Samuel Chase died in Baltimore in 1811 at the age of 70.



Essay 62 – Guest Essayist: Joerg Knipprath

James Wilson was one of the most intellectually gifted Americans of his time. His cumulative influence on pre-Revolutionary War political consciousness, formation of the governments under the Constitution of 1787 and Pennsylvania’s constitution of 1790, and early Supreme Court jurisprudence likely is second-to-none. Along the way, he amassed a respectable fortune, and took his place as a leading member of the political and economic elite that played such a critical role in the events leading to American independence. That said, he was not immune to the “slings and arrows of outrageous fortune,” in the words of the Bard, but, for the most part, he did not suffer them in the mind. Rather, more often, he chose “to take arms [sometimes literally]…and, by opposing, end them.”

Wilson moved to Philadelphia from his native Scotland in 1766, at age 24. Prior to emigrating, he was educated at Scottish universities. There, he was influenced by the ideas of Scottish Enlightenment thinkers, such as David Hume and Adam Smith. Their ruminations about human nature, the concept of knowledge, and the ethical basis of political rule shaped Wilson’s intellectual ideas which he made concrete in later political actions and judicial opinions.

It appears that Smith’s influence was more constructive than Hume’s. The latter denied the essential existence of such concepts as virtue and vice. Hume instead characterized them as artificial constructs or mere opinion. Wilson was critical of Hume’s patent skepticism, deeming it flawed and derogatory of what Wilson saw as the moral sensibilities integral to human nature. He considered Hume’s skepticism inconsistent with what he viewed as the ethical basis of the political commonwealth, that is, consent of the governed. As he wrote later, “All men are, by nature equal and free: no one has a right to any authority over another without his consent: all lawful government is founded on the consent of those who are subject to it.” However, Wilson also believed, along with John Adams and many other republicans of the time, that such consent could only be given by a virtuous people. In short, Wilson’s democratic vision was elitist in practice. The governed whose consent mattered were the propertied classes. The others might register their consent, but only under the watchful eyes of their virtuous betters in society.

After arriving in Pennsylvania, he studied law under John Dickinson, another member of the emerging political elite. While so occupied, he also lectured, mostly on English literature, at the College of Philadelphia, site of the first medical school in North America. He had arrived at an institution that was connected to an astonishing number of American founders. Despite its relatively recent founding in 1755, it counted 21 members of the Continental Congress as graduates; nine signers of the Declaration of Independence were alumni or trustees; five signers of the Constitution held degrees from the College, and another five were among its trustees.

There, Wilson successfully petitioned to receive an honorary Master’s degree, to remedy his failure to complete his studies for a formal degree at the Scottish universities. His scholarly association with the College of Philadelphia continued the rest of his life, including after its merger into the University of Pennsylvania in 1791. At that time, Wilson took on a lectureship in law for a couple of years, only the second such position established in the United States, after the Chair in Law and Police held by George Wythe at the College of William and Mary. The University of Pennsylvania traces its eventual law school to Wilson’s position.

Wilson practiced law in Reading, Pennsylvania. His talent and connections quickly produced financial security. He turned his attention to politics amid the stirrings of conflict with the British government. In 1768, he wrote “Considerations on the Nature and Extent of the Legislative Authority of the British Parliament.” In this pamphlet, Wilson denied the authority of Parliament to tax the American colonists because of the colonists’ lack of representation in that body. Perhaps because it was too early to mount a direct constitutional challenge to the authority of Parliament to govern, this seminal work was not published until 1774. Despite his negation of Parliamentary authority, Wilson did not advocate sundering all ties with the mother country. Rather, he emphasized the connection between England and her colonies through the person of King George. Wilson’s proposed union, cemented by a pledge of allegiance to the king, was a rudimentary plan for the type of dominion system that John Adams and Thomas Jefferson also proposed in separate missives that same year. In an ironic postscript, the British ministry offered, too late, a similar structure as a way to end the war in 1778. It was a system the British instituted a century later for other parts of their empire.

In 1774, Wilson was elected to the local revolutionary Committee of Correspondence. When the Second Continental Congress was called in 1775, Wilson was elected to the Pennsylvania delegation. With the Adamses (John and Sam), Jefferson, the Lees of Virginia (Richard Henry and Francis), and Christopher Gadsden, the “Sam Adams of the South” and designer of the Gadsden Flag, Wilson was among the most passionate pro-independence voices as that Congress deliberated.

Then occurred an odd turn of events. When Richard Henry Lee’s motion for independence came up for debate on June 7, 1776, consideration had to be postponed because Pennsylvania, along with four other colonies, was not prepared to vote in favor. John Dickinson, Wilson’s close friend and law teacher, was part of the peace faction. Did that influence Wilson’s vote? Was Wilson really a pro-independence radical, as his writings and soaring rhetoric in Congress indicated? Or was he an elite conservative reluctantly floating along with the tide of opinion among others of his class? Wilson and others in his delegation claimed that they merely wanted clearer instructions from their colony’s provincial congress. In a preliminary vote within the Pennsylvania delegation on July 1, 1776, Wilson broke with Dickinson and voted for independence. When Congress voted on Lee’s motion the next day, Dickinson and Robert Morris stayed away. Wilson, Benjamin Franklin, and John Morton then cast Pennsylvania’s vote in favor of the motion and independence.

During the Revolutionary War, Wilson divided his time between Congress and opposing Pennsylvania’s new constitution. He also returned to private law practice and served on the board of directors of the Bank of North America. That bank was the brainchild of fellow-Pennsylvanian Robert Morris, another personal friend with whom Wilson also worked closely on the financial matters of the United States.

Wilson continued his life-long practice of land speculation, the vocation of some among the American elite, and the avocation of most others, elite or not-so-elite. The country was land-rich and people-poor. Investors gambled that, after peace was restored, the British pro-Indian and anti-settlement policy of the Proclamation of 1763, which had prohibited American settlement of the interior, would be overturned. Western lands finally would be opened to immigrants. Wilson, along with Robert Morris and many other prominent Americans and some foreigners, had organized the largest of the land companies, the Illinois-Wabash Company, even before the war. Wilson eventually became its head and largest investor. The intrigue among the Company, politicians in various states, delegates to Congress, and agents of foreign governments to gain access to large tracts of trans-Appalachian lands presents a fascinating tale of its own.

The Illinois-Wabash Company was not Wilson’s only venture in land speculation. He co-founded another company and also purchased rights to large tracts individually or in partnership with others. It has been estimated that, directly or through investment entities, Wilson had interests in well over a million acres of Western land. Much of this land bounty was financed through debt. Creditors want cash payment, and highly leveraged debtors are particularly vulnerable to economic contractions: land values drop as land goes unsold, and cash in the form of gold and silver specie becomes scarce. Bank notes no longer trade at par, reflecting the financial instability of their issuers. Like his business associate and political ally Robert Morris, Wilson was hit hard by the Panic of 1796-97. He was twice briefly incarcerated in debtors’ prison, even after fleeing Pennsylvania for North Carolina to avoid his creditors. Even more astounding, these events occurred while he was serving on the U.S. Supreme Court and performing his circuit-riding duties.

One sling of outrageous fortune against which Wilson literally took arms occurred on October 4, 1779. After the British abandoned Philadelphia, the revolutionary government undertook to exile Loyalists and seize their property. As John Adams had done for the British soldiers accused of murder in the Boston Massacre in 1770, Wilson successfully took up the unpopular cause of defending 23 of the Loyalists. The public response to Wilson’s admirable legal ethics was more militant than what Adams had experienced. Incited by the speeches of Pennsylvania’s radical anti-Loyalist president, Joseph Reed, a drunken mob attacked Wilson and 35 other prominent citizens of Philadelphia. The mob’s quarry managed to barricade themselves in Wilson’s house and shot back. In the ensuing melee, one man inside the house was killed. When the mob tried to breach the back entrance of the house, the attackers were beaten back in hand-to-hand combat. The fighting continued, with the mob using a cannon to fire at the house. At that point, a detachment of cavalry appeared, led by the same Joseph Reed, and dispersed the mob. It is estimated that five of the mob were killed and nearly a score wounded. Members of the mob were arrested, but no prosecutions were launched, allegedly to calm the situation. Eventually, all were pardoned by Reed.

The Fort Wilson Riot, as it became known colloquially, had more complicated origins and produced more profound changes than one can address in detail in an essay about Wilson. It arose from difficult economic circumstances and rising prices due to food shortages. The lower classes were particularly hard hit, and popular resentment simmered for months, punctuated by gatherings and publications which none-too-subtly threatened upheaval. During that volatile time, Wilson was accused of “engrossing,” that is, hoarding goods with the intent to drive up prices. This may have made him an even more likely target for the mob’s wrath than having defended Loyalists.

As well, the friction between the lower classes and the merchant bourgeoisie was manifested in competing political factions, the Constitutionalists and the Republicans. The former supported the radically democratic Pennsylvania constitution of 1776, which placed power in a unicameral legislature closely monitored through frequent elections. They stressed the need for sacrifice for the common good, done on a voluntary basis or by government force. The latter opposed that charter as the cause of ineffective government and destructive policies which threatened property rights. In the end, the two competing visions of republicanism settled their political conflict during the riot. The mob had violated an unwritten rule of protest, and popular opinion shifted against the Constitutionalists. Wilson’s Republicans had won. They would determine the subsequent political direction of the state, which became the critical factor in Pennsylvania’s struggle to approve the proposed U.S. Constitution in the fall of 1787. The shift in political fortunes culminated in 1790 in a significantly different constitution, one of more balanced powers controlled by the political elite and containing explicit protections of property rights.

Perhaps Wilson’s greatest contribution to America’s founding was his participation in the constitutional convention in Philadelphia in May, 1787. He became one of only six to sign both the Declaration of Independence and the Constitution, the others being George Clymer, Benjamin Franklin, Robert Morris, George Read, and Roger Sherman.

One of the most accomplished lawyers in the country, John Rutledge of South Carolina, future Supreme Court justice and, briefly, the Court’s chief justice, stayed at Wilson’s home during this time. The historian Forrest McDonald describes a plan by Rutledge and Wilson to “manage” the convention. Apparently, Wilson made similar plans with James Madison, Robert Morris, and Gouverneur Morris (no relation). Rutledge, in turn, was scheming with others. To complete the intrigue, Wilson and Rutledge kept their side discussions secret from each other. The plan seemed to bear fruit when Wilson and Rutledge were appointed to the Committee of Detail, charged with writing the substantive provisions of the Constitution from the delegates’ positions manifested in the votes of the state delegations. Considering the committee’s final product, however, their success appears to have been less than spectacular. It was not for lack of trying, however. Wilson spoke 165 times at the convention, more than anyone other than Gouverneur Morris.

Like his fellow connivers, Wilson took a very strong “nationalist” position in the convention. He was instrumental in the creation of the executive branch. Reacting against the weakness of Pennsylvania’s plural executive council and the lack of an effective balance of power among the branches of government under his state’s constitution, he, like Alexander Hamilton, believed a unitary executive to be essential. The necessary “energy, dispatch, and responsibility to the office” would best be assured if a single person were in charge of the executive authority. As well, such a person would be positioned to blunt the self-interest of the political factions which are endemic to legislatures. Wilson objected to the original proposal to have the president elected by the whole Congress or by the Senate alone. Instead, he proposed, the president should be elected by the people. Very few delegates had a taste for such unbridled democracy. Wilson then fell back on his second line of argument, that the president be selected by presidential electors chosen by the people of the states, with the states divided into districts proportioned by population, like today’s congressional districts. This, too, was defeated, by eight states to two. The matter was tabled for weeks. In the end, the current system, one that dilutes majoritarian control and favors the influence of the states in their corporate capacity, prevailed.

An explanation of the term “nationalist” is in order. As used herein, it has the classic meaning associated with the concept as it relates to the period of the founding of the United States and subsequent decades. It describes those who identified more with the new “nation,” i.e., the United States, than with the individual colonies, soon to become states, of their birth. Generalizations are, by definition, imprecise. Still, the most ardent American nationalists of the time were those who, like Wilson, Robert Morris, and Hamilton, were born abroad; those who, like Rutledge and Dickinson, had traveled or otherwise spent considerable time in Europe; and those who had significant business connections abroad. They also tended to be younger. The difference between these outlooks was less significant for the process of separating from Britain than it was for the controversies over forming a “national” government and an identity of the “United States” through the Articles of Confederation and, subsequently, the Constitution of 1787. The nationalists sought to amend and, later, to abandon the Articles. As to the Constitution, the nationalists at the Philadelphia convention supported a stronger central government and, on the whole, more “democratic” components for that government than their opponents did. They also generally opposed a bill of rights as ostentatious ideological frippery. In the struggle over the states’ approval of the Constitution, they styled themselves “Federalists” as a political maneuver and characterized their opponents as “Anti-Federalists.” After the Constitution was approved, most of them associated with Hamilton’s policies and the Federalist Party. In the sectionalist frictions before the Civil War, they were the “Unionists.” Regrettably, like other words in our hypersensitive culture, the term has been ideologically corrupted recently, so that its obvious meaning has become slanted.
Paradoxically, even as the central government becomes powerful beyond the wildest charges of the Constitution’s early critics, the very concept of the United States as a “nation” is today under attack.

In the long wrangling over the structure of Congress, Wilson urged proportional representation, as he had done unsuccessfully a decade earlier in the debate over the Articles of Confederation. He also supported direct election of Congress by the people. In light of his democratic faith in the consent of the governed, and coming as he did from a populous state, his position is hardly surprising. That noted, he favored a bicameral legislature with an upper chamber that would restrain the more numerous lower chamber and its tendency towards radical policies. The insecurity of property rights that resulted from the policies of the Constitutionalist-dominated unicameral Pennsylvania legislature had alarmed Wilson. He adhered to his support for proportional representation in the Senate and for direct popular election of that body. Like his fellow large-state delegates Madison and Hamilton, he eventually resigned himself to the state-equality basis of the Senate under Roger Sherman’s Connecticut compromise and to election of that body by the state legislatures. He also supported the three-fifths clause for counting slaves in the apportionment of representatives. That clause, first presented in 1783 as a proposed amendment to the Articles of Confederation, originally was part of a formula to assess taxes on the states based on population rather than property value. That purpose is also reflected in Article I of the Constitution.

During the debate in the Pennsylvania convention over the adoption of the Constitution, Wilson delivered his famous Speech in the State House Yard, a precursor to many arguments developed more fully in The Federalist. Wilson systematically addressed the claims of the Constitution’s critics. He defended the omission of a bill of rights, declaring such a document to be superfluous and, indeed, inconsistent with a charter for a federal government of only delegated and enumerated powers. Copies of the speech were circulated widely by the Constitution’s supporters.

There were those, like Richard Henry Lee of Virginia, who claimed that the drafting convention in Philadelphia had gone beyond its mandate to propose only amendments to the Articles of Confederation and that, as a consequence, the proposed Constitution was revolutionary. Wilson drew on his philosophical roots to declare that “the people may change the constitutions whenever and however they please. This is a right of which no positive institution can ever deprive them.” This notion of popular constitutional change outside the formal amendment method set out in Article V of the Constitution was a self-evident truth to many Americans at the time. It has become much more controversial, as Americans have moved from the revolutionary ethos of the 1780s and a robust commitment to popular sovereignty to today’s more pliant population governed by an increasingly distant and unaccountable elite.

Wilson next turned his attention to the adoption of a new state constitution in Pennsylvania. At the same time, he sought the chief justiceship of the United States Supreme Court. Although that office went to John Jay of New York, President Washington appointed Wilson to be an associate justice. In that capacity, he participated in several significant early cases. As expected, he consistently took a nationalistic position. Thus, in 1793 in Chisholm v. Georgia, he joined the majority of justices in holding that the federal courts could summon states as defendants in actions brought by citizens of other states and adjudicate those states’ obligations without their consent. Wilson reasoned that the Constitution was the product of the sovereignty of the people of the United States. This sovereignty, exercised for purposes of Union, had subordinated the states to suits in federal court as defined in Article III. The decision ran contrary to the long-established common law doctrine of state sovereign immunity. Swift and hostile political reaction in Georgia and in Congress culminated in the adoption of the Eleventh Amendment to overturn Chisholm.

Wilson joined two other nationalistic decisions. One was the unpopular Ware v. Hylton in 1796, which upheld the rights of British creditors to collect in full the debts owed to them. Those rights were guaranteed under the Paris Treaty that ended the Revolutionary War, but they conflicted with a Virginia law that sought to limit them. Like his fellow justices, Wilson applied the Supremacy Clause to strike down the state law. But he also recognized the binding nature of the law of nations, which had devolved to the United States upon independence. The other was Hylton v. U.S. the same year, which upheld the constitutionality of the federal Carriage Tax Act. The case was an early exercise of the power of constitutional review by the Court over acts of Congress and a precursor to Marbury v. Madison. That power was one which Wilson had strenuously urged in the constitutional convention nine years earlier in support of a strong federal judiciary.

Depressed about his precarious economic situation and worn out from the rigors of circuit-riding duties as a Supreme Court justice, Wilson died from a stroke in 1798.

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.

Podcast by Maureen Quinn.


Essay 55 – Guest Essayist: Joerg Knipprath
Robert Morris of Pennsylvania: Merchant, Superintendent of Finance, Agent of Marine, and Signer of the Declaration of Independence

Robert Morris, Jr., is one of only two men who signed the Declaration of Independence, the Articles of Confederation, and the Constitution of 1787. He thus was present at three critical moments in the founding of the United States. His most significant contributions to that founding occurred during the decade of turmoil framed by the first and last of these, that is, the period of the Revolutionary War and the Confederation.

Morris was of English birth, but came to Pennsylvania as a child. He inherited a substantial sum of money when his father, a tobacco merchant, died prematurely. After serving an apprenticeship with his father’s former business partner, Morris started a firm with that partner’s son. The firm became a success in the tobacco trade, marine insurance, and commerce in various merchant goods. Given these commercial interests, Morris opposed British taxes on merchants and laws that hindered trade, especially trade carried in American vessels.

After the skirmishes at Lexington and Concord, Morris was selected to Pennsylvania’s Committee of Safety. His efforts to secure ammunition for the Continental Army led to his appointment to Pennsylvania’s delegation to the Second Continental Congress, which met in the capital at Philadelphia. Morris was torn between opposition to the British government’s actions and his loyalty to the Crown. He sought to mediate between the radicals pressing for independence and the traditionalists seeking to negotiate continued connection with the motherland. When it came time to vote on Richard Henry Lee’s motion for independence on July 2, 1776, Morris and fellow Pennsylvania moderate John Dickinson absented themselves to allow that colony’s delegation to vote in favor. Independence having been declared, Morris went with the tide and signed the Declaration the following month.

During the Revolutionary War, the very wealthy Morris assumed two roles befitting his talents, finance and shipping. Even before independence, he served on the Committee of Trade and the Marine Committee. Once the Articles of Confederation were finally approved in 1781, he was given more formal executive offices: Superintendent of Finance, analogous to the current Secretary of the Treasury, and Agent of Marine, a forerunner of the Secretary of the Navy. As well, he continued through those positions his efforts to secure supplies for the Continental Army.

It was particularly in the former capacity that he excelled and later received the appellation “Financier of the Revolution.” The new country was, not to mince words, a financial basket case. To term the promissory notes of the Confederation “junk bonds” would be flattery. The British had refused to allow the creation of a domestic banking system in the colonies, in order to maintain control over the economy, thwart independence, and promote the ascendancy of London as the world’s financial center over Amsterdam. Each colony had had its separate financial relationship with London. In the colonies themselves, someone wanting credit had to obtain loans from local merchants. The country was utterly without even a rudimentary integrated banking system.

Commerce, as well, had been regulated by the British to their advantage. Restrictions on colonial trade with the West Indies and with continental European countries had been a recurring source of friction in the decade before the War. Shortly before American independence was declared, Parliament, in December, 1775, had passed the Prohibitory Act, which outlawed commerce even between the colonies and England. With independence, the gloves came off entirely. The British navy threw a blockade around American ports, which brought legal sea-borne trade to a standstill. American efforts to evade this blockade through smuggling and the eventual licensing of privateers were spirited, but they amounted to nothing more than a nuisance to the British maritime stranglehold on American commerce.

Money itself was both scarce and overabundant. Scarce, in the form of gold and silver; overabundant in the form of paper currency. Not only British coins circulated, but also those from many other European countries, especially Spanish silver pieces-of-eight (akin to the future silver dollar) and gold doubloons. States issued a few small copper coins along with significant amounts of “bills of credit,” that is, paper scrip which depreciated in value and was at the center of much commercial speculation, economic chaos, and political intrigue over the first decade of independence.

The Confederation’s currency, the Continental Dollar, was, if anything, even more pathetic. Aside from a few pattern coins struck in 1776, mostly in base metals, the currency was issued as paper. Although historians’ research has not been able to reach a definitive conclusion, it appears that, over the course of about five years, about 200 million dollars’ worth was printed. To put this in perspective, the population of the United States at the time was about 0.8% of today’s, and the purchasing power of the dollar then was many times what it is now. Due to massive British counterfeiting, even more than that amount of Continental currency actually may have circulated. Congress had no domestic sources of income, because it lacked the power to tax directly. Instead, it had to seek requisitions from the states. Although the states were obligated under the Articles of Confederation to pay those requisitions, their performance was unsteady and varied from state to state, especially as the financial demands of the war, the turmoil of military campaigns, and the strangulation of commerce by the British blockade took their toll on their economies.

The printing of vast amounts of currency, out of proportion with what the country could back up with hard assets, such as gold and silver, led to serious inflation. The currency depreciated to such a point that, by 1781, it ceased to be used as a medium of exchange. It did, however, gain linguistic currency through the commonly-used contemptuous aphorism, “Not worth a Continental” to signify something of no value.

Enter Robert Morris. Congress appointed him Superintendent of Finance in 1781. Attempting to ameliorate the desperate financial situation of a bankrupt country, he began to finance the Continental Army’s supplies and payroll himself through “Morris notes” backed by his own credit and resources. His efforts over the next three years, while crucial in averting political disaster, still fell short. The seriousness of the matter was underscored by several near-mutinies among elements of the officer corps of the Army: the Pennsylvania Line Mutiny of January, 1781, the McDougall delegation’s delivery to Congress in December, 1782, of an ominous petition signed by a number of general officers, and the Newburgh Conspiracy by a large contingent of Army officers in early 1783. They all showed the simmering threat to the young republic from Congress’s broken promises caused by the lack of funds to pay the military. Morris’ correspondence with some staff officers at General Washington’s headquarters revealed a desire for new ways to force Congress to compel the states to meet their financial obligations. This gave rise to unsubstantiated rumors that the military’s discontent, especially the Newburgh Conspiracy, was supported, or even instigated, by Morris and other “nationalist” members of Congress.

In other financial matters, Morris directed his efforts to create a banking system, in order to improve access to private credit and to stabilize public credit. In this matter he was assisted by his able protégé, Alexander Hamilton, himself trained in business and finance before joining the military. Morris issued a “Report on Public Credit” in 1781, which proposed that Congress assume the entire war debt and repay it fully through new revenue measures and a national bank. The first part of this ambitious endeavor failed when, in 1782, Rhode Island alone refused to approve an amendment to the Articles of Confederation to give Congress the power to tax imports at 5% as a source of revenue.

However, Morris did obtain a charter from the Confederation Congress on May 26, 1781, for the Bank of North America. Modeled after the Bank of England, it began its operation as the first commercial bank in the United States in early 1782. It also took on some functions of a proto-central bank in its attempt to stabilize public credit. About one-third of the bank shares were purchased by private entities, the rest by the United States. Morris used $450,000 of silver and gold from loans to Congress by the French government and Dutch bankers to fund the government’s purchase of its bank shares. He then issued notes backed by that gold and silver for loans, including to the United States. When Congress appeared unable to repay the loans, Morris sold portions of the government’s shares to investors to raise funds. Using those funds, he repaid the bank and then issued more notes to lend to the government to meet its financial obligations.

Unfortunately, despite Morris’ energy and financial wizardry, the Confederation’s debts continued to expand, with no way to repay them that was both constitutionally permitted and politically feasible. European lenders had reached the end of their patience. Unwilling to remain a part of this calamitous system, Morris resigned his office in 1784, having been preceded in his exit by Hamilton, who had left Congress for similar reasons a year earlier.

As a constitutional matter, the Bank’s charter was challenged early as beyond Congress’ limited powers under the Articles of Confederation. Morris obtained a second charter, from Pennsylvania, in 1782. That state’s legislature briefly revoked the charter in 1785, before reinstating it in 1786. With the end of the Confederation in 1788 due to the adoption of the new Constitution, the Bank’s charter under the Articles expired. It continued to operate as a state institution within Pennsylvania. Through a series of mergers and acquisitions since then, what remains of the Bank is part of Wells Fargo & Co. today. Its role as a national bank, but one supported by a much sounder constitutional and economic foundation, was recreated by the Bank of the United States, chartered by Congress in 1791 at the urging of Alexander Hamilton and of the by-then Senator Robert Morris.

In his role as official Agent of Marine, as well as in an informal capacity before then, it was Morris’ job to supervise the creation of a navy and to direct its operations. Congress authorized the construction of more than a dozen warships. These were no match for the Royal Navy and were used primarily as commerce raiders to capture British merchant ships. Almost all were sunk, scuttled, or captured by 1778. Most American naval ships were armed converted merchant vessels, often owned by private individuals. The most effective raiders, favored by Morris, were privateers, private vessels licensed by Congress to attack British shipping. Congress issued nearly 2,000 such letters of marque, and the privateers caused an estimated $66 million in losses to British shipping. Privateering was so profitable for a time that Morris and other investors built and sent out their own privateers.

After the Revolutionary War, Morris focused on private business, including the favorite investment activity of moneyed Americans, land speculation. On the political side, he was selected by Pennsylvania for its delegation to the Constitutional Convention of 1787. He presided at the opening session on May 25, where he moved to make George Washington the presiding officer. He was a nationalist in outlook and, based on his experience as Superintendent of Finance under the Confederation, wanted to assure the general government a power to tax. He favored replacing the Articles, rather than just amending them. Beyond that, he had no real philosophical commitment to the particulars of the new constitution. Not being a politician or political theorist, he had little influence on the proceedings.

With the new government in place, the Pennsylvania legislature elected Morris to the United States Senate. President Washington wanted to make Morris Secretary of the Treasury. Morris demurred and recommended Hamilton in his stead. The two were closely aligned on economic and commercial policy. Hamilton’s “First and Second Reports on the Public Credit” in 1790 reflected Morris’ own “Report” of a decade earlier respecting the assumption and funding of war debts and the creation of a national commercial bank.

Morris’ genius in financial matters did not save him from economic disaster. He overextended himself in his land speculation. His company owned millions of acres of land. The Panic of 1797, triggered by the damage to international trade and immigration caused by the Napoleonic Wars in Europe, left Morris land-rich and cash-poor. As a consequence of depreciating land values and insufficient cash to pay creditors and taxes, he spent three and a half years in debtor’s prison. The incarceration only ended in August, 1801, after Congress passed a bankruptcy law for the purpose of obtaining his release. He was adjudged bankrupt, and his then-almost inconceivable remaining debt of nearly $3 million was discharged. Still, Morris and his wife were left virtually penniless, having received just a small pension. He died in 1806.



Essay 48 - Guest Essayist: Joerg Knipprath

It is unlikely that many Americans today, even many New Yorkers, have heard of Francis Lewis. Even though he is one of only sixteen to have signed both the Declaration of Independence and the Articles of Confederation, he seems not to have had much impact on the political direction or the constitutional development of the country. Still, he was reputed by a 19th-century biographer to have been admired by his contemporaries. Today, Francis Lewis High School in Queens, New York, preserves his name. According to its website, the school is one of the most applied-to public high schools in New York City.

Lewis was born on March 21, 1713, in Llandaff, Wales. He was orphaned by age 5 and raised by an aunt. After attending school in Scotland and England, he became an apprentice at a mercantile house in London. At age 21, he inherited property from his father’s estate, sold it, converted the proceeds to merchandise, and sailed for New York in 1734. He left a portion of the merchandise for his business partner, Edward Annesley, and took the rest to Philadelphia to sell. He returned to New York in 1736.

Having become a successful businessman with contacts in several countries, he was entrusted by the British military with a contract to supply uniforms during the French and Indian War. In 1756, the first official year of that war, Lewis was at Fort Oswego in upstate New York. During his stay, the French and their Indian allies attacked in August. Lewis was standing next to the English commander when the latter was killed in the battle. The British surrendered the fort to the French, and Lewis was captured and eventually taken to France. It has been written that he was kept in a box or crate during that voyage. His harrowing captivity ended through a prisoner exchange when peace was achieved in 1763. Lewis returned to New York. The British government awarded him 5,000 acres in New York as compensation for the lost years of his life.

Lewis once more turned his attention to business, and he quickly prospered. With his large fortune firmly established, he retired from running his businesses and became active in politics. When Parliament passed the Stamp Act in 1765, he changed his pro-Royalist sentiments and joined the Stamp Act Congress organized to protest the tax.

Thereafter, his political activism deepened. That same year, he was a founding member of the local chapter of the Sons of Liberty, one of a loosely-connected collection, scattered around the colonies, of silk-stockinged rabble-rousers with lower-class auxiliaries as enforcers. When the crisis between Britain and her colonies began to worsen, Lewis joined the Committee of Fifty-one, organized in New York in 1774 to protest the closing of the port of Boston to commerce. When that committee was succeeded by the Committee of Sixty in 1775 to enforce the colonies’ trade embargo against British goods, which had been adopted by the First Continental Congress, Lewis joined it as well. That committee was replaced, in short order, by the Committee of One Hundred, which directed the colonists’ program against Parliament until the first New York Provincial Assembly met and took over that task on May 23, 1775. The Assembly soon elected Lewis to be a delegate to the Second Continental Congress, where he served between 1775 and 1779.

In the Congress, he signed the Olive Branch Petition on July 5, 1775. That missive, written by John Dickinson of Pennsylvania, was a last attempt by the moderates in the Congress to avert war. The petition assured King George of the Americans’ loyalty to him. Dickinson pleaded with the king to create a more equitable and permanent political and trade arrangement between Britain and her colonies than existed as a result of Parliament’s various unpopular and, to the Americans, unconstitutional, acts. The petition failed to achieve its purpose. The King refused even to read it. Instead, on August 23, 1775, he declared the American colonies to be in rebellion. The message of peace and compromise of the Olive Branch Petition likely was undermined by the Congress’ adoption the following day of the Declaration of the Causes and Necessity of Taking Up Arms. Drafted in parts by Thomas Jefferson and John Dickinson, that document castigated Parliament’s tax and trade policies and its punitive acts. It did so in rather incendiary language, in sharp contrast to the tone of the Olive Branch Petition. As well, John Adams’ letter to a friend, intercepted by the British and forwarded to London, which belittled the petition and complained that the Americans should have built up a navy and taken British officials prisoner, could not have helped the effort to persuade the British government of the Americans’ sincerity.

As the final break with Britain loomed, the Second Continental Congress adopted the Declaration of Independence. The vote on Richard Henry Lee’s resolution to declare independence, on July 2, 1776, was approved by 12 delegations. Lewis and the rest of the New York delegates had to abstain because they had not yet received instructions from the provincial assembly to proceed. After his delegation received the proper authorization from New York, Lewis and the other members signed the Declaration on August 2.

Lewis used his wealth and business acumen to assist the new country. He is estimated to have been the fifth-wealthiest signer of the Declaration. Before and during the war, he was instrumental in procuring uniforms, arms, and supplies for the Continental Army, both on his own account and through his administrative talents. He strongly sided with General George Washington against the latter’s critics in the “Conway Cabal” who sought to replace Washington with the politically popular, but militarily incompetent, General Horatio Gates. Lewis’ service in the Congress also included approving the Articles of Confederation in 1777 and being Chairman of the Continental Board of Admiralty.

Despite his wealth and his involvement in public affairs at an exceptional time, Lewis was no stranger to personal tragedy. Already mentioned was his loss of both parents as a young child, left also without siblings. Only three of his seven children reached adulthood. Perhaps most traumatic was the fate that befell his wife. Lewis had married Elizabeth Annesley, his business partner’s sister, in 1745. While Lewis was away, in 1776, his house in Whitestone, in today’s Queens, New York, was destroyed by the British after the Battle of Brooklyn. Soldiers from a light cavalry troop pillaged the house, and a warship then opened fire. Worse, the British took his wife prisoner and held her for two years. Historical sources aver that the conditions of her captivity were inhumane in that the British denied her a bed, a change of clothing, or adequate food over several weeks.

Eventually, General Washington was apprised of her situation. He thereupon ordered the seizure of the wife of the British Pay-Master General and the wife of the British Attorney General for Pennsylvania. Both were to be held under the same conditions as Elizabeth Lewis. A prisoner exchange was then arranged, and Elizabeth was released in 1778. She returned to be with her husband in Philadelphia. Unfortunately, her captivity had so ravaged her health that she died not long afterwards, in June, 1779. This episode illustrates the suffering that befell families on both sides of what was, in essence, a civil war. It often was a war between neighbors, former friends, and even family members, not one between organized armies of strangers with different lands, cultures, and languages.

Francis Lewis retired from public service in 1781. Thereafter, he lived a life of leisure, with books and plenty of family time with his two sons and their children. A daughter had married an English naval officer and left North America, never to return, a none-too-rare sad consequence of the war, and one that befell Benjamin Franklin’s family, as well. Lewis died on December 31, 1802.

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Podcast by Maureen Quinn.




Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 
Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Essay 37 – Guest Essayist: Joerg Knipprath

If Americans know of John Adams at all, it is probably somewhat vaguely as a long-ago President. Adams’s tenures as Vice-President and President are not generally regarded as among the most memorable in American history. He was not charismatic, physically imposing, or politically adept. In seeming contrast to his Puritan roots, he also was rather vain. As a result, he did not come easily by loyal friends in the political world.

As Vice-President, he is probably best known for his efforts to devise titles for the President and others along the lines he had seen during his residence in the Dutch Republic, where top government officials were addressed as “His Highmightiness.” He proposed that the President be called some version of “His Excellency” or “His Majesty.” A Senate committee went further, reporting a proposal that the President should be addressed as “His Highness the President of the United States of America and the Protector of the Rights of the Same.” James Madison and many others raised objections about the monarchical tone, and, fortunately, the House refused to approve. For his diligent efforts in this matter, Adams was the target of many jocular “titles.” Senator Ralph Izard of South Carolina referred to the short, plump Adams as “His Rotundity,” and that biting remark stuck.

Despite some policy successes, including the build-up of the Navy, Adams’ single term as President was marked by foreign relations turmoil, such as the naval war with France, and domestic missteps, such as the Alien and Sedition Acts. Adams saw the office as a chore, and avoided his duties at a rate higher than any other occupant of the office. Samuel Eliot Morison relates that, in four years, Adams stayed away for 385 days, returning to his farm in Quincy, Massachusetts.

The Adamses’ sojourns at their farm reflected a deep connection to their New England roots. In the 1770s and 1780s, there was probably no single American who was as influential in the overall development of revolutionary and constitutional theory as John Adams. His thoughts often reflected an enlightened Puritanism. During the Revolutionary War, Adams was a diligent and successful administrator. He was an ally and confidant of General George Washington, although, typical of the lack of mutual understanding among the elites from different colonies, Adams did not trust Washington unreservedly. Several times during and after the War, he was selected to undertake important diplomatic tasks. In the words of Benjamin Franklin, Adams was “always honest, often great, sometimes mad.”

Adams was an attorney. He had already made a name for himself, but still took a great professional risk, when he and two other attorneys defended a British officer and eight soldiers accused of murder in the “Boston Massacre” of March 5, 1770. After numerous provocations, and in fear of their safety, the soldiers had fired on a violent mob of colonials, five of whom were killed. The officer was tried for murder seven months later, the soldiers a couple of months after that. All were acquitted of the capital murder charges, although two soldiers were convicted of manslaughter. The trial produced one of Adams’ well-known quotations, “Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passion, they cannot alter the state of facts and evidence.”

Adams’ stature as a member of the radical faction against the British helped him, as well as the soldiers, with the jury. So did his family connections. His cousin Samuel Adams was of similarly militant inclination against the British. Both cousins were trained in classical history and political theory. Both were skilled debaters, though neither was a particularly compelling orator. But John was the more intellectual “office” type, while cousin Sam was the more hands-on troublemaker. John wrote resolves, treatises, and constitutions, while Sam focused on organizing protests and riots, writing proclamations, and distributing outlandish propaganda.

John Adams had become involved in the political struggle that would culminate in American independence during the controversy over the writs of assistance that the British used to combat smugglers who sought to avoid the Sugar Act import duties. Writs of assistance were general search warrants whose open-ended nature the colonials saw as violations of their rights as Englishmen. James Otis, Jr., was hired to challenge these writs in Paxton’s Case in 1761.

Otis gave a long and forceful argument that the act authorizing these writs was void, because, “An act against the Constitution is void; an act against natural equity is void.” This was a novel assertion in English law. It challenged the supremacy of Parliament, and, contrary to long-established English constitutional custom, suggested that the courts could refuse to apply such an act to controversies before them. Otis lost his case. Still, his argument provided the germ for the gradual development of basic principles of American constitutional law about the relationship between constitutions and ordinary laws, and about the role of an independent judiciary. As to the writs of assistance, five years later, the British attorney general agreed with Otis about their invalidity. Today, they are prohibited under the Fourth Amendment of the Constitution.

Adams was well-acquainted with Otis and was in the audience at the trial. He was much impressed with the argument, which clearly influenced his later views of balanced government and his drafting of the Massachusetts constitution. Adams also promoted Otis as a leading patriot voice. Both joined in opposing Parliament’s next measure, the Revenue Act of 1764. The colonial assemblies objected that such involuntary taxes were invalid, a sentiment that eventually was captured in the slogan coined by Otis, “No taxation without representation is tyranny.”

In the disputes leading to the Declaration of Independence, Adams emerged as a prominent political theorist for the cause. His work Novanglus, of February 6, 1775, rejected Parliament’s control over the colonies. Adams instead claimed that the colonies and Great Britain were separate states, united only through the person of the king in a dominion status similar to that of England and Scotland. Based on the American theory of representation, and the practical obstacles to American representation in Parliament, such as physical distance, the colonial assemblies governed the colonies, while Parliament governed Great Britain. In an apparent contradiction to this argument, he did allow that Parliament could be in charge of foreign policy and trade, but analogized this to a commercial treaty approved by the Americans explicitly or by custom, rather than an inherent power.

An important part of Adams’s theory in the Novanglus essay was that the colonies, separately and in union, had their own constitutions that were not subject to alteration by Parliament. There appeared the influence of Otis’ earlier arguments that distinguished between Parliament’s legislative powers and constitutional limits thereon. In separate publications, James Wilson and Thomas Jefferson, future signers of the Declaration, reached the same conclusions. All rejected the “empire theory,” under which Parliament exercised control over all parts. These three were part of the “radicals” who also opposed the First Continental Congress’ Declaration of Rights and Grievances adopted on October 14, 1774. Congress there had accepted Parliament’s inherent power over the colonies’ external commerce, while rejecting that body’s authority over other matters, such as revenue. Adams adamantly rejected the moderate federal structure that the Congress’ Declaration of Rights embraced. Instead, as he wrote in Novanglus, “I agree, that ‘two supreme and independent authorities cannot exist in the same state,’ any more than two supreme beings in one universe; And, therefore, I contend, that our provincial legislatures are the only supreme authorities in our colonies.”

As the drive to revolution became unstoppable, and the Second Continental Congress declared the colonial charters void, Adams wrote a letter to George Wythe of Virginia, which provided a written plan of government to be considered by that state. The letter eventually was published by Richard Henry Lee of Virginia as Thoughts on Government, and its influence on the Virginia convention’s work was evident to Adams’ contemporaries, and to Adams himself. As he wrote to James Warren, on June 16, 1776, “But I am amazed to find an Inclination So prevalent throughout all the southern and middle Colonies to adopt Plans, so nearly resembling, that in the Thoughts on Government.”

At the same time, the Second Continental Congress appointed Adams to the committee to propose a declaration of independence. The initial drafting task fell to his friend and future political rival, Thomas Jefferson. Jefferson proposed that Adams write the declaration, but Adams demurred. It is said that Adams justified his refusal by telling Jefferson, “Reason first: You are a Virginian and a Virginian ought to appear at the head of this business. Reason second: I am obnoxious, suspected and unpopular. You are very much otherwise. Reason third: You can write ten times better than I can.”

With the war under way, Adams continued to serve in the Continental Congress. He, along with Benjamin Franklin and Edward Rutledge, composed a delegation sent to discuss a political accommodation with the British after a disastrous American military defeat on Long Island. The conference was requested by Admiral Lord Richard Howe, the supreme commander of British forces in North America, and his brother General William Howe, the commander-in-chief of the British land forces. The Howe brothers were Whigs and not unsympathetic to the American cause. Nevertheless, nothing came of the conference, and, as loyal officers of the king, the Howes turned to their job of settling the matter militarily.

The condition of the American army was deplorable, from a dearth of supplies and a lack of training and discipline. Adams was appointed head of the Board of War, the analog to the Secretary of Defense today. He immediately pressed Congress to accede to General Washington’s requests to maintain the army. Adams proposed that an enlistee who joined for the duration of the war be given $20 plus 100 acres of land. To maintain discipline, punishments for various offenses were raised. For example, drunkenness on duty became punishable by 100 lashes instead of 39. The number of crimes subject to the death penalty was increased, as well. However, these Articles of War, written by Adams and based on their British counterpart, also provided proper procedures for the accused. Finally, Adams proposed creation of a military academy for better military training for officers, but nothing came of that until after the war.

Adams initially opposed alliance with France, but the desperate state of the American quest for independence eventually caused him to change his mind. As the war wound to a successful conclusion, Adams arrived in Paris as part of the five-member American delegation. Because several members, including Adams, distrusted the French diplomats, the Americans on November 30, 1782, made a separate preliminary treaty with Great Britain. It took nearly a year for the French and British to agree to their own terms, and peace was finally achieved on September 3, 1783.

Adams, who was an Anglophile by family roots and political philosophy, quickly wished to reestablish close commercial and diplomatic ties with Great Britain after the war. He became the first American minister to London in 1785. When he was received by George III, he hoped that “the old good nature and the old good humor” between the two countries would be rekindled. The king was willing, but the government was not. Efforts to enter a commercial treaty failed, due in part to the weakness of the Congress under the Articles of Confederation. The foreign department dismissively suggested that the states send delegations, instead. Adams left the post in 1788, frustrated and disappointed.

In addition to his numerous administrative and diplomatic duties, Adams continued to lead on another political issue, that of drafting constitutions and developing theoretical foundations for them. His principal success was the Massachusetts Constitution of 1780. The people of the state had rejected a constitution proposed by the legislature in 1778. Like other “first wave” state constitutions of the 1770s, that version had mixed different powers, vested primary power in the legislature, and contained no bill of rights.

Adams, like most of the era’s contributors to American constitutional developments, had read the classic ancient political writers, such as Plato, Aristotle, and Polybius, as well as more recent ones, such as Locke and Montesquieu, in their original languages. Adams, cousin Sam Adams, and James Bowdoin were selected by the Massachusetts convention in 1779 to draft a constitution to be submitted to the people. The two other members left the task to Adams.

The completed work, The Report of a Constitution, provided several cornerstones for future American constitutionalism. He proposed a government whose structure was more balanced among three independent branches than the legislature-centric state constitutions rushed out by the state legislatures during the drive to independence in the mid-1770s. Indeed, Article XXX of the Declaration of Rights in Adams’s constitution offered an almost cartoonish version of an unyielding separation of powers. The Declaration also enumerated a long list of rights the legislature was prohibited from infringing. Finally, influenced by The Essex Result, a petition written by Theophilus Parsons against the proposed constitution of 1778, this new constitution was produced by a convention selected solely for that purpose, rather than by a legislative committee. Moreover, it was approved by town meetings, rather than by the legislature itself. This distinction between the function and status of ordinary legislatures and constitutional conventions became a critical catalyst in the development of American constitutional theory going forward and in the emergence of the judiciary’s power of constitutional review.

Adams’s creation influenced the next wave of state constitutions, as well as the drafters of the United States Constitution in 1787. Though substantially amended since then, the Massachusetts constitution is the oldest still in effect today.

The final work of Adams about constitutions, and perhaps his most comprehensive, was A Defence of the Constitutions of Government of the United States of America, written in three volumes over the course of a little more than a year beginning in 1786. It was a response to criticism by Baron Anne Robert Jacques Turgot, a French government official, of the emerging systems of separation of powers in the American state constitutions. Turgot and others dismissed those constitutions as just the British structure with a republican gloss. Governors who were independent of the legislatures mimicked the king, and bicameral legislatures the British Parliament, with the senates taking the role of the House of Lords. The criticism stung, as Adams himself had drafted such a “mixed government” for Massachusetts.

Defence takes the form of a series of letters as if written by a traveler around Europe. At the time, Adams was the American minister to the English court. His focus became writing, his diplomatic obligations taking a subsidiary role. Summoning his vast knowledge of history and political theory acquired through diligent research, he examined numerous republican constitutions from antiquity forwards. He aimed to expose the weaknesses of the democratic structures and “pure” systems of government favored by Turgot. History, the record of human experience, not ideology, was the sole reliable guide for Adams. Only balanced governments had survived the test of time, a lesson applied to the young American republics.

Like Aristotle and Polybius, Adams feared that pure forms, especially democracies, were unstable and inevitably led to tyranny, because of man’s lust for power due to his fallen nature. Classic republics fared little better, because they, too, relied on human virtue to sustain them. Adams doubted that Americans possessed sufficient virtue, though strong government direction through support of religion and morality might have a positive influence. In early 1776, he wrote that there was “so much Venality and Corruption, so much Avarice and Ambition, such a Rage for Profit and Commerce among all Ranks and Degrees of Men even in America” that put in question whether Americans had “public Virtue enough to support a Republic.” In contrast, much later he would say “Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.” In between, his defense of the American state constitutions was founded on the practical recognition that virtue is not enough to ensure liberty.

Adams was not at the Philadelphia convention, but the first volume of Defence was well-known to many of the participants. Though Adams was criticized by some for what they saw as an abandonment of militant republicanism, the framers of the Constitution adopted a similar system. The “mixed government” of the Massachusetts Constitution of 1780 became the system of “checks and balances” of the United States Constitution which would augment reliance on the people’s virtue in sustaining liberty. As Madison wrote in The Federalist No. 51, to preserve liberty while allowing government to function, “A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.”


Podcast by Maureen Quinn.





Essay 26 – Guest Essayist: Joerg Knipprath

“He has affected to render the Military independent of and superior to the Civil power.”

It was an article of faith among English and American advocates of classic republicanism of the 18th century that the military must be subject to civilian control. In the United States Constitution, that faith is manifested expressly in the President’s role as commander-in-chief of the armed forces, including of the states’ militias when called into service of the United States. Moreover, the President, with the consent of the Senate, appoints military officers. In addition, at least five clauses of Article I, Section 8, of the Constitution assign to Congress various roles in controlling the armed forces of the United States and the states’ militias. One of those, prohibiting appropriations of funds for a term longer than two years, was seen by the framers as a cornerstone of control over the military. James Madison went so far as to claim in The Federalist No. 41: “Next to the effectual establishment of the union, the best possible precaution against danger from standing armies, is a limitation of the term for which revenue may be appropriated to their support.”

A similar spirit was manifested in the Articles of Confederation. Article IX of that document gave to Congress the power to appoint the high-level officers of the land forces in the service of the “united states” and all officers of the naval forces. Congress also would make the rules and regulations for those armed forces and direct their operations.

It was the asserted refusal of the British to subordinate their military forces in the colonies to civilian control that created one of the points of conflict leading to the American revolution. Both the Virginia Constitution of 1776 and the Declaration of Independence of the thirteen “united states” denounced the king’s “affect[ing] to render the Military independent of and superior to, the Civil power.” This was not in fact the case in Great Britain itself. The king and Parliament retained control of the military. Moreover, as opponents of the Constitution of 1787 pointed out later, military appropriations by Parliament were limited to a single year, even tighter than the proposed American restriction.

Therefore, the complaint was not against English constitutional custom regarding the relationship between the civil and military authorities, which was, in fact, quite republican in nature. The last time that the military in England was not under civilian control had been during the dictatorship of Lieutenant-General Oliver Cromwell in the 1650s. Instead, the charge against George III arose out of the Americans’ experience with the British treatment of the colonial governments, particularly the events in Massachusetts Bay.

As early as 1765, the Quartering Act required any colony in which British troops were stationed to supply them with provisions and lodging. If lodging in barracks was unavailable, the soldiers might be housed in certain private buildings, typically in inns and establishments that sold alcohol. As a last resort, the troops were to be housed in other, unoccupied private buildings. The colonists saw this as a form of taxation to which they had not consented through their assemblies. Moreover, this act appeared to presage the stationing of a standing peacetime army on American soil, another abomination in the eyes of conscientious republicans.

The Act was put to the test in New York. In 1766, the colony’s assembly, which had acted under its own quartering law until the beginning of 1764, refused to comply with the Act. With the 1,500 troops in New York City obliged to remain on their cramped ships, Parliament voted to suspend the assembly in 1767, though no concrete action was taken to enforce the suspension. In 1768, the assembly agreed to provide the funds demanded by the British for supplies for the troops, except the expenses for beer and rum. The Secretary of State for the Colonies, Lord Hillsborough, acting on another vote by Parliament in 1769, thereupon suspended the assembly from further meetings. Once more, no further concrete action was taken, perhaps because a newly-elected assembly soon voted the full requisition.

The events of the mid-1770s brought about increasingly stern reactions from Parliament. The Boston Tea Party, in particular, was a catalyst for British resolve to bring the colonists to heel. The Boston Port Act of 1774 required the city to pay for the tea and for losses to British officials in the Boston riots. Until those obligations were satisfied, the port was sealed off to trade. The Act was enforced by British warships and several regiments of troops. More pointedly, the commander-in-chief of British forces in North America, General Thomas Gage, was also appointed governor.

Gage replaced Thomas Hutchinson, a prominent local businessman and published historian. Hutchinson had deep family roots in New England, and his appointment was in line with emerging British policy to appoint reliable locals to these executive positions. Like many Loyalists, Hutchinson was torn between those family roots and his loyalty to the Crown. Attacked by both sides as too closely aligned with the other, his attempt to steer a middle course failed. Much of the blame was undeserved, but at a time when the utmost political sensibility and skill were required, Hutchinson too often was tone-deaf. Sam Adams and the other radicals blamed him for, well, pretty much everything. In turn, Lord North, the prime minister, blamed him for the deteriorating political situation in Massachusetts, which led to the appointment of General Gage. In another ironic twist, Gage eventually was removed from his offices, because the British thought him to be too lenient and sympathetic to the colonials.

The Massachusetts Government Act of May 20, 1774, altered the governing charter of Massachusetts Bay. Henceforth, the governor would appoint the council, which was previously elected by the colonial assembly. He also would appoint all lower court judges and nominate judges of the superior courts. Further, no town could call a town meeting more than once per year without the governor’s consent. In effect, this put both the judicial and legislative functions under more direct control of Gage, who, as noted, was the military commander.

Finally, Parliament passed the Quartering Act of June 2, 1774. This allowed the governor to order troops to be housed in private buildings without legislative authorization. From the British perspective, this was a reasonable imposition. It was to be used if no funds were appropriated by the colonial assemblies to find other quarters for the British soldiers, who had been forced to camp out on Boston Common for a long period. Recent historical research has determined that the Act, like its predecessors, only permitted quartering of troops in unoccupied buildings.

The locals, however, were convinced that the Act allowed troops to be housed in occupied homes. To them, this was yet another outrage against their liberties and a violation of what they saw as their ancient rights of Englishmen. After all, both the English Petition of Right of 1628 and the Declaration of Rights of 1689 had listed quartering of soldiers in homes without the consent of the owners or authorization by law among the grievances against the Stuart kings, Charles I and James II, respectively. It is no surprise then that, on independence, Article XXVII of the Massachusetts constitution of 1780 declared: “In time of peace no soldier ought to be quartered in any house without the consent of the owner; and in time of war such quarters ought not to be made but by the civil magistrate, in a manner ordained by the legislature.” At the time, “ought” meant a duty owed and was analogous to “must.” The Third Amendment to the Constitution contains an almost verbatim restriction.

The formal subordination of the military to the civil power remains today. In addition to the constitutional sections that deal with such subordination, an additional provision seeks to maintain at least a separation of the two. Article I, Section 6, of the Constitution prohibits anyone “holding any office under the United States [from being] a member of either house during his continuance in office.” Although the matter is not resolved, it appears from a decision of the Court of Appeals for the Armed Forces, United States v. Lane, that a member of Congress could not serve as an appellate military judge. Senator Lindsey Graham was a member of the U.S. Air Force Standby Reserve, as well as a Senator, when he was appointed to serve as a military judge. The court held that a military judge was an officer of the United States, and that the “Incompatibility Clause” disqualified Graham.

However, the Lane court refused to address whether or not all service or status in the military reserve disqualified one from being a member of Congress. Presumably being an active member of the military would do so for various reasons, constitutional and practical. However, members of Congress have been officers in the reserves while simultaneously serving in their legislative capacity. Finally, the subordination principle does not apply to former military officers or to service in a non-legislative capacity, at least so long as the person is subject to removal by the president and civilian control over the military is retained.


Podcast By Maureen Quinn



Essay 20 - Guest Essayist: Joerg Knipprath

“He has refused for a long time, after such dissolutions, to cause others to be elected; whereby the Legislative powers, incapable of Annihilation, have returned to the People at large for their exercise; the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within.”

When Thomas Jefferson accused George III, in the Declaration of Independence, of having refused for a long time to permit elections for previously-dissolved colonial legislatures, he had several examples for reference. As early as 1768, Governor Sir Francis Bernard dissolved the Massachusetts assembly on the order of Lord Hillsborough, the Secretary of State for the Colonies, after the assembly had circulated a letter to the other colonial assemblies about the constitutional defects of the Townshend Revenue Acts. This effectively left Massachusetts without a legislature for a year.

A year before, in 1767, the British government had ordered the New York assembly suspended when it refused to comply with the Quartering Act of 1765. As a result, New York was without a government for most of 1767 to 1769, until an election in the fall of 1769 produced a more pliant assembly.

In October, 1774, after Parliament had adopted the Massachusetts Government Act earlier that year, General Thomas Gage, the governor, dissolved the colony’s assembly. The Act had several parts that struck against the colony’s self-government. It repealed the Massachusetts Bay Charter of 1691, made the hitherto elected council appointive by the governor, and prohibited town meetings more than once per year unless the governor consented. The Act also made other provincial offices, including many judgeships, appointive rather than elective, and those officers could be removed at any time by the governor. To add insult to injury, the first governor selected, General Gage, was also the military commander. This move placed the military authorities in charge of civil government.

From the British perspective, the Act was necessary to curb the radical tendencies of this most radical province. Unfortunately for the British, their political tactics failed in Massachusetts and likely hurt their overall strategy of both pacifying the colonies and advancing their new model of imperial administration. Instead, the Americans simply circumvented the restrictions by electing an ultra vires provincial congress, which met at Concord, elected John Hancock president, organized an administration, voted taxes, collected arms, drilled a militia, and operated the courts. This assemblage governed Massachusetts until the state’s constitution of 1780 was approved. The colony effectively was independent, and the royal governor’s authority was restricted to the city of Boston.

Similar events transpired in other colonies. In Virginia, the royal governor dissolved the House of Burgesses in May, 1774. Led by Patrick Henry and Thomas Jefferson, a rump portion of that assembly called for elections to a provincial congress to meet in Williamsburg on August 1. By the end of 1774, all colonies except Georgia, Pennsylvania and New York had followed suit. Those three fell in line the following year. So, while Jefferson’s charge in the Declaration of Independence was historically correct, the dissolutions of colonial assemblies about which he complained also quickly became irrelevant as a matter of practical government. If anything, those actions by the king and Parliament did not impede self-government; they made it more profound.

The English king long had the power to prorogue (that is, “suspend”) or dissolve Parliament and rule by decree. Charles I had used it to prevent Parliament from meeting for years. As the constitutional position of Parliament strengthened against the king in the 17th and 18th centuries, that power had to be used judiciously, if at all. One of the political missteps by James II that led to the Glorious Revolution of 1688 was his dissolution of Parliament after that body had refused to repeal the pro-Protestant Test Acts.

For the Americans, this authority to prorogue or dissolve legislative bodies and to delay elections was a threat to the independence of their assemblies, the principal protectors of liberty, and distorted the emerging conception of a functional separation of powers. Thus, Article X of the Virginia constitution of 1776 prohibited the governor from proroguing or dissolving the legislature. The Massachusetts constitution of 1780 carefully limited these powers to specified circumstances. The New York constitution was similar. The U.S. Constitution of 1787 goes further: it grants the president only a limited power to adjourn Congress, and no power to prorogue or dissolve that body.

Jefferson’s observation that “the Legislative powers, incapable of Annihilation, have returned to the people at large for their exercise …” makes two points. First, it postulates that lawmaking, that is, the power to make rules that govern human actions, always exists. That power might be in Parliament, in the assemblies, the king, or the people as a whole. When the king declared the colonies in rebellion on August 23, 1775; when Parliament enacted the Prohibitory Act on December 22, 1775, which declared the colonies outside British protection, blockaded colonial ports, and made all colonial vessels lawful prizes subject to capture; and when the local assemblies were dissolved by the British authorities, the existing constitutional system had been abandoned. The actions of the Continental Congress and of the several former colonies separately in declaring independence and taking control of their fate by setting up new constitutional arrangements were the inevitable result. After all, this was no different, in the eyes of Americans, than Parliament’s own actions in 1688-89 during the Glorious Revolution. Then, James II had abandoned the throne, which allowed Parliament to assume basic constitutional powers and create a new political order.

Second, the observation reflects Jefferson’s reading of John Locke and other social contract theorists. The British government’s abandonment of its constitutional relationship with the colonies had breached the contract on which the political commonwealth was based. Thus, the people were placed in a new “pre-political” condition. In this stage, each individual was sovereign over his or her own affairs. The legislative power had not been annihilated, but rested within each individual for himself or herself. As anticipated by the social contract theorists and reflected in the Declaration of Independence itself, these individuals would establish new forms of government in order better to secure their God-given inalienable rights to life, liberty, and the pursuit of happiness. By the consent of the governed, the legislative power would then be exercised by the people collectively as in a democracy, or, more likely, by an assembly elected by the people as in a republic.

That the British actions, especially those of King George, amounted to a breach of contract was bolstered by the function of royal charters in the constitutional status and political operation of the colonies. Those charters gave certain powers of self-government to the Americans through their elected assemblies and established the constitutional rights and obligations of all parties, including the king. Moreover, the general neglect of colonial affairs by the government in London over more than a century had accreted various political powers to the local assemblies through repeated practice that reflected a gradual evolution of constitutional custom. By ignoring those arrangements or, more blatantly, revoking them, as had happened in 1774 to Massachusetts Bay’s Charter of 1691, the king and Parliament had breached those contracts. In turn, the Americans were relieved of further obligations to abide by those arrangements, although, curiously, Connecticut and Rhode Island continued to use their royal charters, with appropriate modifications, as their state constitutions into the 19th century.

Jefferson’s complaint that “the State remaining in the mean time exposed to all the dangers of invasion from without, and convulsions within,” seems disingenuous, coming from the American side. After all, the “convulsions within” typically were the products of provocateurs such as the Sons of Liberty or of colonial mobs incited by the rhetoric and actions of those provocateurs. The Boston Tea Party, the Boston Massacre, the Gaspee affair, and assorted other riots and acts of sabotage and unadulterated insurrection were deliberate actions by the Americans. The British responses, often ham-handed, might inflame tensions further, but they were reactive.

Nevertheless, Jefferson had a point. The principal purpose of government is to provide security against external and internal threats to the peace of the community. Whatever merit there is in today’s common perception that government is an indulgent parent that provides food, shelter and health care for all, if a government fails to fulfill the classic obligation of providing security, it will fall. In the Lockean social contract formulation, government is formed to secure one’s rights in one’s person and estate better than would exist otherwise. In Thomas Hobbes’s more pessimistic view of the human condition, security by any means is the be-all and end-all of government. Under either conception, failure to carry out that obligation is a breach of the social contract.

That same understanding of the core purpose of government is found in the Constitution. As John Jay wrote in The Federalist No. 3,

“Among the many objects to which a wise and free people find it necessary to draw their attention, that of providing for their safety seems to be the first…. At present I mean only to consider it as it respects security for the preservation of peace and tranquillity, as well against dangers, from foreign arms and influence, as against dangers arising from domestic causes.” [Emphasis in the original.]

Indeed, the adoption of the Constitution itself, in a manner contrary to the Articles of Confederation, was defended by James Madison in The Federalist No. 43 in language reminiscent of the Declaration of Independence,

“by recurring to the absolute necessity of the case; to the great principle of self-preservation; to the transcendent law of nature and of nature’s God, which declares that the safety and happiness of society, are the objects to which all political institutions aim, and to which all such institutions must be sacrificed.”

The Constitution itself grants broad war powers to the president and Congress, along with the power of Congress to provide for “calling forth the militia to execute the laws of the union, suppress insurrections and repel invasions.” The president, as commander in chief of the armed forces, as well as of the militia when called into service of the United States, is also authorized to protect the security of the people from foreign invasion and domestic causes. As needed, courts have interpreted those powers expansively. True, Americans pay at least lip service to the idea that even those governmental powers are limited in some way by the Constitution, and courts have held that no formally distinct “Emergency” or “War” Constitution exists. Reality, however, is harsher. Jefferson himself, as well as Abraham Lincoln, and any number of politicians and judges have consistently recognized the paramount principle of self-preservation and security of the society, to which, in the end, all other considerations will be subordinated. This calculation is pithily expressed in the aphorism, “The Constitution is not a suicide pact.”

The British government failed to carry out that fundamental obligation of assuring peace and domestic tranquillity, either by resolute military action or, preferably, by deft political maneuvering to adjust the constitutional order to accommodate the major American grievances and halt the drift towards full separation. It does not matter which side gets the credit or blame for specific events or particular political steps. The constituted government has legitimacy to govern only if it satisfies the reason for which it is formed. Failure to do so forfeits that government’s legitimacy, and the people will seek to establish another by any means available to them, even a replacement of the entire constitutional order by revolution.



Essay 6 - Guest Essayist: Joerg Knipprath

On June 7, 1776, delegate Richard Henry Lee of Virginia rose to move in the Second Continental Congress, “That these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved….” The motion was not immediately considered, because four states, lacking instructions from their assemblies, were not prepared to vote. Nevertheless, Congress appointed a committee of five to prepare a declaration of independence. The committee, composed of Benjamin Franklin, John Adams, Roger Sherman, Robert R. Livingston, and Thomas Jefferson, assigned the task of preparing the initial draft to Jefferson.

After numerous revisions by Adams and Franklin and, eventually, by Congress itself, the final draft and report were presented to Congress on July 2, 1776. Formal adoption of the Declaration had to await a vote on Lee’s motion for independence. That was approved by the states the same day, with only the New York delegation abstaining. After a few more minor changes, the Declaration was adopted on July 4, 1776. Copies were sent to the states the next day, and it was publicly read from the balcony at Independence Hall on the 8th. Finally, on August 2nd, the document was signed.

General Washington, at New York, received a copy and a letter from John Hancock. The next day, July 9, Washington had the Declaration read to his troops. While those troops responded with great enthusiasm for the cause, reaction elsewhere to the Declaration was divided, to say the least. Supporters of independence were aware of the momentousness of the occasion. As Washington’s commander of artillery, Henry Knox, wrote, “The eyes of all America are upon us. As we play our part posterity will bless or curse us.” Others were less impressed. The anti-independence leader in Congress, John Dickinson, dismissed it as a “skiff made of paper.”

The Declaration’s preamble embraced four themes fundamental to Western political philosophy in the 17th and 18th centuries: Natural law and rights, popular sovereignty exercised through the consent of the governed, the compact basis of the legitimate state, and the right of revolution.

The idea of a universal moral law, obligatory on earthly rulers and to which human law must conform, went back at least to the Stoics nearly two millennia prior, and indirectly even to Aristotle’s conception of natural justice. Cicero, among Roman writers, and the Christian Aristotelian Thomas Aquinas, among medieval Scholastics, postulated the existence of a natural order directed by universal laws. Humans were part of this order created by God and governed by physical laws. More important for these writers was the divinely-ordained universal moral law, in which humans participated through their reason and their ability to express complex abstract concepts. By virtue of its universality and its moral essence, this natural law imposed moral obligations on all, ruler and ruled alike. All were created equal, and all were equal before God and God’s law. Viewed from a metaphysical and practical perspective, these obligations provided the best path to individual flourishing within a harmonious social order in a manner that reflected both the inherent value of each person and man’s nature as a social creature. The need to meet these universal obligations of the natural moral law necessarily then gave rise to certain universal rights that all humans had by nature.

However, the shattering of universal Christendom in the West, with its concomitant shattering of the idea of a universal moral law and of a political order based thereon, changed the conception of natural law, natural rights and the ethical state. No longer was it man’s reason that must guide his actions and his institutions, including government and law, for the purpose of realizing the ends of this order. Rather, in the emerging modernity, there was a “turn to the subject” and, in the words of the ancient Greek pre-Socratic philosopher Protagoras, “man [became] the measure of all things.”

Political legitimacy and, thereby, the basis for political and legal obligation came to rest on individual acts of will. The most prominent foundation for this ethical structure was the construct of the “social contract” or “social compact.” “Natural law” became deracinated of its moral content and was reduced to describing the rules which applied in a fictional state of nature in which humans lived prior to the secular creation of a political commonwealth, in contrast to the civil law that arose after that creation. Natural rights were those that sovereign individuals enjoyed while in the state of nature, in contrast to civil rights, such as voting, which were created only within a political society.

Although expositors of the social contract theory appeared from the 16th to the 18th centuries, and came from several European cultures, the most influential for the American founding were various English and colonial philosophers and clergymen. Most prominent among them was John Locke.

Locke’s version of the state of nature is not as bleak and hostile as was that of his predecessor Thomas Hobbes. Nor, however, is it a romanticized secular Garden of Eden as posited by Jean-Jacques Rousseau, writing a century later. For Locke, existence in the state of nature allows for basic social arrangements to develop, such as the family, economic relationships, and religious congregations. However, despite Locke’s general skepticism about the Aristotelian epistemology then still dominant at the English universities, he agreed with the ancient sage that human flourishing best proceeds within a political commonwealth. Accordingly, sovereign individuals enter into a compact with each other to leave the state of nature and to surrender some of their natural rights in order to make themselves and their estates more secure. They agree to arbitrate their disputes by recourse to a judge, and to be governed by civil law made by a legislator and enforced by an executive. Under a second contract, those sovereign individuals collectively then convey those powers of government to specified others in trust to be exercised for the benefit of the people.

Thus, the political commonwealth is a human creation and derives its legitimacy through the consent of those it governs. This act of human free will is unmoored from some external order or the command of God. For Hobbes, the suspected atheist, human will was motivated to act out of fear.

Locke allows for much greater involvement by God, in that God gave man a nature that “put him under strong Obligations of Necessity, Convenience, and Inclination to drive him into Society, ….” Moreover, the natural rights of humans derive from the inherent dignity bestowed on humans as God’s creation. The human will still acts out of self-interest, but the contract is a much more deliberate and circumscribed bargain than Hobbes’s adhesion contract. For Locke, the government’s powers are limited to achieve the purposes for which it was established, and nothing more. With Hobbes, the individual only retained his inviolate natural right to life. With Locke, the individual retains his natural rights to liberty and property, as well as his right to life, all subject to only those limitations that make the possession of those same rights by all more secure. Any law that is inimical to those objectives and tramples on those retained rights is not true law.

There remained the delicate issue of what to do if the government breaches its trust by passing laws or otherwise acting in a manner that make people less secure in their persons or estates. Among private individuals, such a breach of fiduciary duty by a trustee would result in a court invalidating the breach, ordering fitting compensation, and, perhaps, removing the trustee. If the government breached such a duty, recourse to the English courts was unavailable, since, at least as to such constitutional matters, the courts had no remedial powers against the king or Parliament.

Petitions to redress grievances were tried-and-true tools in English constitutional theory and history. But what if those petitions repeatedly fell on deaf ears? One might elect other members of the government. But, what if one could not vote for such members and, consequently, was not represented therein? What if, further, the executive authority was not subject to election? A private party may repudiate a contract if the other side fails to perform the material part of the bargain. Is there a similar remedy to void the social contract with the government and place oneself again in a state of nature? More pointedly, do the people collectively retain a right of revolution to replace a usurping government?

This was the very situation in which many Americans and their leaders imagined themselves to be in 1776. Previous writers had been very circumscribed about recognizing a right of revolution. Various rationales were urged against such a right. Thomas Aquinas might cite religious reasons, but there was also the very practical medieval concern about stability in a rough political environment where societal security and survival were not to be assumed. Thomas Hobbes could not countenance such a right, as it would return all to the horrid state of nature, where life once again would be “solitary, poor, nasty, brutish, and short.” Moreover, as someone who had experienced the English Civil War and the regicide of Charles I, albeit from his sanctuary in France, and who was fully aware of the bloodletting during the contemporaneous Thirty Years’ War, revolution was to be avoided at all costs.

Locke was more receptive than Hobbes to some vague right of revolution, though one not to be exercised in response to trivial or temporary infractions. Left unclear was exactly who were the people to exercise such a right, and how many of them were needed to legitimize the undertaking. Locke wrote at the time of the Glorious Revolution of 1688. His main relevant work, the Second Treatise of Civil Government, was published in 1689, though some scholars believe that it was written earlier. The Catholic king, James II, had been in a political and religious struggle with Parliament and the Church of England. When Parliament invited the stadholder (the chief executive) of the United Netherlands to bring an army to England to settle matters in its favor, James eventually fled to France.

Parliament declared the throne vacant, issued a Declaration of Rights and offered the throne to William and his wife, Mary. In essence, by James’s flight, the people of England had returned to an extra-political state of nature where they, through the Parliament, could form a new social contract.

The American Revolution and Jefferson’s writings in the Declaration of Independence follow a similar progression. When King George declared the colonies to be in rebellion on August 23, 1775, and Parliament passed the Prohibitory Act in December of that year, they had effectively placed the colonies outside the protection of the law and into a state of nature. At least that was the perception of the colonists. Whatever political bands once had existed were no more. In that state of nature, the Americans were free to reconstitute political societies on the basis of a social contract they chose.

That project occurred organically at the state level. Massachusetts had been operating as an independent entity since the royal governor, General Thomas Gage, had dissolved the General Court of the colony in June, 1774. That action led to the extra-constitutional election by the residents of a provincial congress in October. Thereafter, it was this assemblage that effectively governed the colony. The other colonies followed suit in short order.

In Virginia, a similar process occurred in the summer of 1774. It culminated two years later in the “Declaration of Rights and the Constitution or Form of Government,” begun by a convention of delegates on May 6, 1776, and formally approved in two stages the following month. The initial document was a motley combination of a plan of government, a declaration of independence, and a collection of enumerated rights and high-sounding political propositions. In the part regarding independence, the accusations against King George are remarkably similar, often verbatim, precursors to Jefferson’s language in the Declaration of Independence of the “united States” two months later. George Mason, whom Jefferson praised as the “wisest man of his generation,” was the principal author. Still, it may have been Jefferson himself who proposed this language through the drafts he submitted to the Virginia convention.

Both documents, the Virginia declaration and the Declaration of Independence, cite as a reason for “dissolv[ing] the Political Bands” that the king had abandoned the government by declaring the Americans out of his protection. George III, like James II a century before, had breached the social contract and forced a return to an extra-political state of nature. The Declaration of Independence merely formalized what had already occurred on the ground. With those bands broken, the next step, that of forming a new government, already taken by Virginia and other states, now lay before the “united States.”



Essay 4 - Guest Essayist: Joerg Knipprath

There are two recognized types of war, war between nations (“international war”) and war within a nation (“civil war”). In a civil war, some portion of the inhabitants forcibly seeks political change. The goal often is to replace the existing constitutional government with their own by taking over the entire structure or by separating themselves and seeking independence from their current compatriots.

A civil war may be an insurrection or a rebellion, the stages being distinguished by a rebellion’s higher degree of organization of military forces, creation of a formal political apparatus, greater popular participation, and more sophistication and openness of military operations. By those measures, the American effort began as an insurrection during the localized, brief, and poorly organized eruptions in the 1760s and early 1770s. Various petitions, speeches, and resolves opposing the Revenue Act, the Stamp Act, the Quartering Act, and others, were reactive, not strategic. Even circular letters among colonial governments for unified action, such as that by the Massachusetts assembly in February, 1768, against the Townshend Acts, or hesitant steps toward union, such as the Stamp Act Congress of 1765, were of that nature. Much rhetoric was consumed along with impressive quantities of Madeira wine, but tactical successes were soon superseded by the next controversy.

In a similar vein, local bands of the Sons of Liberty, the middle-class groups of rabble-rousers that emerged in 1765, fortified in their numbers by wharf-rats and other layabouts, might destroy property, intimidate and assault royal officials, and harass locals seen as insufficiently committed to opposing an often-contrived outrage du jour. They might incite and participate in violent encounters with the British authorities. But, while they engaged in melodramatic and, to some Americans, satisfying political theater, they were no rebel force. Moreover, the political goals were limited, focused on repeal or, at least, non-enforceability of this or that act of Parliament.

Yet, those efforts, despite their limited immediate successes, triggered discussions of constitutional theory and provided organizational experience. In that manner, they laid the groundwork that, eventually, made independence possible, even if no one could know that and few desired it. Gradually, the vague line between insurrection and rebellion was crossed. The consequences of the skirmishes at Lexington and Concord have made it clear, in retrospect, that, by the spring of 1775, a rebellion was under way.

The Second Continental Congress met on May 10, 1775, and, in contrast to its predecessor, did not adjourn after concluding a limited agenda. Rather, it began to act as a government of a self-regarding political entity, including control over an organized armed force and a navy. Congress sent diplomatic agents abroad, took control over relations with the Indian tribes, and sent a military force under Benedict Arnold north against the British to “assist” Canada to join the American coalition. It appointed George Washington as commander-in-chief of the “Army of the United Colonies.” That army, and other forces, achieved several tactical military successes against the British during 1775 and early 1776, although the Canadian expedition narrowly failed.

Still, something was lacking. The scope of the effort was not matched by an equally ambitious goal. The end was not in focus. Certainly, repeal of the Coercive Acts, which had been enacted in the spring of 1774, urgently needed to be achieved. Those acts had closed the port of Boston, brought the government of Massachusetts under more direct royal control by eliminating elected legislative offices, and authorized the peacetime quartering of troops in private homes. These laws appeared reasonable from the British perspective. Thus, the Quartering Act was intended to alleviate the dire conditions of British soldiers who were forced to sleep on Boston Common. The Administration of Justice Act was to ensure, in part, fair trials for British officials and soldiers accused of murder, as had happened in 1770 in the “Boston Massacre.” At the same time, though these acts were limited to Massachusetts, many colonists feared that a similar program awaited them. These laws were so despised that they were collectively known to Americans also as the “Intolerable Acts.”

Was there to be more? In unity lay strength, and the Second Continental Congress was tasked with working out an answer. But Congress was more follower than leader, as delegates had to wait for instructions from their colonial assemblies. That meant the process was driven by the sentiments of the people in the colonies, and the Tory residents of New York thought differently than the Whigs of beleaguered Massachusetts. Within each colony, sentiments, quite naturally, also varied. The more radical the potential end, the less likely people were to support it. Even as late as that spring of 1775, there existed no clear national identity as “American.” People still considered themselves part of the British Empire. The rights that they claimed were denied them by the government in London were the “ancient rights of Englishmen.” The official American flag, used by the armed forces until June, 1777, was composed of the thirteen red and white stripes familiar to us in its field, but its canton was the British Union Jack. Without irony, Congress’s military operations were made in the name of the king. General Washington was still toasting the king each night at the officers’ mess in Cambridge while besieging the British forces in Boston.

The gentlemen who met in Philadelphia came from the colonial elite, as would be expected. But they were also distinguished in sagacity and learning, more so than one has come to expect from today’s Congress, drawn from a much larger population. Almost none favored independence. The few who did, the Adams cousins from Massachusetts, Sam and John; the Lees of Virginia, Francis Lightfoot and Richard Henry; Benjamin Franklin of Pennsylvania; and Christopher Gadsden of South Carolina, the “Sam Adams of the South” as he came to be known, kept their views under wraps. Instead, the goal initially appeared to be some sort of conciliation within a new constitutional relationship of yet-to-be-determined form. Many delegates had also served in the First Continental Congress, dedicated to sending remonstrances and petitions. Georgia, on the other hand, had not sent delegates to the First, so its delegation consisted entirely of four novices. Peyton Randolph of Virginia was chosen president, as he had been of the First Continental Congress. He was soon replaced by John Hancock when Randolph had to return to Virginia because of his duties as Speaker of the House of Burgesses.

One person missing from the assemblage was Joseph Galloway of Pennsylvania. He had attended the First Continental Congress, where he had drafted a plan of union between the colonies and Britain. Parliament would control foreign affairs and external trade. As to internal colonial affairs, Parliament and a new American parliament would each effectively have veto power over the acts of the other. His plan would have recognized a degree of colonial sovereignty, but within the British system. It was rejected by one vote, six colonies to five, because the Suffolk Resolves, a more confrontational proposal recently adopted by the towns around Boston, had outflanked it politically. Congress instead endorsed the Resolves and voted to expunge Galloway’s plan from the record. Still, his proposal was a prototype for the future federal structure between the states and the general government under the Articles of Confederation. Repulsed by what he saw as the increasing radicalism of the various assemblies, Galloway maintained his allegiance to the king. By 1778, he was living in London and advising the British government.

Congress sought to thread the needle between protecting the Americans from intrusive British laws and engaging in sedition and treason. In constitutional terms, it meant maintaining a balance between the current state of submission to a Parliament and a ministry in which they saw themselves as unrepresented, and the de facto revolution developing on the ground. The first effort, by John Dickinson of Pennsylvania and Thomas Jefferson of Virginia, was the “Declaration of the Causes and Necessity of Taking Up Arms.” It declared, “We mean not to dissolve that union which has so long and so happily subsisted between us…. We have not raised armies with ambitious designs of separation from Great Britain, and establishing independent States.” Then why the effort? “[W]e are reduced to the alternative of choosing an unconditional submission to the tyranny of irritated ministers, or resistance by force. The latter is our choice.” Note the problem: not the king, not even Parliament, but “irritated ministers.” The path to resolution of the conflict, it seemed, was to appeal to the king himself, who, it was surmised, must have been kept in the dark about the dire state of affairs of his loyal colonial subjects by his ministers’ perfidy.

On July 8, 1775, Congress adopted the “Olive Branch Petition,” also drafted by John Dickinson. That gentleman, a well-respected constitutional lawyer, member of the First Continental Congress, and eventual principal drafter of the Articles of Confederation in 1777, wanted to leave no diplomatic stone unturned to avoid a breach with Great Britain. The historian Samuel Eliot Morison relates remarks attributed to John Adams about the supposed reasons for Dickinson’s caution. According to Adams, “His (Dickinson’s) mother said to him, ‘Johnny you will be hanged, your estate will be forfeited and confiscated, you will leave your excellent wife a widow, and your charming children orphans, beggars, and infamous.’ From my Soul, I pitied Mr. Dickinson…. I was very happy that my Mother and my Wife…and all her near relations, as well as mine, had been uniformly of my Mind, so that I always enjoyed perfect Peace at home.” A new topic of study thus presents itself to historians of the era: the effect of a statesman’s domestic affairs on his view of national affairs.

The Petition appealed to the king to help stop the war, repeal the Coercive Acts, restore the prior “harmony between [Great Britain] and these colonies,” and establish “a concord…between them upon so firm a basis as to perpetuate its blessing ….” Almost all who signed the later Declaration of Independence signed the Petition, largely to placate Dickinson and, for some, to justify more vigorous future measures. As feared by many, and hoped by some, on arrival in London, the American agents were told that the king would not receive a petition from rebels.

British politicians were as unsure and divided about moving forward as their American counterparts in Congress. But George III could rest assured of the support of his people, judging by the 60,000 who lined the route of his carriage from St. James Palace to the Palace of Westminster on the occasion of his speech to both houses for the opening of Parliament on October 26, 1775. The twenty-minute speech, delivered in a strong voice, provides a sharp counterpoint to the future American Declaration of Independence. Outraged by the attempted invasion of Canada, a peaceful and loyal colony, the king already on August 23 had declared that an open rebellion existed.

He now affirmed and elaborated on that proclamation. Leaders in America were traitors who in a “desperate conspiracy” had inflamed people through “gross misrepresentation.” They were feigning loyalty to the Crown while preparing for rebellion. Now came the bill of particulars against the Americans: “They have raised troops, and are collecting a naval force. They have seized the public revenue, and assumed to themselves legislative, executive, and judicial powers, which they already exercise in the most arbitrary manner…. And although many of these unhappy people may still retain their loyalty…the torrent of violence [by the Americans] has been strong enough to compel their acquiescence till a sufficient force shall appear to support them.”

Despite these provocations, he and the Parliament had acted with moderation, he assured his audience, and he was “anxious to prevent, if it had been possible, the effusion of the blood of my subjects, and the calamities which are inseparable from a state of war.” Nevertheless, he was determined to defend the colonies which the British nation had “encouraged with many commercial advantages, and protected and defended at much expense of blood and treasure.” He bemoaned in personal sorrow the baleful effects of the rebellion on his faithful subjects, but promised to “receive the misled with tenderness and mercy,” once they had come to their senses. Showing that his political sense was more acute than that of many Americans, as well as many members of Parliament, the king charged that the true intent of the rebels was to create an “independent empire.”

Two months later, Parliament followed the king’s declaration with an act to prohibit all commerce with the colonies and to make all colonial vessels subject to seizure as lawful prizes, with their crews subject to impressment into the Royal Navy.

The king’s speech was less well-received in the colonies, and it gave the radicals an opportunity to press their case that the king himself was at the center of the actions against the Americans. It was critical to the radicals’ efforts towards independence that the natural affinity for the king that almost all Americans shared with their countrymen in the motherland be sundered. Some snippets about the king’s character from the historian David McCullough illustrate why George III was popular. After ascending the throne in 1760 at age 22, “he remained a man of simple tastes and few pretensions. He liked plain food and drank but little, and wine only. Defying fashion, he refused to wear a wig…. And in notable contrast to much of fashionable society and the Court, … the king remained steadfastly faithful to his very plain Queen, with whom [he ultimately would produce fifteen children].” Recent depictions of him as unattractive, dull, and insane are far off the mark. He was tall, well above average in looks for the time, and good-natured. By the 1770s, he was sufficiently skilled in the political arts to wield his patronage power to the advantage of himself and his political allies. One must not forget that, but a decade earlier, colonial governments had voted to erect statues in his honor. It was the very affability of George III and his appeal as a sort of “people’s king” that made it imperative for Jefferson to portray him in the Declaration of Independence as the ruthless and calculating tyrant he was not.

Between November, 1775, and January, 1776, New York, New Jersey, Pennsylvania, and Maryland still explicitly instructed their delegates to vote against independence. But events soon overtook the fitfulness of the state assemblies and Congress. Parliament’s actions, once they became known, left no room for conciliation. The colonies effectively had been declared into outlawry and, in Lockean terms, reverted to a “state of nature” in relation to the British government. The struggles in the colonial assemblies between moderates who had pressed for negotiation and radicals who pushed for independence now tilted clearly in favor of the latter.

Yet before news of Parliament’s actions reached the colonies, another event proved to be even more of a catalyst for the shift from conciliation to independence. In January, 1776, Thomas Paine, an English corset maker brought to Pennsylvania by Benjamin Franklin, published, anonymously, a pamphlet titled “Common Sense.” Paine ridiculed monarchy and denounced George III as a particularly despicable example. The work’s unadorned but stirring prose, short length, and simplistically propagandistic approach to political systems made it a best seller that delivered an electric jolt to the public debate. The extent to which it influenced the deliberations of Congress is unclear, however.

The irresolution of the Congress, it must be noted, was mirrored by the fumblings of Parliament. The Americans had many friends for their cause in London, even including various ministries, some of which nevertheless were reviled in the colonies. This had been the case beginning the prior decade, when American objections to a particular act of Parliament resulted in repeal of the act, only to be followed by another that the Americans found unacceptable, whereupon the dance continued. Still, the overall trend had been to tighten the reins on the colonies. But that did not deter Edmund Burke, a solid—but at times exasperated—supporter of the Americans, from introducing a proposal for reconciliation in Parliament in November, 1775. Unfortunately, it was voted down. Others, including Adam Smith and Lord Barrington, the secretary at war, urged that all British troops be removed and that the Americans be allowed to determine whether, and under what terms, they wished to remain in union with Britain.

Other proposals for a revised union were debated in Parliament even after the Americans declared independence. These proposals resembled the dominion structure that the British, having learned their lesson too late, provided for many of their colonies and dependencies in subsequent generations. The last of these, the Conciliatory Bill, which actually was passed on February 17, 1778, gave the Americans more than they had demanded in 1775. Too late. The American alliance with France made peace impossible. Had those proposals, allowing significant control by the colonists over local affairs, been adopted in a timely manner, the independence drive may well have stalled even in 1776. Even Adams, Jefferson, and other radicals of those earlier years had urged a dominion structure, whereby the Americans would have controlled their own affairs but would have remained connected to Britain through the person of the king. The quote attributed to the former Israeli Foreign Minister Abba Eban about the Arabs of our time might well have applied to the British of the 1770s: “[They] never miss[ed] an opportunity to miss an opportunity.”

Reflecting the shifting attitudes in the assemblies, and responding to the seemingly inexorable move to independence by the states, the Second Continental Congress also bent to the inevitable. The Virginia House of Burgesses on May 15, 1776, appointed a committee to draft a constitution for an independent Commonwealth, and directed its delegates in Congress to vote for independence. Other states followed suit. Finally, Richard Henry Lee moved in Congress, “That these United Colonies are, and of right ought to be, Independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.” The die was cast.

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow.

Podcast by Maureen Quinn. 





Guest Essayist: Joerg Knipprath

On March 23, 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (“ACA”), sometimes casually referred to as “Obamacare,” a sobriquet that Obama himself embraced in 2013. The ACA ran to 900 pages and contained hundreds of provisions. The law was so opaque and convoluted that legislators, bureaucrats, and Obama himself at times were unclear about its scope. For example, the main goal of the law was presented as providing health insurance to all Americans who previously were unable to obtain it due to, among other factors, lack of money or pre-existing health conditions. The law did increase the number of individuals covered by insurance, but stopped well short of universal coverage. Several of its unworkable or unpopular provisions were delayed by executive order. Others were subject to litigation to straighten out conflicting requirements. The ACA represented a probably not-yet-final step in the massive bureaucratization of health insurance and care over the past several decades, as health care moved from a private arrangement to a government-subsidized “right.”

The law achieved its objectives to the extent it did by expanding Medicaid eligibility to higher income levels and by significantly restructuring the “individual” policy market. In other matters, the ACA sought to control costs by further reducing Medicare reimbursements to doctors, which had the unsurprising consequence that Medicare patients found it still more difficult to get medical care, and by levying excise taxes on medical devices, drug manufacturers, health insurance providers, and high-benefit “Cadillac plans” set up by employers. The last of these was postponed and, along with most of the other taxes, repealed in December, 2019. On the whole, existing employer plans and plans under collective-bargaining agreements were only minimally affected. Insurers had to cover defined “essential health services,” whether or not the purchaser wanted or needed those services. As a result, certain basic health plans that focused on “catastrophic events” coverage were deemed substandard and could no longer be offered. Hence, while coverage expanded, many people also found that the new, permitted plans cost them more than their prior coverage. They also found that the reality did not match Obama’s promise, “if you like your health care plan, you can keep your health care plan.”

The ACA required insurance companies to “accept all comers.” This policy would have the predictable effect that healthy (mostly young) people would forego purchasing insurance until a condition arose that required expensive treatment. That, in turn, would devastate the insurance market. Imagine being able to buy a fire policy to cover damage that had already arisen from a fire. Such policies would not be issued. Private, non-employer, health insurance plans potentially would disappear. Some commentators opined that this was exactly the end the reformers sought, at least secretly, so as to shift to a single-payer system, in other words, to “Medicare for all.” The ACA sought to address that problem by imposing an “individual mandate.” Unless exempt from the mandate, as were illegal immigrants and 25-year-olds covered under their parents’ policy, every person had to purchase insurance through their employer or individually from an insurer through one of the “exchanges.” Barring that, the person had to pay a penalty, to be collected by the IRS.

There have been numerous legal challenges to the ACA. Perhaps the most significant constitutional challenge was decided by the Supreme Court in 2012 in National Federation of Independent Business v. Sebelius (NFIB). There, the Court addressed the constitutionality of the individual mandate under Congress’s commerce and taxing powers, and of the Medicaid expansion under Congress’s spending power. These two provisions were deemed the keys to the success of the entire project.

Before the Court could address the case’s merits, it had to rule that the petitioners had standing to bring their constitutional claim. The hurdle was the Anti-Injunction Act. That law prohibited courts from issuing an injunction against the collection of any tax, in order to prevent litigation from obstructing tax collection. Instead, a party must pay the tax and sue for a refund to test the tax’s constitutionality. The issue turned on whether the individual mandate was a tax or a penalty. Chief Justice John Roberts concluded that Congress had described this “shared responsibility payment” if one did not purchase qualified health insurance as a “penalty,” not a “tax.” Roberts noted that other parts of the ACA imposed taxes, so that Congress’s decision to apply a different label was significant. Left out of the opinion was the reason that Congress made what was initially labeled a “tax” into a “penalty” in the ACA’s final version, namely, Democrats’ sensitivity about Republican allegations that the proposed bill raised taxes on Americans.

Having confirmed the petitioners’ standing, Roberts proceeded to the substantive merits of the challenge to the ACA. The government argued that the health insurance market (and health care, more generally) was a national market in which everyone would participate, sooner or later. While this is a likely event, it is by no means a necessary one, as a person might never seek medical services. If, for whatever reason, people did not have suitable insurance, the government claimed, they might not be able to pay for those services. Because hospitals are legally obligated to provide some services regardless of the patient’s ability to pay, hospitals would pass along their uncompensated costs to insured patients, whose insurance companies in turn would charge those patients higher premiums. The ACA’s broadened insurance coverage and “guaranteed-issue” requirements, subsidized by the minimum insurance coverage requirement, would ameliorate this cost-shifting. Moreover, the related individual mandate was “necessary and proper” to deal with the potential distortion of the market that would come from younger, healthier people opting not to purchase insurance as sought by the ACA.

Of course, Congress could pass laws under the Necessary and Proper Clause only to further its other enumerated powers, hence, the need to invoke the Commerce Clause. The government relied on the long-established, but still controversial, precedent of Wickard v. Filburn. In that 1942 case, the Court upheld a federal penalty imposed on farmer Filburn for growing wheat for home consumption in excess of his allotment under the Second Agricultural Adjustment Act. Even though Filburn’s total production was an infinitesimally small portion of the nearly one billion bushels grown in the U.S. at that time, the Court concluded, tautologically, that the aggregate of production by all farmers had a substantial effect on the wheat market. Thus, since Congress could act on overall production, it could reach all aspects of it, even marginal producers such as Filburn. The government claimed that the ACA’s individual mandate was analogous. Even if one healthy individual’s failure to buy insurance would scarcely affect the health insurance market, a large number of such individuals and of “free riders” failing to get insurance until after a medical need arose would, in the aggregate, have such a substantial effect.

Roberts, in effect writing for himself and the formally dissenting justices on that issue, disagreed. He emphasized that Congress has only limited, enumerated powers, at least in theory. Further, Congress might enact laws needed to exercise those powers. However, such laws must not only be necessary, but also proper. In other words, they must not themselves seek to achieve objectives not permitted under the enumerated powers. As opinions in earlier cases had done, going back to Chief Justice John Marshall’s in Gibbons v. Ogden, Roberts emphasized that the enumeration of congressional powers in the Constitution meant that there were some things Congress could not reach.

As to the Commerce Clause itself, the Chief Justice noted that Congress previously had only used that power to control activities in which parties first had chosen to engage. Here, however, Congress sought to compel people to act who were not then engaged in commercial activity. However broad Congress’s power to regulate interstate commerce had become over the years with the Court’s acquiescence, this was a step too far. If Congress could use the Commerce Clause to compel people to enter the market of health insurance, there was no other product or service Congress could not force on the American people.

This obstacle had caused the humorous episode at oral argument where the Chief Justice inquired whether the government could require people to buy broccoli. The government urged, to no avail, that health insurance was unique, in that people buying broccoli would have to pay the grocer before they received their wares, whereas hospitals might have to provide services and never get paid. Of course, the only reason hospitals might not get paid is because state and federal laws require them to provide certain services up front, and there is no reason why laws might not be adopted in the future that require grocers to supply people with basic “healthy” foods, regardless of ability to pay. Roberts also acknowledged that, from an economist’s perspective, choosing not to participate in a market may affect that market as much as choosing to participate. After all, both reflect demand, and a boycott has economic effects just as a purchasing fad does. However, to preserve essential constitutional structures, sometimes lines must be drawn that reflect considerations other than pure economic policy.

The Chief Justice was not done, however. Having rejected the Commerce Clause as support for the ACA, he embraced Congress’s taxing power, instead. If the individual mandate was a tax, it would be upheld because Congress’s power to tax was broad and applied to individuals, assets, and income of any sort, not just to activities, as long as its purpose or effect was to raise revenue. On the other hand, if the individual mandate was a “penalty,” it could not be upheld under the taxing power, but had to be justified as a necessary and proper means to accomplish another enumerated power, such as the Commerce Clause. Of course, that path had been blocked in the preceding part of the opinion. Hence, everything rested on the individual mandate being a “tax.”

At first glance it appeared that this avenue also was a dead end, due to Roberts’s decision that the individual mandate was not a tax for the purpose of the Anti-Injunction Act. On closer analysis, however, the Chief Justice concluded that something can be both a tax and not be a tax, seemingly violating the non-contradiction principle. Roberts sought to escape this logical trap by distinguishing what Congress can declare as a matter of statutory interpretation and meaning from what exists in constitutional reality. Presumably, Congress can define that, for the purpose of a particular federal law, 2+2=5 and the Moon is made of green cheese. In applying a statute’s terms, the courts are bound by Congress’s will, however contrary that may be to reason and ordinary reality.

However, when the question before a court is the meaning of an undefined term in the Constitution, an “originalist” judge will attempt to discern the commonly-understood meaning of that term when the Constitution was adopted, subject possibly to evolution of that understanding through long-adhered-to judicial, legislative, and executive usage. Here, Roberts applied factors the Court had developed beginning in Bailey v. Drexel Furniture Co. in 1922. Those factors compelled the conclusion that the individual mandate was, functionally, a tax. Particularly significant for Roberts was that the ACA limited the payment to less than the price for insurance, and that it was administered by the IRS through the normal channels of tax collection. Further, because the tax would raise substantial revenue, its ancillary purpose of expanding insurance coverage was of no constitutional consequence. Taxes often affect behavior, understood in the old adage that, if the government taxes something, it gets less of it.

Roberts’s analysis reads as the constitutional law analogue to quantum mechanics and the paradox of Schroedinger’s Cat, in that the individual mandate is both a tax and a penalty until it is observed by the Chief Justice. His opinion has produced much mirth—and frustration—among commentators, and there were inconvenient facts in the ACA itself. The mandate was in the ACA’s operative provisions, not its revenue provisions, and Congress referred to the mandate as a “penalty” eighteen times in the ACA. Still, he has a valid, if not unassailable, point. A policy that has the characteristics associated with a tax ordinarily is a tax. If Congress nevertheless consciously chooses to designate it as a penalty, then for the limited purpose of assessing the policy’s connection to another statute which carefully uses a different term, here the Anti-Injunction Act, the blame for any absurdity lies with Congress.

The Medicaid expansion under the ACA was struck down. Under the Constitution, Congress may spend funds, subject to certain ill-defined limits. One of those is that the expenditure must be for the “general welfare.” Under classic republican theory, this meant that Congress could spend the revenue collected from the people of the several states on projects that would benefit the United States as a whole, not some constituent part, or an individual or private entity. It was under that conception of “general welfare” that President Grover Cleveland in 1887 vetoed a bill that appropriated $10,000 to purchase seeds to be distributed to Texas farmers hurt by a devastating drought. Since then, the phrase has been diluted to mean anything that Congress deems beneficial to the country, however remotely.

Moreover, while principles of federalism prohibit Congress from compelling states to enact federal policy—known as the “anti-commandeering” doctrine—Congress can provide incentives to states through conditional grants of federal funds. As long as the conditions are clear, relevant to the purpose of the grant, and not “coercive,” states are free to accept the funds with the conditions or to reject them. Thus, Congress can try to achieve indirectly through the spending power what it could not require directly. For example, Congress cannot, as of now, direct states to teach a certain curriculum in their schools. However, Congress can provide funds to states that teach certain subjects, defined in those grants, in their schools. The key issue usually is whether the condition effectively coerces the states to submit to the federal financial blandishment. If so, the conditional grant is unconstitutional because it reduces the states to mere satrapies of the federal government rather than quasi-sovereigns in our federal system.

In what was a judicial first, Roberts found that the ACA unconstitutionally coerced the states into accepting the federal grants. Critical to that conclusion was that a state’s failure to accept the ACA’s expansion of Medicaid would result not just in the state being ineligible to receive federal funds for the new coverage. Rather, the state would lose all of its existing Medicaid funding. As well, here the program affected—Medicaid—accounted for over 20% of the typical state’s budget. Roberts described this as “economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.” Roberts noted that the budgetary impact on a state from rejecting the expansion dwarfed anything triggered by a refusal to accept federal funds under previous conditional grants.

One peculiarity of the opinions in NFIB was the stylistic juxtaposition of Roberts’s opinion for the Court and the principal dissent, penned by Justice Antonin Scalia. Roberts at one point uses “I” to defend a point of law he makes, which is common in dissents or concurrences, instead of the typical “we” or “the Court” used by a majority. By contrast, Scalia consistently uses “we” (such as “We conclude that [the ACA is unconstitutional]” and “We now consider respondent’s second challenge….”), although that might be explained because he wrote for four justices, Anthony Kennedy, Clarence Thomas, Samuel Alito, and himself. He also refers to Justice Ruth Bader Ginsburg’s opinion broadly as “the dissent.” Most significant, Scalia’s entire opinion reads like that of a majority. He surveys the relevant constitutional doctrines more magisterially than does the Chief Justice, even where he and Roberts agree, something that dissents do not ordinarily do. He repeatedly and in detail criticizes the government’s arguments and the “friend-of-the-court” briefs that support the government, tactics commonly used by the majority opinion writer.

These oddities have provoked much speculation, chiefly that Roberts initially joined Scalia’s opinion, which would have made it the majority opinion, but got cold feet. Rumor spread that Justice Anthony Kennedy had attempted until shortly before the decision was announced to persuade Roberts to rejoin the Scalia group. Once that proved fruitless, it was too late to make anything but cosmetic changes to Scalia’s opinion for the four now-dissenters. Only the justices know what actually happened, but the scenario seems plausible.

Why would Roberts do this? Had Scalia’s opinion prevailed, the ACA would have been struck down in its entirety. That would have placed the Court in a difficult position, especially during an election year, having exploded what President Obama considered his signature achievement. The President already had a fractious relationship with the Supreme Court and earlier had made what some interpreted as veiled political threats against the Court over the case. Roberts’s “switch in time” blunted that. The chief justice is at most primus inter pares, having no greater formal powers than his associates. But he is often the public and political figurehead of the Court. Historically, chief justices have been more “political” in the sense of being finely attuned to maintaining the institutional vitality of the Court. John Marshall, William Howard Taft, and Charles Evans Hughes especially come to mind. Associate justices can be jurisprudential purists, often through dissents, to a degree a chief justice cannot.

Choosing his path allowed Roberts to uphold the ACA in part, while striking jurisprudential blows against the previously constant expansion of the federal commerce and spending powers. Even as to the taxing power, which he used to uphold that part of the ACA, Roberts planted a constitutional land mine. Should the mandate ever be made truly effective, that is, should Congress raise the payment above the price of insurance, the “tax” argument would fail and a future court could strike it down as an unconstitutional penalty. Similarly, if the tax were repealed, as eventually happened, and the mandate were no longer supported under the taxing power, it could threaten the entire ACA.

After NFIB, attempts to modify or eliminate the ACA through legislation or litigation continued, with mixed success. Noteworthy is that the tax payment for the individual mandate was repealed in 2017. This has produced a new challenge to the ACA as a whole, because the mandate is, as the government conceded in earlier arguments, a crucial element of the whole health insurance structure. The constitutional question is whether the mandate is severable from the rest of the ACA. The district court held that the mandate was no longer a tax and, thus, under NFIB, was unconstitutional. Further, because of the significance that Congress attached to the mandate for the vitality of the ACA, the mandate could not be severed from the ACA, and the entire law was unconstitutional. The Fifth Circuit agreed that the mandate is unconstitutional, but disagreed about the extent to which that affects the rest of the ACA. The Supreme Court will hear the issue in its 2020-2021 term in California v. Texas.

On the political side, the American public seems to support the ACA overall, although, or perhaps because, it has been made much more modest than its proponents had planned. So, the law, somewhat belatedly and less boldly, achieved a key goal of President Obama’s agenda. That success came at a stunning political cost to the President’s party, however. The Democrats hemorrhaged over 1,000 federal and state legislative seats during Obama’s tenure. In 2010 alone, they lost a historic 63 House seats, the biggest mid-term election rout since 1938, plus 6 Senate seats. The moderate “blue-dog” Democrats who had been crucial to the passage of the ACA were particularly hard hit. Whatever the ACA’s fate turns out to be in the courts, the ultimate resolution of controversial social issues remains with the people, not lawyers and judges.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Joerg Knipprath
President Nixon Farewell Speech to White House Staff, August 9, 1974

On Thursday, August 8, 1974, a somber Richard Nixon addressed the American people in a 16-minute televised speech to announce that he was planning to resign from the Presidency of the United States. He expressed regret over mistakes he had made concerning the break-in at the Democratic Party offices at the Watergate complex and the aftermath of that event. He further expressed the hope that his resignation would begin to heal the political divisions the matter had exacerbated. The next day, having resigned, he boarded a helicopter and, with his family, left Washington, D.C.

Nixon had won the 1972 election against Senator George McGovern of South Dakota with over 60% of the popular vote and an electoral vote of 520-17 (one vote having gone to a third candidate). Yet less than two years after what was one of the most overwhelming victories in American elections, Nixon was politically dead. Nixon has been described as a tragic figure, in a literary sense, due to his struggle to rise to the height of political power, only to be undone when he had achieved the pinnacle of success. The cause of this astounding change of fortune has been much debated. It resulted from a confluence of factors, political, historical, and personal.

Nixon was an extraordinarily complex man. He was highly intelligent, even brilliant, yet was the perennial striver seeking to overcome, by unrelenting work, his perceived limitations. He was an accomplished politician with a keen understanding of political issues, yet socially awkward and personally insecure. He was perceived as the ultimate insider, yet, despite his efforts, was always somehow outside the “establishment,” from his school days to his years in the White House. Alienated from the social and political elites, who saw him as an arriviste, he emphasized his marginally middle-class roots and tied his political career to that “silent majority.” He could arouse intense loyalty among his supporters, yet equally intense fury among his opponents. Nixon infamously kept an “enemies list,” the only surprise of which is that it was so incomplete. Though seen by the Left as an operative of what is today colloquially called the “Deep State,” he rightly mistrusted the bureaucracy and its departments and agencies, and preferred to rely on White House staff and hand-picked loyal individuals. Caricatured as an anti-Communist ideologue and would-be right-wing dictator, Nixon was a consummately pragmatic politician who was seen by many supporters of Senator Barry Goldwater and Governor Ronald Reagan as insufficiently in line with their world view.

The Watergate burglary and attempted bugging of the Democratic Party offices in June, 1972, and investigations by the FBI and the General Accounting Office (now the Government Accountability Office) that autumn into campaign finance irregularities by the Committee to Re-Elect the President (given the unfortunate acronym CREEP by Nixon’s opponents) initially had no impact on Nixon and his comprehensive political victory. In January, 1973, the trial of the operatives before federal judge John Sirica in Washington, D.C., revealed possible White House involvement. This piqued the interest of the press, never Nixon’s friends. These revelations, now spread before the public, caused the Democratic Senate majority to appoint a select committee under Senator Sam Ervin of North Carolina for further investigation. Pursuant to an arrangement with Senate Democrats, Attorney General Elliot Richardson named Democrat Archibald Cox, a Harvard law professor and former Kennedy administration solicitor general, as special prosecutor.

Cox’s efforts uncovered a series of missteps by Nixon, as well as actions that were viewed as more seriously corrupt and potentially criminal. Some of these sound rather tame by today’s standards. Others are more problematic. Among the former were allegations that Nixon had falsely backdated a gift of presidential papers to the National Archives to get a tax credit, not unlike Bill Clinton’s generously-overestimated gift of three pairs of his underwear in 1986 for an itemized charitable tax deduction. Another was that he was inexplicably careless in preparing his tax return. Given the many retroactively amended tax returns and campaign finance forms filed by politicians, such as the Clintons and their eponymous foundations, this, too, seems of slight import. More significant was the allegation that he had used the Internal Revenue Service to attack political enemies. Nixon certainly considered that, although it is not shown that any such actions were undertaken. Another serious charge was that Nixon had set up a secret structure to engage in political intelligence and espionage.

The keystone to the impeachment effort was the discovery of a secret taping system in the Oval Office that showed that Nixon had participated in a cover-up of the burglary and obstructed the investigation. Nixon, always self-reflective and sensitive to his position in history, had set up the system to provide a clear record of conversations within the Oval Office for his anticipated post-Presidency memoirs. It proved to be his downfall. When Cox became aware of the system, he sought a subpoena to obtain nine of the tapes in July, 1973. Nixon refused, citing executive privilege relating to confidential communications. That strategy had worked when the Senate had demanded the tapes; Judge Sirica had agreed with Nixon. But Judge Sirica rejected that argument when Cox sought the information, a decision upheld 5-2 by the federal Circuit Court for the District of Columbia.

Nixon then offered to give Cox authenticated summaries of the nine tapes. Cox refused. After a further clash between the President and the special prosecutor, Nixon ordered Attorney General Richardson to remove Cox. Both Richardson and Deputy Attorney General William Ruckelshaus refused and resigned. However, by agreement between these two and Solicitor General Robert Bork, Cox was removed by Bork in his new capacity as Acting Attorney General. It was well within Nixon’s constitutional powers as head of the unitary executive to fire his subordinates. But what the President is constitutionally authorized to do is not the same as what the President politically should do. The reaction of the political, academic, and media elites to the “Saturday Night Massacre” was overwhelmingly negative, and precipitated the first serious effort at impeaching Nixon.

A new special prosecutor, Democrat Leon Jaworski, was appointed by Bork in consultation with Congress. The agreement among the three parties was that, though Jaworski would operate within the Justice Department, he could not be removed except for specified causes and with notification to Congress. Jaworski also was specifically authorized to contest in court any claim of executive privilege. When Jaworski again sought various specific tapes, and Nixon again claimed executive privilege, Jaworski eventually took the case to the Supreme Court. On July 24, 1974, Chief Justice Warren Burger’s opinion in the 8-0 decision in United States v. Nixon (William Rehnquist, a Nixon appointee who had worked in the White House, had recused himself) overrode the executive privilege claim. The justices also rejected the argument that this was a political intra-branch dispute between the President and a subordinate that rendered the matter non-justiciable, that is, beyond the competence of the federal courts.

At the same time, in July, 1974, with bipartisan support, the House Judiciary Committee voted out three articles of impeachment. Article I charged obstruction of justice regarding the Watergate burglary. Article II charged him with violating the Constitutional rights of citizens and “contravening the laws governing agencies of the executive branch,” which dealt with Nixon’s alleged attempted misuse of the IRS, and with his misuse of the FBI and CIA. Article III charged Nixon with ignoring congressional subpoenas, which sounds remarkably like an attempt to obstruct Congress, a dubious ground for impeachment. Two other proposed articles were rejected. When the Supreme Court ordered Nixon to release the tapes, the tape of June 23, 1972, showed obstruction of justice: the President had instructed his staff to use the CIA to end the Watergate investigation. The tape was released on August 5. Nixon was then visited by a delegation of Republican Representatives and Senators who informed him of the near-certainty of impeachment by the House and of his extremely tenuous position to avoid conviction by the Senate. The situation having become politically hopeless, Nixon resigned, making his resignation formal on Friday, August 9, 1974.

The Watergate affair produced several constitutional controversies. First, the Supreme Court addressed executive privilege to withhold confidential information. Nixon’s opponents had claimed that the executive lacked such a privilege because the Constitution did not address it, unlike the privilege against self-incrimination. Relying on consistent historical practice going back to the Washington administration, the Court found instead that such a privilege is inherent in the separation of powers and necessary to protect the President in exercising the executive power and others granted under Article II of the Constitution. However, unless the matter involves state secrets, that privilege could be overridden by a court, if warranted in a criminal case, and the “presumptively privileged” information ordered released. While the Court did not directly consider the matter, other courts have agreed with Judge Sirica that, based on long practice, the privilege will be upheld if Congress seeks such confidential information. The matter then is a political question, not one for courts to address at all.

Another controversy arose over the President’s long-recognized power to fire executive branch subordinates without restriction by Congress. This is essential to the President’s position as head of the executive branch. For example, the President has inherent constitutional authority to fire ambassadors as Barack Obama and Donald Trump did, or to remove U.S. Attorneys, as Bill Clinton and George W. Bush did. Jaworski’s appointment under the agreement not to remove him except for specified cause interfered with that power, yet the Court upheld that limitation in the Nixon case.

After Watergate, in 1978, Congress passed the Ethics in Government Act that provided a broad statutory basis for the appointment of special prosecutors outside the normal structure of the Justice Department. Such prosecutors, too, could not be removed except for specified causes. In Morrison v. Olson, in 1988, the Supreme Court, by 7-1, upheld this incursion on executive independence over the lone dissent of Justice Antonin Scalia. At least as to inferior executive officers, which the Court found special prosecutors to be, Congress could limit the President’s power to remove, as long as the limitation did not interfere unduly with the President’s control over the executive branch. The opinion, by Chief Justice Rehnquist, was in many ways risible from a constitutional perspective, but it upheld a law that became the starting point for a number of highly-partisan and politically-motivated investigations into actions taken by Presidents Ronald Reagan, George H.W. Bush, and Bill Clinton, and by their subordinates. Only once the last of these Presidents was being subjected to such oversight did opposition to the law become sufficiently bipartisan to prevent its reenactment.

The impeachment proceeding itself rekindled the debate over the meaning of the substantive grounds for such an extraordinary interference with the democratic process. While treason is defined in the Constitution and bribery is an old and well-litigated criminal law concept, the third basis, of “high crimes and misdemeanors,” is open to considerable latitude of meaning. One view, taken by defenders of the official under investigation, is that this phrase requires conduct amounting to a crime, an “indictable offense.” The position of the party pursuing impeachment, Republican or Democrat, has been that this phrase more broadly includes unfitness for office and reaches conduct which is not formally criminal but which shows gross corruption or a threat to the constitutional order. The Framers’ understanding appears to have been closer to the latter, although the much greater number and scope of criminal laws today may have narrowed the difference. However, what the Framers considered sufficiently serious impeachable corruption likely was more substantial than what has been proffered recently. They were acutely aware of the potential for merely political retaliation and similar partisan mischief that a low standard for impeachment would produce. These and other questions surrounding the rather sparse impeachment provisions in the Constitution have not been resolved. They continue to be, foremost, political matters addressed on a case-by-case basis, as demonstrated over the past twelve months.

As has been often observed, Nixon’s predicament was not entirely of his own making. In one sense, he was the victim of political trends that signified a reaction against what had come to be termed the “Imperial Presidency.” It had long been part of the progressive political faith that there was “nothing to fear but fear itself” as far as broadly exercised executive power, as long as the presidential tribune using “a pen and a phone” was subject to free elections. Actions routinely done by Presidents such as Franklin Roosevelt, Harry Truman, and Nixon’s predecessor, Lyndon Johnson, now became evidence of executive overreach. For example, those presidents, as well as others going back to at least Thomas Jefferson, had impounded appropriated funds, often to maintain fiscal discipline over profligate Congresses. Nixon claimed that his constitutional duty “to take care that the laws be faithfully executed” was also a power that allowed him to exercise discretion as to which laws to enforce, not just how to enforce them. In response, the Democratic Congress passed the Congressional Budget and Impoundment Control Act of 1974. The Supreme Court in Train v. City of New York rejected the President’s asserted impoundment authority and limited the President’s power to impound funds to whatever extent was permitted by Congress in statutory language.

In military matters, the elites’ reaction against the Vietnam War, shaped by negative press coverage and antiwar demonstrations on elite college campuses, gradually eroded popular support. The brunt of the responsibility for the vast expansion of the war lay with Lyndon Johnson and his manipulative use of a supposed North Vietnamese naval attack on an American destroyer, which resulted in the Gulf of Tonkin Resolution. At a time when Nixon had ended the military draft, drastically reduced American troop numbers in Vietnam, and agreed to the Paris Peace Accords signed at the end of January, 1973, Congress enacted the War Powers Resolution of 1973 over Nixon’s veto. The law limited the President’s power to engage in military hostilities to specified situations, in the absence of a formal declaration of war. It also required, in effect, pre-action consultation with Congress for any use of American troops and a withdrawal of such troops unless Congress approved within sixty days. It also, somewhat mystifyingly, purported to disclaim any attempt to limit the President’s war powers. The Resolution has been less than successful in curbing presidential discretion in using the military and remains largely symbolic.

Another restriction on presidential authority occurred through the Supreme Court. In United States v. United States District Court in 1972, the Supreme Court rejected the administration’s program of warrantless electronic surveillance for domestic security. This was connected to the Huston Plan of warrantless searches of mail and other communications of Americans. Warrantless wiretaps were placed on some members of the National Security Council and several journalists. Not touched by the Court was the President’s authority to conduct warrantless electronic surveillance of foreigners or their agents for national security-related information gathering. On the latter, Congress nevertheless in 1978 passed the Foreign Intelligence Surveillance Act, which, ironically, has expanded the President’s power in that area. Because it can be applied to communications of Americans deemed agents of a foreign government, FISA, along with the President’s inherent constitutional powers regarding foreign intelligence-gathering, can be used to circumvent the Supreme Court’s decision. It has even been used in the last several years to target the campaign of then-candidate Donald Trump.

Nixon’s use of the “pocket veto” and his imposition of price controls also triggered resentment and reaction in Congress, although once again his actions were hardly novel. None of these various executive policies was, by itself, politically fatal. Rather, they demonstrate the political climate in which what otherwise was just another election-year dirty trick, the Watergate burglary, could result in the historically extraordinary resignation from office of a President who had not long before received the approval of a large majority of American voters. Nixon’s contemplated use of the IRS to audit “enemies” was no worse than the Obama Administration’s actual use of the IRS to throttle conservative groups’ tax exemption. His support of warrantless wiretaps under his claimed constitutional authority to target suspected domestic troublemakers, while unconstitutional, hardly is more troubling than Obama’s use of the FBI and CIA to manipulate the FISA system into spying on a presidential candidate to assist his opponent. Nixon’s wiretapping of NSC officials and several journalists is not dissimilar to Obama’s search of phone records of various Associated Press reporters and of spying on Fox News’s James Rosen. Obama’s FBI also accused Rosen of having violated the Espionage Act. The Obama administration brought more than twice as many prosecutions of leakers, including under the Espionage Act, as all prior Presidents combined. That was in his first term.

There was another, shadowy factor at work. Nixon, the outsider, offended the political and media elites. Nixon himself disliked the bureaucracy, which had increased significantly over the previous generation through the New Deal’s “alphabet agencies” and the demands of World War II and the Cold War. The Johnson Administration’s Great Society programs sped up this growth. The agencies were staffed at the upper levels with left-leaning members of the bureaucratic elite. Nixon’s relationship with the press was poisoned not only by their class-based disdain for him, but by the constant flow of leaks from government insiders who opposed him. Nixon tried to counteract that by greatly expanding the White House offices and staffing them with members who he believed were personally loyal to him. His reliance on those advisers rather than on the advice of entrenched establishment policy-makers threatened the political clout and personal self-esteem of the latter. What has been called Nixon’s plebiscitary style of executive government, relying on the approval of the voters rather than on that of the elite administrative cadre, also was a threat to the existing order. As Senator Charles Schumer warned President Trump in early January, 2017, about the intelligence “community,” “Let me tell you: You take on the intelligence community — they have six ways from Sunday at getting back at you.” Nixon, too, lived that reality.

Once out of office, Nixon generally stayed out of the limelight. The strategy worked well. As seems to be the custom for Republican presidents, once they are “former,” many in the press and among other “right-thinking people” came to see him as the wise elder statesman, much to be preferred to the ignorant cowboy (and dictator) Ronald Reagan. Who, of course, then came to be preferred to the ignorant cowboy (and dictator) George W. Bush. Who, of course, then came to be preferred to the ignorant reality television personality (and dictator) Donald Trump. Thus, the circle of political life continues. It ended for Nixon on April 22, 1994. His funeral five days later was attended by all living Presidents. Tens of thousands of mourners paid their respects.

The parallel to recent events should be obvious. That said, a comparison between the seriousness of the Watergate Affair that resulted in President Nixon’s resignation and the Speaker Nancy Pelosi/Congressman Adam Schiff/Congressman Jerry Nadler impeachment of President Trump brings to mind what may be Karl Marx’s only valuable observation, that historic facts appear twice, “the first time as tragedy, the second time as farce.”


Guest Essayist: Joerg Knipprath

On February 15, 1898, an American warship, U.S.S. Maine, blew up in the harbor of Havana, Cuba. A naval board of inquiry reported the following month that the explosion had been caused by a submerged mine. That conclusion was confirmed in 1911, after a more exhaustive investigation and careful examination of the wreck. What was unclear, and remains so, is who set the mine. During the decade, tensions with Spain had been rising over that country’s handling of a Cuban insurgency against Spanish rule. The newspaper chains of William Randolph Hearst and Joseph Pulitzer had long competed for circulation by sensationalist reporting. The deteriorating political conditions in Cuba and the harshness of Spanish attempts to suppress the rebels provided fodder for the newspapers’ “yellow” journalism. Congress had pressured the American government to do something to resolve the crisis, but neither President Grover Cleveland nor President William McKinley had taken the bait thus far.

With the heavy loss of life that accompanied the sinking, “Remember the Maine” became a national obsession. Although Spain had very little to gain from sinking an American warship, whereas Cuban rebels had much to gain in order to bring the United States actively to their cause, the public outcry was directed against Spain. The Spanish government previously had offered to change its military tactics in Cuba and to allow Cubans limited home rule. The offer now was to grant an armistice to the insurgents. The American ambassador in Spain believed that the Spanish government would even be willing to grant independence to Cuba, if there were no political or military attempt to humiliate Spain.

Neither the Spanish government nor McKinley wanted war. However, the latter proved unable to resist the new martial mood and the aroused jingoism in the press and Congress. On April 11, 1898, McKinley sent a message to Congress that did not directly call for war, but declared that he had “exhausted every effort” to resolve the matter and was awaiting Congress’s action. Congress declared war. A year later, McKinley observed, “But for the inflamed state of public opinion, and the fact that Congress could no longer be held in check, a peaceful solution might have been had.” He might have added that, had he been possessed of a stiffer political spine, that peaceful solution might have been had, as well.

The “splendid little war,” in the words of the soon-to-be Secretary of State, John Hay, was exceedingly popular and resulted in an overwhelming and relatively easy American victory. Only 289 were killed in action, although, due to poor hygienic conditions, many more died from disease. Psychologically, it proved cathartic for Americans after the national trauma of the Civil War. One symbolic example of the new unity forged by the war with Spain was that Joe Wheeler and Fitzhugh Lee, former Confederate generals, were generals in the U.S. Army.

Spain signed a preliminary peace treaty in August. The treaty called for the surrender of Cuba, Puerto Rico, and Guam. The status of the Philippines was left for final negotiations. The ultimate treaty was signed in Paris on December 10, 1898. The Philippines, wracked by insurrection, were ceded to the United States for $20 million. The administration believed that it would be militarily advantageous to have a base in the Far East to protect American interests.

The war may have been popular, but the peace was less so. The two-thirds vote needed for Senate approval of the peace treaty was a close-run matter. There was a militant group of “anti-imperialists” in the Senate who considered it a betrayal of American republicanism to engage in the same colonial expansion as the European powers. Americans had long imagined themselves to be unsullied by the corrupt motives and brutal tactics that such colonial ventures represented in their minds. McKinley, who had reluctantly agreed to the treaty, reassured himself and Americans, “No imperial designs lurk in the American mind. They are alien to American sentiment, thought, and purpose.” But, with a nod to Rudyard Kipling’s urging that Americans take on the “white man’s burden,” McKinley cast the decision in republican missionary garb, “If we can benefit those remote peoples, who will object? If in the years of the future they are established in government under law and liberty, who will regret our perils and sacrifices?”

The controversy around an “American Empire” was not new. Early American republicans like Thomas Jefferson, Alexander Hamilton, and John Marshall, among many others, had described the United States in that manner and without sarcasm. The government might be a republic in form, but the United States would be an empire in expanse, wealth, and glory. Why else acquire the vast Louisiana territory in 1803? Why else demand from Mexico that huge sparsely-settled territory west of Texas in 1846? “Westward the Course of Empire Takes Its Way,” painted Emanuel Leutze in 1861. Manifest Destiny became the aspirational slogan.

While most Americans cheered those developments, a portion of the political elite had misgivings. The Whigs opposed the annexation of Texas and the Mexican War. To many Whigs, the latter especially was merely a war of conquest and the imposition of American rule against the inhabitants’ wishes. Behind the republican facade lay a more fundamental political concern. The Whigs’ main political power was in the North, but the new territory likely would be settled by Southerners and increase the power of the Democrats. That movement of settlers would also give slavery a new lease on life, something much reviled by most Whigs, among them a novice Congressman from Illinois, Abraham Lincoln.

Yet, by the 1890s, the expansion across the continent was completed. Would it stop there or move across the water to distant shores? One omen was the national debate over Hawaii that culminated in the annexation of the islands in 1898. Some opponents drew on the earlier Whig arguments and urged that, if the goal of the continental expansion was to secure enough land for two centuries to realize Jefferson’s ideal of a large American agrarian republic, the goal had been achieved. Going off-shore had no such republican fig leaf to cover its blatant colonialism.

Other opponents emphasized the folly of nation-building and trying to graft Western values and American republicanism onto alien cultures who neither wanted them nor were sufficiently politically sophisticated to make them work. They took their cue from John C. Calhoun, who, in 1848, had opposed the fanciful proposal to annex all of Mexico, “We make a great mistake in supposing that all people are capable of self-government. Acting under that impression, many are anxious to force free Governments on all the people of this continent, and over the world, if they had the power…. It is a sad delusion. None but a people advanced to a high state of moral and intellectual excellence are capable in a civilized condition, of forming and maintaining free Governments ….”

With peace at hand, the focus shifted to political and legal concerns. The question became whether the Constitution applied to these new territories ex proprio vigore: “Does the Constitution follow the flag?” Neither President McKinley nor Congress had a concrete policy. The Constitution, formed by thirteen states along the eastern edge of a vast continent, was unclear on the point. The Articles of Confederation had provided for the admission of Canada and other British colonies, such as the West Indies, but that document was no longer in force. The matter was left to the judiciary, and the Supreme Court provided a settlement of sorts in a series of cases decided over two decades, known collectively as the Insular Cases.

Cuba was easy. Congress’s declaration of war against Spain had been clear: “The United States hereby disclaims any disposition or intention to exercise sovereignty, jurisdiction, or control over said island except for the pacification thereof, and asserts its determination, when that is accomplished, to leave the government and control of the island to its people.” In Neely v. Henkel (1901), the Court unanimously held that the Constitution did not apply to Cuba. Effectively, Cuba was already a foreign country outside the Constitution. Cuba became formally independent in 1902. In similar manner, the United States promised independence to the Philippine people, a process that took several decades due to various military exigencies. Thus, again, the Constitution did not apply there, at least not tout court, as the Court affirmed beginning in Dorr v. U.S. in 1904. That took care of the largest overseas dominions, and Americans could tentatively congratulate themselves that they were not genuine colonialists.

More muddled was the status of Puerto Rico and Guam. In Puerto Rico, social, political, and economic conditions did not promise an easy path to independence, and no such assurance was given. The territory was not deemed capable of surviving on its own. Rather, the peace treaty expressly provided that Congress would determine the political status of the inhabitants. In 1900, Congress passed the Foraker Act, which set up a civil government patterned on the old British imperial system with which Americans were familiar. The locals would elect an assembly, but the President would appoint a governor and executive council. Guam was in a similar state of dependency.

In Downes v. Bidwell (1901), the Court established the new status of Puerto Rico as neither outside nor entirely inside the United States. Unlike Hawaii or the territories that were part of Manifest Destiny, there was no clear determination that Puerto Rico was on a path to become a state and, thus, was already incorporated into the entity called the United States. It belonged to the United States, but was not part of the United States. The Constitution, on its own, applied only to states and to territory that was expected to become part of the United States. Puerto Rico was more like, but not entirely like, temporarily administered foreign territory. Congress determined the governance of that territory by statute or treaty, and, with the exception of certain “natural rights” reflected in particular provisions of the Bill of Rights, the Constitution applied only to the extent to which Congress specified.

These cases adjusted constitutional doctrine to a new political reality inaugurated by the sinking of the Maine and the war that event set in motion. The United States no longer looked inward to settle its own large territory and to resolve domestic political issues relating to the nature of the union. Rather, the country was looking beyond its shores and was emerging as a world power. That metamorphosis would take a couple of generations and two world wars to complete, the last of which was triggered by another surprise attack on American warships.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Joerg Knipprath

In 1834, Dr. Emerson, an Army surgeon, took his slave Dred Scott from Missouri, a slave state, to Illinois, a free state, and then, in 1836, to Fort Snelling in Wisconsin Territory. The latter was north of the geographic line at latitude 36°30′ established under the Missouri Compromise of 1820 as the division between free territory and that potentially open to slavery. In addition, the law that organized Wisconsin Territory in 1836 made the domain free. Emerson, his wife, and Scott and his family eventually returned to Missouri by 1840. Emerson died in Iowa in 1843. Ownership of Scott and his family ultimately passed to Emerson’s brother-in-law, John Sanford, of New York.

With financial assistance from the family of his former owner, the late Peter Blow, Scott sued for his freedom in Missouri state court, beginning in 1846. He argued that he was free due to having resided in both a free state and a free territory. After some procedural delays, the lower court jury eventually agreed with him in 1850, but the Missouri Supreme Court in 1852 overturned the verdict. The judges rejected Scott’s argument, on the basis that the laws of Illinois and Wisconsin Territory had no extraterritorial effect in Missouri once he returned there.

It has long been speculated that the case was contrived. Records were murky, and it was not clear that Sanford actually owned Scott. Moreover, Sanford’s sister Irene, the late Dr. Emerson’s widow, had remarried an abolitionist Congressman. Finally, the suit was brought in the court of Judge Alexander Hamilton, known to be sympathetic to such “freedom suits.”

Having lost in the state courts, in 1853 Scott tried again, in the United States Circuit Court for Missouri, which at that time was a federal trial court. The basic thrust of the case at that level was procedural sufficiency. Federal courts, as courts of limited and defined jurisdiction under Article III of the Constitution, generally can hear only cases between citizens of different states or cases in which a claim is based on a federal statute, a treaty, or the Constitution. There being no federal law of any sort involved, Scott’s claim rested on diversity of citizenship. Scott claimed that he was a free citizen of Missouri and Sanford a citizen of New York. On the substance, Scott reiterated the position from his state court claim. Sanford sought a dismissal on the basis of lack of subject matter jurisdiction because, being black, Scott could not be a citizen of Missouri.

When Missouri sought admission to statehood in 1820, its constitution excluded free blacks from living in the state. The compromise law passed by Congress prohibited the state constitution from being interpreted to authorize a law that would exclude citizens of any state from enjoying the constitutional privileges and immunities of citizenship the state recognized for its own citizens. That prohibition was toothless, and Sanford’s argument rested on Missouri’s negation of citizenship for all blacks. Thus, Scott’s continued status as a slave was not crucial to resolve the case. Rather, his racial status, free or slave, meant that he was not a citizen of Missouri. Thus, the federal court lacked jurisdiction over the suit and could not hear Scott’s substantive claim. Instead, the appropriate forum to determine Scott’s status was the Missouri state court. As already noted, that was a dry well and could not water the fountain of justice.

In a confusing action, the Circuit Court appeared to reject Sanford’s jurisdictional argument, but the jury nevertheless ruled for Sanford on the merits, based on Missouri law. Scott appealed to the United States Supreme Court by writ of error, a broad corrective tool to review decisions of lower courts. The Court heard argument in Dred Scott v. Sandford (the “d” is a clerical error) at its February, 1856, term. The justices were divided on the preliminary jurisdictional issue. They bound the case over to the December, 1856, term, after the contentious 1856 election. There seemed to be a way out of the ticklish matter. In Strader v. Graham in 1850, the unanimous Supreme Court had held that a slave’s status rested finally on the decision of the relevant state court. The justices also had refused to consider independently the claim that a slave became free simply through residence in a free state. Seven of the justices in Dred Scott believed Strader to be on point, and Justice Samuel Nelson drafted an opinion on that basis. Such a narrow resolution would have steered clear of the hot political issue of extension of slavery into new territories that was roiling the political waters and threatening to tear apart the Union.

It was not to be. Several of the Southern justices were sufficiently alarmed by the public debate and affected by sectional loyalty to prepare concurring opinions to address the lurking issue of Scott’s status. Justice James Wayne of Georgia then persuaded his fellows to take up all issues raised by Scott’s suit. Chief Justice Roger Taney would write the opinion.

Writing for himself and six associate justices, Taney delivered the Court’s opinion on March 6, 1857, just a couple of days after the inauguration of President James Buchanan. In his inaugural address, Buchanan hinted at the coming decision through which the slavery question would “be speedily and finally settled.” Apparently having received advance word of the decision, Buchanan declared that he would support the decision, adding coyly, “whatever this may be.” Some historians have wondered if Buchanan actually appreciated the breadth of the Court’s imminent opinion or misunderstood what was about to happen. Of the seven justices that joined the decision that Scott lacked standing to sue and was still a slave, five were Southerners (Taney of Maryland, Wayne, John Catron of Tennessee, Peter Daniel of Virginia, and John Campbell of Alabama). Two were from the North (Samuel Nelson of New York and Robert Grier of Pennsylvania). Two Northerners (Benjamin Curtis of Massachusetts and John McLean of Ohio) dissented.

Taney’s ruling concluded that Scott was not a citizen of the United States, because he was black, and because he was a slave. Thus, the federal courts lacked jurisdiction, and by virtue of the Missouri Supreme Court’s decision, Scott was still a slave. Taney’s argument rested primarily on a complex analysis of citizenship. When the Constitution was adopted, neither slaves nor free blacks were part of the community of citizens in the several states. Thereafter, some states made citizens of free blacks, as they were entitled to do. But that did not affect the status of such individuals in other states, as state laws could not act extraterritorially. Only United States citizenship or state citizenship conferred directly under the Constitution could be the same in all states. Neither slaves nor free blacks were understood to be part of the community of citizens in the states in 1788 when the Constitution was adopted, the only time that state citizenship could have also conferred national citizenship. Thereafter, only Congress could extend national citizenship to free blacks, but had never done so. States could not now confer U.S. citizenship, because the two were distinct, which reflected basic tenets of dual sovereignty.

Taney rejected the common law principle of birthright citizenship based on jus soli, that citizenship arose from where the person was born. This was not traditionally the only source of citizenship, the other being the jus sanguinis, by which citizenship arose through the parents’ citizenship, unless a person was an alien and became naturalized under federal law. Since blacks were not naturalized aliens, and their parental lineage could not confer citizenship on them under Taney’s reasoning, the rejection of citizenship derived from birth in the United States meant that even free blacks were merely subordinate American nationals owing obligations and allegiance to the United States but not enjoying the inherent political, legal, and civil rights of full citizenship. This was a novel status, but one that became significant several decades later when the United States acquired overseas dominions.

After the Civil War, the 14th Amendment was adopted. The very first sentence defines one basis of citizenship. National citizenship and state citizenship are divided, but the division is not identical to Taney’s version. To counter the Dred Scott Case and to affirm the citizenship of the newly-freed slaves, and, by extension, all blacks, national citizenship became rooted in jus soli. If one was born (or naturalized) in the United States and was subject to the jurisdiction of the United States, that is, one owed no loyalty to a foreign government, national citizenship applied. State citizenship was derivative of national citizenship, not independent of it, as Taney had held, and was based on domicile in that state.

The Chief Justice also rejected the idea that blacks were entitled to the same privileges and immunities of citizenship as whites. Although Taney viewed the Constitution’s privileges and immunities clause in Article IV broadly, if blacks were regarded as full state citizens under the Constitution, then Southern states could not enforce their laws that restricted the rights of blacks regarding free speech, assembly, and the keeping and bearing of arms. That, in turn, would threaten the social order and the stability of the slave system.


Guest Essayist: Joerg Knipprath

Dred Scott lost his appeal for a second reason, his status as a slave. The Court’s original, since-abandoned, plan had been to decide the whole suit on the basis of the Strader precedent that Scott was a slave because the Missouri Supreme Court had so found. That approach still could have been used to deal summarily with this issue in the eventual opinion. But Taney struck a bolder theme. He analyzed the effect of Scott’s residence in Illinois and Wisconsin Territory on his status. This allowed Taney to challenge more broadly the prevailing idea that the federal government could interfere with the movement of slavery throughout the nation.

Taney opined that the federal government’s power to regulate directly the status of slavery in the territories, including its abolition, was derived from Article IV, Section 3, of the Constitution, which authorized Congress to “make all needful Rules and Regulations respecting the Territory … belonging to the United States” and to admit new states. However, Taney claimed, this provision applied only to the land that had been ceded to the United States by the several states under the Articles of Confederation. Thus, Congress could abolish slavery in the Northwest Ordinance of 1787, reenacted in 1789, because it applied to such ceded land. Any territory acquired by the United States thereafter, such as through the Louisiana Purchase or the Treaty of Guadalupe Hidalgo after the Mexican War, was held by the United States in trust for the whole people of the United States. Thus, white citizens who settled in those territories did not lose the rights they had acquired residing within their previous states. They were not “mere colonists, … to be governed by any laws [the general government] may think proper.” These rights included the right to property, and that right extended to property in slaves.

Lastly, Taney explained, the Fifth Amendment expressly protected against federal laws that sought to deprive a person of his life, liberty, or property without due process. Due process guaranteed not only a fair trial, but protected generally against arbitrary laws. A law that deprived a person of property, including slaves, simply because he moved into a territory controlled by the federal government, “could hardly be dignified with the name of due process of law.” This was a founding example of the doctrine of substantive due process that has been invoked by the courts in more recent cases to strike down laws against abortion and same-sex marriage. Taney’s distinction between the constitutional rights of citizens and colonists and his postulate that the Constitution limited Congress’s power of administering the territories settled by Americans reappeared in modified form a half century later in cases dealing with Congress’s control over overseas territory acquired after the Spanish-American War.

Scott did not become free by residing in free territory, because the Missouri Compromise of 1820, which excluded slavery from Wisconsin Territory, was unconstitutional. That decision was radical because it upset a long constitutional custom of geographically dividing free from (potentially) slave territory, a custom that began with the Northwest Ordinance but had been undermined by the Compromise of 1850 and the Kansas-Nebraska Act of 1854. Nor could Wisconsin’s territorial legislature abolish slavery, in Taney’s analysis, through the newly-minted doctrine of “popular sovereignty.” That legislature was merely an agent of Congress, and had no more power to destroy constitutional rights than did its principal.

“Popular sovereignty” lay at the core of the Compromise of 1850 and the Kansas-Nebraska Act. That doctrine, championed by Senators Lewis Cass and Stephen Douglas, allowed slave holders to bring their property into all parts of the politically unorganized territorial area. Under the Northern view, once organized as a territory, the people acting through a convention or through their territorial legislature might authorize or prohibit slavery. Under the Southern view, only states could abolish slavery, and any such prohibition had to await a decision of the people when seeking statehood or thereafter. The Court thus endorsed the Southern perspective, further inflaming sectional tensions because the two federal compromise laws had always been a bitter pill to swallow for many in the North.

Four of the concurring justices wrote opinions that reached the same result via various other doctrinal paths. Two dissented. The main dissent, by Benjamin Curtis—whose brother George Ticknor Curtis was one of Scott’s attorneys—relied on the theory that state citizenship was the source of national citizenship. Therefore, once someone resided in a state, and was not merely a sojourner, he acquired the rights of citizenship in that state. Scott, having resided in a free state, had shed his status as a slave and could not be reduced to that status merely by returning to Missouri. Once free, he was also entitled to all privileges and immunities of citizens, which included the right to travel freely to other states. Curtis’s theory, by focusing on states as the source of all citizenship, was even more inconsistent than Taney’s with the eventual language of the Fourteenth Amendment, which embodied a national supremacy approach.

From the beginning, the Dred Scott Case was received poorly by the public. Its controversial, and to us odious, result also tarnished the legacy of Roger Taney. Viewed from our more distant historical perspective, perhaps a more nuanced evaluation is possible. Judged by intellectual standards, Taney’s opinion showed considerable judicial craftsmanship. Taney himself was an accomplished and influential Chief Justice, whose Court addressed legal and constitutional matters significant for the country’s development.

Why then did Taney opt for an approach that destroyed the delicate balances worked out politically in the Congress, and would have nationalized the spread of slavery? After all, the narrower route of Strader lay open to the Court for the same result. Part of it was sympathy for the Southern cause, although Taney by then was not himself a slave owner. Indeed, while in law practice, Taney had vigorously denounced slavery when defending an abolitionist minister accused of inciting slave rebellions. Mostly, it was the perception that the political process was becoming unable to negotiate the hardening positions of both sides on the various facets of the slavery controversy. Those facets included protection of the “peculiar institution” in the existing slave states, expansion of slavery into new territory, and recapture of fugitive slaves from states hostile to such efforts.

The relatively successful compromises of the late 18th and early 19th centuries with their attendant comity among the states were in the distant past. Congressional efforts were increasingly strained and laborious, as experience with the convoluted process that led to the Compromise of 1850 had shown. Southerners’ paranoia about their section’s diminished political power and comparative industrial inadequacy, as well as Northerners’ moral self-righteousness and sense of political ascendancy, eroded the mutual good will needed for compromise. Presidential leadership had proved counterproductive to sectional accommodation, as with James Polk and the controversy over potential expansion of slavery into territory from the Mexican War. Or, such executive efforts were ineffective, as with Franklin Pierce’s failed attempt to act while President like the compromise candidate that he had been at the Democratic convention. Worse yet, eventually such leadership was non-existent, as with James Buchanan.

There remained only the judicial solution to prevent the rupture of the political order that was looming. Legal decisions, unlike political ones, are binary and generally produce a basic clarity. One side wins, the other loses. Constitutional cases add to that the veneer of moral superiority. If the Constitution is seen as a collection of moral principles, not just a pragmatic collection of political compromises, the winner in a constitutional dispute has a moral legitimacy that the loser lacks. Hence, Taney decided to cut the Gordian knot and hope that the Court’s decision would be accepted even by those who opposed slavery. Certainly, President Buchanan, having received advance word of the impending decision, announced in his inaugural address that he would accept the Court’s decision and expected all good citizens to do likewise.

Unfortunately, matters turned out differently. At best, the decision had no impact on the country’s lurch toward violence. At worst, the decision hastened secession and war. Abraham Lincoln presented the moderate opposition to the decision. In a challenge to the Court, he defended the President’s independent powers to interpret the Constitution. In his first inaugural address, Lincoln disavowed any intention to overturn the decision and free Dred Scott. He then declared, “At the same time, the candid citizen must confess that if the policy of the government upon vital questions affecting the whole people is to be irrevocably fixed by decisions of the Supreme Court, … the people will have ceased to be their own rulers, having to that extent practically resigned their government into the hands of that eminent tribunal.”

Scott and his family were freed by manumission in May, 1857, two months after the decision in his case. Scott died a year later.

In the eyes of many, the Court’s institutional legitimacy suffered from its attempt to solve undemocratically such a deep public controversy about a fundamental moral issue. A more recent analogue springs to mind readily. Many years after Dred Scott, partially dissenting in the influential abortion case Planned Parenthood v. Casey in 1992, Justice Antonin Scalia described a portrait of Taney painted in 1859: “There seems to be on his face, and in his deep-set eyes, an expression of profound sadness and disillusionment. Perhaps he always looked that way, even when dwelling upon the happiest of thoughts. But those of us who know how the lustre of his great Chief Justiceship came to be eclipsed by Dred Scott cannot help believing that he had that case—its already apparent consequences for the Court and its soon-to-be-played-out consequences for the Nation—burning on his mind.” Scalia’s linkage of Taney’s ill-fated undemocratic attempt to settle definitively the slavery question by judicial decree to the similar attempt by his own fellow justices to settle the equally morally fraught abortion issue was none-too-subtle. Lest someone miss the point, Scalia concluded: “[B]y foreclosing all democratic outlet for the deep passions this issue arouses, by banishing the issue from the political forum that gives all participants, even the losers, the satisfaction of a fair hearing and an honest fight, by continuing the imposition of a rigid national rule instead of allowing for regional differences, the Court merely prolongs and intensifies the anguish.”


Guest Essayist: Joerg Knipprath

Two weeks after the death of George Washington on December 14, 1799, his long-time friend General Henry “Light Horse Harry” Lee delivered a funeral oration to Congress that lauded the deceased as, “First in war, first in peace, and first in the hearts of his countrymen, he was second to none in the humble and endearing scenes of private life; pious, just, humane, temperate and sincere; uniform, dignified and commanding, his example was as edifying to all around him, as were the effects of that example lasting.” The Jeffersonian newspaper of Philadelphia, The Aurora, had a rather different opinion than those countrymen. Sounding like his current counterparts in their sentiments about today’s President, the publisher declared on Washington’s retirement from office in 1797, “[T]his day ought to be a jubilee in the United States … for the man who is the source of all the misfortunes of our country, is this day reduced to a level with his fellow citizens.”

Each author likely could point to examples to buttress his case. Washington wore many hats in his public life, and his last service, as President from 1789 to 1797, had its share of controversies. Washington kept his private life just that, to the best of his ability, with the result that it soon became mythologized. In public, Washington was reserved (or “dull,” to his detractors), dignified (or “stiff,” to his detractors), and self-disciplined. Yet his usually even-tempered nature occasionally flared into a temper that few were willing to risk provoking. According to Samuel Eliot Morison, during the Philadelphia Constitutional Convention, Alexander Hamilton bet Gouverneur Morris a dinner that the latter would not approach Washington, slap him on the back, and say, “How are you today, my dear General?” Morris, the convention’s jokester, took the bet, but after the look that Washington gave him upon the event, professed that he would never do so again for a thousand dinners. Washington’s formality had its limits. A Senate committee proposed that the official address to the President should be, “His Highness the President of the United States of America and the Protector of the Rights of the Same.” The Senate rejected this effusive extravagance, and Washington was simply addressed as Mr. President.

On a later occasion, Morris wrote Washington, “No constitution is the same on paper and in life. The exercise of authority depends on personal character. Your cool, steady temper is indispensably necessary to give firm and manly tone to the new government.” Not only is this a correct observation about constitutions in general. A formal charter, the “Constitution” as law, is not all that describes how the political system actually operates, that is, the “constitution” as custom and practice. It is particularly true about Article II of the Constitution of 1787, which establishes the executive branch and delineates most of its powers. While some of those powers are set out precisely, others are ambiguous, such as the “executive power” and “commander-in-chief” clauses.

In several contributions to The Federalist, most thoroughly in No. 70, Hamilton explained how the Constitution created a unitary executive. He stressed the need for energy and for clarity of accountability that comes from such a system. In No. 67, he ridiculed “extravagant” misrepresentations and “counterfeit resemblances” by which opponents had sought to demonize the President as a potentate with royal prerogatives. Still, it has often been acknowledged that the Constitution sets up a potentially strong executive-style government. Justice Robert Jackson in Youngstown Sheet & Tube Co. v. Sawyer in 1952 described the President’s real powers, “The Constitution does not disclose the measure of the actual controls wielded by the modern presidential office. That instrument must be understood as an Eighteenth-Century sketch of a government hoped for, not as a blueprint of the Government that is…. Executive power has the advantage of concentration in a single head in whose choice the whole Nation has a part, making him the focus of public hopes and expectations…. By his prestige as head of state and his influence upon public opinion, he exerts a leverage upon those who are supposed to check and balance his power which often cancels their effectiveness.” Washington was keenly aware of his groundbreaking role and used events during his time in office to define the constitutional boundaries of Article II and to shape the office of the President from this “sketch.”

Washington’s actions in particular controversies helped shape the contours of various ambiguous clauses in Article II of the Constitution. He shored up the consolidation of the executive branch into a “unitary” entity headed by the President and guarded its independence from the Congress. From the start, Washington was hamstrung by the absence of an administrative apparatus. The Confederation had officers and agents, but due to its circumscribed powers and lack of financial independence, it relied heavily on state officials to administer peacetime federal policy. The new Congress established various administrative departments, which quickly produced a controversy over the removal of federal officers. Would the President have this power exclusively, or would he have to receive Senate consent, as a parallel to the appointment power? The Constitution was silent. After much debate over the topic in the bill to establish the Departments of State and War, a closely-divided Congress assigned that power to the President alone. Some opponents of the law objected that the President already had that power as chief executive, and that the statute could be read as giving him that power only as a matter of legislative grace, to be withdrawn as Congress saw fit.

Even if this removal power was the President’s alone by implication from the executive power in Article II, the same analysis might not apply to other officers. Congress had been clear to note that the departments in the statute were closely tied to essential attributes of executive power, that is, foreign relations and control over the military during war. The position of the Treasury Secretary, on the other hand, was constitutionally much more ambiguous, given Congress’s preeminent role in fiscal matters. The Treasury Secretary was a sort of go-between who straddled Congress’s power over the purse and the President’s power to direct the administration of government. The law that created the Treasury Department required the Secretary to report to Congress and to “perform all such services relative to the finances, as he shall be directed to perform.”

This implied that the Secretary was responsible to Congress rather than the President. If extended to other departments, this would move the federal government in the direction of a British-style parliamentary system and blur the separation of powers between the branches. Washington resisted that trend, but his victory in the removal question was incomplete. It was not until the Andrew Jackson administration that the matter was settled. Jackson removed two Treasury Secretaries who had refused his order to transfer government funds from the Second Bank of the United States. While the Senate censured him for assuming unconstitutional powers, Jackson’s position ultimately prevailed and the censure was later rescinded. Still, controversy over the removal of cabinet heads without Senatorial consent flared up again after the Civil War with the Tenure of Office Act of 1867 and led to the impeachment of President Andrew Johnson in 1868. It was not until 1926 in Myers v. U.S. that the Supreme Court acknowledged the President’s inherent removal power over executive officers.

A matter of much greater immediate controversy during the Washington administration was the President’s Neutrality Proclamation in 1793. The country was in no position, militarily, to get between the two European powers fighting each other, Great Britain and the French Republic. To stave off pressure from both sides, and from their American partisans, to join their cause, Washington declared the United States to be neutral. Domestic critics charged that this invaded the powers of Congress. Hamilton, ever eager to defend executive power, wrote public “letters” under the appropriately clever pen name “Pacificus.” He set forth a very broad theory of implied powers derived from elastic clauses in Article II, primarily the executive power clause. In light of those powers and the President’s position as head of the executive branch, the President could do whatever he deemed necessary for the well-being of the country and its people, unless the Constitution expressly limited him or gave the claimed power to Congress. In this instance, until Congress declared war, Washington could declare peace.

Hamilton’s position made sense, especially as Congress met only a few weeks each year, while the President could respond to events more quickly. However, Hamilton did not go unchallenged. At the urging of Jefferson, a reluctant James Madison wrote his “Helvidius” letters that presented a much more constrained view of those same constitutional clauses. Hamilton’s asseverations have generally carried the day, although political struggles between Congress and the President over claimed executive excesses have punctuated our constitutional history and continue to serve as flashpoints today. Hamilton’s theory, and Washington’s application thereof, cemented the “unitary executive” conception of the presidency.

While generally silent on foreign affairs, the Constitution does address treaties. The power to make treaties was part of the federative power of the British monarch. Thus, at least from Hamilton’s perspective, the President could conduct foreign affairs and make treaties as the sole representative of the country. However, constitutional limits must be observed. Thus, the Senate has an “advice and consent” role. Originally, this was understood to require the President to consult with the Senate on negotiating treaties before he actually made one.

Washington tried this approach early in his administration. He and Secretary of War Henry Knox appeared before the Senate to discuss pending treaty negotiations with the Creek Indians. Rather than engaging the President and Knox, the Senate referred the matter to a committee. Washington angrily left, declaring, “This defeats every purpose of my coming here.” Twice more he sent messages to get advice on negotiations. Receiving no responses, Washington gave up even those efforts. Since then, Presidents have made treaties without prior formal consultation with the Senate. The Senate’s role now is to approve or reject treaties through its “consent” function. Of course, informal discussions with individual Senators may occur. The Senate’s similar formal advice role for appointments of federal officers likewise has atrophied.

Washington also used constitutional tools to participate effectively in domestic policy. For one, the Constitution obliged the President to deliver to Congress from time to time information on the state of the union and to recommend proposals. Washington used this opportunity for an annual report that he presented in person at the opening of each session of Congress. Presidents have continued this tradition, although, beginning with Jefferson, they no longer appeared personally until Woodrow Wilson revived the practice.

Another such tool was the President’s qualified veto over legislation. A potentially powerful mechanism for executive dominance, early Presidents used it sparingly. The controversy was over the permissible basis of a veto. Could it be used for any reason, such as political disagreement with the legislation’s policy, or only for constitutional qualms? Washington sympathized with the latter position, advocated by Jefferson. On that ground, he first vetoed an apportionment bill for the House of Representatives in 1792 that he believed violated the Constitution’s prohibition against giving a state more than one representative for every 30,000 inhabitants. Andrew Jackson eventually used the veto for purely political reasons, which has become the modern practice.

One more constitutional evolution that Washington set in motion involved government secrecy and the President’s right to withhold information from Congress and the courts, a doctrine known as “executive privilege.” It appears nowhere in the Constitution, but was recognized under the common law. There are two broad aspects to this doctrine. One is to protect the confidentiality of communications between the President and his executive branch subordinates. The other is to guard state secrets in the interest of national and military security. Again, under Hamilton’s implied powers, the President needs such privilege to carry out the duties of his office and to protect the independence of the executive branch. Two events during Washington’s administration gave an early shape to this doctrine.

In October, 1791, General Arthur St. Clair, the governor of the Northwest Territory, took 2,000 men, including the entire regular army plus several hundred militia, to build a fort to counter attacks by an alliance of Indian tribes supported by the British. On November 4, St. Clair’s force, down to about 920 from desertion and illness, was surprised by the Indians and suffered 900 casualties in the rout, the great majority of them killed. The Indians also killed the 200 camp followers, including wives and children, in what became the worst defeat of the American army by Indians. To no one’s surprise, the House ordered an inquiry and sought various documents from the War Department relating to the campaign.

Washington consulted his cabinet in what was perhaps the first meeting of the entire body. With the cabinet’s agreement, Washington refused to turn over most of the requested documents on the ground that they must be kept secret for the public good. Thus was the state secrets doctrine incorporated into American constitutional government. A committee in the House eventually exonerated St. Clair and blamed the rout on poor planning and equipping of the force. The defeat of St. Clair was reversed by General Anthony Wayne with a larger force of 2,000 regulars and 700 militia in August, 1794, at the Battle of Fallen Timbers. That victory produced a peace treaty, which ended the Indian threat.

The second occurred when the House demanded that the administration disclose to them the instructions Washington had given to American negotiators regarding the unpopular Jay Treaty of 1794 with Great Britain. The President declined on grounds of confidentiality, relying on the Constitution’s placement of the treaty power in the President and Senate. The flaw with Washington’s argument was that the House had to appropriate funds required by the treaty. The House insisted on receiving the documents to carry out its constitutional appropriations function. Washington stood his ground, and the House grudgingly dropped the matter.

Any overview of the Washington administration requires at least a brief mention of the influence of Alexander Hamilton. Hamilton had long enjoyed Washington’s support, well before he became Secretary of the Treasury. His influence was well-earned. It is not uncommon for historians to refer to the United States of the 1790s as Hamilton’s Republic. Perhaps his signal achievements were his reports on the public credit and on manufactures, which Congress had asked him to prepare. The former, which he submitted on January 14, 1790, recommended that the foreign and domestic debt of the United States be paid off at full value, rather than at the depreciated levels at which the notes were then trading. As well, the United States would assume the states’ outstanding debts. The entirety would be funded at par by newly-issued bonds paying 6% interest. Import duties and excise taxes imposed under Congress’s new taxing power would provide the source to pay the interest and principal. Congress narrowly approved Hamilton’s proposal after he struck a deal with Jefferson that would place the new national capital in the South in 1800. The foreign debt was paid off in 1795 and the domestic debt forty years later.

The plan also established the Bank of the United States, modeled broadly on the Bank of England and the abortive Bank of North America, a venture by Robert Morris and Hamilton under the Articles of Confederation. Among other functions, the Bank would curb monetary excesses and protect the American credit rating. Congress approved the Bank Bill in February, 1791. Hamilton’s recommendations in his Report on Manufactures, presented at the end of 1791, were not accepted by Congress. They eventually became the foundation for protectionist policies in favor of nascent domestic industries in the nineteenth century.

Washington’s last contribution to American constitutional development was his refusal to serve more than two terms. He had agreed only reluctantly even to that second term. His retirement was not the first time he had left office voluntarily even though he had sufficient standing to retain power. Years earlier, he had surrendered his command of the Army to Congress at the end of the Revolutionary War. The Constitution was silent on presidential term limits. Indeed, Hamilton had argued against them in The Federalist. By leaving the Presidency after eight years, Washington established the two-term custom that was not violated until Franklin Roosevelt in the 1940 election. Fear of such “third-termites,” made worse by FDR’s election to a fourth term, soon produced the 22nd Amendment, which formalized the two-term custom.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:


Guest Essayist: Joerg Knipprath

In perhaps its most significant legislative action, the Congress of the Articles of Confederation passed the Northwest Ordinance on July 13, 1787. This landmark law was an act of institutional strength during a period of marked institutional weakness, a reminder of a national will that had been battered by fears of disunion, and a source of constitutional principles that defined parts of the fundamental charter that would replace the Articles a year later.

The domain ceded by the British to the states under the Treaty of Paris of 1783 extended to the Mississippi River, well westward of the main area of settlement and even of the “backcountry” areas such as the Piedmont regions of Virginia and the Carolinas. The Confederation and its component states were land-rich and cash-poor. The answer would appear to be to open up this land for settlement by selling tracts to bona fide purchasers and by encouraging immigration from Europe. But matters were not that easy.

For some years before independence, there had been a gradual stream of westward migration past the Allegheny and Cumberland Mountains. Shocked into action by the ferocity of the Pontiac War that flared even as the war with the French in North America was winding down, the British sought to end this movement.  Accordingly, King George III issued the Proclamation of 1763 in October of that year, which prohibited colonial governments from granting land titles to Whites beyond the sources of rivers that flow into the Atlantic Ocean. Nor could White squatters occupy this land. The objective was to pacify the Indians, secure the existing frontier of White settlement, reduce speculation in vast tracts of land, divert immigration to British Canada, and protect British commerce and importation of British goods by a population concentrated near the coast.

While the policy initially succeeded in damping western settlement, in the longer term it alienated the Americans and helped trigger the move to independence. Ironically, during the Revolutionary War, those who actually had moved to the western settlements often considered themselves aggrieved and politically marginalized by the colonial assemblies, provincial congresses, and early state legislatures that were controlled by the eastern counties. Westerners were more likely to sit out the war, flee to the off-limits lands, or even align with the British.

Over time, the policy increasingly was ignored. Subsequent treaties moved the line of settlement westward. Squatters, land speculators and local governments evaded that revision, too. The historian Samuel Eliot Morison describes the actions of George Washington and his partner William Crawford in obtaining deeds from the colonial government of Pennsylvania to a large tract of land that lay west of the Proclamation line. In a letter to Crawford, Washington expressed his conviction that the proclamation was temporary and bound to end in a few years. “Any person therefore who neglects the present opportunity of hunting out good lands and in some measure marking … them for their own (in order to keep others from settling them) will never regain it…. The scheme [of marking the claim must be] snugly carried out by you under the pretense of hunting other game.” Washington’s secretive “scheme” was standard practice.

Washington was a comparatively minor participant. Speculators included a who’s who of colonial (and British) politicians and upper class merchants. While the British government vetoed some of the more flagrant schemes that involved many millions of acres, the practice continued under the Articles of Confederation and the Constitution of 1787. With independence a reality, Americans no longer needed to heed British imperial policy. The new governments could accede to the popular clamor to open up the western lands.

However, three issues needed to be resolved: the conflicting state claims to western land, by having the states cede the contested areas to the Confederation; the orderly disposition of public lands, by surveying, selling, and granting legal title; and the creation of a path to statehood for this unorganized wilderness. The Articles of Confederation addressed none of these. The first was accomplished by Congress in 1779 and 1780 through resolutions urging the states to turn over such disputed land claims to the Confederation as public land. Most did. Unlike other actions by Congress under the Articles that required assent by the state legislatures, these public lands would be administered directly by the Congress. During the later debate on the Constitution of 1787, James Madison and others used Congress’s control over the western lands as an example of the dangers of unchecked unenumerated powers. This was quite in contrast to their usual complaints about the Confederation’s weakness. To be fair to Madison, he admitted that he supported what Congress had done. Congress solved the second issue on May 20, 1785, when it legislated a system of surveying the new public lands, dividing them into townships, and selling the surveyed land by public auction. The third resulted in the Northwest Ordinance.

The catalyst for this last solution was the Ohio Company, one of the land speculation syndicates. General Rufus Putnam and various New England war veterans organized the company to purchase 1.5 million acres for $1 million in depreciated Continental currency with an actual value of about one-eighth of the face amount. Even with the potential to raise money for the Confederation’s empty coffers, Congress barely mustered a quorum, with only eight states represented, to consider the proposal. As a condition of the deal, the Ohio Company wanted the Northwest Ordinance in order to make their land sales more attractive to investors. Rufus King and Nathan Dane of Massachusetts drafted the Ordinance. All eight states represented approved the law, with all but one of the 18 delegates in favor. Ultimately, the Ohio Company was able to raise only half the amount promised and purchased 750,000 acres. However, the Ordinance applied throughout the unorganized territory north of the Ohio River.

The Ordinance did not spring spontaneously from the effort of King and Dane. Congress in 1780 had declared in its earlier resolution that the lands ceded to the Confederation would be administered directly by the Congress with the goal that they would be “settled and formed into distinct republican states, which shall become members of the Federal Union.” Four years later, Thomas Jefferson presented a proposal to Congress, which, with some amendments, was adopted as the Land Ordinance of 1784. It provided for division of the territory into ten eventual states, the establishment of a territorial government when the population reached 20,000, and statehood when the population reached the same as that of the smallest of the original thirteen.

The Ordinance had three important components. First, of course, the statute provided for the political organization of the territory. The whole territory was divided into three “districts.” A territorial assembly would be established for a portion of the territory as soon as that area had at least 5,000 male inhabitants. Congress would appoint a governor, and a territorial court would be established. All of these officials had to meet various property requirements consisting of freehold estates between 200 and 1,000 acres. Voting, too, required ownership of an estate of at least 50 acres. Once the population reached 60,000, the area could apply to Congress for admission to statehood on equal terms with the original states. Eventually, five states, Ohio, Indiana, Illinois, Michigan, and Wisconsin, emerged from the Northwest Territory. The process of colonization and decolonization established under the Ordinance became the model followed in its general terms through the admission of Alaska and Hawaii in 1959.

Another critical feature of the Ordinance was the inclusion of an embryonic bill of rights in the first and second articles. The first protected the free exercise of religion. The second was more expansive and singled out, among others, various natural rights, such as the protection against cruel and unusual punishments, against uncompensated takings, and against retroactive interference with vested contract rights. The enumeration of specific restrictions on government power was consistent with constitutional practice at the state level. It also bolstered the demand of critics of the original Constitution of 1787 that a bill of rights be included in that document.

As a final matter, the Ordinance addressed the controversial question of slavery. Article VI both prohibited slavery itself in the territory and required that a fugitive slave escaping from one of the original states be “conveyed to the person claiming his, or her labor, or service ….” While this compromise was not ideal for Southern slave states, their delegations acquiesced because the Ordinance did not cover the territory most consequential to them, which extended westward from Virginia, North Carolina, and Georgia. The compromise also established a geographic line for the exclusion of slavery, which approach was not challenged until the debate over the admission of Missouri to statehood in 1819-1820. The eventual Missouri Compromise retained that solution, although a different geographic line was drawn. The fugitive slave provision and its successors were generally enforced until the 1830s, when the issue began to vex American politics and pit various states against each other and the federal government.

Article III of the Ordinance declared, “Religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools, and the means of education shall forever be encouraged.” This affirmation reflected republican theory of the time. John Adams would write in 1798, “Our Constitution was made only for a moral and religious People.” George Washington made a similar point in his Farewell Address on September 19, 1796, “Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports. . . . And let us with caution indulge the supposition that morality can be maintained without religion.” In that same speech, Washington tied religion and morality to human happiness and to popular and free government. Article III thus embodied a classic conception of the path to human fulfillment (“happiness”) and virtuous citizenship. Training in these necessary virtues must start early. Thus, schools were needed. In contrast to modern sensibilities, there was no scruple at the time that this would be an improper establishment of religion. The earlier Ordinance of 1785 had provided that in each surveyed township a certain area would be set aside to build schools. This article called for the spirit that would animate their physical structure.

The Confederation’s greatest achievement proved to be its last. The Northwest Ordinance had to be renewed when the Constitution of 1787 replaced the Confederation. The new Congress did so, with minor changes, in 1789, and President Washington signed the bill into law on August 7 of that year. On May 26, 1790, the Southwest Ordinance was approved to organize the territory south of the Ohio River. The terms of that statute were similar to those of its northern counterpart, except in the crucial matter of slavery. The Southwest Ordinance prohibited Congress from making any laws within the territory that would tend to the emancipation of slaves. This signaled Congress’s willingness to permit the “peculiar institution” to be extended into new states, if the settlers wished. Taken together with the Northwest Ordinance, the statutes set the pattern for compromise on the slavery issue that lasted until the 1850s. Intended to organize the “Old Southwest,” the Southwest Ordinance ultimately governed only Tennessee’s passage to statehood. The Northwest Ordinance affected a much larger area and lasted longer, ending with the admission of Wisconsin to the union in 1848.


Guest Essayist: Joerg Knipprath

The Declaration of Independence formalized the revolutionary action of the Second Continental Congress of the thirteen states, but it did not establish a plan of government at the highest level of this American confederacy. The members of that body understood that such a task needed to be done to help their assembly move from a revolutionary body to a constitutional one. A political constitution in its elemental form merely describes a set of widely shared norms about who governs and how the governing authority is to be exercised. A collection of would-be governors becomes constitutional when a sufficiently large portion of the population at least tacitly accepts that assemblage as deserving of political obedience. Such acceptance may occur over time, even as a result of resigned sufferance. Presenting a formal plan of government to the population may consolidate that new constitutional order more quickly and smoothly.

That process was well underway at the state level before July 4, 1776. Almost all colonies had provincial congresses by the end of 1774, which, presently, assumed the functions of the previous colonial assemblies and operated without the royal governors. In 1775, the remaining three colonies, New York, Pennsylvania, and Georgia, followed suit. Although they forswore any design for independence, as a practical matter, these bodies exercised powers of government, albeit as revolutionary entities.

In 1776, the colonies moved to formalize their de facto status as self-governing entities by adopting constitutions. New Hampshire did so by way of a rudimentary document in January, followed in March by South Carolina. A Virginia convention drawn from the House of Burgesses drafted a constitution in May and adopted it in June. Rhode Island and Connecticut simply used their royal charters, with suitable amendments to take account of their new republican status. On May 10, still two months before the Declaration of Independence, the Second Continental Congress, somewhat late to the game, resolved that the colonies should create regular governments. These steps, completed in 1777 by the rest of the states, other than Massachusetts, established them as formal political sovereignties, although their continued viability was uncertain until the British military was evicted and the Treaty of Paris was signed in 1783.

At the level of the confederacy, the Second Continental Congress continued to act as a revolutionary assembly, but took steps to establish a formal foundation for that union beyond resolutions and proclamations. A committee of 13, headed by John Dickinson of Pennsylvania, the body’s foremost constitutional lawyer, completed an initial draft in July, 1776. That draft was rejected, because many members claimed it gave too much power to Congress at the expense of the states. Although time was of the essence to set up a government to run the war effort successfully, Congress could not agree to a plan until November 15, 1777, when they voted to present the Articles of Confederation to the states for their approval.

Ten states approved in fairly short order by early 1778, two within another year. Maryland held out until March 1, 1781, just a half year before the military situation was decided in favor of the Americans as a result of the Battle of Yorktown. Since the Articles required unanimous consent to go into effect, this meant that the war had been conducted without a formal governmental structure. But necessity makes its own rules, and the Congress acted all along as if the Articles had been approved. Such repeated and consistent action, accepted by all parties, established a de facto constitution. While the British might demur, at some point between the approval of the Articles in Congress and Maryland’s formal acceptance, the Congress ceased to be merely a revolutionary body of delegates and became a constitutional body. Maryland’s belated action merely formalized what already existed. The Continental Congress became the Confederation Congress, although it was still referred to colloquially by its former name.

One of the persistent arguments about the Articles questions their political status. Were they a constitution of a recognized separate sovereignty, or merely a treaty among essentially independent entities? There clearly are textual indicia of each. The charter was styled “Articles of Confederation and Perpetual Union,” a phrase repeated emphatically in the document. On the other hand, Article II assured each state that it retained its “sovereignty, freedom, and independence, and every Power, Jurisdiction, and right, which is not … expressly delegated to the United States.” Moreover, Article III expressly declared that the states were severally entering into “a firm league of friendship with each other, ….”

Article I provided, “The Stile of this confederacy shall be ‘The United States of America.’” That suggests a separate political entity beyond its component parts. Yet the document had numerous references to the “united states in congress assembled,” and defined “their” actions. This, in turn, suggests that the states were united merely in an operative capacity, and that an action by Congress merely represented those states’ collective choice. Indeed, the very word “congress” is usually attached to an assemblage of independent political entities, such as the Congress of Vienna.

As an interesting note, such linguistic nods to state independence continue in some fashion under the Constitution of 1787. Federal laws are still enacted by a “Congress.” More significantly, each time the phrase “United States” appears in the Constitution where the structure makes the singular or plural form decisive, the plural form is used. For example, Article III, section 3 declares, “Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, ….”

The government established by the Articles had the structure of a classic confederation. Theoretical sovereignty remained in the states, and practical sovereignty nearly did. The Articles were a union of states, not directly of citizens. The state legislatures, as part of the corporate state governments, rather than the people themselves or through conventions, approved the Articles. Approval had to be unanimous: each state had to agree. The issue of state representation proved touchy, as it would later in the Philadelphia convention that drafted the Constitution of 1787. While the larger states wanted more power, based on factors such as wealth, population, and trade, this proved to be both too difficult to calculate and politically unacceptable to the smaller states. Due to the need to get something drafted during the war crisis, the solution was to continue with the system of state equality used in the Continental Congress and to leave further refinements for later. States were authorized, however, to send between two and seven delegates who would caucus to determine their state delegation’s vote. This state equality principle was also consistent with the idea of a confederation of separate sovereignties.

The Confederation Congress had no power to act directly on individuals, but only on the states. It was commonly described as a federal head acting on the body of the states. Congress also had no enforcement powers. It could requisition, direct, plead, cajole, and admonish, but nothing more. Much depended on good faith action by state politicians or on the threat of interstate retaliation if a state failed to abide by its obligations. Of course, such retaliation, done vigorously, might be the catalyst for the very evil of disunion that the Articles were designed to prevent.

From a certain perspective, the Congress was an administrative body over the operative political units, the states, at least as far as matters internal to this confederation. This was consistent with the “dominion theory” of the British Empire that Dickinson and others had envisioned for the colonies before the Revolution, where the colonies governed themselves internally and were administered by a British governor-general who represented the interests of the empire. Thus, Congress could not tax directly. Instead, it would direct requisitions apportioned on the basis of the assessed value of occupied land in each state, which the states were obligated to collect. With funds often uncollected and states frequently in arrears, Congress had to resort to borrowing funds from foreign sources and emitting “bills of credit,” that is, paper money unbacked by gold or silver. Those emissions, the Continental currency, quickly depreciated. “Not worth a continental” became a phrase synonymous with worthless. Neither could Congress regulate commerce directly, although it could oversee disputes among states over commerce and other issues, by providing a forum to resolve them. Article IX provided a complex procedure for the selection of a court to resolve such “disputes and differences … between two or more states concerning … any cause whatever.”

It was easy for critics, then and more recently, to dismiss the Articles as weak and not a true constitution of an independent sovereign. The British foreign secretary Charles James Fox sarcastically advised John Adams, then American minister to London, when the latter sought a commercial treaty with Britain after independence, that ambassadors from the states needed to be present, since the Congress would not be able to enforce its terms. Yet, a union it was in many critical ways, as was recognized in the preamble to its successor: “We, the People of the United States, in Order to form a more perfect Union, ….” The indissolubility of this union was attested to by affirmations of its perpetuity. The Articles gave the Congress power over crucial matters of war and peace, foreign relations, control of the military, coinage, and trade and other relations with the Indians. Indeed, the states were specifically prohibited from engaging in war, conducting foreign relations, or maintaining naval or regular peacetime land forces, without consent from Congress. As to congressional consent, exceptions were made if the state was actually invaded by enemies or had received information that “some nation of Indians” was preparing to invade before Congress could address the matter. A state could also fit out vessels of war, if “such state be infested by pirates,” a matter that seems almost comical to us, but was of serious concern to Americans into the early 19th century.

The controversial matter of who controlled the western lands, Congress or the states, was not addressed. Nor did Congress have any power to force states to end their conflicting claims over such lands, except to provide a forum to settle disputes if a state requested that. Instead, Congress in 1779 and 1780 passed resolutions to urge the states to turn over such disputed land claims to Congress, which most eventually did. This very issue of conflicting territorial claims caused Maryland to refuse its assent to the Articles until 1781.

Yet, it was precisely on this issue of control over the unsettled lands where Congress unexpectedly showed it could act decisively. Despite lacking clear authority to do so, the Confederation Congress passed the Land Ordinance of 1785 and the even more important Northwest Ordinance of 1787. Those statutes opened up the western lands for organized settlement, a matter that had been dear to Americans since the British Proclamation of 1763 effectively put the Trans-Allegheny west off-limits to White settlers. Ironically, during the later debate on the Constitution of 1787, James Madison, in Federalist No. 38, theatrically used these acts of strength by Congress to point to the dangers of unchecked unenumerated powers. This was quite in contrast to the usual portrait of the Confederation’s weakness that Madison and others painted. To be fair, Madison conceded that Congress could not have done otherwise.

Significant also were the bonds of interstate unity that the Articles established. Article IV provided, “The better to secure and perpetuate mutual friendship and intercourse among the people of the different states in this union, the free inhabitants of each state shall be entitled to all privileges and immunities of free citizens in the several states; ….” These rights would include free travel and the ability to engage in trade and commerce. As well, that Article required that fugitives be turned over to the authorities of the states from which they had fled, and that each state give full faith and credit to the decisions of the courts in other states. These same three clauses were brought into Article IV of the Constitution of 1787.

The Articles were doomed by their perceived structural weakness. Numerous attempts to reform them had foundered on the shoals of the required unanimity of the states for amendments. Another factor that likely caused the Philadelphia Convention of 1787 to abandon its quest merely to amend the Articles was their complexity and prolixity, with grants of power followed by exceptions, restrictions, and reservations set out in excruciating detail. The Articles’ weak form of federalism was replaced by the stronger form of the Constitution of 1787, stronger in the sense that the latter represented a more clearly distinct entity of the United States, with its republican legitimacy derived from the same source as the component states, that is, the people.

All of that acknowledged, the victor writes the history. Defenders of the Articles at the time correctly pointed out that this early constitution, drafted under intense pressure at a critical time in the country’s history and intended to deal foremost with the exigencies of war, had been remarkably successful. It was, after all, under this maligned plan that the Congress had formed commercial and military alliances, raised and disciplined a military force, and administered a huge territory, all while defeating a preeminent military and naval power to gain independence.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:


Guest Essayist: Joerg Knipprath

The adoption of the Declaration of Independence of “the thirteen united States of America” on July 4, 1776, formally ended a process that had been set in motion almost as soon as colonies were established in what became British North America. The early settlers, once separated physically from the British Isles by an immense ocean, in due course began to separate themselves politically, as well. Barely a decade after Jamestown was founded, the Virginia Company in 1619 acceded to the demands of the residents to form a local assembly, the House of Burgesses, which, together with a governor and council, would oversee local affairs. This arrangement eventually was recognized by the crown after the colony passed from the insolvent Virginia Company to become part of the royal domain. This structure then became the model of colonial government followed in all other colonies.

As the number and size of the colonies grew, the Crown sought to increase its control and draw them closer to England. However, those efforts were sporadic and of limited success during most of the 17th century, due to the isolation and the economic and political insignificance of the colonies, the power struggles between the King and Parliament, and the constitutional chaos caused in turn by the English Civil War, the Cromwell Protectorate, the Restoration, and the Glorious Revolution. There was, then, a period of benign neglect under which the colonies controlled their own affairs independent of British interference, save the inevitable local tussles between the assemblies and the royal governors jockeying for political position. Still, the increasingly imperial objectives of the British government and expansion of British control over disconnected territories eventually convinced the British of the need for more centralized policy.

This change was reflected in North America by a process of subordinating the earlier charter- or covenant-based colonial governments to more direct royal control, one example being the consolidation in the 1680s of the New England colonies, plus New York and New Jersey, into the Dominion of New England. While the Dominion itself was short-lived, and some of the old colonies regained charters after the Glorious Revolution, their new governments were much more tightly under the King’s influence. Governors would be appointed by the King, laws passed by local assemblies had to be reviewed and approved by royal officials such as the Board of Trade, and trade restrictions under the Navigation Acts and related laws were enforced by British customs officials stationed in the colonies. William Penn and the other proprietors retained their possessions and claims, but the King, frequently allying himself with anti-proprietor sentiments among the settlers, forced them to make political concessions that benefited the Crown.

Trade and general imperial policy were dictated by Parliament and administered from London. Still, the colonial assemblies retained significant local control and, particularly in the decades between 1720 and 1760, took charge of colonial finance through taxation and appropriations and appointment of finance officers to administer the expenditure of funds. While direction of Indian policy, local defense, and intercolonial relations belonged to the Crown, in fact even these matters were left largely to local governments. The Crown’s interests were represented in the person of the royal governor. However strong the political position of those governors was in theory, in practice they were quite dependent on the colonial assemblies for financial support. The overall division of political authority between the colonial governments and the British government in London was not unlike the federal structure that the Americans adopted to define the state-nation relationship after independence.

A critical change occurred with the vast expansion of British control over North America and other possessions in the wake of the Seven Years’ War (the French and Indian War) in 1763. Britain was heavily indebted from the war, and its citizens labored under significant taxes. Thus, the government saw the lightly-taxed colonials as the obvious source of revenue to contribute to the cost of stationing a projected 10,000 troops to defend North America against hostile Indian tribes and against French or Spanish forces. Parliament’s actions to impose taxes and, after colonial protests, abandon those taxes, only to enact new ones, both emboldened and infuriated the Americans. This friction led to increasingly vigorous protests by various local and provincial entities and to “congresses” of the colonies that drew them into closer union a decade before the formal break. Colonials organized as the Sons of Liberty and similar grass-roots radicals destroyed British property and attacked royal officials, sometimes in brutal fashion. At the same time, British tactics against the Americans became more repressive, in ways economic, political, and, ultimately, military. That cycle began to feed on itself in a chain reaction that, by the early 1770s, was destined to lead to a break.

The progression from the protests of the Stamp Act Congress in 1765, to the Declaration of Resolves of the First Continental Congress and subsequent formation of the Continental Association to administer a collective boycott against importation of British goods in 1774, to the Declaration of the Causes and Necessity of Taking Up Arms issued by the Second Continental Congress in 1775, to the Declaration of Independence of 1776, shows a gradual but pronounced evolution of militancy in the Americans’ position. Protestations of loyalty to King and country and disavowal of a goal of independence were still common, but were accompanied by increasingly urgent promises of resistance to “unconstitutional” Parliamentary acts. American political leaders and polemicists advocated a theory of empire in which the local assemblies, along with a general governing body of the united colonies, would control internal affairs and taxation, subject only to the King’s assent. This “dominion theory” significantly reduced the role of Parliament, which would be limited to control of external commerce and foreign affairs. It was analogous to the status of Scotland within the realm, but was based on the constitutional argument that the colonies were in the King’s dominion, having emerged as crown colonies from the embryonic status of their founding as covenant, corporate, or proprietary colonies. Had the British government embraced such a constitutional change, as Edmund Burke and some other members urged Parliament to do, the resulting “British Commonwealth” status likely would have delayed independence until the next century, at least.

In early 1776, sentiment among Americans shifted decisively in the direction of the radicals. Continued military hostilities, the raising of American troops, the final organization of functioning governments at all levels, the realization that the British viewed them as a hostile population reflected in the withdrawal of British protection by the Prohibitory Act of 1775, and Thomas Paine’s short polemic Common Sense opened the eyes of a critical mass of Americans. They were independent already, in everything but name and military reality. Achieving those final steps now became a pressing, yet difficult, task.

The Declaration was the work of a committee composed of Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston. They were appointed on June 11, 1776, in response to a resolution introduced four days earlier by Richard Henry Lee acting on the instruction of the state of Virginia. Jefferson prepared the first draft, while Franklin and the others edited that effort to alter or remove some of the more inflammatory and domestically divisive language, especially regarding slavery. They completed their work by June 28, and presented it to Congress. On July 2, Congress debated Lee’s resolution on independence. The result was no foregone conclusion. Pennsylvania’s John Dickinson and Robert Morris, both of whom had long urged caution and conciliation, agreed to stay away so that the Pennsylvania delegation could vote for independence. The Delaware delegation was deadlocked until Caesar Rodney made a late appearance in favor. The South Carolina delegation, representing the tidewater-based political minority that controlled the state, was persuaded to agree. The New York delegates abstained until the end. Two days later, the Declaration itself was adopted. It was proclaimed publicly on July 8, ordered engrossed on July 19, and signed by most delegates on August 2.

Jefferson claimed that he did not rely on any book or pamphlet to write the Declaration. Yet the bill of particulars in the Declaration that accused King George of numerous perfidies is taken wholesale, and frequently verbatim, from Chapter II of the Virginia Declaration of Rights and Constitution proposed by a convention on May 6, 1776, and approved in two phases in June. Moreover, Jefferson’s Declaration clearly exposes its roots in John Locke’s Second Treatise of Government. It would be astounding if Jefferson, a Virginian deeply involved in the state’s affairs, had been unaware of such a momentous event or oblivious to the influence of Locke on the many debates and publications of his contemporaries.

Three fundamental ideas coalesced in the Declaration: 17th-century social compact and consent of the governed as the ethical basis of the state, a right of revolution if the government violates the powers it holds in trust for the people, and classic natural law/natural rights as the divinely-ordained origin of rights inherent in all humans. The fusion of these different strands of political philosophy showed the progression of ideas that had matured over the preceding decade from the at-times simplistic slogans about the ancient rights of Englishmen rooted in the king’s concessions to the nobles in Magna Charta and from the incendiary proclamations by the Sons of Liberty and other provocateurs.

The structure was that of a legal brief. The King was in the dock as an accused usurper, and he and the jury of mankind were about to hear the charges and the proposed remedy. At the heart of the case against the King were some fundamental propositions, “self-evident truths”: Mankind is created equal; certain rights are “unalienable” and come from God, not some earthly king or parliament; governments “derive their just powers from the consent of the governed” and exist to secure those rights; and, borrowing heavily from Locke, there exists a residual recourse to revolution against a “long train of abuses and usurpations.”

Once the legal basis of the complaint was set, supporting facts were needed. Jefferson’s list is emotional and provocative. As with any legal brief, it is also far from impartial or nuanced. Some of the nearly thirty accusations seem rather quaint and technical for a “tyrant,” such as having required legislative bodies to sit “at places unusual, uncomfortable, and distant from the depository of their public Records.” Others do not strike us, under current circumstances, as harsh as they might have seemed at the time, such as King George having “endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither.” At least one other, describing the warfare by “the merciless Indian Savages,” sounds politically incorrect to the more sensitive among our modern ears.

The vituperative tone of these accusations is striking and results in a gross caricature of the monarch. But this was a critical part of the Declaration. Having brushed aside through prior proclamations and resolves Parliament’s legitimacy to control their affairs, the Americans needed to do likewise to the King’s authority. King George was young, energetic, and politically involved, had a handsome family, and was generally popular with the British people. Many Americans, too, had favored him based on their opinion, right or wrong, that he had been responsible for Parliament repealing various unpopular laws, such as the Stamp Act. As well, as Hamilton remarked later at the constitutional convention in Philadelphia, the King was bound up in his person with the Nation, so it was emotionally difficult for many people to sever that common identity between themselves and the monarch. To “dissolve the political bands” finally, it would no longer suffice to blame various lords and ministers for the situation; the King himself must be made the villain.

Before the ultimate and extraordinary remedy of independence could be justified, it must be shown, of course, that more ordinary relief had proved unavailing. Jefferson mentions numerous unsuccessful warnings, explanations, and appeals to the British government and “our British brethren.” Those having proved ineffective, only one path forward remained: “We, therefore, the Representatives of the united States of America … declare, That these United Colonies are … Independent States.”

The Declaration was a manifesto for change, not a plan of government. That second development, moving from a revolutionary to a constitutional system, would have to await the adoption of the Articles of Confederation and, eventually, the Constitution of 1787. True, since the early days of the Republic, various advocates of causes such as the abolition of slavery have held up the Declaration’s principles of liberty and equality as infusing the “spirit” of the Constitution. But this has always been more a projection by those advocates of their own fervent wishes than a measure of what most Americans in 1776 actually believed.

Being “created equal” was a political idea in that there would be no hereditary monarchy or aristocracy in a republic based on consent. It was also a religious idea, in that all were equal before God. It did not mean, however, that people were equal “in their possessions, their opinions, and their passions,” as James Madison would mockingly write in The Federalist No. 10. He and Jefferson, along with most others, were convinced that, if people were left to their own devices, the natural inequality among mankind would sort things out socially, politically, and economically. Even less did such formal equality call for affirmative action by government to cure inequality of condition. It was, after all, as Madison explained in that same essay, “a rage for paper money, for an abolition of debts, for an equal division of property” that were the “improper or wicked project[s]” against which the councils of government must be secured.

In the specific context of slavery, the Declaration trod carefully. Jefferson’s criticism of the British negation of colonial anti-slave trade laws in his original draft of the Declaration was quickly excised by cooler heads who did not want to stir that pot, especially since almost all of the states permitted slavery. Jefferson’s later lamentation regarding slavery that “I tremble for my country when I reflect that God is just” was a distinct minority view. Many Americans had escaped grinding poverty in Europe, had served years of indentured servitude, or lived under dangerous and hardscrabble frontier conditions. As a result, as the historian Forrest McDonald observed, few of them trembled with Jefferson. It remained for later generations and the crucible of the Civil War and Reconstruction to realize the promise of equality that the Declaration held for the opponents of slavery.


Guest Essayist: Joerg Knipprath

“In the name of God, amen. We whose names are under written … [h]aving undertaken for the Glory of God, and advancement of the christian [sic] faith, and the honour of our King and country, a voyage to plant the first colony in the northern parts of Virginia; do by these presents solemnly and mutually, in the presence of God and one another, covenant and combine ourselves together into a civil body politick, for our better ordering and preservation, and furtherance of the ends aforesaid: And by virtue hereof, do enact, constitute and frame such just and equal laws, ordinances, acts, constitutions and officers, from time to time, as shall be thought most meet and convenient for the general good of the colony ….”

Thus pledged 41 men on board the ship Mayflower that day, November 11, 1620, having survived a rough 64-day sea voyage, and facing an even more grueling winter and a “great sickness” like the one that had ravaged the Jamestown colony in Virginia. These Pilgrim Fathers had sailed to the New World with their families from exile in Leyden, Holland, with a stop in England to secure consent from the Virginia Company to settle on the latter’s territory. They were delayed by various exigencies from leaving England until the fall of 1620. The patent from the Company permitted the Pilgrims to establish a “plantation” near the mouth of today’s Hudson River, at the northern boundary of the Company’s own grant.

For whatever reason, either a major storm, as the Pilgrims claimed, or intent to avoid the reach of English creditors’ claims on indentured servants, as some historians allege, the ship ended up at Cape Cod on November 9. Bad weather and the precarious state of the passengers made further travel chancy, and the Pilgrim leaders decided to find a nearby place for settlement. Cape Cod was deemed unsuitable for human habitation. Instead, the Pilgrims disembarked on December 16 at Plymouth, so named earlier by Captain John Smith of the Virginia Company during one of his explorations. Since they were now a couple of hundred miles outside the Virginia Company’s territory, their patent was worthless. It became necessary to establish a new binding basis for government of their society.

The result was the Mayflower Compact, infused with a remarkable confluence of religious and political theory. The Pilgrims, like the Puritans who settled Massachusetts Bay in 1630, were dissenters from the Church of England. The former opted to separate themselves from what they perceived as the corruption of the Church of England, whereas the less radical nonconformists, the Puritans, sought to reform that church from within. Both groups, however, found the political and religious climate under the Stuart monarchs to be unfriendly to dissenters.

As common historical understanding has it, both groups sought to escape to the New World to practice their religion freely. However, that meant their religion. They set out to establish their vision of the City of God in an earthly commonwealth. As the Compact stated, their move was “undertaken for the Glory of God, and advancement of the christian faith.” Neither group set out to establish a classically liberal secular society tolerant of diverse faiths or even a commonwealth akin to the Dutch Republic, with an established church, yet accepting of religious dissent. The corrosive effect of such dissent would have been particularly dangerous to the survival of the small Pilgrim community clinging precariously to its isolated new home in Plymouth. Indeed, once the colony was established and became focused on commerce and trade, more devout members disturbed by this turn to the material left to form new communities of believers.

The religious orientation of the Mayflower Compact grew out of the Pilgrims’ Calvinist faith. In contrast to the Roman Catholic Church and its successor establishment in the Church of England, Calvinists rejected centralized authority with its dogmas and traditions as having erected impious barriers and distractions to a personal relationship with God. Instead, the congregation of like-minded believers gathered in community. It was a community founded on consent of the participants and given meaning by their shared religious belief. Those who rejected significant aspects of that belief would leave (or be shunned).

In Europe, those religious communities operated within, and chafed under, hostile existing political orders, most of which still were organized on principles other than consent of the participants. Once transplanted across the Atlantic Ocean, the Pilgrims were free of such restraints and could organize their religious life together with their political commonwealth within the Calvinist congregational framework. Their brethren, the Puritans of Massachusetts Bay, established their colony on the same type of religious foundation, as did a number of later communities that spread from the original settlements. The successor to the Puritans and Pilgrims was the Congregational Church, organized along those communitarian lines based on consent. That church became the de facto established church of Massachusetts Bay Colony and the state of Massachusetts under a system of state tax support, a practice that survived until 1833.

On the political side, the Mayflower Compact was one of three types of constitutions among the colonies in British North America. The others were the joint stock company or corporation model of the Virginia Company and the Massachusetts Bay Company, and the proprietary grant model, the dominant 17th-century form used for the remaining colonies, such as the grant to Lord Calvert for Maryland and William Penn for Pennsylvania. Of the three, the Mayflower Compact most profoundly and explicitly rested on the consent of the governed. It provided the model for other early American “constitutions” in New England, such as the 1636 compact among Roger Williams and his followers in founding Providence, Rhode Island, the compacts among settlers that similarly established Newport and Portsmouth in Rhode Island and the New Haven Colony in 1639, and, most significantly, the Fundamental Orders of Connecticut. The Orders, in 1639, united the Connecticut River Valley towns of Hartford, Windsor, and Wethersfield and provided a formal frame of government. Like the Mayflower Compact, the Orders rested on the consent of the people to join in community, but in their structure they closely resembled the Massachusetts Bay Company agreement.

The political analogue to the congregational organization of the Calvinist denominations was the “social compact” theory, an ethical basis for the state that also rested on the consent of the governed. Classical Greek theory had held that the polis represented a progression of human association beyond family and clan and evolved as the consummate means conducive to human flourishing. In its medieval scholastic version epitomized by the writings of Thomas Aquinas, the state was ordained by God to provide for the welfare and happiness of its people within an ordered universe governed by God’s law. By contrast, the social compact theory rested on the will of the individuals that came together to found the commonwealth. It was a rejection of the static universal political (and religious) order that had governed Western Christendom and in which one’s status and privileges depended on one’s place in that order. After the Reformation, Protestant sects had many, sometimes conflicting, assumptions about the nature and the specifics of the relationship between the believer and God. In similar manner, social compact theory was not a unified doctrine, but varied widely in its details of the relationship between the individual and the state, depending on the particular proponent.

The two social compact theorists with the greatest influence on Americans of the Revolutionary Era were Thomas Hobbes and John Locke, with the latter’s postulates the more evident among American essayists and political leaders. Locke’s reflections on religion and politics were greatly influenced by the Puritanism of his upbringing. Although the governments established under the various state constitutions, as well as those created through the Articles of Confederation and the Constitution of 1787, more closely resembled the corporate structures of the colonial joint stock company arrangements, they were formed through the direct or indirect consent of the governed. The Constitution of 1787, for example, conspicuously required that no state would become a member of the broader “united” community without its consent. In turn, such consent had to be obtained through the most “explicit and authentic act” of the state’s people practicable under the circumstances, that is, through a state convention.

To whatever concrete extent the Mayflower Compact’s foundation on consent may have found its way into the organizing of American governments during the latter part of the 18th century, it is the Declaration of Independence that most clearly incorporates the compact’s essence. The influence of Locke and his expositors on Thomas Jefferson’s text has long and frequently been analyzed. But it is worth noting some of the language itself. The Declaration asserted that Americans were no longer connected in any bond (that is, any obligation) to the people of Britain, just as the Pilgrims, having sailed to a wilderness not under the control of the Virginia Company, believed that they were not bound by the obligations of the patent they had received. The Americans would establish a government based on the “consent of the governed,” “laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness,” just as the signatories of the Mayflower Compact had pledged.

So it came about that a brief pledge, signed by 41 men aboard a cramped vessel in 1620, “with no friends to welcome them, no inns to entertain or refresh them, no houses, or much less towns to repair unto to seek for succour,” with “a mighty ocean which they had passed…and now separate[d] them from all the civil parts of the world” behind them, and with “a hideous and desolate wilderness, full of wilde beasts and wilde men” in front of them, deeply affected the creation of the revolutionary political commonwealth founded in the New World a century and a half later.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:
