Essay 15 – Guest Essayist: Kyle A. Scott

“He has refused his Assent to Laws, the most wholesome and necessary for the public good.”

The first grievance gives us a comprehensive view of the reason for declaring independence. While there are other grievances, some of which fall into different categories, the Crown’s refusal to abide by law, and its substitution of will for law, is the foundation of all claims justifying the move to independence.

The Preamble provides a statement about severing ties with Great Britain, a rebellious, and thus lawless, act. From the view of Great Britain, the Declaration of Independence was an act of treason. To those unfamiliar with the historical record, or who have allegiances to the Crown, the signers were nothing more than disgruntled colonists looking to break away from Britain for no reason other than self-interest. The reason most of us today do not view it as such is that the rebels were successful, and the U.S. has become the greatest republic on historical record. But that sort of post hoc justification is shallow and without merit. The goodness of the Declaration, and the intentions of the signers, are best found in the grievances, for therein lies a justification for independence through the pursuit of the public good as achieved through the rule of law and proper governance.

There is a difference between just and unjust rebellion and the signers are making the case that their actions are just because of their commitment to the law and King George’s refusal to abide by law and accepted practice. John Locke, the obvious muse of Thomas Jefferson, wrote, “The difference betwixt a king and a tyrant to consist only in this, that one makes the laws the bounds of his power, and the good of the public, the end of government; the other makes all give way to his own will and appetite…Where-ever law ends, tyranny begins.” By positioning their actions within the context of law, those signing the Declaration position themselves within a tradition that authorized the dissolution of government when the rule of law was no longer in force.

The Declaration is usually read as a philosophical document rather than a governing document. A political theory can certainly be distilled from the Preamble and the grievances, but the grievances themselves serve as a governance structure. More than a theoretical justification for independence, or an articulation of high-minded ideals that a government ought to embody, the grievances lay out in practical terms how a government should function by describing, through contrast, what legitimate government requires. An illegitimate government is ruled by an executive who refuses to assent to the laws; therefore, a legitimate government must have an executive that adheres to and enforces duly passed legislation.

Embodied in the first grievance is the political principle that laws passed by a representative body should be assented to by the executive. Thus, the assumption is that there needs to be a separation between the legislative and executive functions of government and that those two bodies are equal. It also posits that the laws, and not the caprice or whim of those in government, ought to restrict the actions of the government. The primacy of the rule of law is clear throughout the Declaration, but the first grievance gives us a clear articulation of a separation of powers as being essential to—if not an assumed trait of—legitimate government. While a governing document in its own right, the Declaration anticipates the modes and orders that would be codified in the U.S. Constitution.

Kyle Scott, PhD, MBA, serves on the Board of Trustees for the Lone Star College System, teaches political science at the University of Houston, and is an affiliated scholar with the Baylor College of Medicine’s Center for Health Policy and Medical Ethics. Kyle has authored over 70 op-eds, dozens of academic articles, and five books, the most recent of which is The Limits of Politics: Making the Case for Literature in Political Analysis. He can be reached on Twitter: @kanthonyscott.

Podcast by Maureen Quinn



Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Essay 14 – Guest Essayist: Val Crofts

“But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.— Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government. The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.”

The Declaration of Independence serves as the cornerstone of our nation, and the men who created this statement of natural rights did not do so lightly. Their reasons for breaking from Great Britain were not “light and transient causes,” and they wanted to make sure that the world that would read this declaration understood the events and circumstances that brought the colonies to the point of separation in the summer of 1776.

The above portion of the Declaration marks the point in the document where the necessary change required by the colonies is identified as independence, while also showing how the colonies arrived at this point and who is to blame. The document had previously stated that the colonies were separating from Great Britain and had begun to explain the justification for doing so. It also details that the colonies are not taking this separation lightly, but have strong reasons for it. The Declaration notes that most people throughout history have been content to suffer under oppressive forms of government, but these men are not. In this section, the writers of the Declaration are submitting to the world why they will not suffer under the rule of King George III any longer.

“A long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism…” – The colonies had been enduring what they felt were abuses and usurpations (exercises of power without the right to do so) for years. The French and Indian War had ended in 1763, and the British Empire was heavily in debt as a result. The British felt that the American colonies were going to have to shoulder some of the burden of paying this debt.

The colonists were also told where they could and could not settle by the Proclamation of 1763, which barred settlement west of the Appalachian Mountains. The colonists were outraged by this and by the subsequent taxes and acts that followed from 1763 through the beginning of the American Revolution in 1775. The colonists, as British subjects, also felt that their rights under the English Constitution were not being recognized or respected. Some colonists also believed that King George III was abusing his power at the expense of the colonists and that, because of this, he was not fit to be their king.

“…it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.”

After realizing that their king had betrayed them, the colonists felt that they needed to do something about it. They believed that not only was it their right to cast off the king and the British Empire as their rulers; it was their duty! They felt called to do this for themselves and for the future generations of their new nation. The king’s actions had led the colonists to this place in history, and their sense of betrayal ran deep. The colonies then adopted measures to prevent these actions from continuing. Those who boycotted British goods and protested the king’s and Parliament’s legislation believed they were being deprived of their rights as free Englishmen and that they deserved representation in the British Parliament as a voice for their concerns. They took action when those rights were not granted, and those actions would lead the colonists toward revolution.

“Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government.”

Most colonists had tried to maintain patience throughout the various acts of Parliament and the effects and consequences that resulted from them. That patience came partly from the fact that most colonists believed a reconciliation with the King would occur. They wanted that to happen. They were British subjects and hoped for an amicable reunion. However, after several acts, taxes, and policies that the colonists felt were unfair and oppressive of their rights as English subjects, they had had enough and felt it was time to do something to remedy the situation. The colonists arrived at the conclusion that they needed to change their circumstances. By the summer of 1776, after over a year of open warfare, it was difficult, if not impossible, to reconcile with the mother country. The colonists wanted to escape an oppressive government that they believed was not respecting them or looking out for them; they wanted a better life for themselves and their descendants. The results of that oppression now made it absolutely necessary for the colonists to change their form of government from a monarchy to, eventually, a republic.

“The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States.”

In the preceding years (the 1760s and 1770s), the actions of the King and Parliament indicated to the colonists that England was trying to oppress them. These actions seemed designed to harm and mistreat the colonies, and King George III appeared intent on ruling as a tyrant. As a result of these actions, the colonies were now going to leave the British Empire.

“To prove this, let Facts be submitted to a candid world.”

The Declaration now transitions to a list of grievances offering evidence to the world of how the colonists had been suffering under this monarch and his actions. These facts attempt to prove that the king is an oppressive ruler unfit to govern these colonies. They also attempt to show that he has been, and will continue to be, an oppressive and tyrannical ruler, which is why the colonies are declaring their independence.

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, and has been there for 16 years. He teaches AP U.S. Government and Politics, U.S. History, and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.

Podcast by Maureen Quinn




Essay 13 – Guest Essayist: The Honorable David L. Robbins

“Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed.”

The above passage in the United States Declaration of Independence warns against revolution for “light and transient causes” that overthrows government long established. The British Monarchy dates to 1066, when England was conquered by the Normans, and while some monarchs were removed forcefully, monarchs had ruled England since this early beginning. The Declaration of Independence was challenging a “long established” government that had ruled England for 710 years, and the members of the Second Continental Congress were aware there would be challenges in forming a new nation.

The American revolutionary leaders included many well-educated, wealthy businessmen. They all realized that signing this document would be signing their death sentences if the revolution were unsuccessful. Moreover, their links to England were not casual, but well embedded in colonial life. Family, customs, education, language, business, and even religion were long-term bonds between the colonies and England. But actions by England had become insufferable.

The Founders of America did not necessarily want to change the whole world, even though they did, but after years of insufferable treatment by King George, his government and military, they believed they had to attempt to throw off the “forms to which they are accustomed.” The Founders pulled material from many different sources to form a new government, but they didn’t necessarily have all the answers to form a successful government to replace the British monarchy.

The initial Articles of Confederation were deemed inadequate by 1785, just two years after the end of the Revolutionary War. In 1787, delegates met in Philadelphia and debated and drafted the Constitution of the United States. While the Constitution was deemed immensely superior to the Articles of Confederation, several states refused to ratify the new Constitution without additional assurances, which produced the first ten amendments, referred to as the Bill of Rights.

The Constitution of the United States is an incredible document. It has survived for over 230 years, and after the original ten amendments, only 17 additional amendments have been approved, fewer than one every 13 years. Through the election of Representatives, Senators, and the President and Vice President, the document permits change in our government via elections every two, four, and six years. Most of these “mini-revolutions” have been peaceful. However, the history of change in the United States has not always been peaceful.

The U.S. Civil War was about drastically different visions of government, society, and the treatment of people. While these may not have been viewed as “light and transient causes,” the impacts were devastating to the entire country. The U.S. Civil War lasted over four years, from April 12, 1861 to May 9, 1865, and cost over 655,000 lives. It ended with massive changes and new amendments to the Constitution. The U.S. Revolutionary War, by contrast, resulted in approximately 25,000 American deaths and approximately 50,000 deaths in total.

In 1968, the United States was in turmoil during a presidential election year, with a war in Vietnam, riots at home, and the assassination of two prominent national leaders: one a civil rights leader and the other a presidential candidate. During this chaos, a British pop group, the Beatles, released a song called “Revolution” in August, with lyrics demanding change while casting aside violence and destruction. The line from the song, “we all want to change the world,” still resonates today as it did over 200 years ago.

Many individuals, organizations, and political groups over the history of the United States have pushed for change in our country, some minor, some drastic. Change is inevitable, but the Founders of the United States left a cautionary note in the Declaration of Independence, one hopefully taken to heart by both those wanting change and those resistant to change.

David L. Robbins serves as Public Education Commissioner in New Mexico.


Podcast by Maureen Quinn



Essay 12 - Guest Essayist: Will Morrisey

“…and to institute new Government, laying its foundation on such principles and organizing its powers in such form as to them shall seem most likely to effect their Safety and Happiness.”

In declaring their independence from the British empire, Americans did not merely assert themselves. They declared “the causes which impel them to the Separation” and submitted facts, evidence “to a candid World.” In doing so, they selected a way of arguing that can be understood not only by Americans and Englishmen but by human beings as such. Human beings are by their nature capable of reasoning, of thinking according to the principle of non-contradiction. If I say, ‘Think of a circle,’ you know what I mean, so long as you know the meaning of the words in that sentence. If I say, ‘Think of a square,’ you also know what I mean. But if I say, ‘Think of a square circle,’ you don’t know what I could possibly mean. I have contradicted myself.

A formal argument founded on the principle of non-contradiction is called a logical syllogism. That is exactly what the Declaration of Independence is. A logical syllogism consists of one or more ‘major premises’—the foundations of the argument—one or more ‘minor premises’—typically, specific facts—followed by a conclusion. To give the standard example: ‘All men are mortal. Socrates is a man. Therefore, Socrates is mortal.’ The major premise is a general or foundational statement; the minor premise is a factual statement; the conclusion follows from the two premises. You could disprove the argument by showing that one or both premises are false, or that the conclusion doesn’t follow from the premises, that it somehow violates the principle of non-contradiction. So, for example, if the ‘Socrates’ you are referring to is an angel, the conclusion is wrong, since angels may not be mortal.
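The standard syllogism above is rigorous enough to be checked mechanically. As an illustrative sketch only (the names `Person`, `Man`, `Mortal`, and `socrates` are ours, not the essayist’s), the argument can be verified in the Lean proof assistant:

```lean
-- Major premise (h1): all men are mortal.
-- Minor premise (h2): Socrates is a man.
-- Conclusion: Socrates is mortal, obtained by applying h1 to Socrates.
variable (Person : Type)
variable (Man Mortal : Person → Prop)
variable (socrates : Person)

example (h1 : ∀ p, Man p → Mortal p) (h2 : Man socrates) : Mortal socrates :=
  h1 socrates h2
```

If either premise were unavailable (say, if `socrates` named an angel rather than a man, so that `h2` failed), the conclusion could not be derived, which mirrors the two ways of disproving a syllogism described above.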

In the Declaration of Independence, the clause we are considering is one of the several major premises of the argument; the minor premises are the specific, factual charges against the British king and parliament. The major premises stated before this are the famous ones: that all men are created equal respecting their unalienable rights to life, liberty, and the pursuit of happiness; that men institute government to secure those rights; that the governments they institute derive their just powers from the consent of the governed; and that, conversely, a government that violates a people’s unalienable rights may rightly be abolished by them.

According to the logic of the argument, then, the “consent” of the governed cannot mean simply the assent of the governed. Consent can only mean assent to a government that really does secure the rights human beings have by nature, thanks to their Creator, before they form the government. Once the people no longer consent to their government because it no longer serves the “end” or purpose a government ought to have, not only do they have the right to alter or abolish it, they also have the right, even the obligation, to frame a new government, one that does secure the rights the old government failed to secure.

How will we do that? By doing two things. First, we do it by “laying its foundations” on the foundations or major premises of the Declaration of Independence: the natural, unalienable rights of human beings. Second, we do it by founding a new regime, a regime which includes a government with a new “form,” a new structure, an architecture, which is logically consistent with those natural foundations. By so shaping the means to the end, the form of the government to the defense of natural rights, we can effect our safety and happiness—secure our natural rights in practice, not merely recognize them in theory.

This clause of the Declaration is the link between the Declaration and the preamble to the United States Constitution. Justice, domestic tranquility, common defense, the general welfare, and securing the blessings of liberty are all elements of our safety and happiness as an independent, self-governing people. The Constitution lays out exactly the form or structure of the government designed to achieve those purposes, replacing the Articles of Confederation, which had not achieved them, which in turn had replaced the regime of the British empire, which had violated them.

Thus the right of revolution follows logically from the purpose of government, just as the purpose of government follows logically from the existence of unalienable natural rights in all human beings. In presenting their Declaration of Independence in the form of a logical syllogism, the American Founders justified their action not only to themselves, not only to their “British brethren,” but to a “candid world”—to all human beings who think rationally, wherever and whenever they live.


Will Morrisey is Professor Emeritus of Politics, Hillsdale College; Editor and Publisher, Will Morrisey Reviews




Essay 11 - Guest Essayist: James D. Best

“That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it.”

The “right of the people to alter or abolish” their government is derived from our natural right to self-governance. The notion of self-governance is relatively new. In 1776, the world was ruled by royalty or warrior chieftains. Some upstart colonists then penned the most revolutionary document in the history of man. Kings and queens no longer enjoyed a Divine Right to rule. Instead, the individual was now the one endowed by their Creator with certain unalienable rights. Like most revolutionary visions, this one didn’t suddenly spring onto the world stage. Baron de Montesquieu, John Locke, David Hume, Adam Smith, Thomas Paine, and many others had advocated that “consent of the governed” was dictated by the laws of nature and of nature’s God. Of course, not everyone accepted this concept—certainly not King George III or the English nobility. It took seven years of warfare for the colonies to solidify their claim of self-governance.

“The infant periods of most nations are buried in silence, or veiled in fable, and perhaps the world has lost little it should regret. But the origins of the American Republic contain lessons of which posterity ought not to be deprived.” — James Madison

The Founders, however, were steeped in this incendiary idea. Self-governance had been part of their experience in the New World. The colonists were subjects of England, but a round-trip sail across the great Atlantic put three to four months between them and their king. Self-rule started with the Pilgrims. The Mayflower Compact began by pledging loyalty to King James, but then decreed that the colonists would

“combine together into a civil body politick, for our better ordering and preservation, and furtherance of the ends aforesaid: and by virtue hereof do enact, constitute, and frame, such just and equal laws, ordinances, acts, constitutions, and officers, from time to time, as shall be thought most convenient for the general good of the colony.”

Basically, the Mayflower Compact was a written statement declaring self-government in colonial America.

“under absolute Despotism, it is their right, it is their duty, to throw off such Government” —Declaration of Independence.

Geography may have allowed the early colonists to govern themselves, but it was the writings of the Enlightenment that declared that self-rule was a natural right. This grand idea eventually led to the Declaration of Independence, which asserted that it was the right of the people “to institute a new government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness.” This founding principle basically said that the people themselves held the power to form a new government at any time and in any shape that met their needs. It was a radical concept used to justify radical action.

The power to “institute a new government” also conveys the power to “alter or to abolish it.” The 1787 replacement of the Articles of Confederation with our Constitution is a historical example of this concept. Since that date, we have not seen a need to abolish our government because we have been able to alter it continuously with amendments, laws, and political movements.

Our government at the national level is not a direct democracy. (Half of the states allow ballot initiatives which, if passed by a majority of the voters, have the force of law.) Instead, we elect representatives to write laws and a president to administer those laws. When the people’s will is thwarted, regular elections give them the opportunity to dismiss their representatives and appoint new ones. As a further safeguard, our government theoretically only has powers delegated by the people, reinforcing the concept that power resides with the people, not political leaders. The principle of self-governance is echoed in the 9th and 10th Amendments to the United States Constitution.

As long as people believe their voices count, fair and honest elections prevent the more drastic action of abolishment. Revolutions are bred when people believe their voices go unheard, especially in periods of hardship.


James D. Best is the author of Tempest at Dawn, a novel about the 1787 Constitutional Convention, Principled Action, Lessons From the Origins of the American Republic, and the Steve Dancy Tales.

Podcast by Maureen Quinn




Essay 10 - Guest Essayist: Tony Williams

The Declaration of Independence laid down several important principles about free government predicated upon all humans being created with an equality of natural rights. From that equality flowed the idea that all who made a political regime through a social contract equally gave their consent to that government. The American polity was a republican form of government rooted upon a continuing consent of the sovereign people.

The American colonists were drawn to the principle of consensual government in the decade of resistance before the Declaration of Independence. The main argument of the American Revolution was, of course, “no taxation without representation.” The colonists were willing to pay taxes as British subjects, but they demanded in countless pamphlets, newspapers, petitions, declarations of rights, and speeches that they could only be taxed by their consent. This consent would be given in their colonial legislatures since they were not and could not reasonably be represented in Parliament.

In 1774, George Washington said it well when he described it with a practical example: “I think the Parliament of Great Britain hath no more Right to put their hands into my Pocket, without my consent, than I have to put my hands into yours, for money.” Washington thought taxation without consent violated both constitutional and natural rights: it was “repugnant to every principle of natural justice…not only repugnant to natural Right, but Subversive of the Laws & Constitution of Great Britain itself.”

In Federalist #39, James Madison described the principle of consent:

“We may define a republic to be, or at least may bestow that name on, a government which derives all its powers directly or indirectly from the great body of the people, and is administered by persons holding their offices during pleasure for a limited period, or during good behavior. It is essential to such a government that it be derived from the great body of the society…It is sufficient for such a government that the persons administering it be appointed, either directly or indirectly, by the people.”

Madison’s quote points us to important considerations about consensual republican government. First, such a government derives its power from the sovereign people. Second, it is administered by representatives drawn from among the people, elected by them directly or indirectly in free elections.

The Constitution contained several provisions that institutionalized popular consent. “We the People” established the constitutional government divided into three branches of government with the Congress, and specifically the House of Representatives, representing the people most directly. As Madison wrote, “In republican government, the legislative authority necessarily predominates.” The Constitution provided for free direct and indirect elections and limited terms of office. The document guaranteed “to every State in this Union a Republican Form of Government.”

Representative government was naturally and reasonably based fundamentally upon majority rule. The majority, however, was guided and limited by the principles of natural law and natural justice. Madison explained in Federalist #51: “It is of great importance in a republic, not only to guard the society against the oppression of its rulers; but to guard one part of the society against the injustice of the other part.” Thomas Jefferson agreed in his First Inaugural: “All, too, will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect, and to violate would be oppression.” According to the founders, majority tyranny was just as bad as tyranny of the few or one. Majority rule was only just if minority rights were protected.

During the mid-nineteenth century, the idea of popular consent and majority rule was challenged. John Calhoun’s “concurrent majority” advanced the idea that the means of preventing supposedly tyrannical majority rule was to allow the minority a veto over whatever it believed unjust. Concurrence was virtually akin to unanimity and laid the basis for nullification. Stephen Douglas’ view of “popular sovereignty” advocated that the people of each state govern their affairs however they wanted, including owning slaves. Douglas’ “don’t care” policy on slavery was a gross violation of natural rights and justice by an oppressive majority against a racial minority. His relativist stance on popular government did not accord with the ideas of Madison and Jefferson above about majority rule and minority rights.

In his First Inaugural, Abraham Lincoln reasserted the underlying principle of majority rule and consent. Lincoln focused attention on the need for a sense of restraint in popular government and on the checks and balances and other devices that help provide limits. Moreover, he noted that republican governments based upon the consent of the governed are rooted in free and reasonable deliberation, and that persuasion is necessary in shaping just majorities. But it also means that the minority must submit to just rule; it cannot reject majority rule because it disagrees with a chosen course of action or does not win the debate. Lincoln said:

“A majority, held in restraint by constitutional checks, and limitations, and always changing easily, with deliberate changes of popular opinions and sentiments, is the only true sovereign of a free people. Whoever rejects it, does, of necessity, fly to anarchy or to despotism. Unanimity is impossible; the rule of a minority, as a permanent arrangement, is wholly inadmissible; so that, rejecting the majority principle, anarchy, or despotism in some form, is all that is left.”

The twentieth century witnessed several challenges to consensual self-government. The executive agencies of the administrative state that were overseen by experts in the public interest were seen as a counter to the messy, slow, and deliberative lawmaking of Congress. The later rise of the “imperial presidency” subverted the other branches of government and popular consent. Many observers argued that an “imperial judiciary” allowed unelected judges to substitute their personal views for the will of the people. Today, many are concerned that big tech elites and their political allies attempt to control and limit popular will. The debate has continued and will endure because of the central importance of the constitutional principle of consent in the American regime and national character.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Podcast by Maureen Quinn


Click Here For Next Essay

Click Here For Previous Essay 

Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Essay 9 - Guest Essayist: Gary Porter
Founding Fathers John Adams, Benjamin Franklin, Thomas Jefferson kneeling in prayer at Valley Forge, PA, bronze sculpture by Stan Watts at Freedoms Foundation of Valley Forge

“that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness,”


According to Mr. Thomas Jefferson, it is a self-evident truth (or, if you prefer: a “sacred and undeniable truth”[1]) “that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness,”

This is one of the most memorable and yet controversial statements in English prose. Memorable it has become due to its striking simplicity. Controversial? It shouldn’t be. Jefferson is writing to the Americans of 1776; but his words also apply to Americans of 2021. A truth is a truth.

In 1776, Jefferson’s was a claim few would dispute or even take much notice of; it expressed an idea that had been “hackneyed about” in America for fifty to a hundred years. This was, simply, “an expression of the American Mind” of 1776. But today? While only 1 in 10 Americans believe there is no God at all, only about half of Americans believe God is an active participant in their lives.[2] Only 40% of Americans believe God actually created the world as Jefferson alludes,[3] and fewer still believe in the existence of God-given rights. Some today even claim there is danger in insisting that rights come from God. Instead, these people insist that these rights come from “human progress.”[4] There are grave implications to this alternative view, as we will see in a moment.

But, as author Brian Vanyo points out:

“the Founding Fathers and other Natural Law philosophers did not take for granted that God existed. They did not base their strong conviction in God on religious dogma. Rather, they deduced that God must exist because an alternative conclusion was irrational…Belief in God was so common among the founding generation that further validation of God’s existence was often unnecessary and unwelcome.” [5]

Jefferson claimed these unalienable rights were an endowment – a gift – from our Creator: natural rights result from “the Laws of Nature and Nature’s God.” Later in life, in the only book he ever wrote, Jefferson reiterated this view.[6] The colonists had been making this claim to their King – that these were their natural rights, and they were being violated – for many years.

The standard formula up until 1776 had been: “Life + Liberty + Property = Our Fundamental Natural Rights.” [7] Why did Jefferson now substitute “pursuit of happiness”?  Some scholars insist Jefferson borrowed the “pursuit of happiness” idea from John Locke. Locke indeed explored this idea in An Essay Concerning Human Understanding (published 1689), which Jefferson no doubt studied. And it is undisputed that Jefferson modeled other phrases in the Declaration after Locke.[8]  But “pursuit of happiness” and similar phrases were commonly encountered during the Founding period. Take this excerpt from a 1773 Election Sermon by Pastor Simeon Howard:

“In a state of nature, or where men are under no civil government, God has given to every one liberty to pursue his own happiness in whatever way, and by whatever means he pleases, without asking the consent or consulting the inclination of any other man, provided he keeps within the bounds of the law of nature. Within these bounds, he may govern his actions, and dispose of his property and person, as he thinks proper. Nor has any man, or any number of men, a right to restrain him in the exercise of this liberty, or punish, or call him to account for using it. This however is not a state of licentiousness, for the law of nature which bounds this liberty, forbids all injustice and wickedness, allows no man to injure another in his person or property, or to destroy his own life.”[9]

Much has been written dissecting Jefferson’s choice of “pursuit of happiness” over “property,”[10] so I won’t take more time with the subject here, other than to say there is no evidence suggesting Jefferson did not also consider the right to property a natural right.

Alexander Hamilton concurred that God was the source of the colonists’ rights. Answering an essayist calling himself “The Farmer,” Hamilton wrote:

“The fundamental source of all your errors, sophisms and false reasonings is a total ignorance of the natural rights of mankind. Were you once to become acquainted with these, you could never entertain a thought, that all men are not, by nature, entitled to a parity of privileges. You would be convinced, that natural liberty is a gift of the beneficent Creator to the whole human race, and that civil liberty is founded in that; and cannot be wrested from any people, without the most manifest violation of justice. Civil liberty is only natural liberty, modified and secured by the sanctions of civil society. It is not a thing, in its own nature, precarious and dependent on human will and caprice; but it is conformable to the constitution of man, as well as necessary to the well-being of society…The sacred rights of mankind are not to be rummaged for, among old parchments, or musty records. They are written, as with a sun beam, in the whole volume of human nature, by the hand of the divinity itself; and can never be erased or obscured by mortal power.”[11]

So did James Wilson:

“What was the primary and principal object in the institution of government? Was it – I speak of the primary and principal object – was it to acquire new rights by a human establishment? Or was it, by human establishment, to acquire new security for the possession or the recovery of those rights, to the enjoyment or acquisition of which we were previously entitled by the immediate gift, or by the unerring law, of our all-wise and all-beneficent Creator? The latter, I presume, was the case…”[12]

And John Adams:

“I say RIGHTS, for such they have, undoubtedly, antecedent to all earthly governments; rights that cannot be repealed or restrained by human laws; rights derived from the Great Legislator of the Universe.”[13]

And John Dickinson:

“Kings or parliaments could not give the rights essential to happiness… We claim them from a higher source – from the King of kings, and Lord of all the earth. They are not annexed to us by parchments and seals. They are created in us by the decrees of Providence, which establish the laws of our nature. They are born with us; exist with us; and cannot be taken from us by any human power without taking our lives. In short they are founded on the immutable maxims of reason and justice.”[14]

The prevailing understanding of the founding era was that God was the source of natural rights, period. But, even in the founding era that understanding was beginning to change, and the change has picked up speed in the modern era.

Today, it is not uncommon to encounter people claiming that man himself is the source of his rights. When interviewing controversial Judge Roy Moore, then Chief Justice of the Alabama Supreme Court, CNN commentator Chris Cuomo famously declared:  “Our rights do not come from God, your Honor, and you know that, they come from man.”

But, there is a problem with this belief, a big problem. If our rights come from man, i.e., from the laws we human beings enact, then how can these rights ever be considered unalienable? Does this mean certain men can pass a civil law creating a certain civil right with the understanding that future men will somehow be prevented from revoking that law and thus revoking the right it created? Man-made rights simply cannot be unalienable.

Could there be a middle ground where both unalienable and alienable rights are part of the human condition? What if Cuomo and Moore are each right in their own way?

I think we must acknowledge that man can indeed create rights through civil law. The right to vote, for instance (some insist it is a privilege, not a right), could not be a natural right. In the hypothetical state of nature, voting would have no meaning, there being no society and no government. So, some rights, as Cuomo insists, do indeed “come from man.” These rights must be considered alienable. The law that creates a right for certain individuals to vote today can easily be revoked tomorrow.

But, what then of natural rights, rights that would be part of the human condition were there no society, no government? Some today suggest that even these need not have a Heavenly source – as most of the Founders would insist – but that these rights became part of the human condition as man “evolved.”

The idea that human beings have inherent rights, inherent to being human, goes back to antiquity, but it began to gain significant adherents during the Enlightenment. It also attracted critics. One such critic was Englishman Jeremy Bentham (1748-1832). Regarded as the founder of modern utilitarianism, Bentham explained the “fundamental axiom” of his philosophy as the principle that “it is the greatest happiness of the greatest number that is the measure of right and wrong.” Bentham famously dismissed the idea of natural rights sourced in God as “nonsense upon stilts.”

John Dewey thought that “[n]atural rights and natural liberties exist only in the kingdom of mythological social zoology.”[15]

We do find some Founders using the “inherent” terminology; George Mason begins the Virginia Declaration of Rights by stating:

“That all men are by nature equally free and independent and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.”[16]

George Washington spoke of inherent natural rights in a Letter to the Hebrew Congregation of Newport, Rhode Island, August 17, 1790.[17]

Even Jefferson himself wrote that “Nothing is unchangeable but the inherent and unalienable rights of man.”[18]

However, “inherent” and “natural” rights are not irreconcilable concepts.  Being inherent does not exclude God as the ultimate source. If God, as Creator, wished his human creations to understand they had these rights, he need only “embed” them into our consciousness. Both Jeremiah 31:33 and Hebrews 8:10 remind us that God’s law will be “written upon our hearts;” is it not reasonable to assume our rights are “inscribed” there as well?

We will not settle the “inherent” versus “natural” argument today; suffice it to say that if you like your rights “unalienable,” you had best look to God as their source.

Which natural rights exist?  How many are there?

Note that in our subject phrase Jefferson points to only “certain” unalienable rights as included in the Creator’s endowment. “Life, Liberty and the pursuit of Happiness” are among the rights created and given by God. Jefferson thus implies that other rights, beyond these three, are part of God’s endowment. This understanding, that there are other, perhaps even uncountable natural rights, was also part of the “American Mind,” so much so that we see it codified in the Ninth Amendment.[19]

One of the frequent objections to including a Bill of Rights in the Constitution was that “it would not only be useless, but dangerous, to enumerate a number of rights which are not intended to be given up; because it would be implying, in the strongest manner, that every right not included in the exception might be impaired by the government without usurpation; and it would be impossible to enumerate every one…”[20]

James Madison, in proposing the Bill of Rights on the floor of Congress in 1789, acknowledged the power of this objection but showed it had been anticipated. He said: “This is one of the most plausible arguments I have ever heard urged against the admission of a bill of rights into this system; but, I conceive, that may be guarded against. I have attempted it, as gentlemen may see by turning to the last clause of the 4th resolution (which would eventually become the Ninth Amendment).”[21]

“The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”

But, we can see an obvious question arise here: if there are unenumerated rights which government should not “deny or disparage,” what are they? Who gets to identify or “enumerate” them? The Framers of the Constitution gave us no hint.

Thus far in our country’s history we have let the court system identify them. In 1965, the Supreme Court identified, for the first time, a right to privacy lurking in a “penumbra” of the Constitution. Eight years later the Justices expanded this right to include the “right” to terminate the life of an unborn baby. In 2015, the court pulled out of the “inkblot”[22] of the Ninth Amendment the “right” of two homosexuals to marry.

Note, however, that the Constitution begins not with the words: “We the Congress,” “I the President,” or even “We the Judges.” The Constitution represents a contract between the American people and the government the document creates. The people are sovereign; they hold the ultimate political power over the government. It is We the People who have the rightful authority to identify the rights we wish secured by the words of the Constitution. And the rightful mechanism for bringing those rights into the security of the Constitution is amendment, not judicial decree.

Thomas Jefferson’s words are as sacred and undeniable today as they were 245 years ago. Since Congress has declared the Declaration of Independence to be part of the Organic Law of the United States,[23] we would do well to reflect on and heed them.

Natural rights?  I’ll take mine unalienable, please.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached on Facebook or Twitter (@constitutionled).

Podcast by Maureen Quinn

[1] These were Jefferson’s words in the original draft of the Declaration.




[5] Brian Vanyo, The American Ideology, Taking Back our Country with the Philosophy of our Founding Fathers, Liberty Publishing, 2012. p. 20-21.

[6] “And can the liberties of a nation be thought secure when we have removed their only firm basis, a conviction in the minds of the people that these liberties are the gift of God?” Thomas Jefferson, Notes on the State of Virginia, 1785.

[7] See both Declaration and Resolves, October 14, 1774 and A Declaration on the Causes and Necessity of Their Taking Up Arms, July 6, 1775

[8] See Two Treatises on Government, Bk II

[9] A sermon preached to the Ancient and Honorable Artillery-Company, in Boston, New-England, June 7th, 1773. : Being the anniversary of their election of officers, by Pastor Simeon Howard, accessed at:


[11] The Farmer Refuted, 1775

[12] Mark David Hall, The Political and Legal Philosophy of James Wilson, 1742-1798 (Columbia: University of Missouri Press, 1997) pp. 1053-1054

[13] A Dissertation on the Canon and Feudal Law, 1765

[14] An Address to the Committee of Correspondence in Barbados, 1766

[15] John Dewey, Liberalism and Social Action, 1935, page 17.

[16] George Mason, Virginia Declaration of Rights, 1776, accessed at


[18] Letter to John Cartwright, 1824.

[19] “The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”

[20] James Iredell, speaking at the North Carolina Ratifying Convention, July 29, 1788.


[22] “An inkblot” is the way Judge Robert Bork characterized the Ninth Amendment in his unfruitful confirmation hearing for a seat on the Supreme Court.



Essay 8 - Guest Essayist: Tony Williams

The Declaration of Independence made a bold assertion about human nature and natural rights. The central claim that “all men are created equal” had profound implications for the American regime of liberty. The “self-evident truth” of human equality meant that humans had equal natural rights, equally gave their consent to create a republican government, had equal dignity, and were equal under the law.

Throughout history, most societies were monarchies, aristocracies, or despotisms. In those societies, leaders and elite social classes (or those of a certain ethnicity or religion) had certain rights and privileges that common people did not have. These societies were characterized by inequality.

The Enlightenment and ideas of John Locke significantly influenced the founders’ belief that all humans were created equal and had equal natural rights. The Declaration stated, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” The nature of the political regime was then shaped by this idea of natural human equality.

Again, influenced by Locke, the Declaration stated that all were equally free and independent to give their consent to create a free, representative government. The Declaration stated, “That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed.” This was the basis of social contract or social compact theory. It created an equal citizenry and self-governance in a republic.

The citizens in the republican government enjoyed equality under the Constitution. The Constitution created an equal rule of law for all in which they could enjoy their liberties. It equally protected the individual rights of all citizens and guaranteed due process. The Fifth Amendment to the Constitution reads, “No person shall be…deprived of life, liberty, or property, without due process of law.” The Constitution banned titles of nobility and aristocratic privileges, showing that it was a republican constitution, not one that supported oligarchy, or rule by the few.

The principle of equality protected the liberties of all citizens to create a just society. All citizens enjoyed equal political liberty by giving their consent to representative government at all levels and by participating in government. All possessed freedom of conscience regarding their religious beliefs and worship. They also had economic equality. This understanding of equality did not mean that all people had the same amount of income or property, but that they had property rights and ought to have equal opportunity to pursue their happiness and keep the fruits of their labor in a free society. During the 1858 Lincoln-Douglas debates, Lincoln explained that the idea, “You work and toil and earn bread, and I’ll eat it,” is the “tyrannical principle” of monarchy and slavery.

Human beings had the same natural rights and enjoyed equality under the law in the political regime, but they were unequal in some important and obvious ways. The founders understood that human beings can never be perfectly equal in society because of the differences among individuals. Humans are unequal in physical strength, intelligence, talents, abilities, and character. Thus, individuals have different faculties, abilities, and virtues to make use of in pursuing their happiness. These differences result in social inequalities especially in terms of how much wealth a person might earn or some advantages in opportunities. Republican government must guard against allowing natural inequalities to create the conditions under which oligarchy and tyranny rule, but it can never create a utopian society of perfect equality.

For the founders, human equality was an axiomatic principle that was universally true for all people at all times. However, the principle was increasingly challenged by the middle of the nineteenth century. Senator John C. Calhoun called the equality principle an “utterly false view of the subordinate relation of the black to the white race” and the idea of equality of the races “an error.” In the infamous Dred Scott v. Sandford (1857) decision, Chief Justice Roger Taney opined that, “it is too clear for dispute that the enslaved African race were not intended to be included,” in the Declaration of Independence. In his 1858 debates with Lincoln, Senator Stephen Douglas stated, “I hold that the signers of the Declaration of Independence had no reference to negroes at all when they declared all men to be created equal.” In 1861, the vice-president of the Confederacy, Alexander Stephens, said that the “corner-stone [of the Confederate States of America] rests, upon the great truth that the negro is not equal to the white man.”

Many abolitionists and statesmen, including Frederick Douglass and Lincoln, took exception to the arguments of the opponents of black equality and inclusion in the Declaration of Independence. Their repeated claims that blacks were equal human beings endowed with equal natural rights were a significant demand for racial egalitarianism.

The equality principle continued to influence American thinking about their republican regime. While Lincoln continued to believe in the self-evident truth of the Declaration, he conceded that it was being fundamentally challenged before and during the Civil War. Lincoln was a student of the ancient Greek mathematician Euclid and used the language of a proposition in the Gettysburg Address. The proposition of human equality was either true or false, and he believed in its truth and that it could be proven. “Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.”

In 1963, Martin Luther King, Jr. delivered his “I Have a Dream” speech on the steps of the Lincoln Memorial. He opened the speech by stating, “Five score years ago, a great American, in whose symbolic shadow we stand today, signed the Emancipation Proclamation.” Using the biblical language of the Gettysburg Address, King rhetorically appealed to the liberty and equality of the Emancipation Proclamation and Declaration of Independence. He referred to the equality principle of the Declaration of Independence as a “promissory note” because it had been unfulfilled for black Americans. “When the architects of our republic wrote the magnificent words of the Constitution and the Declaration of Independence, they were signing a promissory note to which every American was to fall heir. This note was a promise that all men – yes, black men as well as white men – would be guaranteed the unalienable rights of life, liberty and the pursuit of happiness.” King had not given up on the American ideal of equality. Black Americans attended the March on Washington and demonstrated peacefully in places like Birmingham to make that promise a reality.

The principle of equality has powerfully stood at the core of the American regime for more than two centuries. The challenges and debates over the principle have animated American deliberations about the national character of their free government and free society throughout that time and will continue to do so.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Podcast by Maureen Quinn




Essay 7 - Guest Essayist: Tony Williams

The Americans of the founding period were a strongly Protestant people of various denominations including dissenting Presbyterians, Baptists, and Congregationalists. Some historians have estimated that Protestants made up over 98% of the American population. Their Protestantism was characterized by a strong dissenting tradition against religious and civil tyranny as well as a strong streak of individualism.

Their Protestantism—especially the Puritan tradition—was also exemplified by appeals to the natural law in its covenant theology, which was consistent with Lockean social compact theory. Covenant theology led Americans to view themselves as a Chosen People of a new Israel who had formed a covenant with God. Its natural law was consistent with both reason and revelation, allowing Americans to reconcile faith and reason within the natural law and natural rights philosophy of the American Revolution.

The American founders drew from a variety of traditions in arguing for their natural rights and liberties. Ancient thought from Greece and Rome, the English tradition, and the ideas of John Locke and other Enlightenment thinkers combined with Protestantism to form a rich tapestry. While the Enlightenment provided a strong influence on the founders, the contribution of their religious beliefs has often been downplayed or ignored. The average American colonial farmer or artisan may not have read John Locke’s Two Treatises of Government or ancient philosophy, but they heard dissenting religious ideals and Lockean principles from the pulpit at religious services.

Toward the end of his life, Thomas Jefferson had cause to reflect on the meaning of the Declaration of Independence. He wrote to Henry Lee in 1825 about the purpose of the Declaration:

“This was the object of the Declaration of Independence. Not to find out new principles, or new arguments, never before thought of, not merely to say things which had never been said before; but to place before mankind the common sense of the subject, in terms so plain and firm as to command their assent…it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion. All its authority rests then on the harmonizing sentiments of the day.”

The “harmonizing sentiments” of the 1760s and 1770s supported a natural law opposition to British tyranny in the American colonies. James Otis was one of the earliest articulators of natural law resistance. In 1764, he wrote, “Should an act of Parliament be against any of his natural laws, which are immutably true, their declaration would be contrary to eternal truth, equity, and justice, and consequently void.”

In 1774, Thomas Jefferson expressed the same sentiments in his Summary View of the Rights of British America. In the pamphlet, he wrote that God was the author of natural rights inherent in each human being. The Americans were “a free people claiming their rights, as derived from the laws of nature, and not as the gift of their chief magistrate… the God who gave us life gave us liberty at the same time: the hand of force may destroy, but cannot disjoin them.”

A year later, a young Alexander Hamilton wrote a pamphlet, The Farmer Refuted, in which he eloquently described the divine source of universal rights. “The sacred rights of mankind are not to be rummaged for, among old parchments, or musty records. They are written, as with a sun beam, in the whole volume of human nature, by the hand of the divinity itself; and can never be erased or obscured by mortal power.”

These “expressions of the American mind” were common formulations of natural rights that influenced the Declaration of Independence. The four mentions of God in the document demonstrate the founders’ understanding of the divine, and they also show that God was the author of good government according to the natural law.

First, the Declaration appeals to the “separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.” This first mention of God is that of Protestant and Enlightenment natural law. They saw God as the author of truth in the moral order of the universe. This moral order defined their thinking about republican self-government.

Second, the Declaration asserts that, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” God is the Creator and author of natural rights in this formulation. Since rights are from a higher authority, no earthly power can violate an individual’s inherent rights. Interestingly, God here acts as a supreme legislator who makes the natural law and grants natural rights.

Third, the Declaration appealed to “the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good People of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be Free and Independent States.” God is a judge who authored the idea of justice and who judges human actions. God here represents the judicial branch of government.

Fourth, the Declaration stated that, “With a firm reliance on the protection of divine Providence, we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.” Americans believed that God was a providential God who intervened in human affairs and protected his Chosen People. This conception of God represents the executive branch of government.

The Declaration of Independence reflected the conviction that the American natural rights republic was rooted in the natural law. Reason and divine revelation supported the natural law that shaped a good government built upon an understanding of human nature and the rights given to humans by God.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Podcast by Maureen Quinn



Click Here for Next Essay

Click Here For Previous Essay

Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Essay 6 - Guest Essayist: Joerg Knipprath

On June 7, 1776, delegate Richard Henry Lee of Virginia rose to move in the Second Continental Congress, “That these United Colonies are, and of right ought to be, Independent States, that they are absolved from all allegiance to the British Crown, and that all connection between them and the State of Great Britain is, and ought to be, totally dissolved….” The motion was not immediately considered, because four states, lacking instructions from their assemblies, were not prepared to vote. Nevertheless, Congress appointed a committee of five to prepare a declaration of independence. The committee, composed of Benjamin Franklin, John Adams, Roger Sherman, Robert R. Livingston, and Thomas Jefferson, assigned the task of preparing the initial draft to Jefferson.

After numerous revisions by Adams and Franklin and, eventually, by Congress itself, the final draft and report were presented to Congress on July 2, 1776. Formal adoption of the Declaration had to await a vote on Lee’s motion for independence. That was approved by the states the same day, with only the New York delegation abstaining. After a few more minor changes, the Declaration was adopted on July 4, 1776. Copies were sent to the states the next day, and it was publicly read from the balcony at Independence Hall on the 8th. Finally, on August 2nd, the document was signed.

General Washington, at New York, received a copy and a letter from John Hancock. The next day, July 9, Washington had the Declaration read to his troops. Whereas those troops responded with great enthusiasm for the cause, reaction elsewhere to the Declaration was divided, to say the least. Supporters of independence were aware of the momentousness of the occasion. As Washington’s commander of artillery, Henry Knox, wrote, “The eyes of all America are upon us. As we play our part posterity will bless or curse us.” Others were less impressed. The anti-independence leader in Congress, John Dickinson, dismissed it as a “skiff made of paper.”

The Declaration’s preamble embraced four themes fundamental to Western political philosophy in the 17th and 18th centuries: Natural law and rights, popular sovereignty exercised through the consent of the governed, the compact basis of the legitimate state, and the right of revolution.

The idea of a universal moral law, obligatory on earthly rulers and to which human law must conform, went back at least to the Stoics nearly two millennia prior, and indirectly even to Aristotle’s conception of natural justice. Cicero, among Roman writers, and the Christian Aristotelian Thomas Aquinas, among medieval Scholastics, postulated the existence of a natural order directed by universal laws. Humans were part of this order created by God and governed by physical laws. More important for these writers was the divinely-ordained universal moral law, in which humans participated through their reason and their ability to express complex abstract concepts. By virtue of its universality and its moral essence, this natural law imposed moral obligations on all, ruler and ruled alike. All were created equal, and all were equal before God and God’s law. Viewed from a metaphysical and practical perspective, these obligations provided the best path to individual flourishing within a harmonious social order in a manner that reflected both the inherent value of each person and man’s nature as a social creature. The need to meet these universal obligations of the natural moral law necessarily then gave rise to certain universal rights that all humans had by nature.

However, the shattering of universal Christendom in the West, with its concomitant shattering of the idea of a universal moral law and of a political order based thereon, changed the conception of natural law, natural rights and the ethical state. No longer was it man’s reason that must guide his actions and his institutions, including government and law, for the purpose of realizing the ends of this order. Rather, in the emerging modernity, there was a “turn to the subject” and, in the words of the ancient Greek pre-Socratic philosopher Protagoras, “man [became] the measure of all things.”

Political legitimacy and, thereby, the basis for political and legal obligation came to rest on individual acts of will. The most prominent foundation for this ethical structure was the construct of the “social contract” or “social compact.” “Natural law” became deracinated of its moral content and was reduced to describing the rules which applied in a fictional state of nature in which humans lived prior to the secular creation of a political commonwealth, in contrast to the civil law that arose after that creation. Natural rights were those that sovereign individuals enjoyed while in the state of nature, in contrast to civil rights, such as voting, which were created only within a political society.

Although expositors of the social contract theory appeared from the 16th to the 18th centuries, and came from several European cultures, the most influential for the American founding were various English and colonial philosophers and clergymen. Most prominent among them was John Locke.

Locke’s version of the state of nature is not as bleak and hostile as was that of his predecessor Thomas Hobbes. Nor, however, is it a romanticized secular Garden of Eden as posited by Jean-Jacques Rousseau, writing a century later. For Locke, existence in the state of nature allows for basic social arrangements to develop, such as the family, economic relationships, and religious congregations. However, despite Locke’s general skepticism about the Aristotelian epistemology then still dominant at the English universities, he agreed with the ancient sage that human flourishing best proceeds within a political commonwealth. Accordingly, sovereign individuals enter into a compact with each other to leave the state of nature and to surrender some of their natural rights in order to make themselves and their estates more secure. They agree to arbitrate their disputes by recourse to a judge, and to be governed by civil law made by a legislator and enforced by an executive. Under a second contract, those sovereign individuals collectively then convey those powers of government to specified others in trust to be exercised for the benefit of the people.

Thus, the political commonwealth is a human creation and derives its legitimacy through the consent of those it governs. This act of human free will is unmoored from some external order or the command of God. For Hobbes, the suspected atheist, human will was motivated to act out of fear.

Locke allows for much greater involvement by God, in that God gave man a nature that “put him under strong Obligations of Necessity, Convenience, and Inclination to drive him into Society, ….” Moreover, the natural rights of humans derive from the inherent dignity bestowed on humans as God’s creation. The human will still acts out of self-interest, but the contract is a much more deliberate and circumscribed bargain than Hobbes’s adhesion contract. For Locke, the government’s powers are limited to achieve the purposes for which it was established, and nothing more. With Hobbes, the individual only retained his inviolate natural right to life. With Locke, the individual retains his natural rights to liberty and property, as well as his right to life, all subject to only those limitations that make the possession of those same rights by all more secure. Any law that is inimical to those objectives and tramples on those retained rights is not true law.

There remained the delicate issue of what to do if the government breached its trust by passing laws or otherwise acting in a manner that made people less secure in their persons or estates. Among private individuals, such a breach of fiduciary duty by a trustee would result in a court invalidating the breach, ordering fitting compensation, and, perhaps, removing the trustee. If the government breached such a duty, recourse to the English courts was unavailable, since, at least as to such constitutional matters, the courts had no remedial powers against the king or Parliament.

Petitions to redress grievances were tried-and-true tools in English constitutional theory and history. But what if those petitions repeatedly fell on deaf ears? One might elect other members of the government. But, what if one could not vote for such members and, consequently, was not represented therein? What if, further, the executive authority was not subject to election? A private party may repudiate a contract if the other side fails to perform the material part of the bargain. Is there a similar remedy to void the social contract with the government and place oneself again in a state of nature? More pointedly, do the people collectively retain a right of revolution to replace a usurping government?

This was the very situation in which many Americans and their leaders imagined themselves to be in 1776. Previous writers had been very circumscribed about recognizing a right of revolution. Various rationales were urged against such a right. Thomas Aquinas might cite religious reasons, but there was also the very practical medieval concern about stability in a rough political environment where societal security and survival were not to be assumed. Thomas Hobbes could not countenance such a right, as it would return all to the horrid state of nature, where life once again would be “solitary, poor, nasty, brutish, and short.” Moreover, as someone who had experienced the English Civil War and the regicide of Charles I, albeit from his sanctuary in France, and who was fully aware of the bloodletting during the contemporaneous Thirty Years’ War, revolution was to be avoided at all cost.

Locke was more receptive than Hobbes to some vague right of revolution, one not to be exercised in response to trivial or temporary infractions, however. Left unclear was exactly who were the people to exercise such a right, and how many of them were needed to legitimize the undertaking. Locke wrote at the time of the Glorious Revolution of 1688. His main relevant work, the Second Treatise on Civil Government, was published in 1689, though some scholars believe that it was written earlier. The Catholic king, James II, had been in a political and religious struggle with Parliament and the Church of England. When Parliament invited the stadholder (the chief executive) of the United Netherlands to bring an army to England to settle matters in favor of itself, James eventually fled to France.

Parliament declared the throne vacant, issued a Declaration of Rights and offered the throne to William and his wife, Mary. In essence, by James’s flight, the people of England had returned to an extra-political state of nature where they, through the Parliament, could form a new social contract.

The American Revolution and Jefferson’s writings in the Declaration of Independence follow a similar progression. When King George declared the colonies to be in rebellion on August 23, 1775, and Parliament passed the Prohibitory Act in December of that year, they had effectively placed the colonies outside the protection of the law and into a state of nature. At least that was the perception of the colonists. Whatever political bands once had existed were no more. In that state of nature, the Americans were free to reconstitute political societies on the basis of a social contract they chose.

That project occurred organically at the state level. Massachusetts had been operating as an independent entity since the royal governor, General Thomas Gage, had dissolved the General Court of the colony in June, 1774. That action led to the extra-constitutional election by the residents of a provincial congress in October. Thereafter, it was this assemblage that effectively governed the colony. The other colonies followed suit in short order.

In Virginia, a similar process occurred in the summer of 1774. It culminated two years later in the “Declaration of Rights and the Constitution or Form of Government,” begun by a convention of delegates on May 6, 1776, and formally approved in two stages the following month. The initial document was a motley combination of a plan of government, a declaration of independence, and a collection of enumerated rights and high-sounding political propositions. In the part regarding independence, the accusations against King George are remarkably similar, often verbatim, precursors to Jefferson’s language in the Declaration of Independence of the “united States” two months later. George Mason, whom Jefferson praised as the “wisest man of his generation,” was the principal author. Still, it may have been Jefferson himself who proposed this language through the drafts he submitted to the Virginia convention.

Both documents, the Virginia declaration and the Declaration of Independence, cite as a reason for “dissolv[ing] the Political Bands” that the king had abandoned the government by declaring the Americans out of his protection. George III, like James II a century before, had breached the social contract and forced a return to an extra-political state of nature. The Declaration of Independence merely formalized what had already occurred on the ground. With those bands broken, the next step, that of forming a new government, already taken by Virginia and other states, now lay before the “united States.”

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:






Essay 5 - Guest Essayist: Tony Williams

In 1861, President Abraham Lincoln had occasion to reflect upon the principles of the American Founding. Using a biblical metaphor, he thought that the Declaration of Independence was an “apple of gold” because it contained the foundational principles of the new country. The Constitution was the “picture of silver” framing the apple with the structures of republican government. In the mind of Lincoln—and those of the Founders—an inextricable link bound together the two documents in creating a free government.

The Declaration of Independence and Constitution seem to have had different purposes. The Declaration was an assertion of independence that included laying down the Enlightenment and Lockean principles of natural rights and republican self-government based upon consent. The Constitution created the framework of the national government with three separate branches operating with certain powers. However, a close reading of the Declaration of Independence and the Preamble to the Constitution reveals the common set of republican principles that Lincoln captured with his metaphor.

The Declaration of Independence affirmed the republican principle of popular government. The people were the source of all sovereignty, or authority, in the representative government and gave their consent for it to govern. It stated, “That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed.”

The Constitution was significantly rooted in popular sovereignty. The Preamble to the Constitution agreed that the new constitutional government was to be based upon the principle of popular sovereignty. It began, “We the People of the United States, in Order to form a more perfect Union.” The previous government under the Articles of Confederation (1781-1789) did not have sufficient powers to govern the nation adequately, so the Framers decided to create a new government with powers to achieve its ends.

The Constitution supported popular sovereignty in several ways. The Congress, and especially the House of Representatives, was closest to the people and represented them. As James Madison wrote in Federalist #51, “In republican government, the legislative authority necessarily predominates.” The people directly or indirectly elected several offices in free elections and for fixed terms. In addition, the people and their representatives were responsible for ratifying the Constitution as fundamental law in popular ratifying conventions.

Republican government was predicated upon majority rule of the sovereign people and their representatives. Majority rule was based upon reason as well as justice in preserving minority rights. President Thomas Jefferson reminded Americans of the moral basis for majority rule in his First Inaugural Address: “All, too, will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect.”

The core principle—the “apple of gold”—of the Declaration of Independence was human equality in natural rights. “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights.” This principle of equality was enshrined in the constitutional government and closely related to building a just and equal political order.

The Constitution created a system whereby all were equal under the law and equal in their rights. The Fifth Amendment reads, “No person shall be…deprived of life, liberty, or property, without due process of law.” In Federalist #51, Madison recognized the defining importance of justice when he wrote, “Justice is the end of government. It is the end of civil society.”

The Declaration also supported the rule of law based upon popular consent. The people form a government with a rule of law to protect their rights. They have the power to overthrow a tyrannical government but have a responsibility to “institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.” The rule of law allows citizens to live their lives peacefully and civil society to function normally.

The Declaration claimed that the natural rights to “Life, Liberty, and the pursuit of Happiness” were self-evident. Political, economic, and religious liberty were among the fundamental and inalienable rights of the individual. The very purpose of republican government is to protect liberty, and its powers would be limited to achieve that goal.

The weakness of the Articles of Confederation actually endangered liberty: the government under it could neither prevent unjust laws nor exercise enough power to govern properly and preserve liberty. The more robust constitutional system was intended to do a better job of preserving liberty, with laws that were more just and a more vigorous provision for national security.

The Founders created a free constitutional republic so that Americans might govern themselves by their own consent through their representatives. Limited government meant that its powers were restricted to guarding the people’s rights and governing effectively so that the people might live their lives freely. A free people would pursue their happiness and interact amicably in the public square for a healthy civil society.

In Federalist #1, Alexander Hamilton explained the entire purpose of establishing free government based upon the principles of the Declaration of Independence and Constitution. He stated that Americans had the opportunity and responsibility to form good government by “reflection and choice,” not by “accident and force.” The United States was founded uniquely upon a set of principles and ideals.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 




Essay 4 - Guest Essayist: Joerg Knipprath

There are two recognized types of war, war between nations (“international war”) and war within a nation (“civil war”). In a civil war, some portion of the inhabitants forcibly seeks political change. The goal often is to replace the existing constitutional government with their own by taking over the entire structure or by separating themselves and seeking independence from their current compatriots.

A civil war may be an insurrection or a rebellion, the stages being distinguished by a rebellion’s higher degree of organization of military forces, creation of a formal political apparatus, greater popular participation, and more sophistication and openness of military operations. By those measures, the American effort began as an insurrection during the localized, brief, and poorly organized eruptions in the 1760s and early 1770s. Various petitions, speeches, and resolves opposing the Revenue Act, the Stamp Act, the Quartering Act, and others, were reactive, not strategic. Even circular letters among colonial governments for unified action, such as that by the Massachusetts assembly in February, 1768, against the Townshend Acts, or hesitant steps toward union, such as the Stamp Act Congress of 1765, were of that nature. Much rhetoric was consumed along with impressive quantities of Madeira wine, but tactical successes were soon superseded by the next controversy.

In similar vein, local bands of the Sons of Liberty, the middle-class groups of rabble-rousers that emerged in 1765, fortified in their numbers by wharf-rats and other layabouts, might destroy property, intimidate and assault royal officials, and harass locals seen as insufficiently committed to opposing an often-contrived outrage du jour. They might incite and participate in violent encounters with the British authorities. But, while they engaged in melodramatic and, to some Americans, satisfying political theater, they were no rebel force. Moreover, the political goals were limited, focused on repeal or, at least, non-enforceability of this or that act of Parliament.

Yet, those efforts, despite their limited immediate successes, triggered discussions of constitutional theory and provided organizational experience. In that manner, they laid the groundwork that eventually made independence possible, even if no one could know that and few desired it. Gradually, the vague line between insurrection and rebellion was crossed. The skirmishes at Lexington and Concord made it clear, in retrospect, that by the spring of 1775 a rebellion was under way.

The Second Continental Congress met on May 10, 1775, and, in contrast to its predecessor, did not adjourn after concluding a limited agenda. Rather, it began to act as a government of a self-regarding political entity, including control over an organized armed force and a navy. Congress sent diplomatic agents abroad, took control over relations with the Indian tribes, and sent a military force under Benedict Arnold north against the British to “assist” Canada to join the American coalition. It appointed George Washington as commander-in-chief of the “Army of the United Colonies.” That army, and other forces, achieved several tactical military successes against the British during 1775 and early 1776, although the Canadian expedition narrowly failed.

Still, something was lacking. The scope of the effort was not matched by an equally ambitious goal. The end was not in focus. Certainly, repeal of the Coercive Acts, which had been enacted in the spring of 1774, urgently needed to be achieved. Those acts had closed the port of Boston, brought the government of Massachusetts under more direct royal control by eliminating elected legislative offices, and authorized the peacetime quartering of troops in private homes. These laws appeared reasonable from the British perspective. Thus, the Quartering Act was intended to alleviate the dire conditions of British soldiers who were forced to sleep on Boston Common. The Administration of Justice Act was to ensure, in part, fair trials for British officials and soldiers accused of murder, as had happened in 1770 in the “Boston Massacre.” At the same time, though these acts were limited to Massachusetts, many colonists feared that a similar program awaited them. These laws were so despised that they were collectively known to Americans as the “Intolerable Acts.”

Was there to be more? In unity lay strength, and the Second Continental Congress was tasked with working out an answer. But Congress was more follower than leader, as delegates had to wait for instructions from their colonial assemblies. That meant the process was driven by the sentiments of the people in the colonies, and the Tory residents of New York thought differently than the Whigs of beleaguered Massachusetts. Within each colony, sentiments, quite naturally, also varied. The more radical the potential end, the less likely people were to support it. Even as late as that spring of 1775, there existed no clear national identity as “American.” People still considered themselves part of the British Empire. The rights that they claimed were denied them by the government in London were the “ancient rights of Englishmen.” The official American flag, used by the armed forces until June, 1777, carried the thirteen red and white stripes familiar to us in its field, but its canton was the British Union Jack. Without irony, Congress’s military operations were made in the name of the king. General Washington was still toasting the king each night at the officers’ mess in Cambridge while besieging the British forces in Boston.

The gentlemen who met in Philadelphia came from the colonial elite, as would be expected. But they were also distinguished in sagacity and learning, more so than one has come to expect from today’s Congress drawn from a much larger population. Almost none favored independence. Those few that did, the Adams cousins from Massachusetts, Sam and John; the Lees of Virginia, Francis Lightfoot and Richard Henry; Benjamin Franklin of Pennsylvania; and Christopher Gadsden of South Carolina, the “Sam Adams of the South” as he came to be known, kept their views under wraps. Instead, the goal initially appeared to be some sort of conciliation within a new constitutional relationship of yet-to-be-determined form. Many delegates had also served in the First Continental Congress dedicated to sending remonstrances and petitions. On the other hand, Georgia had not sent delegates to the First, so its delegation consisted entirely of four novices. Peyton Randolph of Virginia was chosen president, as he had been of the First Continental Congress. He was soon replaced by John Hancock when Randolph had to return to Virginia because of his duties as Speaker of the House of Burgesses.

One person missing from the assemblage was Joseph Galloway of Pennsylvania. He had attended the First Continental Congress, where he had drafted a plan of union between the colonies and Britain. Parliament would control foreign affairs and external trade. As to internal colonial affairs, Parliament and a new American parliament would each effectively have veto power over the acts of the other. His plan would have recognized a degree of colonial sovereignty, but within the British system. It was rejected by one vote, six colonies to five, because a more confrontational proposal, the Suffolk Resolves, had recently been adopted by the towns around Boston which outflanked his proposal politically. Congress instead endorsed the Resolves, and voted to expunge Galloway’s plan from the record. Still, his proposal was a prototype for the future federal structure between the states and the general government under the Articles of Confederation. Repulsed by what he saw as the increasing radicalism of the various assemblies, he maintained his allegiance to the king. By 1778, he was living in London and advising the British government.

Congress sought to thread the needle between protecting the Americans from intrusive British laws and engaging in sedition and treason. In constitutional terms, it meant maintaining a balance between the current state of submission to a Parliament and a ministry in which they saw themselves as unrepresented, and the de facto revolution developing on the ground. The first effort, by John Dickinson of Pennsylvania and Thomas Jefferson of Virginia, was the “Declaration on the Causes of Taking Up Arms.” It declared, “We mean not to dissolve that union which has so long and so happily subsisted between us…. We have not raised armies with ambitious designs of separation from Great Britain, and establishing independent States.” Then why the effort? “[W]e are reduced to the alternative of choosing an unconditional submission to the tyranny of irritated ministers, or resistance by force. The latter is our choice.” Note the problem: not the king, not even Parliament, but “irritated ministers.” The path to resolution of the conflict, it seemed, was to appeal to the king himself, who, it was surmised, must have been kept in the dark about the dire state of affairs of his loyal colonial subjects by his ministers’ perfidy.

On July 8, 1775, Congress adopted the “Olive Branch Petition,” also drafted by John Dickinson. That gentleman, a well-respected constitutional lawyer, member of the First Continental Congress, and eventual principal drafter of the Articles of Confederation in 1777, wanted to leave no diplomatic stone unturned to avoid a breach with Great Britain. The historian Samuel Eliot Morison relates remarks attributed to John Adams about the supposed reasons for Dickinson’s caution. According to Adams, “His (Dickinson’s) mother said to him, ‘Johnny you will be hanged, your estate will be forfeited and confiscated, you will leave your excellent wife a widow, and your charming children orphans, beggars, and infamous.’ From my Soul, I pitied Mr. Dickinson…. I was very happy that my Mother and my Wife…and all her near relations, as well as mine, had been uniformly of my Mind, so that I always enjoyed perfect Peace at home.” A new topic of study thus presents itself to historians of the era: the effect of a statesman’s domestic affairs on his view of national affairs.

The Petition appealed to the king to help stop the war, repeal the Coercive Acts, restore the prior “harmony between [Great Britain] and these colonies,” and establish “a concord…between them upon so firm a basis as to perpetuate its blessing ….” Almost all who signed the later Declaration of Independence signed the Petition, largely to placate Dickinson and, for some, to justify more vigorous future measures. As feared by many, and hoped by some, on arrival in London, the American agents were told that the king would not receive a petition from rebels.

British politicians were as unsure and divided about moving forward as their American counterparts in Congress. But George III could rest assured of the support of his people, judging by the 60,000 who lined the route of his carriage from St. James’s Palace to the Palace of Westminster on the occasion of his speech to both houses for the opening of Parliament on October 26, 1775. The twenty-minute speech, delivered in a strong voice, provides a sharp counterpoint to the future American Declaration of Independence. Outraged by the attempted invasion of Canada, a peaceful and loyal colony, the king had already declared on August 23 that an open rebellion existed.

He now affirmed and elaborated on that proclamation. Leaders in America were traitors who in a “desperate conspiracy” had inflamed people through “gross misrepresentation.” They were feigning loyalty to the Crown while preparing for rebellion. Now came the bill of particulars against the Americans: “They have raised troops, and are collecting a naval force. They have seized the public revenue, and assumed to themselves legislative, executive, and judicial powers, which they already exercise in the most arbitrary manner…. And although many of these unhappy people may still retain their loyalty…the torrent of violence [by the Americans] has been strong enough to compel their acquiescence till a sufficient force shall appear to support them.”

Despite these provocations, he and the Parliament had acted with moderation, he assured his audience, and he was “anxious to prevent, if it had been possible, the effusion of the blood of my subjects, and the calamities which are inseparable from a state of war.” Nevertheless, he was determined to defend the colonies which the British nation had “encouraged with many commercial advantages, and protected and defended at much expense of blood and treasure.” He bemoaned in personal sorrow the baleful effects of the rebellion on his faithful subjects, but promised to “receive the misled with tenderness and mercy,” once they had come to their senses. Showing that his political sense was more acute than that of many Americans, as well as many members of Parliament, the king charged that the true intent of the rebels was to create an “independent empire.”

Two months later, Parliament followed the king’s declaration with an act to prohibit all commerce with the colonies and to make all colonial vessels subject to seizure as lawful prizes, with their crews subject to impressment into the Royal Navy.

The king’s speech was less well-received in the colonies, and it gave the radicals an opportunity to press their case that the king himself was at the center of the actions against the Americans. It was critical to the radicals’ efforts towards independence that the natural affinity for the king that almost all Americans shared with their countrymen in the motherland be sundered. Some snippets about the king’s character from the historian David McCullough illustrate why George III was popular. After ascending the throne in 1760 at age 22, “he remained a man of simple tastes and few pretensions. He liked plain food and drank but little, and wine only. Defying fashion, he refused to wear a wig…. And in notable contrast to much of fashionable society and the Court, … the king remained steadfastly faithful to his very plain Queen, with whom [he ultimately would produce fifteen children].” Recent depictions of him as unattractive, dull, and insane are far off the mark. He was tall, well above average in looks for his time, and good-natured. By the 1770s, he was sufficiently skilled in the political arts to wield his patronage power to the advantage of himself and his political allies. One must not forget that, but a decade earlier, colonial governments had voted to erect statues in his honor. It was the very affability of George III and his appeal as a sort of “people’s king” that made it imperative for Jefferson to portray him in the Declaration of Independence as the ruthless and calculating tyrant he was not.

Between November, 1775, and January, 1776, New York, New Jersey, Pennsylvania, and Maryland still explicitly instructed their delegates to vote against independence. But events soon overtook the fitfulness of the state assemblies and Congress. Parliament’s actions, once they became known, left no room for conciliation. The colonies effectively had been declared into outlawry and, in Lockean terms, reverted to a “state of nature” in relation to the British government. The struggles in the colonial assemblies between moderates who had pressed for negotiation and radicals who pushed for independence now tilted clearly in favor of the latter.

Yet before news of Parliament’s actions reached the colonies, another event proved to be even more of a catalyst for the shift from conciliation to independence. In January, 1776, Thomas Paine, an English corset maker brought to Pennsylvania by Benjamin Franklin, published, anonymously, a pamphlet titled “Common Sense.” Paine ridiculed monarchy and denounced George III as a particularly despicable example. The work’s unadorned but stirring prose, short length, and simplistically propagandistic approach to political systems made it a best seller that delivered an electric jolt to the public debate. The extent to which it influenced the deliberations of Congress is unclear, however.

The irresolution of the Congress, it must be noted, was mirrored by the fumblings of Parliament. The Americans had many friends for their cause in London, even within various ministries, some of which nevertheless were reviled in the colonies. This had been the case since the prior decade, when American objections to a particular act of Parliament resulted in repeal of the act, only to be followed by another that the Americans found unacceptable, whereupon the dance continued. Still, the overall trend had been to tighten the reins on the colonies. But that did not deter Edmund Burke, a solid—but at times exasperated—supporter of the Americans, from introducing a proposal for reconciliation in Parliament in November, 1775. Unfortunately, it was voted down. Others, including Adam Smith and Lord Barrington, the secretary at war, urged that all British troops be removed and that the Americans be allowed to determine whether, and under what terms, they wished to remain in union with Britain.

Other proposals for a revised union were debated in Parliament even after the Americans declared independence. These proposals resembled the dominion structure that the British, having learned their lesson too late, provided for many of their colonies and dependencies in subsequent generations. The last of these, the Conciliatory Bill, which actually was passed on February 17, 1778, gave the Americans more than they had demanded in 1775. Too late. The American alliance with France made peace impossible. Had those proposals, allowing significant control by the colonists over local affairs, been adopted in a timely manner, the independence drive may well have stalled even in 1776. Even Adams, Jefferson, and other radicals of those earlier years had urged a dominion structure, whereby the Americans would have controlled their own affairs but would have remained connected to Britain through the person of the king. The quote attributed to the former Israeli Foreign Minister Abba Eban about the Arabs of our time could just as well have applied to the British of the 1770s: “[They] never miss[ed] an opportunity to miss an opportunity.”

Reflecting the shifting attitudes in the assemblies, and responding to the seemingly inexorable move to independence by the states, the Second Continental Congress also bent to the inevitable. The Virginia House of Burgesses on May 15, 1776, appointed a committee to draft a constitution for an independent Commonwealth, and directed its delegates in Congress to vote for independence. Other states followed suit. Finally, Richard Henry Lee moved in Congress, “That these United Colonies are, and of right ought to be, Independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.” The die was cast.

Joerg W. Knipprath is an expert on constitutional law and a member of the Southwestern Law School faculty. Professor Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Podcast by Maureen Quinn. 



Click Here For Next Essay 

Click Here For Previous Essay 

Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Essay 3 - Guest Essayist: Tony Williams

In an 1857 speech criticizing the Supreme Court decision in Dred Scott v. Sandford (1857), Abraham Lincoln commented that the principle of equality in the Declaration of Independence was “meant to set up a standard maxim for a free society.” That maxim, however, like the Declaration of Independence and its principles generally, has been debated and contested throughout history.

American constitutional democracy needs vigorous deliberation and debate by citizens and their representatives. This civil and political dialogue helps Americans understand the principles and ideas upon which their country was founded and the means of achieving them. Indeed, throughout American history, many Americans appealed to the Declaration of Independence to make liberty and equality a reality for all.

In the 1770s and 1780s, enslaved persons in New England immediately appealed to the natural rights principles of the Declaration and state constitutions as they petitioned state legislatures and sued in state courts for freedom and the abolition of slavery. For example, a group of free blacks in New Hampshire stated, “That the God of nature gave them life and freedom, upon the terms of the most perfect equality with other men; That freedom is an inherent right of the human species, not to be surrendered, but by consent.” As a result, they won their freedom and helped to end slavery there.

The women and men who assembled at the 1848 Seneca Falls Convention for women’s rights adopted a Declaration of Sentiments. The document was modeled after the Declaration of Independence, but changed the language to read, “We hold these truths to be self-evident: that all men and women are created equal.”

The Declaration of Independence was one of the centerpieces of the national debate over slavery. Abolitionists such as Frederick Douglass and William Lloyd Garrison invoked the Declaration of Independence in denouncing slavery. Douglass stated that the Declaration “contains a true doctrine—that ‘all men are born equal.’” Douglass thought the document was an expression of the “eternal laws of the moral universe.” Garrison publicly burned the Constitution because he believed it to be a pro-slavery document, but always upheld the principles of the Declaration.

On the other hand, Senators Stephen Douglas and John Calhoun, Chief Justice Roger Taney, and Confederate vice-president Alexander Stephens all denied that the Declaration of Independence was meant to apply to black people. Calhoun thought slavery a “positive good” and asserted that the idea that all men are created equal was “the most false and dangerous of all political errors” because black persons were inferior and subordinate to the white race. Stephens stated,

“Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man…our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.”

Abraham Lincoln’s political philosophy and statesmanship were rooted in the principles of the Declaration of Independence and their realization according to constitutional means. He consistently held that the Declaration of Independence had universal natural rights principles that were “applicable to all men and all time.” In his Gettysburg Address, Lincoln stated that the nation was “conceived in Liberty, and dedicated to the proposition that all men are created equal.”

The expansion of American world power in the wake of the Spanish-American War of 1898 triggered another debate using the Declaration of Independence. Supporters of American expansion argued that the country would bring the ideals of liberty and self-government to those people who had not previously enjoyed them. On the other hand, anti-imperialists countered that American empire violated the Declaration of Independence by taking away the liberty of self-determination and consent from Filipinos and Cubans.

Politicians of differing perspectives viewed the Declaration in opposing ways during the early twentieth century. Progressives such as Presidents Theodore Roosevelt and Woodrow Wilson argued that the principles of the Declaration of Independence were important for an earlier period in American history to gain independence from Great Britain and set up the new nation. However, they argued, modern America faced new challenges introduced by an industrial economy and needed a new set of principles based upon equality of condition.

Progressive John Dewey represented this line of thinking when he wrote,

“The ideas of Locke embodied in the Declaration of Independence were congenial to our pioneer conditions that gave individuals the opportunity to carve their own careers….But the majority who call themselves liberal today are committed to the principle that organized society must use its powers to establish the conditions under which the mass of individuals can possess actual as distinct from merely legal liberty.”

Modern conservatives such as President Calvin Coolidge argued that the ideals of the Declaration of Independence should be preserved and respected. On the 150th anniversary of the Declaration, Coolidge stated that the principles formed the American creed and were still the basis of American republican institutions. Coolidge was a conservative who wanted to preserve the past, “reaffirm and reestablish” American principles, and generate a “reverence and respect” for principles of the Declaration and American founding. They were still applicable regardless of how much society changed. Indeed, Americans needed to revere the principles precisely because of rapid social change.

Modern American social movements for justice and equality called upon the Declaration of Independence and its principles. For example, Martin Luther King, Jr., stated in his “I Have a Dream” speech:

“When the architects of our republic wrote the magnificent words of the Constitution and the Declaration of Independence, they were signing a promissory note to which every American was to fall heir. This note was a promise that all men – yes, black men as well as white men – would be guaranteed the unalienable rights of life, liberty and the pursuit of happiness.”

King demanded that the United States live up to its “sacred obligation” of liberty and equality for all.

The natural rights republican ideals of the Declaration of Independence influenced the creation of American constitutional government founded upon liberty and equality. They also shaped the expectations that a free people would live in a just society. Achieving those ideals has always been part of a robust and dynamic debate among the sovereign people and their representatives.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 



Podcast by Maureen Quinn




Essay 2 - Guest Essayist: George Landrith

The Magna Carta created the moral and political premise that, in many ways, the American founding was built upon. The Magna Carta came to represent the idea that the people can assert their rights against an oppressive ruler and that the power of government can be limited to protect those rights. These concepts were clearly foundational and central to both the Declaration of Independence and the United States Constitution.

First, a bit of history about Magna Carta — its full name was Magna Carta Libertatum which is Latin for “Great Charter of Freedoms.” But, it became commonly known as simply Magna Carta or the “Great Charter.” It was written in 1215 to settle an intense political dispute between King John of England and a group of barons who were challenging King John’s absolute right to rule. The terms of the charter were negotiated over the course of three days. When they reached agreement on June 15, 1215, King John affixed his seal to the document at Runnymede outside of London.

This was a time when kings asserted an absolute right to rule, claiming that they were above the law and that they were personally chosen to rule by God. At this time, even questioning the King’s power was both treasonous and an act of defiance to God himself.

The Magna Carta limited the king’s absolute claim to power. It provided a certain level of religious freedom or independence from the crown, protected barons from illegal imprisonment, and limited the taxes that the crown could impose upon the barons, among other things. It did not champion the rights of every Englishman. It only focused on the rights of the barons. But, it was an important start to the concept of limiting the absolute power of governments or kings that claimed God had given them the absolute right to rule.

Magna Carta is important because of the principles it stood for and the ideas that it came to represent — not because it lasted a long time. Shortly after agreeing to the charter, King John asked Pope Innocent III to annul it, which he did. Then there was a war known as the First Barons’ War that began in 1215 and finally ended in 1217.

After King John died in 1216, the regency government of John’s nine-year-old son, Henry III, reissued the Magna Carta, after having stripped out some of its more “radical” elements in hopes of reuniting the country under the young king’s rule. That didn’t work, but at the end of the war in 1217, the original Magna Carta’s terms became the foundation for a peace treaty.

Over the following decades and centuries, the importance of Magna Carta ebbed and flowed depending on the current king’s view of it and his willingness to accept it or abide by its concepts. But subsequent kings further legitimized or confirmed the principles of Magna Carta — often in exchange for some grant of new taxes or some other political concession. The seed of limited government and individual rights had been planted and continued to grow.

Despite its relatively short political life as a working document, Magna Carta created and memorialized the idea that the people had the right to limit the powers of their government and they had the right to protect basic and important rights. By the end of the Sixteenth Century, the political lore of Magna Carta grew and the idea of an ancient source for individual rights became cemented in the minds of reform-minded political scholars, thinkers and writers.

Obviously, as written in 1215, it was not a document that protected the rights of the average Englishman. It protected only English barons. But the concepts of individual rights and the limitations of governmental power had grown and were starting to mature. Magna Carta was the seed of those powerful concepts of freedom and constitutionally limited government. By the 17th and 18th Centuries, those arguing for reforms and greater individual rights and protections used Magna Carta as their foundation. These ideas are at the very center of both the Declaration of Independence and the United States Constitution.

As English settlers came to the shores of North America, they brought with them charters under the authority of the King. The Virginia Charter of 1606 promised the English settlers all the same “liberties, franchises and immunities” as people born in England.[1]  The Massachusetts Bay Company charter acknowledged the rights of the settlers to be treated as “free and natural subjects.”[2]

In 1687, William Penn, an early American leader, who had at one point been imprisoned in the Tower of London for his political and religious views, published a pamphlet on freedom and religious liberty that included a copy of the Magna Carta and discussed it as a source of fundamental law.[3] American scholars began to see Magna Carta as the source of their guaranteed rights of trial by jury and habeas corpus (which prevented a king from simply locking up his enemies without charges or due process). While that isn’t necessarily correct history, it is part of the growth of the seed of freedom and liberty that Magna Carta planted.

By July 4, 1776, the idea that government could, and should, be limited by the consent of its citizens and that government must protect individual rights was widely seen as springing forth from Magna Carta. The beautiful and important words penned by Thomas Jefferson in the Declaration spring from the fertile soil of Magna Carta:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.

Obviously, Thomas Jefferson’s ideas of liberty and freedom had developed a great deal since Magna Carta was penned in 1215. But, it is impossible to read Magna Carta and the Declaration of Independence and not see the common DNA.

When the Founders debated, drafted and ratified the U.S. Constitution, it is also clear they were creating a set of rules and procedures to limit and check the power of government and to guarantee basic, individual rights.

The Fifth Amendment to the Constitution, which guarantees that “no person shall be deprived of life, liberty, or property, without due process of law,” embodies a concept that comes from Magna Carta. Our constitutional guarantees of “a speedy trial” as found in the Sixth Amendment are also founded in the political thought that grew from Magna Carta. The Constitution’s guarantee of the “privilege of the writ of habeas corpus” (Art. 1, Sec. 9) is also a concept that grew from Magna Carta.

Even the phrase “the law of the land” comes from Magna Carta’s history. And now we use that phrase in the United States to describe our Constitution which we proudly label “the law of the land.”

To this day, Magna Carta is an important symbol of liberty in both England and the United States.

The Declaration of Independence and the U.S. Constitution are, in my estimation, the two most important and influential political documents ever written. What they did to promote and protect the freedom, opportunity and security of the average person is almost impossible to overstate. As British Prime Minister William Gladstone said in 1878, “the American Constitution is the most wonderful work ever struck off at a given time by the brain and purpose of man.”[4]

I believe Gladstone was correct. But, Magna Carta was an important development in political thought and understanding about government power and individual rights. It is difficult to imagine the Declaration of Independence or the U.S. Constitution without the foundational elements provided by Magna Carta.


Podcast by Maureen Quinn.




[1] The Library of Congress, as shown on 2/13/2021.

[2] The Library of Congress, as shown on 2/13/2021.

[3] Ralph V. Turner, Magna Carta: Through the Ages (2003).

[4] William E. Gladstone, “Kin Beyond Sea,” The North American Review, September–October 1878, pp. 185–86.

Introduction: Revolutionary Importance of the Declaration of Independence by The Honorable Michael Warren

The importance of the Declaration of Independence can hardly be overstated. It established for the first time in world history a new nation based on the First Principles of the rule of law, unalienable rights, limited government, the Social Compact, equality, and the right to alter or abolish oppressive government.

Contrary to the beliefs of some, the American Revolution was not fought for lower taxes or to protect slavery. In fact, the tea tax which provoked the Boston Tea Party actually lowered the price of tea, and many of the Founding Fathers were opposed to slavery.

Indeed, the second paragraph of the Declaration of Independence announces for the whole world to see our underlying motivation for the American Revolution:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness. That to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed. That whenever any form of government becomes destructive to these ends, it is the right of the people to alter or to abolish it, and to institute new government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness.

The Declaration announced the Founding Fathers’ belief in the “truth” –

there was no relative moralism here. They believed some truths were so obvious that they were “self-evident,” that is, that they need not be proven: 2 + 2 = 4, not a cow. You, our dear reader, are not the moon. This essay is in English, not water. The Founders declared, against the historical experiences and beliefs of the ages, six founding First Principles, all of which were quite revolutionary at the time, and remain revolutionary today:

1. The Rule of Law: Although not articulated expressly, undergirding the entire Declaration of Independence is the idea of the rule of law. In other words, the government and the People are both bound by the law. The reason we needed to declare independence was that the British Empire was no longer following the fundamental unwritten English Constitution. Until 1776, it was just assumed that most rulers did not need to follow the law, and that huge swaths of the privileged were exempt from the laws that applied to the vast majority of the People. The Declaration of Independence declared – no more! The law should apply equally to all in society, whether they be in the government or the masses, the richest or the poorest. We turned the world upside down.

2. Equality: All men are created equal. This idea is perhaps the most controversial of them all, because the Founding Fathers fell so short of its ideal in practice. But, the Founding Fathers were the very first to proclaim that a nation should be dedicated to this belief. It is based on the belief that the Creator (Nature and Nature’s God) created all people, and therefore we are all equal in His eyes and under our law. Until 1776, no government was established on equality or even declared it should be so. Instead, inequality was the key historical reality and belief of the day. A privileged few lorded over subjects. It was done as a matter of tradition and codified into the law. We fell short in our reality, but we were the first to commit our nation to equality.

3. Unalienable Rights: We are used to thinking we have rights that government must respect, but this was quite revolutionary in 1776. In fact, the People were “subjects” and had “privileges,” which means that the government lorded over the people, and the people could do only what was permitted by the government. A right means the People do not have to seek permission from the government. Moreover, “unalienable” means that the rights cannot be taken away; they are born within each person and can never be stripped from him by the government. “Alienability” is an old-fashioned word for the ability “to sell” or “transfer” something. Because our rights come from God, they cannot be sold or taken away. Today, too many act like their rights come from government, and that they need to ask for permission to do things. Not so. No other society in human history has rested on the foundation of unalienable rights.

4. Social Compact: The idea of the Social Compact is that the People have come together and created a government to protect their unalienable rights. If we don’t have a government, we have the natural right to defend ourselves, but without a police force, we have to resort to vigilante justice. By allowing the government to create a police force, fire department, border patrol, and military, we have given up some of our natural right of self-defense and agreed to abide by the government’s laws. This means that the government rests on the consent of the People and acts justly only with that consent. Before 1776, likely no government rested on a true Social Compact; governments usually took power by force and violence and coerced their subjects to follow their dictates.

5. Limited Government: Because the government is formed to protect our unalienable rights, the just limit of its powers is to protect those rights and some ancillary powers. To ensure that the government remains free and just, we limit its powers and authority. For most of human history, governments rested on the opposite belief: that their power was unlimited unless they carved out some privileges for their subjects.

6. Reform and Revolution: If a government becomes unjust and violates our unalienable rights, we have the right to reform or even abolish it. That is, after all, the whole point of the Declaration of Independence. If reform failed, and the government undertook a long train of abuses with the intention to assert an absolute despotism on the People, then the People have the right – in fact, the duty – to overthrow the government and start anew. We are a revolutionary people and had no intention of giving away the rights we enjoyed.

Religious texts aside, the Declaration of Independence may be the most important document in human history. It totally upended the prevailing orthodoxy about government and has led to momentous changes across time and the world. Certainly we have fallen short, over and over again, of its ideals. But without the First Principles of the Declaration of Independence, we would live in the total darkness of oppression, as mankind had for millennia before.

Judge Michael Warren is the co-creator of Patriot Week (, author of America’s Survival Guide, and host of the Patriot Lessons: American History & Civics Podcast.


Podcast by Maureen Quinn.



Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Essay One - Guest Essayist: Judge Michael Warren

The importance of the Declaration of Independence can hardly be overstated. It established for the first time in world history a new nation based on the First Principles of the rule of law, unalienable rights, limited government, the Social Compact, equality, and the right to alter or abolish oppressive government.

Contrary to the beliefs of some, the American Revolution was not fought for lower taxes or to protect slavery. In fact, the tea tax that provoked the Boston Tea Party actually lowered the price of tea, and many of the Founding Fathers were opposed to slavery.

Indeed, the second paragraph of the Declaration of Independence announces for the whole world to see our underlying motivation for the American Revolution:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness. That to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed. That whenever any form of government becomes destructive of these ends, it is the right of the people to alter or to abolish it, and to institute new government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness.

The Declaration announced the Founding Fathers’ belief in the “truth” – there was no moral relativism here. They believed some truths were so obvious that they were “self-evident,” that is, they need not be proven: 2 + 2 = 4, not a cow. You, our dear reader, are not the moon. This essay is in English, not water. The Founders declared, against the historical experiences and beliefs of the ages, six founding First Principles, all of which were quite revolutionary at the time, and remain revolutionary today:

1. The Rule of Law: Although not articulated expressly, undergirding the entire Declaration of Independence is the idea of the rule of law. In other words, the government and the People are both bound by the law. The reason we needed to declare independence was because the British Empire was no longer following the fundamental unwritten English Constitution. Until 1776, it was just assumed that most rulers did not need to follow the law, and that huge swaths of the privileged were exempt from the laws that applied to the vast majority of the People. The Declaration of Independence declared – no more! The law should apply equally to all in society, whether they be in the government or the masses, the richest or the poorest. We turned the world upside down.

2. Equality: All men are created equal. This idea is perhaps the most controversial of them all, because the Founding Fathers fell so short of its ideal in practice. But the Founding Fathers were the very first to proclaim that a nation should be dedicated to this belief. It is based on the belief that the Creator (Nature and Nature’s God) created all people, and therefore we are all equal in His eyes and under our law. Until 1776, no government was established on equality or even declared it should be so. Instead, inequality was the key historical reality and belief of the day. A privileged few lorded over subjects. It was done as a matter of tradition and codified into the law. We fell short in our reality, but we were the first to commit our nation to equality.

3. Unalienable Rights: We are used to thinking we have rights that government must respect, but this was quite revolutionary in 1776. In fact, the People were “subjects” and had “privileges,” which means that the government lorded over the people, and the people could do only what was permitted by the government. A right means the People do not have to seek permission from the government. Moreover, “unalienable” means that the rights cannot be taken away; they are born within each person. “Alienability” is an old-fashioned word for the ability “to sell” or “transfer” something. Because our rights come from God, they cannot be sold or taken away. Today, too many act like their rights come from government, and they need to ask for permission to do things. Not so. No other society in human history has rested on the foundation of unalienable rights.

4. Social Compact: The idea of the Social Compact is that the People have come together and created a government to protect their unalienable rights. If we don’t have a government, we have the natural right to defend ourselves, but without a police force, we have to resort to vigilante justice. By allowing the government to create a police force, fire department, border patrol, and military, we have given up some of our unalienable rights to self-defense and agreed to abide by the government’s rule. This means that the government rests on the consent of the People and only acts justly with that consent. Before 1776, likely no government believed in a true Social Compact; governments usually took power by force and violence and coerced their subjects to follow their dictates.

5. Limited Government: Because the government is formed to protect our unalienable rights, the just limit of its powers is to protect those rights and some ancillary powers. To ensure that the government remains free and just, we limit its powers and authority. In most of human history, governments were developed with the opposite belief: that they were unlimited unless they carved out some privileges for their subjects.

6. Reform and Revolution: If a government becomes unjust and violates our unalienable rights, we have the right to reform or even abolish it. That is, after all, the whole point of the Declaration of Independence. If reform failed, and the government undertook a long train of abuses with the intention to assert an absolute despotism over the People, then the People have the right – in fact, the duty – to overthrow the government and start anew. We are a revolutionary people and had no intention of giving away the rights we enjoyed.

Religious texts aside, the Declaration of Independence may be the most important document in human history. It totally upended the prevailing orthodoxy about government and has led to momentous changes across time and the world. Certainly we have fallen short, over and over again, of its ideals. But without the First Principles of the Declaration of Independence, we would live in the total darkness of oppression as mankind had for millennia before.

Judge Michael Warren is the co-creator of Patriot Week, author of America’s Survival Guide, and host of the Patriot Lessons: American History & Civics Podcast.


Podcast by Maureen Quinn.

Click Here For The Next Essay

Click Here To Sign up for the Daily Essay From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 

Click Here To View the Schedule of Topics From Our 2021 90-Day Study: Our Lives, Our Fortunes & Our Sacred Honor 


Gerald Huesken is a Social Studies teacher, focused on World History and Government & Economics, at Elizabethtown Area High School in Elizabethtown, Pennsylvania. A graduate of Shippensburg University of Pennsylvania, Mr. Huesken has been teaching at Etown for the better part of a decade. At the high school, he serves not only as a teacher, but also as a club advisor and assistant director/drill instructor for the Elizabethtown Area High School Marching Band. His 11th Grade Government & Economics class is a well-designed effort to get students not only to understand the workings of government and the US Constitution, but also to think about it in their day-to-day lives and connect it with current issues and events in their own community. Mr. Huesken also holds a master’s degree in History from Millersville University of Pennsylvania and a master’s degree in Education from Wilkes University. He serves locally as a researcher and historian of local history, teaches an elective course on the state and local history of Pennsylvania and the Etown community, and is a published historian, having had research appear in publications like the Journal of Lancaster County History. A state-certified K-12 online educator by the Pennsylvania Department of Education, Mr. Huesken lives in Elizabethtown with his wife, Emily, his children Olivia and Owen, and their many family dogs and cats. Feel free to check out more of what is going on in Mr. Huesken’s classroom at his district website:

Click here to download Mr. Huesken’s Honorable Mention Lesson Plan, The Preamble Video Challenge!

Click here to download The Preamble Video Challenge Student Handout

Click here for the playlist to Mr. Huesken’s students’ work


Samir is currently an MBA student at Emory University in Atlanta. He’s an Army veteran with deployments to Iraq and Afghanistan as a cavalry officer; he still serves in the Army Reserves.  He has a strong interest in entrepreneurship and real estate.  He graduated from West Point with a degree in Economics and Arabic language.

Click here or below to watch Samir’s winning speech!



My name is Melanie and I’m in the 8th grade.  I enjoy creative STEM projects, coding and watching movies.
I am the student council Vice-President at my school.  In 2018 and 2019, I won grants from the City of Las Vegas Youth Neighborhood Association Partnership Program (YNAPP) which rewards grants to youth who want to make positive changes in their neighborhoods.  I completed projects to help youth with special needs in my community.   In 2020, I was selected and honored to serve as a Youth member of the board for YNAPP.
I currently hold a 4.0 GPA in Advanced and Leadership classes at my school and was named student of the month for September.
Read About Melanie’s Winning Stem Project In Her Own Words!
I am so excited to share with you my United States Constitution Middle School STEM Project!  I am a “STEM Girl” who enjoys building websites and robots!  I’m currently in Robotics in school and will take Advanced STEM next semester. That’s why I knew I had to do the STEM Project to talk about our country’s beautiful constitution.

Here is my website!

Here are the results of the Quiz and Survey!

Quiz Results:

Survey Results:

On my website, you will see beautiful colorful backgrounds, videos, quotes, photos, a quiz, and a survey. I had so much fun finding everything and adding them to my site. I chose the backgrounds of red, white, and blue because they represent our flag which stands for freedom. The quote from Abraham Lincoln is so inspirational to me because he knew, even back then, the importance of the constitution.

Please make sure to click under the “Click Here to Learn More” buttons because I have a video.  These videos of previous Presidents like Ronald Reagan, I believe, are important to our country’s history of freedom. The video of Abraham Lincoln is my favorite because it comes from Disneyland’s “Great Moments with Mr. Lincoln” presentation.  Every time my family visits Disneyland, we would watch the presentation and talk about how we are so proud to be American.

Thank you for holding the workshop! I live in Las Vegas so I was in school for part of it but what I was able to go to, gave me more information about the importance of Freedom of Speech. I learned it is very important to hold up the Constitution, even if it is different than what the media is saying. Also, I learned how the United States Constitution plays a huge role in our civil liberties as American citizens.

What I enjoyed most about making this website was having a quiz and survey for anyone to take!  I found out wonderful information about what people think and how much people currently know about our American Constitution.

I plan to implement this information in my everyday life by being proud to be an American and make others aware that the American Constitution stands for freedom.

Thank you for the opportunity to make this website and share it with you and a whole lot of people across the United States.

David Edelman has been a Social Studies teacher and Peer Instructional Coach in NYC Public Schools for over a decade. David provides instructional coaching, mentoring and professional learning to colleagues, in addition to teaching Government & Economics classes. His classroom serves as a learning lab and demonstration classroom to foster inter- and intra-school collaboration. David teaches at Union Square Academy for Health Sciences (USA), a new, unscreened, public high school with a Career & Technical Education focus in NY, NY. Most of David’s students will be first in their family to attend college. Students at USA take hands-on lab classes in either dentistry or pharmacy technology, in addition to receiving a standard liberal arts education. All students have paid internships and professional mentors who expose them to their careers. It was David’s service in AmeriCorps NCCC, a yearlong national service program similar to the domestic Peace Corps, that solidified David’s desire to follow in his mother’s footsteps and become a public school teacher. David was invited by Representative Carolyn McCarthy in 2007 to testify before Congress to reauthorize AmeriCorps and advocate for the GIVES Act. These experiences helped foster his desire to center learning on student-led activism and civic engagement. When David isn’t teaching, he’s probably having fun with his two daughters Mila and Sophia and his wife Dahlia in Forest Hills, Queens. You can learn more about David, his teaching and see examples of his students’ work at his website

Click here for David’s winning lesson plan: How has The Supreme Court interpreted The Bill of Rights as it applies to schools and students? 


Carson Winkie is the son of Ken and Leslie Winkie, and the youngest of five children. Carson, a graduate of Bridgeport High School, served as the Student Body president throughout his senior year. He is currently serving as the governor of West Virginia Boys State and as a senator to the American Legion Boys Nation. He was appointed as the secretary of Homeland Security while at Boys Nation. In the winter of his senior year, Carson was selected as one of two students from West Virginia to represent his state in the United States Senate Youth Program. This competitive national program aims to instill profound knowledge of the American political process and a lifelong commitment to public service. Athletically, he was the captain of his high school football team and helped lead his team to win the West Virginia AA State Championship Football Game. After the game, Carson was named the State Championship MVP and also broke a record for having the most carries in a WV State Championship game, 43 carries. He also received the distinction of First Team All State for his linebacker position. Carson is an advocate for community service and always tries to help in any way he can. He has volunteered since his sophomore year in the Rotary Youth Leadership Awards, where he helps develop young high school leaders. He has also volunteered for Wreaths Across America, placing wreaths to honor each veteran in his local community. Throughout high school, Carson was an active member of the National Honor Society and was valedictorian of his class. Additionally, Carson was recognized as an AP Scholar with Distinction in his junior and senior years. He has lettered in both academics and orchestra throughout his high school experience. Outside of high school, Carson is a member of Our Lady of Perpetual Help Parish in Stonewood, WV. As a parishioner, he has assisted in the delivery of items to homebound parishioners and played cello for services.

Carson is going to attend Harvard this fall (2020) with an interest in either a public service or pre-med concentration. While at Harvard, Carson will also be a member of the Varsity Football Team.

Lawson is a retired Air Force pilot (26 years, served in Vietnam), retired airline pilot (flew domestic and international routes), retired farmer, and retired Track/Cross Country coach. He and his wife have three children: one daughter with three grandchildren, and two sons (both served in the Army, one still in the Army Reserve with deployments to Iraq & Afghanistan). Staying active (physically & mentally) and following/supporting the grandchildren’s activities are the primary focus. Lawson is a graduate of the USAF Academy with a BS in Chemistry and an MBA from Southern Illinois University/Edwardsville.


Lawson Barclay’s Winning Essay

“I am afraid that many Americans take our freedom for granted. Americans are privileged to live in the ‘land of the free and the home of the brave.’ Many assume that the freedoms that presently exist will continue far into the future.

Depending on one’s age, parents or grandparents served in the armed forces during WWII or worked on the home front to provide food and supplies for the war effort. Tom Brokaw labeled these men and women as ‘The Greatest Generation.’ This generation grew up during the Great Depression and went on to fight WWII or provide labor to produce materials for the war effort. Everyone contributed in one way or another.

My parents were part of that generation. Until Brokaw’s book, I characterized them as ‘old school.’ ” Click here to read the rest of Lawson’s Winning essay!


Aubrey Jackman was born and raised in Tooele, Utah. Growing up, she loved being the middle of seven children in her family. After High School, Aubrey served an 18-month volunteer mission for the Church of Jesus Christ of Latter-Day Saints in the Seattle, Washington area. Aubrey enjoys spending her free time playing sports, such as basketball and tennis. She is an avid sports fan and aspires to be an athletic coach in the future. She now attends Brigham Young University with her husband, Makay. Aubrey is a Junior, majoring in Family Studies but has always enjoyed exploring technology and finding new ways to create.

Aubrey describes her STEM Project: 

To summarize my STEM project, I realized that a lot of people do not know simple facts about the United States Constitution. I created an online website, quiz, and study help to aid those interested in becoming more familiar with the Constitution. I wanted to design a website that would catch the attention of my peers in order to increase their excitement about this vital document. I wanted to promote the Constitution in a way that would inspire this website and survey’s visitors to gain more understanding of the importance of this lasting document. This is the link to the website:
Included on the website is a quiz I created. I chose some questions that most people do not generally know, in an effort to help them realize they need to know more! This quiz also contains simple facts that each of us should know and understand. It is important to understand how our rights are protected and how our government is run. We are blessed to have this organized document that fulfills that need. A direct link to the survey I created is here:
To make learning convenient, on the website I created an option to click on a button labeled “study here” which directly sends the user to a “Quizlet” page. Here, I created digital flash cards with simple facts about the United States Constitution and its amendments. This gives the user several choices and styles of learning to best fit their personal learning needs. This is the direct link to that Quizlet study guide and flashcards:
Toward the bottom of the website, I created a timeline visual aid of the dates that each amendment was added to the Constitution. This helped me better understand how the Constitution is a living document and I hope this timeline will teach others the same truth. I also included another option to learn more by clicking a direct link to where I myself have found new and interesting information about the United States Constitution.
Results of the survey from 65 participants can be found at this link:

Magda Smith is a freshman at George Washington University. She is interested in creating positive change through policy and promoting nuance, empathy, and open-mindedness in political discussions. In her free time, she loves to read and go on walks. Magda’s essays, songs, and poems have won several national and international awards including a Merit Award from the National YoungArts Foundation and a Gold Medal, a Silver Medal, and 30 more honors from the Scholastic Art & Writing Awards. Over the summer and autumn of 2020, she worked as a researcher with the Global Student Policy Alliance to create ​a database​ of every country in the world’s climate policies.


Click Here or below To Watch Magda Smith’s Winning Speech!


Emmalisa Horlacher, 23, Best College Song Winner, grew up in Utah but spent recent years living in Virginia where she got to tour the Museum of National History displaying the Constitution as well as visit multiple Civil War sites. She is currently attending BYU and is in the process of applying for the Media Arts Program. She is involved in two Art Internships for Future House Publishing company and The Valley New Media Project. She spends free-time doing freelance video editing work for Peter Myers. She loves to write, sing, volunteer and work on creative projects like making pop-up cards and wire wrapped rings.



Corbin Jones, Best College Song Winner: My name is Corbin Jones. I currently live in Provo, Utah; I have lived in Huntsville, Alabama, and in the Portland area. I am one of 8 kids and have grown up with a family that are very strong supporters of the Constitution. I grew up taking several classes about the founding of the country, and have been a huge advocate for supporting America and the beliefs it was founded upon. I have played guitar since I was fourteen years old and play the piano and ukulele as well. Music is a very big part of my life. I love to play sports, camp, just be outdoors and be with other people. I am a product design major at Brigham Young University. I spent two years on a mission trip for the Church of Jesus Christ of Latter-day Saints in Northern California, serving and teaching people.



Click here or below to watch Emmalisa Horlacher & Corbin Jones’ winning song!

Gianna Voce is a 15-year-old homeschool student from Northern Virginia. She has been interested in STEM from an early age, teaching herself Python before starting a Girls Who Code club for homeschooled students. She is currently exploring an interest in programming, cybersecurity, and website design. She plans to graduate early and hopes to pursue a degree in engineering and data science. She has served her community in various ways through her involvement in American Heritage Girls, Wreaths Across America, welcoming veterans on Honor Flights at Dulles Airport, and volunteering at her local library and food pantry. Her outside interests include reading, art, writing, Krav Maga, hiking, skiing, and mountain biking.

Click here to view Gianna’s Winning STEM Project – a Constitution Website with sections for each age group! Explore her Middle School “Lego Amendments!”


Courtney is a very active 17-year-old junior at Midway High School in Waco, Texas. She is a member of the National Honor Society, founder and president of her school’s Trivia Club, and serves as an officer in the Spanish Club. Courtney’s Business Professionals of America Start-up Enterprise team has advanced to nationals and placed in the top ten teams in the nation. She is also in two varsity choirs at her high school and has been recognized on the regional level in state vocal competitions. She loves to write music and sing, excels in playing the piano, guitar, and ukulele, and loves to share her gift of music to bless others. However, Courtney’s greatest passion is her faith and serving God. She is actively involved in her church, where she also plays and sings in the youth worship band. Courtney looks forward to pursuing a career in business/entrepreneurship and Spanish. When she was just nine years old, her mother authored a book for those who need hope in difficult circumstances, so Courtney decided to organize her friends to make beaded bracelets and sell them in local gift shops and online. For each bracelet she sold, she and her mom were able to donate a book to hospitals, hospice and cancer centers, churches, and others who need hope. Through Courtney’s Creations, she has raised more than $12,000 and donated more than 1,000 books. Courtney loves to express her heart and creativity in arts and crafts, songwriting, and all things music-related. She loves the mission of Constituting America and is delighted to be part of raising awareness of the importance of our nation’s Constitution!


Click here or scroll down to listen to Courtney’s Winning Song!

Joaddan Villard is a junior at Laurel High School. She originally began editing in 4th grade with the oh-so-famous “Movie Maker.” Now 17, she enjoys editing in her free time. Never one to pass up an opportunity to try something new, she has drawn on many experiences to reach where she is today. Joaddan has participated in many programs such as Fresh Films, where she won in the Quad Cities category. Not letting her film desire die, she also created a Film Club at her school. Though Joaddan aspires to join the entertainment industry, that does not stop her from expanding her horizons. Other than editing, Joaddan also enjoys debating and speeches. Joaddan has won a Barack Obama Oratorical Contest and multiple local film festivals. Energetic, multilingual, and optimistic, she hopes that one day her voice will be heard by people everywhere.

Click here or below to view Joaddan’s winning PSA!

Lily Cring is 16 years old and a Junior at Western High School in Ft. Lauderdale. She is on the Executive Board for her theatre and drama department as well as a National Speech & Debate competitor. She has a passion for contributing to her community, and in her free time is an intern for a non-profit that organizes student-led programs for feeding underprivileged children. She has a special place in her heart for cows and has sworn off eating beef for that reason.

Click here for Lily’s winning PSA

Margaret Alvine is an 18-year-old homeschooled high school Senior. She has been homeschooled all her life, along with her six younger siblings. She currently lives in the Mojave Desert in Southern California.

Her interest in the Constitution grew through her sophomore AP US Government class, which she really enjoyed. She also enjoys reading the speeches of the late Antonin Scalia, former Supreme Court justice, and looks up to him as an example of a life lived in faith and reason. She is excited for this opportunity to help inform others about our wonderful Constitution.

She competes in multiple speech categories in the National Christian Forensics and Communication Association (NCFCA), including a persuasive speech on the necessity of historical knowledge to a free people. In addition to her schoolwork and extracurriculars, Margaret teaches Religious Education at her parish, which this year has shifted online due to Covid.

In Margaret’s spare time, she enjoys crafting and other forms of designing, playing flute and recorder, coming up with skits with her sister, reading, and engaging others in conversation.

Click here to read Margaret’s winning high school essay

My name is Simran and I am an 8th grade student at Lakeside Middle. I enjoy reading, running, baking, and playing clarinet, not necessarily in that order.

Some of my accomplishments are receiving a STEM award in the Vex Robotics Competition in 5th grade, making the County Honor Band in 6th grade, and making the District Honor Band in 8th grade. I have participated in essay contests before and won third place in the “Enlighten America Contest” by B’nai B’rith International. Other contests include Young Georgia Authors, where I had the winning entry for Lakeside in 7th grade; another entry was for the Veterans of Foreign Wars.

Not only have I entered essay contests, I also participate in math competitions including MathCounts, AMC 8, and AMC 10.

I had an amazing time writing for the Constituting America contest and am looking forward to the trip to Washington, DC, meeting mentors, and the doors of opportunity it will open up for me.

Click here to read Simran’s Winning Essay!

Advika is a 4th Grader in Texas. She loves to draw,  swim, sing and ride her bike.
Advika likes to tinker around in the kitchen and makes curious snacks by mixing strange ingredients we wouldn’t normally use together. She likes trying different cuisines and making intercontinental experimental fusions, which are tried out on us. Advika is also a Cub Scout and she loves to spend time with her grandparents. Lately she’s been trying to kayak on her own and get into one without flipping it over.


Click here to view Advika’s winning artwork or scroll down.

Mary Crosby, Best Middle School Song Winner from Washington State
Mary Crosby was born in Mount Vernon, Washington in 2007. She is the 5th of six Crosby children and is homeschooled. She has always enjoyed playing music and started violin at the age of six. At seven, Mary added piano and has since taught herself guitar, ukulele, and mandolin. Mary’s family has always valued public speaking skills, so she started learning these skills from a young age. Mary currently competes in the National Christian Forensics and Communication Association in five speech events: Apologetics, Impromptu, Informative, Digital, and Duo Interpretation. Last year, Mary won first place in the Digital Presentation category for Region II. Recently discovering a love of teaching younger students, Mary has taken on piano, violin, and ukulele students. She enjoys teaching others and learning right along with them.

Mary is so excited to be a part of Constituting America and can’t wait to learn even more about music from her mentor this year.

Click here or below to watch Mary’s Winning Song!

Elise Esparza, Winner, Best Poem

Elise Esparza is a 10-year-old fifth grader from Reagan Elementary in Cedar Park, Texas. She loves to write, draw, dance, swim, and ride her bike. Elise’s love for writing was passed down from her late grandmother, Nam Kỳ Cô Nương, who was a prolific Vietnamese poet. Though Elise resides in Texas, she typically spends her summers in Los Angeles, California, where she was born. In addition to her creative and athletic endeavors, Elise likes to cook (and eat!). She started a cooking channel on YouTube when she was 6 years old called “Baking with Elise.” Elise is extremely excited and honored to be the Best Poem winner for 2020. She knows that her grandmother would be proud of her.

Click here or scroll down to read Elise’s Winning Poem



We the People

Without the Constitution
We’d be lost
With no rules and no order
It’d be pure chaos
Not one, not two, but three branches of government
Ensure that “We the People”
Are the boss

The legislative branch makes the laws
And the executive branch carries them out
But the judicial branch interprets the laws
And can say, “Hey! That’s not what the Constitution is about!”

It’s a system of checks and balances
And it’s the highest law of the land
The U.S. Constitution of America
Meant to protect every child, woman, and man

Elhaam Atiq, 10, is a proud Texan who lives in Richmond, TX. She is the second child of her parents and has two sisters. Her winning artwork depicts the diversity we see in America today and how this is a strength that adds to our country’s greatness.

Elhaam is a member of her school’s National Elementary Honor Society (NEHS), Art Club, and Destination Imagination team. As a Girl Scout, Elhaam is very good at taking on responsibility and leading a team. She has developed entrepreneurial skills by actively participating in cookie sales. Elhaam loves community service and jumps at the opportunity to take a leading role. At a local relief center, she sorted clothes for needy refugee families locally and abroad. She also helped lead a bake sale that raised funds for a service project for the Hispanic Muslim community.

She loves to spend time with her grandparents and often engages them in conversation about how life was 30 years ago. In her free time, Elhaam enjoys making TikTok videos and pranking her sisters. She has been swimming for years and is a skilled swimmer.

Elhaam loves camping and traveling. Her best vacation was a trip to Europe in the summer of 2019, in which she explored different cultures, languages, and cuisines. Even though Elhaam was born with a congenital heart defect, she is a healthy, strong, and happy girl today who values the gift of life.



Our beloved Auction Coordinator Mollie McCreary has passed away, or transitioned, as she liked to say. Mollie worked for Constituting America from 2014 to just a few weeks before she passed. Mollie was a dear friend to us all at Constituting America, and to our wonderful donors who make our Constitution Education programs possible. Mollie used to say, “we have the most wonderful donors. I just love them all.”

Mollie’s dear friends have shared these thoughts, which perfectly capture her spirit and essence:

“Today, the world lost a most indomitable woman. One who never gave up; who overcame obstacle after obstacle. Until one last obstacle was just too much. Cancer was finally just too big. A published author, a mom, a friend. She loved life, a good laugh, and her scotch. She was the embodiment of southern hospitality known for her great meals, charm, and wit. My world is just a little bleak right now. I will miss you.”

“I will miss Mollie so much. I am sorry I never got to meet her in person but I always felt I knew her for years. I thought the world of her. She was one of the most kind, considerate, thoughtful people I have had the pleasure to know,” Cher McCoy

“Mollie was the BEST FRIEND I’ve ever had…despite never meeting in person.
But regardless I will miss this wonderful Christian woman forever. She could make your day, inspire you to do your best, and just exuded enthusiasm for whatever the task at hand.
God has a strong believer and dedicated servant in His presence today. Her loved ones are in my thoughts and prayers.
Mollie will ever be in my memories. I will miss her.” Chuck Clowdis

“Mollie McCreary was a friend and inspiration to everyone around her, and even while fighting an illness that took strength from her physically, she remained positive and encouraging. Mollie always had a smile to share and was never bitter. She loved life and made the most of every day. Mollie was a leader who lifted others up so they could shine and be their best. She is dearly missed,” Amanda Hughes

“Mollie will be missed. Even in the face of a terminal illness Mollie remained upbeat and joyful. I had the pleasure of visiting her and sitting on her porch with my dog. Mollie, we love you and will miss you forever,” Bobby Rodriguez

“Mollie had reached out to me when I needed guidance when I fell ill. She was a shining light that radiated with love, bringing me comfort. She was loving, kind and especially a brave soul, she knew when it was her time to be with the Lord, and with the Lord she went. She was everything that I still strive to be and Mollie will forever be in my heart,” Cindy Sue Clark

“Mollie was a great friend & dedicated coworker. I’ll miss our phone chats about work, family, Ireland & the musings of everyday life. Mollie was passionate about her work & always thinking of ways to make our auctions more successful! I loved her stories about her travels, the people she met & adventures she had! I’ll miss her so very much but take comfort that she is in heaven reunited with her darling daughter. We’ll miss you dear Mollie.” 😢 Love, Jeanette Kraynak

“I knew Mollie through emails that we exchanged when it was time to offer up items for the Constituting America auction. I never personally met her or spoke to her on the phone, but every single communication I had with her made it abundantly clear that she was a very special person, as she was just so incredibly kind and complimentary.
May God bless and comfort her family and loved ones, as I know this wonderful woman will be greatly missed,” Brian Karadashian

Horace Cooper is a senior fellow with the National Center for Public Policy Research, co-chairman of the Project 21 National Advisory Board, and a legal commentator. Horace averages over 400 talk radio appearances per year representing the National Center and Project 21, in addition to regular television appearances and interviews by the print media. He taught constitutional law at George Mason University in Virginia and was a senior counsel to U.S. House Majority Leader Dick Armey. Horace is also a member of Constituting America’s Leadership Board, has written numerous essays for our 90 Day Study, and is a longtime friend to all of us at Constituting America.


Guest Essayist: Will Morrisey

To secure the unalienable natural rights of the American people, the American Founders designed a republican regime. Republics had existed before: ancient Rome, modern Switzerland and Venice. By 1776, Great Britain itself could be described as a republic, with a strong legislature counterbalancing a strong monarchy—even if the rule of that legislature and that monarchy over the overseas colonies of the British Empire could hardly be considered republican. But the republicanism instituted after the War of Independence, especially as framed at the Philadelphia Convention in 1787, featured a combination of elements never seen before, and seldom thereafter.

The American definition of republicanism was itself unique. ‘Republic,’ or res publica, means simply ‘public thing’—a decidedly vague notion that might apply to any regime other than a monarchy. In the tenth Federalist, James Madison defined republicanism as representative government, that is, as a specific way of constructing the country’s ruling institutions. The Founders gave republicanism a recognizable form beyond ‘not-monarchy.’ From the design of the Virginia House of Burgesses to the Articles of Confederation and finally to the Constitution itself, representation provided Americans with real exercise of self-rule, while at the same time avoiding the turbulence and folly of pure democracies, which had so disgraced themselves in ancient Greece that popular sovereignty itself had been dismissed by political thinkers ever since. Later, Abraham Lincoln’s Lyceum Address showed how republicanism must defend the rule of law against mob violence; even the naming of Lincoln’s party as the Republican Party was intended to contrast it with the rule of slave-holding plantation oligarchs in the South.

The American republic had six additional characteristics, all of them clearly registered in this 90-Day Study. America was a natural-rights republic, limiting the legitimate exercise of popular rule to actions respecting the unalienable rights of its citizens; it was a democratic republic, with no formal ruling class of titled lords and ladies or hereditary monarchs; it was an extended republic, big enough to defend itself against the formidable empires that threatened it; it was a commercial republic, encouraging prosperity and innovation; it was a federal republic, leaving substantial political powers in the hands of state and local representatives; and it was a compound republic, dividing the powers of the national government into three branches, each with the means of defending itself against encroachments by the others.

Students of the American republic could consider each essay in this series as a reflection on one or more of these features of the American regime as designed by the Founders, or, in some cases, as deviations from that regime. Careful study of what the Declaration of Independence calls “the course of events” in America shows how profound and pervasive American republicanism has been, how it has shaped our lives throughout our history, and continues to do so today.

A Natural-Rights Republic

The Jamestown colony’s charter was written in part by the great English authority on the common law, Sir Edward Coke. Common law was an amalgam of natural law and English custom. The Massachusetts Bay Colony, founded shortly thereafter, was an attempt to establish the natural right of religious liberty. And of course the Declaration of Independence rests squarely on the laws of Nature and of Nature’s God as the foundation of unalienable natural rights, several of which were given formal status in the Constitution’s Bill of Rights. As the articles on Nat Turner’s slave rebellion in 1831, the Dred Scott case in 1857, the Civil Rights amendments of the 1860s, and the attempt at replacing plantation oligarchy with republican regimes in the states after the Civil War all show, natural rights have been the pivot of struggles over the character of America. Dr. Martin Luther King, Jr. and the early civil rights leaders invoked the Declaration and natural rights to argue for civic equality, a century after the Civil War. As a natural-rights republic, America rejects in principle race, class, and gender as bars to the protection of the rights to life, liberty, and the pursuit of happiness. In practice, Americans have often failed to live up to their principles—as human beings are wont to do—but the principles remain as their standard of right conduct.

A Democratic Republic

The Constitution itself begins with the phrase “We the People,” and the reason constitutional law governs all statutory laws is that the sovereign people ratified that Constitution. George Washington was elected as America’s first president, but he astonished the world by stepping down eight years later; he had no ambition to become another George III, or a Napoleon. The Democratic Party, which Thomas Jefferson and James Madison began to form when they went into opposition against the Adams administration, named itself for this feature of the American regime. The Seventeenth Amendment to the Constitution, providing for popular election of U.S. Senators, the Nineteenth Amendment, guaranteeing voting rights for women, and the major civil rights laws of the 1960s all express the democratic theme in American public life.

An Extended Republic

Unlike the ancient democracies, which could only rule small territories, American republicanism gave citizens the chance of ruling themselves in a territory large enough to defend itself against the powerful states and empires that had arisen in modern times. All of this was contingent, however, on Jefferson’s idea that this extended republic would be an “empire of liberty,” by which he meant that new territories would be eligible to join the Union on an equal footing with the original thirteen states. Further, every state was to have a republican regime, as stipulated in the Constitution’s Article IV, Section 4. In this series of Constituting America essays, the extension of the extended republic is very well documented, from the 1803 Louisiana Purchase and the Lewis and Clark expedition to the Indian Removal Act of 1830 and the Mexican War of 1846–1848, to the purchase of Alaska and the Transcontinental Railroad of the 1860s, to the Interstate Highway Act of 1956. The construction of the Panama Canal, the two world wars, and the Cold War all followed from the need to defend this large republic from foreign regime enemies and to keep the sea lanes open for American commerce.

A Commercial Republic

Although it has proven itself eminently capable of defending itself militarily, America was not intended to be a military republic, like ancient Rome or the First Republic of France. The Constitution prohibits interstate tariffs, making the United States a vast free-trade zone—something Europe could not achieve for another two centuries. We have seen Alexander Hamilton’s brilliant plan to retire the national debt after the Revolutionary War and the founding of the New York Stock Exchange in 1792. Above all, we have seen how the spirit of commercial enterprise leads to innovation: Samuel Morse’s telegraph; Alexander Graham Bell’s telephone; Thomas Edison’s phonograph and light bulb; the Wright Brothers’ flying machine; and Philo Farnsworth’s television. And we have seen how commerce in a free market can go wrong if the legislation and federal policies governing it are misconceived, as they often were before, during, and sometimes after the Great Depression.

A Federal Republic

A republic might be ‘unitary’—ruled by a single, centralized government. The American Founders saw that this would lead to an overbearing national government, one that would eventually undermine republican self-government itself. They gave the federal government enumerated powers, reserving the remaining governmental powers “to the States respectively, or to the people.” The Civil War was fought over this issue as well as slavery: the question of whether the American Union could defend itself against its internal enemies. The substantial centralization of federal government power seen in the New Deal of the 1930s, the Great Society legislation of the 1960s, and the Affordable Care Act of 2010 has renewed the question of how far such power is entitled to reach.

A Compound Republic

A simple republic would elect one branch of government to exercise all three powers: legislative, executive, and judicial. This was the way the Articles of Confederation worked. The Constitution ended that, providing instead for the separation and balance of those three powers. As the essays here have demonstrated, the compound character of the American republic has been eroded by such notions as ‘executive leadership’—a principle first enunciated by Woodrow Wilson but firmly established by Franklin Roosevelt and practiced by all of his successors—and ‘broad construction’ of the Constitution by the Supreme Court. The most dramatic struggle between the several branches of government in recent decades was the Watergate controversy, wherein Congress attempted to set limits on presidential claims of ‘executive privilege.’ Recent controversies over the use of ‘executive orders’ have reminded Americans of all political stripes that government by decree can gore anyone’s prized ox.

The classical political philosophers classified the forms of political rule, giving names to the several ‘regimes’ they saw around them. They emphasized the importance of regimes because regimes, they knew, designate who rules us, the institutions by which the rulers rule, the purposes of that rule, and finally the way of life of citizens or subjects. In choosing a republican regime on a democratic foundation, governing a large territory for commercial purposes with a carefully calibrated set of governmental powers, all intended to secure the natural rights of citizens according to the laws of Nature and of Nature’s God, the Founders set the course of human events on a new and better direction. Each generation of Americans has needed to understand the American way of life and to defend it.

Will Morrisey is Professor Emeritus of Politics at Hillsdale College and editor and publisher of Will Morrisey Reviews, an online book review publication.



Guest Essayist: Joerg Knipprath

On March 23, 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (“ACA”), sometimes casually referred to as “Obamacare,” a sobriquet that Obama himself embraced in 2013. The ACA covered 900 pages and hundreds of provisions. The law was so opaque and convoluted that legislators, bureaucrats, and Obama himself at times were unclear about its scope. For example, the main goal of the law was presented as providing health insurance to all Americans who previously were unable to obtain it due to, among other factors, lack of money or pre-existing health conditions. The law did increase the number of individuals covered by insurance, but stopped well short of universal coverage. Several of its unworkable or unpopular provisions were delayed by executive order. Others were subject to litigation to straighten out conflicting requirements. The ACA represented a probably not-yet-final step in the massive bureaucratization of health insurance and care over the past several decades, as health care moved from a private arrangement to a government-subsidized “right.”

The law achieved its objectives to the extent it did by expanding Medicaid eligibility to higher income levels and by significantly restructuring the “individual” policy market. In other matters, the ACA sought to control costs by further reducing Medicare reimbursements to doctors, which had the unsurprising consequence that Medicare patients found it still more difficult to get medical care, and by levying excise taxes on medical devices, drug manufacturers, health insurance providers, and high-benefit “Cadillac plans” set up by employers. The last of these was postponed and, along with most of the other taxes, repealed in December 2019. On the whole, existing employer plans and plans under collective-bargaining agreements were only minimally affected. Insurers had to cover defined “essential health services,” whether or not the purchaser wanted or needed those services. As a result, certain basic health plans that focused on “catastrophic events” coverage were deemed substandard and could no longer be offered. Hence, while coverage expanded, many people also found that the new, permitted plans cost them more than their prior coverage. They also found that the reality did not match Obama’s promise: “if you like your health care plan, you can keep your health care plan.”

The ACA required insurance companies to “accept all comers.” This policy would have the predictable effect that healthy (mostly young) people would forgo purchasing insurance until a condition arose that required expensive treatment. That, in turn, would devastate the insurance market. Imagine being able to buy a fire policy to cover damage that had already arisen from a fire. Such policies would not be issued. Private, non-employer health insurance plans potentially would disappear. Some commentators opined that this was exactly the end the reformers sought, at least secretly, so as to shift to a single-payer system, in other words, to “Medicare for all.” The ACA sought to address that problem by imposing an “individual mandate.” Unless exempt from the mandate, as were illegal immigrants or 25-year-olds covered under their parents’ policy, every person had to purchase insurance through an employer or individually from an insurer through one of the “exchanges.” Barring that, the person had to pay a penalty, to be collected by the IRS.

There have been numerous legal challenges to the ACA. Perhaps the most significant constitutional challenge was decided by the Supreme Court in 2012 in National Federation of Independent Business v. Sebelius (NFIB). There, the Court addressed the constitutionality of the individual mandate under Congress’s commerce and taxing powers, and of the Medicaid expansion under Congress’s spending power. These two provisions were deemed the keys to the success of the entire project.

Before the Court could address the case’s merits, it had to rule that the petitioners had standing to bring their constitutional claim. The hurdle was the Anti-Injunction Act. That law prohibited courts from issuing an injunction against the collection of any tax, in order to prevent litigation from obstructing tax collection. Instead, a party must pay the tax and sue for a refund to test the tax’s constitutionality. The issue turned on whether the individual mandate was a tax or a penalty. Chief Justice John Roberts concluded that Congress had described this “shared responsibility payment,” owed if one did not purchase qualified health insurance, as a “penalty,” not a “tax.” Roberts noted that other parts of the ACA imposed taxes, so that Congress’s decision to apply a different label was significant. Left out of the opinion was the reason that Congress made what was initially labeled a “tax” into a “penalty” in the ACA’s final version, namely, Democrats’ sensitivity about Republican allegations that the proposed bill raised taxes on Americans.

Having confirmed the petitioners’ standing, Roberts proceeded to the substantive merits of the challenge to the ACA. The government argued that the health insurance market (and health care, more generally) was a national market in which everyone would participate, sooner or later. While this is a likely event, it is by no means a necessary one, as a person might never seek medical services. If, for whatever reason, people did not have suitable insurance, the government claimed, they might not be able to pay for those services. Because hospitals are legally obligated to provide some services regardless of the patient’s ability to pay, hospitals would pass along their uncompensated costs to insured patients, whose insurance companies in turn would charge those patients higher premiums. The ACA’s broadened insurance coverage and “guaranteed-issue” requirements, subsidized by the minimum insurance coverage requirement, would ameliorate this cost-shifting. Moreover, the related individual mandate was “necessary and proper” to deal with the potential distortion of the market that would come from younger, healthier people opting not to purchase insurance as sought by the ACA.

Of course, Congress could pass laws under the Necessary and Proper Clause only to further its other enumerated powers, hence, the need to invoke the Commerce Clause. The government relied on the long-established, but still controversial, precedent of Wickard v. Filburn. In that 1942 case, the Court upheld a federal penalty imposed on farmer Filburn for growing wheat for home consumption in excess of his allotment under the Second Agricultural Adjustment Act. Even though Filburn’s total production was an infinitesimally small portion of the nearly one billion bushels grown in the U.S. at that time, the Court concluded, tautologically, that the aggregate of production by all farmers had a substantial effect on the wheat market. Thus, since Congress could act on overall production, it could reach all aspects of it, even marginal producers such as Filburn. The government claimed that the ACA’s individual mandate was analogous. Even if one healthy individual’s failure to buy insurance would scarcely affect the health insurance market, a large number of such individuals and of “free riders” failing to get insurance until after a medical need arose would, in the aggregate, have such a substantial effect.

Roberts, in effect writing for himself and the formally dissenting justices on that issue, disagreed. He emphasized that Congress has only limited, enumerated powers, at least in theory. Further, Congress might enact laws needed to exercise those powers. However, such laws must not only be necessary, but also proper. In other words, they must not themselves seek to achieve objectives not permitted under the enumerated powers. As opinions in earlier cases had done, going back to Chief Justice John Marshall’s in Gibbons v. Ogden, Roberts emphasized that the enumeration of congressional powers in the Constitution meant that there were some things Congress could not reach.

As to the Commerce Clause itself, the Chief Justice noted that Congress previously had only used that power to control activities in which parties first had chosen to engage. Here, however, Congress sought to compel people to act who were not then engaged in commercial activity. However broad Congress’s power to regulate interstate commerce had become over the years with the Court’s acquiescence, this was a step too far. If Congress could use the Commerce Clause to compel people to enter the market of health insurance, there was no other product or service Congress could not force on the American people.

This obstacle had caused the humorous episode at oral argument where the Chief Justice inquired whether the government could require people to buy broccoli. The government urged, to no avail, that health insurance was unique, in that people buying broccoli would have to pay the grocer before they received their wares, whereas hospitals might have to provide services and never get paid. Of course, the only reason hospitals might not get paid is because state and federal laws require them to provide certain services up front, and there is no reason why laws might not be adopted in the future that require grocers to supply people with basic “healthy” foods, regardless of ability to pay. Roberts also acknowledged that, from an economist’s perspective, choosing not to participate in a market may affect that market as much as choosing to participate. After all, both reflect demand, and a boycott has economic effects just as a purchasing fad does. However, to preserve essential constitutional structures, sometimes lines must be drawn that reflect considerations other than pure economic policy.

The Chief Justice was not done, however. Having rejected the Commerce Clause as support for the ACA, he embraced Congress’s taxing power, instead. If the individual mandate was a tax, it would be upheld because Congress’s power to tax was broad and applied to individuals, assets, and income of any sort, not just to activities, as long as its purpose or effect was to raise revenue. On the other hand, if the individual mandate was a “penalty,” it could not be upheld under the taxing power, but had to be justified as a necessary and proper means to accomplish another enumerated power, such as the commerce clause. Of course, that path had been blocked in the preceding part of the opinion. Hence, everything rested on the individual mandate being a “tax.”

At first glance it appeared that this avenue also was a dead end, due to Roberts’s decision that the individual mandate was not a tax for the purpose of the Anti-Injunction Act. On closer analysis, however, the Chief Justice concluded that something can be both a tax and not be a tax, seemingly violating the non-contradiction principle. Roberts sought to escape this logical trap by distinguishing what Congress can declare as a matter of statutory interpretation and meaning from what exists in constitutional reality. Presumably, Congress can define that, for the purpose of a particular federal law, 2+2=5 and the Moon is made of green cheese. In applying a statute’s terms, the courts are bound by Congress’s will, however contrary that may be to reason and ordinary reality.

However, when the question before a court is the meaning of an undefined term in the Constitution, an “originalist” judge will attempt to discern the commonly-understood meaning of that term when the Constitution was adopted, subject possibly to evolution of that understanding through long-adhered-to judicial, legislative, and executive usage. Here, Roberts applied factors the Court had developed beginning in Bailey v. Drexel Furniture Co. in 1922. Those factors compelled the conclusion that the individual mandate was, functionally, a tax. Particularly significant for Roberts was that the ACA limited the payment to less than the price for insurance, and that it was administered by the IRS through the normal channels of tax collection. Further, because the tax would raise substantial revenue, its ancillary purpose of expanding insurance coverage was of no constitutional consequence. Taxes often affect behavior, understood in the old adage that, if the government taxes something, it gets less of it.

Roberts’s analysis reads as the constitutional law analogue to quantum mechanics and the paradox of Schroedinger’s Cat, in that the individual mandate is both a tax and a penalty until it is observed by the Chief Justice. His opinion has produced much mirth—and frustration—among commentators, and there were inconvenient facts in the ACA itself. The mandate was in the ACA’s operative provisions, not its revenue provisions, and Congress referred to the mandate as a “penalty” eighteen times in the ACA. Still, Roberts has a valid, if not unassailable, point. A policy that has the characteristics associated with a tax ordinarily is a tax. If Congress nevertheless consciously chooses to designate it as a penalty, then for the limited purpose of assessing the policy’s connection to another statute which carefully uses a different term, here the Anti-Injunction Act, the blame for any absurdity lies with Congress.

The Medicaid expansion under the ACA was struck down. Under the Constitution, Congress may spend funds, subject to certain ill-defined limits. One of those is that the expenditure must be for the “general welfare.” Under classic republican theory, this meant that Congress could spend the revenue collected from the people of the several states on projects that would benefit the United States as a whole, not some constituent part, or an individual or private entity. It was under that conception of “general welfare” that President Grover Cleveland in 1887 vetoed a bill that appropriated $10,000 to purchase seeds to be distributed to Texas farmers hurt by a devastating drought. Since then, the phrase has been diluted to mean anything that Congress deems beneficial to the country, however remotely.

Moreover, while principles of federalism prohibit Congress from compelling states to enact federal policy—known as the “anti-commandeering” doctrine—Congress can provide incentives to states through conditional grants of federal funds. As long as the conditions are clear, relevant to the purpose of the grant, and not “coercive,” states are free to accept the funds with the conditions or to reject them. Thus, Congress can try to achieve indirectly through the spending power what it could not require directly. For example, Congress cannot, as of now, direct states to teach a certain curriculum in their schools. However, Congress can provide funds to states that teach certain subjects, defined in those grants, in their schools. The key issue usually is whether the condition effectively coerces the states to submit to the federal financial blandishment. If so, the conditional grant is unconstitutional because it reduces the states to mere satrapies of the federal government rather than quasi-sovereigns in our federal system.

In what was a judicial first, Roberts found that the ACA unconstitutionally coerced the states into accepting the federal grants. Critical to that conclusion was that a state’s failure to accept the ACA’s expansion of Medicaid would result not just in the state being ineligible to receive federal funds for the new coverage. Rather, the state would lose all of its existing Medicaid funding. Moreover, the program affected, Medicaid, accounted for over 20% of the typical state’s budget. Roberts described this as “economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.” Roberts noted that the budgetary impact on a state from rejecting the expansion dwarfed anything triggered by a refusal to accept federal funds under previous conditional grants.

One peculiarity of the opinions in NFIB was the stylistic juxtaposition of Roberts’s opinion for the Court and the principal dissent, penned by Justice Antonin Scalia. Roberts at one point uses “I” to defend a point of law he makes, which is common in dissents or concurrences, instead of the typical “we” or “the Court” used by a majority. By contrast, Scalia consistently uses “we” (such as “We conclude that [the ACA is unconstitutional].” and “We now consider respondent’s second challenge….”), although that might be explained because he wrote for four justices: Anthony Kennedy, Clarence Thomas, Samuel Alito, and himself. He also refers to Justice Ruth Bader Ginsburg’s opinion broadly as “the dissent.” Most significant, Scalia’s entire opinion reads like that of a majority. He surveys the relevant constitutional doctrines more magisterially than does the Chief Justice, even where he and Roberts agree, something that dissents do not ordinarily do. He repeatedly and in detail criticizes the government’s arguments and the “friend-of-the-court” briefs that support the government, tactics commonly used by the majority opinion writer.

These oddities have provoked much speculation, chiefly that Roberts initially joined Scalia’s opinion, which would have made it the majority opinion, but got cold feet. Rumor spread that Justice Anthony Kennedy had attempted until shortly before the decision was announced to persuade Roberts to rejoin the Scalia group. Once that proved fruitless, it was too late to make anything but cosmetic changes to Scalia’s opinion for the four now-dissenters. Only the justices know what actually happened, but the scenario seems plausible.

Why would Roberts do this? Had Scalia’s opinion prevailed, the ACA would have been struck down in its entirety. That would have placed the Court in a difficult position, especially during an election year, having exploded what President Obama considered his signature achievement. The President already had a fractious relationship with the Supreme Court and earlier had made what some interpreted as veiled political threats against the Court over the case. Roberts’s “switch in time” blunted that. The chief justice is at most primus inter pares, having no greater formal powers than his associates. But he is often the public and political figurehead of the Court. Historically, chief justices have been more “political” in the sense of being finely attuned to maintaining the institutional vitality of the Court. John Marshall, William Howard Taft, and Charles Evans Hughes especially come to mind. Associate justices can be jurisprudential purists, often through dissents, to a degree a chief justice cannot.

Choosing his path allowed Roberts to uphold the ACA in part, while striking jurisprudential blows against the previously constant expansion of the federal commerce and spending powers. Even as to the taxing power, which he used to uphold that part of the ACA, Roberts planted a constitutional land mine. Should the mandate ever be made truly effective, that is, should Congress raise the payment above the price of insurance, the “tax” argument would fail and a future court could strike it down as an unconstitutional penalty. Similarly, if the tax were repealed, as eventually happened, and the mandate were no longer supported under the taxing power, the entire ACA could be threatened.

After NFIB, attempts to modify or eliminate the ACA through legislation or litigation continued, with mixed success. Noteworthy is that the tax payment for the individual mandate was repealed in 2017. This has produced a new challenge to the ACA as a whole, because the mandate is, as the government conceded in earlier arguments, a crucial element of the whole health insurance structure. The constitutional question is whether the mandate is severable from the rest of the ACA. The district court held that the mandate was no longer a tax and, thus, under NFIB, is unconstitutional. Further, because of the significance that Congress attached to the mandate for the vitality of the ACA, the mandate could not be severed from the ACA, and the entire law is unconstitutional. The Fifth Circuit agreed that the mandate is unconstitutional, but disagreed about the extent to which that affects the rest of the ACA. The Supreme Court will hear the issue in its 2020-2021 term in California v. Texas.

On the political side, the American public seems to support the ACA overall, although, or perhaps because, it has been made much more modest than its proponents had planned. So, the law, somewhat belatedly and less boldly, achieved a key goal of President Obama’s agenda. That success came at a stunning political cost to the President’s party, however. The Democrats hemorrhaged over 1,000 federal and state legislative seats during Obama’s tenure. In 2010 alone, they lost a historic 63 House seats, the biggest mid-term election rout since 1938, plus 6 Senate seats. The moderate “blue-dog” Democrats who had been crucial to the passage of the ACA were particularly hard hit. Whatever the ACA’s fate turns out to be in the courts, the ultimate resolution of controversial social issues remains with the people, not lawyers and judges.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Scot Faulkner

For those old enough to remember, September 11, 2001, 9:03 a.m. is burned into our collective memory. It was at that moment that United Flight 175 crashed into the South Tower of the World Trade Center in New York City.

Everyone was watching. American Airlines Flight 11 had crashed into the North Tower seventeen minutes earlier. For those few moments there was uncertainty whether the first crash was a tragic accident. Then, on live television, the South Tower fireball vividly announced to the world that America was under attack.

The nightmare continued. As horrifying images of people trapped in the burning towers riveted the nation, news broke at 9:37 a.m. that American Flight 77 had plowed into the Pentagon.

For the first time since December 7, 1941, Americans were collectively experiencing full scale carnage from a coordinated attack on their soil.

The horror continued as the twin towers collapsed, sending clouds of debris throughout lower Manhattan and igniting fires in adjoining buildings. Questions filled the minds of government officials and every citizen: How many more planes? What were their targets? How many have died? Who is doing this to us?

At 10:03 a.m., word came that United Flight 93 had crashed into a Pennsylvania field. Speculation exploded as to what happened. Later investigations revealed that Flight 93 passengers, alerted by cell phone calls of the earlier attacks, revolted, causing the plane to crash. Their heroism prevented this final hijacked plane from destroying the U.S. Capitol Building.

That final accounting was devastating: 2,977 killed and over 25,000 injured. The death toll continues to climb to this day as first responders and building survivors perish from respiratory conditions caused by inhaling the chemical-laden smoke. It was the deadliest terrorist attack in human history.

How this happened, why this happened, and what happened next compounds the tragedy.

Nineteen terrorists, most from Saudi Arabia, were part of a radical Islamic terrorist organization called al-Qaeda, “the Base.” This was the name given to the training camp for the radical Islamists who fought the Soviets in Afghanistan.

Khalid Sheikh Mohammed, a Pakistani, was the primary organizer of the attack. Osama Bin Laden, a Saudi, was the leader and financier. Their plan was based upon an earlier failed effort in the Philippines. It was mapped out in late 1998. Bin Laden personally recruited the team, drawn from experienced terrorists. They insinuated themselves into the U.S., with several attending pilot training classes. Five-man teams would board the four planes, overpower the pilots, and fly them as bombs into significant buildings.

They banked on plane crews and passengers responding to decades of “normal” hijackings. They would assume the plane would be commandeered, flown to a new location, demands would be made, and everyone would live. This explains the passivity on the first three planes. Flight 93 was different, because it was delayed in its departure, allowing time for passengers to learn about the fate of the other planes. Last minute problems also reduced the Flight 93 hijacker team to only four.

The driving force behind the attack was Wahhabism, a highly strict, anti-Western version of Sunni Islam.

The Saudi Royal Family owes its rise to power to Muhammad ibn Abd al-Wahhab (1703-1792). He envisioned a “pure” form of Islam that purged most worldly practices (heresies), oppressed women, and endorsed violence against nonbelievers (infidels), including Muslims who differed with his sect. This extremely conservative and violent form of Islam might have died out in the sands of central Arabia were it not for a timely alliance with a local tribal leader, Muhammad bin Saud.

The House of Saud was just another minor tribe until the two Muhammads realized the power of merging Sunni fanaticism with armed warriors. Wahhab’s daughter married Saud’s son, merging two bloodlines that remain joined to this day. The House of Saud and its warriors rapidly expanded throughout the Arabian Peninsula, fueled by Wahhabi fanaticism. These various conflicts always included destruction of the holy sites of rival sects and tribes. While done in the name of “purification,” the result was erasing the physical touchstones of rival cultures and governments.

In the early 20th Century, Saudi leader, ibn Saud, expertly exploited the decline of the Ottoman Empire, and alliances with European Powers, to consolidate his permanent hold over the Arabian Peninsula. Control of Mecca and Medina, Islam’s two holiest sites, gave the House of Saud the power to promote Wahhabism as the dominant interpretation of Sunni Islam. This included internally contradictory components of calling for eradicating infidels while growing rich from Christian consumption of oil and pursuing lavish hedonism when not in public view.

In the mid-1970s, Saudi Arabia used the flood of oil revenue to become the “McDonald’s of Madrassas.” Religious schools and new mosques popped up throughout Africa, Asia, and the Middle East. This building boom had nothing to do with education and everything to do with spreading the cult of Wahhabism. Pakistan became a major hub for turning Wahhabi madrassa graduates into dedicated terrorists.

Wahhabism might have remained a violent, dangerous, but diffuse movement, except that it found fertile soil in Afghanistan.

Afghanistan was called the graveyard of empires, as its rugged terrain and fierce tribal warriors thwarted potential conquerors for centuries. In 1973, the last king of Afghanistan was deposed, leading to years of instability. In April 1978, the opposition Communist Party seized control in a bloody coup. The communists tried brutally to consolidate power, which ignited a civil war among factions supported by Pakistan, China, Islamists (known as the Mujahideen), and the Soviet Union. Amidst the chaos, U.S. Ambassador Adolph Dubs was killed on February 14, 1979.

On December 24, 1979, the Soviet Union invaded Afghanistan, killing their ineffectual puppet President, and ultimately bringing over 100,000 military personnel into the country. What followed was a vicious war between the Soviet military and various Afghan guerrilla factions. Over 2 million Afghans died.

The Reagan Administration covertly supported the anti-Soviet Afghan insurgents, primarily aiding the secular pro-West Northern Alliance. Arab nations supported the Mujahideen. Bin Laden entered the insurgent cauldron as a Mujahideen financier and fighter. By 1988, the Soviets realized their occupation had failed. They removed their troops, leaving behind another puppet government and a Soviet-trained military.

When the Soviet Union collapsed, Afghanistan was finally free. Unfortunately, calls for reunifying the country by reestablishing the monarchy and strengthening regional leadership went unheeded. Attempts at recreating the faction-ravaged pre-invasion parliamentary system only led to new rounds of civil war.

In September 1994, the weak U.S. response opened the door for the Taliban, graduates of Pakistan’s Wahhabi madrassas, to launch their crusade to take control of Afghanistan. By 1998, the Taliban controlled 90% of the country.

Bin Laden and his al-Qaeda warriors made Taliban-controlled territory in Afghanistan their new base of operations. In exchange, Bin Laden helped the Taliban eliminate their remaining opponents. This was accomplished on September 9, 2001, when suicide bombers disguised as a television camera crew blew up Ahmad Shah Massoud, the charismatic pro-West leader of the Northern Alliance.

Two days later, Bin Laden implemented his plan to establish al-Qaeda as the global leader of Islamic terrorism by hijacking four planes and turning them into guided bombs.

The 9-11 attacks, along with the earlier support against the Soviets in Afghanistan, were part of Bin Laden’s goal to lure infidel governments into “long wars of attrition in Muslim countries, attracting large numbers of jihadists who would never surrender.” He believed this would lead to the economic collapse of the infidels by “bleeding” them dry. Bin Laden outlined his strategy of “bleeding America to the point of bankruptcy” in a 2004 tape released through Al Jazeera.

On September 14, amidst the World Trade Center rubble, President George W. Bush addressed those recovering bodies and extinguishing fires using a bullhorn:

“The nation stands with the good people of New York City and New Jersey and Connecticut as we mourn the loss of thousands of our citizens.”

A rescue worker yelled, “I can’t hear you!”

President Bush spontaneously responded: “I can hear you! The rest of the world hears you! And the people who knocked these buildings down will hear all of us soon!”

Twenty-three days later, on October 7, 2001, American and British warplanes, supplemented by cruise missiles fired from naval vessels, began destroying Taliban operations in Afghanistan.

U.S. Special Forces entered Afghanistan. Working with the Northern Alliance, they defeated major Taliban units. They occupied Kabul, the Afghan capital, on November 13, 2001.

On May 2, 2011, U.S. Special Forces raided an al-Qaeda compound in Abbottabad, Pakistan, killing Osama bin Laden.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: The Honorable Don Ritter

In October of 1989, hundreds of thousands of East German citizens demonstrated in Leipzig, following a pattern of demonstrations for freedom and human rights throughout Eastern Europe and following the first ever free election in a Communist country, Poland, in the Spring of 1989. Hungary had opened its southern border with Austria and East Germans seeking a better life were fleeing there. Czechoslovakia had done likewise on its western border and the result was the same.

The East German government had been on edge and was seeking to reduce domestic tensions by granting limited passage of its citizens to West Germany. And that’s when the dam broke.

On November 9, 1989, thousands of elated East Berliners started pouring into West Berlin. There was a simple bureaucratic error earlier in the day when an East German official read a press release he hadn’t previously studied and proclaimed that residents of Communist East Berlin were permitted to cross into West Berlin, freely and, most importantly, immediately. He had missed the end of the release which instructed that passports would be issued in an orderly fashion when government offices opened the next day.

This surprising information about free passage spread throughout East Berlin, East Germany and, indeed, around the world like a lightning bolt. Massive crowds gathered near-instantaneously and celebrated at the heavily guarded Wall gates which, in a party-like atmosphere amid total confusion, were opened by hard-core communist yet totally outmanned Border Police, who normally had orders to shoot to kill anyone attempting to escape. A floodgate was opened and an unstoppable flood of freedom-seeking humanity passed through, unimpeded.

Shortly thereafter, the people tore down the Wall with every means available. The clarion bell had been sounded and the reaction across communist Eastern Europe was swift. Communist governments fell like dominoes.

The Wall itself was a glaring symbol of totalitarian communist repression and the chains that bound satellite countries to the communist Soviet Union. But the “bureaucratic error” of a low-level East German functionary was the match needed to set off an explosion of freedom that had been years in-the-making throughout the 1980s. And that is critical to understanding just why the Cold War came to an end, precipitously and symbolically, with the fall of the Wall.

With the election of Ronald Reagan to the presidency of the United States, Margaret Thatcher to Prime Minister of Great Britain, and the Polish Cardinal, Karol Wojtyła, becoming Pope John Paul II of the Roman Catholic Church, the foundation was laid in the 1980s for freedom movements in Soviet Communist-dominated Eastern Europe to evolve and grow. Freedom lovers and fighters had friends in high places who believed deeply in their cause. These great leaders of the West understood the enormous human cost of communist rule and were eager to fight back in their own unique and powerful way, leading their respective countries and allies in the process.

Historic figures like labor leader Lech Walesa, head of the Polish Solidarity Movement, and Czech playwright Vaclav Havel, an architect of the Charter 77 call for basic human rights, had already planted the seeds for historic change. Particularly in Poland, where the combination of Solidarity and the Catholic Church was supported staunchly in the non-communist world by Reagan and Thatcher, anti-communism flourished despite repression and brutal crackdowns.

And then, there was a new General Secretary of the Communist Party of the Soviet Union, Mikhail Gorbachev. When he came to power in 1985, he sought to exhort workers to increase productivity in the economy, stamp out the resistance to Soviet occupation in Afghanistan via a massive bombing campaign, and keep liquor stores closed till 2:00 pm. However, exhortation didn’t work and the economy continued to decline; Americans gave Stinger missiles to the Afghan resistance and the bombing campaign failed; and liquor stores were regularly broken into by angry citizens not to be denied their vodka. The Afghan war was a body blow to a Soviet military accustomed to being “always victorious,” and Soviet mothers protested their sons coming back in body bags. The elites (the “nomenklatura”) were taken aback and demoralized by what was viewed as a military debacle in a then Fourth World country. “Aren’t we supposed to be a superpower?”

Having failed at run-of-the-mill Soviet responses to problems, Gorbachev embarked on a bold-for-the-USSR effort to restructure the failing Soviet economy via Perestroika which sought major reform but within the existing burdensome central-planning bureaucracy. On the political front, he introduced Glasnost, opening discussion of social and economic problems heretofore forbidden since the USSR’s beginning. Previously banned books were published. Working and friendly relationships with President Reagan and Margaret Thatcher were also initiated.

In the meantime, America under President Reagan’s leadership was not only increasing its military strength in an accelerated and expensive arms race but was also opposing Soviet-backed communist regimes and their so-called “wars of national liberation” all over the world. The Cold War turned hot under the Reagan Doctrine. President Reagan also pushed “Star Wars,” an anti-ballistic missile system that could potentially neutralize Soviet long-range missiles. Star Wars, even if far off in the future, worried the military and communist leadership of a Soviet Union that lagged in electronics and computer technology.

Competing economically and militarily with a resurgent anti-communist American engine firing on all cylinders became too expensive for the economically and technologically disadvantaged Soviet Union. There are those who say the USSR collapsed of its own weight, but they are wrong. If that were so, a congenitally overweight USSR would have collapsed a lot earlier. Gorbachev deserves a lot of credit to be sure but there should be no doubt, he and the USSR were encouraged to shift gears and change course. Unfortunately for communist rulers, their reforms initiated a downward spiral in their ability to control their citizens. Totalitarian control was first diminished and then lost. Author’s note: A lesson which was not lost on the rulers of Communist China.

Summing up: A West with economic and military backbone plus spiritual leadership, combined with brave dissident and human rights movements in Eastern Europe and the USSR itself, forced changes in the behavior of the communist monolith. Words and deeds mattered. When Ronald Reagan called the Soviet Union an “evil empire,” the media and political opposition worldwide were aghast… but in the Soviet Gulag, political prisoners rejoiced. When President Reagan said “Mr. Gorbachev, tear down this wall,” consternation reigned in the West… but the people from East Germany to the Kremlin heard it loud and clear.

And so fell the Berlin Wall.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan’s presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral Fellow during Leonid Brezhnev’s time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Danny de Gracia

It’s hard to believe that this year marks thirty years since Saddam Hussein invaded Kuwait in August of 1990. In history, some events can be said to be turning points for civilization that set the world on fire, and in many ways, our international system has not been the same since the invasion of Kuwait.

Today, the Iraq that went to war against Kuwait is no more, and Saddam Hussein himself is long dead, but the battles that were fought, the policies that resulted, and the history that followed will haunt the world for many more years to come.

Iraq’s attempts to annex Kuwait in 1990 would bring some of the most powerful nations into collision, and would set in motion a series of events that would give rise to the Global War on Terror, the rise of ISIS, and an ongoing instability in the region that frustrates the West to this day.

To understand the beginning of this story, one must go back in time to the Iranian Revolution in 1979, where a crucial ally of the United States of America at the time – Iran – was in turmoil because of public discontent with the leadership of its shah, Mohammad Reza Pahlavi.

Iran’s combination of oil resources and strategic geographic location made it highly profitable for the shah and his allies in the West over the years, and a relationship emerged where Iran’s government, flush with oil money, kept America’s defense establishment in business.

For years, the shah had been permitted to purchase nearly any American weapons system he pleased, no matter how advanced or powerful it may be, and Congress was only all too pleased to give it to him.

The Vietnam War had broken the U.S. military and hollowed out the resources of the armed services, but the defense industry needed large contracts if it was to continue to support America.

Few people realize that Iran, under the Shah, was one of the most important client states in the immediate post-Vietnam era, making it possible for America to maintain production lines of top-of-the-line destroyers, fighter planes, engines, missiles, and many other vital elements of the Cold War’s arms race against the Soviet Union. As an example, the Grumman F-14A Tomcat, America’s premier naval interceptor of 1986 “Top Gun” fame, would never have been produced in the first place if it were not for the commitment of the Iranians as a partner nation in the first batch of planes.

When the Iranian Revolution occurred, an embarrassing ulcer to American interests emerged in Western Asia, as one of the most important gravity centers of geopolitical power had slipped out of U.S. control. Iran, led by an ultra-nationalistic religious revolutionary government, and armed with what was at the time some of the most powerful weapons in the world, had gone overnight from trusted partner to sworn enemy.

Historically, U.S. policymakers have preferred to contain and buffer enemies rather than directly oppose them. Iraq, which had also gone through a regime change in July of 1979 with the rise of Saddam Hussein in a bloody Baath Party purge, was a rival to Iran, making it a prime candidate to be America’s new ally in the Middle East.

The First Persian Gulf War: A Prelude

Hussein, a brutal, transactional-minded leader who rose to power through a combination of violence and political intrigue, was one always to exploit opportunities. Recognizing Iran’s potential to overshadow a region he deemed himself alone worthy to dominate, Hussein used the historical disagreement over ownership of the strategic, yet narrow, Shatt al-Arab waterway that divided Iran from Iraq to start a war on September 22, 1980.

Iraq, flush with over $33 billion in oil profits, had become formidably armed with a modern military that was supplied by numerous Western European states and, bizarrely, even the Soviet Union as well. Hussein, like Nazi Germany’s Adolf Hitler, had a fascination for superweapons and sought to amass a high-tech military force that could not only crush Iran, but potentially take over the entire Middle East.

Hussein’s bizarre arsenal would eventually include everything from modified Soviet ballistic missiles (the “al-Hussein”) to Dassault Falcon 50 corporate jets modified to carry anti-ship missiles, a nuclear weapons program at Osirak, and even work on a supergun capable of firing telephone booth-sized projectiles into orbit, nicknamed Project Babylon.

Assured of a quick campaign against Iran and tacitly supported by the United States, Hussein saw anything but a decisive victory, and spent almost a decade in a costly war of attrition with Iran. Hussein, who constantly executed his own military officers for making tactical withdrawals or failing in combat, denied his military the ability to learn from defeats and handicapped his army by his own micromanagement.

Iraq’s Pokémon-like “gotta catch ‘em all” model of military procurement during the war even briefly put it at odds with the United States on May 17, 1987, when one of its heavily armed Falcon 50 executive jets, disguised on radar as a Mirage F1EQ fighter, accidentally launched a French-acquired Exocet missile against a U.S. Navy frigate, the USS Stark. President Ronald Reagan, though privately horrified at the loss of American sailors, still considered Iraq a necessary counterweight to Iran, and used the Stark incident to increase political pressure on Iran.

While Iraq had begun its war against Iran in the black, years of excessive military spending, meaningless battles, and rampant destruction of the Iraqi army had taken their toll. Hussein’s war had put the country over $37 billion in debt, much of which was owed to neighboring Kuwait.

Faced with a strained economy, tens of thousands of soldiers returning wounded from the war, and a military that was virtually on the brink of deposing Saddam Hussein just as he had deposed his predecessor Ahmed Hassan al-Bakr in 1979, Iraq had no choice but to end its war against Iran.

Both Iran and Iraq would ultimately submit to a UN brokered ceasefire, but ironically, what would be one of the decisive elements in bringing the first Persian Gulf war to a close would not be the militaries of either country, but the U.S. military, when it launched a crippling air and naval attack against Iranian forces on April 18, 1988.

Iran, which had mined important sailing routes of the Persian Gulf as part of its area denial strategy during the war, succeeded on April 14, 1988 in striking the USS Samuel B. Roberts, an American frigate deployed to the region to protect shipping.

In response, the U.S. military retaliated with Operation: Praying Mantis, which hit Iranian oil platforms (which had since been reconfigured as offensive gun platforms), naval vessels, and other military targets. The battle, so overwhelming in its scope that it remains to this day the largest carrier and surface ship battle since World War II, resulted in the destruction of most of Iran’s navy and was a major contributing factor in de-fanging Iran for the next decade to come.

Kuwait and Oil

Saddam Hussein, claiming victory over Iran amidst the UN ceasefire, and now faced with a new U.S. president, George H.W. Bush in 1989, felt that the time was right to consolidate his power and pull his country back from collapse. In Hussein’s mind, he had been the “savior” of the Arab and Gulf States, who had protected them during the Persian Gulf war against the encroachment of Iranian influence. As such, he sought from Kuwait a forgiveness of debts incurred in the war with Iran, but would find no such sympathy. The 1990s were just around the corner, and Kuwait had ambitions of its own to grow in the new decade as a leading economic powerhouse.

Frustrated and outraged by what he perceived was a snub, Hussein reached into his playbook of once more leveraging territorial disputes for political gain and accused Kuwait of stealing Iraqi oil by means of horizontal slant drilling into the Rumaila oil fields of southern Iraq.

Kuwait found itself in an unenviable situation neighboring the heavily armed Iraq, and as talks over debt and oil continued, the mighty Iraqi Republican Guard appeared to be gearing up for war. Most political observers at the time, including many Arab leaders, felt that Hussein was merely posturing and that it was a grand bluff to maintain his image as a strong leader. For Hussein to invade a neighboring Arab ally was unthinkable at the time, especially given Kuwait’s position as an oil producer.

On July 25, 1990, U.S. Ambassador to Iraq, April Glaspie, met with President Saddam Hussein and his deputy, Tariq Aziz on the topic of Kuwait. Infamously, Glaspie is said to have told the two, “We have no opinion on your Arab/Arab conflicts, such as your dispute with Kuwait. Secretary Baker has directed me to emphasize the instruction, first given to Iraq in the 1960s, that the Kuwait issue is not associated with America.”

While the George H.W. Bush administration’s intentions were obviously aimed at taking no side in a regional territorial dispute, Hussein, whose personality was direct and confrontational, likely interpreted the Glaspie meeting as America backing down.

In the Iraqi leader’s eyes, one always takes the initiative and always shows an enemy one’s dominance. For a powerful country such as the United States to tell Hussein that it had “no opinion” on an Arab/Arab conflict was most likely a sign of permission, or even weakness, that the Iraqi leader felt he had to exploit.

America, still in the shadow of its failure in Vietnam and the disastrous Navy SEAL incident during Operation: Just Cause in Panama, may have appeared in that moment to Hussein as a paper tiger that could be out-maneuvered or deterred by aggressive action. Whatever the case, Iraq stunned the world when, just days later on August 2, 1990, it invaded Kuwait.

The Invasion of Kuwait

American military forces and intelligence agencies had been closely monitoring the buildup of Iraqi forces for what appeared like an invasion of Kuwait, but it was still believed right up to the moment of the attack that perhaps Saddam Hussein was only bluffing. The United States Central Command had set WATCHCON 1 – or Watch Condition One – the highest state of non-nuclear alertness in the region just prior to Iraq’s attack, and was regularly employing satellites, reconnaissance aircraft, and electronic surveillance platforms to observe the Iraqi Army.

Nevertheless, if there is one mantra that perfectly encapsulates the posture of the United States and the European powers from the beginning of the 20th century to the present, it is that Western countries are too slow to act. As is often the result with aggressive nations that challenge the international order, Iraq plowed into Kuwait and savaged the local population.

While America and her allies have always had the best technologies, the best weapons, and the best early warning systems and sensors, for more than a century these advantages have repeatedly been rendered useless because the information they provide is not acted upon in time to stop an attack or threat. Such was the case with Iraq, where all of the warning signs of an imminent attack were present, but no action was taken to stop it.

Kuwait’s military courageously fought Iraq’s invading army, even notably waging air battles with its American-made A-4 Skyhawks, some of them launching from highways after their air bases were destroyed. But the Iraqi army, hardened by years of fighting Iran and at that time the fourth largest military in the world, was simply too powerful to overcome. 140,000 Iraqi troops flooded into Kuwait and seized one of the richest oil-producing nations in the region.

As Hussein’s military overran Kuwait, sealed its borders, and began plundering the country and ravaging its civilian population, the worry of the United States immediately shifted from Kuwait to Saudi Arabia, for fear that the kingdom might be next. On August 7, 1990, President Bush commenced “Operation: Desert Shield,” a military operation to defend Saudi Arabia and prevent any further advance of the Iraqi army.

At the time Operation: Desert Shield commenced, I was living in Hampton Roads, Virginia, where my father was a lieutenant colonel assigned to Tactical Air Command headquarters at nearby Langley Air Force Base. Forty-eight F-15 Eagle fighter planes from that base immediately deployed to the Middle East in support of the operation. In the days that followed, our base became a flurry of activity, and I remember seeing a huge buildup of combat aircraft from all around the United States forming at Langley.

President Bush, who himself had been a U.S. Navy pilot in World War II, was all too familiar with what could happen when a megalomaniacal dictator started invading his neighbors. Whatever congeniality of convenience had existed between the U.S. and Iraq in opposing Iran was now a thing of the past in the wake of the occupation of Kuwait.

Having fought against Imperial Japan in the Pacific in WWII, Bush saw in Saddam Hussein many similarities to Adolf Hitler, and immediately began comparing the Iraqi leader and his government to the Nazis in numerous speeches and public appearances as debates raged over what the U.S. should do regarding Kuwait.

As retired members of previous presidential administrations urged caution and called for long-term sanctions on Iraq rather than a kinetic military response, the American public, still haunted by the Vietnam experience, largely felt that the matter in Kuwait was not a concern that should involve military forces. Protests broke out across America, with crowds shouting “Hell no, we won’t go to war for Texaco” and others singing traditional protest songs of peace like “We Shall Overcome.”

Bush, persistent in his beliefs that Iraq’s actions were intolerable, made every effort to keep taking the moral case for action to the American public in spite of these pushbacks. As a leader seasoned by the horrors of war and combat, Bush must have known, as Henry Kissinger once said, that leadership is not about popularity polls, but about “an understanding of historical cycles and courage.”

On September 11, 1990, before a joint session of Congress, Bush gave a fiery address that still stands as one of the most impressive presidential addresses in history.

“Vital issues of principle are at stake. Saddam Hussein is literally trying to wipe a country off the face of the Earth. We do not exaggerate,” President Bush would say before Congress. “Nor do we exaggerate when we say Saddam Hussein will fail. Vital economic interests are at risk as well. Iraq itself controls some 10 percent of the world’s proven oil reserves. Iraq, plus Kuwait, controls twice that. An Iraq permitted to swallow Kuwait would have the economic and military power, as well as the arrogance, to intimidate and coerce its neighbors, neighbors who control the lion’s share of the world’s remaining oil reserves. We cannot permit a resource so vital to be dominated by one so ruthless, and we won’t!”

Members of Congress erupted in roaring applause at Bush’s words, and he issued a stern warning to Saddam Hussein: “Iraq will not be permitted to annex Kuwait. And that’s not a threat, that’s not a boast, that’s just the way it’s going to be.”

Ejecting Saddam from Kuwait

As America prepared for action, another man in Saudi Arabia was also making promises to defeat Saddam Hussein and his military. Osama bin Laden, who had fought in the earlier war in Afghanistan as part of the Mujahideen resisting the Soviet occupation, now offered his services to Saudi Arabia, pledging to wage jihad to force Iraq out of Kuwait just as he had forced the Soviets out of Afghanistan. The Saudis, however, would hear none of it; having already secured the protection of the United States and its powerful allies, they brushed bin Laden aside as a useless bit player on the world stage.

Herein the seeds of a future conflict were sown: not only did bin Laden take offense at being rejected by the Saudi government, but he also regarded the presence of American military forces on holy Saudi soil as blasphemous and a morally corrupting influence on the Saudi people.

In fact, the sight of female U.S. Air Force personnel in Saudi Arabia without traditional cover, or driving vehicles, prompted many Saudi women to begin petitioning their government for more rights, and even, in some instances, to commit acts of civil disobedience. This caused still more outrage among a number of fundamentalist groups in Saudi Arabia, and lent additional support, albeit covert in some instances, to bin Laden and other jihadist leaders.

Despite these cultural tensions boiling beneath the surface, President Bush successfully persuaded not only his own Congress but the United Nations as well to empower the formation of a global coalition of 35 nations to eject Iraqi occupying forces from Kuwait and to protect Saudi Arabia and the rest of the Gulf from further aggression.

On November 29, 1990, the die was cast when the United Nations passed Resolution 678, authorizing “Member States co-operating with the Government of Kuwait, unless Iraq on or before 15 January 1991 [withdraws from Kuwait] … to use all necessary means … to restore international peace and security in the area.”

Subsequently, on January 15, 1991, President Bush issued an ultimatum to Saddam Hussein to leave Kuwait. Hussein ignored the threat, believing that America was weak and that its public, conditioned by the experience of Vietnam, would recoil at the sight of losing soldiers. Hussein believed that he could not only cause the American people to back down, but that he could unravel Arab support for the UN coalition by enticing Israel to attack Iraq. As such, he persisted in occupying Kuwait and boasted that a “Mother of all Battles” would commence, in which Iraq would emerge victorious.

History, however, shows us that this was not the case, and days later on the evening of January 16, 1991, Operation: Desert Shield became Operation: Desert Storm, when a massive aerial bombardment and air superiority campaign commenced against Iraqi forces. Unlike prior wars which combined a ground invasion with supporting air forces, the start of Desert Storm was a bombing campaign that consisted of heavy attacks by aircraft and naval-launched cruise missiles against Iraq.

The operational name “Desert Storm” may have been influenced in part by a war plan developed months earlier by Air Force planner Colonel John A. Warden, who conceived an attack strategy named “Instant Thunder” that used conventional, non-nuclear airpower in a precise manner to topple Iraqi defenses.

A number of elements from Warden’s top secret plan were integrated into the opening shots of Desert Storm’s air campaign, as U.S. and coalition aircraft knocked out Iraqi radars, missile sites, command headquarters, power stations, and other key targets in just the first night alone.

Unlike the Vietnam air campaigns, which were gradual, politically managed escalations of force, Desert Storm reflected an Air Force that, having suffered heavy losses in Vietnam, wanted, as General Chuck Horner would later explain, “instant” and “maximum” escalation so that its enemies would have no time to react or rearm.

This was precisely what happened, to the point that the massive Iraqi air force was either annihilated by bombs on the ground, shot down by coalition combat air patrols, or forced to flee into neighboring Iran.

A number of radical operations and new weapons were employed in the air campaign of Desert Storm. For one, the U.S. Air Force had secretly converted a number of nuclear AGM-86 Air Launched Cruise Missiles (ALCMs) into conventional, high-explosive precision guided missiles and equipped them on seven B-52 bombers for a January 17 night raid called Operation: Senior Surprise.

Known internally and informally to the B-52 pilots as “Operation: Secret Squirrel,” the cruise missiles knocked out numerous Iraqi defenses and opened the door for more coalition aircraft to surge against Saddam Hussein’s military.

The Navy likewise employed covert strikes against Iraq, firing BGM-109 Tomahawk Land Attack Missiles (TLAMs) that had also been converted to carry high-explosive (non-nuclear) warheads. The early BGM-109s were guided by a primitive digital scene matching area correlator (DSMAC) that took digital photos of the ground below and compared them with pre-programmed topography in the missile’s terrain computer, so the flat deserts of Iraq were thought to be problematic for employing cruise missiles. The Navy came up with a highly controversial solution: secretly fire cruise missiles into Iran – a violation of Iranian airspace and international law – then turn them toward the mountain ranges as aiming points and fly them into Iraq.

The plan worked, however, and the Navy would ultimately rain down on Iraq some 288 TLAMs that destroyed hardened hangars, runways, parked aircraft, command buildings, and scores of other targets in highly accurate strikes.

Part of the air war came home personally to me when a U.S. Air Force B-52, serial number 58-0248, was accidentally fired upon during a night raid over Iraq by a friendly F-4G “Wild Weasel” that mistook the lumbering bomber’s AN/ASG-21 radar-guided tail gun for an Iraqi air defense platform. The Wild Weasel fired an AGM-88 High-speed Anti-Radiation Missile (HARM) at the B-52 that hit and exploded in its tail, but still left the aircraft in flyable condition.

At the time, my family had moved to Andersen AFB in Guam, and 58-0248 made for Guam to land for repairs. When the B-52 arrived, it was parked in a cavernous hangar and crews immediately began patching up the aircraft. My father, always wanting to ensure that I learned something about the real world so I could get an appreciation for America, brought me to the hangar to see the stricken B-52, which was affectionately given the nickname “In HARM’s Way.”

I never forgot that moment, and it caused me to realize that the war was more than just some news broadcast we watched on TV, and that war had real consequences for not only those who fought in it, but people back home as well. I remember feeling an intense surge of pride as I saw that B-52 parked in the hangar, and I felt that I was witnessing history in action.

Ultimately, the air war against Saddam Hussein’s military would go on for a brutal six weeks, leaving many of his troops shell-shocked, demoralized, and eager to surrender. In nearly a decade of fighting Iran, the Iraqi army had never known the kind of destructive scale or deadly precision that coalition forces were able to bring to bear against it.

Once the ground campaign commenced against Iraqi forces on February 24, 1991, that portion of Operation: Desert Storm lasted a mere 100 hours before a cease-fire was called, not because Saddam Hussein had pleaded for survival, but because back in Washington, D.C., national leaders watching the war on CNN began to see a level of carnage that they were not prepared for.

Gen. Colin Powell, seeing that most of the coalition’s UN objectives had essentially been achieved, personally lobbied for the campaign to wrap up, feeling that further destruction of Iraq would be “unchivalrous” and fearing the loss of any more Iraqi or American lives. It was also feared that if America tried to make a play for regime change in Iraq in 1991, the Army would be left holding the bag in securing and rebuilding the country, something that would not only be costly, but might turn the Arab coalition against America. On February 28, 1991, the U.S. officially declared a cease-fire.

The Aftermath

Operation: Desert Storm successfully accomplished the UN objectives that were established for the coalition forces and it liberated Kuwait. But a number of side effects of the war would follow that would haunt America and the rest of the world for decades to come.

First, Saddam Hussein remained in power. As a result, the U.S. military would remain in the region for years as a defensive contingent, not only continuing to inflame existing cultural tensions in Saudi Arabia, but also becoming a target for jihadist terrorist attacks, including the Khobar Towers bombing on June 25, 1996 and the USS Cole bombing on October 12, 2000.

Osama bin Laden’s al Qaeda terrorist group would ultimately change the modern world as we knew it when his men hijacked commercial airliners and flew them into the Pentagon and World Trade Center on September 11, 2001. It should not be lost on historical observers that 15 of the 19 hijackers that day were Saudi citizens, a strategic attempt by bin Laden to drive a wedge between the United States and Saudi Arabia to get American military forces out of the country.

9/11 would also provide an opportunity for George H.W. Bush’s son, President George W. Bush, to attempt to take down Saddam Hussein. Many members of the new Bush Administration were veterans of the elder Bush’s administration during Desert Storm, and felt that the decision not to “take out” the Iraqi dictator had been a mistake. And while the 2003 campaign against Iraq was indeed successful in toppling Baathist-party rule and changing the regime, it drove many disaffected former Iraqi officers and jihadists to take up arms against the West, which ultimately led to the rise of ISIS in the region.

It is my hope that the next generation of college and high school students who read this essay and reflect on world affairs will understand that history is often complex and that every action taken leaves ripples in our collective destinies. A Holocaust survivor once told me, “There are times when the world goes crazy, and catches on fire. Desert Storm was one such time when the world caught on fire.”

What can we learn from the invasion of Kuwait, and what lessons can we take forward into our future? Let us remember always that allies are not always friends; victories are never permanent; and sometimes even seemingly unrelated personalities and forces can lead to world-changing events.

Our young people, especially those who wish to enter into national service, must study history and seek to possess, as the Bible says in the book of Revelation 17:9 in the Amplified Bible translation, “a mind to consider, that is packed with wisdom and intelligence … a particular mode of thinking and judging of thoughts, feelings, and purposes.”

Indeed, sometimes the world truly goes crazy and catches on fire, and some may say that 2020 is such a time. Let us study the past now, and prepare for the future!

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologist, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of the novel American Kiss: A Collection of Short Stories.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams
Ronald Reagan Speech, Brandenburg Gate & Berlin Wall 1987

“Mr. Gorbachev, tear down this wall!” – Ronald Reagan

After World War II, a Cold War erupted between the world’s two superpowers – the United States and the Soviet Union. Germany was occupied and then divided after the war, as was its capital, Berlin. In 1961, East Germany, with Soviet backing, erected the Berlin Wall, which became a symbol of the divide between East and West in the Cold War and between freedom and tyranny.

During the 1960s and 1970s, the superpowers entered into a period of détente, or decreasing tensions. However, the Soviet Union took advantage of détente, using revenue from rising oil prices and arms sales to engage in a massive arms build-up, support communist insurrections in developing nations around the globe, and invade Afghanistan.

Ronald Reagan was elected president in 1980 during a time of foreign-policy reversals including the Vietnam War and the Iranian Hostage Crisis. He blamed détente for strengthening and emboldening the Soviets and sought to improve American strength abroad.

As president, Reagan instituted a tough stance towards the Soviets that was designed to reverse their advances and win the Cold War. His administration supported the Polish resistance movement known as Solidarity, increased military spending, started the Strategic Defense Initiative (SDI), and armed resistance fighters around the world, including the mujahideen battling a Soviet invasion in Afghanistan.

Reagan had a long history of attacking communist states and the idea of communism itself that shaped his strategic outlook. In the decades after World War II, like many Americans, he was concerned about Soviet dominance in Eastern Europe spreading elsewhere. In 1952, Reagan compared communism to Nazism and other forms of totalitarianism characterized by a powerful state that limited individual freedoms.

“We have met [the threat] back through the ages in the name of every conqueror that has ever set upon a course of establishing his rule over mankind,” he said. “It is simply the idea, the basis of this country and of our religion, the idea of the dignity of man, the idea that deep within the heart of each one of us is something so godlike and precious that no individual or group has a right to impose his or its will upon the people.”

In a seminal televised speech in 1964 called “A Time for Choosing,” Reagan stated that he believed there could be no accommodation with the Soviets. “We cannot buy our security, our freedom from the threat of the bomb by committing an immorality so great as saying to a billion human beings now in slavery behind the Iron Curtain, ‘Give up your dreams of freedom because to save our own skins, we are willing to make a deal with your slave-masters.’”

Reagan targeted the Berlin Wall as a symbol of communism in a 1967 televised town hall debate with Robert Kennedy. “I think it would be very admirable if the Berlin Wall should…disappear,” Reagan said, “We just think that a wall that is put up to confine people, and keep them within their own country…has to be somehow wrong.”

In 1978, Reagan visited the wall and heard the story of Peter Fechter, one of hundreds who were shot by East German police while trying to escape to freedom over the Berlin Wall. As a result, Reagan told an aide, “My idea of American policy toward the Soviet Union is simple, and some would say simplistic.  It is this: We win and they lose.”

As president, he continued his unrelenting moral attack on the idea of communism. In a 1982 speech to the British Parliament, he predicted that communism would end up “on the ash heap of history,” and that the wall was “the signature of the regime that built it.” When he visited the wall during the same trip, he stated, “It’s as ugly as the idea behind it.” In a 1983 speech, he referred to the Soviet Union as an “evil empire.”

Reagan went to West Berlin to speak during a ceremony commemorating the 750th anniversary of the city and faced a choice. He could confront the Soviets about the wall, or he could deliver a speech without controversy.

In June 1987, many officials in his administration and in West Germany were opposed to any provocative words or actions during the anniversary speech. Many Germans also did not want Reagan to deliver his speech anywhere near the wall and feared anything that might be perceived as an aggressive signal. Secretary of State George Shultz and Chief of Staff Howard Baker questioned the speech and asked the president and his speechwriters to tone down the language. Deputy National Security Advisor Colin Powell and other members of the National Security Council wanted to alter the speech and offered several revisions. Reagan insisted on speaking next to the Berlin Wall and determined that he would use the occasion to confront the threat the wall posed to human freedom.

Reagan and his team arrived in West Berlin on June 12. He spoke to reporters and nervous German officials, telling them, “This is the only wall that has ever been built to keep people in, not keep people out.” Meanwhile, in East Berlin, the German secret police and Russian KGB agents cordoned off an area a thousand yards wide opposite the spot where Reagan was to speak on the other side of the wall. They wanted to ensure that no one could hear the message of freedom.

Reagan spoke at the Brandenburg Gate with the huge, imposing wall in the background. “As long as this gate is closed, as long as this scar of a wall is permitted to stand, it is not the German question alone that remains open, but the question of freedom for all mankind.”

Reagan challenged Soviet General Secretary Mikhail Gorbachev directly, stating, “If you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization: Come here to this gate! Mr. Gorbachev, open this gate! Mr. Gorbachev, tear down this wall!”

He finished by predicting the wall would not endure because it stood for oppression and tyranny. “This wall will fall. For it cannot withstand faith; it cannot withstand truth. The wall cannot withstand freedom.” No one imagined that the Berlin Wall would fall only two years later on November 9, 1989, as communism collapsed across Eastern Europe.

A year later, Reagan was at a summit with Gorbachev in Moscow and addressed the students at Moscow State University. “The key is freedom,” Reagan boldly and candidly told them. “It is the right to put forth an idea, scoffed at by the experts, and watch it catch fire among the people. It is the right to dream – to follow your dream or stick to your conscience, even if you’re the only one in a sea of doubters.” Ronald Reagan believed that he had a responsibility to bring an end to the Cold War and to eliminate nuclear weapons, for the benefit of both the United States and the world and for the sake of an era of peace. He dedicated himself to achieving this goal. Partly due to these efforts, the Berlin Wall fell in 1989, and communism collapsed in the Soviet Union by 1991.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Scot Faulkner

The election of Ronald Reagan on November 4, 1980 was one of the two most important elections of the 20th Century. It was a revolution in every way.

In 1932, Franklin Roosevelt (FDR) decisively defeated one-term incumbent Herbert Hoover by 472-59 Electoral votes. His election ushered in the era of aggressive liberalism, expanding the size of government and establishing diplomatic relations with the Soviet Union. Roosevelt’s inner circle, his “brain trust,” were dedicated leftists, several of whom had conferred with Soviet officials on policy issues prior to 1932.

In 1980, Ronald Reagan decisively defeated one-term incumbent Jimmy Carter by 489-49 Electoral votes. His election ended the liberal era, shrunk the size of government, and rebuilt America’s military, diplomatic, economic, and intelligence capabilities. America reestablished its leadership in the world, ending the Soviet Empire, and the Soviet Union itself.

Reagan was a key leader in creating and promoting the conservative movement, whose policy and political operatives populated and guided his administration. He was a true “thought leader” who defined American conservatism in the late 20th Century. Through his writings, speeches, and radio program, Reagan laid the groundwork, and shaped the mandate, for one of the most impactful Presidencies in American history.

The road from Roosevelt’s “New Deal” to Reagan’s Revolution began in 1940.

FDR, at the height of his popularity, chose to run for an unprecedented third term. Roosevelt steered ever more leftward, selecting Henry Wallace as his running mate. Wallace would run as a socialist under the Progressive Party banner in 1948. Republican Wendell Willkie was the first private-sector businessman to become a major party’s nominee.

Willkie had mounted numerous legal challenges to Roosevelt’s regulatory overreach. Though those challenges failed, Willkie’s legacy inspired a generation of economists and activists to unite against big government.

As the allied victory in World War II became inevitable, the Willkie activists, along with leading conservative economists from across the globe, established policy organizations, “think tanks,” and publications to formulate and communicate an alternative to Roosevelt’s New Deal.

Human Events, the premiere conservative newspaper, began publishing in 1944. The Foundation for Economic Education was founded in 1946.

In 1947, conservative, “free market,” anti-regulatory economists met at the Mont Pelerin resort at the base of Mont Pelerin near Montreux, Switzerland. The greatest conservative minds of the 20th Century, including Friedrich Hayek, Ludwig von Mises, and Milton Friedman, organized the “Mont Pelerin Society” to counter the globalist economic policies arising from the Bretton Woods Conference.  The Bretton Woods economists had met at the Hotel Washington, at the base of Mount Washington in New Hampshire, to launch the World Bank and International Monetary Fund.

Conservative writer and thinker, William F. Buckley Jr. founded National Review on November 19, 1955. His publication, more than any other, would serve to define, refine and consolidate the modern Conservative Movement.

The most fundamental change was realigning conservatism with the international fight against the Soviet Union, which was leading global Communist expansion. Up until this period, American conservatives tended to be isolationist. National Review’s array of columnists developed “Fusionism” which provided the intellectual justification of conservatives being for limited government at home while aggressively fighting Communism abroad. In 1958, the American Security Council was formed to focus the efforts of conservative national security experts on confronting the Soviets.

Conservative Fusionism was politically launched by Senator Barry Goldwater (R-AZ) during the Republican Party Platform meetings for their 1960 National Convention. Conservative forces prevailed. This laid the groundwork for Goldwater to run and win the Republican Party Presidential nomination in 1964.

The policy victories of Goldwater and Buckley inspired the formation of the Young Americans for Freedom (YAF), the major conservative youth movement. Meeting at Buckley’s home in Sharon, Connecticut on September 11, 1960, the group adopted a manifesto that became the Fusionist Canon. The conservative movement added additional policy centers, such as the Hudson Institute, founded on July 20, 1961.

Goldwater’s campaign was a historic departure from traditional Republican politics. His plain-spoken assertion of limited government and aggressive action against the Soviets inspired many, but scared many more. President John F. Kennedy’s assassination had catapulted Vice President Lyndon B. Johnson into the Presidency. LBJ had a vision of an even larger Federal Government, designed to keep urban minorities perpetually beholden to Democrat politicians.

Goldwater’s alternative vision was trounced on election day, but the seeds for Reagan’s Conservative Revolution were sown.

Reagan was unique in American politics. He was a pioneer in radio broadcasting and television. His movie career made him famous and wealthy. His tenure as President of the Screen Actors Guild thrust him into the headlines as Hollywood confronted domestic communism.

Reagan’s pivot to politics began when General Electric hired him to host their popular television show, General Electric Theater. His contract included touring GE plants to speak about patriotism, free market economics, and anti-communism. His new life within corporate America introduced him to a circle of conservative businessmen who would become known as his “Kitchen Cabinet.”

The Goldwater campaign reached out to Reagan to speak on behalf of their candidate on a television special during the last week of the campaign. On October 27, 1964, Reagan drew upon his GE speeches to deliver “A Time for Choosing.” His inspiring address became a political classic, which included lines that would become the core of “Reaganism”:

“The Founding Fathers knew a government can’t control the economy without controlling people. And they knew when a government sets out to do that, it must use force and coercion to achieve its purpose. So, we have come to a time for choosing … You and I are told we must choose between a left or right, but I suggest there is no such thing as a left or right. There is only an up or down. Up to man’s age-old dream—the maximum of individual freedom consistent with order—or down to the ant heap of totalitarianism.”

The Washington Post declared Reagan’s “Time for Choosing” “the most successful national political debut since William Jennings Bryan electrified the 1896 Democratic convention with his Cross of Gold speech.” It immediately established Reagan as the heir to Goldwater’s movement.

The promise of Reagan fulfilling the Fusionist vision of Goldwater, Buckley, and a growing conservative movement inspired the formation of additional groups, such as the American Conservative Union in December 1964.

In 1966, Reagan trounced two-term Democrat incumbent Pat Brown to become Governor of California, winning 57.5 percent of the vote. Reagan’s two terms became the epicenter of successful conservative domestic policy, attracting top policy and political operatives who would serve him throughout his Presidency.

Retiring after two terms, Reagan devoted himself full time to being the voice, brain, and face of the Conservative Movement. This included a radio show that was followed by over 30 million listeners.

In 1976, the ineffectual moderate Republicanism of President Gerald Ford led Reagan to mount a challenge. Reagan came close to the unprecedented unseating of his Party’s incumbent. His concession speech on the last night of the Republican National Convention became another political classic. It launched his successful march to the White House.

Reagan’s 1980 campaign was now aided by a more organized, broad, and capable Conservative Movement. Reagan’s “California Reaganites” were linked to Washington, DC-based “Fusionists” and conservative grassroots activists who were embedded in Republican Party units across America. The Heritage Foundation, founded on February 16, 1973, had become a major conservative policy center. A new hub for conservative activists, The Conservative Caucus, came into existence in 1974.

Starting in 1978, Reagan’s inner circle, including his “Kitchen Cabinet,” worked seamlessly with this vast network of conservative groups: The Heritage Foundation, Kingston, Stanton, Library Court, Chesapeake Society, Monday Club, Conservative Caucus, American Legislative Exchange Council, Committee for the Survival of a Free Congress, the Eagle Forum, and many others. They formed a unified and potent political movement that overwhelmed Republican moderates to win the nomination and then buried Jimmy Carter and the Democrat Party in November 1980.

After his landslide victory, which also swept in the first Republican Senate majority since 1954, Reaganites and Fusionists placed key operatives into Reagan’s transition. They identified over 17,000 positions that affected Executive Branch operations. A separate team identified the key positions in each cabinet department and major agency that had to be under Reagan’s control in the first weeks of his presidency.

On January 21, 1981, Reagan’s personnel team immediately removed every Carter political appointee. These Democrat functionaries were walked out the door, identification badges taken, files sealed, and security clearances terminated. The Carter era’s impotent foreign policy and intrusive domestic policy ended completely and instantaneously.

Reagan went on to lead one of the most successful Presidencies in American history. His vision of a “shining city on a hill” continues to inspire people around the world to seek better lives through freedom, open societies, and economic liberty.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Scot Faulkner
Iranian Students Climb Wall of U.S. Embassy, Tehran, Nov. 1979

The long tragic road to the September 11, 2001 terror attacks began with President Jimmy Carter: his administration’s involvement in the Iranian Revolution and its fundamentally weak response to the Iranian Hostage Crisis.

The Iranian Hostage Crisis was the most visible act of the Iranian Revolution. Starting on November 4, 1979, and lasting for 444 days, 52 Americans were imprisoned in brutal conditions. The world watched as the Carter Administration repeatedly failed to free the hostages, both through poor diplomacy and the rescue attempt fiasco.

The result was the crippling of U.S. influence throughout the Middle East and the spawning of radical Islamic movements that terrorize the world to this day.

Islam’s three major sects, Sunni, Shiite, and Sufi, all harbor the seeds of violence and hatred. In 1881 a Sufi mystic ignited the Mahdi Revolt in the Sudan leading to eighteen years of death and misery throughout the upper Nile. During World War II, the Sunni Grand Mufti of Jerusalem befriended Hitler and helped Heinrich Himmler form Islamic Stormtrooper units to kill Jews in the Balkans.

After World War II, Islam secularized as mainstream leaders embraced Western economic interests to tap their vast oil and gas reserves.

Activists became embroiled in the Middle East’s Cold War chess board, aiding U.S. or Soviet interests.

The Iranian Revolution changed that. Through the success of the Iranian Revolution, Islamic extremists of all sects embraced the words of Shiite Ayatollah Ruhollah Khomeini:

“If the form of government willed by Islam were to come into being, none of the governments now existing in the world would be able to resist it; they would all capitulate.”

Islamic dominance became an end in and of itself.

This did not have to happen at all.

Iran has been a pivotal regional player for 2,500 years. The Persian Empire was the bane of ancient Greece. As the Greek Empire withered, Persia, later Iran, remained a political, economic, and cultural force. This is why their 1979 Revolution and subsequent confrontation with the U.S. inspired radicals throughout the Islamic world to become the Taliban, ISIS and other terrorists of today.

Iran’s modern history began as part of the East-West conflict following World War II. The Soviets heavily influenced and manipulated Iran’s first elected government. On August 19, 1953, British and American intelligence toppled that government and returned Shah Mohammad Reza to power.

“The Shah” as he became known globally, was reform minded. He launched his “White Revolution” to build a modern, pro-West, pro-capitalist Iran in 1963. The Shah’s “Revolution” built the region’s largest middle class, and broke centuries of tradition by enfranchising women. It was opposed by many traditional powers, including fundamentalist Islamic leaders like the Ayatollah Ruhollah Khomeini. Khomeini’s agitation for violent opposition to the Shah’s reforms led to his arrest and exile.

Throughout his reign, the Shah was vexed by radical Islamic and communist agitation. His secret police brutally suppressed fringe dissidents. This balancing act between western reforms and control worked well, with a trend towards more reforms as the Shah aged. The Shah enjoyed warm relationships with American Presidents of both parties and was rewarded with lavish military aid.

That was to change in 1977.

From the beginning, the Carter Administration expressed disdain for the Shah. President Carter pressed for the release of political prisoners. The Shah complied, allowing many radicals the freedom to openly oppose him.

Not satisfied with the pace or breadth of the Shah’s human rights reforms, Carter envoys began a dialogue with the Ayatollah Khomeini, first at his home in Iraq and more intensely when he moved to a Paris suburb.

Indications that the U.S. was souring on the Shah emboldened dissidents across the political spectrum to confront the regime. Demonstrations, riots, and general strikes began to destabilize the Shah and his government. In response, the Shah accelerated reforms. This was viewed as weakness by the opposition.

The Western media, especially the BBC, began to promote the Ayatollah as a moderate alternative to the Shah’s “brutal regime.” The Ayatollah assured U.S. intelligence operatives and State Department officials that he would only be the “figurehead” for a western parliamentary system.

During the fall of 1978, strikes and demonstrations paralyzed the country. The Carter Administration, led by Secretary of State Cyrus Vance and U.S. Ambassador to Iran William Sullivan, coalesced around abandoning the Shah and helping install Khomeini, whom they viewed as a “moderate clergyman” who would be Iran’s “Gandhi-like” spiritual leader.

Time and political capital were running out. On January 16, 1979, the Shah, after arranging for an interim government, left Iran and went into exile.

The balance of power now rested with the Iranian Military.

While the Shah was preparing for his departure, General Robert Huyser, Deputy Commander of NATO, and his top aides arrived in Iran. They were there to neutralize the military leaders. Using ties of friendship, promises of aid, and assurances of safety, Huyser and his team convinced the Iranian commanders to allow the transitional government to finalize arrangements for Khomeini becoming part of the new government.

Many of these Iranian military leaders, and their families, were slaughtered as Khomeini and his Islamic Revolutionary Guard toppled the transitional government and seized power during the Spring of 1979. “It was a most despicable act of treachery, for which I will always be ashamed,” admitted one NATO general years later.

While Iran was collapsing, so were America’s intelligence capabilities.

One of President Carter’s earliest appointments was placing Admiral Stansfield Turner in charge of the Central Intelligence Agency (CIA). Turner immediately eviscerated the Agency’s human intelligence and clandestine units. He felt they had gone “rogue” during the Nixon-Ford era. He also thought electronic surveillance and satellites could do as good a job.

Turner’s actions led to “one of the most consequential strategic surprises that the United States has experienced since the CIA was established in 1947” – the Embassy Takeover and Hostage Crisis.

The radicalization of Iran occurred at lightning speed. Khomeini and his lieutenants remade Iran’s government and society into a totalitarian fundamentalist Islamic state. Anyone who opposed their Islamic Revolution was driven into exile, imprisoned, or killed.

Khomeini’s earlier assurances of moderation and working with the West vanished. Radicalized mobs turned their attention to eradicating all vestiges of the West. This included the U.S. Embassy.

The first attack on the U.S. Embassy occurred on the morning of February 14, 1979. Coincidentally, this was the same day that Adolph Dubs, the U.S. ambassador to Afghanistan, was kidnapped and fatally shot by Muslim extremists in Kabul. In Tehran, Ambassador Sullivan surrendered the U.S. Embassy and was able to resolve the occupation within hours through negotiations with the Iranian Foreign Minister.

Despite this attack, and the bloodshed in Kabul, nothing was done to close the Tehran Embassy, reduce personnel, or strengthen its defenses. During the takeover, Embassy personnel failed to burn sensitive documents because their furnaces malfunctioned; cheaper paper shredders had been installed instead. During the 444-day occupation, rug weavers were employed to reconstruct the sensitive shredded documents, creating global embarrassment for America.

Starting in September 1979, radical students began planning a more extensive assault on the Embassy. This included daily demonstrations outside the U.S. Embassy to trigger an Embassy security response. This allowed organizers to assess the size and capabilities of the Embassy security forces.

On November 4, 1979, one of the demonstrations erupted into an all-out assault at the Embassy’s public visa-processing entrance. The assault leaders deployed approximately 500 students. Female students hid metal cutters under their robes, which were used to breach the Embassy gates.

Khomeini was in a meeting outside of Tehran and did not have prior knowledge of the takeover. He immediately issued a statement of support, declaring it “the second revolution” and the U.S. Embassy an “American spy den in Tehran.”

What followed was an unending ordeal of terror and deprivation for the 66 hostages who, through various releases, were reduced to a core of 52. The 2012 film “Argo” chronicled the audacious escape of six Americans who had been outside the U.S. Embassy at the time of the takeover.

ABC News began a nightly update on the hostage drama. This became “Nightline.” During the 1980 Presidential campaign, it served as a nightly reminder of the ineffectiveness of President Carter.

On April 24, 1980, trying to break out of this chronic crisis, Carter initiated an ill-conceived and poorly executed rescue mission called Operation Eagle Claw. It ended with crashed helicopters and eight dead servicemen at the remote desert staging area, designated Desert One. Another attempt was made through diplomacy as part of a hoped-for “October Surprise,” but the Iranians cancelled the deal just as planes were being mustered at Andrews Air Force Base.

Carter paid the price for his Iranian duplicity. On November 4, 1980, Ronald Reagan obliterated Carter in the worst defeat suffered by an incumbent President since Herbert Hoover in 1932.



Guest Essayist: The Honorable Don Ritter

The election of Ronald Reagan in 1980 marked THE crucial turning point in winning the Cold War against Russian-dominated communism, the USSR.

Reagan’s rise to national prominence came with the surge in communist insurgencies and revolutions worldwide that followed the fall of Saigon on April 30, 1975. After 58,000 American lives and trillions in treasure lost over the tenures of five American Presidents, the United States abandoned the Vietnam War and South Vietnam to the communists.

Communist North Vietnam, in league with fellow communist governments in Russia and China, accurately saw the weakening of a new American President, Gerald Ford, and a new ‘anti-war’ Congress as a result of the ‘Watergate’ scandal and President Richard Nixon’s subsequent resignation. In the minds of the communists, it was a signal opportunity to forcibly “unify,” read invade, the non-communist South with magnum force, armed to the teeth by both the People’s Republic of China and the USSR. President Nixon’s secret letter to South Vietnamese President Thieu, pledging all-out support of U.S. air and naval power if the communists broke the Paris Peace Agreement and invaded, was irrelevant as Nixon was gone. With the communist invasion beginning, seventy-four new members of Congress, all anti-war Democrats, guaranteed the “No” vote on the Ford Administration’s bill to provide $800 million for ammunition and fuel to the South Vietnamese military to roll their tanks and fly their planes. That bill lost in Congress by only one vote. The fate of South Vietnam was sealed. The people of South Vietnam, in what seemed then like an instant, were abandoned by their close American ally of some 20 years. Picture that.

Picture the ignominy of it all. Helicopters rescuing Americans and some chosen Vietnamese from rooftops while U.S. Marines staved off the desperate South Vietnamese who had worked with us for decades. Picture Vietnamese people clinging to helicopter skids and airplane landing gears in desperation, falling to their death as these aircraft ascended. Picture drivers of South Vietnamese tanks and pilots of fighter planes not able to engage for want of fuel. Picture famous South Vietnamese Generals committing suicide rather than face certain torture and death in Re-Education Camps, read Gulags with propaganda lessons. Picture perhaps hundreds of thousands of “Boat People,” having launched near anything that floated to escape the wrath of their conquerors, at the bottom of the South China Sea. Picture horrific genocide in Cambodia where Pol Pot and his henchmen murdered nearly one-third of the population to establish communism… and through it all, the West, led by the United States, stayed away.

Leonid Brezhnev, Secretary General of the Communist Party of the Soviet Union, and his Politburo colleagues could picture it… all of it. The Cold War was about to get hot.

The fall of the non-communist government in South Vietnam and the election of President Jimmy Carter were followed by a U.S. Congress that emasculated the American military and intelligence services. Many in the Democratic Party took the side of the insurgents. I remember well Sen. Tom Harkin from Iowa claiming that the Sandinista Communists (in Nicaragua) were more like “overzealous Boy Scouts” than hardened Communists. Amazing.

Global communism, with the USSR in the lead and America in retreat, was on the march.

In just a few years, in Asia, Africa and Latin America, repressive communist-totalitarian regimes had been foisted on the respective peoples by small numbers of ideologically committed, well-trained and well-armed (by the Soviet Union) insurgencies. “Wars of national liberation” and intensive Soviet subversion raged around the world. Think Angola and Southern Africa, Ethiopia and Somalia in the Horn of Africa. Think the Middle East and the Philippines, Malaysia and Afghanistan (there a full-throated Red Army invasion) in Asia.

Think Central America in our own hemisphere, and Nicaragua, where the USSR and its right hand in the hemisphere, communist Cuba, took charge along with a relatively few committed Marxist-Leninist Nicaraguans, who even created a Soviet-style Politburo and Central Committee! On one of my several trips to the region, I personally met with Tomas Borge, the Stalinist leader of the Nicaraguan Communist Party, and his colleagues. Total Bolsheviks. To make things even more dangerous for the United States, these wars of national liberation were also ongoing in El Salvador, Honduras and Guatemala.

A gigantic airfield that could land Soviet jumbo transports was being completed under the Grenadian communist government of Maurice Bishop. Warehouses with vast storage capacity for weapons to fuel insurgency in Latin America were built. I personally witnessed these facilities and found the diary of one leading Politburo official, Liam James, who was on the payroll of the Soviet Embassy at the time. They all were, but he, being the Treasurer of the government, actually wrote it down! These newly-minted communist countries and other ongoing insurgencies, with Marxist-Leninist values in direct opposition to human freedom and the interests of the West, were being funded and activated by Soviet intelligence agencies, largely the KGB, and were supplied by the economies of the Soviet Union and their Warsaw Pact empire in Eastern and Central Europe. Many leaders of these so-called “Third World” countries were on Moscow’s payroll.

In the words of one KGB General, “The world was going our way.” (Christopher Andrew, The World Was Going Our Way: The KGB and the Battle for the Third World, based on the Mitrokhin archives.) These so-called wars of national liberation didn’t fully end until some ten years later, when the weapons and supplies from the Soviet Union dried up as the Soviet Empire began to disintegrate, thanks to a new U.S. President who led the way during the 1980s.

Enter Ronald Wilson Reagan. To the chagrin of the Soviet communists and their followers worldwide, it was the beginning of the end of their glory days when in January of 1981, Ronald Reagan, having beaten the incumbent President, Jimmy Carter, in November, was sworn in as President of the United States. Ronald Reagan was no novice in the subject matter. President Reagan had been an outspoken critic of communism for over three decades. He had written and given speeches on communism and the genuinely evil nature of the Soviet Union. He was a committed lover of human freedom, human rights and free markets. As Governor of California, he had gained executive experience in a large bureaucracy and during that time had connected with a contingent of like-minded political and academic conservatives. The mainstream media was ruthless with him, characterizing him as an intellectual dolt and warmonger who would bring on World War III. He would prove his detractors so wrong. He would prove to be the ultimate Cold Warrior, yet a sweet man with an iron fist when needed.

When his first National Security Advisor, Richard Allen, asked the new President Reagan about his vision of the Cold War, Reagan’s response was, “We win, they lose.” Moral clarity, rarely enunciated.

To the end of his presidency, he continued to be disparaged by the mainstream media, although less aggressively. However, the American people grew to appreciate and even love the man as he and his team, more than anyone, would be responsible for winning the Cold War and bringing down a truly “Evil Empire.” Just ask those who suffered most: the Polish, Czech, Hungarian, Ukrainian, Romanian, Baltic, and yes, the Russian people themselves. To this very day, his name is revered by those who suffered and still suffer under the yoke of communism.

Personally, I have often pondered that had Ronald Reagan not been elected President of the United States in 1980, the communist behemoth USSR might be standing strong today, and the Cold War might have ended with communism the victor.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan’s presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral Fellow during Leonid Brezhnev’s time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Joerg Knipprath
President Nixon Farewell Speech to White House Staff, August 9, 1974

On Thursday, August 8, 1974, a somber Richard Nixon addressed the American people in a 16-minute televised speech to announce that he planned to resign the Presidency of the United States. He expressed regret over mistakes he had made regarding the break-in at the Democratic Party offices at the Watergate complex and the aftermath of that event. He further expressed the hope that his resignation would begin to heal the political divisions the matter had exacerbated. The next day, having resigned, he boarded a helicopter and, with his family, left Washington, D.C.

Nixon had won the 1972 election against Senator George McGovern of South Dakota with over 60% of the popular vote and an electoral vote of 520-17 (one vote having gone to a third candidate). Yet less than two years after one of the most overwhelming victories in American electoral history, Nixon was politically dead. Nixon has been described as a tragic figure, in a literary sense, due to his struggle to rise to the height of political power, only to be undone when he had achieved the pinnacle of success. The cause of this astounding change of fortune has been much debated. It resulted from a confluence of factors: political, historical, and personal.

Nixon was an extraordinarily complex man. He was highly intelligent, even brilliant, yet was the perennial striver seeking to overcome, by unrelenting work, his perceived limitations. He was an accomplished politician with a keen understanding of political issues, yet socially awkward and personally insecure. He was perceived as the ultimate insider, yet, despite his efforts, was always somehow outside the “establishment,” from his school days to his years in the White House. Alienated from the social and political elites, who saw him as an arriviste, he emphasized his marginally middle-class roots and tied his political career to that “silent majority.” He could arouse intense loyalty among his supporters, yet equally intense fury among his opponents. Nixon infamously kept an “enemies list,” the only surprise of which is that it was so incomplete. Though seen by the Left as an operative of what is today colloquialized as the “Deep State,” he rightly mistrusted the bureaucracy and its departments and agencies, and preferred to rely on White House staff and hand-picked loyal individuals. Caricatured as an anti-Communist ideologue and would-be right-wing dictator, Nixon was a consummately pragmatic politician who was seen by many supporters of Senator Barry Goldwater and Governor Ronald Reagan as insufficiently in line with their world view.

The Watergate burglary and attempted bugging of the Democratic Party offices in June, 1972, and investigations by the FBI and the General Accounting Office that autumn into campaign finance irregularities by the Committee to Re-Elect the President (given the unfortunate acronym CREEP by Nixon’s opponents) initially had no impact on Nixon and his comprehensive political victory. In January, 1973, the trial of the operatives before federal judge John Sirica in Washington, D.C., revealed possible White House involvement. This piqued the interest of the press, never Nixon’s friends. These revelations, now spread before the public, caused the Democratic Senate majority to appoint a select committee under Senator Sam Ervin of North Carolina for further investigation. Pursuant to an arrangement with Senate Democrats, Attorney General Elliot Richardson named Democrat Archibald Cox, a Harvard law professor and former Kennedy administration solicitor general, as special prosecutor.

Cox’s efforts uncovered a series of missteps by Nixon, as well as actions that were viewed as more seriously corrupt and potentially criminal. Some of these sound rather tame by today’s standards. Others are more problematic. Among the former were allegations that Nixon had falsely backdated a gift of presidential papers to the National Archives to get a tax credit, not unlike Bill Clinton’s generously overestimated gift of three pairs of his underwear in 1986 for an itemized charitable tax deduction. Another was that he was inexplicably careless in preparing his tax return. Given the many retroactively amended tax returns and campaign finance forms filed by politicians, such as the Clintons and their eponymous foundations, this, too, seems of slight import. More significant was the allegation that he had used the Internal Revenue Service to attack political enemies. Nixon certainly considered that, although it is not shown that any such actions were undertaken. Another serious charge was that Nixon had set up a secret structure to engage in political intelligence and espionage.

The keystone to the impeachment was the discovery of a secret taping system in the Oval Office that showed that Nixon had participated in a cover-up of the burglary and obstructed the investigation. Nixon, always self-reflective and sensitive to his position in history, had set up the system to provide a clear record of conversations within the Oval Office for his anticipated post-Presidency memoirs. It proved to be his downfall. When Cox became aware of the system, he sought a subpoena to obtain nine of the tapes in July, 1973. Nixon refused, citing executive privilege relating to confidential communications. That strategy had worked when the Senate had demanded the tapes; Judge Sirica had agreed with Nixon. But Judge Sirica rejected that argument when Cox sought the information, a decision upheld 5-2 by the federal Circuit Court for the District of Columbia.

Nixon then offered to give Cox authenticated summaries of the nine tapes. Cox refused. After a further clash between the President and the special prosecutor, Nixon ordered Attorney General Richardson to remove Cox. Both Richardson and Deputy Attorney General William Ruckelshaus refused and resigned. However, by agreement between these two and Solicitor General Robert Bork, Cox was removed by Bork in his new capacity as Acting Attorney General. It was well within Nixon’s constitutional powers as head of the unitary executive to fire his subordinates. But what the President is constitutionally authorized to do is not the same as what the President politically should do. The reaction of the political, academic, and media elites to the “Saturday Night Massacre” was overwhelmingly negative, and precipitated the first serious effort at impeaching Nixon.

A new special prosecutor, Democrat Leon Jaworski, was appointed by Bork in consultation with Congress. The agreement among the three parties was that, though Jaworski would operate within the Justice Department, he could not be removed except for specified causes and with notification to Congress. Jaworski also was specifically authorized to contest in court any claim of executive privilege. When Jaworski again sought various specific tapes, and Nixon again claimed executive privilege, Jaworski eventually took the case to the Supreme Court. On July 24, 1974, Chief Justice Warren Burger’s opinion in the 8-0 decision in United States v. Nixon (William Rehnquist, a Nixon appointee who had worked in the White House, had recused himself) overrode the executive privilege claim. The justices also rejected the argument that this was a political intra-branch dispute between the President and a subordinate that rendered the matter non-justiciable, that is, beyond the competence of the federal courts.

At the same time, in July, 1974, with bipartisan support, the House Judiciary Committee voted out three articles of impeachment. Article I charged obstruction of justice regarding the Watergate burglary. Article II charged him with violating the Constitutional rights of citizens and “contravening the laws governing agencies of the executive branch,” which dealt with Nixon’s alleged attempted misuse of the IRS, and with his misuse of the FBI and CIA. Article III charged Nixon with ignoring congressional subpoenas, which sounds remarkably like an attempt to obstruct Congress, a dubious ground for impeachment. Two other proposed articles were rejected. When the Supreme Court ordered Nixon to release the tapes, the tape of June 23, 1972, showed obstruction of justice: the President instructing his staff to use the CIA to end the Watergate investigation. The tape was released on August 5. Nixon was then visited by a delegation of Republican Representatives and Senators who informed him of the near-certainty of impeachment by the House and of his extremely tenuous position to avoid conviction by the Senate. The situation having become politically hopeless, Nixon resigned, making his resignation formal on Friday, August 9, 1974.

The Watergate affair produced several constitutional controversies. First, the Supreme Court addressed executive privilege to withhold confidential information. Nixon’s opponents had claimed that the executive lacked such a privilege because the Constitution did not address it, unlike the privilege against self-incrimination. Relying on consistent historical practice going back to the Washington administration, the Court found instead that such a privilege is inherent in the separation of powers and necessary to protect the President in exercising the executive power and others granted under Article II of the Constitution. However, unless the matter involves state secrets, that privilege could be overridden by a court, if warranted in a criminal case, and the “presumptively privileged” information ordered released. While the Court did not directly consider the matter, other courts have agreed with Judge Sirica that, based on long practice, the privilege will be upheld if Congress seeks such confidential information. The matter then is a political question, not one for courts to address at all.

Another controversy arose over the President’s long-recognized power to fire executive branch subordinates without restriction by Congress. This is essential to the President’s position as head of the executive branch. For example, the President has inherent constitutional authority to fire ambassadors as Barack Obama and Donald Trump did, or to remove U.S. Attorneys, as Bill Clinton and George W. Bush did. Jaworski’s appointment under the agreement not to remove him except for specified cause interfered with that power, yet the Court upheld that limitation in the Nixon case.

After Watergate, in 1978, Congress passed the Ethics in Government Act that provided a broad statutory basis for the appointment of special prosecutors outside the normal structure of the Justice Department. Such prosecutors, too, could not be removed except for specified causes. In Morrison v. Olson, in 1988, the Supreme Court, by 7-1, upheld this incursion on executive independence over the lone dissent of Justice Antonin Scalia. At least as to inferior executive officers, which the Court found special prosecutors to be, Congress could limit the President’s power to remove, as long as the limitation did not interfere unduly with the President’s control over the executive branch. The opinion, by Chief Justice Rehnquist, was in many ways risible from a constitutional perspective, but it upheld a law that became the starting point for a number of highly-partisan and politically-motivated investigations into actions taken by Presidents Ronald Reagan, George H.W. Bush, and Bill Clinton, and by their subordinates. Only once the last of these Presidents was being subjected to such oversight did opposition to the law become sufficiently bipartisan to prevent its reenactment.

The impeachment proceeding itself rekindled the debate over the meaning of the substantive grounds for such an extraordinary interference with the democratic process. While treason is defined in the Constitution and bribery is an old and well-litigated criminal law concept, the third basis, “high crimes and misdemeanors,” is open to considerable latitude of meaning. One view, taken by defenders of the official under investigation, is that this phrase requires conduct amounting to a crime, an “indictable offense.” The position of the party pursuing impeachment, Republican or Democrat, has been that the phrase more broadly includes unfitness for office and reaches conduct which is not formally criminal but which shows gross corruption or a threat to the constitutional order. The Framers’ understanding appears to have been closer to the latter, although the much greater number and scope of criminal laws today may have narrowed the difference. However, what the Framers considered sufficiently serious impeachable corruption likely was more substantial than what has been proffered recently. They were acutely aware of the potential for merely political retaliation and similar partisan mischief that a low standard for impeachment would produce. These and other questions surrounding the rather sparse impeachment provisions in the Constitution have not been resolved. They continue to be, foremost, political matters addressed on a case-by-case basis, as demonstrated over the past twelve months.

As has been often observed, Nixon’s predicament was not entirely of his own making. In one sense, he was the victim of political trends that signified a reaction against what had come to be termed the “Imperial Presidency.” It had long been part of the progressive political faith that there was “nothing to fear but fear itself” as far as broadly exercised executive power was concerned, as long as the presidential tribune using “a pen and a phone” was subject to free elections. Actions routinely done by Presidents such as Franklin Roosevelt, Harry Truman, and Nixon’s predecessor, Lyndon Johnson, now became evidence of executive overreach. For example, those presidents, as well as others going back to at least Thomas Jefferson, had impounded appropriated funds, often to maintain fiscal discipline over profligate Congresses. Nixon claimed that his constitutional duty “to take care that the laws be faithfully executed” was also a power that allowed him to exercise discretion as to which laws to enforce, not just how to enforce them. In response, the Democratic Congress passed the Congressional Budget and Impoundment Control Act of 1974. The Supreme Court in Train v. City of New York then rejected presidential impoundment of appropriated funds, limiting the President’s authority to impound to whatever extent was permitted by Congress in statutory language.

In military matters, the elites’ reaction against the Vietnam War, shaped by negative press coverage and antiwar demonstrations on elite college campuses, gradually eroded popular support. The brunt of the responsibility for the vast expansion of the war lay with Lyndon Johnson and the manipulative use of a supposed North Vietnamese naval attack on an American destroyer, which resulted in the Gulf of Tonkin Resolution. At a time when Nixon had ended the military draft, drastically reduced American troop numbers in Vietnam, and agreed to the Paris Peace Accords signed at the end of January, 1973, Congress enacted the War Powers Resolution of 1973 over Nixon’s veto. The law limited the President’s power to engage in military hostilities to specified situations, in the absence of a formal declaration of war. It also basically required pre-action consultation with Congress for any use of American troops and a withdrawal of such troops unless Congress approved within sixty days. It also, somewhat mystifyingly, purported to disclaim any attempt to limit the President’s war powers. The Resolution has been less than successful in curbing presidential discretion in using the military and remains largely symbolic.

Another restriction on presidential authority occurred through the Supreme Court. In United States v. United States District Court in 1972, the Supreme Court rejected the administration’s program of warrantless electronic surveillance for domestic security. This was connected to the Huston Plan of warrantless searches of mail and other communications of Americans. Warrantless wiretaps were placed on some members of the National Security Council and several journalists. Not touched by the Court was the President’s authority to conduct warrantless electronic surveillance of foreigners or their agents for national security-related information gathering. On the latter, Congress nevertheless in 1978 passed the Foreign Intelligence Surveillance Act, which, ironically, has expanded the President’s power in that area. Because it can be applied to communications of Americans deemed agents of a foreign government, FISA, along with the President’s inherent constitutional powers regarding foreign intelligence-gathering, can be used to circumvent the Supreme Court’s decision. It has even been used in the last several years to target the campaign of then-candidate Donald Trump.

Nixon’s use of the “pocket veto” and his imposition of price controls also triggered resentment and reaction in Congress, although once again his actions were hardly novel. None of these various executive policies, by themselves, was politically fatal. Rather, they demonstrate the political climate in which what otherwise was just another election-year dirty trick, the Watergate Hotel burglary, could result in the historically extraordinary resignation from office of a President who had not long before received the approval of a large majority of American voters. Nixon’s contemplated use of the IRS to audit “enemies” was no worse than the Obama Administration’s actual use of the IRS to throttle conservative groups’ tax exemptions. His support of warrantless wiretaps under his claimed constitutional authority to target suspected domestic troublemakers, while unconstitutional, is hardly more troubling than Obama’s use of the FBI and CIA to manipulate the FISA system into spying on a presidential candidate to assist his opponent. Nixon’s wiretapping of NSC officials and several journalists is not dissimilar to Obama’s search of phone records of various Associated Press reporters and spying on Fox News’s James Rosen. Obama’s FBI also accused Rosen of having violated the Espionage Act. The Obama administration brought more than twice as many prosecutions against leakers, including under the Espionage Act, as all prior Presidents combined. That was in his first term.

There was another, shadowy factor at work. Nixon, the outsider, offended the political and media elites. Nixon himself disliked the bureaucracy, which had increased significantly over the previous generation through the New Deal’s “alphabet agencies” and the demands of World War II and the Cold War. The Johnson Administration’s Great Society programs sped up this growth. The agencies were staffed at the upper levels with left-leaning members of the bureaucratic elite. Nixon’s relationship with the press was poisoned not only by their class-based disdain for him, but by the constant flow of leaks from government insiders who opposed him. Nixon tried to counteract that by greatly expanding the White House offices and staffing them with members who he believed were personally loyal to him. His reliance on those advisers rather than on the advice of entrenched establishment policy-makers threatened the political clout and personal self-esteem of the latter. What has been called Nixon’s plebiscitary style of executive government, relying on the approval of the voters rather than on that of the elite administrative cadre, also was a threat to the existing order. As Senator Charles Schumer warned President Trump in early January, 2017, about the intelligence “community,” “Let me tell you: You take on the intelligence community — they have six ways from Sunday at getting back at you.” Nixon, too, lived that reality.

Once out of office, Nixon generally stayed out of the limelight. The strategy worked well. As seems to be the custom for Republican presidents, once they are “former,” many in the press and among other “right-thinking people” came to see him as the wise elder statesman, much to be preferred to the ignorant cowboy (and dictator) Ronald Reagan. Who, of course, then came to be preferred to the ignorant cowboy (and dictator) George W. Bush. Who, of course, then came to be preferred to the ignorant reality television personality (and dictator) Donald Trump. Thus, the circle of political life continues. It ended for Nixon on April 22, 1994. His funeral five days later was attended by all living Presidents. Tens of thousands of mourners paid their respects.

The parallel to recent events should be obvious. That said, a comparison between the seriousness of the Watergate Affair, which resulted in President Nixon’s resignation, and the Speaker Nancy Pelosi/Congressman Adam Schiff/Congressman Jerry Nadler impeachment of President Trump brings to mind what may be Karl Marx’s only valuable observation, that historic facts appear twice, “the first time as tragedy, the second time as farce.”

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:


Guest Essayist: Danny de Gracia

The story of how men first set foot on the Moon one fateful day on July 20, 1969, will always be enshrined as one of America’s greatest contributions to history. When the first humans looked upwards to the night sky thousands of years ago, they must have marveled at the pale Moon looming in the heavens, set against the backdrop of countless stars. Inspired by the skies, and driven by a natural desire for exploration, humans must have wondered what was out there, and if it would be somehow possible to ever explore the distant heavens above.

Indeed, even the Bible tells us that the patriarch of faith, Abraham, was told by God in Genesis 15:5, “Look now toward heaven, and count the stars if you are able to number them. So shall your descendants be.”

The word given to Abraham may have been more than just an impressive way of promising an elderly man way past the age of conception that he would bear many children; it seems more like an invitation that mankind’s destiny belongs not merely on Earth, but among the stars of the limitless cosmos, as a spacefaring civilization.

Early Beginnings

For most of mankind’s history, space travel was relegated to wild myths, hopeless dreams, and fanciful science fiction. The first hurdle in reaching for the stars would be mastering staying aloft in Earth’s atmosphere, which by itself was no easy task. Observing birds, humans for millennia had tried to emulate organic wings with little to no success, not truly understanding the science of lift or the physics of flight.

Like Icarus of Greek mythology, the 11th century English Benedictine monk Eilmer of Malmesbury attempted to foray into the skies by fashioning wings as a kind of primitive glider, but he only succeeded in flying a short distance before he crashed, breaking his legs. Later, Jean-François Pilâtre de Rozier would give mankind a critical first in flight when he took off aboard the Montgolfier hot air balloon in 1783.

Ironically, it would not be benevolent inspiration that would free mankind from his millennia-old ties to the ground beneath his feet, but the pressing demands of war and increasing militarization of the planet. As the Industrial Age began, so also arose the age of warfare, and men knew from countless battles that whoever held the high ground could defend any stronghold or defeat any army. And what greater high ground could afford victory, than the heavens themselves?

Once balloons had been proven an effective and stable means of flight, militaries began to use them as spotting platforms to see enemy movements from a distance and provide accurate targeting for artillery. Notably, during the American Civil War, balloons made for a kind of early air force for both the Union and the Confederacy.

The Wright Brothers at last mastered the art of controlled and powered flight in a fixed-wing aircraft on December 17, 1903. Little more than a decade after the invention of the airplane, the First World War erupted, and aircraft and blimps became crucial weapons in deciding the outcome of battles.

Germany’s defeat, which was seen by many Germans as something that should not have happened and should never happen again, stirred people like the former army lance corporal Adolf Hitler to pursue more advanced aerial weapons as a means of establishing military superiority.

Even as propeller planes were seen as the ultimate form of aircraft by most militaries of the time, in the late 1930s, German engineers Eugen Sänger and Irene Bredt were already envisioning spacecraft to attack enemies from orbit. In 1941, they conceived plans for the Silbervogel (“Silver Bird”), a rocket-powered space bomber that could take off into low Earth orbit, descend, and bounce off the outer atmosphere like a tossed stone skipping across a pond to reach an enemy target even half a world away.

Fortunately for the United States, the Silbervogel would never be produced, but other German scientists would be working on wonder weapons of their own, one of them being Wernher von Braun, an engineer who had childhood dreams of landing men on the Moon with rockets.

Working at the Peenemünde Army Research Center, von Braun infamously gave Nazi Germany the power to use V-2 rockets, a kind of early ballistic missile that could deliver a high-explosive warhead hundreds of miles away. One such V-2 rocket, MW 18014, test launched on June 20, 1944, became the first man-made object to cross above the Kármán line – Earth’s atmospheric edge of space – when it reached an apogee of 176 kilometers in flight.

While these weapons did not win the war for Nazi Germany, they aroused the interest of both the United States and the Soviets, and as the victorious Allies reclaimed Europe, a frantic effort to capture German scientists for their aerospace knowledge would become the prelude to a coming Cold War.

The Nuclear Age and Space

The use of the Fat Man and Little Boy atomic bombs against Japan brought to light a realization among planners in both the United States and the Soviet Union: The next battleground for control of the planet would be space. Between the difficulty in intercepting weapons like the V-2 rocket, and the destructive capability of the atom bomb, the nations that emerged victorious in WWII all saw potential in combining these technologies together.

At the end of WWII, both the Soviet Union and the United States brought back to their countries numerous German scientists and unused V-2 rockets for the purposes of creating their own next-generation of missiles.

The early V-2 rockets developed by von Braun for Nazi Germany were primitive and inaccurate weapons, but they had demonstrated the capability to carry objects, such as an explosive warhead, in high ballistic arcs over the earth. Early atomic bombs were bulky and extremely heavy, which meant that in order to deliver these weapons of mass destruction across space, larger rockets would need to be developed.

It is no accident then that the early space launchers of both the Soviet Union and the United States were, in fact, converted intercontinental ballistic missiles (or ICBMs) meant for delivering nuclear payloads. The first successful nuclear ICBM was the Soviet R-7 Semyorka (NATO reporting name SS-6 “Sapwood”), which would be the basis for the modified rocket 8K71PS No. M1-1PS, that sent Sputnik, the world’s first artificial satellite, into orbit on October 4, 1957.

The success of the Soviets in putting the first satellite into orbit awed the entire world, but was disturbing to President Dwight D. Eisenhower’s White House, because it was not lost on the U.S. military that this accomplishment was more or less a demonstration of nuclear delivery capabilities by the Russians.

And while the United States in 1957 had an overwhelming superiority in nuclear weapons numbers relative to the Soviets, the nuclear doctrine of the early Cold War was structured around a bluff of “massive retaliation,” created by Secretary of State John Foster Dulles, that was intended to minimize the proliferation of new conflicts – including in space – by threatening atomic use as the default response.

“If an enemy could pick his time and place and method of warfare,” Dulles had said in a dinner before the Council on Foreign Relations in January 1954, “and if our policy was to remain the traditional one of meeting aggression by direct and local opposition, then we needed to be ready to fight in the Arctic and in the Tropics; in Asia, the Near East; and in Europe; by sea, by land, and by air; with old weapons, and with new weapons.”

A number of terrifying initial conclusions emerged from the success of Sputnik. First, it showed that the Soviets had reached the ultimate high ground before U.S./NATO forces, and that their future ICBMs could potentially put any target in the world at risk for nuclear bombardment.

To put things into perspective, a jet bomber of the time, like the American B-47, B-52, or B-58, took upwards of eight hours cruising through the stratosphere to strike a target from its airbase. But an ICBM, which can reach speeds of Mach 23 or faster in its terminal descent, can hit any target in the world in 35 minutes or less from launch. This destabilizing development whittled down the U.S. advantage, as it gave the Soviets the possibility of firing first in a surprise attack to “decapitate” any superior American or NATO forces that might be used against them.

The second, and more alarming, perception created by the Soviet entry into space was that America had dropped the ball and been left behind, not only technologically but historically. In the Soviet Union, Nikita Khrushchev sought to test the resolve of both the United States and the NATO alliance by showcasing novel technological accomplishments, such as the Sputnik launch, to cast a long shadow over Western democracies and to imply that communism would be the wave of the future.

In a flurry of briefings and technical research studies that followed the Sputnik orbit, von Braun and other scientists in the U.S. determined that while the Soviets had beaten the West into orbit, the engineering and industrial capabilities of America would ultimately make it feasible for the U.S. over the long term to accomplish a greater feat, in which a man could be landed on the Moon.

Texas Senator Lyndon B. Johnson, later to be vice president to the young, idealistic John F. Kennedy, would be one of the staunchest drivers behind the scenes in pushing for America’s landing on the Moon. The early years of the space race were tough to endure, as NASA, America’s fledgling new civilian space agency, seemed – at least in public – to always be one step behind the Soviets in accomplishing space firsts.

Johnson, a rough-around-the-edges, technocratic leader who saw the necessity of preventing a world “going to sleep by the light of a communist Moon,” pushed to keep America in the space fight even when it appeared, to some, as though American space rockets “always seemed to blow up.” His leadership stiffened the Kennedy administration’s resolve to stay the course, and arguably ensured that America would be the first and only nation to land men on the Moon.

The Soviets would strike another blow against America when, on April 12, 1961, cosmonaut Yuri Gagarin became the first human in space, making a 108-minute orbital flight launched on the Vostok-K 8K72K rocket, another R-7 ICBM derivative.

But a month later, on May 5, 1961, NASA began to catch up with the Soviets when Alan Shepard and his Freedom 7 space capsule successfully made it into space, brought aloft by the Mercury-Redstone rocket, which was adapted from the U.S. Army’s PGM-11 short-range nuclear ballistic missile.

Each manned launch and counter-launch between the two superpowers was more than just a demonstration of scientific discovery; each was a suggestion of nuclear launch capability – specifically, the throw-weight of either country’s missiles – and a thinly veiled competition over who, at any given point in time, was winning the Cold War.

International Politics and Space

President Kennedy, speaking at Rice University on September 12, 1962, just one month before the Cuban Missile Crisis, hinted to the world that the Soviet advantage in space was not quite what it seemed to be, and that perhaps some of their “less public” space launches had been failures. Promising to land men on the Moon before the decade ended, Kennedy’s “Moon speech” at Rice has been popularly remembered as the singular moment when America decided to come together and achieve the impossible, but this is not the whole story.

In truth, ten days after giving the Moon speech, Kennedy privately reached out to Khrushchev pleading with him to make the landing a joint affair, only to be rebuffed, and then to find himself on October 14 of that same year ambushed by the Soviets with offensive nuclear missiles pointed at the U.S. from Cuba.

Kennedy thought himself to be a highly persuasive, flexible leader who could peaceably talk others into agreeing to make political changes, which set him at odds with the more hard-nosed, realpolitik-minded members of both his administration and the U.S. military. It also invited testing of his mettle by the salty Khrushchev, who saw the youthful American president – “Profiles in Courage” aside – as inexperienced, pliable, and a pushover.

Still, while the Moon race was a crucial part of keeping America and her allies encouraged amidst the ever-chilling Cold War, the Cuban Missile Crisis deeply shook Kennedy and brought him face-to-face with the possibility of a nuclear apocalypse.

Kennedy had already nearly gone to nuclear war once before, during the now largely forgotten Berlin Crisis of 1961, when his special advisor to West Berlin, Lucius D. Clay, responded to East German harassment of American diplomatic staff with aggressive military maneuvers, but the Cuba standoff proved one straw too many for the idealistic JFK.

Fearing the escalating arms race, experiencing sticker shock over the growing cost of the Moon race he had committed America to, and ultimately wanting to better relations with the Soviet Union, Kennedy a year later, on September 20, 1963, before the United Nations, dialed back his public Moon rhetoric and revisited his private offer to Khrushchev, asking, albeit rhetorically, “Why, therefore, should man’s first flight to the Moon be a matter of national competition?”

The implications of a joint U.S.-Soviet Moon landing may have tickled the ears of world leaders throughout the General Assembly, but behind the scenes, it agitated both Democrats and Republicans alike, who not-so-secretly began to wonder if Kennedy was “soft” on communism.

Even Kennedy’s remarks to the press over the developing conflict in Vietnam during his first year as president were especially telling about his worldview amidst the arms race and space race of the Cold War: “But we happen to live – because of the ingenuity of science and man’s own inability to control his relationships with one another – we happen to live in the most dangerous time in the history of the human race.”

Kennedy’s handling of the Bay of Pigs, Berlin, the Cuban Missile Crisis, and his more idealistic approaches to the openly belligerent Soviet Union began to shake the political establishment, and the possibility of ceding the Moon to a kind of squishy, joint participation trophy embittered those who saw an American landing as a crucial refutation of Soviet advances.

JFK was an undeniably formidable orator, but in the halls of power he was developing a reputation for eroding America’s post-WWII advantages as a military superpower and leader of the international system. His rhetoric made some nervous, and suggestions of calling off an American Moon landing put a question mark over the future of the West for some.

Again, the Moon race wasn’t just about landing men on the Moon; it was about showcasing the might of one superpower over the other, and Kennedy’s attempts to roll back America’s commitment to space in favor of acquiescing to a Moon shared with the Soviets could have potentially cost the West the outcome of the Cold War.

As far back as 1961, NASA had already sought the assistance of the traditionally military-oriented National Reconnaissance Office (NRO) to gain access to top secret, exotic spy technologies which would assist it in surveying the Moon for future landings, and would later enter into memorandums of agreement with the NRO, Department of Defense, and Central Intelligence Agency. This is important, because the crossover between civilian spaceflight and military/intelligence space exploitation reflects how the space race served strategic goals rather than purely scientific ones.

On August 28, 1963, Secretary of Defense Robert McNamara and NASA Administrator James Webb had signed an MOA titled “DOD/CIA-NASA Agreement on NASA Reconnaissance Programs” (Document BYE-6789-63) which stated “NRO, by virtue of its capabilities in on-going reconnaissance satellite programs, has developed the necessary technology, contractor resources, and management skills to produce satisfactory equipments, and appropriate security methods to preserve these capabilities, which are currently covert and highly sensitive. The arrangement will properly match NASA requirements with NRO capabilities to perform lunar reconnaissance.”

Technology transfers also went both ways. The Gemini space capsules, developed by NASA as part of the efforts to master orbital operations such as spacewalks, orbital docking, and other aspects deemed critical to an eventual Moon landing, would even be considered by the United States Air Force for a parallel military space program on December 16, 1963. Adapting the civilian Gemini design into an alternate military version called the “Gemini-B,” the Air Force intended to put crews in orbit to a space station called the Manned Orbiting Laboratory (MOL), which would serve as a reconnaissance platform to take pictures of Soviet facilities.

While the MOL program would ultimately be canceled in its infancy by the Nixon Administration in 1969, before ever actually going online, it was yet another demonstration of the close-knit relationship between civilian and military space exploration in pursuit of the same interests.

Gold Fever at NASA

Whatever President Kennedy’s true intentions may have been moving forward on the space race, his unfortunate death two months after his UN speech, at the hands of assassin Lee Harvey Oswald in Dallas on November 22, 1963, would be seized upon as a justification by the establishment to complete the original 1962 Rice University promise of landing an American on the Moon first, before the end of the decade.

Not surprisingly, one of Johnson’s very first actions upon assuming the presidency after the death of Kennedy was to issue Executive Order 11129 on November 29, 1963, renaming NASA’s Launch Operations Center in Florida the “John F. Kennedy Space Center,” a politically adroit maneuver which ensured the space program was now seen as synonymous with the fallen president.

In world history, national icons and martyrs – even accidental or involuntary ones – are powerful devices for furthering causes that would ordinarily burn out and lose public interest if left to private opinion alone. Kennedy’s death led to a kind of “gold fever” at NASA in the drive to defeat the Soviets, and many stunning advances in space technology would be won in the aftermath of his passing.

So intense was the political pressure and organizational focus at NASA that some began to worry that corners were being cut and that there were serious issues that needed to be addressed.

On January 27, 1967, NASA conducted a “plugs out test” of its newly developed Apollo space capsule, in which launch conditions would be simulated on the launch pad with the spacecraft running on internal power. The test mission, designated AS-204, had been strongly cautioned against by the spacecraft’s manufacturer, North American Aviation, because it would take place at sea level with pure oxygen, at a pressure dangerously higher than normal atmospheric pressure. Nevertheless, NASA proceeded with the test.

Astronauts Roger B. Chaffee, Virgil “Gus” Grissom, and Ed White, who crewed the test mission, perished when an electrical malfunction sparked a fire that spread rapidly in the pure-oxygen atmosphere of the capsule. Their deaths threatened to bring the entire U.S. space program to a screeching halt, but NASA was able to rise above the tragedy, adding the loss of its astronauts as yet another compelling reason for making it to the Moon before the decade ended.

On January 30, 1967, the Monday that followed the “Apollo 1” fire, NASA flight director Eugene F. Kranz gathered his staff together and gave an impromptu speech that would change the space agency forever.

“Spaceflight will never tolerate carelessness, incapacity, and neglect,” he began. “Somewhere, somehow, we screwed up. It could have been in design, build, or test. Whatever it was, we should have caught it.”

He would go on to say, “We did not do our job. We were rolling the dice, hoping that things would come together by launch day, when in our hearts we knew it would be a miracle. We were pushing the schedule and betting that the Cape would slip before we did. From this day forward, Flight Control will be known by two words: Tough and Competent. ‘Tough’ means we are forever accountable for what we do or what we fail to do. We will never again compromise our responsibilities. Every time we walk into Mission Control, we will know what we stand for.”

“‘Competent’ means we will never take anything for granted. We will never be found short in our knowledge and in our skills; Mission Control will be perfect. When you leave this meeting today, you will go back to your office and the first thing you will do there is to write ‘Tough and Competent’ on your blackboards. It will never be erased. Each day when you enter the room, these words will remind you of the price paid by Grissom, White, and Chaffee. These words are the price of admission to the ranks of Mission Control.”

And “tough and competent” would be exactly what NASA became in the days, months, and years that followed. In the wake of the Apollo fire, the U.S. space agency set exacting standards of professionalism, quality, and safety, even as it continued to grow in mastery of the technology and skills necessary to make it to the Moon.

America’s Finest Hour

Unbeknownst to U.S. intelligence agencies, the Soviets had already fallen far behind in their own Moon program, and their N1 rocket, which was meant to compete with the U.S. Saturn V rocket, was by no means ready for manned use. Unlike NASA, the Soviet space program had become completely dependent on a volatile combination of personalities and politics, which bottlenecked innovation, slowed necessary changes, and, in the end, made it impossible to adapt appropriately in the race for the Moon.

On December 21, 1968, the U.S. leapt into first place in the space race when Apollo 8 entered history as the first crewed spacecraft to leave Earth orbit, circle the Moon, and return. Having combined decades of military and civilian science, overcome terrible tragedies, and successfully applied lessons learned into achievements won, NASA could at last go on to attain mankind’s oldest dream of landing on the Moon with the Apollo 11 mission, launched on July 16, 1969 from Kennedy Space Center launch complex LC-39A.

Astronauts Neil A. Armstrong, Edwin “Buzz” E. Aldrin Jr., and Michael Collins reached lunar orbit on July 19, where they surveyed their target landing site at the Sea of Tranquility and began preparations for separation from the Command Module, Columbia, and landing in the Lunar Module, Eagle.

On Sunday, July 20, Armstrong and Aldrin left Collins behind to pilot the Apollo Command Module and began their descent to the lunar surface below. Discovering their landing area strewn with large boulders, Armstrong took the Lunar Module out of computer control and manually steered the lander on its descent while searching for a suitable location, finding himself with a mere 50 seconds of fuel left. But at 8:17 pm, Armstrong touched down safely, declaring to a distant planet Earth, “Houston, Tranquility Base here. The Eagle has landed.”

Communion on the Moon

As if to bring humanity full circle, two hours after landing on the surface of the Moon, Aldrin, a Presbyterian, quietly and unknown to NASA back on Earth, would remove from his uniform a small 3” x 5” notecard with a hand-written passage from John 15:5. Taking Communion on the Moon, Aldrin would read within the Lunar Module, “As Jesus said: I am the Vine, you are the branches. Whoever remains in Me, and I in Him, will bear much fruit; for you can do nothing without Me.”

Abraham, the Bible’s “father of faith,” could almost be said to have been honored by Aldrin’s confession of faith. In a sense, the landing of a believing astronaut on a distant heavenly object was like a partial fulfillment of the prophecy of Genesis 15:5, in which Abraham’s descendants would be like the stars in the sky.

Later, when Armstrong left the Lunar Module and climbed down the ladder to the Moon’s dusty surface, he radioed back to Earth, “That’s one small step for a man; one giant leap for mankind.” Due to a 35-millisecond interruption in the signal, listeners did not hear “one small step for a man” but instead “one small step for man,” leaving the entire world with the impression that the NASA astronauts had won a victory not just for America, but for humankind as a whole.

With Old Glory, the flag of the United States of America, planted in the soft lunar dust, the Moon race had officially been won, and the Soviets, having lost the initiative, scaled back their space program to focus on other objectives, such as building space stations and attempting to land probes on other planets. The Soviets not only lost the Moon race; their expensive investment, which produced no propaganda success, would ultimately help cost them the Cold War as well.

America would go on to send men to the Moon a total of six times, with twelve different astronauts, between July 20, 1969 (Apollo 11) and December 11, 1972 (Apollo 17). Winning the Moon race had the effect of assuring the planet that the Western world would not be overtaken by the communist bloc, and many useful technologies developed for the U.S. civilian space program or for military aerospace applications would later find commercial, everyday use.

While other nations, including Russia, the European Union, Japan, India, China, Luxembourg, and Israel, have all sent unmanned missions to the Moon, to this date only the United States holds the distinction of having placed humans on the Moon.

Someday, hopefully soon, humans will return once again to the Moon, and even travel from there to distant planets, or even distant stars. But no matter how far humanity travels, the enduring legacy of July 20, 1969 will be that freedom won the 20th century because America, not the Soviets, won the Moon race.

Landing on the Moon was a global victory for humanity, but getting there first will forever be a uniquely American accomplishment.

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologian, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of American Kiss: A Collection of Short Stories.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams

On March 12, 1947, President Harry Truman delivered a speech advocating assistance to Greece and Turkey to resist communism as part of the early Cold War against the Soviet Union. The speech enunciated the Truman Doctrine, which marked a departure from the country’s traditional foreign policy toward a more expansive role in global affairs.

Truman said, “I believe that it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Protecting the free world against communist expansion became the basis for the policy of Cold War containment.

The United States fought a major war in Korea in the early 1950s to halt the expansion of communism in Asia, especially after the loss of China in 1949. Although President Dwight D. Eisenhower had resisted the French appeal to intervene in Vietnam at Dien Bien Phu in 1954, the United States gradually increased its commitment and sent thousands of military advisers and billions of dollars in financial assistance over the next decade.

In the summer of 1964, President Lyndon B. Johnson was in the midst of a presidential campaign against Barry Goldwater and pushing his Great Society legislative program through Congress. He did not want to allow foreign affairs to imperil either and downplayed increased American involvement in the war.

Administration officials were quietly considering bombing North Vietnam or sending ground troops to interdict the Viet Cong insurgency in South Vietnam. Meanwhile, the United States Navy was running covert operations in the waters off North Vietnam in the Gulf of Tonkin.

On August 2, the destroyer USS Maddox and several U.S. fighter jets from a nearby carrier exchanged fire with some North Vietnamese gunboats. The U.S. warned North Vietnam that further “unprovoked” aggression would have “grave consequences.” The USS Turner Joy was dispatched to patrol with the Maddox.

On August 4, the Maddox picked up multiple enemy radar contacts in severe weather, but no solid proof confirmed the presence of the enemy. Whatever the uncertainty surrounding the event, the administration proceeded as if a second attack had definitely occurred. It immediately ordered a retaliatory airstrike and sought a congressional authorization of force. President Johnson delivered a national television address and said, “Repeated acts of violence against the armed forces of the United States must be met… we seek no wider war.”

On August 7, Congress passed the Tonkin Gulf Resolution, which authorized the president “to take all necessary measures to repel any armed attack against the forces of the United States and to prevent further aggression.” The House passed the joint resolution unanimously, and the Senate passed it with only two dissenting votes.

The Tonkin Gulf Resolution became the basis for fighting the Vietnam War. World War II remained the last congressional declaration of war.

President Johnson had promised the electorate that he would not send “American boys to fight a war Asian boys should fight for themselves.” However, the administration escalated the war over the next several months.

On February 7, 1965, the Viet Cong launched an attack on the American airbase at Pleiku. Eight Americans were killed and more than one hundred wounded. President Johnson and Secretary of Defense Robert McNamara used the incident to expand the American commitment significantly but sought a piecemeal approach that would largely avoid a contentious public debate over American intervention.

Within a month, American ground troops were introduced into Vietnam as U.S. Marines went ashore and were stationed at Da Nang to protect an airbase there. The president soon authorized deployment of thousands more troops. In April, he approved Operation Rolling Thunder which launched a sustained bombing campaign against North Vietnam.

It did not take long for the Marines to establish offensive operations against the communists. The Marines initiated search and destroy missions to engage the Viet Cong. They fought several battles with the enemy, requiring the president to send more troops.

In April 1965, the president finally explained his justification for escalating the war, which included the Cold War commitment to the free world. He told the American people, “We fight because we must fight if we are to live in a world where every country can shape its own destiny. And only in such a world will our own freedom be finally secure.”

As a result, Johnson progressively sent more and more troops to fight in Vietnam until there were 565,000 troops in 1968. The Tet Offensive in late January 1968 was a profound shock to the American public, which had received repeated promises of progress in the war. Even though U.S. forces recovered from the initial shock and won an overwhelming military victory that effectively neutralized the Viet Cong and devastated North Vietnamese Army forces, President Johnson was ruined politically and announced he would not run for re-election. His “credibility gap” contributed to growing distrust of government and concern about an unlimited and unchecked “imperial presidency,” soon made worse by Watergate.

The Vietnam War contributed to profound division on the home front. Hundreds of thousands of Americans from across the spectrum protested American involvement in Vietnam. Young people from the New Left were at the center of teach-ins and demonstrations on college campuses across the country. The Democratic Party was shaken by internal convulsions over the war, and conservatism came to dominate American politics for a generation, culminating in the presidency of Ronald Reagan.

Eventually, more than 58,000 troops were lost in the war. The Cold War consensus on containment suffered a dislocation, and a Vietnam syndrome affected morale in the U.S. military and contributed to significant doubts about the projection of American power abroad. American confidence recovered in the 1980s as the United States won the Cold War, but policymakers have struggled to define the purposes of American foreign policy with the rise of new global challenges in the post-Cold War world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Dan Morenoff

It took almost a century for Congress and President Lyndon B. Johnson, a Democrat from Texas, to enact the Civil Rights Act of 1964, putting America back on the side of defending the equality before the law of all U.S. Citizens. That act formally made segregation illegal. It legally required states to stop applying facially neutral election laws differently depending on the race of the citizen trying to register and vote. If the Civil Rights Act of 1957 had raised expectations by showing what was now possible, the Civil Rights Act of 1964 dramatically raised expectations again, to actual equal treatment by governments.

But the defenders of segregation were not yet done. They continued to pursue the “massive resistance” to integration that emerged in the year between Brown I and Brown II.[1] They continued to refuse to register black voters, to use “literacy” tests (which tested esoteric knowledge, rather than literacy) only to deny black citizens the chance to register, and to murder those who didn’t get the message.

Jimmie Lee Jackson was one such victim of last-ditch defiance. In February 1965, Jackson, a church deacon from Marion, Alabama, joined a demonstration in favor of voting rights in his hometown; state troopers beat the marchers and shot Jackson, who died of his wounds days later. The Southern Christian Leadership Conference (in apparent coordination with the White House) responded by organizing a far larger march for voting rights, one that would cover the 54 miles from Selma, Alabama to the capitol in Montgomery. On March 7, 1965, that march reached the Edmund Pettus Bridge in Selma, where national and international television cameras captured (and broadcast into living rooms everywhere) Alabama state troopers gassing and beating unarmed demonstrators. When the SCLC committed to continuing the march, others flocked to join them. Two days later, as a federal court considered enjoining further state action against the demonstrators, a mob attacked James Reeb, a Unitarian minister from Boston who had flown in to join the march; he died of his injuries.

Johnson Returns to Congress

Less than a week later, President Johnson called Congress into a special session and opened it with a nationally televised Presidential address to a Joint Session.[2] Urging “every member of both parties, Americans of all religions and of all colors, from every section of this country” to join him in working “for the dignity of man and the destiny of democracy,” President Johnson, the heavily accented man-of-the-South that Senator Richard Russell, a Democrat from Georgia, once had connived to get into the Presidency, compared the historical “turning point” confronting the nation to other moments “in man’s unending search for freedom” including the battles of “Lexington and Concord” and the surrender at “Appomattox.” President Johnson defined the task before Congress as a “mission” that was “at once the oldest and the most basic of this country: to right wrong, to do justice, to serve man.” The President identified the core issue – that “of equal rights for American Negroes” – as one that “lay bare the secret heart of America itself[,]” a “challenge, not to our growth or abundance, or our welfare or our security, but rather to the values, and the purposes, and the meaning of our beloved nation.”

He said more. President Johnson recognized that “[t]here is no Negro problem. There is no Southern problem. There is no Northern problem. There is only an American problem. And we are met here tonight as Americans — not as Democrats or Republicans. We are met here as Americans to solve that problem.”  And still more:

“This was the first nation in the history of the world to be founded with a purpose. The great phrases of that purpose still sound in every American heart, North and South: ‘All men are created equal,’ ‘government by consent of the governed,’ ‘give me liberty or give me death.’ Well, those are not just clever words, or those are not just empty theories. In their name Americans have fought and died for two centuries, and tonight around the world they stand there as guardians of our liberty, risking their lives.

“Those words are a promise to every citizen that he shall share in the dignity of man. This dignity cannot be found in a man’s possessions; it cannot be found in his power, or in his position.  It really rests on his right to be treated as a man equal in opportunity to all others. It says that he shall share in freedom, he shall choose his leaders, educate his children, provide for his family according to his ability and his merits as a human being. To apply any other test – to deny a man his hopes because of his color, or race, or his religion, or the place of his birth is not only to do injustice, it is to deny America and to dishonor the dead who gave their lives for American freedom.

“Every American citizen must have an equal right to vote.

“There is no reason which can excuse the denial of that right.  There is no duty which weighs more heavily on us than the duty we have to ensure that right.

“Yet the harsh fact is that in many places in this country men and women are kept from voting simply because they are Negroes. Every device of which human ingenuity is capable has been used to deny this right. The Negro citizen may go to register only to be told that the day is wrong, or the hour is late, or the official in charge is absent. And if he persists, and if he manages to present himself to the registrar, he may be disqualified because he did not spell out his middle name or because he abbreviated a word on the application. And if he manages to fill out an application, he is given a test. The registrar is the sole judge of whether he passes this test. He may be asked to recite the entire Constitution, or explain the most complex provisions of State law. And even a college degree cannot be used to prove that he can read and write.

“For the fact is that the only way to pass these barriers is to show a white skin. Experience has clearly shown that the existing process of law cannot overcome systematic and ingenious discrimination. No law that we now have on the books – and I have helped to put three of them there – can ensure the right to vote when local officials are determined to deny it. In such a case our duty must be clear to all of us. The Constitution says that no person shall be kept from voting because of his race or his color. We have all sworn an oath before God to support and to defend that Constitution. We must now act in obedience to that oath.

“We cannot, we must not, refuse to protect the right of every American to vote in every election that he may desire to participate in. And we ought not, and we cannot, and we must not wait another eight months before we get a bill. We have already waited a hundred years and more, and the time for waiting is gone.

“But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and State of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too.  Because it’s not just Negroes, but really it’s all of us, who must overcome the crippling legacy of bigotry and injustice.

“And we shall overcome.

“The real hero of this struggle is the American Negro. His actions and protests, his courage to risk safety and even to risk his life, have awakened the conscience of this nation. His demonstrations have been designed to call attention to injustice, designed to provoke change, designed to stir reform.  He has called upon us to make good the promise of America.  And who among us can say that we would have made the same progress were it not for his persistent bravery, and his faith in American democracy.

“For at the real heart of [the] battle for equality is a deep[-]seated belief in the democratic process. Equality depends not on the force of arms or tear gas but depends upon the force of moral right; not on recourse to violence but on respect for law and order.

“And there have been many pressures upon your President and there will be others as the days come and go. But I pledge you tonight that we intend to fight this battle where it should be fought – in the courts, and in the Congress, and in the hearts of men.”

The Passage and Success of the Voting Rights Act

Congress made good on the President’s promises and fulfilled its oath. The Voting Rights Act, the crowning achievement of the Civil Rights Movement, was signed into law in August 1965, less than five months after those bloody events in Selma.

The VRA would allow individuals to sue in federal court when their voting rights were denied. It would allow the Department of Justice to do the same. And, recognizing that “voting discrimination … on a pervasive scale” justified an “uncommon exercise of congressional power[,]” despite the attendant “substantial federalism costs[,]” it required certain states and localities, for a limited time, to obtain the approval (or “pre-clearance”) of either DOJ or a federal court sitting in Washington, DC before making any alteration to their voting laws, from registration requirements to the location of polling places.[3]

And it worked.

The same Alabama Governor and Democrat, George Wallace, who (after losing an earlier race for the office) had promised himself never to be “out-segged” again and who, on taking office in 1963, had proclaimed “segregation today, segregation tomorrow, segregation forever[!]” would win the governorship again in 1982 by seeking and obtaining the majority support of Alabama’s African Americans. By 2013, “African-American voter turnout exceeded white voter turnout in five of the six States originally covered by [the pre-clearance requirement], with a gap in the sixth State of less than one half of one percent;”[4] the percentage of preclearance submissions drawing DOJ objections had dropped about 100-fold between the first decade under pre-clearance and 2006.[5]

At long last, with only occasional exceptions (themselves addressed through litigation under the VRA), American elections were held consistent with the requirements of the Constitution and the equality before the law of all U.S. Citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Southern states might now be required by law to integrate their public schools, but, by and large, they didn’t yet do so. That would follow around 1970, when a pair of events forced the issue: (a) a University of Southern California football team led by fullback Sam Cunningham drubbed the University of Alabama in the Crimson Tide’s 1970 home opener – allowing Alabama Coach Bear Bryant to finally convince Alabama Governor George Wallace that the state must choose between having competitive football or segregated football; and (b) President Nixon quietly confronted the Southern governments that had supported his election with the conclusion of the American intelligence community that their failure to integrate was costing America the Cold War – they had to decide whether they hated their black neighbors more than they hated the godless Communists. However ironically, what finally killed Jim Crow was a love of football and a hatred of Marxism.

[2] See,

[3] Shelby County v. Holder, 570 U.S. 529, 133 S.Ct. 2612, 2620 and 2624 (2013) (each citing South Carolina v. Katzenbach, 383 U.S. 301, 308 and 334 (1966)); and at 2621 (citing Northwest Austin Municipal Util. Dist. No. One v. Holder, 557 U.S. 193, 202-03 (2009)), respectively.

[4] Id. at 2626.

[5] Id.

Guest Essayist: Dan Morenoff

For a decade after the Civil War, the federal government sought to make good its promises and protect the rights of the liberated as American citizens. Most critically, in the Civil Rights Act of 1866, Congress created U.S. Citizenship and, in the Civil Rights Act of 1875, Congress guaranteed all American Citizens access to all public accommodations. Then, from 1877 until nearly a century after the close of the Civil War, the federal government did nothing to assure that those rights were respected. Eventually, in Brown v. Board of Education, the Supreme Court started to admit that this was a problem, a clear failure to abide by our Constitution. But the Supreme Court (in Brown II) also made clear that it wouldn’t do anything about it.

So things stood, until a man in high office made it his business to get the federal government again on the side of right, equality, and law. That man was Lyndon Baines Johnson. And while this story could be told in fascinating, exhaustive detail,[1] these are its broad outlines.

Jim Crow’s Defenders

Over much of the century following the Civil War’s close, the American South was an accepted aberration, where the federal government turned a blind eye to government mistreatment of U.S. Citizens (as well as to the systematic failure of governments to protect U.S. Citizens from mob rule and racially-tinged violence), and where the highest office White Southerners could realistically dream of attaining was a seat in the U.S. Senate, from which such a Southerner could keep those federal eyes blind.[2], [3] So, for the decades when it mattered, Southern Senators used their seniority and the procedures of the Senate (most prominently the filibuster, pioneered by South Carolina’s John C. Calhoun in the early 1800s) to block any federal ban on lynching, to protect the region’s racial caste system from federal intrusion, and to steadily steer federal money into the rebuilding of their broken region. For decades, the leader of these efforts was Senator Richard Russell, a Democrat from Georgia and an avowed racist, if one whose insistence on the prerogatives of the Senate and leadership on other issues nonetheless earned him the unofficial title, “the Conscience of the Senate.”

LBJ Enters the Picture

Then Lyndon Baines Johnson got himself elected to the Senate as a Democrat from Texas in 1948. He did so through fraud in a hotly contested election. The illegal ballots counted on his behalf turned a narrow defeat into an 87-vote victory that led his Senate colleagues to call him “Landslide Lyndon” for the rest of his career.

By that time, LBJ had established a number of traits that would remain prominent throughout the rest of his life. Everywhere he went, LBJ consistently managed to convince powerful men to treat him as if he were their professional son. For one example, LBJ had convinced the president of his college to treat him, alone among decades of students, as a preferred heir. For another, as a Congressman, he managed to convince Sam Rayburn, a Democrat from Texas and Speaker of the House for 17 of 21 years, a man before whom everyone else in Washington cowered, to allow LBJ to regularly walk up to him in large gatherings and kiss his bald head. And everywhere he went, LBJ consistently managed to identify positions no one else wanted that held potential leverage and therefore could be made focal points of enormous power. When LBJ worked as a Capitol Hill staffer, he turned a “model Congress,” in which staffers played at being their bosses, into a vehicle to move actual legislation through the embarrassment of his rivals’ bosses. On a less positive note, everywhere he went, LBJ demonstrated (time and again) an enthusiasm for verbally and emotionally abusing those subject to his authority – staffers, girlfriends, his wife – sometimes in the service of good causes and other times entirely in the name of his caprice and meanness.

In the Senate, LBJ followed form. He promptly won the patronage of Richard Russell, convincing the arch-segregationist both that he was the Southerner capable of taking up Russell’s mantle after him and that Russell should teach him everything he knew about Senate procedure. Arriving at a time when everyone else viewed Senate leadership positions as thankless drudgery, LBJ talked his way into being named his party’s Senate Whip in only his second Congress in the chamber. Four years later, having impressed his fellow Senators with his ability to accurately predict how they would vote, even as they grew to fear his beratings and emotional abuse, LBJ emerged as Senate Majority Leader. And in 1957, seizing on the support of President Dwight D. Eisenhower, a Republican from Kansas, for such a measure and on the Supreme Court’s recent issuance of Brown, LBJ managed to convince Russell both that the Senate must pass the first Civil Rights Act since Reconstruction – a comparatively weak bill, palatable to Russell as a way to prevent the passage of a stronger one – and that Russell should help him pass it to advance LBJ’s chances of winning the Presidency in 1960 as a loyal Southerner. Substantively, that 1957 Act created the U.S. Civil Rights Commission, a clearinghouse for ideas for further reforms, but one with no enforcement powers. The Act’s real power, though, wasn’t in its substance. Its real power lay in what it demonstrated was suddenly possible: where a weak act could pass, a stronger one was conceivable. And where one was conceivable, millions of Americans long denied equality – Americans taught by Brown, in the memorable phraseology of the Reverend Martin Luther King, Jr., that “justice too long delayed is justice denied” – would demand the passage of the possible.

The Kennedy Years

Of course, Johnson didn’t win the Presidency in 1960. But, in part thanks to Rayburn and Russell’s backing, he did win the Vice Presidency. There, he could do nothing, and did. President John F. Kennedy, a Democrat from Massachusetts, didn’t trust him; the Senate gave him no role; and Bobby Kennedy, the President’s in-house proxy and functional Prime Minister, officially serving as Attorney General, openly mocked and dismissed Johnson as a washed-up, clownish figure. So as the Civil Rights Movement pressed for action to secure the equality long denied, as students were arrested at lunch counters and freedom riders were murdered, LBJ could only take the Attorney General’s abuse, silently sitting back and watching the President commit the White House to pushing for a far more aggressive Civil Rights Act, even as it had no plan for how to get it passed over the opposition of Senator Russell and his bloc of Southern Senators.

Dallas, the Presidency, and How Passage Was Finally Obtained

Not long before his assassination in 1963, President Kennedy proposed stronger legislation and said the nation “will not be fully free until all its citizens are free.” But when an assassin’s bullet tragically struck down President Kennedy on a Dallas street, LBJ became the new President of the United States. The man with a knack for finding leverage and power where others saw none suddenly sat center stage, with every conceivable lever available to him. And he wasted no time deploying those levers. Uniting with Everett Dirksen of Illinois, the opposition party’s leader in the Senate, who delivered the support of eighty-two percent (82%) of his party’s Senators, President Johnson employed every tool available to the chief magistrate to procure passage of the stronger Civil Rights Act he had once promised Senator Russell that the 1957 Act would forestall.

The bill he now backed, like the Civil Rights Act of 1875, would outlaw discrimination based on race, color, religion, or national origin in public accommodations through Title II. It would do more. Title I would forbid the unequal application of voter registration laws to different races. Title III would bar state and local governments from denying access to public facilities on the basis of race, color, religion, or national origin. Title IV would authorize the Department of Justice to bring suits to compel the racial integration of schools. Title VI would bar discrimination on the basis of race, color, or national origin by federally funded programs and activities. And Title VII would bar employers from discriminating in hiring or firing on the basis of race, color, religion, sex, or national origin.

This was the bill approved by the House of Representatives after the President engineered a discharge petition to force it out of committee. This was the bill filibustered by 18 Senators for a record 60 straight days. And this was the bill whose filibuster was finally broken on June 10, 1964, the first filibuster of any kind defeated since 1927. After that lengthy filibuster, the Senate finally passed the bill, 73-27, on June 19, 1964. The House promptly re-passed it as amended by the Senate.

On July 2, 1964, President Johnson signed the Civil Rights Act into law on national television. Finally, on the same day that John Adams had predicted 188 years earlier would be forever commemorated as a “Day of Deliverance” with “Pomp and Parade, with Shews, Games, Sports, Guns, Bells, Bonfires and Illuminations from one End of this Continent to the other[,]” the federal government had restored the law abandoned with Reconstruction in 1876. Once more, the United States government would stand for equality before the law for all its citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] Robert Caro has, so far, written four (4) books over as many decades telling this story over thousands of pages.  The author recommends them, even as he provides this TLDR summation.  Caro’s books on the subject are: The Years of Lyndon Johnson: The Path to Power, The Years of Lyndon Johnson: Means of Ascent, The Years of Lyndon Johnson: Master of the Senate, and The Years of Lyndon Johnson: The Passage of Power.

[2] Black Southerners, almost totally barred from voting, could not realistically hope for election to any office over this period.  It is worth noting, however, that the Great Migration saw a substantial portion of America’s Black population move North, where this was not the case and where such migrants (and their children) could and did win elective office.

[3] The exception proving the rule is President Woodrow Wilson.  Wilson, the son of a Confederate veteran, was born in Virginia and raised mostly in South Carolina.  Yet he ran for the Presidency as the Governor of New Jersey, a position be acquired as a result of his career at Princeton University (and the progressive movement’s adoration of the “expertise” that an Ivy League President seemed to promise).  Even then, Wilson could only win the Presidency (which empowered him to segregate the federal workforce) when his two predecessors ran against each other and split their shared, super-majority support.

Guest Essayist: Robert L. Woodson, Sr.

When President Lyndon B. Johnson announced the launch of a nationwide War on Poverty in 1964, momentary hope arose that it would uplift the lives of thousands of impoverished Americans and their inner-city neighborhoods. But the touted antipoverty campaign of the 60s is a classic example of injury inflicted by the helping hand.

Regardless of intention—or mantras—the ultimate measure of any effort to reduce poverty is the impact it has on its purported beneficiaries. After more than 60 years and the investment of $25 trillion of taxpayers’ money, poverty numbers have remained virtually the same, while conditions in low-income neighborhoods have spiraled downward.

While impoverished Americans may not be rising up, what has become a virtual “poverty industry” and the bureaucracy of the welfare system have prospered, expanding to 89 separate programs spread across 14 government departments and agencies. In sum, 70% of anti-poverty funding has not reached the poor but has been absorbed by those who serve the poor. As a consequence, the system has created a commodity out of the poor, with perverse incentives to maintain people in poverty as dependents. The operative question became not which problems are solvable, but which ones are fundable.

I had first-hand experience of the power and money grabs that followed the launch of Johnson’s antipoverty agenda. As a young civil rights leader at the time of its introduction, I was very hopeful that, at long last, policies would be adopted that would direct resources to empower the poor to rise. I was working for the summer in Pasadena, California, leading a work project with the American Friends Service Committee, in the year after the Watts riots and the government’s response to them through the War on Poverty.

Initially, the anti-poverty money funded grassroots leaders in high-crime, low-income neighborhoods who had earned the trust and confidence of local people and had their best interests at heart. But many of the local grassroots leaders who were paid by the program began to raise questions about the functions of the local government and how it was assisting the poor. These challenges from the residents became very troublesome to local officials and they responded by appealing to Washington to change the rules to limit the control that those grassroots leaders could exercise over programs to aid their peers.

One of the ways the Washington bureaucracy responded was to institute a requirement that all outreach workers had to be college-educated as a condition of their employment. Overnight, committed and trusted workers on the ground found themselves out of a job. In addition, it was ruled that the allocation and distribution of all incoming federal dollars was to be controlled by a local anti-poverty board of directors that represented three groups: 1/3 local officials, 1/3 business leaders and 1/3 local community leaders. I knew from the moment those structural changes occurred that the poverty program was going to be a disaster and that it would serve the interests of those who served the poor with little benefit to its purported beneficiaries.

Since only a third of the participants on the board would be from the community, the other two-thirds were careful to ensure that the neighborhood residents would be ineffective and docile representatives who would ratify the opportunistic and often corrupt decisions they made. In the town where I was engaged in civil rights activities, I witnessed local poverty agencies awarding daycare contracts to business members on the board who would lease space at three times the market-value rate.

Years of such corruption throughout the nation were later followed by many convictions and the incarceration of people who were exploiting the programs and hurting the poor. When they were charged with corruption, many of the perpetrators used the issue of race to defend themselves. The practice of using race as a shield of defense against charges for corrupt activity continues to this day. The disgraced former Detroit Mayor Kwame Kilpatrick received a 28-year sentence for racketeering, bribery, extortion and tax crimes. Last year, more than 40 public and private officials were charged as part of a long-running and expanding federal investigation into public corruption in metro Detroit, including fifteen police, five suburban trustees, millionaire moguls and a former state senator. Much of the reporting about corruption in the administration of poverty programs never rose to the level of public outrage or indignation and was treated as a local issue.

Yet the failure of the welfare system and the War on Poverty is rooted in something deeper than the opportunistic misuse of funds. Its most devastating impact is in undermining pillars of strength that have empowered the black community to survive and thrive in spite of oppression: a spirit of enterprise and mutual cooperation, and the sustaining support of family and community.

In the past, even during periods of legalized discrimination and oppression, a spirit of entrepreneurship and agency permeated the black community. Within the first 50 years after the Emancipation Proclamation, black Americans had accumulated a personal wealth of $700 million. They owned more than 40,000 businesses and more than 930,000 farms. Black commercial enclaves in Durham, North Carolina, and the Greenwood Avenue section of Tulsa, Oklahoma, were known as the Negro Wall Street. When blacks were barred from white establishments and services, they created their own thriving alternative transit systems. When whites refused to lend money to blacks, they established more than 103 banks and savings and loan associations and more than 1,000 inns and hotels. When whites refused to treat blacks in hospitals, they established 230 hospitals and medical schools throughout the country.

In contrast, within the bureaucracy of the burgeoning poverty industry, low-income people were defined as the helpless victims of an unfair and unjust society. The strategy of the liberal social engineers was to right this wrong by the redistribution of wealth, facilitated by the social services bureaucracy in the form of cash payments or equivalent benefits. The cause of a person’s poverty was assumed to be beyond their power and ability to control; therefore, resources were given with no strings attached and with no assumption of the possibility of upward mobility toward self-sufficiency. The empowering notions of personal responsibility and agency were decried as “blaming the victim,” and, with the spread of that mentality and the acceptance of a state of dependency, the rich heritage of entrepreneurship in the black community fell by the wayside.

Until the mid-60s, in 85% of all black families, two parents were raising their children. Since the advent of the Welfare State, more than 75% of black children have been born to single mothers. The system included penalties for marriage and work through which benefits would be decreased or terminated. As income was detached from work, the role of fathers in the family was undermined and dismissed. The dissolution of the black family was considered necessary collateral damage in a war that was being waged in academia against capitalism in America, led by Columbia University professors Richard Cloward and Frances Fox Piven, who promoted a massive rise of dependency with the goal of overloading the U.S. public welfare system and eliciting “radical change.”

Reams of research have found that youths in two-parent families are less likely to become involved in delinquent behavior and drug abuse or to suffer depression, and more likely to succeed in school and pursue higher education. As generations of children grew up on the streets of the inner city, drug addiction and school drop-out rates soared. When youths turned to gangs for identity, protection, and a sense of belonging, entire neighborhoods became virtual killing fields of warring factions. Statistics from Chicago alone bring home the tragic toll that has been taken. Over Fathers’ Day weekend, 104 people were shot across the city, 15 of them, including five children, fatally. Within a three-day period of the preceding week, a three-year-old child was shot and killed in the South Austin community, the third child under the age of 10 to be shot.

In the midst of this tragic scenario, the true casualties of the War on Poverty have been its purported beneficiaries.

Robert L. Woodson, Sr. founded the Woodson Center in 1981 to help residents of low-income neighborhoods address the problems of their communities. A former civil rights activist, he has headed the National Urban League Department of Criminal Justice and has been a resident fellow at the American Enterprise Institute for Public Policy Research. Referred to by many as the “godfather” of the neighborhood empowerment movement, for more than four decades Woodson has had a special concern for the problems of youth. In response to an epidemic of youth violence that has afflicted urban, rural and suburban neighborhoods alike, Woodson has focused much of the Woodson Center’s activities on an initiative to establish Violence-Free Zones in troubled schools and neighborhoods throughout the nation. He is an early MacArthur “genius” awardee and the recipient of the 2008 Bradley Prize, the Presidential Citizens Award, and a 2008 Social Entrepreneurship Award from the Manhattan Institute.

Guest Essayist: Andrew Langer

We are going to assemble the best thought and broadest knowledge from all over the world to find these answers. I intend to establish working groups to prepare a series of conferences and meetings—on the cities, on natural beauty, on the quality of education, and on other emerging challenges. From these studies, we will begin to set our course toward the Great Society. – President Lyndon Baines Johnson, Ann Arbor, MI, May 22, 1964

In America in 1964, the seeds of the later discontent of the 1960s were being planted. The nation had just suffered the horrific assassination of an enormously charismatic president, John F. Kennedy; we were in the midst of an intense national conversation on race and civil rights; and we were just starting to get mired in a military conflict in Southeast Asia.

We were also heading into a presidential election, and while tackling poverty in America wasn’t a centerpiece of the campaign, President Johnson started giving a series of speeches about transforming the United States into a “Great Society”—a concept that would become the most massive series of social welfare reforms since Franklin Roosevelt’s post-Depression “New Deal” of the 1930s.

At that time, there was serious debate over whether the federal government even had the power to engage in what had traditionally been state-level social support work—or, before that, private charitable work. The debate centered on the Constitution’s “general welfare” clause, the actionable part of the Constitution building on the Preamble’s “promote the general welfare” language, which says in Article I, Section 8, Clause 1 that, “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;” (emphasis added)

Proponents of an increased federal role in social service spending have argued that “welfare” for this purpose means just what politicians today proffer that it does: that “welfare” means social service spending, and that because the Constitution grants Congress this power, such power is expansive (if not unlimited).

But this flies in the face of the whole concept of the Constitution itself—the idea of a federal government of limited, carefully-enumerated powers. The founders were skeptical of powerful, centralized government (and had fought a revolution over that very point), and the question of just how powerful and how centralized it should be was at the core of the Constitutional Convention’s debates.

Constitutional author (and later president) James Madison said this in Federalist 41:

It has been urged and echoed, that the power “to lay and collect taxes, duties, imposts, and excises, to pay the debts, and provide for the common defense and general welfare of the United States,’’ amounts to an unlimited commission to exercise every power which may be alleged to be necessary for the common defense or general welfare. No stronger proof could be given of the distress under which these writers labor for objections, than their stooping to such a misconstruction. Had no other enumeration or definition of the powers of the Congress been found in the Constitution, than the general expressions just cited, the authors of the objection might have had some color for it; though it would have been difficult to find a reason for so awkward a form of describing an authority to legislate in all possible cases.

In 1831, he also said, more plainly:

With respect to the words “general welfare,” I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense would be a metamorphosis of the Constitution into a character which there is a host of proofs was not contemplated by its creators.

This was, essentially, the interpretation of the clause that stood for nearly 150 years—only to be largely gutted in the wake of FDR’s New Deal programs. As discussed in the essay on FDR’s first 100 days, there was great back and forth within the Supreme Court over the constitutionality of the New Deal—with certain members of the Court eventually, and apparently, succumbing to the pressure of a proposed plan to “pack” the Supreme Court with newer, younger members.

A series of cases, starting with United States v. Butler (1936) and then Helvering v. Davis (1937), essentially ruled that Congress’ power to spend was non-reviewable by the Supreme Court: there could be no constitutional challenge to spending plans, and if Congress said a spending plan was to “promote the general welfare,” then that is what it was.

Madison was right to be fearful—when combined with an expansive interpretation of the Commerce Clause, this reading gives the federal government near-unlimited power. Either something is subject to federal regulation because it’s an “item in or related to commerce,” or it’s subject to federal spending because it “promotes the general welfare.”

Building on this, LBJ moved forward with the Great Society in 1964, creating a series of massive spending and federal regulatory programs whose goal was to eliminate poverty and create greater equity in social service programs.

Problematically, LBJ created a series of “task forces” to craft these policies—admittedly because he didn’t want public input or scrutiny that would lead to criticism of the work his administration was doing.

Normally, when the executive branch engages in policymaking, those policies are governed by a series of rules aimed at ensuring public participation—both so that the public can offer their ideas at possible solutions, but also to ensure that the government isn’t abusing its powers.

Here, the Johnson administration did no such thing—creating, essentially, a perfect storm of problematic policymaking: a massive upheaval of government policy, coupled with massive spending proposals, coupled with little public scrutiny.

Had they allowed for greater public input, someone might have pointed out what the founders knew: that there was a reason such social support had traditionally been either the purview of local governance or private charity, and that such programs are much more effective when they are locally driven and/or community based. Local services work because local providers better understand the challenges their communities face.

And private charities provide more effective services because they not only have a vested interest in the outcomes, but that vested interest is driven by building relationships centered on faith and hope. If government programs are impersonal, government programs whose management is far removed from the local communities are far worse.

The end result is twofold: faceless entitlement bureaucracies whose only incentive is self-perpetuation (not solving problems), and people who have little incentive to move themselves off of these programs.

Thus, Johnson’s Great Society was a massive failure. Not only did it not end poverty, it created a devastating, perpetual cycle of it through enormous bureaucratic programs which still exist today and which, despite pressures at various points in time (the effort of President Bill Clinton and the GOP-led Congress after the 1994 election to reform the nation’s welfare programs being one example), remain largely resistant to change or improvement.

The founders knew that local and private charity did a better job at promoting “the general welfare” of a community than a federal program would. They knew the dangers of expansive government spending (and the power that would accrue with it). Once again, as Justice Sandra Day O’Connor said in New York v. United States (1992), the “Constitution protects us from our own best intentions.”

Andrew Langer is President of the Institute for Liberty. He teaches in the Public Policy Program at the College of William & Mary.

Guest Essayist: Joshua Schmid

The Cold War was a time of immense tension between the world’s superpowers, the Soviet Union and the United States. However, the two never came into direct conflict during the first decade and a half of the struggle, instead pursuing proxy wars in order to dominate the geopolitical landscape. The Cuban Missile Crisis of October 1962 threatened to reverse this course by turning the war “hot,” as the leader of the free world and the leader of the world communist revolution squared off in a deadly game of nuclear cat and mouse.

At the beginning of the 1960s, some members of the Soviet Union’s leadership desired more aggressive policies against the United States. The small island of Cuba, located a mere 90 miles off the coast of Florida, provided the Soviets with an opportunity. Cuba had recently undergone a communist revolution, and its leadership was happy to accept Soviet intervention if it would minimize American harassments like the failed Bay of Pigs invasion in 1961. Soviet Premier Nikita Khrushchev offered to place nuclear missiles on Cuba, which would put him within striking range of nearly any target on the continental U.S. The Cubans accepted, and work on the missile sites began during the summer of 1962.

Despite an elaborate scheme to disguise the missiles and the launch sites, American intelligence discovered the Soviet plan by mid-October. President John F. Kennedy immediately convened a team of security advisors, who suggested a variety of options. These included ignoring the missiles, using diplomacy to pressure the Soviets to remove the missiles, invading Cuba, blockading the island, and strategic airstrikes on the missile sites. Kennedy’s military advisors strongly urged a full-scale invasion of Cuba as the only way to defeat the threat. However, the president ultimately overrode them, deciding that any attack would only provoke greater conflict with the Russians. On October 22, Kennedy gave a speech to the American people in which he called for a “quarantine” of the island under which “all ships of any kind bound for Cuba, from whatever nation or port, will, if found to contain cargoes of offensive weapons, be turned back.”

The Russians appeared unfazed by the bravado of Kennedy’s speech, and announced they would interpret any attempts to quarantine the island of Cuba as an aggressive act. However, as the U.S. continued to stand by its policy, the Soviet Union slowly backed down. When Russian ships neared Cuba, they broke course and moved away from the island rather than challenging the quarantine. Despite this small victory, the U.S. still needed to worry about the missiles already installed.

In the ensuing days, the U.S. continued to insist on the removal of the missiles from Cuba. As the haggling between the two nations continued, the nuclear launch sites became fully operational. Kennedy began a more aggressive policy that included a threat to invade Cuba. Amidst these tensions, the most harrowing event of the entire Cuban Missile Crisis occurred. The Soviet submarine B-59 neared the blockade line and was harassed by American warships dropping depth charges. The submarine had lost radio contact with the rest of the Russian navy and could not surface to refill its oxygen. The captain of B-59 decided that war must have broken out between the U.S. and the Soviet Union, and proposed that the submarine launch its nuclear torpedo. This action required a unanimous vote by the top three officers onboard. Fortunately, the executive officer cast the lone veto vote against what surely would have been an apocalyptic action.

Eventually, Khrushchev and Kennedy reached an agreement that brought an end to the crisis. The Russians removed the missiles from Cuba and the U.S. promised not to invade the island. Additionally, Kennedy removed missiles stationed near the Soviet border in Turkey and Italy as a show of good faith. A brief cooling period between the two superpowers would ensue, during which time a direct communication line between the White House and the Kremlin was established. And while the Cold War would continue for three more decades, never again would the two blocs be so close to nuclear annihilation as they were in October 1962.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.

Guest Essayist: Tony Williams

The Cold War between the United States and Soviet Union was a geopolitical struggle around the globe characterized by an ideological contest between capitalism and communism, and a nuclear arms race. An important part of the Cold War was the space race which became a competition between the two superpowers.

Each side sought to be the first to achieve milestones in the space race and used those achievements for propaganda value in the Cold War. The Soviet launch of the satellite Sputnik, while a relatively modest accomplishment, became a symbolically important event that triggered and defined the dawn of the space race. The space race was one of the peaceful competitions of the Cold War and pushed the boundaries of the human imagination.

The Cold War nuclear arms race helped lead to the development of rocket technology that made putting humans into space a practical reality in a short time. Only 12 years after the Russians launched a satellite into orbit around the Earth, Americans sent astronauts to walk on the moon.

The origins of Sputnik and spaceflight occurred a few decades before World War II, with the pioneering flights of liquid-fueled rockets in the United States and Europe. American Robert Goddard launched one from a Massachusetts farm in 1926 and continued to develop the technology on a testing range in New Mexico in the 1930s. Meanwhile, Goddard’s research influenced the work of German rocketeer Hermann Oberth who fired the first liquid-fueled rocket in Europe in 1930 and dreamed of spaceflight. In Russia, Konstantin Tsiolkovsky developed the idea of rocket technology, and his ideas influenced Sergei Korolev in the 1930s.

The greatest advance in rocket technology took place in Nazi Germany, where Wernher von Braun led efforts to build V-2 and other rockets that could hit England and terrorize civilian populations when launched from continental Europe. Hitler’s superweapons never delivered the decisive victory he hoped for, but the rockets had continuing military and civilian applications.

At the end of the war, Russian and Allied forces raced to Berlin as the Nazi regime collapsed in the spring of 1945. Preferring to surrender to the Americans because of the Red Army’s well-deserved reputation for brutality, von Braun and his team famously surrendered to Private Fred Schneikert and his platoon. They turned over 100 unfinished V-2 rockets and 14 tons of spare parts and blueprints to the Americans who whisked the scientists, rocketry, and plans away just days before the Soviet occupation of the area.

In Operation Paperclip, the Americans secretly brought thousands of German scientists and engineers to the United States, including more than 100 German rocket scientists from von Braun’s team. The operation was controversial because of Nazi Party affiliations, but few were rabid devotees of Nazi ideology, and their records were cleared. The Americans did not want them contributing to Soviet military production and brought them instead to Texas and then to Huntsville, Alabama, to develop American rocket technology as part of the nuclear arms race to build immense rockets to carry nuclear warheads. Within a decade, both sides had intercontinental ballistic missiles (ICBMs) in their arsenals.

During the next decade, the United States developed various missile systems producing rockets of incredible size, thrust, and speed that could travel large distances. Interservice rivalry meant that the U.S. Army, Navy, and Air Force developed and built their own competing rocket systems, including the Redstone, Vanguard, Jupiter-C, Polaris, and Atlas rockets. Meanwhile, the Soviets were secretly building their own R-7 missile, built as a cluster rather than a staged rocket.

On October 4, 1957, the Russians shocked Americans by successfully launching a satellite into orbit. Sputnik was a metal sphere weighing 184 pounds that emitted a beeping sound to Earth that was embarrassingly picked up by U.S. global tracking stations. The effort was not only part of the Cold War, but also of the International Geophysical Year, in which scientists from around the world formed a consortium to share information on highly active solar flares and a host of other scientific knowledge. However, both the Soviets and Americans were highly reluctant to share any knowledge that might relate to military technology.

While American intelligence had predicted the launch, Sputnik created a wave of panic and near hysteria. Although President Dwight Eisenhower was publicly unconcerned because the United States was preparing its own satellite, the American press, the public, and Congress were outraged, fearing the Russians were spying on them or could rain down nuclear weapons from space. Moreover, it seemed as if the Americans were falling behind the Soviets. Henry Jackson, a Democratic senator from the state of Washington, called Sputnik “a devastating blow to the United States’ scientific, industrial, and technical prestige in the world.” Sputnik initiated the space race between the United States and Soviet Union as part of the Cold War superpower rivalry.

A month later, the Soviets sent a dog named Laika into space aboard Sputnik II. Although the dog died because it only had life support systems for a handful of days, the second successful orbiting satellite—this one carrying a living creature—further humiliated Americans even if they humorously dubbed it “Muttnik.”

The public relations nightmare was further exacerbated by the explosion of a Vanguard rocket carrying a Navy satellite at the Florida Missile Test Range at Patrick Air Force Base on Cape Canaveral on December 6. The event was aired on television and watched by millions. The launch was supposed to restore pride in American technology, but it was an embarrassing failure. The press had a field day and labeled it “Kaputnik” and “Flopnik.”

On January 31, 1958, Americans finally had reason to cheer when a Jupiter-C rocket lifted off and went into orbit carrying a thirty-one-pound satellite named Explorer. The space race was now on, and each side competed to be the first to achieve each new milestone. The space race also had significant impacts upon American society.

In 1958, Congress passed the National Defense Education Act to spend more money to promote science, math, and engineering education at all levels. To signal its peaceful intentions, Congress also created the National Aeronautics and Space Administration (NASA) as a civilian organization to lead the American efforts in space exploration, whereas the Russian program operated as part of the military.

In December 1958, NASA announced Project Mercury, with the goal of putting an astronaut in space. It was followed by Projects Gemini and Apollo, culminating in Neil Armstrong and Buzz Aldrin walking on the moon. The space race was an important part of the Cold War, but it was also about the spirit of human discovery and pushing the frontiers of knowledge and space.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Gary Porter

While speaking on June 14, 1954, Flag Day, President Dwight D. Eisenhower talked about the importance of reaffirming religious faith in America’s heritage and future, saying that doing so would “constantly strengthen those spiritual weapons which forever will be our country’s most powerful resource, in peace or in war.” In 1864, during the Civil War, the phrase “In God We Trust” first appeared on U.S. coins. On July 30, 1956, President Eisenhower signed into law a bill declaring “In God We Trust” the nation’s motto and requiring that it be printed, in capital letters, on every denomination of United States paper currency.

“The Hand of providence has been so conspicuous in all this, that he must be worse than an infidel that lacks faith, and more than wicked, that has not gratitude enough to acknowledge his obligations.” George Washington, 1778.[i]

“It becomes a people publicly to acknowledge the over-ruling hand of Divine Providence and their dependence upon the Supreme Being as their Creator and Merciful Preserver . . .” Samuel Huntington, 1791.[ii]

“We are a religious people whose institutions presuppose a Supreme Being.” Associate Justice William O. Douglas, 1952.[iii]

One of the most enduring battles in American politics has been over the question of whether America is or ever was a Christian Nation. For Supreme Court Associate Justice David Brewer the answer was simple: yes. The United States was formed as, and in Brewer’s 1892 view at least still was, a Christian Nation. The Justice said as much in Church of the Holy Trinity v. United States. But his simple answer did not go unsupported.

“[I]n what sense can [the United States] be called a Christian nation? Not in the sense that Christianity is the established religion or the people are compelled in any manner to support it…Neither is it Christian in the sense that all its citizens are either in fact or in name Christians. On the contrary, all religions have free scope within its borders. Numbers of our people profess other religions, and many reject all…Nevertheless, we constantly speak of this republic as a Christian Nation – in fact, as the leading Christian Nation of the world. This popular use of the term certainly has significance. It is not a mere creation of the imagination. It is not a term of derision but has a substantial basis – one which justifies its use. Let us analyze a little and see what is the basis.”[iv]

Brewer went on, of course, to do just that.

Regrettably, it lies beyond the scope of this short essay to repeat Brewer’s arguments. In 1905, Brewer re-assembled them into a book: The United States a Christian Nation. It was republished in 2010 by American Vision and is worth the read.[v]  For the purposes of this essay I will stipulate, with Brewer, that America is a Christian nation. If that be the case, it should come as no surprise that such a nation would take the advice of Samuel Huntington and openly acknowledge its trust in God on multiple occasions and in a variety of ways: on its coinage, for instance. How we came to do that as a nation is an interesting story stretching over much of our history.

Trusting God was a familiar concept to America’s settlers – they spoke and wrote of it often. Their Bibles, at least one in every home, contained many verses encouraging believers to place their trust in God,[vi] and early Americans knew their Bible.[vii] Upon surviving the perilous voyage across the ocean, their consistent first act was to thank the God of the Bible for their safety.

Benjamin Franklin’s volunteer Pennsylvania militia of 1747-1748 reportedly had regimental banners displaying “In God We Trust.”[viii] In 1776, our Declaration of Independence confirmed the signers had placed “a firm reliance on the protection of divine Providence.”[ix] In 1814, Francis Scott Key penned his famous poem which eventually became our national anthem. The fourth stanza contains the words: “Then conquer we must, when our cause is just, and this be our motto: ‘In God is our trust.’”

In 1848, construction began on the first phase of the Washington Monument (it was not completed until 1884). “In God We Trust” sits among Bible verses chiseled on the inside walls and “Praise God” (“Laus Deo” in Latin) can be found on its cap plate. But it would be another thirteen years before someone suggested putting a “recognition of the Almighty God” on U.S. coins.

That someone, Pennsylvania minister M. R. Watkinson, wrote to Salmon P. Chase, Abraham Lincoln’s Secretary of the Treasury, and suggested that such a recognition of the Almighty God would “place us openly under the Divine protection we have personally claimed.” Watkinson suggested the words “PERPETUAL UNION” and “GOD, LIBERTY, LAW.” Chase liked the basic idea but not Watkinson’s suggestions. He instructed James Pollock, Director of the Mint at Philadelphia, to come up with a motto for the coins: “The trust of our people in God should be declared on our national coins. You will cause a device to be prepared without unnecessary delay with a motto expressing in the fewest and tersest words possible this national recognition” (emphasis mine).

Secretary Chase “wordsmithed” Director Pollock’s suggestions a bit and came up with his “tersest” words: “IN GOD WE TRUST,” which was ordered to be so engraved by an Act of Congress on April 22, 1864. First to bear the words was the 1864 two-cent coin.

The following year, another Act of Congress allowed the Mint Director to place the motto on all gold and silver coins that “shall admit the inscription thereon.” The motto was promptly placed on the gold double-eagle coin, the gold eagle coin, and the gold half-eagle coin. It was also minted on silver coins, and on the nickel three-cent coin beginning in 1866.

One might guess that the phrase has appeared on all U.S. coins since 1866 – one would be wrong.

The U.S. Treasury website explains (without further details) that “the motto disappeared from the five-cent coin in 1883, and did not reappear until production of the Jefferson nickel began in 1938.” The motto was also “found missing from the new design of the double-eagle gold coin and the eagle gold coin shortly after they appeared in 1907. In response to a general demand, Congress ordered it restored, and the Act of May 18, 1908, made it mandatory on all coins upon which it had previously appeared” [x] (emphasis added). I’m guessing someone got fired over that disappearing act. Since 1938, all United States coins have borne the phrase. No others have had it “go missing.”

The year 1956 was a watershed. As you read in the introduction to this essay, that year President Dwight D. Eisenhower signed a law (P.L. 84-140) which declared “In God We Trust” to be the national motto of the United States. The bill had passed the House and the Senate unanimously and without debate. The following year the motto began appearing on U.S. paper currency, beginning with the one-dollar silver certificate. The Treasury gradually included it as part of the back design of all classes and denominations of currency.

Our story could end there – but it doesn’t.

There is no doubt Founding Era Americans would have welcomed the phrase on their currency had someone suggested it, but it turns out some Americans today have a problem with it – a big problem.

America’s atheists continue to periodically challenge the constitutionality of the phrase appearing on government coins. The first challenge occurred in 1970; Aronow v. United States would not be the last. Additional challenges were mounted in 1978 (O’Hair v. Blumenthal) and 1979 (Madalyn Murray O’Hair vs W. Michael Blumenthal). Each of these cases was decided at the circuit court level against the plaintiff, with the court affirming that the “primary purpose of the slogan was secular.”

Each value judgment under the Religion Clauses must therefore turn on whether particular acts in question are intended to establish or interfere with religious beliefs and practices or have the effect of doing so. [xi]

Having the national motto on currency neither established nor interfered with “religious beliefs and practices.”

In 2011, in case some needed a reminder, the House of Representatives passed a new resolution reaffirming “In God We Trust” as the official motto of the United States by a 396–9 vote (recall that the 1956 vote had been unanimous; here in the 21st century it was not).

Undaunted by the courts’ previous opinions on the matter, atheist activist Michael Newdow brought a new challenge in 2019 — and lost in the Eighth Circuit. The Supreme Court (on April 23, 2020) declined to hear the appeal. By my count, Newdow is now 0-5. His 2004 challenge[xii] that the words “under God” in the Pledge of Allegiance violated the First Amendment was a bust, as was his 2009 attempt to block Chief Justice John Roberts from including the phrase “So help me God” when administering the presidential oath of office to Barack Obama. He tried to stop the phrase from being recited in the 2013 and 2017 inaugurations as well – each time unsuccessfully.

In spite of atheist challenges, or perhaps because of them, our national motto is enjoying a bit of resurgence of late, at least in the more conservative areas of the country:

In 2014, the Mississippi legislature voted to add the words “In God We Trust” to the state seal.

In 2015, Jefferson County, Illinois decided to put the national motto on their police squad cars. Many other localities followed suit, including York County, Virginia, and Bakersfield, California, in 2019.

In March 2017, Arkansas required its public schools to display posters that include the national motto. Similar laws were passed in Florida (2018), Tennessee (2018), South Dakota (2019) and Louisiana (2019).

On March 3, 2020, the Oklahoma House of Representatives passed a bill that would require all public buildings in the state to display the motto. Kansas and Indiana are considering similar bills.

But here is the question which lies at the heart of this issue: Does America indeed trust in God?

I think it is clear that America’s Founders, by and large, did – at least they said and acted as though they did. But when you look around the United States today, outside of some limited activity on Sunday mornings and on the National Day of Prayer, does America actually trust in God? There is ample evidence we trust in everything and anything but God.

Certainly we seem to trust in science, or what passes for science today.  We put a lot of trust in public education, it would seem, even though the results are quite unimpressive and the curriculum actually works to undermine trust in God. Finally, we put a lot of trust in our elected officials even though they betray that trust with alarming regularity.[xiii]

Perhaps citizens of the United States need to see our motto on our currency and on school and courtroom walls simply to remind us of what we should be doing, and doing more often.

“America trusts in God,” we declare. Do we mean it?

“And those who know your name put their trust in you, for you, O Lord, have not forsaken those who seek you.” Psalm 9:10 ESV

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached on Facebook or Twitter (@constitutionled).


[i] Letter to Thomas Nelson, August 20, 1778.

[ii] Samuel Huntington was a signer of the Declaration of Independence; President of Congress; Judge; and Governor of Connecticut. Quoted from A Proclamation for a Day of Fasting, Prayer and Humiliation, March 9, 1791.

[iii] Zorach v. Clauson, 343 U.S. 306 (1952).

[iv] Church of the Holy Trinity v. United States, 143 U.S. 457 (1892).


[vi] Examples include: Psalm 56:3, Isaiah 26:4, Psalm 20:7, Proverbs 3:5-6 and Jeremiah 17:7.

[vii] “Their many quotations from and allusions to both familiar and obscure scriptural passages confirms that [America’s Founders] knew the Bible from cover to cover.” Daniel L. Driesbach, 2017, Reading the Bible with the Founding Fathers, Oxford University Press, p.1

[viii] See

[ix] Thomas Jefferson, Declaration of Independence, July 1776.



[xii] Newdow v. United States, 328 F.3d 466 (9th Cir. 2004)


Guest Essayist: Tony Williams

In 1919, Dwight Eisenhower was part of a U.S. Army convoy of motor vehicles traveling across the country as a publicity stunt. The convoy encountered woefully inadequate roads in terrible condition. The journey took two months to complete.

When Eisenhower was in Germany after the end of World War II, he was deeply impressed by the Autobahn because of its civilian and military applications. The experiences were formative in shaping Eisenhower’s thinking about developing a national highway system in the United States. He later said, “We must build new roads,” and asked Congress for “forward looking action.”

As president, Eisenhower generally held to the postwar belief called “Modern Republicanism.” This meant that while he did not support a massive increase in spending on the federal New Deal welfare state, he would also not roll it back. He was a fiscal conservative who supported decreased federal spending and balanced budgets, but he advocated a national highway system as a massive public infrastructure project to facilitate private markets and economic growth.

The postwar consumer culture was dominated by the automobile. Americans loved their large cars replete with large tail fins and abundant amounts of chrome. By 1960, 80 percent of American families owned a car. American cars symbolized their geographical mobility, consumer desires, and global industrial predominance. They needed a modern highway system to get around the sprawling country. By 1954, President Eisenhower was ready to pitch the idea of a national highway system to Congress and the states. He called it “the biggest peacetime construction project of any description ever undertaken by the United States or any other country.”

In July, Eisenhower dispatched his vice president, Richard Nixon, to the Governors’ Conference to win support. The principle of federalism loomed large, with many states opposed to federal control and taxes.

That same month, the president asked his friend General Lucius Clay, an engineer by training who had supervised the occupation of postwar Germany, to manage the planning of the project and present it to Congress. Clay organized the President’s Advisory Committee on a National Highway Program.

The panel held hearings and spoke to a variety of experts and interests including engineers, financiers, construction and trucking companies, and labor unions. Based upon the information it amassed, the panel put together a plan by January 1955.

The plan proposed 41,000 miles of highway construction at an estimated cost of $101 billion over ten years. It recommended the creation of a federal highway corporation that would use 30-year bonds to finance construction. There would be a gas tax but no tolls or federal taxes. A bill was written based upon the terms of the plan.

The administration sent the bill to Congress the following month, but a variety of interests expressed opposition to it. Southern members of Congress, for example, were particularly concerned about federal control because it might set a precedent for challenging segregation. Eisenhower and his allies pushed hard for the bill, using the Cold War to sell it as a means of facilitating evacuation from cities in case of a nuclear attack. The bill passed the Senate but then stalled in the House, where it died during the congressional session.

The administration reworked the bill and sent it to Congress again. The revised proposal created a Highway Trust Fund that would be funded and replenished with taxes primarily on gasoline, diesel oil, and tires. No federal appropriations would be used for interstate highways.

The bill passed both houses of Congress in May and June 1956, and the president triumphantly signed it into law on June 29, creating the National System of Interstate and Defense Highways.

The interstate highway system transformed the landscape of the United States in the postwar period. It linked the national economy, markets, and large cities together. It contributed to the growth of suburban America as commuters could now drive their cars to work in cities or consumers could drive to shopping malls. Tourists could travel expeditiously to vacations at distant beaches, national parks, and amusement parks like Disneyland. Cheap gas, despite the taxes to fund the highways, was critical to travel along the interstates.

The interstate highway system later became entwined in national debates over energy policy in the 1970s when OPEC embargoed oil to the United States. Critics said gas-guzzling cars should be replaced by more efficient cars or public transportation, that American love of cars contributed significantly to the degradation of the environment, and that America had reached an age of limits.

The creation of the interstate highway system was a marvel of American postwar prosperity and contributed to its unrivaled affluence. It also symbolized some of the challenges Americans faced. Both the success of  completing the grand public project and the ability to confront and solve new challenges represented the American spirit.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Dan Morenoff

You can count on one hand the number of Supreme Court decisions that normal people can identify by name and subject. Brown is one of them (and, arguably, both the most widely and most accurately known). Ask any lawyer what the most important judicial decision in American history is, and they will almost certainly tell you, with no hesitation, Brown v. Board of Education. It’s the case that, for decades, Senators have asked every judicial nominee to explain why it is right.

Its place in the public mind is well-deserved, even if it should be adjusted to reflect more accurately its place in modern American history.

Backstory: From Reconstruction’s Promise to Enshrinement of Jim Crow in Plessy

Remember the pair of course reversals that followed the Civil War.

Between 1865 and 1876, Congress sought to make good the Union’s promises to the freedmen emancipated during the war. In the face of stiff, violent resistance by those who refused to accept the war’s verdict, America amended the Constitution three (3) times, with: (a) the Thirteenth Amendment banning slavery; (b) the Fourteenth Amendment: (i) affirmatively acting to create and bestow American citizenship on all those born here, (ii) barring states from “abridg[ing] the privileges or immunities of citizens of the United States[,]” and (iii) guaranteeing the equal protection of the laws; and (c) the Fifteenth Amendment barring states from denying American citizens the right to vote “on account of race, color, or previous condition of servitude.” Toward the same end, Congress passed the Civil Rights Acts of 1866 and 1875, the Enforcement Acts of 1870 and 1871, and the Ku Klux Klan Act. They created the Department of Justice to enforce these laws and supported President Grant in his usage of the military to prevent states from reconstituting slavery under another name.

Until 1876. To solve the constitutional crisis of a presidential election with no clear winner, Congress (and President Hayes) effectively, if silently, agreed to end all that abruptly. The federal government removed troops from the former Confederate states and stopped trying to enforce federal law. And the states “redeemed” by the violent forces of retaliation amended their state constitutions and passed the myriad laws creating the “Jim Crow” regime of American apartheid. Under Jim Crow, the races were separated, the public services available to an American came to differ radically depending on that American’s race, and the rights of disfavored races were severely curtailed. Most African Americans were disenfranchised, then disarmed, and then subjected to mob violence to incentivize compliance with the “redeemer” community’s wishes.

One could point to a number of crystallizing moments as the key point when the federal government made official that it and national law would do nothing to stop any of this. But the most commonly cited is the Supreme Court’s Plessy v. Ferguson decision, issued in 1896. It was a case arising out of New Orleans and its even-then-long-multi-hued business community. There, predictably, were companies and entrepreneurs that hated these laws interfering with their businesses and their ability to provide services to willing buyers on the (racially integrated) basis they preferred. A particularly hated law passed by the State of Louisiana compelled railroads (far and away the largest industry of the day) to separate customers into different cars on the basis of race. With admirable truth in advertising, the Citizens Committee to Test the Constitutionality of the Separate Car Law formed and went to work to rid New Orleans of this government micromanagement. Forgotten in the long sweep of history, the Committee (acting through the Pullman Company, one of America’s largest manufacturers at the time) actually won its first case at the Louisiana Supreme Court, which ruled that any state law requiring separate accommodations in interstate travel violated the U.S. Constitution (specifically, Article I’s grant of power to Congress alone to regulate interstate commerce). The Committee then sought to invalidate application of the same law to train travel within Louisiana as a violation of the Fourteenth Amendment. With coordination between the various actors involved, Homer Plessy (a man with seven “white” and one “black” great-grandparents) purchased and used a seat in the state-law-required “white” section of a train that the train company wanted to sell him; the organizers then ensured that a state official knew he was there, was informed of his racial composition, and would willingly arrest Mr. Plessy to create the test case the Committee wanted. It is known to us as Plessy v. Ferguson.[1] This time, though, things didn’t go as planned: the trial court ruled the statute enforceable and the Louisiana Supreme Court upheld its application to Mr. Plessy. The Supreme Court of the United States accepted the case, bringing the national spotlight onto this specific challenge to the constitutionality of the states’ racial-caste-enforcing laws. In 1896, over the noteworthy, highly-praised, sole dissent of Justice John Marshall Harlan, the Supreme Court agreed that, due to its language requiring “equal, but separate” accommodations for the races (and without ever really considering whether the accommodations provided actually were “equal”), the separate car statute was consistent with the U.S. Constitution; the Justices added that the Fourteenth Amendment was not intended “to abolish distinctions based upon color, or to enforce social … equality … of the two races.”

For decades, the Plessy ruling was treated as the federal government’s seal of approval for the continuation of Jim Crow.

Killing Jim Crow

Throughout those decades, African Americans (and conscientious whites) continued to object to American law treating races differently as profoundly unjust. And they had ample opportunities to note the intensity of the injustice. A sampling (neither comprehensive, nor fully indicative of the scope) would include: Woodrow Wilson’s segregation of the federal work force, the resurgence of lynchings following the 1915 rebirth of the Ku Klux Klan (itself an outgrowth of the popularity of Birth of a Nation, the intensely racist film that Woodrow Wilson made the first ever screened at the White House), and the spate of anti-black race riots surrounding America’s participation in World War I.

For the flavor of those riots, consider the fate of the African American community living in the Greenwood section of Tulsa, Oklahoma. In the spring of 1921, Greenwood’s professional class had done so well that it became known as “Negro Wall Street” or “Black Wall Street.” On the evening of May 31, 1921, a mob gathered at the Tulsa jail and demanded that an African American man accused of attempting to assault a white woman be handed over to them. When African Americans, including World War I veterans, came to the jail in order to prevent a lynching, shots were fired and a riot began. Over the next 12 hours, at least three hundred African Americans were killed. In addition, 21 churches, 21 restaurants, 30 grocery stores, two movie theaters, a hospital, a bank, a post office, libraries, schools, law offices, a half dozen private airplanes, and a bus system were utterly destroyed. The Tulsa race riot (perhaps better styled a pogrom, given the active participation of the national guard in these events) has been called “the single worst incident of racial violence in American history.”[2]

But that is far from the whole story of these years. What are today described as Historically Black Colleges and Universities graduated generations of students, who went on to live productive lives and better their communities (whether racially defined or not). They saw the rise of the Harlem Renaissance, where African American luminaries like Duke Ellington, Langston Hughes, and Zora Neale Hurston acquired followings across the larger population and, indeed, the world. The Negro Leagues demonstrated through the national pastime that the athletic (and business) skills of African Americans were equal to those of any others;[3] the leagues developed into some of the largest black-owned businesses in the country and developed fan-followings across America. Eventually, these years saw Jackie Robinson, one of the Negro Leagues’ brightest stars, sign a contract with the Brooklyn Dodgers in 1945 and “break the color barrier” in 1947 as the first black Major Leaguer since Cap Anson successfully pushed for their exclusion in the 1880s.[4] He would be: (a) named Major League Baseball’s Rookie of the Year in 1947; (b) voted the National League MVP in 1949; and (c) voted by fans as an All Star six (6) times (spanning each of the years from 1949-1954). Robinson also led the Dodgers to the World Series in four (4) of those six (6) years.

For the main plot of our story, though, the most important reaction to the violence of Tulsa (and elsewhere)[5] was the “newfound sense of determination” that “emerged” to confront it.[6] Setting aside the philosophical debate that raged across the African American community over the broader period on the best way to advance the prospects of those most impacted by these laws,[7] the National Association for the Advancement of Colored People (the “NAACP”) began to plan new strategies to defeat Jim Crow.[8] The initial architect of this challenge was Charles Hamilton Houston, who joined the NAACP and developed and implemented the framework of its legal strategy after graduating from Harvard Law School in 1922, the year following the Tulsa race riot.[9]

Between its founding in 1940 under the leadership of Houston disciple Thurgood Marshall[10] and 1955, the NAACP Legal Defense and Education Fund brought a series of cases designed to undermine Plessy. Houston had believed from the outset that unequal education was the Achilles’ heel of Jim Crow, and the LDF targeted that weak spot.

The culmination of these cases came with a challenge to the segregated public schools operated by Topeka, Kansas. While schools were racially segregated in many places, the LDF specifically chose to bring its signature case against the Topeka Board of Education precisely because Kansas was not Southern, had no history of slavery, and institutionally praised John Brown;[11] the case highlighted that its issues were national, not regional, in scope.[12]

LDF, through Marshall and Jack Greenberg, convinced the Supreme Court to reverse Plessy and declare Topeka’s school system unconstitutional. On May 17, 1954, Chief Justice Earl Warren handed down the unanimous opinion of the Court. Thanks to months of wrangling and negotiation over the final opinion, there were no dissents and no concurrences. With a single voice the Supreme Court proclaimed that:

…in the field of public education the doctrine of “separate but equal” has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.

These sweeping tones are why the decision holds the place it does in our collective imagination. They are why Brown is remembered as the end of legal segregation. They are why Brown is the most revered precedent in American jurisprudence.

One might have thought that they would mean an immediate end to all race-based public educational systems (and, indeed, to all segregation by law in American life). Indeed, as Justice Marshall told his biographer Dennis Hutchison in 1979, he thought just that: “the biggest mistake [I] made was assuming that once Jim Crow was deconstitutionalized, the whole structure would collapse – ‘like pounding a stake in Dracula’s heart[.]’”

But that was not to be. For the Court to get to unanimity, the Justices needed to avoid ruling on the remedy for the violation they could jointly agree to identify. So they asked the parties to return and reargue the question of what to do about it the following year. When they again addressed the Brown case, the Supreme Court reiterated its ruling on the merits from 1954, but as to what to do about it, ordered nothing more than that the states “make a prompt and reasonable start toward full compliance” and get around to “admit[ting children] to public schools on a racially nondiscriminatory basis with all deliberate speed.”

So the true place of Brown in the story of desegregation is best reflected in Justice Marshall’s words (again, to Dennis Hutchison in 1979): “…[i]n the twelve months between Brown I and Brown II, [I] realized that [I] had yet to win anything….  ‘In 1954, I was delirious. What a victory!  I thought I was the smartest lawyer in the entire world. In 1955, I was shattered.  They gave us nothing and then told us to work for it. I thought I was the dumbest Negro in the United States.’”

Of course, Justice Marshall was far from dumb, however he felt in 1955.  But actual integration didn’t come from Brown. That would have to wait for action by Congress, cajoling by a President, and the slow development of the cultural facts-on-the-ground arising from generations of white American children growing up wanting to be like, rooting for, and seeing the equal worth in men like Duke Ellington, Langston Hughes, Jackie Robinson, and Larry Doby.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] In the terminology of the day, Mr. Ferguson was a “Carpetbagger.”  A native of Massachusetts who had married into a prominent abolitionist family, Mr. Ferguson studied law in Boston before moving to New Orleans in 1865.  He was the same judge who, at the trial court level, had ruled that Louisiana’s separate cars act could not be constitutionally applied to interstate travel.  Since Plessy’s prosecution also was initially conducted in Mr. Ferguson’s courtroom, he became the named defendant, despite his own apparent feelings about the propriety of the law.

[2] All Deliberate Speed: Reflections on the First Half-Century of Brown v. Board of Education, by Charles J. Ogletree, Jr. W.W. Norton & Company (2004).

[3] In 1936, Jesse Owens did the same on an amateur basis at the Berlin Olympics.

[4] Weeks later, Larry Doby became the first black American League player ever (the AL had not existed in the 1880s).

[5] There were parallel riots in Omaha and Chicago in 1919.

[6] See, All Deliberate Speed, in Fn. 2, above.

[7] The author recommends delving into this debate.  Worthy contributions the reader might consider include: (a) Booker T. Washington’s 1895 Address to Atlanta’s Cotton States and International Exposition; and (b) W.E.B. Du Bois’s The Souls of Black Folk.

[8]  See, All Deliberate Speed, in Fn. 2, above.

[9] Houston was the first African American elected to the Harvard Law Review and has been called “the man who killed Jim Crow.”

[10] Later a Justice of the U.S. Supreme Court himself, Justice Marshall was instrumental in the NAACP’s choice of legal strategies.  But LDF was not a one-man shop.  Houston had personally recruited Marshall and Oliver Hill, the first- and second-ranked students in the Law School Class of 1933 at Howard University – itself a historically black institution founded during Reconstruction – to fight these legal battles.  Later, Jack Greenberg was Marshall’s Assistant Counsel and hand-chosen successor to lead the LDF.

[11] The Kansas State Capitol, in Topeka, has featured John Brown as a founding hero since the 1930s.

[12] This was all the more true when the case was argued before the Supreme Court, because the Supreme Court had consolidated Brown for argument with other cases from across the nation.  Those cases were Briggs v. Elliot (from South Carolina), Davis v. County School Board of Prince Edward County (from Virginia), Belton (Bulah) v. Gebhart (from Delaware), and Bolling v. Sharpe (District of Columbia).

Juneteenth’s a celebration of Liberation Day
When word of emancipation reached Texas slaves they say.
In sorrow were we brought here to till a harvest land.
We lived and died and fought here
‘Til freedom was at hand.

They tore apart our families
They stole life’s nascent breath.
Turned women into mammies
And worked our men to death.

They shamed the very nation
Which fostered freedom’s birth
It died on the plantation
Denying man his worth.

But greed and misplaced honor
Brought crisis to a head
And Justice felt upon her
The weight of Union Dead.

They fought to save a nation.
And yet they saved its soul
From moral condemnation
And made the country whole.

But when the war was waning
And the battle was in doubt,
The soldiers were complaining
And many dropping out.

There seemed but one solution
Which might yet save the day.
Although its execution
Loomed several months away.

The Congress was divided.
The Cabinet as well.
Abe did his best to hide it.
And no one did he tell.

He meant to sign an order
To deal the South a blow.
The Mason Dixon border
And the Rebel states below

Would now have to contend with
The Freedman on their land.
For slavery had endeth
For woman, child and man.

The time 18 and 63
The first day of the year.
But June of 65 would be
The time we would hold dear.

For that would be when Freedom’s thought
First saw full light of day.
And justified why men had fought
And died along the way.

Now every June we celebrate
What Lincoln had in mind
The day he did emancipate
The bonds of all mankind.

Copyright All rights reserved

Noah Griffin, America 250 Commissioner, is a lifelong student of history and is founder and artistic director of the Cole Porter Society.


Guest Essayist: The Honorable Don Ritter

Little focused the public’s mind in the early 1950s like the atom bomb and the potential for vast death and destruction in the event of nuclear war with the Soviet Union. Who can forget the classroom drills where students dropped to the floor and hid under their desks ostensibly to reduce exposure to an exploding atomic bomb? It was a prevailing subject of discussion amongst average people as well as elites in government, media and the arts.

The Soviet Union had attained “the bomb” in 1949, four years after Hiroshima and Nagasaki. With the atom bomb at its disposal, the leadership of the Soviet Union was likely emboldened to accelerate its deeply felt ideological imperative to spread communism opportunistically. Getting an A-bomb brought a military parity with the United States that greatly reduced the threat of nuclear retaliation against its superior land armies in the event of an East-West military confrontation. The blatant 1950 invasion of U.S.-backed South Korea by communist North Korea, with total Soviet political and logistical commitment and, indeed, encouragement, was likely an outcome of the Soviets possessing the atomic bomb.

In January of 1950, British intelligence, acting on information provided by the FBI, arrested the German-born, British-educated atomic scientist and British citizen Klaus Fuchs, who was spying for the Soviet Union. Fuchs had worked at the very highest level at Los Alamos on the American project to develop an atom bomb and was passing secrets to American Communist Party members who were also spying for the Soviet Union. He admitted his espionage and provided names of his American collaborators at Los Alamos. Those connections led to the arrest of Julius Rosenberg in June of 1950 on suspicion of espionage and, two months later, his wife Ethel on the same charge.

Julius Rosenberg, an electrical engineer, and his wife Ethel were dedicated members of the Communist Party USA and had been working for years for Soviet Military Intelligence (GRU), delivering secret American work on advanced weaponry such as radar detection, jet engines and guided missiles. In hindsight, that information probably exceeded the value of the atomic secrets given to the Soviet Union, although the consensus is that the Rosenbergs’ bomb design information confirmed the direction of Soviet bomb development. Ethel Rosenberg’s brother, David Greenglass, was working at Los Alamos, and evidence brought to light over the years strongly suggests that Ethel was the one who recruited her brother to provide atom bomb design secrets to her husband, and that she worked hand-in-glove with him in his espionage activities.

The Rosenbergs, never admitting their crimes, were tried and convicted on the charge of “conspiracy to commit espionage” and sentenced to death. They professed their innocence until the very end, when in June 1953 they were electrocuted at Sing Sing prison.

Politically, there was another narrative unfolding. The political Left in the United States and worldwide strongly maintained the Rosenbergs’ innocence, reminiscent of its support for former State Department official Alger Hiss, who was tried in 1949 and convicted in 1950 of perjury rather than espionage, as the statute of limitations on espionage had expired. The world-renowned Marxist intellectual Jean-Paul Sartre called the Rosenberg trial a “legal lynching.” On execution day, a demonstration of several hundred gathered outside Sing Sing to pay their last respects. For decades to follow, the Rosenbergs’ innocence remained a rallying cry of the political Left.

Leaders on the political and intellectual Left blamed anti-communist fervor drummed up by McCarthyism for the federal government’s pursuit of the Rosenbergs and others accused of spying for the Soviet Union. At the time, there was great sympathy on the Left for the ideals of communism and for America’s former communist ally, the Soviet Union, which had suffered great losses in WW II in defeating hated Nazi fascism. They fervently believed the Rosenbergs’ plea of innocence.

When the Venona Project, secret records of intercepted Soviet messages, was made public in the mid-1990s with unequivocal information pointing to the Rosenbergs’ guilt, the political Left’s fervor for the Rosenbergs greatly diminished. Likewise with material copied from Soviet KGB archives (the Vassiliev Notebooks) in 2009. However, some said (paraphrasing), “OK, they did it, but the U.S. government’s Cold War mentality and McCarthyism were even greater threats” (e.g., the Nation magazine, the popular revisionist historian Howard Zinn).

Since then, the Left, and not only the Left, led by the surviving sons of the Rosenbergs, has focused on the unfairness of the sentence, particularly Ethel Rosenberg’s, arguing that she should not have received the death penalty. Federal prosecutors likely hoped that such a charge would get the accused to talk, implicate others and provide insights into Soviet espionage operations. It did not. The Rosenbergs became martyrs to the Left, and as martyrs they likely served the Soviet communist cause better than serving out a prison sentence would have. Perhaps that was even their reason for professing innocence.

Debate continues to this day. But these days it is over the severity of the sentence, as just about all agree the Rosenbergs were spies for the Soviet Union. In today’s climate there would be no death sentence, but at the height of the Cold War…

However, there is absolutely no doubt that they betrayed America by spying for the Soviet Union at a time of great peril to America and the world.

Don Ritter is President and CEO Emeritus (after serving eight years in that role) of the Afghan American Chamber of Commerce (AACC) and a 15-year founding member of the Board of Directors. Since 9-11, 2001, he has worked full time on Afghanistan and has been back to the country more than 40 times. He has a 38-year history in Afghanistan.

Ritter holds a B.S. in Metallurgical Engineering from Lehigh University and a Masters and Doctorate from MIT in physical-mechanical metallurgy. After MIT, where his hobby was Russian language and culture, he was a NAS-Soviet Academy of Sciences Exchange Fellow in the Soviet Union in the Brezhnev era for one year doing research at the Baikov Institute for Physical Metallurgy on high temperature materials. He speaks fluent Russian (and French), is a graduate of the Bronx High School of Science and recipient of numerous awards from scientific and technical societies and human rights organizations.

After returning from Russia in 1968, he spent a year teaching at California State Polytechnic University, Pomona, where he was also a contract consultant to General Dynamics in their solid-state physics department. He then returned, as a member of the faculty and administration, to his alma mater, Lehigh University. At Lehigh, in addition to his teaching, research and industry consulting, Dr. Ritter was instrumental in creating a university-wide program linking disciplines of science and engineering to the social sciences and humanities with the hope of furthering understanding of the role of technology in society.

After 10 years at Lehigh, Dr. Ritter represented Pennsylvania’s 15th district, the “Lehigh Valley,” from 1979 to 1993 in the U.S. House of Representatives, where he served on the Science and Technology and Energy and Commerce Committees. Ritter’s main mission as a ‘scientist congressman’ was to work closely with the science, engineering and related industry communities to bring a greater science-based perspective to the legislative, regulatory and political processes.

In Congress, as ranking member on the Congressional Helsinki Commission, he fought for liberty and human rights in the former Soviet Union. The Commission was Ritter’s platform to gather congressional support to the Afghan resistance to the Soviet invasion and occupation during the 1980s. Ritter was author of the “Material Assistance” legislation and founder and House-side Chairman of the “Congressional Task Force on Afghanistan.”

Dr. Ritter continued his effort in the 1990s after Congress as founder and Chairman of the Washington, DC-based Afghanistan Foundation. In 2003, as creator of a six million-dollar USAID-funded initiative, he served as Senior Advisor to AACC in the creation of the first independent, free-market oriented Chamber of Commerce in the history of the country. Dr. Ritter presently is part of AACC’s seminal role in assisting the development of the Afghan market economy to bring stability and prosperity to Afghanistan. He is also a businessman and investor in Afghanistan.


Guest Essayist: The Honorable Don Ritter

World War II ended in 1945, but the ideological imperative of Soviet communism’s expansion did not. By 1950, the Soviet Union (USSR) had solidified its empire by conquest and subversion in all of Central and Eastern Europe. But to Stalin & Co., there were other big fish to fry. At the Yalta Conference in February 1945 between Stalin, Roosevelt and Churchill, the USSR was asked to participate in ending the war in the Pacific against Japan. Even though Japan’s defeat was not in doubt, the atom bomb would not be tested until July, and it was not yet known to our war planners whether it would work.

An invasion of Japan’s home islands was thought to mean huge American and allied casualties, perhaps half a million, a conclusion reached given the tenacity with which Japanese soldiers had defended islands like Iwo Jima and Okinawa. So much blood was yet to be spilled… they were fighting to the death. The Soviet Red Army, so often oblivious to casualties in its onslaught against Nazi Germany, would share in the burden of an invasion of Japan.

Japan had controlled Manchuria (as the puppet state of Manchukuo).  The Korean peninsula had been dominated by Japan historically and was formally annexed early in the 20th century. Islands taken from Czarist Russia in the Russo-Japanese War of 1905 were also in play.

Stalin and the communist USSR’s presence at the very end of the war in Asia was solidified at Yalta, and that is how they came to create a communist North Korea.

Fast forward to April of 1950: Kim Il Sung traveled to Moscow to discuss how communist North Korea might take South Korea and unify the peninsula under communist rule for the communist world. South Korea, or the Republic of Korea (ROK), was dependent on the United States. The non-communist ROK was in the midst of the usual chaos of establishing a democracy, an economy, and a new country. Its military was far from ready. Neither was that of the U.S.

Kim and Stalin concluded that the South was weak and ripe, a new realm to add to their communist world. Stalin gave Kim the go-ahead to invade and pledged full Soviet support. Vast quantities of supplies, artillery and tanks would be provided to the Army of North Korea for a full-fledged attack on the South. MIG-15 fighter aircraft flown by Soviet pilots masquerading as Koreans would be added. Close by was Communist China, for whom the USSR had done yeoman service in its taking of power. That was one large insurance policy should things go wrong.

On June 25, 1950, a North Korean blitzkrieg came thundering down on South Korea. Closely spaced large artillery firing massive barrages, followed by tanks and troops, a tactic perfected in the Red Army’s battles with the Nazis, wreaked havoc on the overpowered South Korean forces. Communist partisans who had infiltrated the South joined the fray against the ROK. The situation was dire, as it looked like the ROK would collapse.

President Harry Truman decided that an expansionist Soviet communist victory in Korea was not only unacceptable but would not stop there. He committed the U.S. to fight back, and fight back we did. In July of 1950, the Americans got support from the UN Security Council to form a UN Command (UNC) under U.S. leadership. As many as 70 countries would get involved eventually, but U.S. troops bore the brunt of the war, with Great Britain and Commonwealth troops a very distant second.

It is contested to this day why the USSR under Stalin was not there at the Security Council session to veto the engagement of the UN, with the U.S. leading the charge. The Soviets had walked out in January and did not return until August. Was it a grand mistake, or did Stalin want to embroil America in a war in Asia so he could more easily deal with his new and possibly expanding empire in Europe? Were the Soviets so confident of a major victory, one that would embarrass the U.S. and signal to others that America would be weakened by a defeat in Korea, and thus be unable to lead the non-communist world?

At a time when ROK and U.S. troops were reeling backwards, when the communist North had taken Seoul, the capital of the country, and much more, Supreme UN Commander General Douglas MacArthur had a plan for a surprise attack. He would strike at Inchon, a port twenty-five miles from Seoul, using the American 1st Marine Division as the spearhead of an amphibious operation landing troops, tanks and artillery. That put UNC troops north of the North Korean forces, in a position to sever the enemy’s supply lines and inflict severe damage on their armies. Seoul was retaken. The bold Inchon landing changed the course of the Korean War and put America back on offense.

MacArthur rapidly led the UNC all the way to the Yalu River bordering China, but when Communist China entered the war, everything changed. MacArthur had over-extended his own supply lines and apparently had not fully considered the potential for a military disaster if China entered the war. The Chinese People’s Liberation Army (PLA) counterattacked. MacArthur was sacked by Truman. There was a debate in the Truman administration over the use of nuclear weapons to counter the Chinese incursion.

Overwhelming numbers of Chinese forces, employing sophisticated tactics and a willingness to take huge casualties, pushed the mostly American troops back to the original dividing line between north and south, the 38th parallel (38 degrees latitude)… which, ironically, after two more years of deadly stalemate, is where we and our South Korean allies stand today.

Looking back, airpower was our ace in the hole and a great equalizer, given the disparity in ground troops. B-29 Superfortresses blasted targets in the north incessantly. Jet fighters like the legendary F-86 Sabre dominated the Soviet MiG-15s. But if you discount nuclear weapons, wars are won by troops on the ground, and on the ground we ended up where we started.

33,000 Americans died in combat. Other UNC countries lost about 7,000; South Korea, 134,000; North Korea, 213,000. The Chinese lost an estimated 400,000 troops in combat! Civilians all told: 2.7 million, a staggering number.

The Korean War ended in 1953, when Dwight D. Eisenhower was the U.S. President. South Korea has evolved from a nation of rice paddies to a modern industrial power with strong democratic institutions and world-class living standards. North Korea, under communist dictatorship, is one of the poorest and most repressive nations on earth, yet it develops nuclear weapons. China, still a communist dictatorship but having adopted capitalist economic principles, has surged in its economic and military development to become a great power with the capacity to threaten the peace in Asia and beyond.

Communist expansion was halted by a hot war in Korea from 1950 to 1953 but the Cold War continued with no letup.

A question for the reader: What would the world be like if America and its allies had lost the war in Korea?



Guest Essayist: The Honorable Don Ritter

When Time Magazine was at its heyday as the dominant ‘last word’ in American media, Whittaker Chambers was, over a ten-year period, its greatest writer and editor. He was a founding editor of National Review along with William F. Buckley. He received the Presidential Medal of Freedom posthumously from President Ronald Reagan in 1984. His memoir, Witness, is an American classic.

But all that was a vastly different world from his earlier life as a card-carrying member of the Communist Party in the 1920s and spy for Soviet Military Intelligence (GRU) in the 1930s.

We recognize Chambers today for his damning testimony in the Alger Hiss congressional investigations and spy trials of 1948-50, and for a trove of documents called the Pumpkin Papers.

Alger Hiss came from wealth and was a member of the privileged class. He attended Harvard Law School and was upwardly mobile in the State Department, reaching high-ranking positions with access to extremely sensitive information. He was an organizer of the Yalta Conference between Stalin, Roosevelt and Churchill. He helped create the United Nations, and in 1949 he was President of the prestigious Carnegie Endowment for International Peace.

In Congress in 1948, based on FBI information, a number of Americans were being investigated for spying for the Soviet Union, dating back to the early 1930s and during WW II, particularly in the United States Department of State. These were astonishing accusations at the time. When an American spy for the Soviets, Elizabeth Bentley, defected and accused Alger Hiss and a substantial group of U.S. government officials in the administration of Franklin Roosevelt of spying for the Soviet Union, Hiss vehemently denied the charges. Handsome and sophisticated, Hiss was for a lifetime well-connected, well-respected and well-spoken. He made an extremely credible witness before the House Un-American Activities Committee. Plus, most public figures in media, entertainment and academe came to his defense.

Whittaker Chambers, by then a successful editor at Time, was subpoenaed to testify before HUAC; he did so reluctantly, fearing retribution by the GRU. He accused Hiss of secretly being a communist and of passing secret documents to him for transfer to Soviet intelligence. He testified that he and Hiss had been together on several occasions. Hiss denied it. Chambers was a product of humble beginnings and divorced parents; his brother committed suicide at 22, and Chambers himself was accused of having psychological problems. All this was prequel to his adoption – “something to live for and something to die for” – of the communist cause. His appearance, dress, voice and demeanor, no less his stinging message, were considered less than attractive. The comparison to the impression Hiss made was stark, and Chambers was demeaned and derided by Hiss’ supporters.

Then came the trial in 1949. During the pre-trial discovery period, Chambers eventually released large quantities of microfilm he had kept hidden as insurance against any GRU reprisal, including murder. Eliminating defectors was not uncommon in GRU practice then… and unfortunately persists to this day.

A then little-known Member of Congress and member of HUAC, one Richard Nixon, had gained access to the content of Chambers’ secret documents and adamantly pursued the case before the Grand Jury. Nixon at first refused to give the actual evidence to the Grand Jury but later relented. Two HUAC investigators went to Chambers’ farm in Westminster, Maryland, and from there, guided by Chambers, to his garden. There, in a capped and hollowed-out orange gourd (not a pumpkin!), were the famous “Pumpkin Papers.” Contained in the gourd were hundreds of documents on microfilm, including four hand-written pages by Hiss, implicating him in spying for the Soviet Union.

Hiss was tried and convicted of perjury, as the statute of limitations on espionage had by then run out. He was sentenced to two five-year terms and ended up serving three and a half years total in federal prison.

Many on the political Left refused to believe that Alger Hiss was guilty, and to this day there are some who still support him. However, the Venona Papers, released by the U.S. National Security Agency in 1995, which contained intelligence intercepts from the Soviet Union during Hiss’ time as a Soviet spy, showed conclusively that Hiss was indeed a Soviet spy. The U.S. government at the highest levels knew all along that Hiss was a spy, but in order to keep the Venona Project a secret and to keep gathering intelligence from the Soviet Union during the nuclear standoff and the Cold War, it could not divulge publicly what it knew.

Alger Hiss died at the ripe old age of 92, Whittaker Chambers at the relatively young age of 61. Many believe that stress from his life as a spy, and later the pervasive and abusive criticism he endured, weakened his heart and led to his early death.

The Hiss case is seminal in the history of the Cold War and its impact on America because it led to a taking of sides politically, a surge in anti-communism on the Right and a reaction against anti-communism on the Left. At the epicenter of the saga is Whittaker Chambers.

Author’s Postscript:

To me, this is really the story of Whittaker Chambers, whose brilliance as a thinker and as a writer did more to unearth and define the destructive nature of communism than that of any other American of his time. His memoir, Witness, a best-seller published in 1952, is one of the most enlightening works of non-fiction one can read. It reflects a personal American journey through a dysfunctional family background and depressed economic times, when communism and Soviet espionage were ascendant, making the book both an educational experience and a page-turning thriller. In Witness, as a former Soviet spy who became disillusioned with communism’s murder and lies, Chambers intellectually and spiritually defined its tyranny and economic incompetence in a way that previously only those who had experienced it personally could understand. It gave millions of Americans vital insight into the terrible and insidious practices of communism.

Don Ritter, Sc.D., served in the United States House of Representatives for the 15th Congressional District of Pennsylvania. As founder of the Afghanistan-American Foundation, he was senior advisor to the Afghan-American Chamber of Commerce (AACC) and the Afghan International Chamber of Commerce (AICC). Congressman Ritter currently serves as president and CEO of the Afghan-American Chamber of Commerce. He holds a B.S. in Metallurgical Engineering from Lehigh University and a M.S. and Sc. D. (Doctorate) from the Massachusetts Institute of Technology, M.I.T, in Physical Metallurgy and Materials Science. For more information about the work of Congressman Don Ritter, visit

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: The Honorable Don Ritter

It was a time when history hung in the balance. The outcome of a struggle between free and controlled peoples – democratic versus totalitarian rule – was at stake.

Here’s the grim picture in early 1948. Having fought for four years against the Nazis in history’s biggest and bloodiest battles, victorious Soviet communist armies have thrown the Germans back across all of Eastern and Central Europe, and millions of Soviet troops are either occupying their ‘liberated’ lands or have installed oppressive communist governments. Soviet army and civilian losses in WW II are unimaginable: soldiers killed number around 10 million, perhaps 20 million when civilians are included. Josef Stalin, the murderous Soviet communist dictator, is dead set on not giving up one inch.

Czechoslovakia has just succumbed to communist control in February under heavy Soviet pressure. Poland fell to the communists back in 1946 when Stalin, reneging on the promise of free elections he made to American President Roosevelt and British Prime Minister Churchill at Yalta, instead installed a Soviet puppet government while systematically eradicating the Polish opposition. Churchill had delivered his public-awakening “Iron Curtain” speech two years earlier. The major Allies, America, Great Britain and France, are extremely worried about Stalin and the Red Army’s next moves.

Under agreements between the Soviet Union and the Allies – the Americans, British and French – Germany is divided into four zones, each controlled by one of the four countries. The Allies control the western half and the Soviet Union (USSR) the eastern. Berlin itself, once the proud capital of Germany, is now a wasteland of rubble, poverty and hunger after city-shattering house-to-house combat between Nazi and Soviet soldiers. There is barely a building left standing, and hardly any men left in the city; they have either been killed in battle or taken prisoner by the Red Army. Berlin, a hundred miles inside the Soviet-controlled zone in eastern Germany, is likewise divided between the Allies and the USSR.

That’s the setting for what is to take place next in the pivotal June of 1948.

The Allies had for some time decided that a democratic, western-oriented Germany would be the best defense against further Soviet communist expansion westward. Germany, in a short period of time, had made substantial progress towards democratization and rebuilding. This unnerved Stalin, who had all along planned for a united Germany in the communist orbit, and the Soviets were gradually increasing pressure on transport in and out of Berlin.

The Allies announced on June 1, 1948, the addition of the French zone to the already unified British and American zones. Then, on June 18, the Allies announced the creation of a common currency, the Deutschmark, to stimulate economic recovery across the three Allied zones. Stalin and the Soviet leadership, seeing in these actions the potential for a new, vital, non-communist western Germany, decided on June 24 to blockade Berlin’s rails, roads and canals to choke off what had become a western-allied West Germany and West Berlin.

Stalin’s chess move was to starve the citizens of the city by cutting off their food supply, their electricity, and their coal to heat homes, power remaining factories and rebuild. His plan also was to make it difficult to resupply allied military forces. This was a bold move to grab West Berlin for the communists. Indeed, there were some Americans and others who felt that Germany, because of its crimes against humanity, should never again be allowed to be an industrial nation and that we shouldn’t stand up for Berlin. But that opinion did not hold sway with President Truman.

What Stalin and the Soviet communists didn’t count on was the creativity, ingenuity, perseverance and capacity of America and its allies.

Even though America had nuclear weapons at the time and the Soviet Union did not, America had largely demobilized after the war. So, rather than fight the Red Army, firmly dug in with vast quantities of men, artillery and tanks in eastern Germany, and risk another world war, the blockade would be countered by an airlift. The greatest airlift of all time. Food, supplies and coal would be transported to the people of Berlin, mainly on American C-54s flown by American, British, French and other allied pilots. But only America had the numbers of aircraft, the amount of fuel and the logistical resources to actually do what looked to Stalin and the Soviets to be impossible.

One can only imagine the enormity of the round-the-clock activity. Nearly 300,000 flights were made from June 24, 1948, until September 30, 1949. At the height of the airlift, flights were coming in every 30 seconds. It was a truly amazing logistical achievement to work up to the delivery of some three and a half thousand tons daily to meet the city’s needs. Think of the energy and dedication of the pilots and mechanics, those involved in the supply chains and the demanding delivery schedules… the sheer complexity of such an operation is mind-boggling.

Stalin, seeing the extent of Allied perseverance and capability over a year’s time and meanwhile, suffering an enormous propaganda defeat worldwide, relented.

Think of the Americans who led this history-making endeavor, all the men and women, from the generals to the soldiers, airmen and civilians, and their achievement on behalf of creating a free and prosperous Germany. A free Germany that sat side-by-side in stark contrast with the brutal communist east. To them, known as “the greatest generation,” we owe our everlasting gratitude for victory in this monumental first ‘battle’ of the Cold War.

Guest Essayist: Tony Williams

The fall of 1939 saw dramatic changes in world events that would alter the course of history. On September 1, 1939, Nazi Germany invaded Poland, triggering the start of World War II in Europe, though imperial Japan had already been ravaging Manchuria and China for nearly a decade. Even though the United States was officially neutral in the world war, President Franklin Roosevelt had an important meeting in mid-October.

Roosevelt met with his friend, Alexander Sachs, who hand-delivered a letter from scientists Albert Einstein and Leo Szilard. They informed the president that German scientists had recently discovered fission and might possibly be able to build a nuclear bomb. The warning prompted Roosevelt to initiate research into the subject and beat the Nazis.

The United States entered the war after Japan bombed Pearl Harbor on December 7, 1941, and the Roosevelt administration began the highly secretive Manhattan Project in October 1942. The project had facilities in far-flung places and employed the efforts of more than half a million Americans across the country. The weapons research laboratory resided in Los Alamos, New Mexico, under the direction of J. Robert Oppenheimer.

As work progressed on a nuclear weapon, the United States waged a global war in the Pacific, North Africa, and Europe. The Pacific witnessed a particularly brutal war against Japan. After the Battle of Midway in June 1942, the Americans launched an “island-hopping” campaign. They were forced to eliminate a tenacious and dug-in enemy in heavy jungles in distant places like Guadalcanal. The Japanese forces gained a reputation for suicidal banzai charges and fighting to the last man.

By late 1944, the United States was closing in on Japan and invaded the Philippines. The U.S. Navy won the Battle of Leyte Gulf, but the Japanese desperately launched kamikaze attacks that inflicted a heavy toll, sinking and damaging several ships and causing thousands of American casualties. The nature of the attacks helped confirm the American belief that they were fighting a fanatical enemy.

The battles of Iwo Jima and Okinawa greatly shaped American views of Japanese barbarism. Iwo Jima was a key island, providing airstrips for bombing Japan and protection for U.S. naval assets as they built up for the invasion of Japan. On February 19, 1945, the Fourth and Fifth Marine Divisions landed mid-island after a massive preparatory bombardment. After a dreadful slog against an entrenched enemy, the Marines took Mt. Suribachi and famously raised an American flag on its heights.

The worst was yet to come against the nearly 22,000-man garrison in a complex network of tunnels. The brutal fighting was often hand-to-hand. The Americans fought for each yard of territory by using grenades, satchel charges, and flamethrowers to attack pillboxes. The Japanese fought fiercely and sent waves of hundreds of men in banzai charges. The Marines and Navy lost 7,000 dead and nearly one-third of the Marines who fought on the island were casualties. Almost all the defenders perished.

The battle for Okinawa was just as bloody. Two Marine and two Army divisions landed unopposed on Okinawa on April 1 after another relatively ineffective bombardment and quickly seized two airfields. The Japanese built nearly impregnable lines of defense, but none was stronger than the southern Shuri line of fortresses where 97,000 defenders awaited.

The Marines and soldiers attacked in several frontal assaults and were ground up by mine fields, grenades, and pre-sighted machine-guns and artillery covering every inch. For their part, the Japanese launched several fruitless attacks that bled them dry. The war of attrition finally ended with 13,000 Americans dead and 44,000 wounded. On the Japanese side, more than 70,000 soldiers and tens of thousands of Okinawan civilians were killed. The naval battle in the waters surrounding the island witnessed kamikaze and bombing attacks that sank 28 U.S. ships and damaged an additional 240.

Okinawa was an essential staging area for the invasion of Japan and additional proof of the fanatical nature of the enemy. Admiral Chester Nimitz, General Douglas MacArthur, and the members of the Joint Chiefs of Staff were planning Operation Downfall—the invasion of Japan—beginning with Operation Olympic in southern Japan in the fall of 1945 with fourteen divisions and twenty-eight aircraft carriers, followed by Operation Coronet in central Japan in early 1946.

While the U.S. naval blockade and aerial bombing of Japan were very successful in grinding down the enemy war machine, Japanese resistance was going to be even stronger and more fanatical than at Iwo Jima and Okinawa. The American planners expected to fight a horrific war against the Japanese forces, kamikaze attacks, and a militarized civilian population. Indeed, the Japanese reinforced Kyushu with thirteen divisions of 450,000 entrenched men by the end of July and had an estimated 10,000 aircraft at their disposal. Japan was committed to a decisive final battle defending its home. Among U.S. military commanders, only MacArthur underestimated the difficulty of the invasion, as he was wont to do.

Harry Truman succeeded to the presidency when Roosevelt died on April 12, 1945. Besides the burdens of command decisions in fighting the war to a conclusion, holding together a fracturing alliance with the Soviets, and shaping the postwar world, Truman learned about the Manhattan Project.

While some of the scientists who worked on the project expressed grave concerns about the use of the atomic bomb, most decision-makers expected that it would be used if it were ready. Secretary of War Henry Stimson headed the Interim Committee that considered the use of the bomb. The committee rejected the idea of a demonstration or a formal warning to the Japanese, fearing a failed demonstration would only strengthen Japanese resolve.

On the morning of July 16, the “gadget” nuclear device was successfully exploded at Alamogordo, New Mexico. The test was code-named “Trinity,” and word was immediately sent to President Truman, then at the Potsdam Conference negotiating the postwar world. He was ecstatic and tried to use the news to impress Stalin, who received it impassively because he had several spies inside the Manhattan Project. The Allies issued the Potsdam Declaration demanding unconditional surrender from Japan or “prompt and utter destruction.”

After possible targets were selected, on August 6 the B-29 bomber Enola Gay carried the uranium atomic bomb nicknamed Little Boy from Tinian Island and dropped it over Hiroshima, where the blast and resulting firestorm killed some 70,000 people and grievously injured and sickened tens of thousands of others. The Japanese government still adamantly refused to surrender.

On August 9, another B-29 dropped the plutonium bomb Fat Man over Nagasaki, a secondary target. Heavy cloud cover meant that the bomb was dropped in a valley that somewhat restricted the effect of the blast. Still, approximately 40,000 were killed. The dropping of the second atomic bomb and the simultaneous invasion of Manchukuo by the Soviet Union led Emperor Hirohito to announce Japan’s surrender on August 15. The formal surrender took place aboard the USS Missouri on September 2.

General MacArthur closed the ceremony with a moving speech in which he said,

It is my earnest hope, and indeed the hope of all mankind, that from this solemn occasion a better world shall emerge out of the blood and carnage of the past—a world founded upon faith and understanding, a world dedicated to the dignity of man and the fulfillment of his most cherished wish for freedom, tolerance, and justice…. Let us pray that peace now be restored to the world, and that God will preserve it always. These proceedings are closed.

World War II had ended, but the Cold War and the atomic age had begun.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Joshua Schmid

‘A Date Which Will Live in Infamy’

The morning of December 7, 1941 was another day in paradise for the men and women of the U.S. armed forces stationed at the Pearl Harbor Naval Base on Oahu, Hawaii. By 7:30 am, the air temperature was already a balmy 73 degrees. A sense of leisure was in the air as sailors enjoyed the time away from military duties that Sundays offered. Within the next half hour, the serenity of the island was shattered. Enemy aircraft streaked overhead, marked only by a large red circle. The pilots—who had been training for months for this mission—scanned their surroundings and set their eyes on their target: Battleship Row. The eight ships—the crown of the United States’ Pacific fleet—sat silently in harbor, much to the delight of the oncoming Japanese pilots, who began their attack.

Since the Japanese invasion of Manchuria in 1931, the relationship between the United States and Japan had significantly deteriorated. Over the course of the ensuing decade, the U.S. imposed embargoes on strategic materials such as oil and froze Japanese assets to deter the Empire of the Rising Sun’s continual aggressions in the Pacific. For many in the American political and military leadership, it became not a question of whether violent conflict would erupt between the two nations but rather when. Indeed, throughout the month of November 1941, the two military commanders at Pearl Harbor—Admiral Husband Kimmel and Lieutenant General Walter Short—received multiple warnings from Washington, D.C. that conflict with Japan somewhere in the Pacific would very soon be a reality. In response, Kimmel and Short ordered that aircraft be moved out of their hangars at Pearl Harbor and lined up on runways in order to prevent sabotage. Additionally, radar—a new technology that had not yet reached its full capabilities—began operating a few hours a day on the island of Oahu. Such a lackluster response to war warnings can largely be attributed to the fact that American intelligence expected the initial Japanese strike to fall on U.S. bases in the remote Pacific such as the Philippines or Wake Island. The logistical maneuvering it would take to carry out a large-scale attack on Pearl Harbor—nearly 4,000 miles from mainland Japan—seemed all but impossible.

Such beliefs, of course, were immediately drowned out by the wails of the air raid sirens and the repeated message, “Air raid Pearl Harbor. This is not a drill” on the morning of what turned out to be perhaps the most momentous day of the entire twentieth century. The Japanese strike force launched attacks from aircraft carriers in two waves. Torpedo and dive bombers attacked hangars and the ships anchored in the harbor while fighters provided strafing runs and air defense. In addition to the eight American battleships, a variety of cruisers, destroyers, and support ships were at Pearl Harbor.

A disaster quickly unfolded for the Americans. Many sailors had been granted leave that day given it was a Sunday. These men were not at their stations as the attack began—a fact that Japanese planners likely expected. Members of the American radar teams did in fact spot blips of a large array of aircraft before the attack. However, when they reported it to their superiors, they were told it was incoming American planes. The American aircraft that were lined up in clusters on runways to prevent sabotage now made easy targets for the Japanese strike force. Of the 402 military aircraft at Pearl Harbor and the surrounding airfields, 188 were destroyed and 159 damaged. Only a few American pilots were able to take off—those who did bravely took on the overwhelming swarm of Japanese aircraft and successfully shot a few down. Ships in the harbor valiantly attempted to get under way despite being undermanned, but with little success. The battleship Nevada attempted to lumber her way out of the narrow confines, but after she suffered multiple bomb hits her captain purposefully ran her aground to avoid blocking the harbor channel. All eight of the battleships took some form of damage, and four were sunk. In the most infamous event of the entire attack, a bomb struck the forward magazine of the battleship Arizona, causing a mass explosion that literally ripped the ship apart. Of the nearly 2,500 Americans killed in the attack on Pearl Harbor, nearly half were sailors onboard the Arizona. In addition to the battleships, a number of cruisers, destroyers, and other ships were also sunk or severely damaged. In contrast, only 29 Japanese planes were shot down during the raid. The Japanese fleet immediately departed and moved to conduct other missions against British, Dutch, and U.S. holdings in the Pacific, believing that they had achieved the great strike that would incapacitate American naval power in the Pacific for years to come.

On the morning of the attack at Pearl Harbor, the aircraft carrier U.S.S. Saratoga was in port at San Diego. The other two carriers in the Pacific fleet were also noticeably absent from Pearl Harbor when the bombs began to fall. Japanese planners thought little of it in the ensuing weeks—naval warfare theory at the time was fixated on the idea of battleships dueling each other from long range with giant guns. Without their battleships, how could the Americans hope to stop the Japanese from dominating the Pacific? However, within six months, American carriers would win a huge victory at the Battle of Midway, helping turn the tide in the Pacific in favor of the Americans and making it a carrier war.

The victory at Midway boosted the morale of an American people already hard at work since December 7, 1941, mobilizing their entire society for war in one of the greatest human efforts in history. Of the eight battleships damaged at Pearl Harbor, all but the Arizona and Oklahoma were salvaged and returned to battle before the end of the war. In addition, the U.S. produced thousands of ships between 1941 and 1945 as part of a massive new navy. In the end, rather than striking a crushing blow, the Japanese task force merely awoke a sleeping giant that eagerly sought to avenge its wounds. As for the men and women who fought and died on December 7, 1941—a date that President Franklin Roosevelt declared would “live in infamy”—they will forever be enshrined in the hearts and minds of Americans for their courage and honor on that fateful day.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.

Guest Essayist: Andrew Langer

In 1992, United States Supreme Court Justice Sandra Day O’Connor enunciated an axiomatic principle of constitutional governance, that the Constitution “protects us from our own best intentions,” dividing power precisely so that we might resist the temptation to concentrate that power as “the expedient solution to the crisis of the day.”[1] It is a sentiment that echoes through American history, as there has been a constant “push-pull” between the demands of the populace and the divisions and restrictions on power as laid out by the Constitution.

Before President Franklin Delano Roosevelt’s first term, the concept of a 100-Day agenda simply didn’t exist. But, since 1933, incoming new administrations have been measured by that arbitrary standard—what they plan on accomplishing in those first hundred days, and what they actually accomplished.

The problem, of course, is that public policy decision making should not only be a thorough and deliberative process, but in order to protect the rights of the public, must allow for significant public input. Without that deliberation, without that public participation, significant mistakes can be made. This is why policy made in a crisis is almost always bad policy—and thus Justice O’Connor’s vital warning.

FDR came into office with America in its most significant crisis since the Civil War. Nearly three and a half years into an economic disaster, nearly a quarter of the workforce was out of work, banks and businesses were failing, and millions of Americans were completely devastated and looking for real answers.

The 1932 presidential election was driven by this crisis. Incumbent President Herbert Hoover was seen as a “do-nothing” president whose efforts at stabilizing the economy through tariffs and tax increases hadn’t stemmed the economic tide of the Great Depression. FDR had built a reputation for action as governor of New York, and on the campaign trail laid out a series of ambitious plans, which he called “The New Deal,” that he intended to enact. Significant portions of this New Deal were to be enacted during those first 100 days in office.

This set a standard that later presidents would be held to: what they wanted to accomplish during those first hundred days, and how those goals might compare to the goals laid out by FDR.

At the core of those enactments was the creation of three major federal programs: the Federal Deposit Insurance Corporation, the Civilian Conservation Corps, and the National Recovery Administration. Of these three, the FDIC remains in existence today, with its mission largely unchanged: to guarantee bank customers’ deposits and, in doing so, ensure that banks aren’t forced to close because customers suddenly withdraw all their money and close their accounts.

This had happened with great frequency following the stock market crash of 1929—and such panicked activity was known, popularly, as a “bank run.”[2]

FDR was inaugurated on March 4, 1933. On March 6, he closed the entire American banking system by declaring a national bank holiday. Three days later, on March 9, Congress passed the Emergency Banking Act, which allowed sound banks to reopen under federal supervision; the FDIC itself was created by the Banking Act of 1933, signed that June at the close of the hundred days. On Sunday, March 12, FDR gave the first of his “fireside chats,” assuring the nation that when the banks re-opened the following day, the federal government would be protecting Americans’ money.

But there were massive questions over the constitutionality of much of FDR’s New Deal proposals, and many of them were challenged in federal court. At the same time, a number of states were also attempting their own remedies for the nation’s economic morass—and in challenges to some of those policies, the Supreme Court upheld them, citing a new and vast interpretation of the Constitution’s Commerce Clause, with sweeping ramifications.

In the Blaisdell case[3], the Supreme Court upheld a Minnesota law that essentially suspended the ability of mortgage holders to collect mortgage payments or pursue remedies when payments had not been made. The court said that, given the severe national emergency created by the Great Depression, government had vast power to deal with it.

But critics have understood the serious and longstanding ramifications of such decisions. Richard Epstein, an adjunct scholar at the libertarian-leaning Cato Institute and an NYU law professor, said that Blaisdell “trumpeted a false liberation from the constitutional text that has paved the way for massive government intervention that undermines the security of private transactions. Today the police power exception has come to eviscerate the contracts clause.”

In other words—in a conflict between the rights of private parties under the contracts clause and the power of government under the commerce clause, when it comes to emergencies, the power of government wins.

Interestingly enough, due to a series of New Deal programs that had been ruled unconstitutional by the Supreme Court, in 1937, FDR attempted to change the make-up of the court in what became known as the “court-packing scheme.” The proposal essentially called for remaking the balance of the court by appointing an additional justice (up to six additional) for every justice who was over the age of 70 years and 6 months.

Though the legislation languished in Congress, pressure was brought to bear on the Supreme Court, and Associate Justice Owen Roberts began casting votes in support of FDR’s New Deal programs, fundamentally shifting the direction of federal power towards concentration. That shift continued until the early 1990s, when the high court began issuing decisions (like New York v. United States) that limited the power of the federal government and the expansive interpretation of the commerce clause.

But it is the sweeping power of the federal government to act within a declared emergency, and the impact of the policies created within that crisis, that remain of continued concern. Much as the lack of deliberation during FDR’s first 100 days led to programs with sweeping and lasting impact on public life and huge unintended consequences, we are seeing those same mistakes played out today: the declaration of a public emergency, sweeping policies created without any real deliberation or public input, and massive (and devastating) consequences for businesses, jobs, and society in general.

If we are to learn anything from those first hundred days, it should be that we shouldn’t let a deliberative policy process be hijacked, and certainly not for political reasons. Moreover, when policies are enacted without deliberation, we should be prepared for the potential consequences of those policies… and adjust them accordingly when new information presents itself (and when the particular crisis has passed). Justice O’Connor was correct—the Constitution does protect us from our own best intentions.

We should rely on it, especially when we are in a crisis.

Andrew Langer is President of the Institute for Liberty.  He teaches in the Public Policy program at the College of William and Mary.

[1] New York v. US, 505 US 144 (1992)

[2] Bank runs were so ingrained in the national mindset that Frank Capra dramatized one in his famous film, It’s A Wonderful Life. In it, the Bailey Building and Loan is the victim of a run, and the film’s antagonist, Mr. Potter, offers to “guarantee” it. But, knowing it would give Potter control, the film’s hero, George Bailey, uses his own money to keep his firm intact.

[3] Home Building and Loan Association v Blaisdell, 290 US 398 (1934)

Guest Essayist: John Steele Gordon

Wall Street, because it tries to discern future values, is usually a leading indicator. It began to recover, for instance, from the financial debacle of 2008 in March of the next year. But the economy didn’t begin to grow again until June of 2009.

But sometimes Wall Street separates from the underlying economy and loses touch with economic reality. That is what happened in 1929 and brought about history’s most famous stock market crash.

The 1920s were a prosperous time for most areas of the American economy, and Wall Street reflected that expansion. But rural America was not prospering. In 1900, one-third of all American cropland had been given over to fodder crops to feed the country’s vast herd of horses and mules. But by 1930, horses had largely given way to automobiles and trucks, while the mules had been replaced by the tractor.

As more and more agricultural land was turned over to growing crops for human consumption, food prices fell and rural areas began to fall into depression. Rural banks began failing at the rate of about 500 a year.

Because the major news media, then as now, was highly concentrated in the big cities, this economic problem went largely unnoticed. Indeed, while the overall economy rose 59 percent in the 1920s, the Dow Jones Industrial Average increased 400 percent.

In the fall of 1928, the Federal Reserve raised the discount rate from 3.5 percent to 5 percent and began to slow the growth of the money supply, in hopes of getting the stock market to cool off.

But by then, Wall Street was in a speculative bubble. Fueling that bubble was a very misguided policy by the Fed. It allowed member banks to borrow at the discount window at five percent. The banks in turn, loaned the money to brokerage houses at 12 percent. The brokers then loaned the money to speculators at 20 percent. The Fed tried to use “moral suasion” to get the banks to stop borrowing in this way. But if a bank can make 7 percent on someone else’s money, it is going to do so. The Fed should have just closed the window for those sorts of loans, but didn’t.

By Labor Day, 1929, the American economy was in a recession but Wall Street still had not noticed. On the day after Labor Day, the Dow hit a new all-time high at 381.17. It would not see that number again for 25 years. Two days later the market began to wake up.

A stock market analyst of no great note, Roger Babson, gave a talk that day in Wellesley, Massachusetts, saying, “I repeat what I said at this time last year and the year before, that sooner or later a crash is coming.” When news of this altogether unremarkable prophecy crossed the broad tape at 2:00 that afternoon, all hell broke loose. Prices plunged (US Steel fell 9 points, AT&T 6) and volume in the last two hours of trading was a fantastic two million shares.

Remembered as the Babson Break, it was like a slap across the face of an hysteric, and the mood on the Street went almost in an instant from “The sky’s the limit” to “Every man for himself.”

For the next six weeks, the market trended downwards, with some plunges followed by weak recoveries. Then on Wednesday, October 23rd, selling swamped the market on the second highest volume on record. The next morning there was a mountain of sell orders in brokerage offices across the country and prices plunged across the board. This set off a wave of margin calls, further depressing prices, while short sellers put even more pressure on prices.

A group of the Street’s most important bankers met at J. P. Morgan and Company, across Broad Street from the exchange.

Together they raised $20 million to support the market and entrusted it to Richard Whitney, the acting president of the NYSE.

At 1:30, Whitney strode onto the floor and asked the price of US Steel. He was told that it had last traded at 205 but that it had fallen several points since, with no takers.

“I bid 205 for 10,000 Steel,” Whitney shouted. He then went to other posts, buying large blocks of blue chips. The market steadied as shorts closed their positions and some stocks even ended up for the day. But the volume had been an utterly unprecedented 13 million shares.

The rally continued on Friday but there was modest profit taking at the Saturday morning session. Then, on Monday, October 28th, selling resumed as rumors floated around that some major speculators had committed suicide and that new bear pools were being formed.

On Tuesday, October 29th, remembered thereafter as Black Tuesday, there was no stopping the collapse in prices. Volume reached 16 million shares, a record that would stand for nearly 40 years, and the tape ran four hours late. The Dow was down nearly 12 percent on the day, a staggering 23 percent over the two sessions, and stood nearly 40 percent below its September high.

Prices trended downwards for more than another month, but by the spring of 1930 the market, badly oversold by December, had recovered about 45 percent of its autumn losses. Many thought the recession was over. But then the federal government and the Federal Reserve began making a series of disastrous policy blunders that would turn an ordinary recession into the Great Depression.

John Steele Gordon was born in New York City in 1944 into a family long associated with the city and its financial community. Both his grandfathers held seats on the New York Stock Exchange. He was educated at Millbrook School and Vanderbilt University, graduating with a B.A. in history in 1966.

After college he worked as a production editor for Harper & Row (now HarperCollins) for six years before leaving to travel, driving a Land-Rover from New York to Tierra del Fuego, a nine-month journey of 39,000 miles. This resulted in his first book, Overlanding. Altogether he has driven through forty-seven countries on five continents.

After returning to New York he served on the staffs of Congressmen Herman Badillo and Robert Garcia. He has been a full-time writer for the last twenty years. His second book, The Scarlet Woman of Wall Street, a history of Wall Street in the 1860’s, was published in 1988. His third book, Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt, was published in 1997. The Great Game: The Emergence of Wall Street as a World Power, 1653-2000, was published by Scribner, a Simon and Schuster imprint, in November, 1999. A two-hour special based on The Great Game aired on CNBC on April 24th, 2000. His latest book, a collection of his columns from American Heritage magazine, entitled The Business of America, was published in July, 2001, by Walker. His history of the laying of the Atlantic Cable, A Thread Across the Ocean, was published in June, 2002. His next book, to be published by HarperCollins, is a history of the American economy.

He specializes in business and financial history. He has had articles published in, among others, Forbes, Forbes ASAP, Worth, the New York Times and The Wall Street Journal Op-Ed pages, the Washington Post’s Book World and Outlook. He is a contributing editor at American Heritage, where he has written the “Business of America” column since 1989.

In 1991 he traveled to Europe, Africa, North and South America, and Japan with the photographer Bruce Davidson for Schlumberger, Ltd., to create a photo essay called “Schlumberger People,” for the company’s annual report.

In 1992 he was the co-writer, with Timothy C. Forbes and Steve Forbes, of Happily Ever After?, a video produced by Forbes in honor of the seventy-fifth anniversary of the magazine.

He is a frequent commentator on Marketplace, the daily Public Radio business-news program heard on more than two hundred stations throughout the country. He has appeared on numerous other radio and television shows, including New York: A Documentary Film by Ric Burns, Business Center and Squawk Box on CNBC, and The News Hour with Jim Lehrer on PBS. He was a guest in 2001 on a live, two-hour edition of Booknotes with Brian Lamb on C-SPAN.

Mr. Gordon lives in North Salem, New York.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: James C. Clinger

An admirer of the inventors Bell and Edison, and of Einstein’s theories, scientist and inventor Philo T. Farnsworth designed the first electronic television based on an idea he sketched in a high school chemistry class. From his reading he learned that some success had already been achieved in transmitting and projecting images. While plowing fields, Farnsworth realized that television could work as a system of horizontal lines: an image could be broken into lines, transmitted, and reassembled electronically into a picture that appeared solid. Despite attempts by competitors to impede his original inventions, Farnsworth presented his idea for a television to reporters in Hollywood in 1928, launching him into more successful efforts that would revolutionize moving pictures.

On September 3, 1928, Philo Farnsworth, a twenty-two-year-old inventor with virtually no formal credentials as a scientist, demonstrated his wholly electronic television system to reporters in California. A few years later, a much improved television system was demonstrated to larger crowds of onlookers at the Franklin Institute in Philadelphia, proving to the world that this new medium could broadcast news, entertainment, and educational content across the nation.

Farnsworth had come far from his boyhood roots in northern Utah and southern Idaho. He was born in a log cabin lacking indoor plumbing or electrical power. His family moved to a farm outside of Rigby, Idaho, when Farnsworth was a young boy. For the first time, Farnsworth could examine electrical appliances and electric generators in action. He quickly learned to take electrical gadgets apart and put them back together again, often making adaptations to improve their function. He also watched each time the family’s generator was repaired. Soon, still a  young boy, he could do those repairs himself. Farnsworth was a voracious reader of science books and magazines, but also devoured what is now termed science fiction, although that term was not in use during his youth. He became a skilled violinist, possibly because of the example of his idol, Albert Einstein, who also played the instrument.[1]

Farnsworth excelled in his classes in school, particularly in mathematics and other sciences, but he did present his teachers and school administrators with a bit of a problem when he repeatedly appealed to take classes intended for much older students. According to school rules, only high school juniors and seniors were supposed to enroll in the advanced classes, but Farnsworth determined to find courses that would challenge him intellectually. The school resisted his entreaties, but one chemistry teacher, Justin Tolman, agreed to tutor Philo and give him extra assignments both before and after school.

One day, Farnsworth presented a visual demonstration of an idea that he had for transmitting visual images across space. He later claimed that he had come up with the basic idea for this process one year earlier, when he was only fourteen. As he was plowing a field on his family farm, Philo had seen a series of straight rows of plowed ground. Farnsworth thought it might be possible to represent visual images by breaking up the entire image into a sequence of lines of various shades of light and dark. The images could be projected electronically and re-assembled as pictures made up of a collection of lines, placed one on top of another. Farnsworth believed that this could be accomplished based on his understanding of Einstein’s path-breaking work on the “photoelectric effect,” which had shown that a particle of light, called a photon, striking a metal plate would knock electrons loose, with the photon’s energy transferred to a free-roaming negative charge, called a photoelectron.[2] Farnsworth had developed a conceptual model of a device that he called an “image dissector” that could break the images apart and transmit them for reassembly at a receiver. He had no means of creating this device with the resources he had at hand, but he did develop a model representation of the device, along with mathematical equations to convey the causal mechanisms. He presented all of this on the blackboard of a classroom in the high school in Rigby. Tolman was stunned by the intellectual prowess of the fifteen-year-old standing in front of him. He thought Farnsworth’s model might actually work, and he copied down some of the drawings from the blackboard onto a piece of paper, which he kept for years.[3] It is fortunate for Farnsworth that Tolman held on to those pieces of paper.
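The physics Farnsworth was drawing on can be stated compactly. In modern notation (added here for reference; it does not appear in the original essay), Einstein’s photoelectric relation says that the maximum kinetic energy of an ejected photoelectron equals the energy of the incoming photon minus the “work function” of the metal:

```latex
E_{\text{max}} = h\nu - \phi
```

where $h$ is Planck’s constant, $\nu$ is the frequency of the incident light, and $\phi$ is the work function of the metal surface. Light below the threshold frequency $\nu_0 = \phi/h$ ejects no electrons at all, no matter how intense the beam.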

Farnsworth was accepted into the United States Naval Academy but very soon was granted an honorable discharge under a provision permitting new midshipmen to leave the university and the service to care for their families after the death of a parent. Farnsworth’s father had died the previous year, and Farnsworth returned to Utah, where his family had relocated after the sale of the farm. Farnsworth enrolled at Brigham Young University but worked at various jobs to support himself, his mother, and his younger siblings. As he had in high school, Farnsworth asked to be allowed to register in advanced classes rather than take only freshman-level course work. He quickly earned a technical certificate but no baccalaureate degree. While in Utah, Farnsworth met, courted, and eventually married his wife, “Pem,” who would later help in his lab creating and building instruments. One of her brothers would also provide lab assistance. One of Farnsworth’s jobs during his time in Utah was with the local Community Chest. There he met George Everson and Leslie Gorrell, regional Community Chest administrators who were experienced in fund-raising. Farnsworth explained his idea about electronic television to them, something he had never before done with anyone except his father, now deceased, and his high school teacher, Justin Tolman. Everson and Gorrell were impressed with Farnsworth’s idea, although they barely understood most of the science behind it. They invited Farnsworth to travel with them to California to discuss his research with scientists from the California Institute of Technology (a.k.a., Cal Tech). Farnsworth agreed to do so, and made the trek to Los Angeles to meet first with scientists and then with bankers to solicit funds to support his research. When discussing his proposed electronic television model, Farnsworth was transformed from a shy, socially awkward, somewhat tongue-tied young man into a confident and articulate advocate of his project.
He was able to explain the broad outline of his research program in terms that lay people could understand. He convinced Gorrell and Everson to put up some money and a few years later got several thousand more dollars from a California bank.[4]

Philo and Pem Farnsworth re-located first to Los Angeles and then to San Francisco to establish a laboratory. Farnsworth believed that his work would progress more quickly if he were close to a number of other working scientists and technical experts at Cal Tech and other universities. Farnsworth also wanted to be near to those in the motion picture industry who had technical expertise. With a little start-up capital, Farnsworth and a few other backers incorporated their business, although Farnsworth did not create a publicly traded corporation until several years later. At the age of twenty-one, in 1927, Farnsworth filed the first two of his many patent applications. Those two patents were approved by the patent office in 1930. By the end of his life he had three hundred patents, most of which dealt with television or radio components. As of 1938, three-fourths of all patents dealing with television were by Farnsworth.[5]

When Farnsworth began his work in California, he and his wife and brother-in-law had to create many of the basic components for his television system. There was very little that they could buy off the shelf and simply assemble into the device Farnsworth had in mind. So much of their time was devoted to soldering wires and creating vacuum tubes, as well as testing materials to determine which performed best. After a while, Farnsworth hired some assistants, many of them graduate students at Cal Tech or Stanford. One of his assistants, Russell Varian, would later make a name for himself as a physicist in his own right and would become one of the founders of Silicon Valley. Farnsworth’s lab also had many visitors, including Hollywood celebrities such as Douglas Fairbanks and Mary Pickford, as well as a number of scientists and engineers. One visitor was Vladimir Zworykin, a Russian émigré with a PhD in electrical engineering who worked for Westinghouse Corporation. Farnsworth showed Zworykin not only his lab but also examples of most of his key innovations, including his image dissector. Zworykin expressed admiration for the devices that he observed, and said that he wished that he had invented the dissector. What Farnsworth did not know was that a few weeks earlier, Zworykin had been hired away from Westinghouse by David Sarnoff, then the managing director and later the president of the Radio Corporation of America (a.k.a., RCA). Sarnoff grilled Zworykin about what he had learned from his trip to Farnsworth’s lab and immediately set him to work on television research. RCA was already a leading manufacturer of radio sets and would soon become the creator of the National Broadcasting Company (a.k.a., NBC). Years later, after government antitrust regulators forced RCA to divest itself of some of its broadcasting assets, the divested network became the American Broadcasting Company (a.k.a., ABC), a separate company.[6]
RCA and Farnsworth would remain competitors and antagonists for the rest of Farnsworth’s career.

In 1931, Philco, a major radio manufacturer and electronics corporation, entered into a deal with Farnsworth to support his research. The company was not buying out Farnsworth’s company, but was purchasing non-exclusive licenses for Farnsworth’s patents. Farnsworth then moved with his family and some of his research staff to Philadelphia. Ironically, RCA’s television lab was located in Camden, New Jersey, just a few miles away, and on many occasions each lab could receive the experimental television broadcasts transmitted from its rival. Farnsworth and his team were working at a feverish pace to improve their inventions to make them commercially feasible. The Federal Radio Commission, later known as the Federal Communications Commission, classified television as a merely experimental communications technology, rather than one that was commercially viable and subject to license. The commission wished to create standards for picture resolution and frequency bandwidth. Many radio stations objected to television licensing because they believed that television signals would crowd out the bandwidth available for their broadcasts. Farnsworth developed the capacity to transmit television signals over a narrower bandwidth than any competing television transmission.

Personal tragedy struck the Farnsworth family in 1932 when Philo and Pem’s young son, Kenny, still a toddler, died of a throat infection, an ailment that today could easily have been treated with antibiotics. The Farnsworths decided to have the child buried back in Utah, but Philco refused to allow Philo time off to go west to bury his son. Pem made the trip alone, causing a rift between the couple that would take months to heal. Farnsworth was struggling to perfect his inventions, while at the same time RCA devoted an entire team to television research and engaged in a public relations campaign to convince industry leaders and the public that it had the only viable television system. At this time, Farnsworth’s health was declining. He was diagnosed with ulcers and he began to drink heavily, even though Prohibition had not yet been repealed. He finally decided to sever his relationship with Philco and set up his own lab in suburban Philadelphia. He soon also took the dramatic step of filing a patent infringement complaint against RCA in 1934.[7]

Farnsworth and his friend and patent attorney, Donald Lippincott, presented their argument before the patent examination board that Farnsworth was the original inventor of what was now known as electronic television and that Sarnoff and RCA had infringed on patents approved in 1930. Zworykin had some important patents prior to that time but had not patented the essential inventions necessary to create an electronic television system. RCA went on the offensive by claiming that it was absurd to think that a young man in his early twenties with no more than one year of college could create something that well-educated scientists had failed to invent. Lippincott responded with evidence of the Zworykin visit to the Farnsworth lab in San Francisco. After leaving Farnsworth, Zworykin had returned first to the labs at Westinghouse and had duplicates of Farnsworth’s tubes constructed on the spot. Then researchers were sent to Washington to make copies of Farnsworth’s patent applications and exhibits. Lippincott also was able to produce Justin Tolman, Philo’s old, and by then retired, teacher, who appeared before the examination board to testify that the basic idea of the patent had been developed when Farnsworth was a teenager. When queried, Tolman produced a yellowed piece of notebook paper with a diagram that he had copied off the blackboard in 1922. Although the document was undated, the written document, in addition to Tolman’s oral testimony, may have convinced the board that Farnsworth’s eventual patent was for a novel invention.[8]

The examining board took several months to render a decision. In July of 1935, the examiner of interferences from the U.S. Patent Office mailed a forty-eight page document to the parties involved. After acknowledging the significance of inventions by Zworykin, the patent office declared that those inventions were not equivalent to what was understood to be electronic television. Farnsworth’s claims had priority. The decision was appealed in 1936, but the result remained unchanged. Beginning in 1939, RCA began paying royalties to Farnsworth.

Farnsworth and his family, friends, and co-workers were ecstatic when the patent infringement case was decided. For the first time, Farnsworth was receiving the credit, and the promise of the money, that he thought he was due. However, the price he had already paid was very high. Farnsworth’s physical and emotional health was declining. He was perpetually nervous and exhausted. As unbelievable as it may sound today, one doctor advised him to take up smoking to calm his nerves. He continued to drink heavily and his weight dropped. His company was re-organized as the Farnsworth Television & Radio Corporation and had its initial public offering of stock in 1939. Whether out of necessity or personal choice, Farnsworth’s work in running his lab and his company diminished.

While vacationing in rural northern Maine in 1938, the Farnsworth family came across a plot of land that reminded Philo of his home and farm outside of Rigby. Farnsworth bought the property, re-built an old house, constructed a dam for a small creek, and erected a building that could house a small laboratory. He spent most of the next few years on the property. Even though RCA had lost several patent infringement cases against Farnsworth, the company was still engaging in public demonstrations of television broadcasts in which it claimed that David Sarnoff was the founder of television and that Vladimir Zworykin was the sole inventor of television. The most significant of these demonstrations was at the World’s Fair at Flushing Meadows, New York. Many reporters accepted the propaganda that was distributed at that event and wrote up glowing stories of the supposedly new invention. Only a few years before, Farnsworth had demonstrated his inventions at the Franklin Institute, but the World’s Fair was a much bigger venue with a wider media audience. In 1949, NBC launched a special televised broadcast celebrating the 25th anniversary of the creation of television by RCA, Sarnoff, and Zworykin. No mention was made of Farnsworth at all.[9]

The FCC approved television as a commercial broadcast enterprise, subject to licensure, in 1939. The commission also set standards for broadcast frequency and picture quality. However, the timing to start off a major commercial venture for the sale of a discretionary consumer product was far from ideal. In fact, the timing of Farnsworth’s milestone accomplishments left much to be desired. His first patents were approved shortly after the nation entered the Great Depression. His inventions created an industry that was already subject to stringent government regulation focused on a related but potentially rival technology: radio. Once television was ready for mass marketing, the nation was poised to enter World War II. During the war, production of televisions and many other consumer products ceased and resources were devoted to war-related materiel. Farnsworth’s company and RCA both produced radar and other electronics equipment. Farnsworth’s company also produced wooden ammunition boxes. Farnsworth allowed the military to enjoy free use of his patents for radar tubes.[10]

Farnsworth enjoyed royalties from his patents for the rest of his life. However, his two most important patents were his earliest inventions. The patents were approved in 1930 for a duration of seventeen years. In 1947, the patents became part of the public domain. It was really only in the late 1940s and 1950s that television exploded as a popular consumer good, but by that time Farnsworth could receive no royalties for his initial inventions. Other, less fundamental components that he had patented did provide him with some royalty income. Before the war, Farnsworth’s company had purchased the Capehart Company of Fort Wayne, Indiana, and eventually closed down its Philadelphia-area facility and moved its operations entirely to Indiana. A devastating wildfire swept through the countryside in rural Maine, burning down the buildings on Farnsworth’s property, only days before his property insurance policy was activated. Farnsworth’s company fell upon hard times as well, and eventually was sold to International Telephone and Telegraph. Farnsworth’s health never completely recovered, and he took a disability retirement pension at the age of sixty and returned to Utah. In his last few years, Farnsworth devoted little time to television research, but did develop devices related to nuclear fusion, which he hoped to use to produce abundant electrical power for the whole world to enjoy. His fusion device has not proved to be a viable electric power generator, but it has proved useful for neutron production and the making of medical isotopes.

Farnsworth died in 1971 at the age of sixty-four. At the time of his death, he was not well-known outside of scientific circles. His hopes and dreams of television as a cultural and educational beacon to the whole world had not been realized, but he did find some value in at least some of what he could see on the screen. About two years before he died, Philo and Pem along with millions of other people around the world saw Neil Armstrong set foot on the moon. At that moment, Philo turned to his wife and said that he believed that all of his work was worthwhile.

Farnsworth’s accomplishments demonstrated that a more or less lone inventor, with the help of a few friends, family members, and paid staff, could create significant and useful inventions that made a mark on the world.[11] In the long run, corporate product development by rivals such as RCA surpassed what he could do to make his brainchild marketable. Farnsworth had neither the means nor the inclination to compete with major corporations in all respects. But he did wish to have at least some recognition and some financial reward for his efforts. Unfortunately, circumstances often wiped out what gains he received. Farnsworth also demonstrated that individuals lacking paper credentials can accomplish significant achievements. With relatively little schooling and precious little experience, Farnsworth developed devices that older and better-educated competitors could not. Sadly, Farnsworth’s experiences display the role of seemingly chance events in curbing personal success. Had he developed his inventions a bit earlier or later, avoiding most of the Depression and the Second World War, he might have gained much greater fame and fortune. None of us, of course, choose the time into which we are born.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1]  Schwartz, Evan I. 2002. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of Television. New York: HarperCollins.


[3] Schwartz, op cit.

[4] Schwartz, op cit.

[5] Jewkes, J. “Monopoly and Economic Progress.” Economica, New Series, 20, no. 79 (1953): 197-214.

[6] Schwartz, op cit.

[7] Schwartz, op cit.

[8] Schwartz, op cit.

[9] Schwartz, op cit.

[10] Schwartz, op cit.

[11] Lemley, Mark A. 2012. “The Myth of the Sole Inventor.” Michigan Law Review 110 (5): 709–60.


Guest Essayist: Tony Williams

Americans have long held the belief that they are exceptional and have a providential destiny to be a “city upon a hill” as a beacon for democracy for the world.

Unlike the French revolutionaries who believed that they were bound to destroy monarchy and feudalism everywhere, the American revolutionaries laid down the principle of being an example for the world instead of imposing the belief on other countries.

In 1821, Secretary of State John Quincy Adams probably expressed this idea best during a Fourth of July address when he asserted the principle of American foreign policy that:

Wherever the standard of freedom and independence has been or shall be unfurled, there will her heart, her benedictions and her prayers be. But she goes not abroad in search of monsters to destroy. She is the well-wisher to the freedom and independence of all. She is the champion and vindicator only of her own.

While the Spanish-American War raised a debate over the nature of American expansionism and foundational principles, the reversal of the course of American diplomatic history found its fullest expression in the progressive presidency of Woodrow Wilson.

Progressives such as President Wilson embraced the idea that a more perfect world could be achieved through the spread of democracy and the adoption of a broader international outlook, in place of narrow national interests, as the basis for world peace. As president, Wilson believed that America had a responsibility to spread democracy around the world by destroying monarchy and enlightening people in self-government.

When World War I broke out in August 1914 after the assassination of Austrian Archduke Franz Ferdinand, Wilson declared American neutrality and asked a diverse nation of immigrants to be “impartial in thought as well as in action.”

American neutrality was tested in many different ways. Many first-generation American immigrants from different countries still had strong attachments and feelings toward their nation of origin. Americans also sent arms and loans to the Allies (primarily Great Britain, France, and Russia) that undermined claims of U.S. neutrality. After the sinking of the liner Lusitania by a German U-boat (submarine) in May 1915, which killed nearly 1,200 people, including 128 Americans, Secretary of State William Jennings Bryan resigned because he thought the U.S. should protest the British blockade of Germany as much as German actions in the Atlantic.

Throughout 1915 and 1916, German U-boats sank several more American vessels, though Germany apologized and promised no more incidents against merchant vessels of neutrals. By late 1916, however, more than two years of trench warfare and stalemate on the Western Front had led to millions of deaths, and the belligerents sought ways to break the stalemate.

On February 1, 1917, the German high command decided to launch a policy of unrestricted U-boat warfare in which all shipping was subject to attack. The hope was to knock Great Britain out of the war and attain victory before the United States could enter the war and make a difference.

Simultaneously, Germany sent a secret diplomatic message to Mexico offering territory in Texas, New Mexico, and Arizona in exchange for entering the war against the United States. British intelligence intercepted this foolhardy Zimmermann Telegram and shared it with the Wilson administration. Americans were predictably outraged when news of the telegram became public.

On April 2, President Wilson delivered a message to Congress asking for a declaration of war. He focused on what he labeled the barbaric and cruel inhumanity of attacking neutral ships and killing innocents on the high seas. He spoke of American freedom of the seas and neutral rights but primarily painted a stark moral picture of why the United States should go to war with the German Empire which had violated “the most sacred rights of our Nation.”

Wilson took an expansive view of the purposes of American foreign policy that reshaped American exceptionalism. He had a progressive vision of remaking the world by using the war to spread democratic principles and end autocratic regimes. In short, he thought, “The world must be made safe for democracy.”

Wilson argued that the United States had a duty as “one of the champions of the rights of mankind.” It would not merely defeat Germany but free its people. Americans were entering the war “to fight thus for the ultimate peace of the world and for the liberation of its peoples, the German peoples included: for the rights of nations great and small and the privilege of men everywhere to choose their way of life.”

Wilson believed that the United States had larger purposes than merely defending its national interests. It was now responsible for world peace and the freedom of all.  “Neutrality is no longer feasible or desirable where the peace of the world is involved and the freedom of its peoples, and the menace to that peace and freedom lies in the existence of autocratic governments backed by organized force which is controlled wholly by their will, not by the will of their people.”

At the end of the war and during the Versailles conference, Wilson further articulated this vision of a new world with his Fourteen Points and proposal for a League of Nations to prevent future wars and ensure a lasting world peace.

Wilson’s vision failed to come to fruition. The Senate refused to ratify the Treaty of Versailles because it was committed to preserving American sovereignty over the power to declare war. The great powers were more dedicated to their national interests than to world peace. Moreover, the next twenty years saw the spread of totalitarian communist and fascist regimes rather than progressive democracies. Finally, World War II shattered his vision of remaking the world.

Wilson’s ideals were not immediately adopted, but in the long run helped to reshape American foreign policy. The twentieth and twenty-first centuries saw increasing Wilsonian appeals by American presidents and policymakers to go to war to spread democracy throughout the world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.
