Constituting America Founder, Actress Janine Turner


Constituting America first published this message from Founder Janine Turner over Memorial Day Weekend, 2010, the inaugural year of our organization. We are pleased to share it with you again, as we celebrate our 13th birthday!  


Guest Essayist: Will Morrisey

To secure the unalienable natural rights of the American people, the American Founders designed a republican regime. Republics had existed before: ancient Rome, modern Switzerland and Venice. By 1776, Great Britain itself could be described as a republic, with a strong legislature counterbalancing a strong monarchy—even if the rule of that legislature and that monarchy over the overseas colonies of the British Empire could hardly be considered republican. But the republicanism instituted after the War of Independence, especially as framed at the Philadelphia Convention in 1787, featured a combination of elements never seen before, and seldom thereafter.

The American definition of republicanism was itself unique. ‘Republic’ or res publica means simply, ‘public thing’—a decidedly vague notion that might apply to any regime other than a monarchy. In the tenth Federalist, James Madison defined republicanism as representative government, that is, a specific way of constructing the country’s ruling institutions. The Founders gave republicanism a recognizable form beyond ‘not-monarchy.’ From the design of the Virginia House of Burgesses to the Articles of Confederation and finally to the Constitution itself, representation provided Americans with real exercise of self-rule, while at the same time avoiding the turbulence and folly of pure democracies, which had so disgraced themselves in ancient Greece that popular sovereignty itself had been dismissed by political thinkers ever since. Later on, Abraham Lincoln’s Lyceum Address shows how republicanism must defend the rule of law against mob violence; even the naming of Lincoln’s party as the Republican Party was intended to contrast it with the rule of slave-holding plantation oligarchs in the South.

The American republic had six additional characteristics, all of them clearly registered in this 90-Day Study. America was a natural-rights republic, limiting the legitimate exercise of popular rule to actions respecting the unalienable rights of its citizens; it was a democratic republic, with no formal ruling class of titled lords and ladies or hereditary monarchs; it was an extended republic, big enough to defend itself against the formidable empires that threatened it; it was a commercial republic, encouraging prosperity and innovation; it was a federal republic, leaving substantial political powers in the hands of state and local representatives; and it was a compound republic, dividing the powers of the national government into three branches, each with the means of defending itself against encroachments by the others.

Students of the American republic could consider each essay in this series as a reflection on one or more of these features of the American regime as designed by the Founders, or, in some cases, as deviations from that regime. Careful study of what the Declaration of Independence calls “the course of events” in America shows how profound and pervasive American republicanism has been, how it has shaped our lives throughout our history, and continues to do so today.

A Natural-Rights Republic

The Jamestown colony’s charter was written in part by the great English authority on the common law, Sir Edward Coke. Common law was an amalgam of natural law and English custom. The Massachusetts Bay Colony, founded shortly thereafter, was an attempt to establish the natural right of religious liberty. And of course the Declaration of Independence rests squarely on the foundation of the laws of Nature and of Nature’s God as the foundation of unalienable natural rights, several of which were given formal status in the Constitution’s Bill of Rights. As the articles on Nat Turner’s slave rebellion in 1831, the Dred Scott case in 1857, the Civil Rights amendments of the 1860s, and the attempt at replacing plantation oligarchy with republican regimes in the states after the Civil War all show, natural rights have been the pivot of struggles over the character of America. Dr. Martin Luther King, Jr. and the early civil rights leaders invoked the Declaration and natural rights to argue for civic equality, a century after the Civil War. As a natural-rights republic, America rejects in principle race, class, and gender as bars to the protection of the rights to life, liberty, and the pursuit of happiness. In practice, Americans have often failed to live up to their principles—as human beings are wont to do—but the principles remain as their standard of right conduct.

A Democratic Republic

The Constitution itself begins with the phrase “We the People,” and the reason constitutional law governs all statutory laws is that the sovereign people ratified that Constitution. George Washington was elected as America’s first president, but he astonished the world by stepping down eight years later; he had no ambition to become another George III, or a Napoleon. The Democratic Party, which began to be formed by Thomas Jefferson and James Madison when they went into opposition against the Adams administration, named itself for this feature of the American regime. The Seventeenth Amendment to the Constitution, providing for popular election of U.S. Senators, the Nineteenth Amendment, guaranteeing voting rights for women, and the major civil rights laws of the 1960s all express the democratic theme in American public life.

An Extended Republic

Unlike the ancient democracies, which could only rule small territories, American republicanism gave citizens the chance of ruling themselves in a territory large enough to defend itself against the powerful states and empires that had arisen in modern times. All of this was contingent, however, on Jefferson’s idea that this extended republic would be an “empire of liberty,” by which he meant that new territories would be eligible to join the Union on an equal footing with the original thirteen states. Further, every state was to have a republican regime, as stipulated in the Constitution’s Article IV, Section 4. In this series of Constituting America essays, the extension of the extended republic is very well documented, from the 1803 Louisiana Purchase and the Lewis and Clark expedition to the Indian Removal Act of 1830 and the Mexican War of 1846-1848, to the purchase of Alaska and the Transcontinental Railroad of the 1860s, to the Interstate Highway Act of 1956. The construction of the Panama Canal, the two world wars, and the Cold War all followed from the need to defend this large republic from foreign regime enemies and to keep the sea lanes open for American commerce.

A Commercial Republic

Although it has proven eminently capable of defending itself militarily, America was not intended to be a military republic, like ancient Rome and the First Republic of France. The Constitution prohibits interstate tariffs, making the United States a vast free-trade zone—something Europe could not achieve for another two centuries. We have seen Alexander Hamilton’s brilliant plan to retire the national debt after the Revolutionary War and the founding of the New York Stock Exchange in 1792. Above all, we have seen how the spirit of commercial enterprise leads to innovation: Samuel Morse’s telegraph; Alexander Graham Bell’s telephone; Thomas Edison’s phonograph and light bulb; the Wright Brothers’ flying machine; and Philo Farnsworth’s television. And we have seen how commerce in a free market can go wrong if the legislation and federal policies governing it are misconceived, as they often were before, during, and sometimes after the Great Depression.

A Federal Republic

A republic might be ‘unitary’—ruled by a single, centralized government. The American Founders saw that this would lead to an overbearing national government, one that would eventually undermine republican self-government itself. They gave the federal government enumerated powers, reserving the remaining governmental powers “to the States respectively, or to the people.” The Civil War was fought over this issue as well as slavery: the question of whether the American Union could defend itself against its internal enemies. The substantial centralization of federal government power seen in the New Deal of the 1930s, the Great Society legislation of the 1960s, and the Affordable Care Act of 2010 has renewed the question of how far such power is entitled to reach.

A Compound Republic

A simple republic would elect one branch of government to exercise all three powers: legislative, executive, and judicial. This was the way the Articles of Confederation worked. The Constitution ended that, providing instead for the separation and balance of those three powers. As the essays here have demonstrated, the compound character of the American republic has been eroded by such notions as ‘executive leadership’—a principle first enunciated by Woodrow Wilson but firmly established by Franklin Roosevelt and practiced by all of his successors—and ‘broad construction’ of the Constitution by the Supreme Court. The most dramatic struggle between the several branches of government in recent decades was the Watergate controversy, wherein Congress attempted to set limits on presidential claims of ‘executive privilege.’ Recent controversies over the use of ‘executive orders’ have reminded Americans of all political stripes that government by decree can gore anyone’s prized ox.

The classical political philosophers classified the forms of political rule, giving names to the several ‘regimes’ they saw around them. They emphasized the importance of regimes because regimes, they knew, designate who rules us, the institutions by which the rulers rule, the purposes of that rule, and finally the way of life of citizens or subjects. In choosing a republican regime on a democratic foundation, governing a large territory for commercial purposes with a carefully calibrated set of governmental powers, all intended to secure the natural rights of citizens according to the laws of Nature and of Nature’s God, the Founders set the course of human events in a new and better direction. Each generation of Americans has needed to understand the American way of life and to defend it.

Will Morrisey is Professor Emeritus of Politics at Hillsdale College and editor and publisher of Will Morrisey Reviews, an online book review publication.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.


Guest Essayist: Joerg Knipprath

On March 23, 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (“ACA”), sometimes casually referred to as “Obamacare,” a sobriquet that Obama himself embraced in 2013. The ACA ran to some 900 pages and contained hundreds of provisions. The law was so opaque and convoluted that legislators, bureaucrats, and Obama himself at times were unclear about its scope. For example, the main goal of the law was presented as providing health insurance to all Americans who previously were unable to obtain it due to, among other factors, lack of money or pre-existing health conditions. The law did increase the number of individuals covered by insurance, but stopped well short of universal coverage. Several of its unworkable or unpopular provisions were delayed by executive order. Others were subject to litigation to straighten out conflicting requirements. The ACA represented a probably not-yet-final step in the massive bureaucratization of health insurance and care over the past several decades, as health care moved from a private arrangement to a government-subsidized “right.”

The law achieved its objectives to the extent it did by expanding Medicaid eligibility to higher income levels and by significantly restructuring the “individual” policy market. In other matters, the ACA sought to control costs by further reducing Medicare reimbursements to doctors, which had the unsurprising consequence that Medicare patients found it still more difficult to get medical care, and by levying excise taxes on medical devices, drug manufacturers, health insurance providers, and high-benefit “Cadillac plans” set up by employers. The last of these was postponed and, along with most of the other taxes, repealed in December 2019. On the whole, existing employer plans and plans under collective-bargaining agreements were only minimally affected. Insurers had to cover defined “essential health services,” whether or not the purchaser wanted or needed those services. As a result, certain basic health plans that focused on “catastrophic events” coverage were deemed substandard and could no longer be offered. Hence, while coverage expanded, many people also found that the new, permitted plans cost them more than their prior coverage. They also found that the reality did not match Obama’s promise, “if you like your health care plan, you can keep your health care plan.”

The ACA required insurance companies to “accept all comers.” This policy would have the predictable effect that healthy (mostly young) people would forego purchasing insurance until a condition arose that required expensive treatment. That, in turn, would devastate the insurance market. Imagine being able to buy a fire policy to cover damage that had already arisen from a fire. Such policies would not be issued. Private, non-employer, health insurance plans potentially would disappear. Some commentators opined that this was exactly the end the reformers sought, at least secretly, so as to shift to a single-payer system, in other words, to “Medicare for all.” The ACA sought to address that problem by imposing an “individual mandate.” Unless exempt from the mandate, such as illegal immigrants or 25-year-olds covered under their parents’ policies, every person had to purchase insurance through an employer or individually from an insurer through one of the “exchanges.” Barring that, the person had to pay a penalty, to be collected by the IRS.

There have been numerous legal challenges to the ACA. Perhaps the most significant constitutional challenge was decided by the Supreme Court in 2012 in National Federation of Independent Business v. Sebelius (NFIB). There, the Court addressed the constitutionality of the individual mandate under Congress’s commerce and taxing powers, and of the Medicaid expansion under Congress’s spending power. These two provisions were deemed the keys to the success of the entire project.

Before the Court could address the case’s merits, it had to rule that the petitioners had standing to bring their constitutional claim. The hurdle was the Anti-Injunction Act. That law prohibited courts from issuing an injunction against the collection of any tax, in order to prevent litigation from obstructing tax collection. Instead, a party must pay the tax and sue for a refund to test the tax’s constitutionality. The issue turned on whether the individual mandate was a tax or a penalty. Chief Justice John Roberts concluded that Congress had described this “shared responsibility payment,” owed if one did not purchase qualified health insurance, as a “penalty,” not a “tax.” Roberts noted that other parts of the ACA imposed taxes, so that Congress’s decision to apply a different label was significant. Left out of the opinion was the reason that Congress made what was initially labeled a “tax” into a “penalty” in the ACA’s final version, namely, Democrats’ sensitivity about Republican allegations that the proposed bill raised taxes on Americans.

Having confirmed the petitioners’ standing, Roberts proceeded to the substantive merits of the challenge to the ACA. The government argued that the health insurance market (and health care, more generally) was a national market in which everyone would participate, sooner or later. While this is a likely event, it is by no means a necessary one, as a person might never seek medical services. If, for whatever reason, people did not have suitable insurance, the government claimed, they might not be able to pay for those services. Because hospitals are legally obligated to provide some services regardless of the patient’s ability to pay, hospitals would pass along their uncompensated costs to insured patients, whose insurance companies in turn would charge those patients higher premiums. The ACA’s broadened insurance coverage and “guaranteed-issue” requirements, subsidized by the minimum insurance coverage requirement, would ameliorate this cost-shifting. Moreover, the related individual mandate was “necessary and proper” to deal with the potential distortion of the market that would come from younger, healthier people opting not to purchase insurance as sought by the ACA.

Of course, Congress could pass laws under the Necessary and Proper Clause only to further its other enumerated powers, hence, the need to invoke the Commerce Clause. The government relied on the long-established, but still controversial, precedent of Wickard v. Filburn. In that 1942 case, the Court upheld a federal penalty imposed on farmer Filburn for growing wheat for home consumption in excess of his allotment under the Second Agricultural Adjustment Act. Even though Filburn’s total production was an infinitesimally small portion of the nearly one billion bushels grown in the U.S. at that time, the Court concluded, tautologically, that the aggregate of production by all farmers had a substantial effect on the wheat market. Thus, since Congress could act on overall production, it could reach all aspects of it, even marginal producers such as Filburn. The government claimed that the ACA’s individual mandate was analogous. Even if one healthy individual’s failure to buy insurance would scarcely affect the health insurance market, a large number of such individuals and of “free riders” failing to get insurance until after a medical need arose would, in the aggregate, have such a substantial effect.

Roberts, in effect writing for himself and the formally dissenting justices on that issue, disagreed. He emphasized that Congress has only limited, enumerated powers, at least in theory. Further, Congress might enact laws needed to exercise those powers. However, such laws must not only be necessary, but also proper. In other words, they must not themselves seek to achieve objectives not permitted under the enumerated powers. As opinions in earlier cases, going back to Chief Justice John Marshall in Gibbons v. Ogden, had done, Roberts emphasized that the enumeration of congressional powers in the Constitution meant that there were some things Congress could not reach.

As to the Commerce Clause itself, the Chief Justice noted that Congress previously had only used that power to control activities in which parties first had chosen to engage. Here, however, Congress sought to compel people to act who were not then engaged in commercial activity. However broad Congress’s power to regulate interstate commerce had become over the years with the Court’s acquiescence, this was a step too far. If Congress could use the Commerce Clause to compel people to enter the market of health insurance, there was no other product or service Congress could not force on the American people.

This obstacle had caused the humorous episode at oral argument where the Chief Justice inquired whether the government could require people to buy broccoli. The government urged, to no avail, that health insurance was unique, in that people buying broccoli would have to pay the grocer before they received their ware, whereas hospitals might have to provide services and never get paid. Of course, the only reason hospitals might not get paid is because state and federal laws require them to provide certain services up front, and there is no reason why laws might not be adopted in the future that require grocers to supply people with basic “healthy” foods, regardless of ability to pay. Roberts also acknowledged that, from an economist’s perspective, choosing not to participate in a market may affect that market as much as choosing to participate. After all, both reflect demand, and a boycott has economic effects just as a purchasing fad does. However, to preserve essential constitutional structures, sometimes lines must be drawn that reflect considerations other than pure economic policy.

The Chief Justice was not done, however. Having rejected the Commerce Clause as support for the ACA, he embraced Congress’s taxing power, instead. If the individual mandate was a tax, it would be upheld because Congress’s power to tax was broad and applied to individuals, assets, and income of any sort, not just to activities, as long as its purpose or effect was to raise revenue. On the other hand, if the individual mandate was a “penalty,” it could not be upheld under the taxing power, but had to be justified as a necessary and proper means to accomplish another enumerated power, such as the commerce clause. Of course, that path had been blocked in the preceding part of the opinion. Hence, everything rested on the individual mandate being a “tax.”

At first glance it appeared that this avenue also was a dead end, due to Roberts’s decision that the individual mandate was not a tax for the purpose of the Anti-Injunction Act. On closer analysis, however, the Chief Justice concluded that something can be both a tax and not be a tax, seemingly violating the non-contradiction principle. Roberts sought to escape this logical trap by distinguishing what Congress can declare as a matter of statutory interpretation and meaning from what exists in constitutional reality. Presumably, Congress can define that, for the purpose of a particular federal law, 2+2=5 and the Moon is made of green cheese. In applying a statute’s terms, the courts are bound by Congress’s will, however contrary that may be to reason and ordinary reality.

However, when the question before a court is the meaning of an undefined term in the Constitution, an “originalist” judge will attempt to discern the commonly-understood meaning of that term when the Constitution was adopted, subject possibly to evolution of that understanding through long-adhered-to judicial, legislative, and executive usage. Here, Roberts applied factors the Court had developed beginning in Bailey v. Drexel Furniture Co. in 1922. Those factors compelled the conclusion that the individual mandate was, functionally, a tax. Particularly significant for Roberts was that the ACA limited the payment to less than the price for insurance, and that it was administered by the IRS through the normal channels of tax collection. Further, because the tax would raise substantial revenue, its ancillary purpose of expanding insurance coverage was of no constitutional consequence. Taxes often affect behavior, understood in the old adage that, if the government taxes something, it gets less of it.

Roberts’s analysis reads as the constitutional law analogue to quantum mechanics and the paradox of Schroedinger’s Cat, in that the individual mandate is both a tax and a penalty until it is observed by the Chief Justice. His opinion has produced much mirth—and frustration—among commentators, and there were inconvenient facts in the ACA itself. The mandate was in the ACA’s operative provisions, not its revenue provisions, and Congress referred to the mandate as a “penalty” eighteen times in the ACA. Still, he has a valid, if not unassailable, point. A policy that has the characteristics associated with a tax ordinarily is a tax. If Congress nevertheless consciously chooses to designate it as a penalty, then for the limited purpose of assessing the policy’s connection to another statute which carefully uses a different term, here the Anti-Injunction Act, the blame for any absurdity lies with Congress.

The Medicaid expansion under the ACA was struck down. Under the Constitution, Congress may spend funds, subject to certain ill-defined limits. One of those is that the expenditure must be for the “general welfare.” Under classic republican theory, this meant that Congress could spend the revenue collected from the people of the several states on projects that would benefit the United States as a whole, not some constituent part, or an individual or private entity. It was under that conception of “general welfare” that President Grover Cleveland in 1887 vetoed a bill that appropriated $10,000 to purchase seeds to be distributed to Texas farmers hurt by a devastating drought. Since then, the phrase has been diluted to mean anything that Congress deems beneficial to the country, however remotely.

Moreover, while principles of federalism prohibit Congress from compelling states to enact federal policy—known as the “anti-commandeering” doctrine—Congress can provide incentives to states through conditional grants of federal funds. As long as the conditions are clear, relevant to the purpose of the grant, and not “coercive,” states are free to accept the funds with the conditions or to reject them. Thus, Congress can try to achieve indirectly through the spending power what it could not require directly. For example, Congress cannot, as of now, direct states to teach a certain curriculum in their schools. However, Congress can provide funds to states that teach certain subjects, defined in those grants, in their schools. The key issue usually is whether the condition effectively coerces the states to submit to the federal financial blandishment. If so, the conditional grant is unconstitutional because it reduces the states to mere satrapies of the federal government rather than quasi-sovereigns in our federal system.

In what was a judicial first, Roberts found that the ACA unconstitutionally coerced the states into accepting the federal grants. Critical to that conclusion was that a state’s failure to accept the ACA’s expansion of Medicaid would result not just in the state being ineligible to receive federal funds for the new coverage. Rather, the state would lose all of its existing Medicaid funding. As well, here the program affected—Medicaid—accounted for over 20% of the typical state’s budget. Roberts described this as “economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.” Roberts noted that the budgetary impact on a state from rejecting the expansion dwarfed anything triggered by a refusal to accept federal funds under previous conditional grants.

One peculiarity of the opinions in NFIB was the stylistic juxtaposition of Roberts’s opinion for the Court and the principal dissent, penned by Justice Antonin Scalia. Roberts at one point uses “I” to defend a point of law he makes, which is common in dissents or concurrences, instead of the typical “we” or “the Court” used by a majority. By contrast, Scalia consistently uses “we” (such as “We conclude that [the ACA] is unconstitutional” and “We now consider respondent’s second challenge….”), although that might be explained because he wrote for four justices, Anthony Kennedy, Clarence Thomas, Samuel Alito, and himself. He also refers to Justice Ruth Bader Ginsburg’s opinion broadly as “the dissent.” Most significant, Scalia’s entire opinion reads like that of a majority. He surveys the relevant constitutional doctrines more magisterially than does the Chief Justice, even where he and Roberts agree, something that dissents do not ordinarily do. He repeatedly and in detail criticizes the government’s arguments and the “friend-of-the-court” briefs that support the government, tactics commonly used by the majority opinion writer.

These oddities have provoked much speculation, chiefly that Roberts initially joined Scalia’s opinion, which would have made it the majority opinion, but got cold feet. Rumor spread that Justice Anthony Kennedy had attempted until shortly before the decision was announced to persuade Roberts to rejoin the Scalia group. Once that proved fruitless, it was too late to make anything but cosmetic changes to Scalia’s opinion for the four now-dissenters. Only the justices know what actually happened, but the scenario seems plausible.

Why would Roberts do this? Had Scalia’s opinion prevailed, the ACA would have been struck down in its entirety. That would have placed the Court in a difficult position, especially during an election year, having exploded what President Obama considered his signature achievement. The President already had a fractious relationship with the Supreme Court and earlier had made what some interpreted as veiled political threats against the Court over the case. Roberts’s “switch in time” blunted that. The chief justice is at most primus inter pares, having no greater formal powers than his associates. But he is often the public and political figurehead of the Court. Historically, chief justices have been more “political” in the sense of being finely attuned to maintaining the institutional vitality of the Court. John Marshall, William Howard Taft, and Charles Evans Hughes especially come to mind. Associate justices can be jurisprudential purists, often through dissents, to a degree a chief justice cannot.

Choosing his path allowed Roberts to uphold the ACA in part, while striking jurisprudential blows against the previously constant expansion of the federal commerce and spending powers. Even as to the taxing power, which he used to uphold that part of the ACA, Roberts planted a constitutional land mine. Should the mandate ever be made really effective, if Congress raised it above the price of insurance, the “tax” argument would fail and a future court could strike it down as an unconstitutional penalty. Similarly, if the tax were repealed, as eventually happened, and the mandate were no longer supported under the taxing power, it could threaten the entire ACA.

After NFIB, attempts to modify or eliminate the ACA through legislation or litigation continued, with mixed success. Noteworthy is that the tax payment for the individual mandate was repealed in 2017. This has produced a new challenge to the ACA as a whole, because the mandate is, as the government conceded in earlier arguments, a crucial element of the whole health insurance structure. The constitutional question is whether the mandate is severable from the rest of the ACA. The district court held that the mandate was no longer a tax and, thus, under NFIB, is unconstitutional. Further, because of the significance that Congress attached to the mandate for the vitality of the ACA, the mandate could not be severed from the ACA, and the entire law is unconstitutional. The Fifth Circuit agreed that the mandate is unconstitutional, but disagreed about the extent to which that affects the rest of the ACA. The Supreme Court will hear the issue in its 2020-2021 term in California v. Texas.

On the political side, the American public seems to support the ACA overall, although, or perhaps because, it has been made much more modest than its proponents had planned. So, the law, somewhat belatedly and less boldly, achieved a key goal of President Obama’s agenda. That success came at a stunning political cost to the President’s party, however. The Democrats hemorrhaged over 1,000 federal and state legislative seats during Obama’s tenure. In 2010 alone, they lost a historic 63 House seats, the biggest mid-term election rout since 1938, plus 6 Senate seats. The moderate “blue-dog” Democrats who had been crucial to the passage of the ACA were particularly hard hit. Whatever the ACA’s fate turns out to be in the courts, the ultimate resolution of controversial social issues remains with the people, not lawyers and judges.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Scot Faulkner

For those old enough to remember, September 11, 2001, 9:03 a.m. is burned into our collective memory. It was at that moment that United Flight 175 crashed into the South Tower of the World Trade Center in New York City.

Everyone was watching. American Airlines Flight 11 had crashed into the North Tower seventeen minutes earlier. For those few moments there was uncertainty whether the first crash was a tragic accident. Then, on live television, the South Tower fireball vividly announced to the world that America was under attack.

The nightmare continued. As horrifying images of people trapped in the burning towers riveted the nation, news broke at 9:37 a.m. that American Flight 77 had plowed into the Pentagon.

For the first time since December 7, 1941, Americans were collectively experiencing full-scale carnage from a coordinated attack on their soil.

The horror continued as the twin towers collapsed, sending clouds of debris throughout lower Manhattan and igniting fires in adjoining buildings. Questions filled the minds of government officials and every citizen: How many more planes? What were their targets? How many have died? Who is doing this to us?

At 10:03 a.m., word came that United Flight 93 had crashed into a Pennsylvania field. Speculation exploded as to what happened. Later investigations revealed that Flight 93 passengers, alerted by cell phone calls of the earlier attacks, revolted causing the plane to crash. Their heroism prevented this final hijacked plane from destroying the U.S. Capitol Building.

That final accounting was devastating: 2,977 killed and over 25,000 injured. The death toll continues to climb to this day as first responders and building survivors perish from respiratory conditions caused by inhaling the chemical-laden smoke. It was the deadliest terrorist attack in human history.

How this happened, why this happened, and what happened next compounds the tragedy.

Nineteen terrorists, most from Saudi Arabia, were part of a radical Islamic terrorist organization called al-Qaeda, "the Base," the name given to the training camp for the radical Islamists who fought the Soviets in Afghanistan.

Khalid Sheikh Mohammed, a Pakistani, was the primary organizer of the attack. Osama Bin Laden, a Saudi, was the leader and financier. Their plan was based upon an earlier failed effort in the Philippines. It was mapped out in late 1998. Bin Laden personally recruited the team, drawn from experienced terrorists. They insinuated themselves into the U.S., with several attending pilot training classes. Five-man teams would board the four planes, overpower the pilots, and fly them as bombs into significant buildings.

They banked on plane crews and passengers responding as they had to decades of "normal" hijackings, assuming the plane would be commandeered and flown to a new location, demands would be made, and everyone would live. This explains the passivity on the first three planes. Flight 93 was different because its departure was delayed, allowing time for passengers to learn the fate of the other planes. Last-minute problems also reduced the Flight 93 hijacker team to only four.

The driving force behind the attack was Wahhabism, a highly strict, anti-Western version of Sunni Islam.

The Saudi Royal Family owes its rise to power to Muhammad ibn Abd al-Wahhab (1703-1792). He envisioned a "pure" form of Islam that purged most worldly practices (heresies), oppressed women, and endorsed violence against nonbelievers (infidels), including Muslims who differed with his sect. This extremely conservative and violent form of Islam might have died out in the sands of central Arabia were it not for a timely alliance with a local tribal leader, Muhammad bin Saud.

The House of Saud was just another minor tribe until the two Muhammads realized the power of merging Sunni fanaticism with armed warriors. Wahhab's daughter married Saud's son, merging their two bloodlines to this day. The House of Saud and its warriors rapidly expanded throughout the Arabian Peninsula, fueled by Wahhabi fanaticism. These conflicts always included the destruction of holy sites of rival sects and tribes. While done in the name of "purification," the result was the erasure of the physical touchstones of rival cultures and governments.

In the early 20th Century, Saudi leader, ibn Saud, expertly exploited the decline of the Ottoman Empire, and alliances with European Powers, to consolidate his permanent hold over the Arabian Peninsula. Control of Mecca and Medina, Islam’s two holiest sites, gave the House of Saud the power to promote Wahhabism as the dominant interpretation of Sunni Islam. This included internally contradictory components of calling for eradicating infidels while growing rich from Christian consumption of oil and pursuing lavish hedonism when not in public view.

In the mid-1970s, Saudi Arabia used the flood of oil revenue to become the "McDonald's of Madrassas." Religious schools and new mosques popped up throughout Africa, Asia, and the Middle East. This building boom had nothing to do with education and everything to do with spreading the cult of Wahhabism. Pakistan became a major hub for turning Wahhabi madrassa graduates into dedicated terrorists.

Wahhabism might have remained a violent, dangerous, but diffuse movement had it not found fertile soil in Afghanistan.

Afghanistan was called the graveyard of empires, as its rugged terrain and fierce tribal warriors thwarted potential conquerors for centuries. In 1973, the last king of Afghanistan was deposed, leading to years of instability. In April 1978, the opposition Communist Party seized control in a bloody coup. The communists tried brutally to consolidate power, which ignited a civil war among factions supported by Pakistan, China, Islamists (known as the Mujahideen), and the Soviet Union. Amidst the chaos, U.S. Ambassador Adolph Dubs was killed on February 14, 1979.

On December 24, 1979, the Soviet Union invaded Afghanistan, killing their ineffectual puppet President, and ultimately bringing over 100,000 military personnel into the country. What followed was a vicious war between the Soviet military and various Afghan guerrilla factions. Over 2 million Afghans died.

The Reagan Administration covertly supported the anti-Soviet Afghan insurgents, and Arab nations supported the Mujahideen. Bin Laden entered the insurgent cauldron as a Mujahideen financier and fighter. By 1988, the Soviets realized their occupation had failed. They removed their troops, leaving behind another puppet government and a Soviet-trained military.

When the Soviet Union collapsed, Afghanistan was finally free. Unfortunately, calls for reunifying the country by reestablishing the monarchy and strengthening regional leadership went unheeded. Attempts at recreating the pre-invasion, faction-ravaged parliamentary system only led to new rounds of civil war.

In September 1994, the weak U.S. response opened the door for the Taliban, graduates from Pakistan’s Wahhabi madrassas, to launch their crusade to take control of Afghanistan.  By 1998, the Taliban controlled 90% of the country.

Bin Laden and his al-Qaeda warriors made Taliban-controlled territory in Afghanistan their new base of operations. In exchange, Bin Laden helped the Taliban eliminate their remaining opponents. This was accomplished on September 9, 2001, when suicide bombers disguised as a television camera crew blew up Ahmad Shah Massoud, the charismatic pro-Western leader of the Northern Alliance.

Two days later, Bin Laden's plan to establish al-Qaeda as the global leader of Islamic terrorism was implemented by hijacking four planes and turning them into guided bombs.

The 9-11 attacks, along with the earlier support against the Soviets in Afghanistan, were part of Bin Laden's goal to lure infidel governments into "long wars of attrition in Muslim countries, attracting large numbers of jihadists who would never surrender." He believed this would lead to the economic collapse of the infidels by "bleeding" them dry. Bin Laden outlined his strategy of "bleeding America to the point of bankruptcy" in a 2004 tape released through Al Jazeera.

On September 14, amidst the World Trade Center rubble, President George W. Bush addressed those recovering bodies and extinguishing fires using a bullhorn:

"The nation stands with the good people of New York City and New Jersey and Connecticut as we mourn the loss of thousands of our citizens."

A rescue worker yelled, “I can’t hear you!”

President Bush spontaneously responded: “I can hear you! The rest of the world hears you! And the people who knocked these buildings down will hear all of us soon!”

Twenty-three days later, on October 7, 2001, American and British warplanes, supplemented by cruise missiles fired from naval vessels, began destroying Taliban operations in Afghanistan.

U.S. Special Forces entered Afghanistan. Working with the Northern Alliance, they defeated major Taliban units and occupied Kabul, the Afghan capital, on November 13, 2001.

On May 2, 2011, U.S. Special Forces raided an al-Qaeda compound in Abbottabad, Pakistan, killing Osama bin Laden.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: The Honorable Don Ritter

In October of 1989, hundreds of thousands of East German citizens demonstrated in Leipzig, following a pattern of demonstrations for freedom and human rights throughout Eastern Europe and following the first ever free election in a Communist country, Poland, in the Spring of 1989. Hungary had opened its southern border with Austria and East Germans seeking a better life were fleeing there. Czechoslovakia had done likewise on its western border and the result was the same.

The East German government had been on edge and was seeking to reduce domestic tensions by granting limited passage of its citizens to West Germany. And that’s when the dam broke.

On November 9, 1989, thousands of elated East Berliners started pouring into West Berlin. There was a simple bureaucratic error earlier in the day when an East German official read a press release he hadn’t previously studied and proclaimed that residents of Communist East Berlin were permitted to cross into West Berlin, freely and, most importantly, immediately. He had missed the end of the release which instructed that passports would be issued in an orderly fashion when government offices opened the next day.

This surprising information about free passage spread throughout East Berlin, East Germany and, indeed, around the world like a lightning bolt. Massive crowds gathered near-instantaneously and celebrated at the heavily guarded Wall gates, which, in a party-like atmosphere amid total confusion, were opened by hard-core communist yet totally outmanned border police, who normally had orders to shoot to kill anyone attempting to escape. A floodgate was opened and an unstoppable flood of freedom-seeking humanity passed through, unimpeded.

Shortly thereafter, the people tore down the Wall with every means available. The clarion bell had been sounded and the reaction across communist Eastern Europe was swift. Communist governments fell like dominoes.

The Wall itself was a glaring symbol of totalitarian communist repression and the chains that bound satellite countries to the communist Soviet Union. But the “bureaucratic error” of a low-level East German functionary was the match needed to set off an explosion of freedom that had been years in-the-making throughout the 1980s. And that is critical to understanding just why the Cold War came to an end, precipitously and symbolically, with the fall of the Wall.

With the election of Ronald Reagan to the presidency of the United States, Margaret Thatcher to Prime Minister of Great Britain, and the Polish cardinal Karol Wojtyła becoming Pope John Paul II of the Roman Catholic Church, the foundation was laid in the 1980s for freedom movements in Soviet Communist-dominated Eastern Europe to evolve and grow. Freedom lovers and fighters had friends in high places who believed deeply in their cause. These great leaders of the West understood the enormous human cost of communist rule and were eager to fight back in their own unique and powerful ways, leading their respective countries and allies in the process.

Historic figures like labor leader Lech Walesa, head of the Polish Solidarity movement, and Czech playwright Vaclav Havel, an architect of the Charter 77 call for basic human rights, had already planted the seeds for historic change. Particularly in Poland, where Solidarity and the Catholic Church were supported staunchly in the non-communist world by Reagan and Thatcher, anti-communism flourished despite repression and brutal crackdowns.

And then, there was a new General Secretary of the Communist Party of the Soviet Union, Mikhail Gorbachev. When he came to power in 1985, he sought to exhort workers to increase productivity in the economy, stamp out the resistance to Soviet occupation in Afghanistan via a massive bombing campaign, and keep liquor stores closed until 2:00 p.m. However, exhortation didn't work and the economy continued to decline; America gave Stinger missiles to the Afghan resistance and the bombing campaign failed; and liquor stores were regularly broken into by angry citizens not to be denied their vodka. The Afghan war was a body blow to a Soviet military that had been "always victorious," and Soviet mothers protested their sons coming back in body bags. The elites (the "nomenklatura") were taken aback and demoralized by what they viewed as a military debacle in a then Fourth World country: "Aren't we supposed to be a superpower?"

Having failed at run-of-the-mill Soviet responses to problems, Gorbachev embarked on a bold-for-the-USSR effort to restructure the failing Soviet economy via Perestroika which sought major reform but within the existing burdensome central-planning bureaucracy. On the political front, he introduced Glasnost, opening discussion of social and economic problems heretofore forbidden since the USSR’s beginning. Previously banned books were published. Working and friendly relationships with President Reagan and Margaret Thatcher were also initiated.

In the meantime, America under President Reagan's leadership was not only increasing its military strength in an accelerated and expensive arms race but also opposing Soviet-backed communist regimes and their so-called "wars of national liberation" all over the world. The Cold War turned hot under the Reagan Doctrine. President Reagan also pushed "Star Wars," an anti-ballistic missile system that could potentially neutralize Soviet long-range missiles. Star Wars, even if far off in the future, worried the military and communist leadership of an electronically and computationally outmatched Soviet Union.

Competing economically and militarily with a resurgent anti-communist American engine firing on all cylinders became too expensive for the economically and technologically disadvantaged Soviet Union. There are those who say the USSR collapsed of its own weight, but they are wrong. If that were so, a congenitally overweight USSR would have collapsed a lot earlier. Gorbachev deserves a lot of credit to be sure but there should be no doubt, he and the USSR were encouraged to shift gears and change course. Unfortunately for communist rulers, their reforms initiated a downward spiral in their ability to control their citizens. Totalitarian control was first diminished and then lost. Author’s note: A lesson which was not lost on the rulers of Communist China.

Summing up: a West with economic and military backbone plus spiritual leadership, combined with brave dissident and human rights movements in Eastern Europe and the USSR itself, forced changes in the behavior of the communist monolith. Words and deeds mattered. When Ronald Reagan called the Soviet Union an "evil empire," the media and political opposition worldwide were aghast… but in the Soviet Gulag, political prisoners rejoiced. When President Reagan said "Mr. Gorbachev, tear down this wall," consternation reigned in the West… but the people from East Germany to the Kremlin heard it loud and clear.

And so fell the Berlin Wall.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan's presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral fellow during Leonid Brezhnev's time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Danny de Gracia

It’s hard to believe that this year marks thirty years since Saddam Hussein invaded Kuwait in August of 1990. In history, some events can be said to be turning points for civilization that set the world on fire, and in many ways, our international system has not been the same since the invasion of Kuwait.

Today, the Iraq that went to war against Kuwait is no more, and Saddam Hussein himself is long dead, but the battles that were fought, the policies that resulted, and the history that followed will haunt the world for many years to come.

Iraq’s attempts to annex Kuwait in 1990 would bring some of the most powerful nations into collision, and would set in motion a series of events that would give rise to the Global War on Terror, the rise of ISIS, and an ongoing instability in the region that frustrates the West to this day.

To understand the beginning of this story, one must go back in time to the Iranian Revolution in 1979, where a crucial ally of the United States of America at the time – Iran – was in turmoil because of public discontent with the leadership of its shah, Mohammad Reza Pahlavi.

Iran’s combination of oil resources and strategic geographic location made it highly profitable for the shah and his allies in the West over the years, and a relationship emerged where Iran’s government, flush with oil money, kept America’s defense establishment in business.

For years, the shah had been permitted to purchase nearly any American weapons system he pleased, no matter how advanced or powerful, and Congress was all too pleased to give it to him.

The Vietnam War had broken the U.S. military and hollowed out the resources of the armed services, but the defense industry needed large contracts if it was to continue to support America.

Few people realize that Iran, under the Shah, was one of the most important client states in the immediate post-Vietnam era, making it possible for America to maintain production lines of top-of-the-line destroyers, fighter planes, engines, missiles, and many other vital elements of the Cold War’s arms race against the Soviet Union. As an example, the Grumman F-14A Tomcat, America’s premier naval interceptor of 1986 “Top Gun” fame, would never have been produced in the first place if it were not for the commitment of the Iranians as a partner nation in the first batch of planes.

When the Iranian Revolution occurred, an embarrassing ulcer to American interests emerged in Western Asia, as one of the most important gravity centers of geopolitical power had slipped out of U.S. control. Iran, led by an ultra-nationalistic religious revolutionary government, and armed with what was at the time some of the most powerful weapons in the world, had gone overnight from trusted partner to sworn enemy.

Historically, U.S. policymakers have typically preferred to contain and buffer enemies rather than oppose them directly. Iraq, which had also gone through a regime change in July of 1979 with the rise of Saddam Hussein in a bloody Baath Party purge, was a rival to Iran, making it a prime candidate to be America's new ally in the Middle East.

The First Persian Gulf War: A Prelude

Hussein, a brutal, transactional-minded leader who rose to power through a combination of violence and political intrigue, always exploited opportunities. Recognizing Iran's potential to overshadow a region he deemed himself alone worthy to dominate, Hussein used the historical disagreement over ownership of the strategic yet narrow Shatt al-Arab waterway that divided Iran from Iraq to start a war on September 22, 1980.

Iraq, flush with over $33 billion in oil profits, had become formidably armed with a modern military that was supplied by numerous Western European states and, bizarrely, even the Soviet Union as well. Hussein, like Nazi Germany’s Adolf Hitler, had a fascination for superweapons and sought to amass a high-tech military force that could not only crush Iran, but potentially take over the entire Middle East.

Hussein's bizarre arsenal would eventually include everything from modified Soviet ballistic missiles (the "al-Hussein") to Dassault Falcon 50 corporate jets modified to carry anti-ship missiles, a nuclear weapons program at Osirak, and even work on a supergun, nicknamed Project Babylon, capable of firing telephone-booth-sized projectiles into orbit.

Assured of a quick campaign against Iran and tacitly supported by the United States, Hussein saw anything but a decisive victory, and spent almost a decade in a costly war of attrition with Iran. Hussein, who constantly executed his own military officers for making tactical withdrawals or failing in combat, denied his military the ability to learn from defeats and handicapped his army by his own micromanagement.

Iraq’s Pokémon-like “gotta catch ‘em all” model of military procurement during the war even briefly put it at odds with the United States on May 17, 1987, when one of its heavily armed Falcon 50 executive jets, disguised on radar as a Mirage F1EQ fighter, accidentally launched a French-acquired Exocet missile against a U.S. Navy frigate, the USS Stark. President Ronald Reagan, though privately horrified at the loss of American sailors, still considered Iraq a necessary counterweight to Iran, and used the Stark incident to increase political pressure on Iran.

While Iraq had begun its war against Iran in the black, years of excessive military spending, meaningless battles, and rampant destruction of the Iraqi army had taken their toll. Hussein's war had put the country over $37 billion in debt, much of it owed to neighboring Kuwait.

Faced with a strained economy, tens of thousands of soldiers returning wounded from the war, and a military that was virtually on the brink of deposing Saddam Hussein just as he had deposed his predecessor Ahmed Hassan al-Bakr in 1979, Iraq had no choice but to end its war against Iran.

Both Iran and Iraq would ultimately submit to a UN-brokered ceasefire, but ironically, one of the decisive elements in bringing the first Persian Gulf war to a close would be not the militaries of either country but the U.S. military, which launched a crippling air and naval attack against Iranian forces on April 18, 1988.

Iran, which had mined important sailing routes of the Persian Gulf as part of its area denial strategy during the war, succeeded on April 14, 1988 in striking the USS Samuel B. Roberts, an American frigate deployed to the region to protect shipping.

In response, the U.S. military retaliated with Operation: Praying Mantis, which hit Iranian oil platforms (which had been reconfigured as offensive gun platforms), naval vessels, and other military targets. The battle, which remains to this day the largest carrier and surface ship battle since World War II, resulted in the destruction of most of Iran's navy and was a major contributing factor in de-fanging Iran for the decade to come.

Kuwait and Oil

Saddam Hussein, claiming victory over Iran amidst the UN ceasefire, and now faced with a new U.S. president, George H.W. Bush in 1989, felt that the time was right to consolidate his power and pull his country back from collapse. In Hussein’s mind, he had been the “savior” of the Arab and Gulf States, who had protected them during the Persian Gulf war against the encroachment of Iranian influence. As such, he sought from Kuwait a forgiveness of debts incurred in the war with Iran, but would find no such sympathy. The 1990s were just around the corner, and Kuwait had ambitions of its own to grow in the new decade as a leading economic powerhouse.

Frustrated and outraged by what he perceived was a snub, Hussein reached into his playbook of once more leveraging territorial disputes for political gain and accused Kuwait of stealing Iraqi oil by means of horizontal slant drilling into the Rumaila oil fields of southern Iraq.

Kuwait found itself in an unenviable situation neighboring the heavily armed Iraq, and as talks over debt and oil continued, the mighty Iraqi Republican Guard appeared to be gearing up for war. Most political observers at the time, including many Arab leaders, felt that Hussein was merely posturing and that it was a grand bluff to maintain his image as a strong leader. For Hussein to invade a neighboring Arab ally was unthinkable at the time, especially given Kuwait’s position as an oil producer.

On July 25, 1990, U.S. Ambassador to Iraq, April Glaspie, met with President Saddam Hussein and his deputy, Tariq Aziz on the topic of Kuwait. Infamously, Glaspie is said to have told the two, “We have no opinion on your Arab/Arab conflicts, such as your dispute with Kuwait. Secretary Baker has directed me to emphasize the instruction, first given to Iraq in the 1960s, that the Kuwait issue is not associated with America.”

While the George H.W. Bush administration’s intentions were obviously aimed at taking no side in a regional territorial dispute, Hussein, whose personality was direct and confrontational, likely interpreted the Glaspie meeting as America backing down.

In the Iraqi leader’s eyes, one always takes the initiative and always shows an enemy their dominance. For a powerful country such as the United States to tell Hussein that it had “no opinion” on Arab/Arab conflict, this was most likely a sign of permission or even weakness that the Iraqi leader felt he had to exploit.

America, still reeling from the shadow of the Vietnam War failure and the disastrous Navy SEAL incident in Operation: Just Cause in Panama, may have appeared in that moment to Hussein as a paper tiger that could be out-maneuvered or deterred by aggressive action. Whatever the case was, Iraq stunned the world when just days later on August 2, 1990 it invaded Kuwait.

The Invasion of Kuwait

American military forces and intelligence agencies had been closely monitoring the buildup of Iraqi forces for what appeared like an invasion of Kuwait, but it was still believed right up to the moment of the attack that perhaps Saddam Hussein was only bluffing. The United States Central Command had set WATCHCON 1 – or Watch Condition One – the highest state of non-nuclear alertness in the region just prior to Iraq’s attack, and was regularly employing satellites, reconnaissance aircraft, and electronic surveillance platforms to observe the Iraqi Army.

Nevertheless, if there is one mantra that perfectly encapsulates the posture of the United States and European powers from the beginnings of the 20th century to the present, it is “Western countries too slow to act.” As is often the result with aggressive nations that challenge the international order, Iraq plowed into Kuwait and savaged the local population.

While America and her allies have always had the best technologies, the best weapons, and the best early warning systems and sensors, for more than a century these have been rendered useless when the information they provide is not acted upon to stop an attack or threat. Such was the case with Iraq: all of the warning signs of an imminent attack were present, but no action was taken to stop it.

Kuwait's military courageously fought Iraq's invading army, even notably fighting air battles with their American-made A-4 Skyhawks, some of which launched from highways after their air bases were destroyed. But the Iraqi army, full of troops who had fought against Iran and part of the fourth largest military in the world at that time, was simply too powerful to overcome. Some 140,000 Iraqi troops flooded into Kuwait and seized one of the richest oil-producing nations in the region.

As Hussein’s military overran Kuwait, sealed its borders, and began plundering the country and ravaging its civilian population, the worry of the United States immediately shifted from Kuwait to Saudi Arabia, for fear that the kingdom might be next. On August 7, 1990, President Bush commenced “Operation: Desert Shield,” a military operation to defend Saudi Arabia and prevent any further advance of the Iraqi army.

At the time that Operation: Desert Shield commenced, I was living in Hampton Roads, Virginia and my father was a lieutenant colonel assigned to Tactical Air Command headquarters at the nearby Langley Air Force Base, and 48 F-15 Eagle fighter planes from that base immediately deployed to the Middle East in support of that operation. In the days that followed, our base became a flurry of activity and I remember seeing a huge buildup of combat aircraft from all around the United States forming at Langley.

President Bush, himself a U.S. Navy pilot who fought in World War II, was all too familiar with what could happen when a megalomaniacal dictator started invading his neighbors. Whatever congeniality of convenience had existed between the U.S. and Iraq to oppose Iran was now a thing of the past in the wake of the occupation of Kuwait.

Having fought against both the Nazis and Imperial Japanese in WWII, Bush saw many similarities to Adolf Hitler in Saddam Hussein, and immediately began comparing the Iraqi leader and his government to the Nazis in numerous speeches and public appearances as debates raged over what the U.S. should do regarding Kuwait.

As retired members of previous presidential administrations urged caution and called for long-term sanctions on Iraq rather than a kinetic military response, the American public, still haunted by the Vietnam experience, largely felt that the matter in Kuwait should not involve military forces. Protests broke out across America, with crowds shouting "Hell no, we won't go to war for Texaco" and others singing traditional protest songs of peace like "We Shall Overcome."

Bush, persistent in his beliefs that Iraq’s actions were intolerable, made every effort to keep taking the moral case for action to the American public in spite of these pushbacks. As a leader seasoned by the horrors of war and combat, Bush must have known, as Henry Kissinger once said, that leadership is not about popularity polls, but about “an understanding of historical cycles and courage.”

On September 11, 1990, before a joint session of Congress, Bush gave a fiery address that still stands as one of the most impressive presidential addresses in history.

“Vital issues of principle are at stake. Saddam Hussein is literally trying to wipe a country off the face of the Earth. We do not exaggerate,” President Bush would say before Congress. “Nor do we exaggerate when we say Saddam Hussein will fail. Vital economic interests are at risk as well. Iraq itself controls some 10 percent of the world’s proven oil reserves. Iraq, plus Kuwait, controls twice that. An Iraq permitted to swallow Kuwait would have the economic and military power, as well as the arrogance, to intimidate and coerce its neighbors, neighbors who control the lion’s share of the world’s remaining oil reserves. We cannot permit a resource so vital to be dominated by one so ruthless, and we won’t!”

Members of Congress erupted in roaring applause at Bush’s words, and he issued a stern warning to Saddam Hussein: “Iraq will not be permitted to annex Kuwait. And that’s not a threat, that’s not a boast, that’s just the way it’s going to be.”

Ejecting Saddam from Kuwait

As America prepared for action, another man in Saudi Arabia was also making promises to defeat Saddam Hussein and his military. Osama bin Laden, who had fought in the earlier war in Afghanistan as part of the Mujahideen that resisted the Soviet occupation, now offered his services to Saudi Arabia, pledging to wage jihad to force Iraq out of Kuwait just as he had forced the Soviets out of Afghanistan. The Saudis, however, would hear none of it; having already secured the protection of the United States and its powerful allies, they brushed bin Laden aside as a useless bit player on the world stage.

Herein the seeds of a future conflict were sown: not only did bin Laden take offense at being rejected by the Saudi government, but he also regarded the presence of American military forces on holy Saudi soil as blasphemous and a morally corrupting influence on the Saudi people.

In fact, the sight of female U.S. Air Force personnel in Saudi Arabia without traditional coverings, or driving vehicles, prompted many Saudi women to begin petitioning their government for more rights – and even, in some instances, to commit acts of civil disobedience. This caused still more outrage among a number of fundamentalist groups in Saudi Arabia and lent additional support, covert in some instances, to bin Laden and other jihadist leaders.

Despite these cultural tensions boiling beneath the surface, President Bush successfully persuaded not only his own Congress but the United Nations as well to empower the formation of a global coalition of 35 nations to eject Iraqi occupying forces from Kuwait and to protect Saudi Arabia and the rest of the Gulf from further aggression.

On November 29, 1990, the die was cast when the United Nations passed Resolution 678, authorizing “Member States co-operating with the Government of Kuwait, unless Iraq on or before 15 January 1991 [withdraws from Kuwait] … to use all necessary means … to restore international peace and security in the area.”

Subsequently, on January 15, 1991, President Bush issued an ultimatum to Saddam Hussein to leave Kuwait. Hussein ignored the threat, believing that America was weak and that its public, scarred by Vietnam, would recoil in knee-jerk fashion at the sight of fallen soldiers. Hussein believed that he could not only cause the American people to back down, but also unravel Arab support for the UN coalition by enticing Israel to attack Iraq. He therefore persisted in occupying Kuwait and boasted that a “Mother of all Battles” was about to commence, from which Iraq would emerge victorious.

History, however, shows that this was not the case. Days later, on the evening of January 16, 1991, Operation: Desert Shield became Operation: Desert Storm, when a massive aerial bombardment and air superiority campaign commenced against Iraqi forces. Unlike prior wars, which combined a ground invasion with supporting air forces, Desert Storm opened with a bombing campaign of heavy attacks by aircraft and naval-launched cruise missiles against Iraq.

The operational name “Desert Storm” may have been influenced in part by a war plan developed months earlier by Air Force planner Colonel John A. Warden, who conceived an attack strategy named “Instant Thunder” that used conventional, non-nuclear airpower in a precise manner to topple Iraqi defenses.

A number of elements from Warden’s top secret plan were integrated into the opening shots of Desert Storm’s air campaign, as U.S. and coalition aircraft knocked out Iraqi radars, missile sites, command headquarters, power stations, and other key targets in just the first night alone.

The Vietnam air campaigns had been largely political, gradual escalations of force. Having suffered heavy losses there, the Air Force this time wanted, as General Chuck Horner would later explain, “instant” and “maximum” escalation, so that the enemy would have no time to react or rearm.

This was precisely what happened, so much so that the massive Iraqi air force was either annihilated by bombs on the ground, shot down by coalition combat air patrols, or forced to flee into neighboring Iran.

A number of radical operations and new weapons were employed in the air campaign of Desert Storm. For one, the U.S. Air Force had secretly converted a number of nuclear AGM-86 Air Launched Cruise Missiles (ALCMs) into conventional, high-explosive precision-guided missiles and loaded them onto seven B-52 bombers for a January 17 night raid called Operation: Senior Surprise.

Known internally and informally to the B-52 pilots as “Operation: Secret Squirrel,” the cruise missiles knocked out numerous Iraqi defenses and opened the door for more coalition aircraft to surge against Saddam Hussein’s military.

The Navy employed its own covert strikes against Iraq, firing BGM-109 Tomahawk Land Attack Missiles (TLAMs) that had likewise been converted to carry high-explosive (non-nuclear) warheads. Because the early BGM-109s were guided and aimed by a primitive digital scene matching area correlator (DSMAC) that took digital photos of the ground below and compared them with pre-programmed topography in its terrain computer, the flat deserts of Iraq were thought to be problematic for cruise missile navigation. The Navy came up with a highly controversial solution: secretly fire the missiles into Iran – a violation of Iranian airspace and international law – use the mountain ranges there as aiming points, and then fly them into Iraq.

The plan worked, and the Navy would ultimately rain down some 288 TLAMs on Iraq, destroying hardened hangars, runways, parked aircraft, command buildings, and scores of other targets in highly accurate strikes.

Part of the air war came home personally to me when a U.S. Air Force B-52, serial number 58-0248, flying a nighttime raid over Iraq, was accidentally fired upon by a friendly F-4G “Wild Weasel” whose crew mistook the lumbering bomber’s AN/ASG-21 radar-guided tail gun for an Iraqi air defense platform. The Wild Weasel fired an AGM-88 High-speed Anti-Radiation Missile (HARM) that struck and exploded in the B-52’s tail, but left the aircraft in flyable condition.

At the time, my family had moved to Andersen AFB in Guam, and 58-0248 made for Guam to land for repairs. When the B-52 arrived, it was parked in a cavernous hangar, where crews immediately began patching it up. My father, who always wanted me to learn something about the real world and gain an appreciation for America, brought me to the hangar to see the stricken B-52, which had been affectionately nicknamed “In HARM’s Way.”

I never forgot that moment, and it caused me to realize that the war was more than just some news broadcast we watched on TV, and that war had real consequences for not only those who fought in it, but people back home as well. I remember feeling an intense surge of pride as I saw that B-52 parked in the hangar, and I felt that I was witnessing history in action.

Ultimately, the air war against Saddam Hussein’s military went on for a brutal six weeks, leaving many of his troops shell-shocked, demoralized, and eager to surrender. In eight years of fighting Iran, the Iraqi army had never known the kind of destructive scale or deadly precision that coalition forces were able to bring to bear against them.

Once the ground campaign commenced against Iraqi forces on February 24, 1991, that portion of Operation: Desert Storm lasted a mere 100 hours before a cease-fire was called – not because Saddam Hussein had pleaded for survival, but because back in Washington, D.C., national leaders watching the war on CNN began to see a level of carnage that they were not prepared for.

Gen. Colin Powell, seeing that most of the coalition’s UN objectives had essentially been achieved, personally lobbied for the campaign to wrap up, feeling that further destruction of Iraq would be “unchivalrous” and fearing the loss of any more Iraqi or American lives. It was also feared that if America made a play for regime change in Iraq in 1991, the Army would be left holding the bag in securing and rebuilding the country – something that would not only be costly, but might turn the Arab coalition against America. On February 28, 1991, the U.S. officially declared a cease-fire.

The Aftermath

Operation: Desert Storm successfully accomplished the UN objectives established for the coalition forces, and it liberated Kuwait. But the war produced a number of side effects that would haunt America and the rest of the world for decades to come.

First, Saddam Hussein remained in power. As a result, the U.S. military would remain in the region for years as a defensive contingent, not only continuing to inflame existing cultural tensions in Saudi Arabia, but also becoming a target for jihadist terrorist attacks, including the Khobar Towers bombing on June 25, 1996 and the USS Cole bombing on October 12, 2000.

Osama bin Laden’s al Qaeda terrorist group would ultimately change the modern world as we knew it when his men hijacked commercial airliners and flew them into the Pentagon and World Trade Center on September 11, 2001. It should not be lost on historical observers that 15 of the 19 hijackers that day were Saudi citizens, a strategic attempt by bin Laden to drive a wedge between the United States and Saudi Arabia to get American military forces out of the country.

9/11 would also provide an opportunity for George H.W. Bush’s son, President George W. Bush, to attempt to take down Saddam Hussein. Many members of the new Bush Administration were veterans of the elder Bush’s administration during Desert Storm and felt that his decision not to “take out” the Iraqi dictator had been a mistake. And while the 2003 campaign against Iraq did succeed in toppling Baathist rule and changing the regime, it allowed many disaffected former Iraqi officers and jihadists to rise up against the West, which ultimately led to the rise of ISIS in the region.

It is my hope that the next generation of college and high school students who read this essay and reflect on world affairs will understand that history is often complex and that every action taken leaves ripples in our collective destinies. A Holocaust survivor once told me, “There are times when the world goes crazy, and catches on fire. Desert Storm was one such time when the world caught on fire.”

What can we learn from the invasion of Kuwait, and what lessons can we take forward into our future? Let us remember always that allies are not always friends; victories are never permanent; and sometimes even seemingly unrelated personalities and forces can lead to world-changing events.

Our young people, especially those who wish to enter into national service, must study history and seek to possess, as the Bible says in the book of Revelation 17:9 in the Amplified Bible translation, “a mind to consider, that is packed with wisdom and intelligence … a particular mode of thinking and judging of thoughts, feelings, and purposes.”

Indeed, sometimes the world truly goes crazy and catches on fire, and some may say that 2020 is such a time. Let us study the past now, and prepare for the future!

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologist, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of the novel American Kiss: A Collection of Short Stories.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams
Ronald Reagan Speech, Brandenburg Gate & Berlin Wall 1987

“Mr. Gorbachev, tear down this wall!” – Ronald Reagan

After World War II, a Cold War erupted between the world’s two superpowers – the United States and the Soviet Union. Germany was occupied and then divided after the war, as was its capital, Berlin. The Soviet-backed East German regime erected the Berlin Wall in 1961, and it became a symbol of the divide between East and West in the Cold War, and between freedom and tyranny.

During the 1960s and 1970s, the superpowers entered into a period of détente, or decreasing tensions. However, the Soviet Union took advantage of détente, using revenue from rising oil prices and arms sales to engage in a massive arms build-up, support communist insurrections in developing nations around the globe, and invade Afghanistan.

Ronald Reagan was elected president in 1980 during a time of foreign-policy reversals, including the aftermath of the Vietnam War and the Iranian Hostage Crisis. He blamed détente for strengthening and emboldening the Soviets and sought to restore American strength abroad.

As president, Reagan instituted a tough stance towards the Soviets that was designed to reverse their advances and win the Cold War. His administration supported the Polish resistance movement known as Solidarity, increased military spending, started the Strategic Defense Initiative (SDI), and armed resistance fighters around the world, including the mujahideen battling a Soviet invasion in Afghanistan.

Reagan had a long history of attacking communist states and the idea of communism itself, and that history shaped his strategic outlook. Like many Americans in the decades after World War II, he was concerned that Soviet dominance in Eastern Europe would spread elsewhere. In 1952, Reagan compared communism to Nazism and other forms of totalitarianism characterized by a powerful state that limited individual freedoms.

“We have met [the threat] back through the ages in the name of every conqueror that has ever set upon a course of establishing his rule over mankind,” he said. “It is simply the idea, the basis of this country and of our religion, the idea of the dignity of man, the idea that deep within the heart of each one of us is something so godlike and precious that no individual or group has a right to impose his or its will upon the people.”

In a seminal televised speech in 1964 called “A Time for Choosing,” Reagan stated that he believed there could be no accommodation with the Soviets. “We cannot buy our security, our freedom from the threat of the bomb by committing an immorality so great as saying to a billion human beings now in slavery behind the Iron Curtain, ‘Give up your dreams of freedom because to save our own skins, we are willing to make a deal with your slave-masters.’”

Reagan targeted the Berlin Wall as a symbol of communism in a 1967 televised town hall debate with Robert Kennedy. “I think it would be very admirable if the Berlin Wall should…disappear,” Reagan said. “We just think that a wall that is put up to confine people, and keep them within their own country…has to be somehow wrong.”

In 1978, Reagan visited the wall and heard the story of Peter Fechter, one of hundreds who were shot by East German police while trying to escape to freedom over the Berlin Wall. As a result, Reagan told an aide, “My idea of American policy toward the Soviet Union is simple, and some would say simplistic. It is this: We win and they lose.”

As president, he continued his unrelenting attack on the idea of communism according to his moral vision of the system. In a 1982 speech to the British Parliament, he predicted that communism would end up “on the ash heap of history,” and said the wall was “the signature of the regime that built it.” When he visited the wall during the same trip, he stated, “It’s as ugly as the idea behind it.” In a 1983 speech, he referred to the Soviet Union as an “evil empire.”

Reagan went to West Berlin to speak during a ceremony commemorating the 750th anniversary of the city and faced a choice. He could confront the Soviets about the wall, or he could deliver a speech without controversy.

In June 1987, many officials in his administration and in West Germany were opposed to any provocative words or actions during the anniversary speech. Many Germans also did not want Reagan to deliver his speech anywhere near the wall and feared anything that might be perceived as an aggressive signal. Secretary of State George Shultz and Chief of Staff Howard Baker questioned the speech and asked the president and his speechwriters to tone down the language. Deputy National Security Advisor Colin Powell and other members of the National Security Council wanted to alter the speech and offered several revisions. Reagan insisted on speaking next to the Berlin Wall and determined that he would use the occasion to confront the threat the wall posed to human freedom.

Reagan and his team arrived in West Berlin on June 12. He spoke to reporters and nervous German officials, telling them, “This is the only wall that has ever been built to keep people in, not keep people out.” Meanwhile, in East Berlin, East German secret police and Soviet KGB agents cordoned off an area a thousand yards wide opposite the spot where Reagan was to speak on the other side of the wall. They wanted to ensure that no one could hear the message of freedom.

Reagan spoke at the Brandenburg Gate with the huge, imposing wall in the background. “As long as this gate is closed, as long as this scar of a wall is permitted to stand, it is not the German question alone that remains open, but the question of freedom for all mankind.”

Reagan challenged Soviet General Secretary Mikhail Gorbachev directly, stating, “If you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization: Come here to this gate! Mr. Gorbachev, open this gate! Mr. Gorbachev, tear down this wall!”

He finished by predicting the wall would not endure because it stood for oppression and tyranny. “This wall will fall. For it cannot withstand faith; it cannot withstand truth. The wall cannot withstand freedom.” No one imagined that the Berlin Wall would fall only two years later on November 9, 1989, as communism collapsed across Eastern Europe.

A year later, Reagan was at a summit with Gorbachev in Moscow and addressed the students at Moscow State University. “The key is freedom,” Reagan boldly and candidly told them. “It is the right to put forth an idea, scoffed at by the experts, and watch it catch fire among the people. It is the right to dream – to follow your dream or stick to your conscience, even if you’re the only one in a sea of doubters.” Ronald Reagan believed that he had a responsibility to bring an end to the Cold War and to eliminate nuclear weapons, for the benefit of both the United States and the world in an era of peace. He dedicated himself to achieving this goal. Partly due to these efforts, the Berlin Wall fell in 1989, and communism collapsed in the Soviet Union by 1991.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Scot Faulkner

The election of Ronald Reagan on November 4, 1980 was one of the two most important elections of the 20th Century. It was a revolution in every way.

In 1932, Franklin Roosevelt (FDR) decisively defeated one-term incumbent Herbert Hoover by 472-59 Electoral votes. His election ushered in the era of aggressive liberalism, expanding the size of government and establishing diplomatic relations with the Soviet Union. Roosevelt’s inner circle, his “brain trust,” were dedicated leftists, several of whom had conferred with Soviet officials on policy issues prior to 1932.

In 1980, Ronald Reagan decisively defeated one-term incumbent Jimmy Carter by 489-49 Electoral votes. His election ended the liberal era, shrank the size of government, and rebuilt America’s military, diplomatic, economic, and intelligence capabilities. America reestablished its leadership in the world, ending the Soviet Empire and, ultimately, the Soviet Union itself.

Reagan was a key leader in creating and promoting the conservative movement, whose policy and political operatives populated and guided his administration. He was a true “thought leader” who defined American conservatism in the late 20th Century. Through his writings, speeches, and radio program, Reagan laid the groundwork, and shaped the mandate, for one of the most impactful Presidencies in American history.

The road from Roosevelt’s “New Deal” to Reagan’s Revolution began in 1940.

FDR, at the height of his popularity, chose to run for an unprecedented third term. Roosevelt steered ever more leftward, selecting Henry Wallace as his running mate; Wallace would run as a socialist under the Progressive Party banner in 1948. Republican Wendell Willkie was the first private-sector businessman to become a major party’s nominee.

Willkie had mounted numerous legal challenges to Roosevelt’s regulatory overreach. Though those challenges failed, Willkie’s legacy inspired a generation of economists and activists to unite against big government.

As the Allied victory in World War II became inevitable, the Willkie activists, along with leading conservative economists from across the globe, established policy organizations, “think tanks,” and publications to formulate and communicate an alternative to Roosevelt’s New Deal.

Human Events, the premier conservative newspaper, began publishing in 1944. The Foundation for Economic Education was founded in 1946.

In 1947, conservative, “free market,” anti-regulatory economists met at a resort on Mont Pelerin, near Montreux, Switzerland. The greatest conservative minds of the 20th Century, including Friedrich Hayek, Ludwig von Mises, and Milton Friedman, organized the “Mont Pelerin Society” to counter the globalist economic policies arising from the Bretton Woods Conference. The Bretton Woods economists had met at the Mount Washington Hotel, at the base of Mount Washington in New Hampshire, to launch the World Bank and International Monetary Fund.

Conservative writer and thinker, William F. Buckley Jr. founded National Review on November 19, 1955. His publication, more than any other, would serve to define, refine and consolidate the modern Conservative Movement.

The most fundamental change was realigning conservatism with the international fight against the Soviet Union, which was leading global Communist expansion. Up until this period, American conservatives had tended to be isolationist. National Review’s array of columnists developed “Fusionism,” which provided the intellectual justification for conservatives favoring limited government at home while aggressively fighting Communism abroad. In 1958, the American Security Council was formed to focus the efforts of conservative national security experts on confronting the Soviets.

Conservative Fusionism was politically launched by Senator Barry Goldwater (R-AZ) during the Republican Party Platform meetings for their 1960 National Convention. Conservative forces prevailed. This laid the groundwork for Goldwater to run and win the Republican Party Presidential nomination in 1964.

The policy victories of Goldwater and Buckley inspired the formation of the Young Americans for Freedom (YAF), the major conservative youth movement. Meeting at Buckley’s home in Sharon, Connecticut on September 11, 1960, YAF adopted a manifesto that became the Fusionist canon. The conservative movement added additional policy centers, such as the Hudson Institute, founded on July 20, 1961.

Goldwater’s campaign was a historic departure from traditional Republican politics. His plain-spoken assertion of limited government and aggressive action against the Soviets inspired many, but scared many more. President John F. Kennedy’s assassination had catapulted Vice President Lyndon B. Johnson into the Presidency. LBJ had a vision of an even larger Federal Government, designed to mold urban minorities into being perpetually beholden to Democrat politicians.

Goldwater’s alternative vision was trounced on election day, but the seeds for Reagan’s Conservative Revolution were sown.

Reagan was unique in American politics. He was a pioneer in radio broadcasting and television. His movie career made him famous and wealthy. His tenure as President of the Screen Actors Guild thrust him into the headlines as Hollywood confronted domestic communism.

Reagan’s pivot to politics began when General Electric hired him to host their popular television show, General Electric Theater. His contract included touring GE plants to speak about patriotism, free market economics, and anti-communism. His new life within corporate America introduced him to a circle of conservative businessmen who would become known as his “Kitchen Cabinet.”

The Goldwater campaign reached out to Reagan to speak on behalf of their candidate on a television special during the last week of the campaign. On October 27, 1964, Reagan drew upon his GE speeches to deliver “A Time for Choosing.” His inspiring address became a political classic, which included lines that would become the core of “Reaganism”:

“The Founding Fathers knew a government can’t control the economy without controlling people. And they knew when a government sets out to do that, it must use force and coercion to achieve its purpose. So, we have come to a time for choosing … You and I are told we must choose between a left or right, but I suggest there is no such thing as a left or right. There is only an up or down. Up to man’s age-old dream—the maximum of individual freedom consistent with order—or down to the ant heap of totalitarianism.”

The Washington Post declared Reagan’s “Time for Choosing” “the most successful national political debut since William Jennings Bryan electrified the 1896 Democratic convention with his Cross of Gold speech.” It immediately established Reagan as the heir to Goldwater’s movement.

The promise of Reagan fulfilling the Fusionist vision of Goldwater, Buckley, and a growing conservative movement inspired the formation of additional groups, such as the American Conservative Union in December 1964.

In 1966, Reagan trounced two-term Democrat incumbent Pat Brown to become Governor of California, winning with 57.5 percent of the vote. Reagan’s two terms became the epicenter of successful conservative domestic policy, attracting top policy and political operatives who would serve him throughout his Presidency.

Retiring after two terms, Reagan devoted full time to being the voice, brain, and face of the Conservative Movement. This included a radio show that was followed by over 30 million listeners.

In 1976, the ineffectual moderate Republicanism of President Gerald Ford led Reagan to mount a challenge, and he came close to the unprecedented feat of unseating his party’s incumbent. His concession speech on the last night of the Republican National Convention became another political classic, and it launched his successful march to the White House.

Reagan’s 1980 campaign was aided by a more organized, broad, and capable Conservative Movement. Reagan’s “California Reaganites” were linked to Washington, DC-based “Fusionists” and to conservative grassroots activists embedded in Republican Party units across America. The Heritage Foundation, founded on February 16, 1973, had become a major conservative policy center. A new hub for conservative activists, The Conservative Caucus, came into existence in 1974.

Starting in 1978, Reagan’s inner circle, including his “Kitchen Cabinet,” worked seamlessly with this vast network of conservative groups: The Heritage Foundation, Kingston, Stanton, Library Court, Chesapeake Society, Monday Club, Conservative Caucus, American Legislative Exchange Council, Committee for the Survival of a Free Congress, the Eagle Forum, and many others. They formed a unified and potent political movement that overwhelmed Republican moderates to win the nomination and then buried Jimmy Carter and the Democrat Party in November 1980.

After his landslide victory, which also swept in the first Republican Senate majority since 1954, Reaganites and Fusionists placed key operatives into Reagan’s transition. They identified over 17,000 positions that affected Executive Branch operations. A separate team identified the key positions in each cabinet department and major agency that had to be under Reagan’s control in the first weeks of his presidency.

On January 21, 1981, Reagan’s personnel team immediately removed every Carter political appointee. These Democrat functionaries were walked out the door, their identification badges collected, their files sealed, and their security clearances terminated. The Carter era’s impotent foreign policy and intrusive domestic policy ended completely and instantaneously.

Reagan went on to lead one of the most successful Presidencies in American history. His vision of a “shining city on a hill” continues to inspire people around the world to seek better lives through freedom, open societies, and economic liberty.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: Scot Faulkner
Iranian Students Climb Wall of U.S. Embassy, Tehran, Nov. 1979

The long, tragic road to the September 11, 2001 terror attacks began with President Jimmy Carter – his administration’s involvement in the Iranian Revolution and its fundamentally weak response to the Iranian Hostage Crisis.

The Iranian Hostage Crisis was the most visible act of the Iranian Revolution. Beginning on November 4, 1979, 52 Americans were imprisoned in brutal conditions for 444 days. The world watched as the Carter Administration repeatedly failed to free the hostages, through both poor diplomacy and a fiasco of a rescue attempt.

The result was the crippling of U.S. influence throughout the Middle East and the spawning of radical Islamic movements that terrorize the world to this day.

Islam’s three major sects, Sunni, Shiite, and Sufi, all harbor the seeds of violence and hatred. In 1881, a Sufi mystic ignited the Mahdi Revolt in the Sudan, leading to eighteen years of death and misery throughout the upper Nile. During World War II, the Sunni Grand Mufti of Jerusalem befriended Hitler and helped Heinrich Himmler form Islamic Stormtrooper units to kill Jews in the Balkans.

After World War II, Islam secularized as mainstream leaders embraced Western economic interests to tap the region’s vast oil and gas reserves.

Activists became embroiled in the Middle East’s Cold War chess board, aiding U.S. or Soviet interests.

The Iranian Revolution changed that. Through the success of the Iranian Revolution, Islamic extremists of all sects embraced the words of Shiite Ayatollah Ruhollah Khomeini:

“If the form of government willed by Islam were to come into being, none of the governments now existing in the world would be able to resist it; they would all capitulate.”

Islamic dominance became an end in and of itself.

This did not have to happen at all.

Iran has been a pivotal regional player for 2,500 years. The Persian Empire was the bane of ancient Greece. As the Greek Empire withered, Persia, later Iran, remained a political, economic, and cultural force. This is why their 1979 Revolution and subsequent confrontation with the U.S. inspired radicals throughout the Islamic world to become the Taliban, ISIS and other terrorists of today.

Iran’s modern history began as part of the East-West conflict following World War II. The Soviets heavily influenced and manipulated Iran’s first elected government. On August 19, 1953, British and American intelligence toppled that government and returned Shah Mohammad Reza to power.

“The Shah,” as he became known globally, was reform-minded. He launched his “White Revolution” to build a modern, pro-West, pro-capitalist Iran in 1963. The Shah’s “Revolution” built the region’s largest middle class, and broke centuries of tradition by enfranchising women. It was opposed by many traditional powers, including fundamentalist Islamic leaders like the Ayatollah Ruhollah Khomeini. Khomeini’s agitation for violent opposition to the Shah’s reforms led to his arrest and exile.

Throughout his reign, the Shah was vexed by radical Islamic and communist agitation. His secret police brutally suppressed fringe dissidents. This balancing act between western reforms and control worked well, with a trend towards more reforms as the Shah aged. The Shah enjoyed warm relationships with American Presidents of both parties and was rewarded with lavish military aid.

That was to change in 1977.

From the beginning, the Carter Administration expressed disdain for the Shah. President Carter pressed for the release of political prisoners. The Shah complied, allowing many radicals the freedom to openly oppose him.

Not satisfied with the pace or breadth of the Shah’s human rights reforms, Carter envoys began a dialogue with the Ayatollah Khomeini, first at his home in Iraq and more intensely when he moved to a Paris suburb.

Indications that the U.S. was souring on the Shah emboldened dissidents across the political spectrum to confront the regime. Demonstrations, riots, and general strikes began to destabilize the Shah and his government. In response, the Shah accelerated reforms. This was viewed as weakness by the opposition.

The Western media, especially the BBC, began to promote the Ayatollah as a moderate alternative to the Shah’s “brutal regime.” The Ayatollah assured U.S. intelligence operatives and State Department officials that he would only be the “figurehead” for a western parliamentary system.

During the fall of 1978, strikes and demonstrations paralyzed the country. The Carter Administration, led by Secretary of State Cyrus Vance and U.S. Ambassador to Iran William Sullivan, coalesced around abandoning the Shah and helping install Khomeini, who they viewed as a “moderate clergyman” who would be Iran’s “Gandhi-like” spiritual leader.

Time and political capital were running out. On January 16, 1979, the Shah, after arranging for an interim government, departed Iran and went into exile.

The balance of power now rested with the Iranian military.

While the Shah was preparing for his departure, General Robert Huyser, Deputy Commander of NATO, and his top aides arrived in Iran. They were there to neutralize the military leaders. Using ties of friendship, promises of aid, and assurances of safety, Huyser and his team convinced the Iranian commanders to allow the transitional government to finalize arrangements for Khomeini becoming part of the new government.

Many of these Iranian military leaders, and their families, were slaughtered as Khomeini and his Islamic Republican Guard toppled the transitional government and seized power during the spring of 1979. “It was a most despicable act of treachery, for which I will always be ashamed,” admitted one NATO general years later.

While Iran was collapsing, so were America’s intelligence capabilities.

One of President Carter’s earliest appointments was placing Admiral Stansfield Turner in charge of the Central Intelligence Agency (CIA). Turner immediately eviscerated the Agency’s human intelligence and clandestine units. He felt they had gone “rogue” during the Nixon-Ford era. He also thought electronic surveillance and satellites could do as good a job.

Turner’s actions led to “one of the most consequential strategic surprises that the United States has experienced since the CIA was established in 1947” – the Embassy Takeover and Hostage Crisis.

The radicalization of Iran occurred at lightning speed. Khomeini and his lieutenants remade Iran’s government and society into a totalitarian fundamentalist Islamic state. Anyone who opposed their Islamic Revolution was driven into exile, imprisoned, or killed.

Khomeini’s earlier assurances of moderation and working with the West vanished. Radicalized mobs turned their attention to eradicating all vestiges of the West. This included the U.S. Embassy.

The first attack on the U.S. Embassy occurred on the morning of February 14, 1979. Coincidentally, this was the same day that Adolph Dubs, the U.S. ambassador to Afghanistan, was kidnapped and fatally shot by Muslim extremists in Kabul. In Tehran, Ambassador Sullivan surrendered the U.S. Embassy and was able to resolve the occupation within hours through negotiations with the Iranian Foreign Minister.

Despite this attack, and the bloodshed in Kabul, nothing was done to close the Tehran Embassy, reduce personnel, or strengthen its defenses. During the takeover, Embassy personnel failed to burn sensitive documents because their incinerators malfunctioned; cheaper paper shredders had been installed in their place. During the 444-day occupation, rug weavers were employed to reconstruct the sensitive shredded documents, creating global embarrassment for America.

Starting in September 1979, radical students began planning a more extensive assault on the Embassy. This included daily demonstrations outside the U.S. Embassy to trigger an Embassy security response. This allowed organizers to assess the size and capabilities of the Embassy security forces.

On November 4, 1979, one of the demonstrations erupted into an all-out assault at the Embassy’s visa-processing public entrance. The assault leaders deployed approximately 500 students. Female students hid metal cutters under their robes, which were used to breach the Embassy gates.

Khomeini was in a meeting outside of Tehran and did not have prior knowledge of the takeover. He immediately issued a statement of support, declaring it “the second revolution” and the U.S. Embassy an “American spy den in Tehran.”

What followed was an unending ordeal of terror and deprivation for the 66 hostages, who, through various releases, were reduced to a core of 52. The 2012 film “Argo” chronicled the audacious escape of six Americans who had been outside the U.S. Embassy at the time of the takeover.

ABC News began a nightly update on the hostage drama. This became “Nightline.” During the 1980 Presidential campaign, it served as a nightly reminder of the ineffectiveness of President Carter.

On April 24, 1980, trying to break out of this chronic crisis, Carter initiated an ill-conceived and poorly executed rescue mission called Operation Eagle Claw. It ended with crashed helicopters and eight dead servicemen at the remote staging area in the Iranian desert, designated Desert One. Another attempt was made through diplomacy as part of a hoped-for “October Surprise,” but the Iranians cancelled the deal just as planes were being mustered at Andrews Air Force Base.

Carter paid the price for his Iranian duplicity. On November 4, 1980, Ronald Reagan obliterated Carter in the worst defeat suffered by an incumbent President since Herbert Hoover in 1932.



Guest Essayist: The Honorable Don Ritter

The election of Ronald Reagan in 1980 marked THE crucial turning point in winning the Cold War against Russia-dominated Communism, the USSR.

Reagan’s rise to national prominence began with the surge in communist insurgencies and revolutions worldwide that followed the fall of Saigon, and with it all of South Vietnam, on April 30, 1975. After 58,000 American lives and trillions in treasure lost over the tenures of five American Presidents, the United States abandoned the Vietnam War and left South Vietnam to the communists.

Communist North Vietnam, in league with fellow communist governments in Russia and China, accurately read the weakness of a new American President, Gerald Ford, and a new ‘anti-war’ Congress that resulted from the ‘Watergate’ scandal and President Richard Nixon’s subsequent resignation. In the minds of the communists, it was a signal opportunity to forcibly “unify,” read invade, the non-communist South with magnum force, armed to the teeth by both the People’s Republic of China and the USSR. President Nixon’s secret letter to South Vietnamese President Thieu, pledging all-out support of U.S. air and naval power if the communists broke the Paris Peace Agreement and invaded, was irrelevant once Nixon was gone. With the communist invasion beginning, seventy-four new members of Congress, all anti-war Democrats, guaranteed the “No” vote on the Ford Administration’s bill to provide $800 million for ammunition and fuel to the South Vietnamese military to roll their tanks and fly their planes. That bill lost in Congress by only one vote. The fate of South Vietnam was sealed. The people of South Vietnam, in what seemed then like an instant, were abandoned by their close American ally of some 20 years. Picture that.

Picture the ignominy of it all. Helicopters rescuing Americans and some chosen Vietnamese from rooftops while U.S. Marines staved off the desperate South Vietnamese who had worked with us for decades. Picture Vietnamese people clinging to helicopter skids and airplane landing gear in desperation, falling to their deaths as the aircraft ascended. Picture drivers of South Vietnamese tanks and pilots of fighter planes unable to engage for want of fuel. Picture famous South Vietnamese generals committing suicide rather than face certain torture and death in Re-Education Camps, read Gulags with propaganda lessons. Picture perhaps hundreds of thousands of “Boat People,” having set out on nearly anything that floated to escape the wrath of their conquerors, at the bottom of the South China Sea. Picture horrific genocide in Cambodia, where Pol Pot and his henchmen murdered nearly one-third of the population to establish communism… and through it all, the West, led by the United States, stayed away.

Leonid Brezhnev, Secretary General of the Communist Party of the Soviet Union, and his Politburo colleagues could picture it… all of it. The Cold War was about to get hot.

The fall of the non-communist government in South Vietnam and the election of President Jimmy Carter were followed by a U.S. Congress bent on emasculating the American military and intelligence services. Many in the Democratic Party took the side of the insurgents. I remember well Sen. Tom Harkin of Iowa claiming that the Sandinista Communists in Nicaragua were more like “overzealous Boy Scouts” than hardened Communists. Amazing.

Global communism with the USSR in the lead and America in retreat, was on the march.

In just a few years, in Asia, Africa and Latin America, repressive communist-totalitarian regimes had been foisted on the respective peoples by small numbers of ideologically committed, well-trained and well-armed (by the Soviet Union) insurgencies. “Wars of national liberation” and intensive Soviet subversion raged around the world. Think Angola and Southern Africa, Ethiopia and Somalia in the Horn of Africa. Think the Middle East and the Philippines, Malaysia and Afghanistan (there a full-throated Red Army invasion) in Asia.

Think Central America in our own hemisphere and Nicaragua, where the USSR and its right hand in the hemisphere, communist Cuba, took charge along with a relatively few committed Marxist-Leninist Nicaraguans, even creating a Soviet-style Politburo and Central Committee! On one of my several trips to the region, I personally met with Tomas Borge, the Stalinist leader of the Nicaraguan Communist Party, and his colleagues. Total Bolsheviks. To make things even more dangerous for the United States, these wars of national liberation were also ongoing in El Salvador, Honduras and Guatemala.

A gigantic airfield that could land Soviet jumbo transports was being completed under the Grenadian communist government of Maurice Bishop. Warehouses with vast storage capacity for weapons to fuel insurgency in Latin America were built. I personally witnessed these facilities and found the diary of one leading Politburo official, Liam James, who was on the payroll of the Soviet Embassy at the time. They all were, but he, being the Treasurer of the government, actually wrote it down! These newly-minted communist countries and other ongoing insurgencies, with Marxist-Leninist values in direct opposition to human freedom and the interests of the West, were being funded and activated by Soviet intelligence agencies, largely the KGB, and were supplied by the economies of the Soviet Union and their Warsaw Pact empire in Eastern and Central Europe. Many leaders of these so-called “Third World” countries were on Moscow’s payroll.

In the words of one KGB general, “The world was going our way.” (Christopher Andrew and Vasili Mitrokhin, The World Was Going Our Way: The KGB and the Battle for the Third World, based on the Mitrokhin archives.) These so-called wars of national liberation didn’t fully end until some ten years later, when the weapons and supplies from the Soviet Union dried up as the Soviet Empire began to disintegrate, thanks to a new U.S. President who led the way during the 1980s.

Enter Ronald Wilson Reagan. To the chagrin of the Soviet communists and their followers worldwide, it was the beginning of the end of their glory days when, in January of 1981, Ronald Reagan, having beaten the incumbent President, Jimmy Carter, in November, was sworn in as President of the United States. Ronald Reagan was no novice in the subject matter. President Reagan had been an outspoken critic of communism for over three decades. He had written and given speeches on communism and the genuinely evil nature of the Soviet Union. He was a committed lover of human freedom, human rights and free markets. As Governor of California, he had gained executive experience in a large bureaucracy and during that time had connected with a contingent of like-minded political and academic conservatives. The mainstream media was ruthless with him, characterizing him as an intellectual dolt and warmonger who would bring on World War III. He would prove his detractors so wrong. He would prove to be the ultimate Cold Warrior, yet a sweet man with an iron fist when needed.

When his first National Security Advisor, Richard Allen, asked the new President Reagan about his vision of the Cold War, Reagan’s response was, “We win, they lose.” Moral clarity of a kind rarely enunciated.

To the end of his presidency, he continued to be disparaged by the mainstream media, although less aggressively. However, the American people grew to appreciate and even love the man as he and his team, more than anyone, would prove responsible for winning the Cold War and bringing down a truly “Evil Empire.” Just ask those who suffered most: the Polish, Czech, Hungarian, Ukrainian, Romanian, Baltic, and yes, the Russian people themselves. To this very day, his name is revered by those who suffered and still suffer under the yoke of communism.

Personally, I have often pondered that had Ronald Reagan not been elected President of the United States in 1980, the communist behemoth USSR might be standing strong today, and the Cold War might have ended with communism the victor.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan’s presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral Fellow during Leonid Brezhnev’s time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Joerg Knipprath
President Nixon Farewell Speech to White House Staff, August 9, 1974

On Thursday, August 8, 1974, a somber Richard Nixon addressed the American people in a 16-minute televised speech to announce that he would resign the Presidency of the United States. He expressed regret over mistakes he had made regarding the break-in at the Democratic Party offices at the Watergate complex and the aftermath of that event. He further expressed the hope that his resignation would begin to heal the political divisions the matter had exacerbated. The next day, having resigned, he boarded a helicopter and, with his family, left Washington, D.C.

Nixon had won the 1972 election against Senator George McGovern of South Dakota with over 60% of the popular vote and an electoral vote of 520-17 (one vote having gone to a third candidate). Yet less than two years after one of the most overwhelming victories in American electoral history, Nixon was politically dead. Nixon has been described as a tragic figure, in a literary sense, due to his struggle to rise to the height of political power, only to be undone when he had achieved the pinnacle of success. The cause of this astounding change of fortune has been much debated. It resulted from a confluence of factors, political, historical, and personal.

Nixon was an extraordinarily complex man. He was highly intelligent, even brilliant, yet was the perennial striver seeking to overcome, by unrelenting work, his perceived limitations. He was an accomplished politician with a keen understanding of political issues, yet socially awkward and personally insecure. He was perceived as the ultimate insider, yet, despite his efforts, was always somehow outside the “establishment,” from his school days to his years in the White House. Alienated from the social and political elites, who saw him as an arriviste, he emphasized his marginally middle-class roots and tied his political career to that “silent majority.” He could arouse intense loyalty among his supporters, yet equally intense fury among his opponents. Nixon infamously kept an “enemies list,” the only surprise of which is that it was so incomplete. Though seen by the Left as an operative of what is today colloquialized as the “Deep State,” he rightly mistrusted the bureaucracy and its departments and agencies, and preferred to rely on White House staff and hand-picked loyal individuals. Caricatured as an anti-Communist ideologue and would-be right-wing dictator, Nixon was a consummately pragmatic politician who was seen by many supporters of Senator Barry Goldwater and Governor Ronald Reagan as insufficiently in line with their world view.

The Watergate burglary and attempted bugging of the Democratic Party offices in June, 1972, and investigations by the FBI and the General Accounting Office that autumn into campaign finance irregularities by the Committee to Re-Elect the President (given the unfortunate acronym CREEP by Nixon’s opponents) initially had no impact on Nixon and his comprehensive political victory. In January, 1973, the trial of the operatives before federal judge John Sirica in Washington, D.C., revealed possible White House involvement. This piqued the interest of the press, never Nixon’s friends. These revelations, now spread before the public, caused the Democratic Senate majority to appoint a select committee under Senator Sam Ervin of North Carolina for further investigation. Pursuant to an arrangement with Senate Democrats, Attorney General Elliot Richardson named Democrat Archibald Cox, a Harvard law professor and former Kennedy administration solicitor general, as special prosecutor.

Cox’s efforts uncovered a series of missteps by Nixon, as well as actions that were viewed as more seriously corrupt and potentially criminal. Some of these sound rather tame by today’s standards. Others are more problematic. Among the former were allegations that Nixon had falsely backdated a gift of presidential papers to the National Archives to get a tax credit, not unlike Bill Clinton’s generously-overestimated gift of three pairs of his underwear in 1986 for an itemized charitable tax deduction. Another was that he was inexplicably careless in preparing his tax return. Given the many retroactively amended tax returns and campaign finance forms filed by politicians, such as the Clintons and their eponymous foundations, this, too, seems of slight import. More significant was the allegation that he had used the Internal Revenue Service to attack political enemies. Nixon certainly considered that, although it is not shown that any such actions were undertaken. Another serious charge was that Nixon had set up a secret structure to engage in political intelligence and espionage.

The keystone to the impeachment effort was the discovery of a secret taping system in the Oval Office that showed that Nixon had participated in a cover-up of the burglary and obstructed the investigation. Nixon, always self-reflective and sensitive to his position in history, had set up the system to provide a clear record of conversations within the Oval Office for his anticipated post-Presidency memoirs. It proved to be his downfall. When Cox became aware of the system, he sought a subpoena to obtain nine of the tapes in July, 1973. Nixon refused, citing executive privilege relating to confidential communications. That strategy had worked when the Senate had demanded the tapes; Judge Sirica had agreed with Nixon. But Judge Sirica rejected that argument when Cox sought the information, a decision upheld 5-2 by the federal Circuit Court for the District of Columbia.

Nixon then offered to give Cox authenticated summaries of the nine tapes. Cox refused. After a further clash between the President and the special prosecutor, Nixon ordered Attorney General Richardson to remove Cox. Both Richardson and Deputy Attorney General William Ruckelshaus refused and resigned. However, by agreement between these two and Solicitor General Robert Bork, Cox was removed by Bork in his new capacity as Acting Attorney General. It was well within Nixon’s constitutional powers as head of the unitary executive to fire his subordinates. But what the President is constitutionally authorized to do is not the same as what the President politically should do. The reaction of the political, academic, and media elites to the “Saturday Night Massacre” was overwhelmingly negative, and precipitated the first serious effort at impeaching Nixon.

A new special prosecutor, Democrat Leon Jaworski, was appointed by Bork in consultation with Congress. The agreement among the three parties was that, though Jaworski would operate within the Justice Department, he could not be removed except for specified causes and with notification to Congress. Jaworski also was specifically authorized to contest in court any claim of executive privilege. When Jaworski again sought various specific tapes, and Nixon again claimed executive privilege, Jaworski eventually took the case to the Supreme Court. On July 24, 1974, Chief Justice Warren Burger’s opinion in the 8-0 decision in United States v. Nixon (William Rehnquist, a Nixon appointee who had worked in the White House, had recused himself) overrode the executive privilege claim. The justices also rejected the argument that this was a political intra-branch dispute between the President and a subordinate that rendered the matter non-justiciable, that is, beyond the competence of the federal courts.

At the same time, in July, 1974, with bipartisan support, the House Judiciary Committee voted out three articles of impeachment. Article I charged obstruction of justice regarding the Watergate burglary. Article II charged him with violating the Constitutional rights of citizens and “contravening the laws governing agencies of the executive branch,” which dealt with Nixon’s alleged attempted misuse of the IRS, and with his misuse of the FBI and CIA. Article III charged Nixon with ignoring congressional subpoenas, which sounds remarkably like an attempt to obstruct Congress, a dubious ground for impeachment. Two other proposed articles were rejected. When the Supreme Court ordered Nixon to release the tapes, the recording of June 23, 1972, revealed obstruction of justice: the President had instructed his staff to use the CIA to end the Watergate investigation. The tape was released on August 5. Nixon was then visited by a delegation of Republican Representatives and Senators who informed him of the near-certainty of impeachment by the House and of his extremely tenuous position to avoid conviction by the Senate. The situation having become politically hopeless, Nixon resigned, making his resignation formal on Friday, August 9, 1974.

The Watergate affair produced several constitutional controversies. First, the Supreme Court addressed executive privilege to withhold confidential information. Nixon’s opponents had claimed that the executive lacked such a privilege because the Constitution did not address it, unlike the privilege against self-incrimination. Relying on consistent historical practice going back to the Washington administration, the Court found instead that such a privilege is inherent in the separation of powers and necessary to protect the President in exercising the executive power and others granted under Article II of the Constitution. However, unless the matter involves state secrets, that privilege could be overridden by a court, if warranted in a criminal case, and the “presumptively privileged” information ordered released. While the Court did not directly consider the matter, other courts have agreed with Judge Sirica that, based on long practice, the privilege will be upheld if Congress seeks such confidential information. The matter then is a political question, not one for courts to address at all.

Another controversy arose over the President’s long-recognized power to fire executive branch subordinates without restriction by Congress. This is essential to the President’s position as head of the executive branch. For example, the President has inherent constitutional authority to fire ambassadors as Barack Obama and Donald Trump did, or to remove U.S. Attorneys, as Bill Clinton and George W. Bush did. Jaworski’s appointment under the agreement not to remove him except for specified cause interfered with that power, yet the Court upheld that limitation in the Nixon case.

After Watergate, in 1978, Congress passed the Ethics in Government Act that provided a broad statutory basis for the appointment of special prosecutors outside the normal structure of the Justice Department. Such prosecutors, too, could not be removed except for specified causes. In Morrison v. Olson, in 1988, the Supreme Court, by 7-1, upheld this incursion on executive independence over the lone dissent of Justice Antonin Scalia. At least as to inferior executive officers, which the Court found special prosecutors to be, Congress could limit the President’s power to remove, as long as the limitation did not interfere unduly with the President’s control over the executive branch. The opinion, by Chief Justice Rehnquist, was in many ways risible from a constitutional perspective, but it upheld a law that became the starting point for a number of highly-partisan and politically-motivated investigations into actions taken by Presidents Ronald Reagan, George H.W. Bush, and Bill Clinton, and by their subordinates. Only once the last of these Presidents was being subjected to such oversight did opposition to the law become sufficiently bipartisan to prevent its reenactment.

The impeachment proceeding itself rekindled the debate over the meaning of the substantive grounds for such an extraordinary interference with the democratic process. While treason is defined in the Constitution and bribery is an old and well-litigated criminal law concept, the third basis, of “high crimes and misdemeanors,” is open to considerable latitude of meaning. One view, taken by defenders of the official under investigation, is that this phrase requires conduct amounting to a crime, an “indictable offense.” The position of the party pursuing impeachment, Republican or Democrat, has been that this phrase more broadly includes unfitness for office and reaches conduct which is not formally criminal but which shows gross corruption or a threat to the constitutional order. The Framers’ understanding appears to have been closer to the latter, although the much greater number and scope of criminal laws today may have narrowed the difference. However, what the Framers considered sufficiently serious impeachable corruption likely was more substantial than what has been proffered recently. They were acutely aware of the potential for merely political retaliation and similar partisan mischief that a low standard for impeachment would produce. These and other questions surrounding the rather sparse impeachment provisions in the Constitution have not been resolved. They continue to be, foremost, political matters addressed on a case-by-case basis, as demonstrated the past twelve months.

As has been often observed, Nixon’s predicament was not entirely of his own making. In one sense, he was the victim of political trends that signified a reaction against what had come to be termed the “Imperial Presidency.” It had long been part of the progressive political faith that there was “nothing to fear but fear itself” as far as broadly exercised executive power, as long as the presidential tribune using “a pen and a phone” was subject to free elections. Actions routinely done by Presidents such as Franklin Roosevelt, Harry Truman, and Nixon’s predecessor, Lyndon Johnson, now became evidence of executive overreach. For example, those presidents, as well as others going back to at least Thomas Jefferson, had impounded appropriated funds, often to maintain fiscal discipline over profligate Congresses. Nixon claimed that his constitutional duty “to take care that the laws be faithfully executed” was also a power that allowed him to exercise discretion as to which laws to enforce, not just how to enforce them. In response, the Democratic Congress passed the Congressional Budget and Impoundment Control Act of 1974. The Supreme Court, in Train v. City of New York, rejected the practice of presidential impoundment, limiting the President’s authority to impound funds to whatever extent is permitted by Congress in statutory language.

In military matters, the elites' reaction against the Vietnam War, shaped by negative press coverage and antiwar demonstrations on elite college campuses, gradually eroded popular support. The brunt of the responsibility for the vast expansion of the war lay with Lyndon Johnson and his manipulative use of a supposed North Vietnamese naval attack on an American destroyer, which resulted in the Gulf of Tonkin Resolution. At a time when Nixon had ended the military draft, drastically reduced American troop numbers in Vietnam, and agreed to the Paris Peace Accords signed at the end of January, 1973, Congress enacted the War Powers Resolution of 1973 over Nixon's veto. The law limited the President's power to engage in military hostilities, absent a formal declaration of war, to specified situations. It also required consultation with Congress before any use of American troops, and withdrawal of those troops unless Congress approved their deployment within sixty days. Somewhat mystifyingly, it also purported to disclaim any attempt to limit the President's war powers. The Resolution has been less than successful in curbing presidential discretion in using the military and remains largely symbolic.

Another restriction on presidential authority came through the Supreme Court. In United States v. United States District Court in 1972, the Court rejected the administration's program of warrantless electronic surveillance for domestic security. This was connected to the Huston Plan of warrantless searches of Americans' mail and other communications. Warrantless wiretaps had been placed on some members of the National Security Council and several journalists. Not touched by the Court was the President's authority to conduct warrantless electronic surveillance of foreigners or their agents for national security-related information gathering. On the latter, Congress nevertheless in 1978 passed the Foreign Intelligence Surveillance Act, which, ironically, has expanded the President's power in that area. Because it can be applied to communications of Americans deemed agents of a foreign government, FISA, along with the President's inherent constitutional powers regarding foreign intelligence-gathering, can be used to circumvent the Supreme Court's decision. It has even been used in the last several years to target the campaign of then-candidate Donald Trump.

Nixon's use of the "pocket veto" and his imposition of price controls also triggered resentment and reaction in Congress, although once again his actions were hardly novel. None of these executive policies, by itself, was politically fatal. Rather, they demonstrate the political climate in which what otherwise was just another election-year dirty trick, the break-in at the Democratic National Committee offices in the Watergate complex, could result in the historically extraordinary resignation from office of a President who had not long before received the approval of a large majority of American voters. Nixon's contemplated use of the IRS to audit "enemies" was no worse than the Obama Administration's actual use of the IRS to throttle conservative groups' tax exemptions. His support of warrantless wiretaps under his claimed constitutional authority to target suspected domestic troublemakers, while unconstitutional, hardly is more troubling than Obama's use of the FBI and CIA to manipulate the FISA system into spying on a presidential candidate to assist his opponent. Nixon's wiretapping of NSC officials and several journalists is not dissimilar to Obama's search of the phone records of various Associated Press reporters and spying on Fox News's James Rosen. Obama's FBI also accused Rosen of having violated the Espionage Act. The Obama Administration brought more than twice as many prosecutions of leakers, including under the Espionage Act, as all prior Presidents combined. That was in his first term.

There was another, shadowy factor at work. Nixon, the outsider, offended the political and media elites. Nixon himself disliked the bureaucracy, which had increased significantly over the previous generation through the New Deal’s “alphabet agencies” and the demands of World War II and the Cold War. The Johnson Administration’s Great Society programs sped up this growth. The agencies were staffed at the upper levels with left-leaning members of the bureaucratic elite. Nixon’s relationship with the press was poisoned not only by their class-based disdain for him, but by the constant flow of leaks from government insiders who opposed him. Nixon tried to counteract that by greatly expanding the White House offices and staffing them with members who he believed were personally loyal to him. His reliance on those advisers rather than on the advice of entrenched establishment policy-makers threatened the political clout and personal self-esteem of the latter. What has been called Nixon’s plebiscitary style of executive government, relying on the approval of the voters rather than on that of the elite administrative cadre, also was a threat to the existing order. As Senator Charles Schumer warned President Trump in early January, 2017, about the intelligence “community,” “Let me tell you: You take on the intelligence community — they have six ways from Sunday at getting back at you.” Nixon, too, lived that reality.

Once out of office, Nixon generally stayed out of the limelight. The strategy worked well. As seems to be the custom for Republican presidents, once they are “former,” many in the press and among other “right-thinking people” came to see him as the wise elder statesman, much to be preferred to the ignorant cowboy (and dictator) Ronald Reagan. Who, of course, then came to be preferred to the ignorant cowboy (and dictator) George W. Bush. Who, of course, then came to be preferred to the ignorant reality television personality (and dictator) Donald Trump. Thus, the circle of political life continues. It ended for Nixon on April 22, 1994. His funeral five days later was attended by all living Presidents. Tens of thousands of mourners paid their respects.

The parallel to recent events should be obvious. That said, a comparison between the seriousness of the Watergate Affair, which resulted in President Nixon's resignation, and the Speaker Nancy Pelosi/Congressman Adam Schiff/Congressman Jerry Nadler impeachment of President Trump brings to mind what may be Karl Marx's only valuable observation: that historic facts appear twice, "the first time as tragedy, the second time as farce."

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:


Guest Essayist: Danny de Gracia

The story of how men first set foot on the Moon on July 20, 1969, will always be enshrined as one of America's greatest contributions to history. When the first humans looked upwards to the night sky thousands of years ago, they must have marveled at the pale Moon looming in the heavens, set against the backdrop of countless stars. Inspired by the skies, and driven by a natural desire for exploration, humans must have wondered what was out there, and if it would somehow ever be possible to explore the distant heavens above.

Indeed, even the Bible tells us that the patriarch of faith, Abraham, was told by God in Genesis 15:5, “Look now toward heaven, and count the stars if you are able to number them. So shall your descendants be.”

The word given to Abraham may have been more than just an impressive way of promising an elderly man, well past the age of fathering children, that he would have many descendants; it reads more like an invitation, a suggestion that mankind's destiny lies not merely on Earth but among the stars of the limitless cosmos, as a spacefaring civilization.

Early Beginnings

For most of mankind's history, space travel was relegated to wild myths, hopeless dreams, and fanciful science fiction. The first hurdle in reaching for the stars would be learning to stay aloft in Earth's atmosphere, which by itself was no easy task. Observing birds, humans had for millennia tried to emulate organic wings with little to no success, not truly understanding the science of lift or the physics of flight.

Like Icarus of Greek mythology, the 11th century English Benedictine monk Eilmer of Malmesbury attempted to foray into the skies by fashioning wings as a kind of primitive glider, but he only succeeded in flying a short distance before he crashed, breaking his legs. Later, Jean-François Pilâtre de Rozier would give mankind a critical first in flight when he took off aboard the Montgolfier hot air balloon in 1783.

Ironically, it would not be benevolent inspiration that freed mankind from his millennia-old ties to the ground beneath his feet, but the pressing demands of war and the increasing militarization of the planet. With the Industrial Age came the age of industrialized warfare, and men knew from countless battles that whoever held the high ground could defend any stronghold or defeat any army. And what greater high ground could afford victory than the heavens themselves?

Once balloons had been proven an effective and stable means of flight, militaries began to use them as spotting platforms to observe enemy movements from a distance and provide accurate targeting for artillery. Notably, during the American Civil War, balloons served as a kind of early air force for both the Union and the Confederacy.

When the Wright Brothers at last mastered the art of controlled, powered flight in a fixed-wing aircraft on December 17, 1903, the airplane's military role was not far behind: little more than a decade after its invention, the First World War erupted, and aircraft and airships became crucial weapons in deciding the outcome of battles.

Germany’s defeat, which was seen by many Germans as something that should not have happened and should never happen again, stirred people like the former army lance corporal Adolf Hitler to pursue more advanced aerial weapons as a means of establishing military superiority.

Even as propeller planes were seen as the ultimate form of aircraft by most militaries of the time, in the late 1930s, German engineers Eugen Sänger and Irene Bredt were already envisioning spacecraft to attack enemies from above. In 1941, they conceived plans for the Silbervogel ("Silver Bird"), a rocket-powered space bomber that would climb to the edge of space, then skip off the upper atmosphere like a tossed stone skipping across a pond to reach an enemy target even half a world away.

Fortunately for the United States, the Silbervogel would never be produced, but other German scientists would be working on wonder weapons of their own, one of them being Wernher von Braun, an engineer who had childhood dreams of landing men on the Moon with rockets.

Working at the Peenemünde Army Research Center, von Braun infamously gave Nazi Germany the power to use V-2 rockets, an early form of ballistic missile that could deliver a high-explosive warhead hundreds of miles away. One such V-2 rocket, MW 18014, test-launched on June 20, 1944, became the first man-made object to cross the Kármán line – the 100-kilometer boundary commonly regarded as the edge of space – when it reached an apogee of 176 kilometers in flight.

While these weapons did not win the war for Nazi Germany, they aroused the interest of both the United States and the Soviets, and as the victorious Allies reclaimed Europe, a frantic effort to capture German scientists for their aerospace knowledge would become the prelude to a coming Cold War.

The Nuclear Age and Space

The use of the Fat Man and Little Boy atomic bombs against Japan brought to light a realization among planners in both the United States and the Soviet Union: The next battleground for control of the planet would be space. Between the difficulty in intercepting weapons like the V-2 rocket, and the destructive capability of the atom bomb, the nations that emerged victorious in WWII all saw potential in combining these technologies together.

At the end of WWII, both the Soviet Union and the United States brought home numerous German scientists and unused V-2 rockets for the purpose of creating their own next generation of missiles.

The early V-2 rockets developed by von Braun for Nazi Germany were primitive and inaccurate weapons, but they had demonstrated the capability to carry objects, such as an explosive warhead, in high ballistic arcs over the earth. Early atomic bombs were bulky and extremely heavy, which meant that in order to deliver these weapons of mass destruction across space, larger rockets would need to be developed.

It is no accident then that the early space launchers of both the Soviet Union and the United States were, in fact, converted intercontinental ballistic missiles (or ICBMs) meant for delivering nuclear payloads. The first successful nuclear ICBM was the Soviet R-7 Semyorka (NATO reporting name SS-6 “Sapwood”), which would be the basis for the modified rocket 8K71PS No. M1-1PS, that sent Sputnik, the world’s first artificial satellite, into orbit on October 4, 1957.

The success of the Soviets in putting the first satellite into orbit awed the entire world, but it disturbed President Dwight D. Eisenhower's White House, because it was not lost on the U.S. military that this accomplishment was, more or less, a demonstration of Russian nuclear delivery capabilities.

And while the United States in 1957 held an overwhelming superiority in numbers of nuclear weapons relative to the Soviets, the nuclear doctrine of the early Cold War was structured around the bluff of "massive retaliation," crafted by Secretary of State John Foster Dulles, which was intended to deter new conflicts – including in space – by threatening atomic use as the default response.

“If an enemy could pick his time and place and method of warfare,” Dulles had said in a dinner before the Council on Foreign Relations in January 1954, “and if our policy was to remain the traditional one of meeting aggression by direct and local opposition, then we needed to be ready to fight in the Arctic and in the Tropics; in Asia, the Near East; and in Europe; by sea, by land, and by air; with old weapons, and with new weapons.”

A number of terrifying initial conclusions emerged from the success of Sputnik. First, it showed that the Soviets had reached the ultimate high ground before U.S./NATO forces, and that their future ICBMs could potentially put any target in the world at risk for nuclear bombardment.

To put things into perspective, a jet bomber of the time, such as the American B-47, B-52, or B-58, took upwards of 8 hours cruising through the stratosphere to strike a target from its airbase. An ICBM, which can reach speeds of Mach 23 or faster in its terminal descent, can hit any target in the world within about 35 minutes of launch. This destabilizing development whittled down the U.S. advantage, as it gave the Soviets the possibility of firing first in a surprise attack to "decapitate" any superior American or NATO forces that might be used against them.

The second, and more alarming, perception created by the Soviet entry into space was that America had dropped the ball and been left behind, not only technologically but historically. In the Soviet Union, Nikita Khrushchev sought to test the resolve of both the United States and the NATO alliance by showcasing novel technological accomplishments, such as the Sputnik launch, to cast a long shadow over Western democracies and to imply that communism would be the wave of the future.

In a flurry of briefings and technical research studies that followed the Sputnik orbit, von Braun and other scientists in the U.S. determined that while the Soviets had beaten the West into orbit, the engineering and industrial capabilities of America would ultimately make it feasible for the U.S., over the long term, to accomplish a greater feat: landing a man on the Moon.

Texas Senator Lyndon B. Johnson, later to be vice president to the young, idealistic John F. Kennedy, would be one of the staunchest drivers behind the scenes in pushing for America’s landing on the Moon. The early years of the space race were tough to endure, as NASA, America’s fledgling new civilian space agency, seemed – at least in public – to always be one step behind the Soviets in accomplishing space firsts.

Johnson, a rough-around-the-edges, technocratic leader who saw the necessity of preventing a world "going to sleep by the light of a communist Moon," pushed to keep America in the space fight even when it appeared, to some, as though American space rockets "always seemed to blow up." His leadership stiffened the Kennedy administration's resolve to stay the course, and arguably ensured that America would be the first and only nation to land men on the Moon.

The Soviets struck another blow against America on April 12, 1961, when cosmonaut Yuri Gagarin became the first human in space, making a 108-minute orbital flight launched on the Vostok-K 8K72K rocket, another R-7 ICBM derivative.

But a month later, on May 5, 1961, NASA began to catch up with the Soviets when Alan Shepard and his Freedom 7 space capsule successfully made it into space, brought aloft by the Mercury-Redstone rocket, adapted from the U.S. Army's PGM-11 short-range nuclear ballistic missile.

Each manned launch and counter-launch between the two superpowers was more than a demonstration of scientific discovery; each was a suggestion of nuclear launch capability – specifically, of the warhead throw weight of either country's missiles – and a thinly veiled competition over who, at any given point in time, was winning the Cold War.

International Politics and Space

President Kennedy, speaking at Rice University on September 12, 1962, just one month before the Cuban Missile Crisis, hinted to the world that the Soviet advantage in space was not quite what it seemed to be, and that perhaps some of their “less public” space launches had been failures. Promising to land men on the Moon before the decade ended, Kennedy’s “Moon speech” at Rice has been popularly remembered as the singular moment when America decided to come together and achieve the impossible, but this is not the whole story.

In truth, ten days after giving the Moon speech, Kennedy privately reached out to Khrushchev, pleading with him to make the landing a joint affair, only to be rebuffed, and then, on October 14 of that same year, to find himself ambushed by the Soviets' placement of offensive nuclear missiles in Cuba, pointed at the U.S.

Kennedy thought himself to be a highly persuasive, flexible leader who could peaceably talk others into agreeing to make political changes, which set him at odds with the more hard-nosed, realpolitik-minded members of both his administration and the U.S. military. It also invited testing of his mettle by the salty Khrushchev, who saw the youthful American president – “Profiles in Courage” aside – as inexperienced, pliable, and a pushover.

Still, while the Moon race was a crucial part of keeping America and her allies encouraged amidst the ever-chilling Cold War, the Cuban Missile Crisis deeply shook Kennedy and brought him face-to-face with the possibility of a nuclear apocalypse.

Kennedy had already nearly gone to nuclear war once before, during the now largely forgotten Berlin Crisis of 1961, when his special advisor in West Berlin, Lucius D. Clay, responded to East German harassment of American diplomatic staff with aggressive military maneuvers; but the Cuba standoff proved one straw too heavy for the idealistic JFK.

Fearing the escalating arms race, experiencing sticker shock over the growing cost of the Moon race to which he had committed America, and ultimately wanting to improve relations with the Soviet Union, Kennedy dialed back his public Moon rhetoric a year later, on September 20, 1963, before the United Nations, revisiting his private offer to Khrushchev when he asked, albeit rhetorically, "Why, therefore, should man's first flight to the Moon be a matter of national competition?"

The implications of a joint U.S.-Soviet Moon landing may have tickled the ears of world leaders throughout the General Assembly, but behind the scenes, it agitated Democrats and Republicans alike, who not-so-secretly began to wonder if Kennedy was "soft" on communism.

Even Kennedy’s remarks to the press over the developing conflict in Vietnam during his first year as president were especially telling about his worldview amidst the arms race and space race of the Cold War: “But we happen to live – because of the ingenuity of science and man’s own inability to control his relationships with one another – we happen to live in the most dangerous time in the history of the human race.”

Kennedy’s handling of the Bay of Pigs, Berlin, the Cuban Missile Crisis, and his more idealistic approaches to the openly belligerent Soviet Union began to shake the political establishment, and the possibility of ceding the Moon to a kind of squishy, joint participation trophy embittered those who saw an American landing as a crucial refutation of Soviet advances.

JFK was an undeniably formidable orator, but in the halls of power he was developing a reputation for eroding America's post-WWII advantages as a military superpower and leader of the international system. His rhetoric made some nervous, and his suggestions of calling off an American Moon landing put a question mark over the future of the West.

Again, the Moon race wasn’t just about landing men on the Moon; it was about showcasing the might of one superpower over the other, and Kennedy’s attempts to roll back America’s commitment to space in favor of acquiescing to a Moon shared with the Soviets could have potentially cost the West the outcome of the Cold War.

As far back as 1961, NASA had sought the assistance of the traditionally military-oriented National Reconnaissance Office (NRO) to gain access to top secret, exotic spy technologies that would assist it in surveying the Moon for future landings, and it would later enter into memoranda of agreement with the NRO, the Department of Defense, and the Central Intelligence Agency. This is important, because the blurring of the separation between civilian spaceflight and military/intelligence space exploitation reflects how the space race served strategic goals rather than purely scientific ones.

On August 28, 1963, Secretary of Defense Robert McNamara and NASA Administrator James Webb had signed an MOA titled “DOD/CIA-NASA Agreement on NASA Reconnaissance Programs” (Document BYE-6789-63) which stated “NRO, by virtue of its capabilities in on-going reconnaissance satellite programs, has developed the necessary technology, contractor resources, and management skills to produce satisfactory equipments, and appropriate security methods to preserve these capabilities, which are currently covert and highly sensitive. The arrangement will properly match NASA requirements with NRO capabilities to perform lunar reconnaissance.”

Technology transfers also went both ways. The Gemini space capsule, developed by NASA as part of the effort to master orbital operations such as spacewalks, orbital docking, and other skills deemed critical to an eventual Moon landing, would even be taken up by the United States Air Force for a parallel military space program on December 16, 1963. Adapting the civilian Gemini design into an alternate military version called the "Gemini-B," the Air Force intended to put crews in orbit aboard a space station called the Manned Orbiting Laboratory (MOL), which would serve as a reconnaissance platform to photograph Soviet facilities.

While the MOL program would ultimately be canceled in its infancy by the President Richard Nixon Administration in 1969, before ever actually going operational, it was yet another demonstration of the close-knit relationship between civilian and military space exploration in pursuit of the same interests.

Gold Fever at NASA

Whatever President Kennedy's true intentions for the space race may have been, his unfortunate death at the hands of assassin Lee Harvey Oswald in Dallas on November 22, 1963, two months after his UN speech, would be seized upon by the establishment as justification to fulfill the original 1962 Rice University promise of landing an American on the Moon first, before the end of the decade.

Not surprisingly, one of Johnson's very first actions upon assuming the presidency after Kennedy's death was to issue Executive Order 11129 on November 29, 1963, renaming NASA's Launch Operations Center in Florida the "John F. Kennedy Space Center," a politically adroit maneuver which ensured the space program would now be seen as synonymous with the fallen president.

In world history, national icons and martyrs – even accidental or involuntary ones – are powerful devices for furthering causes that would ordinarily burn out and lose public interest if left to private opinion alone. Kennedy's death led to a kind of "gold fever" at NASA for defeating the Soviets, and many stunning advances in space technology would be won in the aftermath of his passing.

So intense was the political pressure and organizational focus at NASA that some began to worry that corners were being cut and that there were serious issues that needed to be addressed.

On January 27, 1967, NASA conducted a "plugs-out test" of its newly developed Apollo space capsule, in which launch conditions would be simulated on the launch pad with the spacecraft running on internal power. The test mission, designated AS-204, had been strongly cautioned against by the spacecraft's manufacturer, North American Aviation, because it would take place at sea level with a pure-oxygen atmosphere pressurized above normal atmospheric pressure, a serious fire hazard. Nevertheless, NASA proceeded with the test.

Astronauts Roger B. Chaffee, Virgil "Gus" Grissom, and Ed White, who crewed the test mission, perished when an electrical malfunction sparked a fire that spread rapidly in the capsule's pure-oxygen atmosphere. Their deaths threatened to bring the entire U.S. space program to a screeching halt, but NASA was able to rise above the tragedy, adding the loss of its astronauts as yet another compelling reason to make it to the Moon before the decade's end.

On January 30, 1967, the Monday that followed the “Apollo 1” fire, NASA flight director Eugene F. Kranz gathered his staff together and gave an impromptu speech that would change the space agency forever.

“Spaceflight will never tolerate carelessness, incapacity, and neglect,” he began. “Somewhere, somehow, we screwed up. It could have been in design, build, or test. Whatever it was, we should have caught it.”

He would go on to say, “We did not do our job. We were rolling the dice, hoping that things would come together by launch day, when in our hearts we knew it would be a miracle. We were pushing the schedule and betting that the Cape would slip before we did. From this day forward, Flight Control will be known by two words: Tough and Competent. ‘Tough’ means we are forever accountable for what we do or what we fail to do. We will never again compromise our responsibilities. Every time we walk into Mission Control, we will know what we stand for.”

“‘Competent’ means we will never take anything for granted. We will never be found short in our knowledge and in our skills; Mission Control will be perfect. When you leave this meeting today, you will go back to your office and the first thing you will do there is to write ‘Tough and Competent’ on your blackboards. It will never be erased. Each day when you enter the room, these words will remind you of the price paid by Grissom, White, and Chaffee. These words are the price of admission to the ranks of Mission Control.”

And “tough and competent” would be exactly what NASA would become in the days, months, and years to follow. The U.S. space agency in the wake of the Apollo fire would set exacting standards of professionalism, quality, and safety, even as they continued to increase in mastery of the technology and skills necessary to make it to the Moon.

America’s Finest Hour

Unbeknownst to U.S. intelligence agencies, the Soviets had already fallen far behind in their own Moon program, and their N1 rocket, meant to compete with the U.S. Saturn V, was by no means ready for manned use. Unlike NASA, the Soviet space program had become completely dependent on a volatile combination of personalities and politics, which bottlenecked innovation, slowed necessary changes, and, in the end, made it impossible to adapt appropriately in the race for the Moon.

On December 21, 1968, the U.S. leapt into first place in the space race when Apollo 8 entered history as the first crewed spacecraft to leave Earth orbit, circle the Moon, and return. Having combined decades of military and civilian science, overcome terrible tragedies, and successfully turned lessons learned into achievements won, NASA could at last attain mankind's oldest dream of landing on the Moon with the Apollo 11 mission, launched on July 16, 1969 from Kennedy Space Center launch complex LC-39A.

Astronauts Neil A. Armstrong, Edwin E. "Buzz" Aldrin Jr., and Michael Collins reached lunar orbit on July 19, where they surveyed their target landing site at the Sea of Tranquility and began preparations to separate from the Command Module, Columbia, and land in the Lunar Module, Eagle.

On Sunday, July 20, Armstrong and Aldrin left Collins behind to pilot the Command Module and began their descent to the lunar surface below. Discovering their landing area strewn with large boulders, Armstrong took the Lunar Module out of computer control and manually steered the lander on its descent while searching for a suitable location, finding himself with a mere 50 seconds of fuel left. But at 4:17 p.m. Eastern time, Armstrong touched down safely, declaring to a distant planet Earth, "Houston, Tranquility Base here. The Eagle has landed."

Communion on the Moon

As if to bring humanity full circle, two hours after landing on the surface of the Moon, Aldrin, a Presbyterian, quietly and unknown to NASA back on Earth, would remove from his uniform a small 3” x 5” notecard with a hand-written passage from John 15:5. Taking Communion on the Moon, Aldrin would read within the Lunar Module, “As Jesus said: I am the Vine, you are the branches. Whoever remains in Me, and I in Him, will bear much fruit; for you can do nothing without Me.”

Abraham, the Bible’s “father of faith,” could almost be said to have been honored by Aldrin’s confession of faith. In a sense, the landing of a believing astronaut on a distant heavenly object was like a partial fulfillment of the prophecy of Genesis 15:5, in which Abraham’s descendants would be like the stars in the sky.

Later, when Armstrong left the Lunar Module and climbed down the ladder to the Moon's dusty surface, he radioed back to Earth, "That's one small step for a man; one giant leap for mankind." Due to a 35-millisecond interruption in the signal, listeners heard not "one small step for a man" but "one small step for man," leaving the entire world with the impression that the NASA astronauts had won a victory not just for America, but for humankind as a whole.

After the astronauts planted Old Glory, the flag of the United States of America, in the soft lunar dust, the Moon race had officially been won. The Soviets, having lost the initiative, would scale back their space program to focus on other objectives, such as building space stations and attempting to land probes on other planets. Their expensive investment had produced no propaganda success, and losing the Moon race would, ultimately, help cost them the Cold War as well.

America would go on to land men on the Moon a total of six times, with twelve different astronauts walking its surface, between July 20, 1969 (Apollo 11) and December 11, 1972 (Apollo 17). By winning the Moon race, the United States assured the planet that the Western world would not be overtaken by the communist bloc, and many useful technologies developed for the U.S. civilian space program or for military aerospace applications would later find their way into commercial, everyday use.

While other nations, including Russia, the European Union, Japan, India, China, Luxembourg, and Israel, have all sent unmanned probes to the Moon, to this date only the United States holds the distinction of having placed humans on the Moon.

Someday, hopefully soon, humans will return once again to the Moon, and even travel from there to distant planets, or even distant stars. But no matter how far humanity travels, the enduring legacy of July 20, 1969 will be that freedom won the 20th century because America, not the Soviets, won the Moon race.

Landing on the Moon was a global victory for humanity, but getting there first will forever be a uniquely American accomplishment.

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologian, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of the novel American Kiss: A Collection of Short Stories.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams

On March 12, 1947, President Harry Truman delivered a speech advocating assistance to Greece and Turkey to resist communism as part of the early Cold War against the Soviet Union. The speech enunciated the Truman Doctrine, which marked a departure from the country’s traditional foreign policy toward a more expansive role in global affairs.

Truman said, “I believe that it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Protecting the free world against communist expansion became the basis for the policy of Cold War containment.

The United States fought a major war in Korea in the early 1950s to halt the expansion of communism in Asia, especially after the loss of China to the communists in 1949. Although President Dwight D. Eisenhower had resisted the French appeal to intervene in Vietnam at Dien Bien Phu in 1954, the United States gradually increased its commitment and sent thousands of military advisers and billions of dollars in financial assistance over the next decade.

In the summer of 1964, President Lyndon B. Johnson was in the midst of a presidential campaign against Barry Goldwater and pushing his Great Society legislative program through Congress. He did not want to allow foreign affairs to imperil either and downplayed increased American involvement in the war.

Administration officials were quietly considering bombing North Vietnam or sending ground troops to interdict the Viet Cong insurgency in South Vietnam. Meanwhile, the United States Navy was running covert operations in the waters off North Vietnam in the Gulf of Tonkin.

On August 2, the destroyer USS Maddox and several U.S. fighter jets from a nearby carrier exchanged fire with some North Vietnamese gunboats. The U.S. warned North Vietnam that further “unprovoked” aggression would have “grave consequences.” The USS Turner Joy was dispatched to patrol with the Maddox.

On August 4, the Maddox picked up multiple enemy radar contacts in severe weather, but no solid proof confirmed the presence of the enemy. Despite the uncertainty surrounding the event, the administration proceeded as if a second attack had definitely occurred. It immediately ordered a retaliatory airstrike and sought a congressional authorization of force. President Johnson delivered a national television address and said, “Repeated acts of violence against the armed forces of the United States must be met…. We seek no wider war.”

On August 7, Congress passed the Tonkin Gulf Resolution, which authorized the president “to take all necessary measures to repel any armed attack against the forces of the United States and to prevent any further aggression.” The House passed the joint resolution unanimously, and the Senate passed it with only two dissenting votes.

The Tonkin Gulf Resolution, rather than a formal declaration of war, became the legal basis for fighting the Vietnam War; World War II remains the last war Congress declared.

President Johnson had promised the electorate that he would not send “American boys to fight a war Asian boys should fight for themselves.” However, the administration escalated the war over the next several months.

On February 7, 1965, the Viet Cong launched an attack on the American airbase at Pleiku. Eight Americans were killed and more than one hundred wounded. President Johnson and Secretary of Defense Robert McNamara used the incident to expand the American commitment significantly but sought a piecemeal approach that would largely avoid a contentious public debate over American intervention.

Within a month, American ground troops were introduced into Vietnam as U.S. Marines went ashore and were stationed at Da Nang to protect an airbase there. The president soon authorized the deployment of thousands more troops. He had also approved Operation Rolling Thunder, a sustained bombing campaign against North Vietnam that began in early March.

It did not take long for the Marines to establish offensive operations against the communists. The Marines initiated search and destroy missions to engage the Viet Cong. They fought several battles with the enemy, requiring the president to send more troops.

In April 1965, the president finally explained his justification for escalating the war, which included the Cold War commitment to the free world. He told the American people, “We fight because we must fight if we are to live in a world where every country can shape its own destiny. And only in such a world will our own freedom be finally secure.”

As a result, Johnson progressively sent more and more troops to fight in Vietnam until there were 565,000 troops in 1968. The Tet Offensive in late January 1968 was a profound shock to the American public, which had received repeated promises of progress in the war. Even though U.S. forces recovered from the initial shock and won an overwhelming military victory that effectively neutralized the Viet Cong and devastated North Vietnamese Army forces, President Johnson was ruined politically and announced he would not run for re-election. His “credibility gap” contributed to growing distrust of government and concern about an unlimited and unchecked “imperial presidency,” soon made worse by Watergate.

The Vietnam War contributed to profound division on the home front. Hundreds of thousands of Americans from across the spectrum protested American involvement in Vietnam. Young people from the New Left were at the center of teach-ins and demonstrations on college campuses across the country. The Democratic Party was shaken by internal convulsions over the war, while a resurgent conservatism came to dominate American politics for a generation, culminating in the presidency of Ronald Reagan.

Eventually, more than 58,000 troops were lost in the war. The Cold War consensus on containment suffered a dislocation, and a Vietnam syndrome affected morale in the U.S. military and contributed to significant doubts about the projection of American power abroad. American confidence recovered in the 1980s as the United States won the Cold War, but policymakers have struggled to define the purposes of American foreign policy with the rise of new global challenges in the post-Cold War world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Dan Morenoff

It took almost a century for Congress and President Lyndon B. Johnson, a Democrat from Texas, to enact the Civil Rights Act of 1964, putting America back on the side of defending the equality before the law of all U.S. Citizens. That act formally made segregation illegal. It legally required states to stop applying facially neutral election laws differently depending on the race of the citizen trying to register and vote. If the Civil Rights Act of 1957 had raised expectations by showing what was now possible, the Civil Rights Act of 1964 dramatically raised expectations again, to actual equal treatment by governments.

But the defenders of segregation were not yet done. They continued to pursue the “massive resistance” to integration that emerged in the year between Brown I and Brown II.[1] They continued to refuse to register black voters, to use “literacy” tests (which tested esoteric knowledge, rather than literacy) only to deny black citizens the chance to register, and to murder those who didn’t get the message.

Jimmie Lee Jackson was one such victim of last-ditch defiance. In February 1965, Jackson, an Alabama church deacon, took part in a voting-rights demonstration in his hometown of Marion, Alabama; when state troopers attacked the demonstrators, a trooper shot Jackson, who died of his wounds days later. The Southern Christian Leadership Conference (in apparent coordination with the White House) responded by organizing a far larger march for voting rights, one that would cover the 54 miles from Selma, Alabama to the capitol in Montgomery. On March 7, 1965, that march reached the Edmund Pettus Bridge in Selma, where national and international television cameras captured (and broadcast into living rooms everywhere) Alabama state troopers gassing and beating unarmed demonstrators. When the SCLC committed to continuing the march, others flocked to join them. Two days later, as a federal court considered enjoining further state action against the demonstrators, a mob fatally beat James Reeb, a Unitarian minister from Boston who had flown in to join the march.

Johnson Returns to Congress

Less than a week later, President Johnson went before a Joint Session of Congress to deliver a nationally televised Presidential address.[2] Urging “every member of both parties, Americans of all religions and of all colors, from every section of this country” to join him in working “for the dignity of man and the destiny of democracy,” President Johnson, the heavily accented man-of-the-South whom Senator Richard Russell, a Democrat from Georgia, had once connived to place in the Presidency, compared the historical “turning point” confronting the nation to other moments “in man’s unending search for freedom,” including the battles of “Lexington and Concord” and the surrender at “Appomattox.” President Johnson defined the task before Congress as a “mission” that was “at once the oldest and the most basic of this country: to right wrong, to do justice, to serve man.” The President identified the core issue – that “of equal rights for American Negroes” – as one that “lay bare the secret heart of America itself[,]” a “challenge, not to our growth or abundance, or our welfare or our security, but rather to the values, and the purposes, and the meaning of our beloved nation.”

He said more. President Johnson recognized that “[t]here is no Negro problem. There is no Southern problem. There is no Northern problem. There is only an American problem. And we are met here tonight as Americans — not as Democrats or Republicans. We are met here as Americans to solve that problem.”  And still more:

“This was the first nation in the history of the world to be founded with a purpose. The great phrases of that purpose still sound in every American heart, North and South: ‘All men are created equal,’ ‘government by consent of the governed,’ ‘give me liberty or give me death.’ Well, those are not just clever words, or those are not just empty theories. In their name Americans have fought and died for two centuries, and tonight around the world they stand there as guardians of our liberty, risking their lives.

“Those words are a promise to every citizen that he shall share in the dignity of man. This dignity cannot be found in a man’s possessions; it cannot be found in his power, or in his position.  It really rests on his right to be treated as a man equal in opportunity to all others. It says that he shall share in freedom, he shall choose his leaders, educate his children, provide for his family according to his ability and his merits as a human being. To apply any other test – to deny a man his hopes because of his color, or race, or his religion, or the place of his birth is not only to do injustice, it is to deny America and to dishonor the dead who gave their lives for American freedom.

“Every American citizen must have an equal right to vote.

“There is no reason which can excuse the denial of that right.  There is no duty which weighs more heavily on us than the duty we have to ensure that right.

“Yet the harsh fact is that in many places in this country men and women are kept from voting simply because they are Negroes. Every device of which human ingenuity is capable has been used to deny this right. The Negro citizen may go to register only to be told that the day is wrong, or the hour is late, or the official in charge is absent. And if he persists, and if he manages to present himself to the registrar, he may be disqualified because he did not spell out his middle name or because he abbreviated a word on the application. And if he manages to fill out an application, he is given a test. The registrar is the sole judge of whether he passes this test. He may be asked to recite the entire Constitution, or explain the most complex provisions of State law. And even a college degree cannot be used to prove that he can read and write.

“For the fact is that the only way to pass these barriers is to show a white skin. Experience has clearly shown that the existing process of law cannot overcome systematic and ingenious discrimination. No law that we now have on the books – and I have helped to put three of them there – can ensure the right to vote when local officials are determined to deny it. In such a case our duty must be clear to all of us. The Constitution says that no person shall be kept from voting because of his race or his color. We have all sworn an oath before God to support and to defend that Constitution. We must now act in obedience to that oath.

“We cannot, we must not, refuse to protect the right of every American to vote in every election that he may desire to participate in. And we ought not, and we cannot, and we must not wait another eight months before we get a bill. We have already waited a hundred years and more, and the time for waiting is gone.

“But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and State of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too.  Because it’s not just Negroes, but really it’s all of us, who must overcome the crippling legacy of bigotry and injustice.

“And we shall overcome.

“The real hero of this struggle is the American Negro. His actions and protests, his courage to risk safety and even to risk his life, have awakened the conscience of this nation. His demonstrations have been designed to call attention to injustice, designed to provoke change, designed to stir reform.  He has called upon us to make good the promise of America.  And who among us can say that we would have made the same progress were it not for his persistent bravery, and his faith in American democracy.

“For at the real heart of [the] battle for equality is a deep[-]seated belief in the democratic process. Equality depends not on the force of arms or tear gas but depends upon the force of moral right; not on recourse to violence but on respect for law and order.

“And there have been many pressures upon your President and there will be others as the days come and go. But I pledge you tonight that we intend to fight this battle where it should be fought – in the courts, and in the Congress, and in the hearts of men.”

The Passage and Success of the Voting Rights Act

Congress made good on the President’s promises and fulfilled its oath.  The Voting Rights Act, the crowning achievement of the Civil Rights Movement, was signed into law in August 1965, less than five (5) months after those bloody events in Selma.

The VRA would allow individuals to sue in federal court when their voting rights were denied. It would allow the Department of Justice to do the same. And, recognizing that “voting discrimination … on a pervasive scale” justified an “uncommon exercise of congressional power[,]” despite the attendant “substantial federalism costs[,]” it required certain states and localities, for a limited time, to obtain the approval (or “pre-clearance”) of either DOJ or a federal court sitting in Washington, DC before making any alteration to their voting laws, from registration requirements to the location of polling places.[3]

And it worked.

The same Alabama Governor and Democrat, George Wallace, who (after losing his first race for the office) had promised himself never to be “out-segged” again and who, on getting back into office in 1963, had proclaimed “segregation today, segregation tomorrow, segregation forever[!]” would win the governorship again in 1982 by seeking and obtaining the majority support of Alabama’s African Americans. By 2013, “African-American voter turnout exceeded white voter turnout in five of the six States originally covered by [the pre-clearance requirement], with a gap in the sixth State of less than one half of one percent;”[4] the percentage of preclearance submissions drawing DOJ objections had dropped about 100-fold between the first decade under pre-clearance and 2006.[5]

At long last, with only occasional exceptions (themselves addressed through litigation under the VRA), American elections were held consistent with the requirements of the Constitution and the equality before the law of all U.S. Citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Southern states might now be required by law to integrate their public schools, but, by and large, they didn’t yet do so.  That would follow around 1970 when a pair of events forced the issue: (a) a University of Southern California football team led by O.J. Simpson drubbed the University of Alabama in the Crimson Tide’s 1970 home opener – so allowing Alabama Coach Bear Bryant to finally convince Alabama Governor George Wallace that the state must choose between having competitive football or segregated football; and (b) President Nixon quietly confronting the Southern governments that had supported his election with the conclusion of the American intelligence community that their failure to integrate was costing America the Cold War – they must decide whether they hated their black neighbors more than they hated the godless Communists.  However ironically, what finally killed Jim Crow was a love of football and a hatred of Marxism.

[2] See,

[3] Shelby County v. Holder, 570 U.S. 529, 133 S.Ct. 2612, 2620 and 2624 (2013) (each citing South Carolina v. Katzenbach, 383 U.S. 301, 308 and 334 (1966)); and at 2621 (citing Northwest Austin Municipal Util. Dist. No. One v. Holder, 557 U.S. 193, 202-03 (2009)), respectively.

[4] Id. at 2626.

[5] Id.

Guest Essayist: Dan Morenoff

For a decade after the Civil War, the federal government sought to make good its promises and protect the rights of the liberated as American citizens. Most critically, in the Civil Rights Act of 1866, Congress created U.S. Citizenship and, in the Civil Rights Act of 1875, Congress guaranteed all American Citizens access to all public accommodations. Then, from 1877 through most of the century that followed the close of the Civil War, the federal government did nothing to assure that those rights were respected. Eventually, in Brown v. Board of Education, the Supreme Court started to admit that this was a problem, a clear failure to abide by our Constitution. But the Supreme Court (in Brown II) also made clear that it wouldn’t do anything about it.

So things stood, until a man in high office made it his business to get the federal government again on the side of right, equality, and law. That man was Lyndon Baines Johnson. And while this story could be told in fascinating, exhaustive detail,[1] these are its broad outlines.

Jim Crow’s Defenders

Over much of the century following the Civil War’s close, the American South was an accepted aberration, where the federal government turned a blind eye to government mistreatment of U.S. Citizens (as well as to the systematic failure of governments to protect U.S. Citizens from mob rule and racially tinged violence), and where the highest office White Southerners could realistically dream of attaining was a seat in the U.S. Senate from which such a Southerner could keep those federal eyes blind.[2], [3] So, for the decades when it mattered, Southern Senators used their seniority and the procedures of the Senate (most prominently the filibuster, pioneered by South Carolina’s John C. Calhoun in the early 1800s) to block any federal ban on lynching, to protect the region’s racial caste system from federal intrusion, and to steadily steer federal money into the rebuilding of their broken region. For decades, the leader of these efforts was Senator Richard Russell, a Democrat from Georgia and an avowed racist, if one whose insistence on the prerogatives of the Senate and leadership on other issues nonetheless earned him the unofficial title, “the Conscience of the Senate.”

LBJ Enters the Picture

Then Lyndon Baines Johnson got himself elected to the Senate as a Democrat from Texas in 1948. He did so through fraud in a hotly contested election: the illegal ballots counted on his behalf turned a narrow defeat into an 87-vote victory, which led his Senate colleagues to call him “Landslide Lyndon” for the rest of his career.

By that time, LBJ had established a number of traits that would remain prominent throughout the rest of his life. Everywhere he went, LBJ consistently managed to convince powerful men to treat him as if he were their professional son. For one example, LBJ had convinced the president of his college to treat him, alone among decades of students, as a preferred heir. For another, as a Congressman, he managed to convince Sam Rayburn, a Democrat from Texas and Speaker of the House for 17 of 21 years, a man before whom everyone else in Washington cowered, to allow LBJ to regularly walk up to him in large gatherings and kiss his bald head. And everywhere he went, LBJ consistently managed to identify positions no one else wanted that held potential leverage and therefore could be made focal points of enormous power. When LBJ worked as a Capitol Hill staffer, he turned a “model Congress,” in which staffers played at being their bosses, into a vehicle to move actual legislation through the embarrassment of his rivals’ bosses. On a less positive note, everywhere he went, LBJ demonstrated, time and again, an enthusiasm for verbally and emotionally abusing those subject to his authority (staffers, girlfriends, his wife…), sometimes in the service of good causes and other times entirely in the name of his caprice and meanness.

In the Senate, LBJ followed form. He promptly won the patronage of Richard Russell, convincing the arch-segregationist both that he was the Southerner capable of taking up Russell’s mantle and that Russell should teach him everything he knew about Senate procedure. Arriving at a time when everyone else viewed Senate leadership positions as thankless drudgery, LBJ talked his way into being named his party’s Senate Whip in only his second Congress in the chamber. Four years later, having impressed his fellow Senators with his ability to accurately predict how they would vote, even as they grew to fear his beratings and emotional abuse, LBJ emerged as Senate Majority Leader. And in 1957, seizing on the support of President Dwight D. Eisenhower, a Republican from Kansas, for such a measure and on the Supreme Court’s recent issuance of Brown, LBJ managed to convince Russell both that the Senate must pass the first Civil Rights Act since Reconstruction (a comparatively weak bill, palatable to Russell as a way to prevent the passage of a stronger one) and that Russell should help him pass it to advance LBJ’s chances of winning the Presidency in 1960 as a loyal Southerner. Substantively, that 1957 Act created the U.S. Civil Rights Commission, a clearinghouse for ideas for further reforms, but one with no enforcement powers. The Act’s real power, though, wasn’t in its substance. It lay in what the Act demonstrated was suddenly possible: where a weak act could pass, a stronger one was conceivable. And where one was conceivable, millions of Americans long denied equality, taught by Brown and by the Reverend Martin Luther King, Jr.’s memorable phrase that “justice too long delayed is justice denied,” would demand the passage of the possible.

The Kennedy Years

Of course, Johnson didn’t win the Presidency in 1960. But, in part thanks to Rayburn and Russell’s backing, he did win the Vice Presidency. There, he could do nothing, and did. President John F. Kennedy, a Democrat from Massachusetts, didn’t trust him, the Senate gave him no role, and Bobby Kennedy, the President’s in-house proxy and functional Prime Minister, officially serving as Attorney General, openly mocked and dismissed Johnson as a washed up, clownish figure. So as the Civil Rights Movement pressed for action to secure the equality long denied, as students were arrested at lunch-counters and freedom riders were murdered, LBJ could only take the Attorney General’s abuse, silently sitting back and watching the President commit the White House to pushing for a far more aggressive Civil Rights Act, even as it had no plan for how to get it passed over the opposition of Senator Russell and his block of Southern Senators.

Dallas, the Presidency, and How Passage Was Finally Obtained

Not long before his assassination in 1963, President Kennedy proposed stronger legislation and said the nation “will not be fully free until all its citizens are free.” But when an assassin’s bullet killed President Kennedy on a Dallas street, LBJ became the new President of the United States. The man with a knack for finding leverage and power where others saw none suddenly sat center stage, with every conceivable lever available to him. And he wasted no time deploying those levers. Uniting with Everett Dirksen, a Republican from Illinois and the leader of the opposition party in the Senate, who delivered the support of eighty-two percent (82%) of his party’s Senators, President Johnson employed every tool available to the chief magistrate to procure passage of the stronger Civil Rights Act he had once promised Senator Russell that the 1957 Act would forestall.

The bill he now backed, like the Civil Rights Act of 1875, would outlaw discrimination based on race, color, religion, or national origin in public accommodations through Title II. It would do more. Title I would forbid the unequal application of voter registration laws to different races. Title III would bar state and local governments from denying access to public facilities on the basis of race, color, religion, or national origin. Title IV would authorize the Department of Justice to bring suits to compel the racial integration of schools. Title VI would bar discrimination on the basis of race, color, or national origin by federally funded programs and activities. And Title VII would bar employers from discriminating in hiring or firing on the basis of race, color, religion, sex, or national origin.

This was the bill approved by the House of Representatives after the President engineered a discharge petition to force it out of committee. This was the bill filibustered by 18 Senators for a record 60 straight days. And this was the bill whose filibuster, the first of any kind defeated since 1927, was finally broken on June 10, 1964. With the lengthy Democrat-led filibuster ended, the Senate passed the Civil Rights bill 73-27 on June 19, 1964, and the House promptly re-passed it as amended by the Senate.

On July 2, 1964, President Johnson signed the Civil Rights Act into law on national television. Finally, on the same day that John Adams had predicted 188 years earlier would be forever commemorated as a “Day of Deliverance” with “Pomp and Parade, with Shews, Games, Sports, Guns, Bells, Bonfires and Illuminations from one End of this Continent to the other[,]” the federal government had restored the law abandoned with the end of Reconstruction after 1876. Once more, the United States government would stand for the equality before the law of all its Citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Robert Caro has, so far, written four (4) books over as many decades telling this story over thousands of pages.  The author recommends them, even as he provides this TLDR summation.  Caro’s books on the subject are: The Years of Lyndon Johnson: The Path to Power, The Years of Lyndon Johnson: Means of Ascent, The Years of Lyndon Johnson: Master of the Senate, and The Years of Lyndon Johnson: The Passage of Power.

[2] Black Southerners, almost totally barred from voting, could not realistically hope for election to any office over this period.  It is worth noting, however, that the Great Migration saw a substantial portion of America’s Black population move North, where this was not the case and where such migrants (and their children) could and did win elective office.

[3] The exception proving the rule is President Woodrow Wilson.  Wilson, the son of a Confederate veteran, was born in Virginia and raised mostly in South Carolina.  Yet he ran for the Presidency as the Governor of New Jersey, a position he acquired as a result of his career at Princeton University (and the progressive movement’s adoration of the “expertise” that an Ivy League president seemed to promise).  Even then, Wilson could only win the Presidency (which empowered him to segregate the federal workforce) when his two predecessors ran against each other and split their shared, super-majority support.

Guest Essayist: Robert L. Woodson, Sr.

When President Lyndon B. Johnson announced the launch of a nationwide War on Poverty in 1964, momentary hope arose that it would uplift the lives of thousands of impoverished Americans and their inner-city neighborhoods. But the touted antipoverty campaign of the 60s is a classic example of injury with the helping hand.

Regardless of intention—or mantras—the ultimate measure of any effort to reduce poverty is the impact it has on its purported beneficiaries. After more than 60 years and the investment of $25 trillion of taxpayers’ money, poverty numbers have remained virtually the same, while conditions in low-income neighborhoods have spiraled downward.

While impoverished Americans may not be rising up, what has become a virtual “poverty industry” and the bureaucracy of the welfare system have prospered, expanding to 89 separate programs spread across 14 government departments and agencies. In sum, 70% of anti-poverty funding has not reached the poor but has been absorbed by those who serve the poor. As a consequence, the system has made a commodity of the poor, with perverse incentives to maintain people in poverty as dependents. The operative question became not which problems are solvable, but which ones are fundable.

I had first-hand experience of the power and money grabs that followed the launch of Johnson’s antipoverty agenda. As a young civil rights leader at the time of its introduction, I was very hopeful that, at long last, policies would be adopted that would direct resources to empower the poor to rise. I was working for the summer in Pasadena, California, leading a work project with the American Friends Service Committee in the year after the Watts riots and the government’s response with the War on Poverty.

Initially, the anti-poverty money funded grassroots leaders in high-crime, low-income neighborhoods who had earned the trust and confidence of local people and had their best interests at heart. But many of the local grassroots leaders who were paid by the program began to raise questions about the functions of the local government and how it was assisting the poor. These challenges from the residents became very troublesome to local officials and they responded by appealing to Washington to change the rules to limit the control that those grassroots leaders could exercise over programs to aid their peers.

One of the ways the Washington bureaucracy responded was to institute a requirement that all outreach workers had to be college-educated as a condition of their employment. Overnight, committed and trusted workers on the ground found themselves out of a job. In addition, it was ruled that the allocation and distribution of all incoming federal dollars was to be controlled by a local anti-poverty board of directors that represented three groups: 1/3 local officials, 1/3 business leaders and 1/3 local community leaders. I knew from the moment those structural changes occurred that the poverty program was going to be a disaster and that it would serve the interests of those who served the poor with little benefit to its purported beneficiaries.

Since only a third of the participants on the board would be from the community, the other two-thirds were careful to ensure that the neighborhood residents would be ineffective and docile representatives who would ratify the opportunistic and often corrupt decisions they made. In the town where I was engaged in civil rights activities, I witnessed local poverty agencies awarding daycare contracts to business members on the board who would lease space at three times the market-value rate.

Years of such corruption throughout the nation were later followed by many convictions and the incarceration of people who were exploiting the programs and hurting the poor. When they were charged with corruption, many of the perpetrators used the issue of race to defend themselves. The practice of using race as a shield against charges of corrupt activity continues to this day. The disgraced former Detroit Mayor Kwame Kilpatrick received a 28-year sentence for racketeering, bribery, extortion and tax crimes. Last year, more than 40 public and private officials were charged as part of a long-running and expanding federal investigation into public corruption in metro Detroit, including fifteen police officers, five suburban trustees, millionaire moguls and a former state senator. Much of the reporting about corruption in the administration of poverty programs never rose to the level of public outrage or indignation and was treated as a local issue.

Yet the failure of the welfare system and the War on Poverty is rooted in something deeper than the opportunistic misuse of funds. Its most devastating impact is in undermining pillars of strength that have empowered the black community to survive and thrive in spite of oppression: a spirit of enterprise and mutual cooperation, and the sustaining support of family and community.

In the past, even during periods of legalized discrimination and oppression, a spirit of entrepreneurship and agency permeated the black community. Within the first 50 years after the Emancipation Proclamation, black Americans had accumulated a personal wealth of $700 million. They owned more than 40,000 businesses and more than 930,000 farms. Black commercial enclaves in Durham, North Carolina, and the Greenwood Avenue section of Tulsa, Oklahoma, were known as the Negro Wall Street. When blacks were barred from white establishments and services, they created their own thriving alternative transit systems. When whites refused to lend money to blacks, they established more than 103 banks and savings and loan associations and more than 1,000 inns and hotels. When whites refused to treat blacks in hospitals, they established 230 hospitals and medical schools throughout the country.

In contrast, within the bureaucracy of the burgeoning poverty industry, low-income people were defined as the helpless victims of an unfair and unjust society. The strategy of the liberal social engineers was to right this wrong by the redistribution of wealth, facilitated by the social services bureaucracy in the form of cash payments or equivalent benefits. The cause of a person’s poverty was assumed to be beyond their power and ability to control; therefore, resources were given with no strings attached and with no assumption of the possibility of upward mobility toward self-sufficiency. The empowering notions of personal responsibility and agency were decried as “blaming the victim,” and, with the spread of that mentality and the acceptance of a state of dependency, the rich heritage of entrepreneurship in the black community fell by the wayside.

Until the mid-60s, two parents were raising their children in 85% of all black families. Since the advent of the welfare state, more than 75% of black children have been born to single mothers. The system included penalties for marriage and work, through which benefits would be decreased or terminated. As income was detached from work, the role of fathers in the family was undermined and dismissed. The dissolution of the black family was considered necessary collateral damage in a war being waged in academia against capitalism in America, led by Columbia University professors Richard Cloward and Frances Fox Piven, who promoted a massive rise in dependency with the goal of overloading the U.S. public welfare system and eliciting “radical change.”

Reams of research have found that youths in two-parent families are less likely to become involved in delinquent behavior and drug abuse or to suffer depression, and more likely to succeed in school and pursue higher education. As generations of children grew up on the streets of the inner city, drug addiction and school drop-out rates soared. When youths turned to gangs for identity, protection, and a sense of belonging, entire neighborhoods became virtual killing fields of warring factions. Statistics from Chicago alone bring home the tragic toll that has been taken. Over Fathers’ Day weekend, 104 people were shot across the city, 15 of them, including five children, fatally. Within a three-day period of the preceding week, a three-year-old child was shot and killed in the South Austin community, the third child under the age of 10 shot in that span.

In the midst of this tragic scenario, the true casualties of the War on Poverty have been its purported beneficiaries.

Robert L. Woodson, Sr. founded the Woodson Center in 1981 to help residents of low-income neighborhoods address the problems of their communities. A former civil rights activist, he has headed the National Urban League Department of Criminal Justice, and has been a resident fellow at the American Enterprise Institute for Public Policy Research. Referred to by many as the “godfather” of the neighborhood empowerment movement, for more than four decades Woodson has had a special concern for the problems of youth. In response to an epidemic of youth violence that has afflicted urban, rural and suburban neighborhoods alike, Woodson has focused much of the Woodson Center’s activities on an initiative to establish Violence-Free Zones in troubled schools and neighborhoods throughout the nation. He is an early MacArthur “genius” awardee and the recipient of the 2008 Bradley Prize, the Presidential Citizens Award, and a 2008 Social Entrepreneurship Award from the Manhattan Institute.

Guest Essayist: Andrew Langer

We are going to assemble the best thought and broadest knowledge from all over the world to find these answers. I intend to establish working groups to prepare a series of conferences and meetings—on the cities, on natural beauty, on the quality of education, and on other emerging challenges. From these studies, we will begin to set our course toward the Great Society. – President Lyndon Baines Johnson, Ann Arbor, MI, May 22, 1964

In America in 1964, the seeds of the later discontent of the 1960s were being planted. The nation had just suffered the horrific assassination of an enormously charismatic president, John F. Kennedy; we were in the midst of an intense national conversation on race and civil rights; and we were just starting to get mired in a military conflict in Southeast Asia.

We were also heading into a presidential election, and while tackling poverty in America wasn’t a centerpiece of the campaign, President Johnson started giving a series of speeches about transforming the United States into a “Great Society”—a concept that would become the most massive series of social welfare reforms since Franklin Roosevelt’s post-Depression “New Deal” of the 1930s.

At the time, there was serious debate over whether the federal government even had the power to engage in what had, traditionally, been state-level social support work—or, previously, private charitable work. The debate centered around the Constitution’s “general welfare” clause, the actionable part of the Constitution building on the Preamble’s “promote the general welfare” language, saying in Article I, Section 8, Clause 1 that, “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;” (emphasis added).

Proponents of an increased federal role in social service spending have argued that “welfare” for this purpose means just what politicians today proffer it does: social service spending, and that because the Constitution grants Congress this power, such power is expansive (if not unlimited).

But this flies in the face of the whole concept of the Constitution itself—which is the idea of a federal government of limited, carefully-enumerated powers. The founders were skeptical of powerful, centralized government (and had fought a revolution over that very point), and the debate of just how powerful, how centralized was at the core of the Constitutional Convention’s debates.

Constitutional author (and later president) James Madison said this in Federalist 41:

It has been urged and echoed, that the power “to lay and collect taxes, duties, imposts, and excises, to pay the debts, and provide for the common defense and general welfare of the United States,’’ amounts to an unlimited commission to exercise every power which may be alleged to be necessary for the common defense or general welfare. No stronger proof could be given of the distress under which these writers labor for objections, than their stooping to such a misconstruction. Had no other enumeration or definition of the powers of the Congress been found in the Constitution, than the general expressions just cited, the authors of the objection might have had some color for it; though it would have been difficult to find a reason for so awkward a form of describing an authority to legislate in all possible cases.

In 1831, he also said, more plainly:

With respect to the words “general welfare,” I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense would be a metamorphosis of the Constitution into a character which there is a host of proofs was not contemplated by its creators.

This was, essentially, the interpretation of the clause that stood for nearly 150 years—only to be largely gutted in the wake of FDR’s New Deal programs. As discussed in the essay on FDR’s first 100 days, there was great back and forth within the Supreme Court over the constitutionality of the New Deal—with certain members of the court apparently succumbing, eventually, to the pressure of a proposed plan to “pack” the Supreme Court with newer, younger members.

A series of cases, starting with United States v. Butler (1936) and then Helvering v. Davis (1937), essentially ruled that Congress’ power to spend was non-reviewable by the Supreme Court… that there could be no constitutional challenge to spending plans, that if Congress said a spending plan was to “promote the general welfare” then that’s what it was.

Madison was right to be fearful—when taken into the context of an expansive interpretation of the Commerce Clause, it gives the federal government near-unlimited power. Either something is subject to federal regulation because it’s an “item in or related to commerce” or it’s subject to federal spending because it “promotes the general welfare.”

Building on this, LBJ moved forward with the Great Society in 1964, creating a series of massive spending and federal regulatory programs whose goal was to eliminate poverty and create greater equity in social service programs.

Problematically, LBJ created a series of “task forces” to craft these policies—admittedly because he didn’t want public input or scrutiny that would lead to criticism of the work his administration was doing.

Normally, when the executive branch engages in policymaking, those policies are governed by a series of rules aimed at ensuring public participation—both so that the public can offer their ideas about possible solutions and to ensure that the government isn’t abusing its powers.

Here, the Johnson administration did no such thing—creating, essentially, a perfect storm of problematic policymaking: a massive upheaval of government policy, coupled with massive spending proposals, coupled with little public scrutiny.

Had they allowed for greater public input, someone might have pointed out what the founders knew: that there was a reason such social support has traditionally been either the purview of local governance or private charity, that such programs are much more effective when they are locally driven and/or community based. Local services work because local providers better understand the challenges their communities face.

And private charities provide more effective services because not only do they have a vested interest in the outcomes, that interest is driven by building relationships centered on faith and hope. If government programs are impersonal in general, programs whose management is far removed from the local communities they serve are far worse.

The end result is twofold: faceless entitlement bureaucracies whose only incentive is self-perpetuation (not solving problems), and people who have little incentive to move themselves off of these programs.

Thus, Johnson’s Great Society was a massive failure. Not only did it not end poverty, it created a devastating perpetual cycle of it, leaving enormous bureaucratic programs which still exist today—and which, despite pressures at various points in time (the effort of President Bill Clinton and the GOP-led Congress after the 1994 election to reform the nation’s welfare programs, as one example), seem largely resistant to change or improvement.

The founders knew that local and private charity did a better job at promoting “the general welfare” of a community than a federal program would. They knew the dangers of expansive government spending (and the power that would accrue with it). Once again, as Justice Sandra Day O’Connor said in New York v. United States (1992), the “Constitution protects us from our own best intentions.”

Andrew Langer is President of the Institute for Liberty. He teaches in the Public Policy Program at the College of William & Mary.

Guest Essayist: Joshua Schmid

The Cold War was a time of immense tension between the world’s superpowers, the Soviet Union and the United States. However, the two never came into direct conflict during the Cold War’s first decade and a half, choosing rather to pursue proxy wars in order to dominate the geopolitical landscape. The Cuban Missile Crisis of October 1962 threatened to reverse this course by turning the war “hot” as the leader of the free world and the leader of the world communist revolution squared off in a deadly game of nuclear cat and mouse.

At the beginning of the 1960s, some members of the Soviet Union’s leadership desired more aggressive policies against the United States. The small island of Cuba, located a mere 100 miles off the coast of Florida, provided the Soviets with an opportunity. Cuba had recently undergone a communist revolution and its leadership was happy to accept Soviet intervention if it would minimize American harassments like the failed Bay of Pigs invasion in 1961. Soviet Premier Nikita Khrushchev offered to place nuclear missiles on Cuba, which would put nearly any target on the continental U.S. within striking range. The Cubans accepted and work on the missile sites began during the summer of 1962.

Despite an elaborate scheme to disguise the missiles and the launch sites, American intelligence discovered the Soviet scheme by mid-October. President John F. Kennedy immediately convened a team of security advisors, who suggested a variety of options. These included ignoring the missiles, using diplomacy to pressure the Soviets to remove the missiles, invading Cuba, blockading the island, and strategic airstrikes on the missile sites. Kennedy’s military advisors strongly suggested a full-scale invasion of Cuba as the only way to defeat the threat. However, the president ultimately overrode them and decided any attack would only provoke greater conflict with the Russians. On October 22, Kennedy gave a speech to the American people in which he called for a “quarantine” of the island under which “all ships of any kind bound for Cuba, from whatever nation or port, will, if found to contain cargoes of offensive weapons, be turned back.”

The Russians appeared unfazed by the bravado of Kennedy’s speech, and announced they would interpret any attempts to quarantine the island of Cuba as an aggressive act. However, as the U.S. continued to stand by its policy, the Soviet Union slowly backed down. When Russian ships neared Cuba, they broke course and moved away from the island rather than challenging the quarantine. Despite this small victory, the U.S. still needed to worry about the missiles already installed.

In the ensuing days, the U.S. continued to insist on the removal of the missiles from Cuba. As the haggling between the two nations continued, the nuclear launch sites became fully operational. Kennedy began a more aggressive policy that included a threat to invade Cuba. Amidst these tensions, the most harrowing event of the entire Cuban Missile Crisis occurred. The Soviet submarine B-59 neared the blockade line and was harassed by American warships dropping depth charges. The submarine had lost radio contact with the rest of the Russian navy and could not surface to refill its oxygen. The captain of B-59 decided that war must have broken out between the U.S. and Soviet Union, and proposed that the submarine launch its nuclear torpedo. This action required a unanimous vote by the top three officers onboard. Fortunately, the executive officer cast the lone veto against what surely would have been an apocalyptic action.

Eventually, Khrushchev and Kennedy reached an agreement that brought an end to the crisis. The Russians removed the missiles from Cuba and the U.S. promised not to invade the island. Additionally, Kennedy removed missiles stationed near the Soviet border in Turkey and Italy as a show of good faith. A brief cooling period between the two superpowers would ensue, during which time a direct communication line between the White House and the Kremlin was established. And while the Cold War would continue for three more decades, never again would the two blocs be so close to nuclear annihilation as they were in October 1962.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.

Guest Essayist: Tony Williams

The Cold War between the United States and Soviet Union was a geopolitical struggle around the globe characterized by an ideological contest between capitalism and communism, and a nuclear arms race. An important part of the Cold War was the space race which became a competition between the two superpowers.

Each side sought to be the first to achieve milestones in the space race and used the achievements for propaganda value in the Cold War. The Soviet launch of the satellite, Sputnik, while a relatively modest accomplishment, became a symbolically important event that triggered and defined the dawn of the space race. The space race was one of the peaceful competitions of the Cold War and pushed the boundaries of the human imagination.

The Cold War nuclear arms race helped lead to the development of rocket technology that made putting humans into space a practical reality in a short time. Only 12 years after the Russians launched a satellite into orbit around the Earth, Americans sent astronauts to walk on the moon.

The origins of Sputnik and spaceflight lay in the decades before World War II, with the pioneering flights of liquid-fueled rockets in the United States and Europe. American Robert Goddard launched one from a Massachusetts farm in 1926 and continued to develop the technology on a testing range in New Mexico in the 1930s. Meanwhile, Goddard’s research influenced the work of German rocketeer Hermann Oberth, who fired the first liquid-fueled rocket in Europe in 1930 and dreamed of spaceflight. In Russia, Konstantin Tsiolkovsky developed the idea of rocket technology, and his ideas influenced Sergei Korolev in the 1930s.

The greatest advance in rocket technology took place in Nazi Germany, where Wernher von Braun led efforts to build the V-2 and other rockets that could hit England and terrorize civilian populations when launched from continental Europe. Hitler’s superweapons never had the decisive effect on the war he hoped for, but the rockets had continuing military and civilian applications.

At the end of the war, Soviet and Allied forces raced to Berlin as the Nazi regime collapsed in the spring of 1945. Preferring to surrender to the Americans because of the Red Army’s well-deserved reputation for brutality, von Braun and his team famously surrendered to Private Fred Schneikert and his platoon. They turned over 100 unfinished V-2 rockets and 14 tons of spare parts and blueprints to the Americans, who whisked the scientists, rocketry, and plans away just days before the Soviet occupation of the area.

In Operation Paperclip, the Americans secretly brought thousands of German scientists and engineers to the United States, including more than 100 rocket scientists from von Braun’s team. The operation was controversial because of Nazi Party affiliations, but few were rabid devotees of Nazi ideology, and their records were cleared. The Americans did not want them contributing to Soviet military production and brought them instead to Texas and then to Huntsville, Alabama, to develop American rocket technology as part of the nuclear arms race and its immense rockets built to carry nuclear warheads. Within a decade, both sides had intercontinental ballistic missiles (ICBMs) in their arsenals.

During the next decade, the United States developed various missile systems, producing rockets of incredible size, thrust, and speed that could travel great distances. Interservice rivalry meant that the U.S. Army, Navy, and Air Force each developed and built competing rocket systems, including the Redstone, Vanguard, Jupiter-C, Polaris, and Atlas rockets. Meanwhile, the Soviets were secretly building their own R-7 missile, designed as a clustered rather than a staged rocket.

On October 4, 1957, the Russians shocked Americans by successfully launching a satellite into orbit. Sputnik was a metal sphere weighing 184 pounds that emitted a beeping sound to Earth, one embarrassingly picked up by U.S. global tracking stations. The effort was not only part of the Cold War, but also of the International Geophysical Year, in which scientists from around the world formed a consortium to share information on highly active solar flares and a host of other scientific knowledge. However, both the Soviets and Americans were highly reluctant to share any knowledge that might have a relationship to military technology.

While American intelligence had predicted the launch, Sputnik created a wave of panic and near hysteria. Although President Dwight Eisenhower was publicly unconcerned because the United States was preparing its own satellite, the American press, the public, and Congress were outraged, fearing the Russians were spying on them or could rain down nuclear weapons from space. Moreover, it seemed as if the Americans were falling behind the Soviets. Henry Jackson, a Democratic senator from the state of Washington, called Sputnik “a devastating blow to the United States’ scientific, industrial, and technical prestige in the world.” Sputnik initiated the space race between the United States and Soviet Union as part of the Cold War superpower rivalry.

A month later, the Soviets sent a dog named Laika into space aboard Sputnik II. Although the dog died because it only had life support systems for a handful of days, the second successful orbiting satellite—this one carrying a living creature—further humiliated Americans even if they humorously dubbed it “Muttnik.”

The public relations nightmare was further exacerbated by the explosion of a Vanguard rocket carrying a Navy satellite at the missile test range at Cape Canaveral, Florida, on December 6. The event was aired on television and watched by millions. The launch was supposed to restore pride in American technology, but it was an embarrassing failure. The press had a field day and labeled it “Kaputnik” and “Flopnik.”

On January 31, 1958, Americans finally had reason to cheer when a Jupiter-C rocket lifted off and went into orbit carrying a thirty-one-pound satellite named Explorer. The space race was now on and each side competed to be the first to accomplish a goal. The space race also had significant impacts upon American society.

In 1958, Congress passed the National Defense Education Act to spend more money to promote science, math, and engineering education at all levels. To signal its peaceful intentions, Congress also created the National Aeronautics and Space Administration (NASA) as a civilian organization to lead the American efforts in space exploration, whereas the Russian program operated as part of the military.

In December 1958, NASA announced Project Mercury, with the purpose of putting an astronaut in space; it would be followed by Projects Gemini and Apollo, which culminated in Neil Armstrong and Buzz Aldrin walking on the moon. The space race was an important part of the Cold War, but it was also about the spirit of human discovery and pushing the frontiers of knowledge and space.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 

Guest Essayist: Gary Porter

While speaking on June 14, 1954, Flag Day, President Dwight D. Eisenhower talked about the importance of reaffirming religious faith in America’s heritage and future, saying that doing so would “constantly strengthen those spiritual weapons which forever will be our country’s most powerful resource, in peace or in war.” In 1864, during the Civil War, the phrase “In God We Trust” first appeared on U.S. coins. On July 30, 1956, “In God We Trust” became the nation’s motto when President Eisenhower signed into law a bill declaring it so and requiring that the motto be printed, in capital letters, on every denomination of United States paper currency.

“The Hand of providence has been so conspicuous in all this, that he must be worse than an infidel that lacks faith, and more than wicked, that has not gratitude enough to acknowledge his obligations.” George Washington, 1778.[i]

“It becomes a people publicly to acknowledge the over-ruling hand of Divine Providence and their dependence upon the Supreme Being as their Creator and Merciful Preserver . . .” Samuel Huntington, 1791.[ii]

“We are a religious people whose institutions presuppose a Supreme Being.” Associate Justice William O. Douglas, 1952.[iii]

One of the most enduring battles in American politics has been over the question of whether America is or ever was a Christian Nation. For Supreme Court Associate Justice David Brewer the answer was simple: yes. The United States was formed as, and in Brewer’s 1892 view still was, a Christian Nation. The Justice said as much in Church of the Holy Trinity v. United States. But his simple answer did not go unsupported.

“[I]n what sense can [the United States] be called a Christian nation? Not in the sense that Christianity is the established religion or the people are compelled in any manner to support it…Neither is it Christian in the sense that all its citizens are either in fact or in name Christians. On the contrary, all religions have free scope within its borders. Numbers of our people profess other religions, and many reject all…Nevertheless, we constantly speak of this republic as a Christian Nation – in fact, as the leading Christian Nation of the world. This popular use of the term certainly has significance. It is not a mere creation of the imagination. It is not a term of derision but has a substantial basis – one which justifies its use. Let us analyze a little and see what is the basis.”[iv]

Brewer went on, of course, to do just that.

Regrettably, it lies beyond the scope of this short essay to repeat Brewer’s arguments. In 1905, Brewer re-assembled them into a book: The United States a Christian Nation. It was republished in 2010 by American Vision and is worth the read.[v]  For the purposes of this essay I will stipulate, with Brewer, that America is a Christian nation. If that be the case, it should come as no surprise that such a nation would take the advice of Samuel Huntington and openly acknowledge its trust in God on multiple occasions and in a variety of ways: on its coinage, for instance. How we came to do that as a nation is an interesting story stretching over much of our history.

Trusting God was a familiar concept to America’s settlers – they spoke and wrote of it often. Their Bibles, at least one in every home, contained many verses encouraging believers to place their trust in God,[vi] and early Americans knew their Bible.[vii] Upon surviving the perilous voyage across the ocean, their consistent first act was to thank the God of the Bible for their safety.

Benjamin Franklin’s volunteer Pennsylvania militia of 1747-1748 reportedly had regimental banners displaying “In God We Trust.”[viii] In 1776, our Declaration of Independence confirmed the signers had placed “a firm reliance on the protection of divine Providence.”[ix] In 1814, Francis Scott Key penned his famous poem which eventually became our national anthem. The fourth stanza contains the words: “Then conquer we must, when our cause is just, and this be our motto: ‘In God is our trust.’”

In 1848, construction began on the first phase of the Washington Monument (it was not completed until 1884). “In God We Trust” sits among Bible verses chiseled on the inside walls and “Praise God” (“Laus Deo” in Latin) can be found on its cap plate. But it would be another thirteen years before someone suggested putting a “recognition of the Almighty God” on U.S. coins.

That someone, Pennsylvania minister M. R. Watkinson, wrote to Salmon P. Chase, Abraham Lincoln’s Secretary of the Treasury, and suggested that such a recognition of the Almighty God would “place us openly under the Divine protection we have personally claimed.” Watkinson suggested the words “PERPETUAL UNION” and “GOD, LIBERTY, LAW.” Chase liked the basic idea but not Watkinson’s suggestions. He instructed James Pollock, Director of the Mint at Philadelphia, to come up with a motto for the coins: “The trust of our people in God should be declared on our national coins. You will cause a device to be prepared without unnecessary delay with a motto expressing in the fewest and tersest words possible this national recognition” (emphasis mine).

Secretary Chase “wordsmithed” Director Pollock’s suggestions a bit and came up with his “tersest” words: “IN GOD WE TRUST,” which was ordered to be so engraved by an Act of Congress on April 22, 1864. First to bear the words was the 1864 two-cent coin.

The following year, another Act of Congress allowed the Mint Director to place the motto on all gold and silver coins that “shall admit the inscription thereon.” The motto was promptly placed on the gold double-eagle coin, the gold eagle coin, and the gold half-eagle coin. It was also minted on silver coins, and on the nickel three-cent coin beginning in 1866.

One might guess that the phrase has appeared on all U.S. coins since 1866 – one would be wrong.

The U.S. Treasury website explains (without further details) that “the motto disappeared from the five-cent coin in 1883, and did not reappear until production of the Jefferson nickel began in 1938.” The motto was also “found missing from the new design of the double-eagle gold coin and the eagle gold coin shortly after they appeared in 1907. In response to a general demand, Congress ordered it restored, and the Act of May 18, 1908, made it mandatory on all coins upon which it had previously appeared” [x] (emphasis added). I’m guessing someone got fired over that disappearance act. Since 1938, all United States coins have borne the phrase. No others have had it “go missing.”

The year 1956 proved a watershed. As you read in the introduction to this essay, that year President Dwight D. Eisenhower signed a law (P.L. 84-140) which declared “In God We Trust” to be the national motto of the United States. The bill had passed the House and the Senate unanimously and without debate. The following year the motto began appearing on U.S. paper currency, beginning with the one-dollar silver certificate. The Treasury gradually included it as part of the back design of all classes and denominations of currency.

Our story could end there – but it doesn’t.

There is no doubt Founding Era Americans would have welcomed the phrase on their currency had someone suggested it, but it turns out some Americans today have a problem with it – a big problem.

America’s atheists continue to challenge periodically the constitutionality of the phrase appearing on government coins. The first challenge, Aronow v. United States, came in 1970; it would not be the last. Additional challenges were mounted in 1978 (O’Hair v. Blumenthal) and 1979 (Madalyn Murray O’Hair v. W. Michael Blumenthal). Each of these cases was decided at the circuit court level against the plaintiff, with the court affirming that the “primary purpose of the slogan was secular.”

Each value judgment under the Religion Clauses must therefore turn on whether particular acts in question are intended to establish or interfere with religious beliefs and practices or have the effect of doing so. [xi]

Having the national motto on currency neither established nor interfered with “religious beliefs and practices.”

In 2011, in case some needed a reminder, the House of Representatives passed a new resolution reaffirming “In God We Trust” as the official motto of the United States by a 396–9 vote (recall that the 1956 vote had been unanimous; here in the 21st century it was not).

Undaunted by the courts’ previous opinions on the matter, atheist activist Michael Newdow brought a new challenge in 2019 and lost in the Eighth Circuit. The Supreme Court (on April 23, 2020) declined to hear the appeal. By my count, Newdow is now 0-5. His 2004 challenge[xii] that the words “under God” in the Pledge of Allegiance violated the First Amendment was a bust, as was his 2009 attempt to block Chief Justice John Roberts from including the phrase “So help me God” when administering the presidential oath of office to Barack Obama. He tried to stop the phrase from being recited in the 2013 and 2017 inaugurations as well – each time unsuccessfully.

In spite of atheist challenges, or perhaps because of them, our national motto has enjoyed a bit of a resurgence of late, at least in the more conservative areas of the country:

In 2014, the Mississippi legislature voted to add the words “In God We Trust” to the state seal.

In 2015, Jefferson County, Illinois decided to put the national motto on its police squad cars. Many other localities followed suit, including York County, Virginia, and Bakersfield, California, in 2019.

In March 2017, Arkansas required its public schools to display posters which included the national motto. Similar laws were passed in Florida (2018), Tennessee (2018), South Dakota (2019) and Louisiana (2019).

On March 3, 2020, the Oklahoma House of Representatives passed a bill that would require all public buildings in the state to display the motto. Kansas, Indiana, and Oklahoma are considering similar bills.

But here is the question which lies at the heart of this issue: Does America indeed trust in God?

I think it is clear that America’s Founders, by and large, did – at least they said and acted as though they did. But when you look around the United States today, outside of some limited activity on Sunday mornings and on the National Day of Prayer, does America actually trust in God? There is ample evidence that we trust in everything and anything but God.

Certainly we seem to trust in science, or what passes for science today.  We put a lot of trust in public education, it would seem, even though the results are quite unimpressive and the curriculum actually works to undermine trust in God. Finally, we put a lot of trust in our elected officials even though they betray that trust with alarming regularity.[xiii]

Perhaps citizens of the United States need to see our motto on our currency, on school and court room walls to simply remind us of what we should be doing, and doing more often.

“America trusts in God,” we declare. Do we mean it?

“And those who know your name put their trust in you, for you, O Lord, have not forsaken those who seek you.” Psalm 9:10 ESV

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at, on Facebook or Twitter (@constitutionled).

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[i] Letter to Thomas Nelson, August 20, 1778.

[ii] Samuel Huntington was a signer of the Declaration Of Independence; President of Congress;
Judge; and Governor of Connecticut.  Quoted from A Proclamation for a Day of Fasting, Prayer and Humiliation, March 9, 1791.

[iii] Zorach v. Clauson, 343 U.S. 306 (1952).

[iv] Church of the Holy Trinity v. United States, 143 U.S. 457 (1892).


[vi] Examples include: Psalm 56:3, Isaiah 26:4, Psalm 20:7, Proverbs 3:5-6 and Jeremiah 17:7.

[vii] “Their many quotations from and allusions to both familiar and obscure scriptural passages confirms that [America’s Founders] knew the Bible from cover to cover.” Daniel L. Driesbach, 2017, Reading the Bible with the Founding Fathers, Oxford University Press, p.1

[viii] See

[ix] Thomas Jefferson, Declaration of Independence, July 1776.



[xii] Newdow v. United States, 328 F.3d 466 (9th Cir. 2004)


Guest Essayist: Tony Williams

In 1919, Dwight Eisenhower was part of a U.S. Army caravan of motor vehicles traveling across the country as a publicity stunt. The convoy encountered woefully inadequate roads in terrible condition, and the journey took two months to complete.

When Eisenhower was in Germany after the end of World War II, he was deeply impressed by the Autobahn because of its civilian and military applications. The experiences were formative in shaping Eisenhower’s thinking about developing a national highway system in the United States. He later said, “We must build new roads,” and asked Congress for “forward looking action.”

As president, Eisenhower generally held to the postwar belief called “Modern Republicanism.” This meant that while he did not support a massive increase in spending on the federal New Deal welfare state, he would also not roll it back. He was a fiscal conservative who supported decreased federal spending and balanced budgets, but he advocated a national highway system as a massive public infrastructure project to facilitate private markets and economic growth.

The postwar consumer culture was dominated by the automobile. Americans loved their large cars replete with large tail fins and abundant amounts of chrome. By 1960, 80 percent of American families owned a car. American cars symbolized their geographical mobility, consumer desires, and global industrial predominance. They needed a modern highway system to get around the sprawling country. By 1954, President Eisenhower was ready to pitch the idea of a national highway system to Congress and the states. He called it “the biggest peacetime construction project of any description ever undertaken by the United States or any other country.”

In July, Eisenhower dispatched his vice president, Richard Nixon, to the Governors’ Conference to win support. The principle of federalism was raised, with many states in opposition to federal control and taxes.

That same month, the president asked his friend General Lucius Clay, an engineer by training who had supervised the occupation of postwar Germany, to manage the planning of the project and present it to Congress. Clay organized the President’s Advisory Committee on a National Highway Program.

The panel held hearings and spoke to a variety of experts and interests including engineers, financiers, construction and trucking companies, and labor unions. Based upon the information it amassed, the panel put together a plan by January 1955.

The plan proposed 41,000 miles of highway construction at an estimated cost of $101 billion over ten years. It recommended the creation of a federal highway corporation that would use 30-year bonds to finance construction. There would be a gas tax but no tolls or federal taxes. A bill was written based upon the terms of the plan.

The administration sent the bill to Congress the following month, but a variety of interests expressed opposition to the bill. Southern members of Congress, for example, were particularly concerned about federal control because it might set a precedent for challenging segregation. Eisenhower and his allies pushed hard for the bill and used the Cold War to sell the bill as a means of facilitating evacuation from cities in case of a nuclear attack. The bill passed the Senate but then stalled in the House where it died during the congressional session.

The administration reworked the bill and sent it to Congress again. The revised proposal created a Highway Trust Fund that would be funded and replenished with taxes primarily on gasoline, diesel oil, and tires. No federal appropriations would be used for interstate highways.

The bill passed both houses of Congress in May and June 1956, and the president triumphantly signed the bill into the law creating the National System of Interstate and Defense Highways on June 29.

The interstate highway system transformed the landscape of the United States in the postwar period. It linked the national economy, markets, and large cities together. It contributed to the growth of suburban America as commuters could now drive their cars to work in cities or consumers could drive to shopping malls. Tourists could travel expeditiously to vacations at distant beaches, national parks, and amusement parks like Disneyland. Cheap gas, despite the taxes to fund the highways, was critical to travel along the interstates.

The interstate highway system later became entwined in national debates over energy policy in the 1970s when OPEC embargoed oil to the United States. Critics said gas-guzzling cars should be replaced by more efficient cars or public transportation, that American love of cars contributed significantly to the degradation of the environment, and that America had reached an age of limits.

The creation of the interstate highway system was a marvel of American postwar prosperity and contributed to its unrivaled affluence. It also symbolized some of the challenges Americans faced. Both the success of  completing the grand public project and the ability to confront and solve new challenges represented the American spirit.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Dan Morenoff

You can count on one hand the number of Supreme Court decisions that ordinary people can identify by name and subject. Brown is one of them (and, arguably, both the most widely and most accurately known). Ask any lawyer what the most important judicial decision in American history is, and they will almost certainly tell you, with no hesitation, Brown v. Board of Education. It is the case that, for decades, Senators have asked every judicial nominee to explain why it is right.

Its place in the public mind is well-deserved, even if it should be adjusted to reflect more accurately its place in modern American history.

Backstory: From Reconstruction’s Promise to Enshrinement of Jim Crow in Plessy

Remember the pair of course reversals that followed the Civil War.

Between 1865 and 1876, Congress sought to make good the Union’s promises to the freedmen emancipated during the war. In the face of stiff, violent resistance by those who refused to accept the war’s verdict, America amended the Constitution three (3) times, with: (a) the Thirteenth Amendment banning slavery; (b) the Fourteenth Amendment: (i) affirmatively acting to create and bestow American citizenship on all those born here, (ii) barring states from “abridg[ing] the privileges or immunities of citizens of the United States[,]” and (iii) guaranteeing the equal protection of the laws; and (c) the Fifteenth Amendment barring states from denying American citizens the right to vote “on account of race, color, or previous condition of servitude.” Toward the same end, Congress passed the Civil Rights Acts of 1866 and 1875, the Enforcement Acts of 1870 and 1871, and the Ku Klux Klan Act. They created the Department of Justice to enforce these laws and supported President Grant in his usage of the military to prevent states from reconstituting slavery under another name.

Until 1876. To solve the constitutional crisis of a Presidential election with no clear winner, Congress (and President Hayes) effectively, if silently, agreed to end all that, abruptly. The federal government removed troops from former Confederate states and stopped trying to enforce federal law. And the states “redeemed” by the violent forces of retaliation amended their state constitutions and passed the myriad of laws creating the “Jim Crow” regime of American apartheid. Under Jim Crow, the races were separated, the public services available to an American came to differ radically depending on that American’s race, and the rights of disfavored races were severely curtailed. Most African Americans were disenfranchised, then disarmed, and then subjected to mob violence to incentivize compliance with the “redeemer” community’s wishes.

One could point to a number of crystallizing moments as the key point when the federal government made official that it and national law would do nothing to stop any of this. But the most commonly cited is the Supreme Court’s Plessy v. Ferguson decision, issued in 1896. It was a case arising out of New Orleans and its even-then-long-multi-hued business community. There, predictably, were companies and entrepreneurs that hated these laws interfering with their businesses and their ability to provide services to willing buyers on the (racially integrated) basis they preferred. A particularly hated law passed by the State of Louisiana compelled railroads (far and away the largest industry of the day) to separate customers into different cars on the basis of race. With admirable truth in advertising, the Citizens Committee to Test the Constitutionality of the Separate Car Law formed and went to work to rid New Orleans of this government micromanagement. Forgotten in the long sweep of history, the Committee (acting through the Pullman Company, one of America’s largest manufacturers at the time) actually won its first case at the Louisiana Supreme Court, which ruled that any state law requiring separate accommodations in interstate travel violated the U.S. Constitution (specifically, Article I’s grant of power to Congress alone to regulate interstate commerce). The Committee then sought to invalidate application of the same law to train travel within Louisiana as a violation of the Fourteenth Amendment. With coordination between the various actors involved, Homer Plessy (a man with seven “white” great-grandparents and one “black” great-grandparent) purchased and used a seat in the state-law-required “white” section of a train that the train company wanted to sell him; the participants then ensured a state official knew he was there, was informed of his racial composition, and would willingly arrest Mr. Plessy to create the test case the Committee wanted. It is known to us as Plessy v. Ferguson.[1] This time, though, things didn’t go as planned: the trial court ruled the statute enforceable and the Louisiana Supreme Court upheld its application to Mr. Plessy. The Supreme Court of the United States accepted the case, bringing the national spotlight onto this specific challenge to the constitutionality of the states’ racial-caste-enforcing laws. In 1896, over the noteworthy, highly-praised, sole dissent of Justice John Marshall Harlan, the Supreme Court agreed that, due to its language requiring “equal, but separate” accommodations for the races (and without ever really considering whether the accommodations provided actually were “equal”), the separate car statute was consistent with the U.S. Constitution; the Court added that the Fourteenth Amendment was not intended “to abolish distinctions based upon color, or to enforce social … equality … of the two races.”

For decades, the Plessy ruling was treated as the federal government’s seal of approval for the continuation of Jim Crow.

Killing Jim Crow

Throughout those decades, African Americans (and conscientious whites) continued to object to American law’s differing treatment of the races as profoundly unjust. And they had ample opportunities to note the intensity of the injustice. A sampling (neither comprehensive, nor fully indicative of the scope) would include: Woodrow Wilson’s segregation of the federal work force, the resurgence of lynchings following the 1915 rebirth of the Ku Klux Klan (itself an outgrowth of the popularity of Birth of a Nation, the intensely racist film that Woodrow Wilson made the first ever screened at the White House), and the spate of anti-black race riots surrounding America’s participation in World War I.

For the flavor of those riots, consider the fate of the African American community living in the Greenwood section of Tulsa, Oklahoma. By the spring of 1921, Greenwood’s professional class had done so well that the district became known as “Negro Wall Street” or “Black Wall Street.” On the evening of May 31, 1921, a mob gathered at the Tulsa jail and demanded that an African American man accused of attempting to assault a white woman be handed over to them. When African Americans, including World War I veterans, came to the jail in order to prevent a lynching, shots were fired and a riot began. Over the next 12 hours, at least three hundred African Americans were killed. In addition, 21 churches, 21 restaurants, 30 grocery stores, two movie theaters, a hospital, a bank, a post office, libraries, schools, law offices, a half dozen private airplanes, and a bus system were utterly destroyed. The Tulsa race riot (perhaps better styled a pogrom, given the active participation of the National Guard in these events) has been called “the single worst incident of racial violence in American history.”[2]

But that is far from the whole story of these years. What are today described as Historically Black Colleges and Universities graduated generations of students, who went on to live productive lives and better their communities (whether racially defined or not). They saw the rise of the Harlem Renaissance, where African American luminaries like Duke Ellington, Langston Hughes, and Zora Neale Hurston acquired followings across the larger population and, indeed, the world. The Negro Leagues demonstrated through the national pastime that the athletic (and business) skills of African Americans were equal to those of any others;[3] the leagues developed into some of the largest black-owned businesses in the country and developed fan-followings across America. Eventually, these years saw Jackie Robinson, one of the Negro Leagues’ brightest stars, sign a contract with the Brooklyn Dodgers in 1945 and “break the color barrier” in 1947 as the first black Major Leaguer since Cap Anson successfully pushed for their exclusion in the 1880s.[4] He would be: (a) named Major League Baseball’s Rookie of the Year in 1947; (b) voted the National League MVP in 1949; and (c) voted by fans as an All Star six (6) times (spanning each of the years from 1949-1954). Robinson also led the Dodgers to the World Series in four (4) of those six (6) years.

For the main plot of our story, though, the most important reaction to the violence of Tulsa (and elsewhere)[5] was the “newfound sense of determination” that “emerged” to confront it.[6] Setting aside the philosophical debate that raged across the African American community over the broader period on the best way to advance the prospects of those most impacted by these laws,[7] the National Association for the Advancement of Colored People (the “NAACP”) began to plan new strategies to defeat Jim Crow.[8] The initial architect of this challenge was Charles Hamilton Houston, who joined the NAACP and developed and implemented the framework of its legal strategy after graduating from Harvard Law School in 1922, the year following the Tulsa race riot.[9]

Between its founding in 1940, under the leadership of Houston disciple Thurgood Marshall,[10] and 1955, the NAACP Legal Defense and Education Fund brought a series of cases designed to undermine Plessy. Houston had believed from the outset that unequal education was the Achilles’ heel of Jim Crow, and the LDF targeted that weak spot.

The culmination of these cases came with a challenge to the segregated public schools operated by Topeka, Kansas. While schools were racially segregated in many places, the LDF specifically chose to bring its signature case against the Topeka Board of Education precisely because Kansas was not Southern, had no history of slavery, and institutionally praised John Brown;[11] the choice highlighted that the case’s issues were national, not regional, in scope.[12]

LDF, through Marshall and Greenberg, convinced the Supreme Court to reverse Plessy and declare Topeka’s school system unconstitutional. On May 17, 1954, Chief Justice Earl Warren handed down the unanimous opinion of the Court. Due to months of wrangling and negotiation of the final opinion, there were no dissents and no concurrences. With a single voice the Supreme Court proclaimed that:

…in the field of public education the doctrine of “separate but equal” has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.

These sweeping tones are why the decision holds the place it does in our collective imagination. They are why Brown is remembered as the end of legal segregation. They are why Brown is the most revered precedent in American jurisprudence.

One might have thought that they would mean an immediate end to all race-based public educational systems (and, indeed, to all segregation by law in American life). Indeed, as Justice Marshall told his biographer Dennis Hutchison in 1979, he thought just that: “the biggest mistake [I] made was assuming that once Jim Crow was deconstitutionalized, the whole structure would collapse – ‘like pounding a stake in Dracula’s heart[.]’”

But that was not to be. For the Court to get to unanimity, the Justices needed to avoid ruling on the remedy for the violation they could jointly agree to identify. So they asked the parties to return and reargue the question of what to do about it the following year. When they again addressed the Brown case, the Supreme Court reiterated its ruling on the merits from 1954, but as to what to do about it, ordered nothing more than that the states “make a prompt and reasonable start toward full compliance” and get around to “admit[ting children] to public schools on a racially nondiscriminatory basis with all deliberate speed.”

So the true place of Brown in the story of desegregation is best reflected in Justice Marshall’s words (again, to Dennis Hutchison in 1979): “…[i]n the twelve months between Brown I and Brown II, [I] realized that [I] had yet to win anything….  ‘In 1954, I was delirious. What a victory!  I thought I was the smartest lawyer in the entire world. In 1955, I was shattered.  They gave us nothing and then told us to work for it. I thought I was the dumbest Negro in the United States.’”

Of course, Justice Marshall was far from dumb, however he felt in 1955.  But actual integration didn’t come from Brown. That would have to wait for action by Congress, cajoling by a President, and the slow development of the cultural facts-on-the-ground arising from generations of white American children growing up wanting to be like, rooting for, and seeing the equal worth in men like Duke Ellington, Langston Hughes, Jackie Robinson, and Larry Doby.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] In the terminology of the day, Mr. Ferguson was a “Carpetbagger.”  A native of Massachusetts who had married into a prominent abolitionist family, Mr. Ferguson studied law in Boston before moving to New Orleans in 1865.  He was the same judge who, at the trial court level, had ruled that Louisiana’s separate cars act could not be constitutionally applied to interstate travel.  Since Plessy’s prosecution also was initially conducted in Mr. Ferguson’s courtroom, he became the named defendant, despite his own apparent feelings about the propriety of the law.

[2] All Deliberate Speed: Reflections on the First Half-Century of Brown v. Board of Education, by Charles J. Ogletree, Jr. W.W. Norton & Company (2004).

[3] In 1936, Jesse Owens did the same on an amateur basis at the Berlin Olympics.

[4] Larry Doby became the first black American League player ever weeks later (the AL had not existed in the 1880s).

[5] There were parallel riots in Omaha and Chicago in 1919.

[6] See, All Deliberate Speed, in Fn. 2, above.

[7] The author recommends delving into this debate. Worthy samples of contributions to it the reader might consider include: (a) Booker T. Washington’s 1895 Address to Atlanta’s Cotton States and International Exposition; and (b) W.E.B. Du Bois’s The Souls of Black Folk.

[8]  See, All Deliberate Speed, in Fn. 2, above.

[9] Houston was the first African American elected to the Harvard Law Review and has been called “the man who killed Jim Crow.”

[10] Later a Justice of the U.S. Supreme Court himself, Justice Marshall was instrumental in the NAACP’s choice of legal strategies. But LDF was not a one-man shop. Houston had personally recruited Marshall and Oliver Hill, the first- and second-ranked students in the Law School Class of 1933 at Howard University – itself a historically black institution founded during Reconstruction – to fight these legal battles. Later, Jack Greenberg was Marshall’s Assistant Counsel and hand-chosen successor to lead the LDF.

[11] The Kansas State Capitol, in Topeka, has featured John Brown as a founding hero since the 1930s.

[12] This was all the more true when the case was argued before the Supreme Court, because the Supreme Court had consolidated Brown for argument with other cases from across the nation.  Those cases were Briggs v. Elliot (from South Carolina), Davis v. County School Board of Prince Edward County (from Virginia), Belton (Bulah) v. Gebhart (from Delaware), and Bolling v. Sharpe (District of Columbia).

Juneteenth’s a celebration of Liberation Day
When word of emancipation reached Texas slaves they say.
In sorrow were we brought here to till a harvest land.
We lived and died and fought here
‘Til freedom was at hand.

They tore apart our families
They stole life’s nascent breath.
Turned women into mammies
And worked our men to death.

They shamed the very nation
Which fostered freedom’s birth
It died on the plantation
Denying man his worth.

But greed and misplaced honor
Brought crisis to a head
And Justice felt upon her
The weight of Union Dead.

They fought to save a nation.
And yet they saved its soul
From moral condemnation
And made the country whole.

But when the war was waning
And the battle was in doubt.
The soldiers were complaining
And many dropping out.

There seemed but one solution
Which might yet save the day.
Although its execution
Loomed several months away.

The Congress was divided.
The Cabinet as well.
Abe did his best to hide it.
And no one did he tell.

He meant to sign an order
To deal the South a blow.
The Mason Dixon border
And the Rebel states below

Would now have to contend with
The Freedman on their land.
For slavery had endeth
For woman, child and man.

The time 18 and 63
The first day of the year.
But June of 65 would be
The time we would hold dear.

For that would be when Freedom’s thought
First saw full light of day.
And justified why men had fought
And died along the way.

Now every June we celebrate
What Lincoln had in mind
The day he did emancipate
The bonds of all mankind.

Copyright, all rights reserved

Noah Griffin, America 250 Commissioner, is a lifelong student of history and is founder and artistic director of the Cole Porter Society.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: The Honorable Don Ritter

Little focused the public’s mind in the early 1950s like the atom bomb and the potential for vast death and destruction in the event of nuclear war with the Soviet Union. Who can forget the classroom drills where students dropped to the floor and hid under their desks ostensibly to reduce exposure to an exploding atomic bomb? It was a prevailing subject of discussion amongst average people as well as elites in government, media and the arts.

The Soviet Union had attained “the bomb” in 1949, four years after Hiroshima and Nagasaki. With the atom bomb at its disposal, the Soviet leadership was likely emboldened to accelerate its deeply felt ideological imperative to spread communism opportunistically. Getting an A-bomb gave the Soviets a rough military parity with the United States that greatly reduced the threat of nuclear retaliation against their superior land armies in the event of an East-West military confrontation. The blatant invasion of U.S.-backed South Korea by communist North Korea in 1950, carried out with total Soviet political and logistical commitment and, indeed, encouragement, was likely an outcome of the Soviets possessing the atomic bomb.

In January of 1950, British intelligence, acting on information provided by the FBI, arrested the German-born, British-educated atomic scientist and British citizen Klaus Fuchs, who was spying for the Soviet Union. Fuchs had worked at the very highest level at Los Alamos on the American project to develop an atom bomb and had been passing secrets to American Communist Party members who were also spying for the Soviet Union. He admitted his espionage and provided the names of his American collaborators at Los Alamos. Those connections led to the arrest of Julius Rosenberg in June of 1950 on suspicion of espionage and, two months later, of his wife Ethel on the same charge.

Julius Rosenberg, an electrical engineer, and his wife Ethel were dedicated members of the Communist Party USA and had been working for years for Soviet Military Intelligence (GRU), delivering secret American work on advanced weaponry such as radar detection, jet engines and guided missiles. In hindsight, that information probably exceeded the value of the atomic secrets given to the Soviet Union, although the consensus is that the Rosenbergs’ bomb-design information confirmed the direction of Soviet bomb development. Ethel Rosenberg’s brother, David Greenglass, was working at Los Alamos, and evidence brought to light over the years strongly suggests that Ethel was the one who recruited her brother to provide atom bomb design secrets to her husband, and that she worked hand-in-glove with him in his espionage activities.

The Rosenbergs, never admitting their crimes, were tried and convicted on the charge of “conspiracy to commit espionage” and sentenced to death. They professed their innocence until the very end; in June 1953, they were electrocuted at Sing Sing prison.

Politically, another narrative was unfolding. The political Left in the United States and worldwide strongly asserted the Rosenbergs’ innocence, reminiscent of its support for former State Department official Alger Hiss, who was tried in 1949 and convicted in 1950 of perjury rather than espionage, as the statute of limitations on espionage had expired. The world-renowned Marxist intellectual Jean-Paul Sartre called the Rosenberg trial a “legal lynching.” On execution day, several hundred demonstrators gathered outside Sing Sing to pay their last respects. For decades to follow, the Rosenbergs’ innocence remained a rallying cry of the political Left.

Leaders on the political and intellectual Left blamed anti-communist fervor drummed up by McCarthyism for the federal government’s pursuit of the Rosenbergs and others accused of spying for the Soviet Union. At the time, there was great sympathy on the Left with the ideals of communism and America’s former communist ally, the Soviet Union, which had experienced great loss in WW II in defeating hated Nazi fascism. They fervently believed the Rosenbergs’ plea of innocence.

When the Venona Project’s secret records of intercepted Soviet messages were made public in the mid-1990s, with unequivocal information pointing to the Rosenbergs’ guilt, the political Left’s fervor for the Rosenbergs greatly diminished. Likewise with material copied from the Soviet KGB archives (the Vassiliev Notebooks) in 2009. Still, some said, in effect, “OK, they did it, but the U.S. government’s Cold War mentality and McCarthyism were even greater threats” (e.g., the Nation magazine and the popular revisionist historian Howard Zinn).

Since then, the Left, and not only the Left, led by the Rosenbergs’ surviving sons, has focused on the unfairness of the sentence, particularly Ethel Rosenberg’s, arguing that she should not have received the death penalty. Federal prosecutors likely hoped that the threat of execution would get the accused to talk, implicate others and provide insights into Soviet espionage operations. It did not. The Rosenbergs became martyrs to the Left, and as martyrs they likely served the Soviet communist cause better than they would have by serving a prison sentence. Perhaps that was even their reason for professing innocence.

Debate continues to this day, but now it is over the severity of the sentence, as just about everyone agrees the Rosenbergs were spies for the Soviet Union. In today’s climate there would be no death sentence, but at the height of the Cold War…

However, there is absolutely no doubt that they betrayed America by spying for the Soviet Union at a time of great peril to America and the world.

Don Ritter is President and CEO Emeritus (after serving eight years in that role) of the Afghan American Chamber of Commerce (AACC) and a 15-year founding member of the Board of Directors. Since September 11, 2001, he has worked full time on Afghanistan and has been back to the country more than 40 times. He has a 38-year history in Afghanistan.

Ritter holds a B.S. in Metallurgical Engineering from Lehigh University and a Masters and Doctorate from MIT in physical-mechanical metallurgy. After MIT, where his hobby was Russian language and culture, he was a NAS-Soviet Academy of Sciences Exchange Fellow in the Soviet Union in the Brezhnev era for one year doing research at the Baikov Institute for Physical Metallurgy on high temperature materials. He speaks fluent Russian (and French), is a graduate of the Bronx High School of Science and recipient of numerous awards from scientific and technical societies and human rights organizations.

After returning from Russia in 1968, he spent a year teaching at California State Polytechnic University, Pomona, where he was also a contract consultant to General Dynamics in their solid-state physics department. He then returned, as a member of the faculty and administration, to his alma mater, Lehigh University. At Lehigh, in addition to his teaching, research and industry consulting, Dr. Ritter was instrumental in creating a university-wide program linking disciplines of science and engineering to the social sciences and humanities with the hope of furthering understanding of the role of technology in society.

After 10 years at Lehigh, Dr. Ritter represented Pennsylvania’s 15th district, the “Lehigh Valley,” from 1979 to 1993 in the U.S. House of Representatives, where he served on the Science and Technology and Energy and Commerce Committees. Ritter’s main mission as a ‘scientist congressman’ was to work closely with the science, engineering and related industry communities to bring a greater science-based perspective to the legislative, regulatory and political processes.

In Congress, as ranking member on the Congressional Helsinki Commission, he fought for liberty and human rights in the former Soviet Union. The Commission was Ritter’s platform to gather congressional support for the Afghan resistance to the Soviet invasion and occupation during the 1980s. Ritter was author of the “Material Assistance” legislation and founder and House-side Chairman of the “Congressional Task Force on Afghanistan.”

Dr. Ritter continued his effort in the 1990s after Congress as founder and Chairman of the Washington, DC-based Afghanistan Foundation. In 2003, as creator of a six-million-dollar USAID-funded initiative, he served as Senior Advisor to AACC in the creation of the first independent, free-market oriented Chamber of Commerce in the history of the country. Dr. Ritter is presently part of AACC’s seminal role in assisting the development of the Afghan market economy to bring stability and prosperity to Afghanistan. He is also a businessman and investor in Afghanistan.


Guest Essayist: The Honorable Don Ritter

World War II ended in 1945, but the ideological imperative of Soviet communist expansion did not. By 1950, the Soviet Union (USSR) had solidified its empire by conquest and subversion across all of Central and Eastern Europe. But to Stalin & Co., there were other big fish to fry. At the Yalta Conference in February 1945 between Stalin, Roosevelt and Churchill, the USSR was asked to participate in ending the war in the Pacific against Japan. Even though Japan’s defeat was not in doubt, the atom bomb would not be tested until July, and it was not yet known to American war planners whether it would work.

An invasion of Japan’s home islands was expected to mean huge American and allied casualties, perhaps half a million, a conclusion reached given the tenacity with which Japanese soldiers had defended islands like Iwo Jima and Okinawa. So much blood was yet to be spilled… they were fighting to the death. The Soviet Red Army, so often oblivious to casualties in its onslaught against Nazi Germany, would share in the burden of an invasion of Japan.

Japan had controlled Manchuria (as the puppet state of Manchukuo).  The Korean peninsula had been dominated by Japan historically and was formally annexed early in the 20th century. Islands taken from Czarist Russia in the Russo-Japanese War of 1905 were also in play.

Stalin and the communist USSR’s presence at the very end of the war in Asia was solidified at Yalta, and that is how the Soviets came to create a communist North Korea.

Fast forward to April of 1950: Kim Il Sung traveled to Moscow to discuss how communist North Korea might take South Korea and unify the peninsula under communist rule. South Korea, or the Republic of Korea (ROK), was dependent on the United States. The non-communist ROK was in the midst of the predictable chaos of establishing a democracy, an economy, and a new country. Its military was far from ready. Neither was that of the U.S.

Kim and Stalin concluded that the South was weak and ripe for adding a new realm to the communist world. Stalin gave Kim the go-ahead to invade and pledged full Soviet support. Vast quantities of supplies, artillery and tanks would be provided to the Army of North Korea for a full-fledged attack on the South. MiG-15 fighter aircraft, flown by Soviet pilots masquerading as Koreans, would be added. Close by was Communist China, for whose takeover the USSR had done yeoman service. That was one large insurance policy should things go wrong.

On June 25, 1950, a North Korean blitzkrieg came thundering down on South Korea. Closely spaced heavy artillery firing massive barrages, followed by tanks and troops, a tactic perfected in the Red Army’s battles with the Nazis, wreaked havoc on the overpowered South Korean forces. Communist partisans who had infiltrated the South joined the fray against the ROK. The situation was dire, and it looked as if the ROK would collapse.

President Harry Truman decided that an expansionist Soviet communist victory in Korea was not only unacceptable but would not stop there. He committed the U.S. to fight back, and fight back we did. In July of 1950, the Americans got support from the UN Security Council to form a UN Command (UNC) under U.S. leadership. As many as 70 countries would eventually get involved, but U.S. troops bore the brunt of the war, with British and Commonwealth troops a very distant second.

It is contested to this day why the USSR under Stalin was not there at the Security Council session to veto the engagement of the UN with the U.S. leading the charge. The Soviets had walked out in January and did not return until August. Was it a grand mistake, or did Stalin want to embroil America in a war in Asia so he could more easily deal with his new and possibly expanding empire in Europe? Were the Soviets so confident of a major victory in Korea, one that would embarrass the U.S. and signal to others that America, weakened by a defeat in Korea, would be unable to lead the non-communist world?

At a time when ROK and U.S. troops were reeling backwards, when the communist North had taken Seoul, the capital of the country, and much more, Supreme UN Commander General Douglas MacArthur had a plan for a surprise attack. He would strike at Inchon, a port twenty-five miles from Seoul, using the American 1st Marine Division as the spearhead of an amphibious operation landing troops, tanks and artillery. That put UNC troops north of the North Korean forces, in a position to sever the enemy’s supply lines and inflict severe damage on their armies. Seoul was retaken. The bold Inchon landing changed the course of the Korean War and put America back on offense.

MacArthur rapidly led the UNC all the way to the Yalu River bordering China, but when Communist China entered the war, everything changed. MacArthur had over-extended his own supply lines and apparently had not fully considered the potential for a military disaster if China entered the war. The Chinese People’s Liberation Army (PLA) counterattacked. MacArthur was sacked by Truman. There was a debate in the Truman administration over the use of nuclear weapons to counter the Chinese incursion.

Overwhelming numbers of Chinese forces, employing sophisticated tactics and a willingness to take huge casualties, pushed the mostly American troops back to the original dividing line between the north and south, the 38th parallel (38 degrees latitude)… which, ironically, after two more years of deadly stalemate, is where we and our South Korean allies stand today.

Looking back, airpower was our ace in the hole and a great equalizer given the disparity in ground troops. B-29 Superfortresses blasted targets in the north incessantly. Jet fighters like the legendary F-86 Sabre dominated the Soviet MiG-15s.  But if you discount nuclear weapons, wars are won by troops on the ground, and on the ground we ended up where we started.

33,000 Americans died in combat. Other UNC countries lost about 7,000; South Korea, 134,000; North Korea, 213,000. The Chinese lost an estimated 400,000 troops in combat! Civilian deaths, all told, came to 2.7 million, a staggering number.

The Korean War ended in 1953, when Dwight D. Eisenhower was the U.S. President. South Korea has evolved from a nation of rice paddies into a modern industrial power with strong democratic institutions and world-class living standards. North Korea, under communist dictatorship, is one of the poorest and most repressive nations on earth, yet it develops nuclear weapons. China, still a communist dictatorship but having adopted capitalist economic principles, has surged in its economic and military development to become a great power with the capacity to threaten the peace in Asia and beyond.

Communist expansion was halted by a hot war in Korea from 1950 to 1953, but the Cold War continued with no letup.

A question for the reader: What would the world be like if America and its allies had lost the war in Korea?



Guest Essayist: The Honorable Don Ritter

When Time magazine was in its heyday as the dominant ‘last word’ in American media, Whittaker Chambers was, over a ten-year period, its greatest writer and editor. He was a founding editor of National Review along with William F. Buckley. He posthumously received the Presidential Medal of Freedom from President Ronald Reagan in 1984. His memoir, Witness, is an American classic.

But all that was a vastly different world from his earlier life as a card-carrying member of the Communist Party in the 1920s and spy for Soviet Military Intelligence (GRU) in the 1930s.

We remember Chambers today for the nation’s focus on his damning testimony in the Alger Hiss congressional investigations and spy trials of 1948-50, and on a trove of documents called the Pumpkin Papers.

Alger Hiss came from wealth and was a member of the privileged class. He attended Harvard Law School and was upwardly mobile in the State Department, reaching high-ranking positions with access to extremely sensitive information. He was an organizer of the Yalta Conference between Stalin, Roosevelt and Churchill. He helped create the United Nations and, in 1949, was President of the prestigious Carnegie Endowment for International Peace.

In Congress in 1948, based on FBI information, a number of Americans were being investigated for spying for the Soviet Union, dating back to the early 1930s and during WW II, particularly in the United States Department of State. These were astonishing accusations at the time. When Elizabeth Bentley, an American spy for the Soviets, defected and accused Alger Hiss and a substantial group of U.S. government officials in the Administration of Franklin Roosevelt of spying for the Soviet Union, Hiss vehemently denied the charges. Handsome and sophisticated, Hiss was, for a lifetime, well-connected, well-respected and well-spoken. He made an extremely credible witness before the House Un-American Activities Committee (HUAC). Plus, most public figures in media, entertainment and academe came to his defense.

Whittaker Chambers, by then a successful editor at Time, reluctantly, and fearing retribution by the GRU, testified before HUAC under subpoena. He accused Hiss of secretly being a communist and of passing secret documents to him for transfer to Soviet intelligence. He testified that he and Hiss had been together on several occasions. Hiss denied it. Chambers was a product of humble beginnings and divorced parents; his brother had committed suicide at 22, and Chambers himself was accused of having psychological problems. All this was prelude to his adoption of the communist cause, which gave him “something to live for and something to die for.” His appearance, dress, voice and demeanor, no less his stinging message, were considered less than attractive. The comparison with the impression Hiss made was stark, and Chambers was demeaned and derided by Hiss’ supporters.

Then came the trial in 1949. During the pre-trial discovery period, Chambers eventually released large quantities of microfilm he had kept hidden as insurance against any GRU reprisal, including murder. Eliminating defectors was not uncommon in GRU practice then… and the practice unfortunately persists to this day.

A then little-known Member of Congress and member of HUAC, one Richard Nixon, gained access to the content of Chambers’ secret documents and adamantly pursued the case before the Grand Jury. Nixon at first refused to give the actual evidence to the Grand Jury but later relented. Two HUAC investigators went to Chambers’ farm in Westminster, Maryland, and from there, guided by Chambers, to his garden. There, in a capped and hollowed-out orange gourd (not a pumpkin!), were the famous “Pumpkin Papers.” The gourd contained hundreds of documents on microfilm, including four pages hand-written by Hiss, implicating him as a spy for the Soviet Union.

Hiss was tried and convicted of perjury as the statute of limitations on espionage by then had run out. He was sentenced to two five-year terms and ended up serving three and a half years total in federal prison.

Many on the political Left refused to believe that Alger Hiss was guilty, and to this day some still support him. However, the Venona papers, released by the U.S. National Security Agency in 1995, contained intelligence intercepts from the Soviet Union during Hiss’ time as a spy and showed conclusively that he was indeed a Soviet agent. The U.S. government at the highest levels had known all along that Hiss was a spy, but in order to keep the Venona Project secret and to keep gathering intelligence from the Soviet Union during the nuclear standoff of the Cold War, it could not divulge publicly what it knew.

Alger Hiss died at the ripe old age of 92, Whittaker Chambers at the relatively young age of 61. Many believe that stress from his life as a spy, and later the pervasive and abusive criticism he endured, weakened his heart and led to his early death.

The Hiss case is seminal in the history of the Cold War and its impact on America because it led to the taking of sides, politically, on the Left and on the Right: a surge of anti-communism on the Right and a reaction against anti-communism on the Left. At the epicenter of the saga is Whittaker Chambers.

Author’s Postscript:

To me, this is really the story of Whittaker Chambers, whose brilliance as a thinker and writer did more to unearth and define the destructive nature of communism than that of any other American of his time. His memoir, Witness, a best-seller published in 1952, is one of the most enlightening works of non-fiction one can read. It reflects a personal American journey through a dysfunctional family background and depressed economic times, when communism and Soviet espionage were ascendant, making the book both an educational experience and a page-turning thriller. In Witness, as a former Soviet spy who became disillusioned with communism’s murder and lies, Chambers intellectually and spiritually defined its tyranny and economic incompetence for Americans in a way that previously only those who had experienced it personally could understand. It gave millions of Americans vital insight into the terrible and insidious practices of communism.

Don Ritter, Sc.D., served in the United States House of Representatives for the 15th Congressional District of Pennsylvania. As founder of the Afghanistan-American Foundation, he was senior advisor to the Afghan-American Chamber of Commerce (AACC) and the Afghan International Chamber of Commerce (AICC). Congressman Ritter currently serves as president and CEO of the Afghan-American Chamber of Commerce. He holds a B.S. in Metallurgical Engineering from Lehigh University and an M.S. and Sc.D. (Doctorate) from the Massachusetts Institute of Technology (M.I.T.) in Physical Metallurgy and Materials Science. For more information about the work of Congressman Don Ritter, visit


Guest Essayist: The Honorable Don Ritter

It was a time when history hung in the balance. The outcome of a struggle between free and controlled peoples – democratic versus totalitarian rule – was at stake.

Here’s the grim picture in early 1948. Having fought for four years against the Nazis in history’s biggest and bloodiest battles, victorious Soviet communist armies have thrown the Germans back across all of Eastern and Central Europe, and millions of Soviet troops are either occupying their ‘liberated’ lands or have installed oppressive communist governments. Soviet army and civilian losses in WW II are unimaginable: soldiers killed number around 10 million, perhaps 20 million dead when civilians are included. Josef Stalin, the murderous Soviet communist dictator, is dead set on not giving up one inch.

Czechoslovakia has just succumbed to communist control in February under heavy Soviet pressure. Poland fell to the communists back in 1946 when Stalin, reneging on his promise of free elections made to American President Roosevelt and British Prime Minister Churchill at Yalta, instead installed a Soviet puppet government while systematically eradicating the Polish opposition. Churchill had delivered his public-awakening “Iron Curtain” speech two years earlier. The major Allies, America, Great Britain and France, are extremely worried about Stalin and the Red Army’s next moves.

Under agreements between the Soviet Union and the allies – the Americans, British and French – the country of Germany is divided into four zones, each controlled by one of the four countries. The Allies control the western half and the Soviet Union (USSR) the eastern. Berlin itself, once the proud capital of Germany, is now a wasteland of rubble, poverty and hunger after city-shattering house-to-house combat between Nazi and Soviet soldiers. Barely a building is left standing. Hardly any men are left in the city; they have either been killed in battle or taken prisoner by the Red Army. Berlin, a hundred miles inside the Soviet-controlled zone in eastern Germany, is likewise divided between the Allies and the USSR.

That’s the setting for what is to take place next in the pivotal June of 1948.

The Allies had for some time decided that a democratic, western-oriented Germany would be the best defense against further Soviet communist expansion westward. Germany, in a short period of time, had made substantial progress toward democratization and rebuilding. This unnerved Stalin, who all along had planned for a united Germany in the communist orbit, and the Soviets gradually increased pressure on transport in and out of Berlin.

The Allies announced on June 1 of 1948 the addition of the French zone to the already unified British and American zones. Then, on June 18, the Allies announced the creation of a common currency, the Deutschmark, to stimulate economic recovery across the three allied zones. Seeing in these actions the potential for a new, vital, non-communist Western Germany, Stalin and the Soviet leadership decided on June 24 to blockade Berlin’s rails, roads and canals to choke off what had become a western-nation-allied West Germany and West Berlin.

Stalin’s chess move was to starve the citizens of the city by cutting off their food supply, their electricity, and their coal to heat homes, power remaining factories and rebuild. His plan also was to make it difficult to resupply allied military forces. This was a bold move to grab West Berlin for the communists. Indeed, there were some Americans and others who felt that Germany, because of its crimes against humanity, should never again be allowed to be an industrial nation and that we shouldn’t stand up for Berlin. But that opinion did not hold sway with President Truman.

What Stalin and the Soviet communists didn’t count on was the creativity, ingenuity, perseverance and capacity of America and its allies.

Even though America had nuclear weapons at the time and the Soviet Union did not, America had pretty much demobilized after the war. So, rather than fight the Red Army, firmly dug in with vast quantities of men, artillery and tanks in eastern Germany, and risk another world war, the blockade would be countered by an airlift. The greatest airlift of all time. Food, supplies and coal would be transported to the people of Berlin, mainly on American C-54s flown by American, British, French and other allied pilots. But only America had the number of aircraft, the amount of fuel and the logistical resources to actually do what looked to Stalin and the Soviets to be impossible.

One can only imagine the enormity of the 24-7 activity. Nearly 300,000 flights were made from June 24, 1948, until September 30, 1949; at the height of the airlift, flights were landing every 30 seconds. It was a truly amazing logistical achievement to work up to delivering the roughly three and a half thousand tons a day needed to meet the city’s needs. Think of the energy and dedication of the pilots and mechanics, and of those managing the supply chains and the demanding delivery schedules… the sheer complexity of such an operation is mind-boggling.

Stalin, seeing the extent of Allied perseverance and capability over nearly a year, and meanwhile suffering an enormous propaganda defeat worldwide, relented.

Think of the Americans who led this history-making endeavor, all the men and women, from the generals to the soldiers, airmen and civilians, and their achievement on behalf of creating a free and prosperous Germany, one that stood side-by-side in stark contrast with the brutal communist East. To them, known as “the greatest generation,” we owe our everlasting gratitude for victory in this monumental first ‘battle’ of the Cold War.

Don Ritter, Sc.D., served in the United States House of Representatives for the 15th Congressional District of Pennsylvania. As founder of the Afghanistan-American Foundation, he was senior advisor to the Afghan-American Chamber of Commerce (AACC) and the Afghan International Chamber of Commerce (AICC). Congressman Ritter currently serves as president and CEO of the Afghan-American Chamber of Commerce. He holds a B.S. in Metallurgical Engineering from Lehigh University and an M.S. and Sc.D. in Physical Metallurgy and Materials Science from the Massachusetts Institute of Technology (M.I.T.). For more information about the work of Congressman Don Ritter, visit

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams

The fall of 1939 saw dramatic changes in world events that would alter the course of history. On September 1, 1939, Nazi Germany invaded Poland, triggering the start of World War II in Europe, though imperial Japan had been ravaging Manchuria and China for nearly a decade. Even though the United States was officially neutral in the world war, President Franklin Roosevelt had an important meeting in mid-October.

Roosevelt met with his friend, Alexander Sachs, who hand-delivered a letter from scientists Albert Einstein and Leo Szilard. They informed the president that German scientists had recently discovered nuclear fission and that Germany might be able to build an atomic bomb. The warning prompted Roosevelt to initiate American research into the subject in order to beat the Nazis.

The United States entered the war after Japan bombed Pearl Harbor on December 7, 1941, and the Roosevelt administration began the highly secretive Manhattan Project in October 1942. The project had facilities in far-flung places and employed the efforts of more than half a million Americans across the country. The weapons research laboratory resided in Los Alamos, New Mexico, under the direction of J. Robert Oppenheimer.

As work progressed on a nuclear weapon, the United States waged a global war in the Pacific, North Africa, and Europe. The Pacific witnessed a particularly brutal war against Japan. After the Battle of Midway in June 1942, the Americans launched an “island-hopping” campaign. They were forced to eliminate a tenacious and dug-in enemy in heavy jungles in distant places like Guadalcanal. The Japanese forces gained a reputation for suicidal banzai charges and fighting to the last man.

By late 1944, the United States was closing in on Japan and invaded the Philippines. The U.S. Navy won the Battle of Leyte Gulf, but the Japanese desperately launched kamikaze attacks that inflicted a heavy toll, sinking or damaging several ships and causing thousands of American casualties. The nature of the attacks helped confirm the American belief that they were fighting a fanatical enemy.

The battles of Iwo Jima and Okinawa greatly shaped American views of Japanese barbarism. Iwo Jima was a key island, providing airstrips both for the bombing of Japan and for the protection of U.S. naval assets as they built up for the invasion of Japan. On February 19, 1945, the Fourth and Fifth Marine Divisions landed mid-island after a massive preparatory bombardment. After a dreadful slog against an entrenched enemy, the Marines took Mt. Suribachi and famously raised an American flag on its heights.

The worst was yet to come against the nearly 22,000-man garrison in a complex network of tunnels. The brutal fighting was often hand-to-hand. The Americans fought for each yard of territory by using grenades, satchel charges, and flamethrowers to attack pillboxes. The Japanese fought fiercely and sent waves of hundreds of men in banzai charges. The Marines and Navy lost 7,000 dead and nearly one-third of the Marines who fought on the island were casualties. Almost all the defenders perished.

The battle for Okinawa was just as bloody. Two Marine and two Army divisions landed unopposed on Okinawa on April 1 after another relatively ineffective bombardment and quickly seized two airfields. The Japanese built nearly impregnable lines of defense, but none was stronger than the southern Shuri line of fortresses where 97,000 defenders awaited.

The Marines and soldiers attacked in several frontal assaults and were ground up by minefields, grenades, and pre-sighted machine guns and artillery covering every inch. For their part, the Japanese launched several fruitless counterattacks that bled them dry. The war of attrition finally ended with 13,000 Americans dead and 44,000 wounded. On the Japanese side, more than 70,000 soldiers and tens of thousands of Okinawan civilians were killed. The naval battle in the waters surrounding the island witnessed kamikaze and bombing attacks that sank 28 U.S. ships and damaged an additional 240.

Okinawa was an essential staging area for the invasion of Japan and additional proof of the fanatical nature of the enemy. Admiral Chester Nimitz, General Douglas MacArthur, and the members of the Joint Chiefs of Staff were planning Operation Downfall—the invasion of Japan—beginning with Operation Olympic in southern Japan in the fall of 1945 with fourteen divisions and twenty-eight aircraft carriers, followed by Operation Coronet in central Japan in early 1946.

While the U.S. naval blockade and aerial bombing of Japan were very successful in grinding down the enemy war machine, Japanese resistance promised to be even stronger and more fanatical than at Iwo Jima and Okinawa. The American planners expected to fight a horrific war against the Japanese forces, kamikaze attacks, and a militarized civilian population. Indeed, the Japanese reinforced Kyushu with thirteen divisions of 450,000 entrenched men by the end of July and had an estimated 10,000 aircraft at their disposal. Japan was committed to a decisive final battle defending its home. Among U.S. military commanders, only MacArthur underestimated the difficulty of the invasion, as he was wont to do.

Harry Truman succeeded to the presidency when Roosevelt died on April 12, 1945. Besides shouldering the burdens of command decisions in fighting the war to a conclusion, holding together a fracturing alliance with the Soviets, and shaping the postwar world, Truman now learned about the Manhattan Project.

While some of the scientists who worked on the project expressed grave concerns about the use of the atomic bomb, most decision-makers expected that it would be used if it were ready. Secretary of War Henry Stimson headed the Interim Committee that considered the use of the bomb. The committee rejected the idea of a demonstration or a formal warning to the Japanese, lest a failed demonstration strengthen their resolve.

On the morning of July 16, the “gadget” nuclear device was successfully exploded at Alamogordo, New Mexico. The test was code-named “Trinity,” and word was immediately sent to President Truman, then at the Potsdam Conference negotiating the postwar world. He was ecstatic and tried to use the news to impress Stalin, who received it impassively because he had several spies inside the Manhattan Project. The Allies issued the Potsdam Declaration demanding that Japan surrender unconditionally or face “prompt and utter destruction.”

After possible targets were selected, the B-29 bomber Enola Gay carried the uranium atomic bomb nicknamed Little Boy from Tinian Island and, on August 6, dropped it over Hiroshima, where the blast and resulting firestorm killed some 70,000 people and grievously injured and sickened tens of thousands of others. The Japanese government still adamantly refused to surrender.

On August 9, another B-29 dropped the plutonium bomb Fat Man over Nagasaki, a secondary target. Heavy cloud cover meant that the bomb fell in a valley that somewhat restricted the effect of the blast. Still, approximately 40,000 were killed. The dropping of the second atomic bomb and the nearly simultaneous invasion of Manchukuo by the Soviet Union led Emperor Hirohito to announce Japan’s surrender on August 15. The formal surrender took place on the USS Missouri on September 2.

General MacArthur closed the ceremony with a moving speech in which he said,

It is my earnest hope, and indeed the hope of all mankind, that from this solemn occasion a better world shall emerge out of the blood and carnage of the past—a world founded upon faith and understanding, a world dedicated to the dignity of man and the fulfillment of his most cherished wish for freedom, tolerance, and justice…. Let us pray that peace now be restored to the world, and that God will preserve it always. These proceedings are closed.

World War II had ended, but the Cold War and the atomic age had begun.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Joshua Schmid

‘A Date Which Will Live in Infamy’

The morning of December 7, 1941 was another day in paradise for the men and women of the U.S. armed forces stationed at the Pearl Harbor Naval Base on Oahu, Hawaii. By 7:30 am, the air temperature was already a balmy 73 degrees. A sense of leisure was in the air as sailors enjoyed the time away from military duties that Sundays offered. Within the next half hour, the serenity of the island was shattered. Enemy aircraft streaked overhead, marked only by a large red circle. The pilots—who had been training for months for this mission—scanned their surroundings and set their eyes on their target: Battleship Row. The eight ships—the crown of the United States’ Pacific fleet—sat silently in harbor, much to the delight of the oncoming Japanese pilots, who began their attack.

Since the Japanese invasion of Manchuria in 1931, the relationship between the United States and Japan had significantly deteriorated. Over the course of the ensuing decade, the U.S. imposed embargoes on strategic materials such as oil and froze Japanese assets to deter the Empire of the Rising Sun’s continuing aggression in the Pacific. For many in the American political and military leadership, the question became not whether violent conflict would erupt between the two nations, but when. Indeed, throughout the month of November 1941, the two military commanders at Pearl Harbor, Admiral Husband Kimmel and Lieutenant General Walter Short, received multiple warnings from Washington, D.C. that conflict with Japan somewhere in the Pacific would very soon be a reality. In response, Kimmel and Short ordered that aircraft be moved out of their hangars at Pearl Harbor and lined up on runways in order to prevent sabotage. Additionally, radar, a new technology that had not yet reached its full capabilities, began operating a few hours a day on the island of Oahu. Such a lackluster response to war warnings can largely be attributed to the fact that American intelligence expected the initial Japanese strike to fall on U.S. bases in the remote Pacific, such as the Philippines or Wake Island. The logistical feat required to carry out a large-scale attack on Pearl Harbor, nearly 4,000 miles from mainland Japan, seemed all but impossible.

Such beliefs, of course, were immediately drowned out by the wails of the air raid sirens and the repeated message, “Air raid Pearl Harbor. This is not a drill” on the morning of what turned out to be perhaps the most momentous day of the entire twentieth century. The Japanese strike force launched attacks from aircraft carriers in two waves. Torpedo and dive bombers attacked hangars and the ships anchored in the harbor while fighters provided strafing runs and air defense. In addition to the eight American battleships, a variety of cruisers, destroyers, and support ships were at Pearl Harbor.

A disaster quickly unfolded for the Americans. Many sailors had been granted leave that day given it was a Sunday. These men were not at their stations as the attack began, a fact that Japanese planners likely expected. Members of the American radar teams did in fact spot blips of a large array of aircraft before the attack. However, when they reported it to their superiors, they were told the blips were incoming American planes. The American aircraft that had been lined up in clusters on runways to prevent sabotage now made easy targets for the Japanese strike force. Of the 402 military aircraft at Pearl Harbor and the surrounding airfields, 188 were destroyed and 159 damaged. Only a few American pilots were able to take off; those who did bravely took on the overwhelming swarm of Japanese aircraft and successfully shot a few down. Ships in the harbor valiantly attempted to get under way despite being undermanned, but with little success. The battleship Nevada attempted to lumber her way out of the narrow confines, but after suffering multiple bomb hits, her captain deliberately ran her aground to avoid blocking the harbor channel. All eight of the battleships took some form of damage, and four were sunk. In the most infamous event of the entire attack, a bomb struck the forward magazine of the battleship Arizona, causing a massive explosion that ripped the ship apart. Of the nearly 2,500 Americans killed in the attack on Pearl Harbor, nearly half were sailors aboard the Arizona. In addition to the battleships, a number of cruisers, destroyers, and other ships were also sunk or severely damaged. In contrast, only 29 Japanese planes were shot down during the raid. The Japanese fleet immediately departed to conduct other missions against British, Dutch, and U.S. holdings in the Pacific, believing that it had achieved the great strike that would incapacitate American naval power in the Pacific for years to come.

On the morning of the attack at Pearl Harbor, the aircraft carrier U.S.S. Saratoga was in port at San Diego. The other two carriers in the Pacific fleet were also noticeably absent from Pearl Harbor when the bombs began to fall. Japanese planners thought little of it in the ensuing weeks; naval warfare theory at the time was fixated on the idea of battleships dueling each other at long range with giant guns. Without their battleships, how could the Americans hope to stop the Japanese from dominating the Pacific? Within six months, however, American carriers would win a huge victory at the Battle of Midway, helping turn the tide in the Pacific in favor of the Americans and making it a carrier war.

The victory at Midway boosted the morale of an American people already hard at work since December 7, 1941, mobilizing their entire society for war in one of the greatest human efforts in history. Of the eight battleships damaged at Pearl Harbor, all but the Arizona and Oklahoma were salvaged and returned to battle before the end of the war. In addition, the U.S. produced thousands of ships between 1941 and 1945 as part of a massive new navy. In the end, rather than striking a crushing blow, the Japanese task force had merely awakened a sleeping giant that eagerly sought to avenge its wounds. As for the men and women who fought and died on December 7, 1941—a date that President Franklin Roosevelt declared would “live in infamy”—they will forever be enshrined in the hearts and minds of Americans for their courage and honor on that fateful day.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.


Guest Essayist: Andrew Langer

In 1992, United States Supreme Court Justice Sandra Day O’Connor enunciated an axiomatic principle of constitutional governance, that the Constitution “protects us from our own best intentions,” dividing power precisely so that we might resist the temptation to concentrate that power as “the expedient solution to the crisis of the day.”[1] It is a sentiment that echoes through American history, as there has been a constant “push-pull” between the demands of the populace and the divisions and restrictions on power as laid out by the Constitution.

Before President Franklin Delano Roosevelt’s first term, the concept of a 100-day agenda simply didn’t exist. But since 1933, incoming administrations have been measured by that arbitrary standard: what they planned to accomplish in those first hundred days, and what they actually accomplished.

The problem, of course, is that public policy decision making should be not only a thorough and deliberative process, but also one that allows for significant public input in order to protect the rights of the public. Without that deliberation, without that public participation, significant mistakes can be made. This is why policy made in a crisis is almost always bad policy—and thus Justice O’Connor’s vital warning.

FDR came into office with America in the grip of its most significant crisis since the Civil War. Nearly three and a half years into an economic disaster, nearly a quarter of the workforce was out of work, banks and businesses were failing, and millions of Americans were completely devastated and looking for real answers.

The 1932 presidential election was driven by this crisis. Incumbent President Herbert Hoover was seen as a “do-nothing” president whose efforts at stabilizing the economy through tariffs and tax increases hadn’t stemmed the economic tide of the Great Depression. FDR had built a reputation for action as governor of New York, and on the campaign trail he proposed a series of ambitious plans that he called “The New Deal.” Significant portions of this New Deal were to be enacted during those first 100 days in office.

This set a standard that later presidents would be held to: what they wanted to accomplish during those first hundred days, and how those goals might compare to the goals laid out by FDR.

At the core of those enactments was the creation of three major federal programs: the Federal Deposit Insurance Corporation, the Civilian Conservation Corps, and the National Recovery Administration. Of these three, the FDIC remains in existence today, with its mission largely unchanged: to guarantee the deposit accounts of bank customers and, in doing so, ensure that banks aren’t forced to close because customers suddenly withdraw all their money and close their accounts.

This had happened with great frequency following the stock market crash of 1929—and such panicked activity was known, popularly, as a “bank run.”[2]

FDR was inaugurated on March 4, 1933. On March 6, he declared a national “bank holiday,” closing the entire American banking system. Three days later, on March 9, Congress passed the Emergency Banking Act, which stabilized the system and paved the way for the creation of the FDIC under the Banking Act of 1933 later that year. On Sunday, March 12, FDR gave the first of his “fireside chats,” assuring the nation that when the banks reopened the following day, the federal government would be protecting Americans’ money.

But there were massive questions over the constitutionality of many of FDR’s New Deal proposals, and many of them were challenged in federal court. At the same time, a number of states were attempting their own remedies for the nation’s economic morass, and in challenges to some of those policies, the Supreme Court upheld them, citing a new and vastly expanded view of government power in an emergency, with sweeping ramifications.

In the Blaisdell case[3], the Supreme Court upheld a Minnesota law that essentially suspended the ability of mortgage holders to collect mortgage payments or to pursue remedies when payments had not been made. The court held that, given the severe national emergency created by the Great Depression, government had vast and enormous power to deal with it.

But critics have understood the serious and long-standing ramifications of such decisions. Richard Epstein, NYU law professor and adjunct scholar at the libertarian-leaning Cato Institute, wrote that Blaisdell “trumpeted a false liberation from the constitutional text that has paved the way for massive government intervention that undermines the security of private transactions. Today the police power exception has come to eviscerate the contracts clause.”

In other words, when it comes to emergencies, in a conflict between the rights of private parties under the contracts clause and the claimed emergency powers of government, the power of government wins.

Interestingly enough, due to a series of New Deal programs that had been ruled unconstitutional by the Supreme Court, in 1937, FDR attempted to change the make-up of the court in what became known as the “court-packing scheme.” The proposal essentially called for remaking the balance of the court by appointing an additional justice (up to six additional) for every justice who was over the age of 70 years and 6 months.

Though the legislation languished in Congress, pressure was brought to bear on the Supreme Court, and Associate Justice Owen Roberts began casting votes in support of FDR’s New Deal programs, fundamentally shifting the direction of federal power toward concentration. That shift continued until the early 1990s, when the high court began issuing decisions (like New York v. United States) that limited the power of the federal government and the expansive interpretation of the commerce clause.

But it is the sweeping power of the federal government to act within a declared emergency, and the impact of the policies created within that crisis, that remain of continued concern. Much as the lack of deliberation during FDR’s first 100 days led to programs with sweeping and lasting impact on public life, and created huge unintended consequences, we are seeing those same mistakes played out today: the declaration of a public emergency, sweeping policies created without any real deliberation or public input, and massive and devastating consequences for businesses, jobs, and society in general.

If we are to learn anything from those first hundred days, it should be that we shouldn’t let a deliberative policy process be hijacked, and certainly not for political reasons. Moreover, when policies are enacted without deliberation, we should be prepared for the potential consequences of that policy… and adjust those policies accordingly when new information presents itself (and when the particular crisis has passed). Justice O’Connor was correct: the Constitution does protect us from our own best intentions.

We should rely on it, especially when we are in a crisis.

Andrew Langer is President of the Institute for Liberty.  He teaches in the Public Policy program at the College of William and Mary.


[1] New York v. US, 505 US 144 (1992)

[2] Bank runs were so ingrained in the national mindset that Frank Capra dramatized one in his famous film, It’s A Wonderful Life. In it, the Bedford Falls bank is the victim of a run and is “saved” by the film’s antagonist, Mr. Potter. Potter offers to “guarantee” the Bailey Building and Loan, but, knowing it would give Potter control, the film’s hero, George Bailey, uses his own money to keep his firm intact.

[3] Home Building and Loan Association v Blaisdell, 290 US 398 (1934)

Guest Essayist: John Steele Gordon

Wall Street, because it tries to discern future values, is usually a leading indicator. It began to recover, for instance, from the financial debacle of 2008 in March of the next year. But the economy didn’t begin to grow again until June of 2009.

But sometimes Wall Street separates from the underlying economy and loses touch with economic reality. That is what happened in 1929 and brought about history’s most famous stock market crash.

The 1920s were a prosperous time for most areas of the American economy, and Wall Street reflected that expansion. But rural America was not prospering. In 1900, one-third of all American cropland had been given over to fodder crops to feed the country’s vast herd of horses and mules. But by 1930, horses had largely given way to automobiles and trucks, while the mules had been replaced by the tractor.

As more and more agricultural land was turned over to growing crops for human consumption, food prices fell and rural areas began to fall into depression. Rural banks began failing at the rate of about 500 a year.

Because the major news media, then as now, was highly concentrated in the big cities, this economic problem went largely unnoticed. Indeed, while the overall economy rose 59 percent in the 1920s, the Dow Jones Industrial Average increased 400 percent.

In the fall of 1928, the Federal Reserve raised the discount rate from 3.5 percent to 5 percent and began to reduce the growth of the money supply, in hopes of getting the stock market to cool off.

But by then, Wall Street was in a speculative bubble. Fueling that bubble was a very misguided policy by the Fed. It allowed member banks to borrow at the discount window at five percent. The banks in turn, loaned the money to brokerage houses at 12 percent. The brokers then loaned the money to speculators at 20 percent. The Fed tried to use “moral suasion” to get the banks to stop borrowing in this way. But if a bank can make 7 percent on someone else’s money, it is going to do so. The Fed should have just closed the window for those sorts of loans, but didn’t.

By Labor Day, 1929, the American economy was in a recession but Wall Street still had not noticed. On the day after Labor Day, the Dow hit a new all-time high at 381.17. It would not see that number again for 25 years. Two days later the market began to wake up.

A stock market analyst of no great note, Roger Babson, gave a talk that day in Wellesley, Massachusetts, and said that, “I repeat what I said at this time last year and the year before, that sooner or later a crash is coming.” When news of this altogether unremarkable prophecy crossed the broad tape at 2:00 that afternoon, all hell broke loose. Prices plunged (US Steel fell 9 points, AT&T 6) and volume in the last two hours of trading was a fantastic two million shares.

Remembered as the Babson Break, it was like a slap across the face of an hysteric, and the mood on the Street went almost in an instant from “The sky’s the limit” to “Every man for himself.”

For the next six weeks, the market trended downwards, with some plunges followed by weak recoveries. Then on Wednesday, October 23rd, selling swamped the market on the second-highest volume on record. The next morning there was a mountain of sell orders in brokerage offices across the country, and prices plunged across the board. This set off a wave of margin calls, further depressing prices, while short sellers put even more pressure on prices.

A group of the Street’s most important bankers met at J. P. Morgan and Company, across Broad Street from the exchange.

Together they raised $20 million to support the market and entrusted it to Richard Whitney, the acting president of the NYSE.

At 1:30, Whitney strode onto the floor and asked the price of US Steel. He was told that it had last traded at 205 but that it had fallen several points since, with no takers.

“I bid 205 for 10,000 Steel,” Whitney shouted. He then went to other posts, buying large blocks of blue chips. The market steadied as shorts closed their positions and some stocks even ended up for the day. But the volume had been an utterly unprecedented 13 million shares.

The rally continued on Friday but there was modest profit taking at the Saturday morning session. Then, on Monday, October 28th, selling resumed as rumors floated around that some major speculators had committed suicide and that new bear pools were being formed.

On Tuesday, October 29th, remembered thereafter as Black Tuesday, there was no stopping the collapse in prices. Volume reached 16 million shares, a record that would stand for nearly 40 years, and the tape ran four hours late. The Dow was down a staggering 12 percent on the day and nearly 40 percent below its September high.

Prices trended downwards for more than another month, but by the spring of 1930 the market, badly oversold by December, had recovered about 45 percent of its autumn losses. Many thought the recession was over. But then the federal government and the Federal Reserve began making a series of disastrous policy blunders that would turn an ordinary recession into the Great Depression.

John Steele Gordon was born in New York City in 1944 into a family long associated with the city and its financial community. Both his grandfathers held seats on the New York Stock Exchange. He was educated at Millbrook School and Vanderbilt University, graduating with a B.A. in history in 1966.

After college he worked as a production editor for Harper & Row (now HarperCollins) for six years before leaving to travel, driving a Land-Rover from New York to Tierra del Fuego, a nine-month journey of 39,000 miles. This resulted in his first book, Overlanding. Altogether he has driven through forty-seven countries on five continents.

After returning to New York he served on the staffs of Congressmen Herman Badillo and Robert Garcia. He has been a full-time writer for the last twenty years. His second book, The Scarlet Woman of Wall Street, a history of Wall Street in the 1860’s, was published in 1988. His third book, Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt, was published in 1997. The Great Game: The Emergence of Wall Street as a World Power, 1653-2000, was published by Scribner, a Simon and Schuster imprint, in November, 1999. A two-hour special based on The Great Game aired on CNBC on April 24th, 2000. His latest book, a collection of his columns from American Heritage magazine, entitled The Business of America, was published in July, 2001, by Walker. His history of the laying of the Atlantic Cable, A Thread Across the Ocean, was published in June, 2002. His next book, to be published by HarperCollins, is a history of the American economy.

He specializes in business and financial history. He has had articles published in, among others, Forbes, Forbes ASAP, Worth, the New York Times and The Wall Street Journal Op-Ed pages, the Washington Post’s Book World and Outlook. He is a contributing editor at American Heritage, where he has written the “Business of America” column since 1989.

In 1991 he traveled to Europe, Africa, North and South America, and Japan with the photographer Bruce Davidson for Schlumberger, Ltd., to create a photo essay called “Schlumberger People,” for the company’s annual report.

In 1992 he was the co-writer, with Timothy C. Forbes and Steve Forbes, of Happily Ever After?, a video produced by Forbes in honor of the seventy-fifth anniversary of the magazine.

He is a frequent commentator on Marketplace, the daily Public Radio business-news program heard on more than two hundred stations throughout the country. He has appeared on numerous other radio and television shows, including New York: A Documentary Film by Ric Burns, Business Center and Squawk Box on CNBC, and The News Hour with Jim Lehrer on PBS. He was a guest in 2001 on a live, two-hour edition of Booknotes with Brian Lamb on C-SPAN.

Mr. Gordon lives in North Salem, New York.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: James C. Clinger

A scientist and inventor who admired Bell, Edison, and Einstein’s theories, Philo T. Farnsworth designed the first all-electronic television based on an idea he sketched in a high school chemistry class. He studied earlier experiments and learned that some success had been achieved in transmitting and projecting images. While plowing fields, Farnsworth realized that television could work as a system of horizontal lines: an image could be broken up line by line and reassembled electronically into a solid picture. Despite attempts by competitors to impede his original inventions, Farnsworth presented his television to reporters in Hollywood in 1928, launching him into further successes that would revolutionize moving pictures.

On September 3, 1928, Philo Farnsworth, a twenty-two-year-old inventor with virtually no formal credentials as a scientist, demonstrated his wholly electronic television system to reporters in California. A few years later, a much improved television system was demonstrated to larger crowds of onlookers at the Franklin Institute in Philadelphia, proving to the world that this new medium could broadcast news, entertainment, and educational content across the nation.

Farnsworth had come far from his boyhood roots in northern Utah and southern Idaho. He was born in a log cabin lacking indoor plumbing or electrical power. His family moved to a farm outside of Rigby, Idaho, when Farnsworth was a young boy. For the first time, Farnsworth could examine electrical appliances and electric generators in action. He quickly learned to take electrical gadgets apart and put them back together again, often making adaptations to improve their function. He also watched each time the family’s generator was repaired. Soon, still a young boy, he could do those repairs himself. Farnsworth was a voracious reader of science books and magazines, but also devoured what is now termed science fiction, although that term was not in use during his youth. He became a skilled violinist, possibly because of the example of his idol, Albert Einstein, who also played the instrument.[1]

Farnsworth excelled in his classes in school, particularly in mathematics and other sciences, but he did present his teachers and school administrators with a bit of a problem when he repeatedly appealed to take classes intended for much older students. According to school rules, only high school juniors and seniors were supposed to enroll in the advanced classes, but Farnsworth was determined to find courses that would challenge him intellectually. The school resisted his entreaties, but one chemistry teacher, Justin Tolman, agreed to tutor Philo and give him extra assignments both before and after school.

One day, Farnsworth presented Tolman with a visual demonstration of an idea that he had for transmitting visual images across space. He later claimed that he had come up with the basic idea for this process one year earlier, when he was only fourteen. As he was plowing a field on his family farm, Philo had seen a series of straight rows of plowed ground. Farnsworth thought it might be possible to represent visual images by breaking up the entire image into a sequence of lines of various shades of light and dark. The images could be projected electronically and re-assembled as pictures made up of a collection of lines, placed one on top of another. Farnsworth believed that this could be accomplished based on his understanding of Einstein’s path-breaking work on the “photoelectric effect,” which had shown that a particle of light, called a photon, striking a metal plate would displace electrons, with some residual energy transferred to a free-roaming negative charge, called a photoelectron.[2] Farnsworth had developed a conceptual model of a device that he called an “image dissector” that could break images apart and transmit them for reassembly at a receiver. He had no means of creating this device with the resources he had at hand, but he did develop a model representation of the device, along with mathematical equations to convey the causal mechanisms. He presented all of this on the blackboard of a classroom in the high school in Rigby. Tolman was stunned by the intellectual prowess of the fifteen-year-old standing in front of him. He thought Farnsworth’s model might actually work, and he copied down some of the drawings from the blackboard onto a piece of paper, which he kept for years.[3] It is fortunate for Farnsworth that Tolman held on to that paper.

Farnsworth was accepted into the United States Naval Academy but very soon was granted an honorable discharge under a provision permitting new midshipmen to leave the academy and the service to care for their families after the death of a parent. Farnsworth’s father had died the previous year, and Farnsworth returned to Utah, where his family had relocated after the sale of the farm. Farnsworth enrolled at Brigham Young University but worked at various jobs to support himself, his mother, and his younger siblings. As he had in high school, Farnsworth asked to be allowed to register in advanced classes rather than take only freshman-level course work. He quickly earned a technical certificate but no baccalaureate degree. While in Utah, Farnsworth met, courted, and eventually married his wife, “Pem,” who would later help in his lab creating and building instruments. One of her brothers would also provide lab assistance. One of Farnsworth’s jobs during his time in Utah was with the local Community Chest. There he met George Everson and Leslie Gorrell, regional Community Chest administrators who were experienced in fundraising. Farnsworth explained his idea about electronic television to them, something he had never before done with anyone except his father, now deceased, and his high school teacher, Justin Tolman. Everson and Gorrell were impressed with Farnsworth’s idea, although they barely understood most of the science behind it. They invited Farnsworth to travel with them to California to discuss his research with scientists from the California Institute of Technology (Cal Tech). Farnsworth agreed to do so, and made the trek to Los Angeles to meet first with scientists and then with bankers to solicit funds to support his research. When discussing his proposed electronic television model, Farnsworth was transformed from a shy, socially awkward, somewhat tongue-tied young man into a confident and articulate advocate of his project. He was able to explain the broad outline of his research program in terms that lay people could understand. He convinced Gorrell and Everson to put up some money and a few years later got several thousand more dollars from a California bank.[4]

Philo and Pem Farnsworth relocated first to Los Angeles and then to San Francisco to establish a laboratory. Farnsworth believed that his work would progress more quickly if he were close to a number of other working scientists and technical experts at Cal Tech and other universities. Farnsworth also wanted to be near those in the motion picture industry who had technical expertise. With a little start-up capital, Farnsworth and a few other backers incorporated their business, although Farnsworth did not create a publicly traded corporation until several years later. At the age of twenty-one, in 1927, Farnsworth filed the first two of his many patent applications. Those two patents were approved by the patent office in 1930. By the end of his life he had three hundred patents, most of which dealt with television or radio components. As of 1938, three-fourths of all patents dealing with television were held by Farnsworth.[5]

When Farnsworth began his work in California, he and his wife and brother-in-law had to create many of the basic components for his television system. There was very little they could buy off the shelf and simply assemble into the device Farnsworth had in mind, so much of their time was devoted to soldering wires and creating vacuum tubes, as well as testing materials to determine which performed best. After a while, Farnsworth hired some assistants, many of them graduate students at Cal Tech or Stanford. One of his assistants, Russell Varian, would later make a name for himself as a physicist in his own right and would become one of the founders of Silicon Valley. Farnsworth’s lab also had many visitors, including Hollywood celebrities such as Douglas Fairbanks and Mary Pickford, as well as a number of scientists and engineers. One visitor was Vladimir Zworykin, a Russian émigré with a PhD in electrical engineering who worked for Westinghouse Corporation. Farnsworth showed Zworykin not only his lab but also examples of most of his key innovations, including his image dissector. Zworykin expressed admiration for the devices he observed, and said that he wished he had invented the dissector. What Farnsworth did not know was that a few weeks earlier, Zworykin had been hired away from Westinghouse by David Sarnoff, then the managing director and later the president of the Radio Corporation of America (RCA). Sarnoff grilled Zworykin about what he had learned from his trip to Farnsworth’s lab and immediately set him to work on television research. RCA was already a leading manufacturer of radio sets and would soon become the creator of the National Broadcasting Company (NBC). Years later, after government antitrust regulators forced RCA to divest itself of one of its two radio networks, the divested network became the American Broadcasting Company (ABC).[6] RCA and Farnsworth would remain competitors and antagonists for the rest of Farnsworth’s career.

In 1931, Philco, a major radio manufacturer and electronics corporation, entered into a deal with Farnsworth to support his research. The company was not buying out Farnsworth’s company, but was purchasing non-exclusive licenses for Farnsworth’s patents. Farnsworth then moved with his family and some of his research staff to Philadelphia. Ironically, RCA’s television lab was located in Camden, New Jersey, just a few miles away. On many occasions, Farnsworth and RCA could each receive the experimental television broadcasts transmitted from the rival’s lab. Farnsworth and his team were working at a feverish pace to improve their inventions and make them commercially feasible. The Federal Radio Commission, later known as the Federal Communications Commission, classified television as merely an experimental communications technology, rather than one that was commercially viable and subject to license. The commission wished to create standards for picture resolution and frequency bandwidth. Many radio stations objected to television licensing because they believed that television signals would crowd out the bandwidth available for their broadcasts. Farnsworth developed the capacity to transmit television signals over a narrower bandwidth than any competing system.

Personal tragedy struck the Farnsworth family in 1932 when Philo and Pem’s young son, Kenny, still a toddler, died of a throat infection, an ailment that today could easily have been treated with antibiotics. The Farnsworths decided to have the child buried back in Utah, but Philco refused to allow Philo time off to go west to bury his son. Pem made the trip alone, causing a rift between the couple that would take months to heal. Farnsworth was struggling to perfect his inventions, while at the same time RCA devoted an entire team to television research and engaged in a public relations campaign to convince industry leaders and the public that it had the only viable television system. At this time, Farnsworth’s health was declining. He was diagnosed with ulcers and he began to drink heavily, even though Prohibition had not yet been repealed. He finally decided to sever his relationship with Philco and set up his own lab in suburban Philadelphia. He soon also took the dramatic step of filing a patent infringement complaint against RCA in 1934.[7]

Farnsworth and his friend and patent attorney, Donald Lippincott, presented their argument before the patent examination board that Farnsworth was the original inventor of what was now known as electronic television and that Sarnoff and RCA had infringed on patents approved in 1930. Zworykin had some important patents prior to that time but had not patented the essential inventions necessary to create an electronic television system. RCA went on the offensive by claiming that it was absurd to believe that a young man in his early twenties with no more than one year of college could create something that well-educated scientists had failed to invent. Lippincott responded with evidence of the Zworykin visit to the Farnsworth lab in San Francisco. After leaving Farnsworth, Zworykin had returned first to the labs at Westinghouse and had duplicates of Farnsworth’s tubes constructed on the spot. Then researchers were sent to Washington to make copies of Farnsworth’s patent applications and exhibits. Lippincott also was able to produce Justin Tolman, Philo’s old high school teacher, by then retired, who appeared before the examination board to testify that the basic idea of the patent had been developed when Farnsworth was a teenager. When queried, Tolman produced a yellowed piece of notebook paper with a diagram that he had copied off the blackboard in 1922. Although the document was undated, the written document, in addition to Tolman’s oral testimony, may have convinced the board that Farnsworth’s eventual patent was for a novel invention.[8]

The examining board took several months to render a decision. In July of 1935, the examiner of interferences from the U.S. Patent Office mailed a forty-eight-page document to the parties involved. After acknowledging the significance of inventions by Zworykin, the patent office declared that those inventions were not equivalent to what was understood to be electronic television. Farnsworth’s claims had priority. The decision was appealed in 1936, but the result remained unchanged. Beginning in 1939, RCA began paying royalties to Farnsworth.

Farnsworth and his family, friends, and co-workers were ecstatic when the patent infringement case was decided. For the first time, Farnsworth was receiving the credit and the promise of the money that he thought he was due. However, the price he had already paid was very high. Farnsworth’s physical and emotional health was declining. He was perpetually nervous and exhausted. As unbelievable as it may sound today, one doctor advised him to take up smoking to calm his nerves. He continued to drink heavily and his weight dropped. His company was re-organized as the Farnsworth Television & Radio Corporation and had its initial public offering of stock in 1939. Whether out of necessity or personal choice, Farnsworth’s role in running his lab and his company diminished.

While vacationing in rural northern Maine in 1938, the Farnsworth family came across a plot of land that reminded Philo of his home and farm outside of Rigby. Farnsworth bought the property, rebuilt an old house, constructed a dam on a small creek, and erected a building that could house a small laboratory. He spent most of the next few years on the property. Even though RCA had lost several patent cases to Farnsworth, the company was still engaging in public demonstrations of television broadcasts in which it claimed that David Sarnoff was the founder of television and that Vladimir Zworykin was its sole inventor. The most significant of these demonstrations was at the World’s Fair at Flushing Meadows, New York. Many reporters accepted the propaganda distributed at that event and wrote glowing stories of the supposedly new invention. Only a few years before, Farnsworth had demonstrated his inventions at the Franklin Institute, but the World’s Fair was a much bigger venue with a wider media audience. In 1949, NBC aired a special broadcast celebrating the 25th anniversary of the creation of television by RCA, Sarnoff, and Zworykin. No mention was made of Farnsworth at all.[9]

The FCC approved television as a commercial broadcast enterprise, subject to licensure, in 1939. The commission also set standards for broadcast frequency and picture quality. However, the timing to start off a major commercial venture for the sale of a discretionary consumer product was far from ideal. In fact, the timing of Farnsworth’s milestone accomplishments left much to be desired. His first patents were approved shortly after the nation entered the Great Depression. His inventions created an industry that was already subject to stringent government regulation focused on a related but potentially rival technology: radio. Once television was ready for mass marketing, the nation was poised to enter World War II. During the war, production of televisions and many other consumer products ceased and resources were devoted to war-related materiel. Farnsworth’s company and RCA both produced radar and other electronics equipment. Farnsworth’s company also produced wooden ammunition boxes. Farnsworth allowed the military to enjoy free use of his patents for radar tubes.[10]

Farnsworth enjoyed royalties from his patents for the rest of his life. However, his two most important patents were his earliest inventions. Those patents were approved in 1930 for a duration of seventeen years, and in 1947 they entered the public domain. It was really only in the late 1940s and 1950s that television exploded as a popular consumer good, but by that time Farnsworth could receive no royalties for his initial inventions. Other, less fundamental components that he had patented did provide him with some royalty income. Before the war, Farnsworth’s company had purchased the Capehart Company of Fort Wayne, Indiana, and eventually closed down its Philadelphia-area facility and moved its operations entirely to Indiana. A devastating wildfire swept through the countryside in rural Maine, burning down the buildings on Farnsworth’s property only days before his property insurance policy took effect. Farnsworth’s company fell upon hard times as well, and eventually was sold to International Telephone and Telegraph. Farnsworth’s health never completely recovered, and he took a disability retirement pension at the age of sixty and returned to Utah. In his last few years, Farnsworth devoted little time to television research, but did develop devices related to nuclear fusion, which he hoped to use to produce abundant electrical power for the whole world to enjoy. His fusion device has not proved viable as an electric power generator, but it has proved useful for neutron production and the creation of medical isotopes.

Farnsworth died in 1971 at the age of sixty-four. At the time of his death, he was not well-known outside of scientific circles. His hopes and dreams of television as a cultural and educational beacon to the whole world had not been realized, but he did find some value in at least some of what he could see on the screen. About two years before he died, Philo and Pem along with millions of other people around the world saw Neil Armstrong set foot on the moon. At that moment, Philo turned to his wife and said that he believed that all of his work was worthwhile.

Farnsworth’s accomplishments demonstrated that a more or less lone inventor, with the help of a few friends, family members, and paid staff, could create significant and useful inventions that made a mark on the world.[11] In the long run, corporate product development by rivals such as RCA surpassed what he could do to make his brainchild marketable. Farnsworth had neither the means nor the inclination to compete with major corporations in all respects. But he did wish to have at least some recognition and some financial reward for his efforts. Unfortunately, circumstances often wiped out what gains he received. Farnsworth also demonstrated that individuals lacking paper credentials can accomplish significant achievements. With relatively little schooling and precious little experience, Farnsworth developed devices that older and better-educated competitors could not. Sadly, Farnsworth’s experiences display the role of seemingly chance events in curbing personal success. Had he developed his inventions a bit earlier or later, avoiding most of the Depression and the Second World War, he might have gained much greater fame and fortune. None of us, of course, chooses the time into which we are born.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1]  Schwartz, Evan I. 2002. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of Television. New York: HarperCollins.


[3] Schwartz, op. cit.

[4] Schwartz, op. cit.

[5] Jewkes, J. “Monopoly and Economic Progress.” Economica, New Series, 20, no. 79 (1953): 197-214.

[6] Schwartz, op. cit.

[7] Schwartz, op. cit.

[8] Schwartz, op. cit.

[9] Schwartz, op. cit.

[10] Schwartz, op. cit.

[11] Lemley, Mark A. 2012. “The Myth of the Sole Inventor.” Michigan Law Review 110 (5): 709–60.


Guest Essayist: Tony Williams

Americans have long held the belief that they are exceptional and have a providential destiny to be a “city upon a hill,” a beacon of democracy for the world.

Unlike the French revolutionaries, who believed they were bound to destroy monarchy and feudalism everywhere, the American revolutionaries laid down the principle of serving as an example for the world rather than imposing their beliefs on other countries.

In 1821, Secretary of State John Quincy Adams probably expressed this idea best during a Fourth of July address when he asserted the principle of American foreign policy that:

Wherever the standard of freedom and independence has been or shall be unfurled, there will her heart, her benedictions and her prayers be. But she goes not abroad in search of monsters to destroy. She is the well-wisher to the freedom and independence of all. She is the champion and vindicator only of her own.

While the Spanish-American War raised a debate over the nature of American expansionism and foundational principles, the reversal of the course of American diplomatic history found its fullest expression in the progressive presidency of Woodrow Wilson.

Progressives such as President Wilson embraced the idea that a more perfect world could be achieved through the spread of democracy and the adoption of a broader international outlook, in place of narrow national interests, as the basis for world peace. As president, Wilson believed that America had a responsibility to spread democracy around the world by destroying monarchy and enlightening people in self-government.

When World War I broke out in August 1914 after the assassination of Austrian Archduke Franz Ferdinand, Wilson declared American neutrality and asked a diverse nation of immigrants to be “impartial in thought as well as in action.”

American neutrality was tested in many different ways. Many first-generation American immigrants still had strong attachments and feelings toward their nations of origin. Americans also sent arms and loans to the Allies (primarily Great Britain, France, and Russia), which undermined claims of U.S. neutrality. After a German U-boat (submarine) sank the liner Lusitania in May 1915, killing nearly 1,200 people, including 128 Americans, Secretary of State William Jennings Bryan resigned because he thought the U.S. should protest the British blockade of Germany as strongly as it protested German actions in the Atlantic.

Throughout 1915 and 1916, German U-boats sank several more American vessels, though Germany apologized and promised no more incidents against merchant vessels of neutrals. By late 1916, however, more than two years of trench warfare and stalemate on the Western Front had led to millions of deaths, and the belligerents sought ways to break the stalemate.

On February 1, 1917, the German high command decided to launch a policy of unrestricted U-boat warfare in which all shipping was subject to attack. The hope was to knock Great Britain out of the war and attain victory before the United States could enter the war and make a difference.

At the same time, Germany sent a secret diplomatic message to Mexico offering the return of territory in Texas, New Mexico, and Arizona in exchange for entering the war against the United States. British intelligence intercepted this foolhardy Zimmermann Telegram and shared it with the Wilson administration. Americans were predictably outraged when news of the telegram became public.

On April 2, President Wilson delivered a message to Congress asking for a declaration of war. He focused on what he labeled the barbaric and cruel inhumanity of attacking neutral ships and killing innocents on the high seas. He spoke of American freedom of the seas and neutral rights but primarily painted a stark moral picture of why the United States should go to war with the German Empire which had violated “the most sacred rights of our Nation.”

Wilson took an expansive view of the purposes of American foreign policy that reshaped American exceptionalism. He had a progressive vision of remaking the world by using the war to spread democratic principles and end autocratic regimes. In short, he thought, “The world must be made safe for democracy.”

Wilson argued that the United States had a duty as “one of the champions of the rights of mankind.” It would not merely defeat Germany but free its people. Americans were entering the war “to fight thus for the ultimate peace of the world and for the liberation of its peoples, the German peoples included: for the rights of nations great and small and the privilege of men everywhere to choose their way of life.”

Wilson believed that the United States had larger purposes than merely defending its national interests. It was now responsible for world peace and the freedom of all. “Neutrality is no longer feasible or desirable where the peace of the world is involved and the freedom of its peoples, and the menace to that peace and freedom lies in the existence of autocratic governments backed by organized force which is controlled wholly by their will, not by the will of their people.”

At the end of the war and during the Versailles conference, Wilson further articulated this vision of a new world with his Fourteen Points and proposal for a League of Nations to prevent future wars and ensure a lasting world peace.

Wilson’s vision failed to come to fruition. The Senate refused to ratify the Treaty of Versailles because it was committed to preserving American sovereignty, including the national power to declare war. The great powers were more dedicated to their national interests than to world peace. Moreover, the next twenty years saw the spread of totalitarian communist and fascist regimes rather than progressive democracies. Finally, World War II shattered his vision of remaking the world.

Wilson’s ideals were not immediately adopted, but in the long run helped to reshape American foreign policy. The twentieth and twenty-first centuries saw increasing Wilsonian appeals by American presidents and policymakers to go to war to spread democracy throughout the world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: The Honorable Michael Warren

Before the outbreak of the American Revolution, the colonies were deeply embedded in patriarchal traditions and customs common to the entire world. All cultures and civilizations had placed women in a subordinate position in the political and social realm. However, the Declaration of Independence raised the consciousness of at least some women and men about the inequality embedded in the era’s legal and cultural regimes. Women became serious contributors to the American Revolution war effort, and some, such as Abigail Adams (wife of the “Colossus of Independence” and future president John Adams), questioned why they should not be entitled to the equality declared in the Declaration.

Unfortunately, the idea of gender equality was scoffed at by most men and women alike. For the most part, women were supreme in their social sphere of family and housekeeping, but were to have no direct political or legal power.

The political patriarchy did not consider women to possess the correct temperament, stamina, or talents to be full participants in the American experiment. Justice Joseph Bradley of the United States Supreme Court, in a concurring opinion upholding the Illinois Bar’s prohibition of women from the practice of law, epitomized these sentiments:

[T]he civil law, as well as nature herself, has always recognized a wide difference in the respective spheres and destinies of man and woman. Man is, or should be, woman’s protector and defender. The natural timidity and delicacy which belongs to the female sex evidently unfits it for many occupations of civil life.

However, the hunger for freedom and equality could not be contained. With the strengthening of the abolitionist movement came a renewed interest in women’s suffrage. A groundbreaking women’s suffrage conference – the first of its kind in the world – was organized by Elizabeth Cady Stanton and others in Seneca Falls, New York, in 1848. At the heart of the conference was the Declaration of Sentiments and Resolutions, written by Stanton and adopted by the conference on July 20, 1848. Paralleling the Declaration of Independence, the power of the statement is understood best by simply reading a key passage:

We hold these truths to be self-evident: that all men and women are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness; that to secure these rights governments are instituted, deriving their just powers from the consent of the governed. Whenever any form of government becomes destructive of these ends, it is the right of those who suffer from it to refuse allegiance to it, and to insist upon the institution of a new government, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness. Prudence, indeed, will dictate that governments long established should not be changed for light and transient causes; and accordingly all experience hath shown that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same object, evinces a design to reduce them under absolute despotism, it is their duty to throw off such government, and to provide new guards for their future security. Such has been the patient sufferance of the women under this government, and such is now the necessity which constrains them to demand the equal station to which they are entitled. The history of mankind is a history of repeated injuries and usurpations on the part of man toward woman, having in direct object the establishment of an absolute tyranny over her. To prove this, let facts be submitted to a candid world.


Now, in view of this entire disfranchisement of one-half the people of this country, their social and religious degradation–in view of the unjust laws above mentioned, and because women do feel themselves aggrieved, oppressed, and fraudulently deprived of their most sacred rights, we insist that they have immediate admission to all the rights and privileges which belong to them as citizens of the United States.

The Seneca Falls conference and declaration were just the beginning. During the lead-up to and aftermath of the Civil War, the women’s suffrage movement gathered strength and momentum. The Fifteenth Amendment, which gave all men the right to vote regardless of race or prior servitude, was bittersweet. Its ratification split the suffragist and abolitionist movements – some within both movements wanted women to be included in the amendment, while others did not want to jeopardize its passage by including women in light of the overwhelming bias against women’s suffrage at that time. The suffragists lost, and the Fifteenth Amendment gave all men – but not women – their due.

It took several more generations of determined suffragists to enact constitutional change with the adoption of the Nineteenth Amendment. In 1869, the territory of Wyoming became the first to give women the right to vote. It would take over fifty more years before women’s right to vote became a constitutional right, won only through the great tenacity, persistence, brilliance, and courage of the women and men suffragists who slowly but surely turned the nation toward universal suffrage. Parades, protests, hunger strikes, speaking tours, book tours, and countless other tactics were used to change the tide.

The Nineteenth Amendment was passed by Congress on June 4, 1919, ratified by the States on August 18, 1920, and effective on August 26, 1920. It simply provides:

The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of sex. Congress shall have power to enforce this article by appropriate legislation.

Short, but revolutionary. Honor the sacrifices of generations before us and defend – and exercise – the right to vote for women and all Americans.

Michael Warren serves as an Oakland County Circuit Court Judge and is the author of America’s Survival Guide, How to Stop America’s Impending Suicide by Reclaiming Our First Principles and History. Judge Warren is a constitutional law professor, and co-creator of Patriot Week.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Scot Faulkner

On March 3, 1917, 162 words changed the course of World War I and the history of the 20th Century.

Germany officially admitted to sending the “Zimmermann Telegram,” exposing a complex web of international intrigue designed to keep America out of World War I. It was this admission, and not the sinking of the Lusitania on May 7, 1915, that led to the U.S. entering the European war.

The Zimmermann Telegram was a message sent by Arthur Zimmermann, a senior member of the German Foreign Office in Berlin, to Ambassador Heinrich von Eckardt in the German Embassy in Mexico City. It outlined Germany’s plans to support Mexico in a war with the United States should America enter the European War:

We intend to begin on the first of February unrestricted submarine warfare. We shall endeavor in spite of this to keep the United States of America neutral. In the event of this not succeeding, we make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona. The settlement in detail is left to you. You will inform the President of the above most secretly as soon as the outbreak of war with the United States of America is certain, and add the suggestion that he should, on his own initiative, invite Japan to immediate adherence and at the same time mediate between Japan and ourselves. Please call the President’s attention to the fact that the ruthless employment of our submarines now offers the prospect of compelling England in a few months to make peace.

The story of how this telegram became the pivotal document of World War I reads like a James Bond movie.

America was neutral during the early years of the “Great War.” It also managed the primary transatlantic telegraph cable. European governments, on both sides of the war, were allowed to use the American cable for diplomatic communications with their embassies in North and South America. On a daily basis, messages flowed, unfettered and unread, between diplomatic outposts and European capitals.

Enter Nigel de Grey and his “Room 40” codebreakers.

British Intelligence monitored the American Atlantic cable, violating its neutrality. On January 16, 1917, the Zimmermann Telegram was intercepted and decoded. De Grey and his team immediately understood the explosive impact of its contents. Such a documented threat might force the U.S. into declaring war on Germany. At the time, the “Great War” was a bloody stalemate, and unrest in Russia was tilting the outcome in favor of Germany.

De Grey’s challenge was how to orchestrate the telegram getting to American officials without exposing British espionage operations or the breaking of the German codes. He and his team created an elaborate ruse. They would invent a “mole” inside the German Embassy in Mexico City. This “mole” would steal the Zimmermann Telegram and send it, still encrypted, to British intelligence.

The stolen copy would use an older encryption; the Germans would consider its use a mistake and assume the code was so old it had already been broken. American-based British spies confirmed that the older code, and its decryption, was already in the files of the American Telegraph Company.

On February 19, 1917, British Foreign Office officials shared the older encoded version of the Zimmermann Telegram with U.S. Embassy officials. After decoding it and confirming its authenticity, they sent it on to the White House staff.

President Woodrow Wilson was enraged and shared it with American newspaper reporters on February 28. At a March 3, 1917 news conference, Zimmermann confirmed the telegram, stating, “I cannot deny it. It is true.” German officials tried to rationalize the telegram as only a contingency plan, legitimately protecting German interests should America enter the war against them.

On April 2, President Wilson finally went before a Joint Session of Congress requesting a Declaration of War against Germany. The Senate approved the Declaration on April 4 and the House on April 6. It took forty-four days for American public opinion to coalesce around declaring war.

Why the delay?

Americans were deeply divided on intervening in the “European War.” Republicans were solidly isolationist. They had enough votes in the Senate to filibuster a war resolution. They were already filibustering the “Armed Ship Bill” which authorized the arming of American merchant ships against German submarines. German Americans, a significant voter segment in America’s rural areas and small towns, were pro-German and anti-French. Irish Americans, a significant Democratic Party constituency in urban areas, were anti-English. There was also Wilson’s concern over Mexican threats along America’s southern border.

Germany was successful in exploiting America’s division and its isolationism. At the same time, Germany masterfully turned Mexico into a credible threat to America.

The Mexican Revolution provided the perfect environment for German mischief. Germany armed various factions and promoted the “Plan of San Diego,” which detailed Mexico’s reclaiming Texas, New Mexico, Arizona, and California. Even before the outbreak of the “Great War,” Germany orchestrated media stories and planted disinformation among Western intelligence agencies to create the impression of Mexico planning an invasion of Texas. German actions and rumors sparked a bloody confrontation between U.S. forces and Mexican troops at Veracruz in April 1914.

After years of preparation, German agents funded and inspired Pancho Villa’s March 9, 1916 raid on Columbus, New Mexico. In retaliation, on March 14, 1916, President Woodrow Wilson ordered General John “Black Jack” Pershing, along with 10,000 soldiers and an aviation squadron, to invade northern Mexico and hunt down Villa. Over the next ten months, U.S. forces fought twelve battles on Mexican soil, including several with Mexican government forces.

The costly and unsuccessful pursuit of Villa diverted America’s attention away from Europe and soured U.S.-Mexican relations.

Germany’s most creative method for keeping America out of World War I was a fifteen-part “Preparedness Serial” called “Patria.” In 1916, the German Foreign Ministry convinced William Randolph Hearst to produce this adventure story about Japan helping Mexico reclaim the American Southwest.

“Patria” was a major production. It starred Irene Castle, one of the early “mega-stars” of Hollywood and Broadway. Castle’s character uses her family fortune to thwart the Japan-Mexico plot against America. The movie played to packed houses across America and ignited paranoia about the growing menace on America’s southern border.  Concerns over Mexico, and opposition to European intervention, convinced Wilson to run for re-election on a “He kept us out of war” platform.  American voters narrowly re-elected Wilson, along with many new isolationist Congressional candidates.

“Patria,” and other German machinations, clouded the political landscape and kept America neutral until April 1917. Foreign interference in the 1916 election, along with chasing Pancho Villa, might have kept America out of WWI completely, except that the Zimmermann Telegram, outlining Germany’s next move, was intercepted by British Intelligence. It awakened Americans to a real threat.

Words really do matter.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: Amanda Hughes

Prior to World War I, oceanic travel between the Atlantic and Pacific Oceans had to navigate dangerous passages around southern South America. A way to connect the Atlantic Ocean to the Pacific had been considered for centuries. More recent among these considerations were the survey expeditions ordered by Ulysses S. Grant in 1869. Grant knew the isthmus firsthand: as an Army captain in 1852, he wrote of disease and other tragedies during military travels across the Isthmus of Panama, “The horrors of the road in the rainy season are beyond description.” One of Grant’s surveys covered Panama, and the route it proposed was nearly the same as the current route of the Panama Canal.

Other efforts followed. Count Ferdinand de Lesseps of France, who had built Egypt’s Suez Canal, led the charge to begin construction on a canal across the Isthmus of Panama in 1880. By 1888, challenges such as illness from yellow fever and malaria, along with constant rain and mudslides, ended the French plans.

Further attempts to diminish the lengthy and costly trek began with negotiations between United States Secretary of State John M. Clayton and Sir Henry Lytton Bulwer, who served as British minister to Washington. A canal connecting the Atlantic and Pacific coasts was viewed as necessary in order to maintain American strength throughout the world. The resulting Clayton-Bulwer Treaty of 1850 allowed the United States and Britain to maintain joint control, quelling rivalry over a proposed canal through Nicaragua, northwest of Panama. Because several decades passed without construction beginning, that agreement was replaced in 1901, during President William McKinley’s Administration, by the Hay-Pauncefote Treaty, negotiated by United States Secretary of State John Hay and British Ambassador Lord Julian Pauncefote, in which Britain agreed that the United States should take charge of the canal’s construction and control.

In June 1902, Congress passed H.R. 3110, the Hepburn Bill, named for Representative William Peters Hepburn and also known as the Panama Canal Act of 1902 and the Spooner Act. The bill approved construction of a canal through Nicaragua connecting the waters of the Atlantic and Pacific oceans. Senator John Coit Spooner offered an amendment authorizing the president, Theodore Roosevelt at that time, to purchase from the French company that had abandoned construction in the late 1800s its rights, assets, and construction site, for no more than $40,000,000, allowing instead a canal through Panama on land owned by Colombia.

By 1902, Congress viewed an isthmian canal as especially imperative in order to improve United States defense. The USS Maine had exploded in Cuba, and the USS Oregon, stationed on the West Coast, had needed a long two months to reach the Atlantic side near the Caribbean to aid in the Spanish-American War. In a Senate speech, Senator Spooner mentioned:

“I want…a bill to be passed here under which we will get a canal. There never was greater need for it than now. The Oregon demonstrated [that] to our people.”

Conflicts regarding sovereignty over Panama continued past the earlier agreements. In early 1903, United States Secretary of State John Hay and Colombian Foreign Minister Tomás Herrán signed the Hay-Herrán Treaty, which would have allowed the United States to build the canal on Colombian land, but Colombia’s congress would not accept it. Later that year, the United States aided a revolution that helped Panama gain independence from Colombia, establishing the Republic of Panama.

That same year, Secretary Hay and Philippe Bunau-Varilla, representing Panamanian interests, signed the Hay-Bunau-Varilla Treaty of 1903, which the Senate ratified in 1904. While tensions still remained, the new agreement provided independence for Panama and allowed the United States to build and use a canal without limit. It enlarged the Canal Zone and gave the United States, in effect, sovereign power, including authority to maintain order in the affected area.

Finally able to go ahead with the project, President Roosevelt selected an Isthmian Canal Commission, including a governor and seven members, to see the canal through. Such a commission had previously been arranged by President William McKinley, who was assassinated in 1901. Other presidents were also involved, especially William Howard Taft, who served as President after Roosevelt; as Roosevelt’s Secretary of War, Taft visited the canal and participated the most over the longest time. Under Roosevelt the commission included a representative each from the Army and Navy, and the group reported to the secretary of war. United States Army engineers were involved in the planning, supplies, and construction throughout. President Roosevelt argued that the War Department “has always supervised the construction of the great civil works and…been charged with the supervision of the government of all the island possessions of the United States.”

Approved, yet fraught with many construction challenges, the Panama Canal under the control of the United States began in 1904 with construction at the bottom of Culebra Cut, later renamed Gaillard Cut, with 160 miles of track laid at the bottom of the canal. The track had to be moved continually to keep up with the shoveling of the surrounding ground and to keep construction materials arriving along the canal route behind hundreds of locomotives. Locomotives hauled dirt along the route in what were called dirt trains. Wet slides caused by rain and the slipping of softer dirt were among many hindrances to construction. Other slides, some of which occurred during dry seasons, were caused by faults in the earth due to cuts in the sidewalls of the canal. The slides expanded into the cuts, but the workers kept at their tasks. Rock drills were used to set dynamite shots; six million pounds of explosives per year were used to blast the nine-mile cut. The first water to enter the Panama Canal flowed in from the Chagres River. The Chagres Basin is filled by Gatun Lake, formed by the man-made Gatun Dam on the Atlantic end of the canal.

Led by Lt. Col. George Washington Goethals of the United States Army Corps of Engineers, improvements to the canal’s workings began. The American engineers redesigned the canal with two sets of three locks: one set at the entrance on the Pacific side, the other on the Atlantic side. It was the largest canal lock system built to that time. The lock chambers were 1,000 feet long and 110 feet wide, with gates up to 81 feet tall, and allowed for two-way traffic. The width of the canal accommodated large ships and their cargo. The locks were designed to raise and lower ships in the water, controlled by dams and spillways. The engineering marvel of the locks and dam system saved money and construction time while providing safety. An earth dam with a man-made lake was designed to limit excavation. The largest dam in the world at the time of construction, it was intended to maintain the elevation of the water level. The dam allowed millions of gallons of water to be released daily through the canal, with a thick underwater spillway at the top that offered protection from flooding.

Col. William C. Gorgas, who later served as Surgeon General of the Army during World War I, worked to prevent disease and death during construction of the canal. Col. Gorgas worked to eradicate the major threats of yellow fever and malaria, the latter viewed as a much greater threat than all of the other diseases combined. He mentioned during a 1906 medical conference that “malaria in the tropics is by far the most important disease to which tropical populations are subject,” because “the amount of incapacity caused by malaria is very much greater than that due to all other diseases combined.”

The total cost of the canal to America, as completed in 1914, is estimated at $375,000,000. The total included $10,000,000 paid to Panama and $40,000,000 to the French company. Fortifying the canal cost an additional $12,000,000. Thousands of workers from many countries were employed throughout construction. The jobs were often dangerous, but those overseeing the project made efforts to protect against injury and loss of life.

In 1964, Panama protested United States control over the canal, which led eventually to two agreements: the Permanent Neutrality Treaty, which Panama wanted in order to make the canal open to all nations, and the Panama Canal Treaty, providing joint control over the canal by the United States and Panama. These treaties were signed in September 1977 by President Jimmy Carter and Panamanian leader Brig. Gen. Omar Torrijos Herrera. Complete control over the Panama Canal was transferred to Panama in 1999.

Engineers whose unprecedented technological ideas overcame seemingly insurmountable odds cut fifty miles of canal through mountains and jungle. Completed and opened on August 15, 1914, the Panama Canal offered a waterway through the Isthmus of Panama connecting the oceans, creating fifty miles of passage. The American cargo and passenger ship SS Ancon was the first to officially pass through the Panama Canal in 1914. A testament to American innovation and ingenuity, the Panama Canal has been recognized by the American Society of Civil Engineers as one of the seven wonders of the modern world.

Amanda Hughes serves as Outreach Director, and 90 Day Study Director, for Constituting America. She is author of Who Wants to Be Free?, and a story contributor for the anthologies Loving Moments, and Moments with Billy Graham.


Guest Essayist: Gary Porter

When the United States Constitution was adopted in 1788, senators were to be elected by state legislatures in order to protect the states from a federal government bent on increasing its own power. Problems with legislative election of senators later resulted in lengthy Senate vacancies. A popular vote movement began as a solution, but it failed to consider the importance of the separation of powers the Framers had designed to protect liberty and maintain stability in government. The popular vote hampered the more deliberative body that is the United States Senate, succumbing to the more passionate, immediate will of the people, when on April 8, 1913, the Seventeenth Amendment to the U.S. Constitution was adopted.

We’ve all heard the phrase “shooting oneself in the foot.” One reference reminds us: “To shoot oneself in the foot means to sabotage oneself, to make a silly mistake that harms yourself in some fashion. The phrase comes from a phenomenon that became fairly common during the First World War. Soldiers sometimes shot themselves in the foot in order to be sent to the hospital tent rather than being sent into battle.”[i]

Can a state, one of the United States, be guilty of “shooting itself in the foot?” How about multiple states? How about thirty-six states all at once? Not only can they be, I believe they have been guilty, particularly as it regards the Seventeenth Amendment. Let me explain.

“Checks and balances, checks and balances,” we hear the refrain often and passionately these days. The phrase “checks and balances” is part of every schoolchild’s introduction to the Constitution. In May 2019, when President Donald Trump exerted executive privilege to prevent the testimony before Congress of certain White House advisors, NBC exclaimed: “Trump’s subpoena obstruction has fractured the Constitution’s system of checks and balances.”[ii] I’m not certain the Framers of the Constitution would agree with NBC: exerting executive privilege has been part of our constitutional landscape since George Washington,[iii] and if exerting it “fractures” the Constitution, the document would have fallen into pieces long, long ago. As we will see, a significant “fracturing” of the Constitution’s system of checks and balances did occur in this country, but it occurred more than a hundred years before President Donald Trump took office.

The impeachment power is intended to check a rogue President. The Supreme Court checks a Constitution-ignoring Congress, as does the President’s veto. Congress can check (as in limit) the appellate jurisdiction of the Supreme Court, and reduce or expand the number of justices at will. There are many examples of checks and balances in the Constitution. The framers of the document, distrustful as they were of human nature, were careful to give us this critical, power-limiting feature.[iv] But which was more important: the checks or the balances?

Aha, trick question. They are equally important (in my opinion at least). And sometimes a certain feature works as both a check and a balance. The one I have in mind is the original feature whereby Senators were to be appointed by their state legislatures.

We all know the story of how the Senate came into being as the result of Roger Sherman’s great compromise. It retained the “one-state-one-vote” equality the small states had enjoyed with the large states under the Articles of Confederation, while also creating a legislative chamber, the House, where representation was based on a state’s population. Senators’ six-year terms allowed them to take “a more detached view of issues coming before Congress.”[v] But how should these new Senators be selected: by the people, as in the House, or otherwise?

On June 7, 1787, the Constitutional Convention unanimously adopted a proposal by John Dickinson and Roger Sherman that the state legislatures elect this “Second Branch of the National Legislature.” Why not the people? Alexander Hamilton explains:

“The history of ancient and modern republics had taught them that many of the evils which those republics suffered arose from the want of a certain balance, and that mutual control indispensable to a wise administration. They were convinced that popular assemblies are frequently misguided by ignorance, by sudden impulses, and the intrigues of ambitious men; and that some firm barrier against these operations was necessary. They, therefore, instituted your Senate.”[vi] (Emphasis added)

The Senate was to avoid the “impulses” of popularly-elected assemblies and provide a “barrier” to such impulses when they might occur in the other branch.

James Madison explains in Federalist 62 who particularly benefits from this arrangement:

“It is … unnecessary to [expand] on the appointment of senators by the State legislatures. Among the various modes which might have been devised for constituting this branch of the government, that which has been proposed by the convention is probably the most congenial with the public opinion. It is recommended by the double advantage of favoring a select appointment, and of giving to the State governments such an agency in the formation of the federal government as must secure the authority of the former, and may form a convenient link between the two systems.”[vii] (Emphasis added)

Appointment by the state legislatures gave the state governments a direct voice in the workings of the federal government. Madison continues:

“Another advantage accruing from this ingredient in the constitution of the Senate is, the additional impediment it must prove against improper acts of legislation. No law or resolution can now be passed without the concurrence, first, of a majority of the people (in the House), and then, of a majority of the States. It must be acknowledged that this complicated check on legislation may in some instances be injurious as well as beneficial; ….” (Emphasis added)

For those with lingering doubt as to who the Senators were to represent, Robert Livingston explained in the New York Ratifying Convention: “The senate are indeed designed to represent the state governments.”[viii] (Emphasis added)

Perhaps sensing the potential to change the mode of electing Senators in the future, Hamilton cautioned: “In this state (his own state of New York) we have a senate, possessed of the proper qualities of a permanent body: Virginia, Maryland, and a few other states, are in the same situation: The rest are either governed by a single democratic assembly (ex: Pennsylvania), or have a senate constituted entirely upon democratic principles—These have been more or less embroiled in factions, and have generally been the image and echo of the multitude.”[ix] Hamilton refers here to those states where the state senators were popularly elected.

The careful balance of this system worked well until the end of the 19th century and the beginnings of the Progressive Era.

Gradually there arose a “feeling” that some senatorial appointments in the state legislatures were being “bought and sold.”  Between 1857 and 1900, Congress investigated three elections over alleged corruption. In 1900, the election of Montana Senator William A. Clark was voided after the Senate concluded that he had “purchased” eight of his fifteen votes.

Electoral deadlocks became another issue. Occasionally a state couldn’t decide on one or more of its Senators. One of Delaware’s Senate seats went unfilled from 1899 until 1903.

Neither of these problems was serious, but they both provided fodder for those enamored with “democracy.” But bandwagons being what they are, some could not resist. Some states began holding non-binding primaries for their Senate candidates.

Under mounting pressure from Progressives, by 1910, thirty-one state legislatures were asking Congress for a constitutional amendment allowing direct election of senators by the people. In the same year several Republican senators who were opposed to such reform failed re-election. This served as a “wake-up call” to others who remained opposed. Twenty-seven of the thirty-one states requesting an amendment also called for a constitutional convention to meet on the issue, only four states shy of the threshold that would require Congress to act.

Finally, on May 13, 1912, Congress responded. A resolution requiring direct election of Senators by the citizens of each state was introduced and quickly passed. In less than a year it had been ratified by three-quarters of the states, and it was declared part of the Constitution by Secretary of State William Jennings Bryan on May 31, 1913, two months after President Woodrow Wilson took office.

The Seventeenth Amendment has been cheered by the Left as a victory for populism and democracy, and bemoaned by the Right as a loss for states’ rights or “The Death of Federalism!” Now, millions in corporate funding pour into Senate election campaigns. Senators no longer consult with their state legislatures regarding pending legislation. Why should they? They now represent their state’s citizens directly. The interests of the state governments need not be considered.

For the states to actually ask Congress for this change seems incredibly near-sighted. Much of the encroachment by the Federal Government on policy matters which were traditionally the purview of the states can, I believe, be traced to the Seventeenth Amendment.

We repealed the Eighteenth Amendment. What about repealing the Seventeenth? Many organizations and individuals have called for it. Every year he was in office, Senator Zell Miller of Georgia called for its repeal. A brief look at who supports repeal and who opposes it reveals much. In support of repeal are the various Tea Party organizations, National Review magazine, and others on the Right. Opposed, predictably enough, sit the LA Times and other liberal organizations. Salon called the repeal movement “The surprising Republican movement to strip voters of their right to elect senators.” Where this supposed right originates is not explained in the article.

The wisdom of America’s Founders continues to amaze us more than 200 years later. Unfortunately, the carefully balanced framework of government they devised has been slowly chipped away by Supreme Court decisions and structural changes, like the Seventeenth Amendment. Seeing that the states willingly threw away their direct voice in the federal government, my sympathy for them is limited, but repeal of this dreadful amendment is long overdue.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached on Facebook or Twitter (@constitutionled).

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.




[iv] See Federalist 51

[v] Bybee, Jay S. (1997). “Ulysses at the Mast: Democracy, Federalism, and the Sirens’ Song of the Seventeenth Amendment”. Northwestern University Law Review. Northwestern University School of Law. p. 515.

[vi] Alexander Hamilton, speech to the New York Ratifying Convention, 1788

[vii] James Madison, Federalist 62

[viii] Robert Livingston, New York Ratifying Convention, 24 Jun 1788.

[ix] Alexander Hamilton, speech to the New York Ratifying Convention, 1788

Guest Essayist: Tony Williams

During the summer of 1896, twenty-five-year-old Orville Wright was recovering from typhoid fever in his Dayton, Ohio home. His brother, Wilbur Wright, was reading to Orville accounts of Otto Lilienthal, a German glider enthusiast who had been killed in a crash while flying his glider. The brothers then began reading several books about bird flight and even applying its mechanics to powered human flight.

Despite the dreams of several visionaries who were studying human flight, the Washington Post proclaimed, “It is a fact that man can’t fly.” The Wright brothers were amateurs who might just be able to prove the newspaper wrong. They had tinkered with mechanical inventions since they were boys, had owned a printing press, now ran a bicycle shop, and were highly skilled mechanics. They did not have the advantages of great wealth or a college education, but they had excellent work habits, perseverance, and the dedication and discipline to achieve their goal.

On May 30, 1899, Wilbur wrote a letter to the Smithsonian Institution in Washington, D.C. He stated, “I have been interested in the problem of mechanical and human flight ever since I [was] a boy.” He added, “My observations since have only convinced me more firmly that human flight is possible and practicable.” He requested any reading materials that the Smithsonian might be willing to send. He received a packet full of recent pamphlets and articles including those of Samuel Pierpont Langley who was the Secretary of the Smithsonian.

The Wright brothers voraciously read the Smithsonian materials and books about bird flight. Based upon their study, they first built a glider that allowed them to acquire vast knowledge about the mechanics necessary to fly. They knew that this was a necessary step toward powered flight.

The Wright brothers next found a suitable location for their glider flights: Kitty Hawk, North Carolina, which had the right combination of steady, strong winds and soft sand dunes on which to crash-land. They even flew kites and studied the flight of different birds to measure the air flow in the test area.

On October 19, 1900, Wilbur climbed aboard a glider and flew nearly 30 miles per hour for about 400 feet. They made several more test flights and carefully recorded data about them. Armed with this knowledge and experience, they returned to Dayton and made alterations to the glider during the winter. They returned to Kitty Hawk the following summer for additional testing.

The Wright brothers spent the summer of 1901 acquiring mounds of new data, taking test flights, and tinkering constantly on the design. The brothers experienced their share of doubts that they would be successful. During one low moment, Wilbur lamented that “not in a thousand years would man ever fly.” However, they encouraged each other, and Orville stated that “there was some spirit that carried us through.” During that winter, they even built a homemade wind tunnel and continued to re-design the glider based upon their practical discoveries in Kitty Hawk and theoretical experiments in Dayton.

During the fall of 1902, they made their annual pilgrimage to their camp at Kitty Hawk where they worked day and night. During one sleepless night, Orville developed an idea for a movable rear rudder for better control. They installed a rudder, and the modification helped them achieve even greater success with the glider flights. They knew they were finally ready to test a motor and dared to believe that they might fly through the air in what would become an airplane.

The Wright brothers spent hundreds of hours over the next year testing motors, developing propellers, and finding solutions to countless problems. Orville admitted, “Our minds became so obsessed with it that we could do little other work.” In December 1903, they reached Kitty Hawk and unpacked their powered glider for reassembly at their camp.

On December 17, five curious locals braved the freezing cold to watch Orville and Wilbur Wright as they prepared their flying machine. Wilbur set up their camera on its wooden tripod a short distance from the plane. Dressed in a suit and tie, Orville climbed aboard the bottom wing of the bi-plane and strapped himself in while the motor was warming up.

At precisely 10:35 a.m., Orville launched down the short track while Wilbur ran beside, helping to steady the plane. Suddenly, the plane lifted into the air, and Orville became the first person to pilot a machine that flew under its own power. He flew about 120 feet for nearly twelve seconds. It was a humble yet historic flight.

When Orville was later asked if he was scared, he joked, “Scared?  There wasn’t time.” They readied the plane for another flight and a half-hour later, Wilbur joined his brother in history by flying “like a bird” for approximately 175 feet. They flew farther and farther that day, and Wilbur went nearly half a mile in 59 seconds. They sent their father a telegram sharing the news of their success, and as he read it, he turned to their sister and said, “Well, they’ve made a flight.”

One witness of the Wright Brothers’ first flight noted the character that made them successful: “It wasn’t luck that made them fly; it was hard work and common sense; they put their whole heart and soul and all their energy into an idea, and they had faith.” President William Howard Taft also praised the hard work that went into the Wright Brothers’ achievement. “You made this discovery,” he told them at an award ceremony, “by keeping your noses right at the job until you had accomplished what you had determined to.” The Wright brothers’ flight was part of a long train of technological innovations that resulted from American ingenuity and spirit.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Tony Williams

In late January 1898, President William McKinley dispatched the U.S.S. Maine to Cuban waters to protect American citizens and business investments during ongoing tensions between Spain and its colony, Cuba. The event eventually sparked a war that dramatically culminated a century of expansion and led Americans to debate the purposes of American foreign policy at the dawn of the twentieth century.

Events only ninety miles from American shores were increasingly involving the United States in Cuban affairs during the late 1890s.

Cuban revolutionaries had fought a guerrilla war against imperialist Spain starting in 1895, and Spain had responded by brutally suppressing the insurgency. General Valeriano Weyler, nicknamed “the butcher,” forced Cubans into relocation camps to deny the countryside to the rebels. Tens of thousands perished, and Cuba became a cause célèbre for many Americans.

Moreover, William Randolph Hearst, Joseph Pulitzer, and other newspaper moguls publicized the atrocities committed by Spain’s military and encouraged sympathy for the Cuban people. Hearst knew the power he held over public opinion, telling one of his photographers, “You furnish the pictures. I’ll furnish the war.”

During the evening of February 15, all was quiet as the Maine sat at anchor in Havana harbor. At 9:40 p.m., an explosion shattered the silence and tore the ship open, killing 266 sailors and Marines aboard. Giant gouts of flames and smoke flew hundreds of feet into the air. The press immediately blamed Spain and called for war with the sensationalist style of reporting called “yellow journalism.” The shocked public clamored for war with the popular cry, “Remember the Maine!” Hearst had also recently printed an insulting private letter from the Spanish ambassador to the United States, Don Enrique Dupuy de Lôme, that called McKinley “weak.”

President McKinley had sought alternatives to war for years and continued to seek a diplomatic solution despite the war fervor. However, Assistant Secretary of the Navy Theodore Roosevelt repositioned naval warships close to Cuba and ordered Commodore George Dewey to prepare to attack the Spanish fleet in Manila Bay in the Philippines should war break out. Roosevelt thought McKinley had “no more backbone than a chocolate éclair.” Despite McKinley’s best efforts, Congress declared war on April 25. Roosevelt quickly resigned and received approval to help raise a cavalry regiment, nicknamed the “Rough Riders.”

Roosevelt felt it was his patriotic duty to serve his country. “It does not seem to me that it would be honorable for a man who has consistently advocated a warlike policy not to be willing himself to bear the brunt of carrying out that policy.” Moreover, Roosevelt praised “the soldierly virtues” and sought the strenuous life for himself and the country, which he believed had gone soft with the decadence of the Gilded Age. He wanted to test himself in battle and win glory.

Roosevelt went to Texas to train the eclectic First Volunteer Cavalry regiment of tough western cowboys, American Indians, and patriotic Ivy League athletes. Roosevelt felt comfortable with both groups of men because he had attended Harvard and owned a North Dakota ranch. The regiment trained in the dusty heat of San Antonio, in the shadow of the Alamo, under him and Medal of Honor recipient Colonel Leonard Wood.

The regiment loaded their horses and boarded trains bound for the embarkation point at Tampa, Florida. On June 22, the Rough Riders and thousands of other American troops landed unopposed at Daiquirí on the southern coast of Cuba. Many Rough Riders were without their horses and started marching toward the Spanish army at the capital of Santiago.

The Rough Riders and other U.S. troops were suffering from the tropical heat and forbidding jungle terrain. On June 24, hidden Spanish troops ambushed the Americans near Las Guasimas village. After a brief exchange resulting in some casualties on both sides, the Spanish withdrew to their fortified positions on the hills in front of Santiago. By June 30 the Americans had made it to the base of Kettle Hill, where the Spanish were entrenched and had their guns sighted on the surrounding plains.

American artillery was brought forward to bombard Kettle Hill, and Spanish guns answered. Several Rough Riders and men from other units were cut down by flying shrapnel. Roosevelt himself was wounded slightly in the arm. He and the entire army grew impatient as they awaited orders to attack.

When the order finally came, a mounted Roosevelt led the assault. The Rough Riders were flanked on either side by the African American “Buffalo Soldiers” of the regular Ninth and Tenth Cavalry regiments, commanded by white officer John “Black Jack” Pershing. The American troops charged up the incline while firing at the enemy; Roosevelt dismounted partway up and led the charge on foot. The Spanish fired into the American ranks, killing and wounding dozens, but were soon driven off. When Spaniards atop adjacent San Juan Hill fired on the Rough Riders, Roosevelt prepared his men to attack that hill as well.

After much confusion in the initial charge, Roosevelt rallied his troops. Finally, he jumped over a fence and again led the charge with the support of rattling American Gatling guns. The Rough Riders and other regiments successfully drove the Spaniards off the hill and gave a great cheer. They dug into their positions and collapsed, exhausted after a day of strenuous fighting. The Americans took Santiago relatively easily, forcing the Spanish fleet to take to sea where it was destroyed by U.S. warships. The Spanish capitulated on August 12.

Roosevelt became a national hero and used the fame to catapult his way to become governor of New York, vice-president, and president after McKinley was assassinated in 1901. Although the 1898 Teller Amendment guaranteed Cuban sovereignty and independence, the United States gained significant control over Cuban affairs with the Platt Amendment in 1901 and Roosevelt Corollary to the Monroe Doctrine in 1904. The United States also built the Panama Canal for trade and national security.

In the Philippines, Commodore Dewey sailed into Manila Bay and wiped out the Spanish fleet there on May 1, 1898. However, the Filipinos, led by Emilio Aguinaldo, rebelled against American control just as they had against the Spanish. The insurrection cost thousands of American and Filipino lives. Americans established control there after suppressing the revolt in 1902.

The Spanish-American War was a turning point in history because the nation assumed global responsibilities for a growing empire that included Cuba, the Philippines, Puerto Rico, and Guam (as well as Hawaii, annexed separately). The war sparked a sharp debate between imperialists and anti-imperialists in the United States over the course of American foreign policy and global power. The debate continued throughout the twentieth century, which became known as the “American Century” because of America’s power and influence around the world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Joerg Knipprath

On February 15, 1898, an American warship, U.S.S. Maine, blew up in the harbor of Havana, Cuba. A naval board of inquiry reported the following month that the explosion had been caused by a submerged mine. That conclusion was confirmed in 1911, after a more exhaustive investigation and careful examination of the wreck. What was unclear, and remains so, is who set the mine. During the preceding decade, tensions with Spain had been rising over that country’s handling of a Cuban insurgency against Spanish rule. The newspaper chains of William Randolph Hearst and Joseph Pulitzer had long competed for circulation by sensationalist reporting. The deteriorating political conditions in Cuba and the harshness of Spanish attempts to suppress the rebels provided fodder for the newspapers’ “yellow” journalism. Congress had pressured the American government to do something to resolve the crisis, but neither President Grover Cleveland nor President William McKinley had taken the bait thus far.

With the heavy loss of life that accompanied the sinking, “Remember the Maine” became a national obsession. Although Spain had very little to gain from sinking an American warship, whereas Cuban rebels had much to gain in order to bring the United States actively to their cause, the public outcry was directed against Spain. The Spanish government previously had offered to change its military tactics in Cuba and to allow Cubans limited home rule. The offer now was to grant an armistice to the insurgents. The American ambassador in Spain believed that the Spanish government would even be willing to grant independence to Cuba, if there were no political or military attempt to humiliate Spain.

Neither the Spanish government nor McKinley wanted war. However, the latter proved unable to resist the new martial mood and the aroused jingoism in the press and Congress. On April 11, 1898, McKinley sent a message to Congress that did not directly call for war, but declared that he had “exhausted every effort” to resolve the matter and was awaiting Congress’s action. Congress declared war. A year later, McKinley observed, “But for the inflamed state of public opinion, and the fact that Congress could no longer be held in check, a peaceful solution might have been had.” He might have added that, had he been possessed of a stiffer political spine, that peaceful solution might have been had, as well.

The “splendid little war,” in the words of the soon-to-be Secretary of State, John Hay, was exceedingly popular and resulted in an overwhelming and relatively easy American victory. Only 289 were killed in action, although, due to poor hygienic conditions, many more died from disease. Psychologically, it proved cathartic for Americans after the national trauma of the Civil War. One symbolic example of the new unity forged by the war with Spain was that Joe Wheeler and Fitzhugh Lee, former Confederate generals, were generals in the U.S. Army.

Spain signed a preliminary peace treaty in August. The treaty called for the surrender of Cuba, Puerto Rico, and Guam. The status of the Philippines was left for final negotiations. The ultimate treaty was signed in Paris on December 10, 1898. The Philippines, wracked by insurrection, were ceded to the United States for $20 million. The administration believed that it would be militarily advantageous to have a base in the Far East to protect American interests.

The war may have been popular, but the peace was less so. The two-thirds vote needed for Senate approval of the peace treaty was a close-run matter. There was a militant group of “anti-imperialists” in the Senate who considered it a betrayal of American republicanism to engage in the same colonial expansion as the European powers. Americans had long imagined themselves to be unsullied by the corrupt motives and brutal tactics that such colonial ventures represented in their minds. McKinley, who had reluctantly agreed to the treaty, reassured himself and Americans, “No imperial designs lurk in the American mind. They are alien to American sentiment, thought, and purpose.” But, with a nod to Rudyard Kipling’s urging that Americans take on the “white man’s burden,” McKinley cast the decision in republican missionary garb, “If we can benefit those remote peoples, who will object? If in the years of the future they are established in government under law and liberty, who will regret our perils and sacrifices?”

The controversy around an “American Empire” was not new. Early American republicans like Thomas Jefferson, Alexander Hamilton, and John Marshall, among many others, had described the United States in that manner and without sarcasm. The government might be a republic in form, but the United States would be an empire in expanse, wealth, and glory. Why else acquire the vast Louisiana territory in 1803? Why else demand from Mexico that huge sparsely-settled territory west of Texas in 1846? “Westward the Course of Empire Takes Its Way,” painted Emanuel Leutze in 1861. Manifest Destiny became the aspirational slogan.

While most Americans cheered those developments, a portion of the political elite had misgivings. The Whigs opposed the annexation of Texas and the Mexican War. To many Whigs, the latter especially was merely a war of conquest and the imposition of American rule against the inhabitants’ wishes. Behind the republican facade lay a more fundamental political concern. The Whigs’ main political power was in the North, but the new territory likely would be settled by Southerners and increase the power of the Democrats. That movement of settlers would also give slavery a new lease on life, something much reviled by most Whigs, among them a novice Congressman from Illinois, Abraham Lincoln.

Yet, by the 1890s, the expansion across the continent was completed. Would it stop there or move across the water to distant shores? One omen was the national debate over Hawaii that culminated in the annexation of the islands in 1898. Some opponents drew on the earlier Whig arguments and urged that, if the goal of the continental expansion was to secure enough land for two centuries to realize Jefferson’s ideal of a large American agrarian republic, the goal had been achieved. Going off-shore had no such republican fig leaf to cover its blatant colonialism.

Other opponents emphasized the folly of nation-building and trying to graft Western values and American republicanism onto alien cultures who neither wanted them nor were sufficiently politically sophisticated to make them work. They took their cue from John C. Calhoun, who, in 1848, had opposed the fanciful proposal to annex all of Mexico, “We make a great mistake in supposing that all people are capable of self-government. Acting under that impression, many are anxious to force free Governments on all the people of this continent, and over the world, if they had the power…. It is a sad delusion. None but a people advanced to a high state of moral and intellectual excellence are capable in a civilized condition, of forming and maintaining free Governments ….”

With peace at hand, the focus shifted to political and legal concerns. The question became whether the Constitution applied to these new territories ex proprio vigore: “Does the Constitution follow the flag?” Neither President McKinley nor Congress had a concrete policy. The Constitution, framed by thirteen states along the eastern edge of a vast continent, offered no clear answer. The Articles of Confederation had provided for the admission of Canada and other British colonies, such as the West Indies, but that document was moot. The matter was left to the judiciary, and the Supreme Court provided a settlement of sorts in a series of cases decided over two decades, called the Insular Cases.

Cuba was easy. Congress’s declaration of war against Spain had been clear: “The United States hereby disclaims any disposition or intention to exercise sovereignty, jurisdiction, or control over said island except for the pacification thereof, and asserts its determination, when that is accomplished, to leave the government and control of the island to its people.” In Neely v. Henkel (1901), the Court unanimously held that the Constitution did not apply to Cuba. Effectively, Cuba was already a foreign country outside the Constitution. Cuba became formally independent in 1902. In similar manner, the United States promised independence to the Philippine people, a process that took several decades due to various military exigencies. Thus, again, the Constitution did not apply there, at least not tout court, as the Court affirmed beginning in Dorr v. U.S. in 1904. That took care of the largest overseas dominions, and Americans could tentatively congratulate themselves that they were not genuine colonialists.

More muddled was the status of Puerto Rico and Guam. In Puerto Rico, social, political, and economic conditions did not promise an easy path to independence, and no such assurance was given. The territory was not deemed capable of surviving on its own. Rather, the peace treaty expressly provided that Congress would determine the political status of the inhabitants. In 1900, Congress passed the Foraker Act, which set up a civil government patterned on the old British imperial system with which Americans were familiar. The locals would elect an assembly, but the President would appoint a governor and executive council. Guam was in a similar state of dependency.

In Downes v. Bidwell (1901), the Court established the new status of Puerto Rico as neither outside nor entirely inside the United States. Unlike Hawaii or the territories that were part of Manifest Destiny, there was no clear determination that Puerto Rico was on a path to become a state and, thus, was already incorporated into the entity called the United States. It belonged to the United States, but was not part of the United States. The Constitution, on its own, applied only to states and to territory that was expected to become part of the United States. Puerto Rico was more like, but not entirely like, temporarily administered foreign territory. Congress determined the governance of that territory by statute or treaty, and, with the exception of certain “natural rights” reflected in particular provisions of the Bill of Rights, the Constitution applied only to the extent to which Congress specified.

These cases adjusted constitutional doctrine to a new political reality inaugurated by the sinking of the Maine and the war that event set in motion. The United States no longer looked inward to settle its own large territory and to resolve domestic political issues relating to the nature of the union. Rather, the country was looking beyond its shores and was emerging as a world power. That metamorphosis would take a couple of generations and two world wars to complete, the last of which was triggered by another surprise attack on American warships.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at:

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Val Crofts

The Massacre at Wounded Knee, part of the Ghost Dance War, marked the last of the Indian Wars and the end of one of the bloodiest eras in American history: the systematic and deliberate destruction of Native American peoples and their way of life. It was an American Holocaust. Over a period of some 500 years, millions of Native Americans perished as settlers pushed west in the name of manifest destiny and destroyed the Native American territories that had been their home for thousands of years. These events will never take a place on the front of our history books, but they must never lose their place in our national memory.

Armed conflict between the U.S. Army and the Native American population was still prevalent in the American West in the 1880s, even after most of the tribes there had been displaced or had their populations greatly reduced. The Battle of the Little Bighorn in 1876 had been the fiercest engagement of the wars with the Sioux, which had begun in the mid-1850s; Chiefs Sitting Bull and Crazy Horse had gone to war to defend the Black Hills after the U.S. violated the treaty it had signed recognizing the land as the property of the Sioux. After the Battle of the Little Bighorn, Sioux forces were gradually depleted, and Crazy Horse surrendered in 1877.

The remaining Sioux were spread out among their reservations and eventually placed on a central reservation in the Dakota Territory, where they practiced a ritual known as the Ghost Dance. The dance was supposed to drive the white men from Native American territory and restore peace and tranquility to the region. Settlers were frightened by the dance and said it had a “ghostly aura” to it, thus giving it its name.

In response to the settlers’ fears, U.S. commanders arrested several leaders of the Sioux, including Chief Kicking Bear and Chief Sitting Bull, who was later killed.

Two weeks after Sitting Bull’s death, U.S. troops demanded that all the Sioux immediately turn over their weapons. As they were peacefully doing so, one deaf Sioux warrior did not understand the command to surrender his rifle. As the rifle was being taken from him, a shot went off in the crowd. The soldiers panicked and opened fire on everyone in the area.

As the smoke cleared, 300 dead Lakota and 25 dead U.S. soldiers lay on the ground. Many more Lakota were later killed by U.S. troops as they fled the reservation. The massacre ended the Ghost Dance movement and was the last of the Indian Wars. Twenty U.S. soldiers were later awarded the Medal of Honor for their actions during this campaign. The National Congress of American Indians has called on the U.S. government to rescind some or all of these medals, but the government has not yet done so.

The American public’s reaction to the massacre was positive at first, but over time, as its scale and gravity were revealed, the American people began to understand the brutal injustice that occurred during this encounter. Today, we need to remember the Massacre at Wounded Knee for its human cost and to make sure that events like this never happen again in our nation. We also need to honor and remember all Americans and their histories, even when they are not easy to read or take responsibility for. For how can we truly be a nation where all are created equal if the treatment of our histories is not?

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, and has been there for 16 years. He teaches AP U.S. Government and Politics, U.S. History, and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses for them. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Paul Israel

By the time Thomas Edison began his effort to develop an incandescent electric light in September 1878, researchers had been working on the problem for forty years. While many of them developed lamps that worked in the laboratory and for short-term demonstrations, none had been able to devise a lamp that would last in long-term commercial use.  Edison was able to succeed where others had failed because he understood that developing a successful commercial lamp also required him to develop an entire electrical system. With the resources of his laboratory, he and his staff were able to design not only a commercially successful lamp but the system that made it possible.

At the time, Edison’s work on telegraphs and telephones largely defined the limits of his knowledge of electrical technology. Unlike some of his contemporaries, he did not even have experience with arc lights and dynamos. Yet he confidently predicted that he could solve the problem, and, after only a few days of experiment during the second week of September 1878, he announced that he had “struck a bonanza.” He believed he had solved the problem of creating a long-lasting lamp by designing regulators that would prevent the lamp filament (he was then using platinum and related metals) from melting. Edison reached this solution by thinking of electric lights as analogous to telegraph instruments and lamp regulators as a form of electromechanical switch similar to those he used in telegraphy. Edison’s regulators used the expansion of metals or air heated by the electric current to divert current from the incandescing element in order to keep it from being destroyed by overheating. Edison was soon designing lamp regulators in the same fertile manner that he had previously varied the relays and circuits of his telegraph designs.

Edison was also confident that his insights regarding high-resistance lamps and parallel circuits would be key to designing a commercial electric lighting system. Because the regulator temporarily removed the lamp from the circuit, he realized that he had to place the lamps in parallel circuits so that each individual lamp could be turned on and off without affecting any others in the circuit. This was also desirable for customers used to independently operated gas lamps. Even more important was Edison’s grasp of basic electrical laws. He was virtually alone in understanding how to produce an economical distribution system. Other researchers had been stymied by the cost of the copper conductors, which would need a very large cross section to limit the energy lost as waste heat; conductors that large, however, would make the system too expensive. Edison realized that by using high-resistance lamps he could raise the operating voltage while proportionally reducing the current, and thus shrink the size and cost of the conductors.
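Edison’s economic insight can be sketched as a short, back-of-the-envelope derivation. The notation below is ours, added only for illustration; it is not Edison’s:

```latex
% A lamp delivering power P at supply voltage V draws current
I = \frac{P}{V}
% Power wasted in distribution conductors of total resistance R_c:
P_{\text{loss}} = I^2 R_c = \frac{P^2 R_c}{V^2}
% To hold the fractional loss P_loss / P constant as V rises, R_c may
% grow as V^2.  Since R_c = \rho L / A for copper of resistivity \rho,
% run length L, and cross-section A, the required copper cross-section
% (and hence the copper cost) falls as
A \propto \frac{1}{V^2}
% A high-resistance lamp is what permits the higher V at fixed power:
R_{\text{lamp}} = \frac{V^2}{P}
```

In other words, a lamp of, say, 100 ohms rather than a few ohms lets the same power be delivered at higher voltage and lower current, and the conductor savings scale with the square of that voltage.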

Edison initially focused his work on the lamp because he saw it as the critical problem and thought that standard arc-lighting dynamos could easily meet the requirements of an incandescent lighting system.  However, after experimenting with one of these dynamos, Edison began to doubt their suitability for his purposes. With the expectation of funds from the newly formed Edison Electric Light Company, he ordered other machines and began to design his own generators as well. By January 1879, Edison’s understanding of generators had advanced sufficiently “after a few weeks hard study on magneto electric principles,” for him to start his machinists building a new design. Edison’s ability to experiment with generators was greatly enhanced by his new financial resources that enabled him to build a large machine shop that could produce not only fine instruments like telegraphs and lamps, but also generators—”in short all the means to set up & test most deliberately every point of the Electric Light.” With the new facilities, machinery, and assistants made possible by his financial backers, Edison could pursue research on a broad front. In fact, by the end of May, he had developed his standard generator design. It would take much longer to develop a commercial lamp.

Just as the dynamo experiments marked a new effort to build up a base of fundamental knowledge, so too did lamp experiments in early 1879 begin to reflect this new spirit of investigation. Instead of continuing to construct numerous prototypes, Edison began observing the behavior of platinum and other metals under the conditions required for incandescence. By studying his filaments under a microscope, he soon discovered that the metal seemed to absorb gases during heating, suggesting that the problem lay less in the composition of the metal than in the environment in which it was heated. The most obvious way to change the environment was to use a vacuum. By improving the existing vacuum-pump technology with the assistance of an experienced German glassblower, Edison was able to better protect his filaments and by the end of the summer he had done away with his complicated electromechanical regulators. The improved vacuum pumps developed by the laboratory staff helped to produce a major breakthrough in the development of a commercial lamp.

Although Edison no longer required a regulator for his platinum filaments and the lamps lasted longer, they were too expensive for commercial use. Not only was platinum a rare and expensive metal, but platinum filaments did not produce the high resistance he needed for his distribution system. With much better vacuum technology capable of preventing the oxidation of carbon filaments, Edison decided to try experimenting with a material that was not only much cheaper and more abundant, but which also produced high-resistance filaments.

The shift to carbon was a product of Edison’s propensity for working on several projects at once. During the spring and summer of 1879, telephone research at times overshadowed the light as Edison sought to improve his instrument for the British market. A crucial element of Edison’s telephone was the carbon button used in his transmitter. These buttons were produced in a little shed at the laboratory complex where day and night kerosene lamps were burned and the resulting carbon, known as lampblack, was collected and formed into buttons. The reason for turning to this familiar material lies in another analogy. Almost from the beginning of the light research, Edison had determined that the most efficient form for his incandescing element would be a thin wire spiral which would allow him to decrease radiating surface so as to reduce the energy lost through radiation of heat rather than light. The spiral form also increased resistance. It was his recognition that the lampblack could be rolled like a wire and then coiled into a spiral like platinum that led Edison to try carbon as a filament material.
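The geometric trade-off behind the spiral form can also be put in rough quantitative terms. Again, the symbols here are ours, for illustration only:

```latex
% For a filament of resistivity \rho, length L, and radius r:
R = \frac{\rho L}{\pi r^2}   % electrical resistance
S = 2 \pi r L                % radiating surface area
% Resistance per unit of radiating surface:
\frac{R}{S} = \frac{\rho}{2 \pi^2 r^3}
% Halving r at fixed length quadruples R while only halving S, so a
% long, thin wire -- coiled into a spiral to stay compact -- maximizes
% resistance while limiting the surface that radiates away heat.
```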

Although Edison’s basic carbon-filament lamp patent, filed on November 4, 1879, still retained the spiral form, the laboratory staff had great difficulty in actually winding a carbon spiral. Instead, Edison turned to another form of carbon “wire”: a thread. During the night of October 21–22, the laboratory staff watched as a cotton-thread filament burned for 14 1/2 hours with a resistance of around 100 ohms. The date of this experiment would later be associated with the invention of the electric light, but at the time Edison treated it not as a finished invention but rather as the beginning of a new experimental path. The commercial lamp would require another year of research.

Nonetheless, by New Year’s Day 1880, Edison was able to demonstrate his system to the public. Over the course of the next year, he and his staff worked feverishly to bring his system to a state of commercial introduction. In the process, he turned the Menlo Park laboratory into an R&D center, with an emphasis on development. By spring, the staff, which had previously consisted of some twelve or fifteen experimenters and machinists, was greatly expanded, at times reaching as many as sixty men. Work on the various components was delegated to new members of the staff, and over the course of the year work progressed on each element of the system, including the generator, meter, underground conductors, safety fuses, lamp fixtures and sockets, and the commercial bamboo-carbon filament. By the time all these ancillary components were developed and manufacturing was underway in the spring of 1881, Edison had spent over $200,000 on research and development. Commercial introduction required several thousand additional dollars of research as well as $500,000 to install the first central station system in downtown New York City, which opened on September 4, 1882, four years after Edison first began his research. Though he claimed merely to “have accomplished all I promised,” Edison had done even more by starting a new industry and reorganizing the process of invention.

Historian Dr. Paul Israel, a former Californian, moved east to New Jersey over 30 years ago to research a book on Thomas Edison and the electric light. Today he is the Director and General Editor of the Thomas A. Edison Papers at Rutgers, The State University of New Jersey.

The Thomas A. Edison Papers Project, a research center at Rutgers School of Arts and Sciences, is one of the most ambitious editing projects ever undertaken by an American university.


Guest Essayist: Paul Israel

In mid-July 1877, while working to develop an improved telephone for the Western Union Telegraph Company, Thomas Edison conceived the idea of recording and reproducing telephone messages. Edison came up with this extraordinary idea because he thought about the telephone as a form of telegraph, even referring to it as a “speaking telegraph.” Thus, on July 18, he tried an experiment with a telephone “diaphragm having an embossing point & held against paraffin paper moving rapidly.” Finding that sound “vibrations are indented nicely” he concluded, “there’s no doubt that I shall be able to store up & reproduce automatically at any future time the human voice perfectly.”

At the time Edison was also working on his repeating telegraph known as the translating embosser. This device recorded an outgoing message as the operator sent it, enabling automatic, rapid retransmission of the same message on other lines. This would be particularly desirable for long press-wire articles that required very skilled operators to transmit and receive. The incoming, high-speed message recorded by the embosser at each receiving station could be transcribed at a slower speed by an operator using a standard sounder. Edison thought his telephone recorder could be used in a similar fashion by allowing the voice message to be “reproduced slow or fast by a copyist & written down.”

Busy with telephone and translating embosser experiments, Edison put this idea aside until August 12, when he drew a device he labeled “Phonograph,” which looked very much like an automatic telegraph recorder he had developed a few years earlier. For many years, researchers were fooled by another drawing bearing the inscription “Kreuzi Make This Edison August 12/77.” In fact, that inscription was added around the 40th anniversary of the invention so that the drawing, which had been published twice in the mid-1890s without it, could represent the drawing from which machinist John Kruesi constructed the first phonograph.

Over the next few months, Edison periodically experimented with “apparatus for recording & reproducing the human voice,” using various methods to record on paper tape. The first design for a cylinder recorder, apparently still using paper to record on, appeared in a notebook entry of September 21. However, it was not until November 5 that he first described the design that John Kruesi would begin making at the end of the month. “I propose having a cylinder 10 threads or embossing grooves to the inch cylinder 1 foot long on this tin foil of proper thickness.” As Edison noted, he had discovered after “various experiments with wax, chalk, etc.” that “tin foil over a groove is the easiest of all= this cylinder will indent about 200 spoken words & reproduce them from same cylinder.” On November 10, he drew a rough sketch of this new tinfoil cylinder design. This drawing looks very similar to the more careful sketch he later inscribed “Kreuzi Make This Edison August 12/77.”  It also resembles the large drawing Edison made on November 29, which may have been used by Kruesi while he was making the first phonograph during the first six days of December.

These drawings of the tinfoil cylinder phonograph looked very much like those for the cylinder version of Edison’s translating embosser while a disc design was based on another version of his translating embosser. The disc translating embosser can be found today at the reconstructed Menlo Park Laboratory at the Henry Ford Museum in Dearborn, Michigan.  This device also became part of the creation myth for the phonograph when it appeared in the 1940 Spencer Tracy movie Edison the Man. In the movie an assistant accidentally starts the embosser with a recording on it, resulting in a high-pitched sound that leads Edison to the idea of recording sound.

Although Edison did not have a working phonograph until December, he had drafted his first press release to announce the new invention on September 7. Writing in the third person he claimed that

“Mr. Edison the Electrician has not only succeeded in producing a perfectly articulating telephone.…far superior and much more ingenious than the telephone off Bell…but has gone into a new and entirely unexplored field of acoustics which is nothing less than an attempt to record automatically the speech of a very rapid speaker upon paper; from which he reproduces the same Speech immediately or year’s afterwards or preserving the characteristics of the speakers voice so that persons familiar with it would at once recognize it.”

This text and its drawings of a paper-tape phonograph would become the basis for a letter to the editor by Edison’s associate Edward Johnson that appeared in the November 17 issue of Scientific American. Not surprisingly, when this was republished in the newspapers it was met with skepticism.

On December 7, the day after Kruesi finished making the first tinfoil cylinder phonograph, Edison took the machine to Scientific American’s offices in New York City, accompanied by Johnson and laboratory assistant Charles Batchelor. He amazed the staff when he placed the little machine on the editor’s desk and turned the handle to reproduce a recording he had already made. As described in an article in the December 22 issue, “the machine inquired as to our health, asked how we liked the phonograph, informed us that it was very well, and bid us a cordial good night.”

By the New Year, Edison had an improved phonograph that he exhibited at Western Union headquarters, where it attracted the attention of the New York newspapers. These first public demonstrations produced a trickle of articles that soon turned into a steady stream and by the end of March had become a veritable flood. Edison soon became as famous as his astounding invention. Reports soon began calling Edison “Inventor of the Age,” the “Napoleon of Invention,” and most famously “The Wizard of Menlo Park.”

Edison had grand expectations for his invention as did the investors in the newly formed Edison Speaking Phonograph Company. However, Edison and his associates were unable to turn the tinfoil phonograph from a curiosity suitable for exhibitions and lectures into a consumer product. The phonograph’s real drawback was not the mechanical design on which they focused their efforts but the tinfoil recording surface.  Compared to later wax recording surfaces developed in the 1880s, tinfoil recordings had very poor fidelity and also deteriorated rapidly after a single playback. As a result, for the next decade the phonograph remained little more than a scientific curiosity.



Guest Essayist: Dan Morenoff

Usually, breaking down history into chapters requires imposing arbitrary separations. Every once in a while, though, the divisions are clear and real, providing a hard stop in the action that makes sense only against the backdrop of what it concludes, even as it explains what follows.

For reasons having next-to-nothing to do with the actual candidates,[1] the Presidential election of 1876 provided that kind of page-break in American history. It came on the heels of the Grant Presidency, during which the victor of Vicksburg and Appomattox sought to fulfill the Union’s commitments from the war (including those embodied in the post-war Constitutional Amendments) and encountered unprecedented resistance. It saw that resistance taken to a whole new level, which threw the election results into chaos and created a Constitutional crisis. And by the time Congress had extricated itself from that, it had fixed the immediate mess only by creating a much larger, much more costly, much longer-lasting one.

Promises Made

To understand the transition, we need to start with the backdrop.

Jump back to April 1865. General Ulysses S. Grant takes Richmond, the Confederate capital. The Confederate government collapses in retreat, with its Cabinet members going their separate ways.[2] Before it does, Confederate President Jefferson Davis issues his final order to General Robert E. Lee and the Army of Northern Virginia: keep fighting! He tells Lee to take his troops into the countryside, fade into a guerrilla force, and fight on, making governance impossible. Lee, of course, refuses and surrenders at Appomattox Courthouse. A celebrating Abraham Lincoln takes a night off for a play, where a Southern sympathizer from Maryland murders him.[3] Before his passing, Abraham Lincoln had freed the slaves and won the war (in part, thanks to the help of the freedmen who had joined the North’s army), thereby saving the Union. His assassination signified a major theme of the next decade: the refusal of some to accept the war’s results left them willing to cast aside the rule of law and employ political violence to resist the establishment of new norms.

Leaving our flashback: Andrew Johnson succeeded Lincoln in office, but in nothing else. Super-majorities in both the House and Senate hated him and his policies and established a series of precedents enhancing Congressional power, even while failing to establish the one they wanted most.[4] Before his exit from the White House, despite Johnson’s opposition: (a) the States had ratified the Thirteenth Amendment (banning slavery); (b) Congress had passed the first Civil Rights Act (in 1866, over his veto); (c) Congress had proposed and the States had ratified the Fourteenth Amendment (“constitutionalizing” the Civil Rights Act of 1866 by: (i) creating federal citizenship for all born in our territory; (ii) barring states from “abridg[ing] the privileges or immunities of citizens of the United States[;]” (iii) altering the representation formula for states in Congress and the electoral college, and (iv) guaranteeing the equal protection of the laws); and (d) in the final days of his term, Congress formally proposed the Fifteenth Amendment (barring states from denying or abridging the right to vote of citizens of the United States “on account of race, color, or previous condition of servitude.”).

Notice how that progression, at each stage, was made necessary by the resistance of some Southerners to what preceded it. Ratification of the Thirteenth Amendment abolishing slavery? In December 1865, former Confederates founded the Ku Klux Klan (effectively, Confederate forces reborn), with General Nathan Bedford Forrest soon serving as its first Grand Wizard; reversing Lee’s April decision, they waged against the U.S. government and the former slaves it had freed the clandestine war that Lee had refused to fight. Their efforts (and, after Johnson recognized them as governments, the efforts of the Southern states to recreate slavery under another name) triggered passage of the Civil Rights Act of 1866 and the Fourteenth Amendment. Southern states nonetheless continued to disenfranchise black Americans. So, Congress passed the Fifteenth Amendment to stop them. Each step required the next.

And the next step saw America, at its first chance, turn to its greatest hero, Ulysses S. Grant, to replace Johnson with someone who would put the White House on the side of fulfilling Lincoln’s promises. Grant tried to do so. He (convinced Congress to authorize and then) created the Department of Justice; he backed, signed into law, and had DOJ vigorously prosecute violations of the Enforcement Act of 1870 (banning the Klan and, more generally, the domestic terrorism it pioneered using to prevent black people from voting), the Enforcement Act of 1871 (allowing federal oversight of elections, where requested), and the Ku Klux Klan Act (criminalizing the Klan’s favorite tactics and making state officials who denied Americans either their civil rights or the equal protection of law personally liable for damages). He readmitted to the Union the last states of the old Confederacy still under military government, while conditioning readmission on their recognition of the equality before the law of all U.S. citizens. Eventually, he signed into law the Civil Rights Act of 1875, guaranteeing all Americans access to all public accommodations.

And over the course of Grant’s Presidency, these policies bore fruit. Historically black colleges and universities sprang up. America’s newly enfranchised freedmen and their white coalition partners elected governments in ten states of the former Confederacy. These governments ratified new state constitutions and created their states’ first public schools. They saw black Americans serve in office in significant numbers for the first time (including America’s first black Congressmen and Senators and, in P.B.S. Pinchback, its first black Governor).

Gathering Clouds

But that wasn’t the whole story.

While the Grant Administration succeeded in breaking the back of the Klan, the grind of entering a second decade of military tours in the South shifted enough political power in the North to slowly sap support for continued, vigorous, federal action defending the rights of black Southerners. And less centralized terrorist forces functioned with increasing effectiveness. In 1873, in the aftermath of a state election marred by thuggery and fraud, one such “militia” massacred an untold number of victims in Colfax, Louisiana. Federal prosecution of the perpetrators foundered when the Supreme Court gutted the Enforcement Acts as beyond Congress’s power to enact.

That led to more such “militias,” often openly referring to themselves as “the military arm of the Democratic Party,” flowering across the country. And their increasingly brazen attacks on black voters and their white allies allowed those styling themselves “Redeemers” of the region to replace, one by one, the freely elected governments of Reconstruction (first in Louisiana, then in Mississippi, then in South Carolina…) with governments expressly dedicated to restoring the racial caste system. “Pitchfork” Ben Tillman, a leader of a parallel massacre of black Union army veterans living in Hamburg, South Carolina, used his resulting notoriety to launch a political career spanning decades. Eventually, he reached both the governor’s mansion and the U.S. Senate, along the way becoming the father of America’s gun-control laws, because it was easier to terrorize and disenfranchise the disarmed.

By 1876, with such “militias” enjoying a clear playbook and, in places, support from their state governments, the stage was set for massive fraud and duress aimed at swinging a presidential election. Attacks on black voters, and their allies, intended to prevent a substantial percentage of the electorate from voting, unfolded on a regional scale. South Carolina, while pursuing such illegal terror, simultaneously claimed to have counted more ballots than it had registered voters. The electoral vote count it eventually sent to the Senate was certified by no one; the count for Louisiana was certified by a gubernatorial candidate holding no office. Meanwhile, Oregon sent two different sets of electoral votes, one certified by the Secretary of State, the other certified by the Governor, cast for two different Presidential candidates.

The Mess

The Twelfth Amendment requires states’ electors to: (a) meet; (b) cast their votes for the President and Vice President; (c) compile a list of vote-recipients for each (to be signed by the electors and certified); and (d) send the sealed list to the U.S. Senate (to the attention of the President of the Senate). It then requires the President of the Senate to open the sealed lists in the presence of the House and Senate to count the votes.

Normally, the President of the Senate is the Vice President. But Grant’s Vice President, Henry Wilson, had died in 1875, and the Twenty-Fifth Amendment’s mechanism to fill a Vice Presidential vacancy was still almost a century away. That left, in 1876, the Senate’s President Pro Tempore, Thomas W. Ferry (R-MI), to serve as the acting President of the Senate. But given the muddled state of the records sent to the Senate, Senate Democrats did not trust Ferry to play this role. Since the filibuster was well established by the 1870s, the Senate could do nothing without their acquiescence. Moreover, they could point to Johnson-Administration precedents enhancing Congressional authority to demand that resolution of disputed electoral votes be reached jointly by both chambers of Congress, which they preferred, because Democrats had taken a majority of the lower House in 1874.

No one agreed which votes to count. No one agreed who could count them. And the difference between sets was enough to deliver the majority of the electoral college to either major party’s nominees for the Presidency and Vice Presidency. And all of this came at the conclusion of an election already marred by large-scale, partisan violence.

Swapping Messes

It took Congress months to find its way out of this morass. Eventually, it did so through an unwritten deal. On March 2, 1877, Congress declared Ohio Republican Rutherford B. Hayes President of the United States over Democratic candidate Samuel J. Tilden of New York. Hayes, in turn, embraced so-called “Home Rule,” removing all troops from the old Confederacy and halting the federal government’s efforts either to enforce the Civil Rights Acts or to make real the promises of the post-war Constitutional Amendments.

With the commitments of Reconstruction abandoned, the “Redeemers” promptly completed their “Redemption” of the South from freely, lawfully elected governments. They rewrote state constitutions, broadly disenfranchised those promised the vote by the Fifteenth Amendment, and established the whole Jim Crow structure that ignored (really, made a mockery of) the Fourteenth Amendment’s guarantees.

Congress solved the short-term problem by creating a larger, structural one that would linger for a century.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Rutherford B. Hayes, the Republican nominee, was then the Governor of Ohio and had served in the Union army as a Brigadier General; Samuel J. Tilden, the Democratic nominee, was then the Governor of New York and had earlier been among the most prominent anti-slavery, pro-Union Democrats to remain in the party in 1860.  Indeed, in 1848, Tilden was a founder of Martin Van Buren’s Free Soil Party, which attacked the Whigs (whom Hayes then supported) as too supportive of slave-power.

[2] CSA President Jefferson Davis broke West, with the intention of reaching Texas and Arkansas (both unoccupied by the Union and continuing to claim authority from that rump-Confederacy).  His French-speaking Louisianan Secretary of State Judah P. Benjamin broke South, pretending to be a lost immigrant peddler as he wound his way to Florida, then took a raft to Cuba as a refugee. He won British protection there and eventually made his way to London under the protection of the British Navy; alone among leading Confederates, Benjamin had a successful second act, in which he became a leading British lawyer and author of the world’s leading treatise on the sale of goods.

[3] To this day, it is unclear to what degree John Wilkes Booth was a Confederate operative.  He certainly spied for the CSA.  No correspondence survives to answer whether his assassination of Lincoln was a pre-planned CSA operation or freelancing after Richmond’s fall.

[4] He survived impeachment by a single vote.

Guest Essayist: Tony Williams

In the mid-nineteenth century, the providential idea of Manifest Destiny drove Americans to move west. They traveled along various overland trails and railroads to Oregon, California, Colorado, and the Dakota Territory in search of land and gold. Native Americans who lived and hunted in the West were alarmed at white encroachment on their lands, which were usually protected by treaties. The conflict led to several violent clashes throughout the West.

Tensions with Native Americans simmered in the early 1870s. The Transcontinental Railroad contributed to western development and the integration of national markets, but it also intruded on Native American lands. Two military expeditions were dispatched to Montana to protect the railroad and its workers in 1872. In 1874, gold was reportedly discovered in the Black Hills in modern-day South Dakota within the Great Sioux Reservation. By the end of 1875, 15,000 prospectors and miners were in the Black Hills searching for gold.

Some Indians resisted. Sitting Bull was a revered Lakota Sioux war leader known for his visions and dreams, and the Oglala Lakota war chief Crazy Horse had a reputation as a fierce warrior. Both resisted the reservations and American encroachment on their lands and were willing to unite and fight against it.

In early November 1875, President Ulysses S. Grant met with General Philip Sheridan and others on Indian policy at the White House. They issued an ultimatum for all Sioux outside the reservation to go there by January 31, 1876 or be considered hostile. The Sioux ignored it. Sitting Bull said, “I will not go to the reservation. I have no land to sell. There is plenty of game for us. We have enough ammunition. We don’t want any white men here.”

That spring, the Cheyenne and Sioux bands in the area decided to unite for the summer and fight the Americans. Sitting Bull called for warriors to assemble at his village for war, and nearly two thousand gathered, many of them armed with the latest repeating rifles.

That spring, Sitting Bull had visions of victory over the white man. In mid-May, he fasted and purified himself in a ritual called the Sun Dance. After 50 small strips of flesh had been cut from each arm, he had a vision of whites coming into their camp and suffering a great defeat.

After discovering the approximate location of Sitting Bull’s village, General Alfred Terry met with Lieutenant Colonel George Custer and Colonel John Gibbon on the Yellowstone River to formulate a plan. They agreed upon a classic hammer-and-anvil attack in which Custer would proceed up the Rosebud and attack the village, while Terry and Gibbon moved along the Yellowstone and Bighorn Rivers to block any escape. Custer had 40 Arikara scouts with him to find the enemy.

On June 23 and 24, the Arikara scouts found evidence that Sitting Bull’s village had recently occupied the area. The exhausted Seventh Cavalry stopped for the night at 2 a.m. on June 25. The scouts meanwhile sighted a massive herd of ponies and sent a message to wake Custer. When a frightened scout, Bloody Knife, warned they would “find enough Sioux to keep us fighting two or three days,” Custer arrogantly replied, “I guess we’ll get through them in one day.” His greater fear was that the village would escape his clutches. He ordered his men to form up for battle.

Around noon, Custer led the Seventh into the valley and divided his men as he had at the Battle of the Washita. He sent Captain Frederick Benteen to the left with 120 men to block any escape, while Custer and Major Marcus Reno advanced on the right along Sun Dance Creek.

Custer and Reno spotted 40 to 50 warriors fleeing toward the main village. Custer further divided his command, sending Reno in pursuit while he himself continued along the right flank. Prodding his men with some bluster, Custer told them, “Boys, hold your horses. There are plenty of them down there for all of us.”

Reno’s men crossed the Little Bighorn and fired at noncombatants. Hundreds of Indian warriors soon arrived to face Reno, who downed a great deal of whiskey and ordered his soldiers to dismount and form a skirmish line. Outnumbered and running low on ammunition, they were quickly overwhelmed by the warriors’ onslaught.

Reno’s men retreated to some woods along the bank of the river to find cover but were soon flushed out, though fifteen men remained there, hidden and frightened. The warriors routed Reno’s troops and killed several during their retreat back across the river. Reno finally organized eighty men on a hill and fought off several charges.

Benteen soon reinforced Reno as did the fifteen men from the thicket who also made it to what is now called Reno Hill, and the pack train with ammunition and supplies arrived as well. No one knew where Custer was. The men built entrenchments made of ammunition and hardtack boxes, saddles, and even dead horses. For more than three hours in the 100-degree heat, they fought off a continuous stream of attacking warriors by the hundreds and were saved only by the arrival of darkness. Reno’s exhausted and thirsty men continued to dig in and fortify their barricades.

The attacks resumed that night and lasted through the morning. Benteen and Reno organized charges that momentarily pushed back the Sioux and Cheyenne, and a few men sneaked down to the Little Bighorn for water. The fighting lasted until mid-afternoon, when the warriors broke off to follow the large dust cloud of the departing village. The soldiers on the hill feared a trick and kept watch all night for the enemy’s return.

General Terry’s army was camped to the north when his Crow scouts reported to him at sunrise on June 26 that they had found the battlefield where two hundred men of the Seventh Cavalry had been overwhelmed and killed making a last stand on a hill. The next day, Terry arrived at Last Stand Hill and morosely confirmed that Custer and his men were dead. When it reached the East, the bad news sobered the celebration of the United States’ centennial.

Despite the destruction of Custer and his men at Little Bighorn, the Indian Wars of the late nineteenth century were devastating for Native American tribes and their cultures. Their populations suffered heavy losses, and they lost their tribal grounds for hunting and agriculture. In the early twentieth century, the U.S. government restricted most Indians to reservations as Americans settled the West. Many Americans saw the reservation system as a more humane alternative to war, but it wrought continued damage to Native American cultures.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: James C. Clinger

An attorney representing Alexander Graham Bell and his business partner, Gardiner Hubbard, filed a patent application for an invention entitled an “Improvement in Telegraphy” on February 14, 1876. That same day, Elisha Gray, a prominent inventor from Highland Park, Illinois, had applied for a patent caveat for a similar invention from the same office. On March 7, Bell’s patent was approved by the patent office and the battle over the rights to the invention that we now know as the telephone began. The eventual outcome would shape the development of a major industry and the opportunities for communication and social interaction for the entire country.

The invention came from an unlikely source. Alexander Graham Bell was a Scottish-born teacher of elocution and tutor to the deaf whose family had migrated to Ontario, Canada, after the death of two of Bell’s siblings. Alexander Melville Bell, Alexander Graham Bell’s father, believed that their new home in Ontario offered them a better, more healthful climate. The elder Bell was a student of phonetics who had developed a system of “Visible Speech” to allow deaf people the chance to speak intelligibly. Melville Bell lectured periodically at the Lowell Institute in Boston, Massachusetts, and his son Alexander moved to Boston permanently to assume a teaching position at Boston University.[1]

Though trained in acoustics and the science behind the sounds of the human voice, Bell did not have a strong understanding of electrical currents or electromagnetism. But early on he realized that the magnetic field of an electrical current was capable of vibrating objects, such as a tuning fork, which could create audible sounds. As he was learning how electrical currents could be used for sound production, other researchers such as Joseph Stearns and Thomas Edison were developing a system of telegraph transmission in which multiple signals could be sent over the same wire at the same time. These systems relied upon sending series of dots and dashes at different frequencies. Bell joined that line of research, seeking a better “multiplex” telegraph. In order to develop what he called a “harmonic telegraph,” Bell needed more funds for his lab. Much of his funding came from a notable Boston attorney, Gardiner Hubbard, who hired Bell as a teacher for his daughter, Mabel, who had become deaf after a bout with scarlet fever. Hubbard, who disliked Western Union’s dominance in long-distance telegraph service, encouraged and subsidized Bell’s research on telegraphy. Hubbard and another financial backer, Thomas Sanders, formed a partnership with Bell, with an agreement that all three would hold joint ownership of the patent rights for Bell’s inventions. Bell made significant progress in his research, and had success in his private life as well: Mabel Hubbard, his student, became his betrothed. Despite the initial objections of her father, Mabel married Bell shortly after her eighteenth birthday.[2]

Other inventors were hard at work on similar lines of research. Daniel Drawbaugh, Antonio Meucci, Johann Philipp Reis, and especially Elisha Gray were all developing alternative versions of what would soon be known as the telephone while Bell worked on his project. Most of these models involved a variable resistance method of modifying the electrical current by dipping wires into a container of liquid, often mercury or sulfuric acid, to alter the current flowing to a set of reeds or a diaphragm that would emit various sounds. Most of these researchers knew more about electrical currents and devices than Bell did. But Bell had a solid understanding of the human voice. Even though his research began as an effort to improve telegraphy, Bell realized that the devices he created could be designed to replicate speech. His patent application in February 1876 was for a telephone transmitter that employed a magnetized reed attached to a membrane diaphragm, activated by an undulating current. The device described in the patent application could transmit sounds but not actual speech. Months later, however, Bell’s instrument was improved sufficiently to allow him to convey a brief, audible message to his assistant, Thomas A. Watson, who was in another part of his laboratory. In the summer of 1876, Bell demonstrated the transmission of audible speech to an amazed crowd at the Centennial Exhibition in Philadelphia. Elisha Gray attempted to demonstrate his version of the telephone at the same exhibition, but was unable to convey the sound of human voices. The following year, Bell filed and received a patent for his telephone receiver, securing his claim to devices that would both transmit and receive voice communications.[3]

In 1877, Bell and his partners formed the American Bell Telephone Company, a corporation that would later be known as American Telephone and Telegraph (AT&T). The corporation, and Bell personally, were soon involved in a number of lawsuits alleging patent infringement and, in one case, patent cancellation. There were many litigants over the years, but the primary early adversary was Western Union, which had purchased the rights to Elisha Gray’s telephone patent. The United States federal government was also involved in a suit for patent cancellation, alleging that Bell had gained his patents fraudulently by stealing the inventions of others. The lawsuit with Western Union was settled in 1879, when the corporation forfeited its claims on the invention of the telephone in return for twenty percent of Bell’s company’s earnings for the duration of the patent.[4] The other lawsuits meandered through multiple courts over several years until several were consolidated before the United States Supreme Court. Ultimately, a divided Court ruled in favor of Bell’s position in each case. The various opinions and appendices were so voluminous that, when compiled, they filled an entire volume of United States Reports, the official publication of Supreme Court opinions.[5]

The court decisions ultimately granted vast scope to the Bell patent and assigned an enormously profitable asset to Bell’s corporation. The firm that became AT&T grew into one of the largest corporations in the world.[6] Years earlier, the telegraph had transformed communication, with huge impacts on the operation of industry and government. But although the telegraph had an enormous impact upon the lives of ordinary Americans, it was not widely used by private individuals for their personal communications. Almost all messages were sent by businesses and government agencies. Initially, this was the common practice for telephone usage as well. But with the dawn of the twentieth century, telephones became widely used by private individuals. More phones were available in homes rather than just in offices. Unlike telegrams, which were charged by the word, local telephone service was priced at a flat monthly rate. As a result, telephone service was enjoyed as a means of communication for social purposes, not just commercial activities. Within a hundred years of Bell’s initial patent, telephones could be found in almost every American home.[7]

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] Billington, David P. “Bell and the Telephone.” In Power, Speed, and Form: Engineers and the Making of the Twentieth Century, 35-56. Princeton; Oxford: Princeton University Press, 2006.

[2] Billington, op. cit.

[3] Stone, Alan. “Protection of the Newborn.” In Public Service Liberalism: Telecommunications and Transitions in Public Policy, 51-83. Princeton, NJ: Princeton University Press, 1991.

[4] MacDougall, Robert. “Unnatural Monopoly.” In The People’s Network: The Political Economy of the Telephone in the Gilded Age, 92-131. University of Pennsylvania Press, 2014.

[5] The Telephone Cases, 126 U.S. 1 (1888).

[6] Beauchamp, Christopher. “Who Invented the Telephone? Lawyers, Patents, and the Judgments of History.” Technology and Culture 51, no. 4 (2010): 854-78.

[7] MacDougall, Robert. “Visions of Telephony.” In The People’s Network: The Political Economy of the Telephone in the Gilded Age, 61-91. University of Pennsylvania Press, 2014.


Guest Essayist: Scot Faulkner

Our National Parks are the most visible manifestation of why America is exceptional.

America’s Parks are the physical touchstones that affirm our national identity. These historical Parks preserve our collective memory of events that shaped our nation and the natural Parks preserve the environment that shaped us.

National Parks are open for all to enjoy, learn, and contemplate. This concept of preserving a physical space for the sole purpose of public access is a uniquely American invention. It further affirms why America remains an inspiration to the world.

On March 1, 1872, President Ulysses S. Grant signed the law creating Yellowstone as the world’s first National Park.

AN ACT to set apart a certain tract of land lying near the headwaters of the Yellowstone River as a public park. Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, That the tract of land in the Territories of Montana and Wyoming … is hereby reserved and withdrawn from settlement, occupancy, or sale under the laws of the United States, and dedicated and set apart as a public park or pleasuring ground for the benefit and enjoyment of the people; and all persons who shall locate, or settle upon, or occupy the same or any part thereof, except as hereinafter provided, shall be considered trespassers and removed therefrom …

The Yellowstone legislation launched a system that now encompasses 419 National Parks with over 84 million acres. Inspired by Grant’s act, Australia, Canada, and New Zealand established their own National Parks during the following years.

Yellowstone was not predestined to be the first National Park.

In 1806, John Colter, a member of Lewis and Clark’s Corps of Discovery, joined fur trappers to explore several Missouri River tributaries. Colter entered the Yellowstone area in 1807 and later reported on a dramatic landscape of “fire and brimstone.”  His description was rejected as too fanciful and labeled “Colter’s Hell.”

Over the years, other trappers and “mountain men” shared stories of fantastic landscapes of water gushing out of the ground and rainbow-colored hot springs. They were all dismissed as fantasy.

After America’s Civil War, formal expeditions were launched to explore the upper Yellowstone River system. Settlers and miners were interested in the economic potential of the region.

In 1869, Charles Cook, David Folsom, and William Peterson led a privately financed survey of the region. Their journals and personal accounts provided the first believable descriptions of Yellowstone’s natural wonders.

Reports from the Cook-Folsom Expedition encouraged the first official government survey in 1870. Henry Washburn, the Surveyor General of the Montana Territory, led a large team known as the Washburn-Langford-Doane Expedition to the Yellowstone area. Nathaniel P. Langford, who co-led the team, was a friend of Jay Cooke, a major investor in the Northern Pacific Railway. Washburn was escorted by a U.S. Cavalry unit commanded by Lt. Gustavus Doane. Their team, including Folsom, followed a course similar to that of the Cook-Folsom 1869 excursion, extensively documenting their observations of the Yellowstone area. They explored numerous lakes and mountains and observed wildlife. The Expedition chronicled the Upper and Lower Geyser Basins. They named one geyser Old Faithful, as it erupted once every 74 minutes.

Upon their return, Cook combined Washburn’s and Folsom’s journals into a single version. He submitted it to the New York Tribune and Scribner’s for publication. Both rejected the manuscript as “unreliable and improbable” even with the military’s corroboration. Fortunately, another member of Washburn’s Expedition, Cornelius Hedges, submitted several articles about Yellowstone to the Helena Herald newspaper from 1870 to 1871. Hedges would become one of the original advocates for setting aside the Yellowstone area as a National Park.

Langford, who would become Yellowstone’s first park superintendent, reported to Cooke about his observations. While Cooke was primarily interested in how Yellowstone’s wonders and resources could attract railroad business, he supported Langford’s vision of establishing a National Park. Cooke financed Langford’s Yellowstone lectures in Virginia City, Helena, New York, Philadelphia, and Washington, D.C.

On January 19, 1871, geologist Ferdinand Vandeveer Hayden attended Langford’s speech in Washington, D.C. He was motivated to conduct his next geological survey in the Yellowstone region.

In 1871, Hayden organized the first federally funded survey of the Yellowstone region. His team included photographer William Henry Jackson, and landscape artist Thomas Moran. Hayden’s reports on the geysers, sulfur springs, waterfalls, canyons, lakes and streams of Yellowstone verified earlier reports. Jackson’s and Moran’s images provided the first visual proof of Yellowstone’s unique natural features.

The various expeditions and reports built the case for preservation instead of exploitation.

In October 1865, acting Montana Territorial Governor Thomas Francis Meagher became the first public official to recommend that the Yellowstone region be protected. In an 1871 letter from Jay Cooke to Hayden, Cooke wrote that his friend, Congressman William D. Kelley, was suggesting that “Congress pass a bill reserving the Great Geyser Basin as a public park forever.”

Hayden became another leader in the effort to establish Yellowstone as a National Park. He was concerned the area could face the same fate as the overly developed and commercialized Niagara Falls. Yellowstone, he insisted, should “be as free as the air or water.” In his report to the Committee on Public Lands, Hayden declared that if Yellowstone were not preserved, “the vandals who are now waiting to enter into this wonder-land, will in a single season despoil, beyond recovery, these remarkable curiosities, which have required all the cunning skill of nature thousands of years to prepare.”

Langford and a growing number of park advocates promoted the Yellowstone bill in late 1871 and early 1872. They raised the alarm that “there were those who would come and make merchandise of these beautiful specimen.”

Their proposed legislation drew upon the precedent of the Yosemite Act of 1864, which barred settlement and entrusted preservation of the Yosemite Valley to the state of California.

Park advocates faced spirited opposition from mining and development interests who asserted that permanently banning settlement of a public domain the size of Yellowstone would depart from the established policy of transferring public lands to private ownership (in the 1980s, $1 billion of exploitable deposits of gold and silver were discovered within miles of the Park).  Developers feared that the regional economy would be unable to thrive if there remained strict federal prohibitions against resource development or settlement within park boundaries. Some tried to reduce the proposed size of the park so that mining, hunting, and logging activities could be developed.

Fortunately, Jackson’s photographs and Moran’s paintings captured the imagination of Congress. These compelling images, and the credibility of the Hayden report, persuaded the United States Congress to withdraw the Yellowstone region from public auction. The Establishment legislation quickly passed both chambers and was sent to President Grant for his signature.

Grant, an early advocate of preserving America’s unique natural features, enthusiastically signed the bill into law.

On September 8, 1978, Yellowstone and Mesa Verde were the first U.S. National Parks designated as UNESCO World Heritage Sites. Yellowstone was deemed a “resource of universal value to the world community.”

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Brian Pawlowski

The stories of our history connect generations across time in remarkable ways. The same giddy fascination Presidents Abraham Lincoln and Ulysses S. Grant held for the potential of the railroad in the nineteenth century is present in countless children today, who wear out books like Brian Floca’s Locomotive with constant re-reading. It is a wonderful book that conveys both the magnitude and the majesty of the transcontinental railroad in an accessible way. In a more thorough treatment, Nothing in the World Like It: The Men Who Built the Transcontinental Railroad 1863-1869, historian Stephen Ambrose perhaps summarized it best: “Next to winning the Civil War and abolishing slavery, building the first transcontinental railroad, from Omaha, Nebraska, to Sacramento, California, was the greatest achievement of the American people in the 19th century.”[1] Making this achievement all the more remarkable is the fact that it was hatched as the Civil War was raging: a project to connect a continent that was at war with itself.

In 1862, a few months after the Union victory at Shiloh and a few months before the battle of Antietam, Abraham Lincoln signed the Pacific Railway Act into law. It called for the construction of a railway from Omaha, Nebraska, to Sacramento, California. It appropriated government lands and bonds to the corporations that would do the work, the first time government dollars were granted to any entity other than states. The companies, the Union Pacific starting in Omaha and the Central Pacific starting in Sacramento, were in direct competition to lay as much track as possible and complete the nearly 2,000 miles the railroad would require.

Construction technically began in 1863, but the war demanded men and materiel in such large proportions that no real progress was made until 1865. After the war, the railroads became engines of economic development that attracted Union veterans and Irish immigrants in droves to the Union Pacific’s efforts. The Central Pacific sought a similar workforce, but the population of Irish immigrants in California at the time was not a sustainable source of labor. Instead, thousands of Chinese immigrants sought employment with the railroad. Initially there was resistance to Chinese workers: racial prejudice pervaded much of California at the time, and many felt the Chinese were listless and lazy. These prejudices dissipated quickly, however, as the Chinese worked diligently, with a skill and ingenuity that allowed them to push through the Sierra Nevada mountains. Before it was done, nearly 20,000 Chinese laborers took part in building the railroad, employing new techniques and utilizing new materials like nitroglycerin to carve a path for the tracks in areas where no one thought it could be done.

In the summer of 1867, the Central Pacific finally made it through the mountains. While the entire effort represented a new level of engineering brilliance and innovation for its time, the Central Pacific’s thrust through the mountains surpassed expectations. Charting a course for rail through granite, a barrier no one to that time had crossed on anything other than horseback or foot, ushered in a new era of rapid continental movement. Before the railroad era, the trip from the east coast to the west took four or five months. Upon completion, however, the trip could take as little as three and a half days.[2] Absent the ability to go through the mountains, this would not have been possible.

Throughout 1867 and 1868, both rail companies worked feverishly to lay more track than their counterpart. Government subsidies for the work increased, and more track laid meant more money earned. The subsidy amounts differed and were measured by the mile, reflecting the difficulty the Central Pacific faced in conquering the mountains. With no mountainous terrain to contend with, the Union Pacific made incredible progress and reached Wyoming by 1867. But the Union Pacific had challenges of a different sort. Rather than conquering nature, it had to conquer humans.

The Native American plains tribes, the Sioux, Cheyenne, and Arapaho, knew the railroad would be a permanent feature on land that was prime hunting ground for the buffalo. They saw the construction as an existential threat. As the railroad continued into the plains, new settlements sprang up in its shadow, on territory the tribes claimed as their own.[3] There was bound to be a fight. The railway companies called on the government to send the army to pacify the territory and warned that construction could not continue without this aid. The government complied, and as work resumed, soldiers protected the crews along the construction route.

As the summer of 1869 approached, a standoff occurred between the companies over the location where they would join the railroad together. Ulysses S. Grant, by then the President, threatened to cut off federal funding until a meeting place was agreed to, and ultimately, with the help of a congressional committee and the cold, hard reality of needing cash, they agreed on Promontory Summit, Utah. On May 10, 1869, a 17.6-karat golden spike was hammered home, finishing the railway and connecting the coasts.

The completion of the transcontinental railway brought about an era of unprecedented western expansion, economic development, and population migration. At the same time, it caused more intense conflict between those moving into and developing the west and the Native American tribes. Years of conflict would follow, but the settlement of the west continued. And with the new railroad in place, it continued at a rapid pace as more and more people boarded mighty locomotives to head west toward new lands and new lives. As Daniel Webster, a titan of the era, remarked nearly twenty years earlier, the railway “towers above all other inventions of this or the preceding age,” and it now had continental reach and power.[4] America endured the scourge of Civil War and achieved the most magnificent engineering effort of the era only four years after the guns fell silent at Appomattox.

Brian Pawlowski holds an MA in American History, is a member of the American Enterprise Institute’s state leadership network, and served as an intelligence officer in the United States Marine Corps. 

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] Stephen E. Ambrose, Nothing in the World Like It: The Men Who Built the Transcontinental Railroad 1863-1869 (New York: Simon and Schuster Paperbacks, 2000), 17.

[2] Transcontinental Railroad, September 11, 2019.

[3] H.W. Brands, Dreams of El Dorado: A History of the American West (New York: Basic Books, 2019), 295.

[4] Ambrose, 357.

Guest Essayist: Kyle A. Scott

The Seventeenth Amendment was passed by Congress on May 13, 1912, and ratified on April 8, 1913. Secretary of State William Jennings Bryan certified the ratification on May 31, 1913. Once the Amendment was added to the U.S. Constitution, citizens had the right to cast ballots directly for their state’s two senators. The Amendment changed Article I, Section 3, Clauses 1 and 3 of the Constitution, which had previously stipulated that senators were to be elected by state legislatures. By allowing for the direct election of senators, a barrier was removed between the people and the government, moving the U.S. closer to democracy and away from a republican form of government.

At the time the U.S. Constitution was being drafted, there was a clear apprehension toward monarchy but also an aversion toward democracy. The founders were suspicious of the capricious tendencies of the majority and considered democracy to be mob rule. With direct elections of members of the House of Representatives every two years, representatives could be swept into and out of office with great efficiency and would therefore bow to the will of the majority. If some interest ran counter to the common good but nonetheless gained the favor of the majority, the House would have little incentive to look after the common good. The Senate, as it was not directly elected by the people, could be a check on the passions of the majority. In Federalist Paper #63, Publius wrote that “an institution may be sometimes necessary as a defense to the people against their own temporary errors and delusions.”

Prior to the formation of the United States, it was assumed that republics could only be small in scale. James Madison refuted such luminaries as Baron de Montesquieu by offering a solution to the problem of scale: multiple layers of checks and balances in a federal regime, including a check on democratic rule at the national level. With the Senate serving as a bulwark against the threat of tyranny by the majority, every viable interest could be given representation in the national debate. Now that this bulwark has been replaced with democratic elections, the president is the only elected official at the national level shielded, at least somewhat, from public opinion, as the president is elected not by the people but through the Electoral College.

At the time the Constitution was being drafted, some in Philadelphia believed there was a strong push for states’ rights. The Articles of Confederation provided a weak central government, and the new states were reluctant to give up their power to a central body, as they had just thrown off the yoke of tyranny hoisted upon them by a centralized governing body. The election of senators by state legislatures was one way to assuage those concerns. In Federalist Paper #62, Publius writes, “Among the various modes which might have been devised for constituting this branch of the government, that which has been proposed by the convention is probably the most congenial to public opinion. It is recommended by the double advantage of favoring a select appointment and of giving to the State governments such an agency in the formation of the federal government.” It was thought that the states’ interests would be represented in the Senate and popular interests in the House. With these sets of interests competing, the popular good would be reflected in any bill able to make its way through both chambers of Congress. This was the very core of the theory of our constitutional government as envisioned and understood by James Madison and Alexander Hamilton. Both argued that ambition should be made to counteract ambition, and that through the competition of ambitions the common good would be realized. It is only in republican government that the negative effects of faction can be mitigated and the positive aspects funneled into the realization of the common good. “The two great points of difference between a democracy and a republic are: first, the delegation of the government, in the latter, to a small number of citizens elected by the rest; secondly, the greater number of citizens, and greater sphere of country, over which the latter may be extended.” (Federalist Paper #10).

The risks associated with democratization threaten the balance and principles to which republican regimes aspire. Democracy aspires to nothing but its own will. Direct democracy offers too few safeguards against whimsy and caprice. In democracies, individuals are left to put their interests above all others and to be guided by little more than immediate need, as long-term planning is disincentivized.

Understanding the ramifications of further democratization is a timely topic, as it is likely to be widely discussed in popular media in the upcoming presidential election. We see in every election a push for eliminating the Electoral College, and with every passing election the cries for reform grow louder. Those who value republican principles should equip themselves to defend those principles and institutions with evidence and theory, not with self-interest, cliché, or partisan allegiance. If interested, reread the Federalist Papers, but also go back and read the press clippings from 1912-1913 and look for parallels to today. What you will find is a sense of connectedness with previous eras, and a reminder that these are permanent questions worth taking seriously.

Kyle Scott, PhD, MBA, currently works in higher education administration and has taught American politics, Constitutional Law, and political theory for more than a decade at the university level. He is the author of five books and more than a dozen peer-reviewed articles. His most recent book is The Federalist Papers: A Reader’s Guide. Kyle can be contacted at

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: David Shestokas


It is difficult to determine a precise moment when Russia came to dominate recent American news and politics, but the events that spun Russia into America’s daily discussions may have been the accusations of Russian involvement in the 2016 elections.

Since then, Russia and the villainous Vladimir Putin have been a daily part of our political discourse. Even as the novel coronavirus raged, mentions of Russian Ambassador Sergey Kislyak and Lt. General Michael Flynn broke through coverage of the pandemic.


Russia has not always been a mortal enemy in the American story. America’s Founders reached out to Russia in our earliest days. In December, 1780, the United States sent its first envoy to St. Petersburg, then Russia’s capital. The envoy, Francis Dana, brought a secretary with him.

The secretary was fourteen-year-old John Quincy Adams. Dana could speak no word of French, the language of the Russian court, and so John Adams[1] had lent Dana his son, who was fluent in French. Young John Quincy thus became a diplomatic interpreter.

Dana’s mission was to secure aid and support from Tsarina Catherine II, Catherine the Great, for the American Revolution against England. The mission was unsuccessful, but for three years John Quincy became familiar with the workings of Russia’s ruling class. Catherine’s grandson was about the royal court during John Quincy’s tenure in St. Petersburg. In 1801, the grandson would become Tsar Alexander I.

Twenty-six years later, after John Quincy’s father had been President of the United States and Alexander’s father had preceded him as tsar, John Quincy returned to St. Petersburg. In November, 1809, Tsar Alexander I received John Quincy Adams as the first official United States Ambassador to Russia.

The Tsar greeted Adams warmly: “Monsieur, je suis charmé d’avoir le plaisir de vous voir ici.”[2] After this warm welcome, Ambassador Adams and Tsar Alexander spoke at length of trade, European politics, wars and Napoleon, the sometime ally, sometime adversary of both Russia and the United States.

The Tsar became ever more comfortable with his American guest, ultimately confiding the difficulties of managing his empire. “Its size is one of its greatest evils,” Alexander mused of his own country. “It is difficult to hold together so great a body as this empire.”

Ambassador Adams included the Tsar’s comment in his diplomatic dispatch to the State Department and President Madison. It became part of the institutional memory of the United States, a country that only six years before had bought Louisiana from Napoleon.[3]


In July, 1741, Vitus Bering, a Russian naval officer, culminated years of Russian eastward exploration with the sighting of Mount Saint Elias (a.k.a. Boundary Peak) on the North American mainland. The coming years would see periodic trips by Russian hunters and trappers.

In 1781 the Northeastern Company was established to organize and administer Russian colonies in North America. Company operations were directed from Kithtak (now Kodiak Island) and later from Novo-Arkhangelsk (now Sitka, Alaska).

For the next 70 years, Russian traders and trappers inhabited coastal Alaska. The Russian colonists regularly had violent confrontations with Native Alaskans, and the colonies grew increasingly dependent upon both American and British traders for supplies. Alaska was not a profitable undertaking for Russia.


Then came the Crimean War (1853-1856), with Russia facing off against France, England and Turkey. The war did not end well for Russia; the loss was great in both blood and treasure. The monarchy needed to replenish its coffers, and the possible sale of Alaska seemed to hold an answer. Beyond the tsar’s need for cash, the Russians felt unable to defend Alaska should either the British or Americans move to take it.[4]

Even after the Crimean War, England remained a Russian adversary. England was undesirable as an Alaskan neighbor, so the best scenario would be a sale to the United States.

In 1855, Tsar Alexander II had become emperor of Russia. He was the nephew of Alexander I, who had confided to John Quincy Adams the difficulties of managing his distant empire. Given post-Crimean War realities, and mindful of his uncle’s words, Alexander II concluded that a sale of Alaska to the United States was the best Russian course of action.

Russian Ambassador Eduard de Stoeckl was directed to actively pursue a sale in 1859. Stoeckl’s overtures included discussions with, among others, then New York Senator, William H. Seward. In 1861, the United States Civil War erupted. Discussions of Russia’s sale of Alaska to the United States fell silent.


The United States Civil War was not only of interest to the Union and the Confederacy. Both parties were actively involved in seeking either international support or neutrality. The Confederacy had courted both the British and French. The Rebels had a fair chance of receiving aid from either.

In Europe, relations remained tense between the Crimean War foes. Although Russia had lost that war, it had taken a devastating toll on all participants; in the war’s short three-year span, Britain alone had lost 22,000 soldiers. Against this background, the French and English entertained Confederate overtures.

The Russians viewed a strong unified United States as a counterweight to its European foes and a Union victory to be in Russia’s interests. In the summer of 1863, the Russian Baltic Fleet set sail for New York. The Russian Far East Fleet journeyed to San Francisco. President Abraham Lincoln sent the First Lady, Mary Todd Lincoln, to greet the Russians in New York in September, 1863.

Overt assistance for the Confederacy by England or France disappeared. The Russians had tilted the playing field in favor of the Union. In 1865, the Civil War ended with a Union victory and no active participation by the French, English or Russians.


After the Civil War, Alexander II resumed pursuit of an Alaskan sale. Stoeckl began negotiating with William Seward, who had become Secretary of State under Abraham Lincoln. Over the next two years, the issues of Reconstruction, Lincoln’s assassination and mid-19th century communications hampered U.S.-Russia negotiations.

Things changed the evening of March 29, 1867, while Secretary Seward was at his Washington, D.C. home, playing whist with his family. The whist game was interrupted by a knock at the door. It was Baron Eduard de Stoeckl, Minister Plenipotentiary and Ambassador of the Russian Tsar.

Stoeckl advised Seward: “I have a dispatch from my government. The Emperor gives his consent to the cession. Tomorrow if you like. I will come to the department and we can enter upon the treaty.”

Seward, with a smile of satisfaction responded: “Why wait until tomorrow? Let us make the treaty tonight.”

Stoeckl demurred: “But your department is closed, you have no clerks and my secretaries are scattered about the town!”

“Never mind that,” responded Seward, “if you can muster your legation together before midnight you will find me awaiting you there at the department, which will be open and ready for business.”

Carriages were dispatched around Washington, and by 4 AM on March 30, 1867, the treaty was signed, engrossed and ready for presidential transmittal to the United States Senate for ratification.


President Andrew Johnson submitted the treaty to the Senate that same Saturday.

Seward had known that Congress was scheduled to adjourn for two months. Upon receipt of the treaty, the Senate set a special session for Monday, April 1, 1867. The Senate Foreign Relations Committee held hearings that entire week, then reported the treaty favorably.

Senator Charles Sumner of Massachusetts, Chairman of the Foreign Relations Committee, spoke in favor of the purchase for three hours. On April 9, 1867, just ten days after Stoeckl interrupted Seward’s whist game, the Senate ratified the treaty 37-2.

Though drafting and ratification took only ten days, the treaty came nearly 58 years after Alexander I had mentioned the problems of managing his empire to John Quincy Adams.

There remained the appropriation of the $7.2 million purchase price by the full Congress, which took place on July 27, 1867.

On October 18, 1867, in a formal ceremony, the United States took possession of Alaska. For the nearly 600,000 square miles, the United States paid about 2 cents an acre. Alaska was a better deal than Louisiana. October 18 is celebrated annually as a state holiday in Alaska.


The natural resources and strategic location that came with the Alaska purchase cannot be overstated. The gold rush that the Russians feared in 1855 came to pass in 1896. Alaska contributes to the American economy through its riches in oil, minerals, precious metals, seafood, timber, and tourism. Alaska is the second largest crude oil producer in the country, and the salmon run in Alaska’s Bristol Bay basin is the largest in the world.

America’s Founders recognized the value of a cordial relationship with Russia and began working on it in 1780. The benefits of that effort came to fruition on October 18, 1867 with the peaceful acquisition of Alaska.

While the connotation of the mantra RUSSIA!!! RUSSIA!!! RUSSIA!!! differs greatly from 1780 to 2020, it has been part of the American lexicon through our entire history.

David Shestokas, J.D., is a former Cook County, Illinois State’s Attorney and author of Constitutional Sound Bites and Creating the Declaration of Independence. Follow him on Twitter, @shestokas, and join his Facebook group, Dave Shestokas on the Constitution

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] The Second President and First Vice-President had been the fledgling country’s envoy to France from 1777 to 1779 and his son, John Quincy had been with him.

[2] “Sir, I am delighted to have the pleasure of seeing you here.”

[3] Napoleon needed money for war in Europe, France’s Louisiana territory was 828,000 square miles, and the $15 million purchase price was about 3 cents/acre.

[4] Alaska would prove to hold a wealth of natural resources, including gold.  Strangely, this was of actual concern to Russia. The 1848 gold discovery in California brought in more than 300,000 people in 7 years. The Russians had no ability to manage a similar event in Alaska. They would be better off to sell the land rather than have it taken.

Guest Essayist: James C. Clinger

On September 5, 1867, the first Texas cattle were shipped from the railhead in Abilene, Kansas, with most of the livestock ending their journey in a slaughterhouse in Chicago, Illinois. These cattle made a long, none too pleasant journey from south Texas to central Kansas. Their hardships were shared by the cowboys and cattlemen who drove their herds hundreds of miles to find a better market for their livestock. For almost two decades, cattle drives from Texas were undertaken by beef producers who found that the northern markets were much more lucrative than those they had been dealing with back home. These drives ended after a combination of economic, legal, and technological changes made the long drives impractical or infeasible.

After the end of the Civil War, much of the economy of the old Confederacy was in shambles. In Texas, the rebels returning home often found their livestock scattered and their ranches and farms unkempt and overgrown. Once the Texas ranchers reassembled their herds, they found that the local market for beef was very limited. The ranchers’ potential customers in the region had little money with which to buy beef, and there was no way to transport livestock to distant markets except by ships that sailed off the Gulf coast. The major railroad lines did not reach Texas until the 1870s. Many of the cattlemen were “cattle rich but cash poor,” and it did not appear that there was any easy way to remedy their situation.[1]

Several cattlemen, cattle traders, and cattle buyers developed a solution. Rather than sell locally, or attempt to transport cattle by water at high costs, cattle were to be driven along trails to railheads up north. Among the first of these was Abilene, Kansas, but other “cow towns,” such as Ellsworth and Dodge City, quickly grew from small villages to booming cities. This trek required very special men, horses, and cattle. The men who drove the cattle were mostly young, adventurous, hardened cowhands who were willing to work for about $30 per month in making a trek of two months or more. Several Texas-bred horses were supplied for each cowhand to ride, for it was common for the exertions of each day to wear out several horses. The cattle themselves differed significantly from their bovine brethren in other parts of the continent. These were Texas Longhorns, mostly steers, which were adorned with massive horns and a thick hide. The meat of the Longhorns was not considered choice by most connoisseurs, but these cattle could travel long distances, go without water for days, and resist many infectious diseases that would lay low cattle of other breeds.

Most cattle drives followed a number of trails from Texas through the Indian Territory (present-day Oklahoma) into Kansas. The most famous of the trails was the Chisholm Trail, named after Jesse Chisholm, a trader of cattle and other goods with outposts along the North Canadian River in northern Texas and along the Little Arkansas in southern Kansas. The drives ended in a variety of Kansas towns, notably Abilene, after entrepreneurial cattle buyers, such as Joseph McCoy and his brothers, promoted the obscure train stop as a place where Texas cattle could be shipped by rail to market.[2]

The cow towns grew rapidly in size and prosperity, although many faltered after the cattle drives ended. The cattle and cowboys were not always welcomed. Many Kansas farmers and homesteaders believed that the Longhorns brought diseases such as “Texas Fever” that would infect and kill their own cattle. The disease, known today as Babesiosis, was caused by parasites carried by ticks that attached themselves to Texas steers. The Longhorn cattle had developed an immunity to the disease, but the northern cattle had not. The ticks on the hides of the Texas cattle often traveled to the hides of livestock in Kansas, with lethal results.[3] The Kansas farmers demanded and gained a number of state laws prohibiting the entry of Texas cattle. These laws were circumvented or simply weakly enforced until the 1880s. At first glance, such laws might appear to conflict with the commerce clause of the U.S. Constitution, Article I, Section 8, Clause 3, which authorizes Congress, not the states, to regulate commerce among the states. Yet the United States Supreme Court ruled in 1886, in Morgan’s Steamship Company v. Louisiana Board of Health, that quarantine laws and general regulation of public health were permissible exercises of the states’ police powers, although they could be preempted by an act of Congress.[4]

The cattle drives faced many hazards on their long treks to the north. Harsh terrain, inclement weather, hostile Indians, rustlers, and unwelcoming Kansas farmers often made the journey difficult. Nevertheless, for about twenty years the trail drives continued and were mostly profitable. Even after the railroads reached Fort Worth, Texas, many cattlemen still found it more profitable to make the long journey to Kansas to ship their beef. Cattle prices were higher in Abilene, and the costs of rail shipment from Fort Worth were, at least in the 1870s, too high to justify ending the trips to Kansas.[5]

Eventually the drives did end, although there is some dispute among historians about when and why. By the 1880s, barbed wire fencing blocked the cattle trails at some points, and new railheads in Texas offered alternative routes to livestock markets. Finally, in 1885 Kansas enacted a strict quarantine law to keep out Texas cattle. Past quarantine laws had been weakly enforced, but state officials seemed to take the 1885 law more seriously, and economic incentives may explain why. The cattle herds of the northern plains had been growing gradually over the years. After the Battle of the Little Bighorn in 1876, the United States Army largely pacified the hostile tribes of the Rocky Mountain states, with the result that the cattle industry thrived in Wyoming and Montana. With bigger and more carefully bred livestock available to the Kansas cattle buyers, the need to buy Texas cattle diminished. Enforcing the quarantine laws became less costly to the cattle traders and certainly pleased the many Kansas farmers who voted in state elections. The end came relatively abruptly: in 1885, approximately 350,000 cattle were driven from Texas to Kansas; the following year, in 1886, there were no drives at all.[6] The cattle drives had emerged in the 1860s as an entrepreneurial solution to desperate circumstances where economic gains were blocked by geographic, technological, and legal obstacles. By the 1880s, the marketplace had been transformed. New barriers to the cattle drives had appeared, but by then the cattlemen of Texas had safer and more cost-effective means to bring their livestock to market.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] Specht, Joshua. “Market.” In Red Meat Republic: A Hoof-to-Table History of How Beef Changed America, 119-73. Princeton, NJ: Princeton University Press, 2019.

[2] Gard, Wayne. “Retracing the Chisholm Trail.” The Southwestern Historical Quarterly 60, no. 1 (1956): 53-68.

[3] Hutson, Cecil Kirk. “Texas Fever in Kansas, 1866-1930.” Agricultural History 68, no. 1 (1994): 74-104.

[4] 118 U.S. 455

[5] Galenson, David. “The Profitability of the Long Drive.” Agricultural History 51, no. 4 (1977): 737-58.

[6] Galenson, David. “The End of the Chisholm Trail.” The Journal of Economic History 34, no. 2 (1974): 350-64.


Guest Essayist: Kyle A. Scott

Notwithstanding the controversy over the causes of the U.S. Civil War, we do know that one of the outcomes was ending slavery through the Thirteenth Amendment. Congress passed the proposed Thirteenth Amendment on January 31, 1865 and it was subsequently ratified on December 6, 1865 by three-fourths of the state legislatures. Upon its ratification, the Thirteenth Amendment made slavery unconstitutional.

Unlike the amendments before it, the Thirteenth deserves special consideration due to its unconventional proposal and ratification process.

The proposed amendment passed the Senate on April 8, 1864 and the House on January 31, 1865, with President Abraham Lincoln approving the Joint Resolution to submit the proposed amendment to the states on February 1, 1865. But it was not until April 9, 1865 that the U.S. Civil War effectively ended at Appomattox Court House, when Robert E. Lee surrendered the Army of Northern Virginia to Ulysses S. Grant. This means that all congressional action up to this point took place without the participation of any Confederate state, as those states were not represented in Congress.

However, once the war ended, the success of the amendment required that some of the former Confederate states ratify it in order to meet the constitutionally mandated minimum proportion of states. Article V of the Constitution requires three-fourths of state legislatures to ratify a proposed amendment before it can become part of the Constitution. There were only twenty-five Union and border states, which meant at least two of the eleven states that had comprised the Confederacy had to ratify.

The effort to get states to ratify was led by Andrew Johnson who assumed the presidency after Lincoln was assassinated on April 15, 1865. There was a total of thirty-six states which meant at least twenty-seven of the state legislatures had to ratify. It was not guaranteed that all the states that remained in the Union would ratify. For instance, New Jersey, Delaware and Kentucky all initially rejected the amendment.

This is where things get complicated. When Johnson assumed the presidency in April, he ordered his generals to summon new conventions in the Southern states that would be forced to revise constitutions and elect new state legislators before being admitted back into the Union. This was essentially reform through military injunction thus casting doubt on the sovereignty of the states and the free will of the people.

Further complicating the issue of ratification, the Thirty-ninth Congress refused to seat representatives from all of the Southern states except Tennessee. So, while Congress did not recognize the former Confederate states as states—except Tennessee—all the states were considered legal for purposes of ratification, as determined by Secretary of State William Seward. Thus, we are presented with a constitutional predicament in which an amendment is ratified by states recognized by the executive branch but not by the legislative branch.

No resolution was formally adopted, nor reconciliation made, that could bring clarity to this constitutional crisis. Reconstruction continued, and the Thirteenth Amendment was added to the Constitution along with the Fourteenth and Fifteenth Amendments—collectively known as the Reconstruction Amendments.

The ratification of the Reconstruction Amendments is most aptly characterized as a Second Founding. The amendments were ratified in a manner outside any reasonable interpretation of Article V or of republican principles of representation. Imagine the following scenario: armed guards move into 51% of American voters’ homes and force them to vote for Candidate A in the next presidential election. If the homeowners do not agree, the armed guards stay; if they agree and vote for Candidate A, the armed guards leave. This is essentially what occurred during Reconstruction in the South, as a state’s inclusion in Congress, and the removal of Union troops, was predicated upon that state’s acquiescence to the demands of the Union, which included ratifying the Thirteenth, Fourteenth and Fifteenth Amendments.

Because the amendments were passed in an extra-constitutional manner, we cannot say that they were a continuation of what was laid out in Philadelphia several decades before. This creates an ethical dilemma for historians and legal scholars to consider. Do the ends justify the means, or should the letter of the law be subservient to the higher good? To state it more simply: Was ending slavery worth violating the Constitution? Or should slavery have remained legal until an amendment could be ratified in a manner consistent with Article V and generally accepted principles of representation?

These questions are meant to be hard and they will not be resolved here. What I do propose is that in 1865 the United States decided that the pursuit of the higher good justified a violation of accepted procedures and those who accept the validity of the Reconstruction Amendments today must, at least tacitly, endorse the same.

Kyle Scott, PhD, MBA, currently works in higher education administration and has taught American politics, Constitutional Law, and political theory for more than a decade at the university level. He is the author of five books and more than a dozen peer-reviewed articles. His most recent book is The Federalist Papers: A Reader’s Guide. Kyle can be contacted at

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

Guest Essayist: Gary Porter

Only five days after Confederate General Robert E. Lee’s surrender at Appomattox, ending the Civil War, President Abraham Lincoln was assassinated in a theater in Washington, D.C. John Wilkes Booth, a Confederate supporter, shot the president, who succumbed to his wounds the next day. Vice President Andrew Johnson took Lincoln’s place and, less supportive of Lincoln’s anti-slavery policies, diluted the abolition of slavery Lincoln envisioned. Johnson favored policies that further disenfranchised free blacks, setting a political course that would weaken the nation’s unity.

Imagine if President Donald Trump were to choose Senator Bernie Sanders as his running mate in November 2020. Would that shock you?

Americans of 1864 must have been shocked to see President Abraham Lincoln, leader of the Republican Party, choose Andrew Johnson, a Democrat, as his running mate.

Nothing in the Constitution prohibited it, of course, and once before, America had witnessed a President and Vice President from different parties. In 1796 it was accidental; this time it was on purpose.

Andrew Johnson was probably the most politically-qualified VP Lincoln could have chosen. Though totally unschooled, Johnson was the consummate politician. He started political life at age 21 as a Greenville, Tennessee alderman in 1829 and would hold elective office almost continuously for the next thirty-five years, serving as a state legislator, Congressman, two-term Governor of Tennessee and finally Senator from Tennessee.[1] When the Civil War began and Tennessee left the Union, Johnson chose to leave his state rather than break with the Union. Lincoln promptly appointed him Military Governor of Tennessee.

Heading into the 1864 election, the Democratic Party was bitterly split between War Democrats and Peace Democrats. Wars tend to do that. They tend to force people into one camp or the other. To bridge the gap and hopefully unify the party, Democrats found a compromise:  nominate pro-war General George B. McClellan for president and anti-war Representative George H. Pendleton for Vice President. The ticket gathered early support.

Lincoln thought a similar “compromise ticket” was needed. Running once again with Vice President Hannibal Hamlin was out. Hamlin was nice enough, a perfect gentleman who even volunteered for a brief stint in his Maine militia unit during the war, but Hamlin had not played a very prominent role in Lincoln’s administration during the first term. Hamlin had to go. Johnson was in.

To complicate electoral matters further, a group of disenchanted “Radical Republicans” who thought Lincoln too moderate formed the Radical Democracy Party a month before the Republican Convention and nominated their own candidates: General John C. Fremont, formerly a Senator from California, for President, and General John Cochrane of New York for Vice President. Two Johns on one ticket, two Georges on another and two men on a third whose first names began with “A.” Coincidence? I don’t think so.

Choosing, finally, not to play the spoiler, Fremont withdrew barely two months before the election. Under the slogan “Don’t change horses in the middle of a stream,” Republicans swept the Lincoln/Johnson ticket to victory. The two men easily defeated “the two Georges” by a wide margin of 212 to 21 electoral votes.

In his second inaugural address, Lincoln uttered some of his most memorable lines ever:

“…the judgments of the Lord, are true and righteous altogether.” With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation’s wounds; to care for him who shall have borne the battle, and for his widow, and his orphan—to do all which may achieve and cherish a just and lasting peace, among ourselves, and with all nations.

And then disaster hit. A little over a month after he delivered these memorable lines, Lincoln was shot in the head by Southern sympathizer John Wilkes Booth on the night of April 14 while enjoying a play at Ford’s Theater in Washington, D.C. Lincoln died the following day. Booth’s conspiracy had planned to take out not only Lincoln but also his Vice President and Secretary of State William Seward. Seward was critically injured but survived. Johnson also survived, when would-be assassin George Atzerodt got drunk and had a change of heart. Two and a half hours after Lincoln drew his last breath, Johnson was installed as the seventeenth President of the United States.

Booth was quickly tracked down by Union troops and killed while attempting to escape. The rest of the conspirators were soon captured and the ringleaders hanged, including Mary Surratt, the first woman ever executed by the U.S. government.

Faced with the unenviable task of Reconstruction after a devastating war, Johnson’s administration started well, but quickly went downhill. The Radical Republicans were out for southern blood and Johnson did not share their thirst.

Although Lincoln is well-known for his wartime violations of the U.S. Constitution, Johnson is best known for sticking to it.

To show Johnson’s affinity for strict constructionism, there is this story: As a U.S. Representative, Johnson had voted against a bill to give federal aid to Ireland in the midst of a famine. In a debate during his subsequent run for Governor of Tennessee, his opponent criticized this vote. Johnson responded that people, not government, had the responsibility of helping their fellow men in need. He then pulled from his pocket a receipt for the $50 he had sent to the hungry Irish. “How much did you give, sir?” His opponent had to confess he had given nothing. The audience went wild. Johnson later credited this exchange with helping him win the election.

Johnson recognized the legitimacy of the Thirteenth Amendment, but he did not believe blacks deserved the right to vote. He vetoed the Civil Rights Act of 1866, which granted citizenship and extended civil rights to all regardless of race, but Congress overrode the veto. When the constitutionality of the Civil Rights Act was challenged, the Fourteenth Amendment was proposed, and Johnson opposed that as well. The Radical Republicans then passed the Reconstruction Act of 1867; Johnson vetoed it, and the Republicans overrode his veto. Republicans then threatened reluctant southern states with a continuation of their military governance unless they ratified the Amendment. An unnamed Republican at the time called this “ratification at the point of a bayonet.” Johnson’s reluctance to support the Radical Republican agenda did not endear him to them.

The “straw that broke the camel’s back” came when Johnson tried to remove Edwin Stanton as Secretary of War despite the Tenure of Office Act, which ostensibly, and unconstitutionally in Johnson’s view, prevented such action. Johnson fired Stanton. Threatened with impeachment, Johnson replied, “Let them impeach and be damned.” Congress promptly did just that – impeach, that is. After the House impeachment, the Senate trial resulted in acquittal. Johnson retained his office by a single vote, but still gained the notoriety of being the first United States President to be impeached.

The events surrounding President Lincoln’s assassination on April 15, 1865 changed the political landscape following the Civil War, making it a significant date to learn about in America’s history.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached on Facebook or Twitter (@constitutionled).

Click Here to have the NEWEST essay in this study emailed to your inbox every day!

Click Here to view the schedule of topics in our 90-Day Study on American History.

[1] After leaving the presidency, Johnson was elected Senator from Tennessee once again, in 1875.

Guest Essayist: Tony Williams

On March 4, 1865, President Abraham Lincoln delivered his Second Inaugural Address, a model of reconciliation and moderation for restoring the national Union. He ended with this appeal:

With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation’s wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.

Later that month, Lincoln visited with Generals Ulysses S. Grant and William T. Sherman during the siege of Petersburg in Virginia near the Confederate capital of Richmond. As they talked, the president reflected on his plan to treat the South with respect. “Treat them liberally all around,” he said. “We want those people to return to their allegiance to the Union and submit to the laws.”

The Civil War was coming to an end as a heavily outnumbered Confederate General Robert E. Lee withdrew his army from Petersburg and abandoned Richmond to its fate. On April 5, 1865, Lee marched his starving, exhausted men across the swollen Appomattox River to Amelia Court House in central Virginia. Lee was disappointed to discover the boxcars on the railroad did not contain the expected rations of food. The Union cavalry under General Philip Sheridan was closing in and burning supply wagons. Lee ordered his men to continue their march to the west without food. Increasing numbers were deserting the Army of Northern Virginia. Lee realized that he would have to surrender soon.

On April 6, Union forces led by General George Custer cut off Lee’s army at Sayler’s Creek. The two sides skirmished for hours and then engaged each other fiercely. A quarter of Lee’s army was lost as casualties and prisoners. He reportedly cried out, “My God! Has the army been dissolved?”

The next day, the Rebels retreated to Farmville where rations awaited but Union forces were close behind. The hungry Confederates barely had time to eat before fleeing again to expected supplies at Appomattox railroad station.

That day, Grant wrote to Lee asking for his surrender to prevent “any further effusion of blood.” Grant signed the letter, “Very respectfully, your obedient servant.” Lee responded that while he did not think his position was as hopeless as Grant indicated, he asked what terms the Union Army would offer. When one of his own generals suggested accepting the surrender, Lee informed him, “I trust it has not come to that! We certainly have too many brave men to think of laying down our arms.” Nevertheless, Grant’s answer was unconditional surrender.

On April 8, Lee’s army straggled into the town of Appomattox Court House, but Sheridan had already seized his supplies. Lee knew the end had come. Hopelessly outnumbered six-to-one, he had very little chance of resupply or reinforcement. Lee conferred with his generals to discuss surrender. When one of his officers suggested melting away and waging a guerrilla war, Lee rejected the idea out of hand. “You and I as Christian men have no right to consider only how this would affect us. We must consider its effect on the country as a whole.”

Lee composed a message to Grant asking for “an interview at such time and place as you may designate, to discuss the terms of the surrender of this army.”  Grant was suffering a migraine while awaiting word from Lee. He was greatly relieved to receive this letter. His headache and all the tension within him immediately dissipated. While puffing on his cigar, he wrote back to Lee and magnanimously offered to meet his defeated foe “where you wish the interview to take place.” The ceremony would take place at the home of Wilmer McLean, who had moved to Appomattox Court House to escape the war after a cannonball blasted into his kitchen during the First Battle of Bull Run in 1861. Now, the war’s final act would occur in his living room.

Lee cut a fine picture impeccably dressed in his new gray uniform, adorned with a red sash, shiny boots, and his sword in a golden scabbard as he awaited Grant. The Union general was shabbily dressed in a rough uniform with muddy boots and felt self-conscious. He thought that Lee was “a man of much dignity, with an impassible face.” Grant respectfully treated his worthy adversary as an equal, and felt admiration for him if not his cause. They shook hands and exchanged pleasantries.

Grant sat down at a small table to compose the terms of surrender and personally stood and handed them to Lee rather than have a subordinate do it. Grant graciously allowed the Confederate officers to keep their side arms, horses, and baggage. Lee asked that all the soldiers be allowed to keep their horses since many were farmers, and Grant readily agreed. Grant also generously agreed to feed Lee’s hungry men. Their business completed, the two generals shook hands, and Lee departed with a bow to the assembled men.

As Lee slowly rode away, Grant stood on the porch and graciously lifted his hat in salute, which Lee solemnly returned. The other Union officers and soldiers followed their general’s example. Grant was so conscious of being respectful that when the Union camp broke out into a triumphal celebration, Grant rebuked his men and ordered them to stop. “We did not want to exult over their downfall,” he later explained. For his part, Lee tearfully rode back into his camp, telling his troops, “I have done the best I could for you.” He continued, “Go home now, and if you make as good citizens as you have soldiers, you will do well, and I shall always be proud of you.”

On April 12, the Union formally accepted the Confederate surrender in a solemn ceremony. Brigadier General Joshua Chamberlain, the hero of Gettysburg, oversaw a parade of Confederate troops stacking their weapons. As the Army of Northern Virginia began the procession, Chamberlain ordered his men to raise their muskets to their shoulders as a salute of honor to their fellow Americans. Confederate Major General John Gordon returned the gesture by saluting with his sword. Chamberlain described his feelings at witnessing the dramatic, respectful ceremony: “How could we help falling on our knees, all of us together, and praying God to pity and forgive us all.”

At the end of the dreadful Civil War, in which some 750,000 men died, Americans on both sides demonstrated remarkable respect for each other. Grant showed great magnanimity toward his vanquished foe, following Lincoln’s vision in the Second Inaugural. That vision tragically did not survive the death of the martyred Lincoln a few days after the events at Appomattox.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Val Crofts

Abraham Lincoln is usually considered one of our nation’s greatest presidents. But what many people may not know is that Lincoln was not a popular president during his first term, and he was nearly not reelected in 1864. For many months leading into the presidential election of that year, Lincoln resigned himself to the simple fact that he was not going to be reelected. He told a visitor to the White House in the fall of 1864, “I am going to be beaten…and unless some great change takes place, badly beaten.”

Lincoln’s administration had presided over a struggle in which hundreds of thousands of young men were killed and wounded in the then three-year-old effort to give our nation, as Lincoln declared at Gettysburg, its “new birth of freedom.” The year 1864 had been the bloodiest of the war so far, and Union armies were being decimated as Union General Ulysses S. Grant made his final push to destroy the army of Confederate General Robert E. Lee and bring the war to an end. At the same time, General William T. Sherman was moving through Georgia in the summer of 1864, hoping to destroy the Confederate armies in that region as well.

Lincoln was a tremendously unpopular president in 1864, both inside and outside of his own political party. Democrats hated Lincoln and blamed him for the longevity of the war. Radical Republicans did not feel that he went far enough to extend equal rights to African-Americans. The war was unpopular and seemed unwinnable for the Union. Lincoln’s recent Emancipation Proclamation had also turned many Northern voters against him; they believed equality for former slaves was coming, and they were not ready for it.

Between the Emancipation Proclamation and the casualty numbers of the Union army, Lincoln felt as though his administration would be leaving the White House in 1865. He urged his cabinet members to cooperate with the new president to ease the transition of power, hoping to bring the nation back together more quickly. A series of events was unfolding in the Western theater of the war, however, where one of Lincoln’s generals was about to present him with two gifts in 1864: the city of Atlanta and the reelection of his administration.

General Sherman met President Lincoln in 1861 at the beginning of the war and was not overly impressed. He felt Lincoln’s attitude toward the South was naive and could damage the Union’s early response to the war. Lincoln was not particularly impressed with Sherman at their first meeting either. But those attitudes would change as the war progressed.

Sherman had achieved great success in fighting in the Western theater of the war from Shiloh to Chattanooga and was poised to strike a lethal blow into the heart of the Confederacy by marching his armies through the state of Georgia and capturing the city of Atlanta. The capture of Atlanta would destroy a vital rail center and supply depot, as well as demoralize the Confederacy.

Sherman and his 100,000 troops left Chattanooga in May of 1864, and by July his army had reached the outskirts of Atlanta. On September 1, Confederate forces evacuated the city. The Northern reaction to the taking of Atlanta, together with victories in Virginia at the same time, was jubilation. Instead of feeling the war was lost, the exact opposite opinion now prevailed. It seemed that the Lincoln administration would be the first reelected since Andrew Jackson’s in 1832.

President Lincoln won the 1864 election by receiving over 55 percent of the popular vote and winning the electoral vote 212 to 21 over his Democratic opponent, former general George B. McClellan. He was then able to manage the end of the Civil War and the passage of the 13th Amendment to the U.S. Constitution, banning slavery in the United States forever. His presidency would be remembered as the reason why our nation is still one nation, under God, and dedicated to “the proposition that all men are created equal.” Washington created our nation, Jefferson and Madison gave it life and meaning with their ideas and words, and Lincoln saved it. He may not have had the chance to do so without the military success of General Sherman and his armies in 1864.

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, where he has been for 16 years. He teaches AP U.S. Government and Politics, U.S. History, and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.



Guest Essayist: Daniel A. Cotter

The Anaconda Plan of the Civil War, crafted by U.S. General-in-Chief Winfield Scott, was designed to split and defeat the Confederacy by blockading its eastern and southern coasts, controlling the Mississippi River, and then attacking from all sides. Union Major General Ulysses S. Grant pressed through to take Vicksburg, Mississippi, seize the final Confederate strongholds, and control the Mississippi River. President Abraham Lincoln believed taking Vicksburg was the key to victory. The campaign against Vicksburg would be the longest military campaign of the Civil War. Vicksburg surrendered on July 4, 1863.

President Lincoln said of Vicksburg, “See what a lot of land these fellows hold, of which Vicksburg is the key! The war can never be brought to a close until that key is in our pocket. We can take all the northern ports of the Confederacy, and they can defy us from Vicksburg.” Lincoln well summarized the importance of Vicksburg, Mississippi, with both the Union and Confederacy determined to control the city. Along the Mississippi River, Vicksburg was one of the main strongholds remaining for the Confederacy. If the Union could capture this stronghold, it would cut off the Confederate states west of the Mississippi from those east of it.

The location was ideal for defense: protected on the north by bayou swamps and perched on high bluffs along the river, Vicksburg earned the name “Gibraltar of the South.”

General Grant developed a plan. The Union had waged a campaign to take Vicksburg since the spring of 1862, and Grant, who assumed command of the Union forces near the city on January 30, 1863, had been part of an initial failed attempt to take it that winter. In the spring of 1863, he tried again. This time, given Vicksburg’s location, he took the bold approach of marching down the west side of the Mississippi River and crossing over south of the city. He led his troops 30 miles south of Vicksburg, crossing at Bruinsburg with the help of the Union fleet.

Once landed on the east side of the river, he began to head northeast. On May 2, his troops took Port Gibson, abandoning their supply lines and sustaining themselves from the surrounding countryside. Grant arrived at Vicksburg on May 18, where Confederate General John Pemberton was waiting with his 30,000 troops. Two major Union assaults, on May 19 and 22, failed. Grant regrouped, and his troops dug trenches, enclosing Pemberton and his men.

Pemberton was boxed in with few provisions and diminishing ammunition. Many Confederate soldiers became sick and were hospitalized. In late June, Union troops dug mines underneath the Confederate lines and, on June 25, detonated the explosives. On July 3, Pemberton sent a note to Grant suggesting peace. Grant responded that only unconditional surrender would suffice. Pemberton formally surrendered on July 4. The nearly 30,000 troops were paroled rather than taken prisoner, Grant not wanting the burden of transporting and guarding so many captives. The Union won at Port Hudson five days later.

Lincoln, who had noted how important Vicksburg was to the Union and the war, upon hearing of the surrender, stated, “The Father of Waters again goes unvexed to the sea.”

With the Siege of Vicksburg, Scott’s Anaconda Plan, designed at the beginning of the Civil War with the goal to blockade the southern ports and to cut the South in two by advancing down the Mississippi River, was complete.

The Siege of Vicksburg was a major victory for the Union, giving it control of the Mississippi River. Together with the Battle of Gettysburg victory at nearly the same time, it marked a turning point for the Union in the Civil War. July 4, 1863, the date of Pemberton’s surrender of Vicksburg, is an important one in our nation’s history.

Dan Cotter is Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (published April 2019, Twelve Tables Press). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.


Guest Essayist: Tony Williams

President Abraham Lincoln faced an important decision point in the summer of 1862. Lincoln was opposed to slavery and sought a way to end the immoral institution that was at odds with republican principles. However, he had a reverence for the constitutional rule of law and an obligation to follow the Constitution. He discovered a means of ending slavery, saving the Union, and preserving the Constitution.

President Lincoln had reversed previous attempts by his generals to free the slaves because of their dubious constitutionality and because they would drive border states such as Missouri, Kentucky, and Maryland into the arms of the Confederacy. He reluctantly signed the First and Second Confiscation Acts but doubted their constitutionality as well and did little to enforce them. He offered compensated emancipation to the border states, but none took him up on his offer.

On July 22, Lincoln met with the members of his Cabinet and shared his idea with them. He presented a preliminary draft of the Emancipation Proclamation on two pages of lined paper. It would free the slaves in the Confederate states as a “military necessity” by weakening the enemy under his constitutional presidential war powers.

The cabinet agreed with his reasoning even if some members were lukewarm. Some feared the effects on the upcoming congressional elections and that it would cause European states to recognize the Confederacy to protect their sources of cotton. Secretary of State William H. Seward counseled the president to issue the proclamation from a position of strength after a military victory.

The early victories of the year in the West by Grant at the Battle of Shiloh and the capture of New Orleans were dimmed by a more recent defeat in the eastern theater. Union General George McClellan’s Peninsular Campaign driving toward Richmond was thwarted by his defeat in the Seven Days’ Battles. Nor did Lincoln get the victory he needed the following month when Union armies under General John Pope were routed at the Second Battle of Bull Run.

In September, General Robert E. Lee invaded the North, hoping to defeat the Union army on northern soil and win European diplomatic recognition. He swept up into Maryland. Even though two Union soldiers discovered a copy of Lee’s orders wrapped around a few cigars on the ground, McClellan did not capitalize on his advantage. The two armies converged at Sharpsburg near Antietam Creek.

At dawn on September 17, Union forces under General Joseph Hooker on the Union right attacked Confederates on the left side of their lines under Stonewall Jackson. The opposing armies clashed at West Woods, Dunker Church, and a cornfield. The attack faltered, and thousands were left dead and wounded.

Even that fighting could not compare to the carnage in the middle of the lines that occurred later in the morning. Union forces attacked several times and were repulsed. The battle shifted to a sunken road with horrific close-in fighting. Thousands more men became casualties at this “Bloody Lane.”

The final major stage of the day’s battle occurred further down the line when Union General Ambrose Burnside finally attacked. The Confederate forces here held a stone bridge across Antietam Creek that Burnside decided to cross rather than have his men ford the creek. The Confederates held a strong defensible position that pushed back several Union assaults. After the bridge was finally taken at great cost, the advancing tide of Union soldiers was definitively stopped by recently-arrived Confederate General A.P. Hill.

The battle resulted in the grim casualty figures of 12,400 for the Union armies and 10,300 for the Confederate armies. The losses were much heavier proportionately for the much smaller Confederate army. General McClellan failed to pursue the bloodied Lee the following day and thereby allowed him to escape and slip back down into the South. While Lincoln was furious with his general, he had the victory he needed to release the Emancipation Proclamation.

On September 22, Lincoln issued the preliminary Emancipation Proclamation. It read that as of January 1, 1863, “All persons held as slaves within any state, or designated part of a state, the people whereof shall then be in rebellion against the United States shall be then, thenceforward, and forever free.” If the states ended their rebellion, then the proclamation would have no force there.

Since the proclamation applied only to the states that had joined the Confederacy, the border states were exempt, and their slaves were not freed by it. Lincoln did this for two important reasons. One, the border states might have declared secession and joined with the Confederacy. Two, Lincoln had no constitutional authority under his presidential war powers to free the slaves in states in the Union.

None of the Confederate states accepted the offer. On January 1, 1863, Lincoln issued the Emancipation Proclamation as promised. The proclamation freed nearly 3.5 million slaves, though obviously the Union had to win the war to make it a reality. The document was arguably Lincoln’s least eloquent document and was, in the words of one historian, about as exciting as a bill of lading.

Lincoln understood that the document had to be an exacting legal document because of the legal and unofficial challenges it would face. Moreover, he knew that a constitutional amendment was necessary to end slavery everywhere. He knew the proclamation’s significance and called it “the central act of my administration,” and “my greatest and most enduring contribution to the history of the war.”

One eloquent line in the Emancipation Proclamation aptly summed up the republican and moral principles that were the cornerstone of the document and Lincoln’s vision: “And upon this act, sincerely believed to be an act of justice, warranted by the Constitution, upon military necessity, I invoke the considerate judgment of mankind, and the gracious favor of Almighty God.”

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.
