Constituting America Founder, Actress Janine Turner


 

Constituting America first published this message from Founder Janine Turner over Memorial Day Weekend, 2010, the inaugural year of our organization. We are pleased to share it with you again, as we celebrate our 13th birthday!  


Guest Essayist: Will Morrisey

To secure the unalienable natural rights of the American people, the American Founders designed a republican regime. Republics had existed before: ancient Rome, modern Switzerland and Venice. By 1776, Great Britain itself could be described as a republic, with a strong legislature counterbalancing a strong monarchy—even if the rule of that legislature and that monarchy over the overseas colonies of the British Empire could hardly be considered republican. But the republicanism instituted after the War of Independence, especially as framed at the Philadelphia Convention in 1787, featured a combination of elements never seen before, and seldom thereafter.

The American definition of republicanism was itself unique. ‘Republic’ or res publica means simply, ‘public thing’—a decidedly vague notion that might apply to any regime other than a monarchy. In the tenth Federalist, James Madison defined republicanism as representative government, that is, as a specific way of constructing the country’s ruling institutions. The Founders gave republicanism a recognizable form beyond ‘not-monarchy.’ From the design of the Virginia House of Burgesses to the Articles of Confederation and finally to the Constitution itself, representation provided Americans with real exercise of self-rule, while at the same time avoiding the turbulence and folly of pure democracies, which had so disgraced themselves in ancient Greece that popular sovereignty itself had been dismissed by political thinkers ever since. Later, Abraham Lincoln’s Lyceum Address would show how republicanism must defend the rule of law against mob violence; even the naming of Lincoln’s party as the Republican Party was intended to contrast it with the rule of slave-holding plantation oligarchs in the South.

The American republic had six additional characteristics, all of them clearly registered in this 90-Day Study. America was a natural-rights republic, limiting the legitimate exercise of popular rule to actions respecting the unalienable rights of its citizens; it was a democratic republic, with no formal ruling class of titled lords and ladies or hereditary monarchs; it was an extended republic, big enough to defend itself against the formidable empires that threatened it; it was a commercial republic, encouraging prosperity and innovation; it was a federal republic, leaving substantial political powers in the hands of state and local representatives; and it was a compound republic, dividing the powers of the national government into three branches, each with the means of defending itself against encroachments by the others.

Students of the American republic could consider each essay in this series as a reflection on one or more of these features of the American regime as designed by the Founders, or, in some cases, as deviations from that regime. Careful study of what the Declaration of Independence calls “the course of events” in America shows how profound and pervasive American republicanism has been, how it has shaped our lives throughout our history, and continues to do so today.

A Natural-Rights Republic

The Jamestown colony’s charter was written in part by the great English authority on the common law, Sir Edward Coke. Common law was an amalgam of natural law and English custom. The Massachusetts Bay Colony, founded shortly thereafter, was an attempt to establish the natural right of religious liberty. And of course the Declaration of Independence rests squarely on the laws of Nature and of Nature’s God as the foundation of unalienable natural rights, several of which were given formal status in the Constitution’s Bill of Rights. As the articles on Nat Turner’s slave rebellion in 1831, the Dred Scott case in 1857, the Civil Rights amendments of the 1860s, and the attempt at replacing plantation oligarchy with republican regimes in the states after the Civil War all show, natural rights have been the pivot of struggles over the character of America. Dr. Martin Luther King, Jr. and the early civil rights leaders invoked the Declaration and natural rights to argue for civic equality, a century after the Civil War. As a natural-rights republic, America rejects in principle race, class, and gender as bars to the protection of the rights to life, liberty, and the pursuit of happiness. In practice, Americans have often failed to live up to their principles—as human beings are wont to do—but the principles remain as their standard of right conduct.

A Democratic Republic

The Constitution itself begins with the phrase “We the People,” and the reason constitutional law governs all statutory laws is that the sovereign people ratified that Constitution. George Washington was elected as America’s first president, but he astonished the world by stepping down eight years later; he had no ambition to become another George III, or a Napoleon. The Democratic Party, which began to be formed by Thomas Jefferson and James Madison when they went into opposition against the Adams administration, named itself for this feature of the American regime. The Seventeenth Amendment to the Constitution, providing for popular election of U.S. Senators, the Nineteenth Amendment, guaranteeing voting rights for women, and the major civil rights laws of the 1960s all express the democratic theme in American public life.

An Extended Republic

Unlike the ancient democracies, which could only rule small territories, American republicanism gave citizens the chance of ruling themselves in a territory large enough to defend itself against the powerful states and empires that had arisen in modern times. All of this was contingent, however, on Jefferson’s idea that this extended republic would be an “empire of liberty,” by which he meant that new territories would be eligible to join the Union on an equal footing with the original thirteen states. Further, every state was to have a republican regime, as stipulated in the Constitution’s Article IV, Section 4. In this series of Constituting America essays, the extension of the extended republic is very well documented, from the 1803 Louisiana Purchase and the Lewis and Clark expedition to the Indian Removal Act of 1830 and the Mexican War of 1848, to the purchase of Alaska and the Transcontinental Railroad of the 1860s, to the Interstate Highway Act of 1956. The construction of the Panama Canal, the two world wars, and the Cold War all followed from the need to defend this large republic from foreign regime enemies and to keep the sea lanes open for American commerce.

A Commercial Republic

Although it has proven eminently capable of defending itself militarily, America was not intended to be a military republic, like ancient Rome and the First Republic of France. The Constitution prohibits interstate tariffs, making the United States a vast free-trade zone—something Europe could not achieve for another two centuries. We have seen Alexander Hamilton’s brilliant plan to retire the national debt after the Revolutionary War and the founding of the New York Stock Exchange in 1792. Above all, we have seen how the spirit of commercial enterprise leads to innovation: Samuel Morse’s telegraph; Alexander Graham Bell’s telephone; Thomas Edison’s phonograph and light bulb; the Wright Brothers’ flying machine; and Philo Farnsworth’s television. And we have seen how commerce in a free market can go wrong if the legislation and federal policies governing it are misconceived, as they often were before, during, and sometimes after the Great Depression.

A Federal Republic

A republic might be ‘unitary’—ruled by a single, centralized government. The American Founders saw that this would lead to an overbearing national government, one that would eventually undermine republican self-government itself. They gave the federal government enumerated powers, leaving the remaining governmental powers “to the States, or the People.” The Civil War was fought over this issue as well as slavery: the question of whether the American Union could defend itself against its internal enemies. The substantial centralization of federal government power seen in the New Deal of the 1930s, the Great Society legislation of the 1960s, and the Affordable Care Act of 2010 has renewed the question of how far such power is entitled to reach.

A Compound Republic

A simple republic would elect one branch of government to exercise all three powers: legislative, executive, and judicial. This was the way the Articles of Confederation worked. The Constitution ended that, providing instead for the separation and balance of those three powers. As the essays here have demonstrated, the compound character of the American republic has been eroded by such notions as ‘executive leadership’—a principle first enunciated by Woodrow Wilson but firmly established by Franklin Roosevelt and practiced by all of his successors—and ‘broad construction’ of the Constitution by the Supreme Court. The most dramatic struggle between the several branches of government in recent decades was the Watergate controversy, wherein Congress attempted to set limits on presidential claims of ‘executive privilege.’ Recent controversies over the use of ‘executive orders’ have reminded Americans of all political stripes that government by decree can gore anyone’s prized ox.

The classical political philosophers classified the forms of political rule, giving names to the several ‘regimes’ they saw around them. They emphasized the importance of regimes because regimes, they knew, designate who rules us, the institutions by which the rulers rule, the purposes of that rule, and finally the way of life of citizens or subjects. In choosing a republican regime on a democratic foundation, governing a large territory for commercial purposes with a carefully calibrated set of governmental powers, all intended to secure the natural rights of citizens according to the laws of Nature and of Nature’s God, the Founders set the course of human events on a new and better direction. Each generation of Americans has needed to understand the American way of life and to defend it.

Will Morrisey is Professor Emeritus of Politics at Hillsdale College, editor and publisher of Will Morrisey Reviews, an on-line book review publication.


 

Guest Essayist: Joerg Knipprath

On March 23, 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (“ACA”), sometimes casually referred to as “Obamacare,” a sobriquet that Obama himself embraced in 2013. The ACA covered 900 pages and hundreds of provisions. The law was so opaque and convoluted that legislators, bureaucrats, and Obama himself at times were unclear about its scope. For example, the main goal of the law was presented as providing health insurance to all Americans who previously were unable to obtain it due to, among other factors, lack of money or pre-existing health conditions. The law did increase the number of individuals covered by insurance, but stopped well short of universal coverage. Several of its unworkable or unpopular provisions were delayed by executive order. Others were subject to litigation to straighten out conflicting requirements. The ACA represented a probably not-yet-final step in the massive bureaucratization of health insurance and care over the past several decades, as health care moved from a private arrangement to a government-subsidized “right.”

The law achieved its objectives to the extent it did by expanding Medicaid eligibility to higher income levels and by significantly restructuring the “individual” policy market. In other matters, the ACA sought to control costs by further reducing Medicare reimbursements to doctors, which had the unsurprising consequence that Medicare patients found it still more difficult to get medical care, and by levying excise taxes on medical devices, drug manufacturers, health insurance providers, and high-benefit “Cadillac plans” set up by employers. The last of these was postponed and, along with most of the other taxes, repealed in December 2019. On the whole, existing employer plans and plans under collective-bargaining agreements were only minimally affected. Insurers had to cover defined “essential health services,” whether or not the purchaser wanted or needed those services. As a result, certain basic health plans that focused on “catastrophic events” coverage were deemed substandard and could no longer be offered. Hence, while coverage expanded, many people also found that the new, permitted plans cost them more than their prior coverage. They also found that the reality did not match Obama’s promise, “if you like your health care plan, you can keep your health care plan.”

The ACA required insurance companies to “accept all comers.” This policy would have the predictable effect that healthy (mostly young) people would forego purchasing insurance until a condition arose that required expensive treatment. That, in turn, would devastate the insurance market. Imagine being able to buy a fire policy to cover damage that had already arisen from a fire. Such policies would not be issued. Private, non-employer, health insurance plans potentially would disappear. Some commentators opined that this was exactly the end the reformers sought, at least secretly, so as to shift to a single-payer system, in other words, to “Medicare for all.” The ACA sought to address that problem by imposing an “individual mandate.” Unless exempt from the mandate, as illegal immigrants and 25-year-olds covered under their parents’ policies were, every person had to purchase insurance through their employer or individually from an insurer through one of the “exchanges.” Barring that, the person had to pay a penalty, to be collected by the IRS.

There have been numerous legal challenges to the ACA. Perhaps the most significant constitutional challenge was decided by the Supreme Court in 2012 in National Federation of Independent Business v. Sebelius (NFIB). There, the Court addressed the constitutionality of the individual mandate under Congress’s commerce and taxing powers, and of the Medicaid expansion under Congress’s spending power. These two provisions were deemed the keys to the success of the entire project.

Before the Court could address the case’s merits, it had to rule that the petitioners had standing to bring their constitutional claim. The hurdle was the Anti-Injunction Act. That law prohibited courts from issuing an injunction against the collection of any tax, in order to prevent litigation from obstructing tax collection. Instead, a party must pay the tax and sue for a refund to test the tax’s constitutionality. The issue turned on whether the individual mandate was a tax or a penalty. Chief Justice John Roberts concluded that Congress had described this “shared responsibility payment” if one did not purchase qualified health insurance as a “penalty,” not a “tax.” Roberts noted that other parts of the ACA imposed taxes, so that Congress’s decision to apply a different label was significant. Left out of the opinion was the reason that Congress made what was initially labeled a “tax” into a “penalty” in the ACA’s final version, namely, Democrats’ sensitivity about Republican allegations that the proposed bill raised taxes on Americans.

Having confirmed the petitioners’ standing, Roberts proceeded to the substantive merits of the challenge to the ACA. The government argued that the health insurance market (and health care, more generally) was a national market in which everyone would participate, sooner or later. While this is a likely event, it is by no means a necessary one, as a person might never seek medical services. If, for whatever reason, people did not have suitable insurance, the government claimed, they might not be able to pay for those services. Because hospitals are legally obligated to provide some services regardless of the patient’s ability to pay, hospitals would pass along their uncompensated costs to insured patients, whose insurance companies in turn would charge those patients higher premiums. The ACA’s broadened insurance coverage and “guaranteed-issue” requirements, subsidized by the minimum insurance coverage requirement, would ameliorate this cost-shifting. Moreover, the related individual mandate was “necessary and proper” to deal with the potential distortion of the market that would come from younger, healthier people opting not to purchase insurance as sought by the ACA.

Of course, Congress could pass laws under the Necessary and Proper Clause only to further its other enumerated powers, hence, the need to invoke the Commerce Clause. The government relied on the long-established, but still controversial, precedent of Wickard v. Filburn. In that 1942 case, the Court upheld a federal penalty imposed on farmer Filburn for growing wheat for home consumption in excess of his allotment under the Second Agricultural Adjustment Act. Even though Filburn’s total production was an infinitesimally small portion of the nearly one billion bushels grown in the U.S. at that time, the Court concluded, tautologically, that the aggregate of production by all farmers had a substantial effect on the wheat market. Thus, since Congress could act on overall production, it could reach all aspects of it, even marginal producers such as Filburn. The government claimed that the ACA’s individual mandate was analogous. Even if one healthy individual’s failure to buy insurance would scarcely affect the health insurance market, a large number of such individuals and of “free riders” failing to get insurance until after a medical need arose would, in the aggregate, have such a substantial effect.

Roberts, in effect writing for himself and the formally dissenting justices on that issue, disagreed. He emphasized that Congress has only limited, enumerated powers, at least in theory. Further, Congress might enact laws needed to exercise those powers. However, such laws must not only be necessary, but also proper. In other words, they must not themselves seek to achieve objectives not permitted under the enumerated powers. As opinions in earlier cases, going back to Chief Justice John Marshall in Gibbons v. Ogden had done, Roberts emphasized that the enumeration of congressional powers in the Constitution meant that there were some things Congress could not reach.

As to the Commerce Clause itself, the Chief Justice noted that Congress previously had only used that power to control activities in which parties first had chosen to engage. Here, however, Congress sought to compel people to act who were not then engaged in commercial activity. However broad Congress’s power to regulate interstate commerce had become over the years with the Court’s acquiescence, this was a step too far. If Congress could use the Commerce Clause to compel people to enter the market of health insurance, there was no other product or service Congress could not force on the American people.

This obstacle had caused the humorous episode at oral argument where the Chief Justice inquired whether the government could require people to buy broccoli. The government urged, to no avail, that health insurance was unique, in that people buying broccoli would have to pay the grocer before they received their wares, whereas hospitals might have to provide services and never get paid. Of course, the only reason hospitals might not get paid is because state and federal laws require them to provide certain services up front, and there is no reason why laws might not be adopted in the future that require grocers to supply people with basic “healthy” foods, regardless of ability to pay. Roberts also acknowledged that, from an economist’s perspective, choosing not to participate in a market may affect that market as much as choosing to participate. After all, both reflect demand, and a boycott has economic effects just as a purchasing fad does. However, to preserve essential constitutional structures, sometimes lines must be drawn that reflect considerations other than pure economic policy.

The Chief Justice was not done, however. Having rejected the Commerce Clause as support for the ACA, he embraced Congress’s taxing power, instead. If the individual mandate was a tax, it would be upheld because Congress’s power to tax was broad and applied to individuals, assets, and income of any sort, not just to activities, as long as its purpose or effect was to raise revenue. On the other hand, if the individual mandate was a “penalty,” it could not be upheld under the taxing power, but had to be justified as a necessary and proper means to accomplish another enumerated power, such as the commerce clause. Of course, that path had been blocked in the preceding part of the opinion. Hence, everything rested on the individual mandate being a “tax.”

At first glance it appeared that this avenue also was a dead end, due to Roberts’s decision that the individual mandate was not a tax for the purpose of the Anti-Injunction Act. On closer analysis, however, the Chief Justice concluded that something can be both a tax and not be a tax, seemingly violating the non-contradiction principle. Roberts sought to escape this logical trap by distinguishing what Congress can declare as a matter of statutory interpretation and meaning from what exists in constitutional reality. Presumably, Congress can define that, for the purpose of a particular federal law, 2+2=5 and the Moon is made of green cheese. In applying a statute’s terms, the courts are bound by Congress’s will, however contrary that may be to reason and ordinary reality.

However, when the question before a court is the meaning of an undefined term in the Constitution, an “originalist” judge will attempt to discern the commonly-understood meaning of that term when the Constitution was adopted, subject possibly to evolution of that understanding through long-adhered-to judicial, legislative, and executive usage. Here, Roberts applied factors the Court had developed beginning in Bailey v. Drexel Furniture Co. in 1922. Those factors compelled the conclusion that the individual mandate was, functionally, a tax. Particularly significant for Roberts was that the ACA limited the payment to less than the price for insurance, and that it was administered by the IRS through the normal channels of tax collection. Further, because the tax would raise substantial revenue, its ancillary purpose of expanding insurance coverage was of no constitutional consequence. Taxes often affect behavior, as captured in the old adage that if the government taxes something, it gets less of it.

Roberts’s analysis reads as the constitutional law analogue to quantum mechanics and the paradox of Schroedinger’s Cat, in that the individual mandate is both a tax and a penalty until it is observed by the Chief Justice. His opinion has produced much mirth—and frustration—among commentators, and there were inconvenient facts in the ACA itself. The mandate was in the ACA’s operative provisions, not its revenue provisions, and Congress referred to the mandate as a “penalty” eighteen times in the ACA. Still, he has a valid, if not unassailable, point. A policy that has the characteristics associated with a tax ordinarily is a tax. If Congress nevertheless consciously chooses to designate it as a penalty, then for the limited purpose of assessing the policy’s connection to another statute which carefully uses a different term, here the Anti-Injunction Act, the blame for any absurdity lies with Congress.

The Medicaid expansion under the ACA, as Congress structured it, was struck down. Under the Constitution, Congress may spend funds, subject to certain ill-defined limits. One of those is that the expenditure must be for the “general welfare.” Under classic republican theory, this meant that Congress could spend the revenue collected from the people of the several states on projects that would benefit the United States as a whole, not some constituent part, or an individual or private entity. It was under that conception of “general welfare” that President Grover Cleveland in 1887 vetoed a bill that appropriated $10,000 to purchase seeds to be distributed to Texas farmers hurt by a devastating drought. Since then, the phrase has been diluted to mean anything that Congress deems beneficial to the country, however remotely.

Moreover, while principles of federalism prohibit Congress from compelling states to enact federal policy—known as the “anti-commandeering” doctrine—Congress can provide incentives to states through conditional grants of federal funds. As long as the conditions are clear, relevant to the purpose of the grant, and not “coercive,” states are free to accept the funds with the conditions or to reject them. Thus, Congress can try to achieve indirectly through the spending power what it could not require directly. For example, Congress cannot, as of now, direct states to teach a certain curriculum in their schools. However, Congress can provide funds to states that teach certain subjects, defined in those grants, in their schools. The key issue usually is whether the condition effectively coerces the states to submit to the federal financial blandishment. If so, the conditional grant is unconstitutional because it reduces the states to mere satrapies of the federal government rather than quasi-sovereigns in our federal system.

In what was a judicial first, Roberts found that the ACA unconstitutionally coerced the states into accepting the federal grants. Critical to that conclusion was that a state’s failure to accept the ACA’s expansion of Medicaid would result not just in the state being ineligible to receive federal funds for the new coverage. Rather, the state would lose all of its existing Medicaid funding. As well, here the program affected—Medicaid—accounted for over 20% of the typical state’s budget. Roberts described this as “economic dragooning that leaves the States with no real option but to acquiesce in the Medicaid expansion.” Roberts noted that the budgetary impact on a state from rejecting the expansion dwarfed anything triggered by a refusal to accept federal funds under previous conditional grants.

One peculiarity of the opinions in NFIB was the stylistic juxtaposition of Roberts’s opinion for the Court and the principal dissent, penned by Justice Antonin Scalia. Roberts at one point uses “I” to defend a point of law he makes, which is common in dissents or concurrences, instead of the typical “we” or “the Court” used by a majority. By contrast, Scalia consistently uses “we” (such as “We conclude that [the ACA is unconstitutional].” and “We now consider respondent’s second challenge….”), although that might be explained because he wrote for four justices, Anthony Kennedy, Clarence Thomas, Samuel Alito, and himself. He also refers to Justice Ruth Bader Ginsburg’s opinion broadly as “the dissent.” Most significant, Scalia’s entire opinion reads like that of a majority. He surveys the relevant constitutional doctrines more magisterially than does the Chief Justice, even where he and Roberts agree, something that dissents do not ordinarily do. He repeatedly and in detail criticizes the government’s arguments and the “friend-of-the-court” briefs that support the government, tactics commonly used by the majority opinion writer.

These oddities have provoked much speculation, chiefly that Roberts initially joined Scalia’s opinion, which would have made it the majority opinion, but got cold feet. Rumor spread that Justice Anthony Kennedy had attempted until shortly before the decision was announced to persuade Roberts to rejoin the Scalia group. Once that proved fruitless, it was too late to make anything but cosmetic changes to Scalia’s opinion for the four now-dissenters. Only the justices know what actually happened, but the scenario seems plausible.

Why would Roberts do this? Had Scalia’s opinion prevailed, the ACA would have been struck down in its entirety. That would have placed the Court in a difficult position, especially during an election year, having exploded what President Obama considered his signature achievement. The President already had a fractious relationship with the Supreme Court and earlier had made what some interpreted as veiled political threats against the Court over the case. Roberts’s “switch in time” blunted that. The chief justice is at most primus inter pares, having no greater formal powers than his associates. But he is often the public and political figurehead of the Court. Historically, chief justices have been more “political” in the sense of being finely attuned to maintaining the institutional vitality of the Court. John Marshall, William Howard Taft, and Charles Evans Hughes especially come to mind. Associate justices can be jurisprudential purists, often through dissents, to a degree a chief justice cannot.

Choosing his path allowed Roberts to uphold the ACA in part, while striking jurisprudential blows against the previously constant expansion of the federal commerce and spending powers. Even as to the taxing power, which he used to uphold that part of the ACA, Roberts planted a constitutional land mine. Should the mandate ever be made really effective, if Congress raised it above the price of insurance, the “tax” argument would fail and a future court could strike it down as an unconstitutional penalty. Similarly, if the tax were repealed, as eventually happened, and the mandate were no longer supported under the taxing power, it could threaten the entire ACA.

After NFIB, attempts to modify or eliminate the ACA through legislation or litigation continued, with mixed success. Noteworthy is that the tax payment for the individual mandate was repealed in 2017. This has produced a new challenge to the ACA as a whole, because the mandate is, as the government conceded in earlier arguments, a crucial element of the whole health insurance structure. The constitutional question is whether the mandate is severable from the rest of the ACA. The district court held that the mandate was no longer a tax and, thus, under NFIB, is unconstitutional. Further, because of the significance that Congress attached to the mandate for the vitality of the ACA, the mandate could not be severed from the ACA, and the entire law is unconstitutional. The Fifth Circuit agreed that the mandate is unconstitutional, but disagreed about the extent to which that affects the rest of the ACA. The Supreme Court will hear the issue in its 2020-2021 term in California v. Texas.

On the political side, the American public seems to support the ACA overall, although, or perhaps because, it has been made much more modest than its proponents had planned. So, the law, somewhat belatedly and less boldly, achieved a key goal of President Obama’s agenda. That success came at a stunning political cost to the President’s party, however. The Democrats hemorrhaged over 1,000 federal and state legislative seats during Obama’s tenure. In 2010 alone, they lost a historic 63 House seats, the biggest mid-term election rout since 1938, plus 6 Senate seats. The moderate “blue-dog” Democrats who had been crucial to the passage of the ACA were particularly hard hit. Whatever the ACA’s fate turns out to be in the courts, the ultimate resolution of controversial social issues remains with the people, not lawyers and judges.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/


Guest Essayist: Scot Faulkner

For those old enough to remember, September 11, 2001, 9:03 a.m. is burned into our collective memory. It was at that moment that United Flight 175 crashed into the South Tower of the World Trade Center in New York City.

Everyone was watching. American Airlines Flight 11 had crashed into the North Tower seventeen minutes earlier. For those few moments there was uncertainty whether the first crash was a tragic accident. Then, on live television, the South Tower fireball vividly announced to the world that America was under attack.

The nightmare continued. As horrifying images of people trapped in the burning towers riveted the nation, news broke at 9:37 a.m. that American Flight 77 had plowed into the Pentagon.

For the first time since December 7, 1941, Americans were collectively experiencing full-scale carnage from a coordinated attack on their soil.

The horror continued as the twin towers collapsed, sending clouds of debris throughout lower Manhattan and igniting fires in adjoining buildings. Questions filled the minds of government officials and every citizen: How many more planes? What were their targets? How many have died? Who is doing this to us?

At 10:03 a.m., word came that United Flight 93 had crashed into a Pennsylvania field. Speculation exploded as to what happened. Later investigations revealed that Flight 93 passengers, alerted by cell phone calls of the earlier attacks, revolted, causing the plane to crash. Their heroism prevented this final hijacked plane from destroying the U.S. Capitol Building.

The final accounting was devastating: 2,977 killed and over 25,000 injured. The death toll continues to climb to this day as first responders and building survivors perish from respiratory conditions caused by inhaling the chemical-laden smoke. It was the deadliest terrorist attack in human history.

How this happened, why this happened, and what happened next compounds the tragedy.

Nineteen terrorists, most from Saudi Arabia, were part of a radical Islamic terrorist organization called al-Qaeda, “the Base.” This was the name given to the training camp for the radical Islamicists who fought the Soviets in Afghanistan.

Khalid Sheikh Mohammed, a Pakistani, was the primary organizer of the attack. Osama Bin Laden, a Saudi, was the leader and financier. Their plan was based upon an earlier failed effort in the Philippines. It was mapped out in late 1998. Bin Laden personally recruited the team, drawn from experienced terrorists. They insinuated themselves into the U.S., with several attending pilot training classes. Five-man teams would board the four planes, overpower the pilots, and fly them as bombs into significant buildings.

They banked on plane crews and passengers responding to decades of “normal” hijackings. They would assume the plane would be commandeered, flown to a new location, demands would be made, and everyone would live. This explains the passivity on the first three planes. Flight 93 was different, because it was delayed in its departure, allowing time for passengers to learn about the fate of the other planes. Last minute problems also reduced the Flight 93 hijacker team to only four.

The driving force behind the attack was Wahhabism, a highly strict, anti-Western version of Sunni Islam.

The Saudi Royal Family owes its rise to power to Muhammad ibn Abd al-Wahhab (1703-1792). He envisioned a “pure” form of Islam that purged most worldly practices (heresies), oppressed women, and endorsed violence against nonbelievers (infidels), including Muslims who differed with his sect. This extremely conservative and violent form of Islam might have died out in the sands of central Arabia were it not for a timely alliance with a local tribal leader, Muhammad bin Saud.

The House of Saud was just another minor tribe until the two Muhammads realized the power of merging Sunni fanaticism with armed warriors. Wahhab’s daughter married Saud’s son, merging the two bloodlines, a union that endures to this day. The House of Saud and its warriors rapidly expanded throughout the Arabian Peninsula, fueled by Wahhabi fanaticism. These various conflicts always included destruction of holy sites of rival sects and tribes. While done in the name of “purification,” the result was erasing the physical touchstones of rival cultures and governments.

In the early 20th century, Saudi leader ibn Saud expertly exploited the decline of the Ottoman Empire, and alliances with European powers, to consolidate his permanent hold over the Arabian Peninsula. Control of Mecca and Medina, Islam’s two holiest sites, gave the House of Saud the power to promote Wahhabism as the dominant interpretation of Sunni Islam. This included the internally contradictory components of calling for eradicating infidels while growing rich from Christian consumption of oil and pursuing lavish hedonism when not in public view.

In the mid-1970s, Saudi Arabia used the flood of oil revenue to become the “McDonald’s of madrassas.” Religious schools and new mosques popped up throughout Africa, Asia, and the Middle East. This building boom had nothing to do with education and everything to do with spreading the cult of Wahhabism. Pakistan became a major hub for turning Wahhabi madrassa graduates into dedicated terrorists.

Wahhabism may have remained a violent, dangerous, but diffused movement, except it found fertile soil in Afghanistan.

Afghanistan was called the graveyard of empires, as its rugged terrain and fierce tribal warriors thwarted potential conquerors for centuries. In 1973, the last king of Afghanistan was deposed, leading to years of instability. In April 1978, the opposition Communist Party seized control in a bloody coup. The communists tried brutally to consolidate power, which ignited a civil war among factions supported by Pakistan, China, Islamists (known as the Mujahideen), and the Soviet Union. Amidst the chaos, U.S. Ambassador Adolph Dubs was killed on February 14, 1979.

On December 24, 1979, the Soviet Union invaded Afghanistan, killing their ineffectual puppet President, and ultimately bringing over 100,000 military personnel into the country. What followed was a vicious war between the Soviet military and various Afghan guerrilla factions. Over 2 million Afghans died.

The Reagan Administration covertly supported the anti-Soviet Afghan insurgents, primarily aiding the secular pro-West Northern Alliance. Arab nations supported the Mujahideen. Bin Laden entered the insurgent cauldron as a Mujahideen financier and fighter. By 1988, the Soviets realized their occupation had failed. They removed their troops, leaving behind another puppet government and a Soviet-trained military.

When the Soviet Union collapsed, Afghanistan was finally free. Unfortunately, calls for reunifying the country by reestablishing the monarchy and strengthening regional leadership went unheeded. Attempts at recreating the pre-invasion, faction-ravaged parliamentary system only led to new rounds of civil war.

In September 1994, the weak U.S. response opened the door for the Taliban, graduates from Pakistan’s Wahhabi madrassas, to launch their crusade to take control of Afghanistan. By 1998, the Taliban controlled 90% of the country.

Bin Laden and his al-Qaeda warriors made Taliban-controlled territory in Afghanistan their new base of operations. In exchange, Bin Laden helped the Taliban eliminate their remaining opponents. This was accomplished on September 9, 2001, when suicide bombers disguised as a television camera crew blew up Ahmad Shah Massoud, the charismatic pro-West leader of the Northern Alliance.

Two days later, Bin Laden’s plan to establish al-Qaeda as the global leader of Islamic terrorism was implemented by hijacking four planes and turning them into guided bombs.

The 9-11 attacks, along with the earlier support against the Soviets in Afghanistan, were part of Bin Laden’s goal to lure infidel governments into “long wars of attrition in Muslim countries, attracting large numbers of jihadists who would never surrender.” He believed this would lead to economic collapse of the infidels, by “bleeding” them dry. Bin Laden outlined his strategy of “bleeding America to the point of bankruptcy” in a 2004 tape released through Al Jazeera.

On September 14, amidst the World Trade Center rubble, President George W. Bush used a bullhorn to address those recovering bodies and extinguishing fires:

“The nation stands with the good people of New York City and New Jersey and Connecticut as we mourn the loss of thousands of our citizens.”

A rescue worker yelled, “I can’t hear you!”

President Bush spontaneously responded: “I can hear you! The rest of the world hears you! And the people who knocked these buildings down will hear all of us soon!”

Twenty-three days later, on October 7, 2001, American and British warplanes, supplemented by cruise missiles fired from naval vessels, began destroying Taliban operations in Afghanistan.

U.S. Special Forces entered Afghanistan. Working with the Northern Alliance, they defeated major Taliban units. They occupied Kabul, the Afghan capital, on November 13, 2001.

On May 2, 2011, U.S. Special Forces raided an al-Qaeda compound in Abbottabad, Pakistan, killing Osama bin Laden.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: The Honorable Don Ritter

In October of 1989, hundreds of thousands of East German citizens demonstrated in Leipzig, following a pattern of demonstrations for freedom and human rights throughout Eastern Europe and the first-ever free election in a Communist country, Poland, in the spring of 1989. Hungary had opened its southern border with Austria and East Germans seeking a better life were fleeing there. Czechoslovakia had done likewise on its western border and the result was the same.

The East German government had been on edge and was seeking to reduce domestic tensions by granting limited passage of its citizens to West Germany. And that’s when the dam broke.

On November 9, 1989, thousands of elated East Berliners started pouring into West Berlin. There was a simple bureaucratic error earlier in the day when an East German official read a press release he hadn’t previously studied and proclaimed that residents of Communist East Berlin were permitted to cross into West Berlin, freely and, most importantly, immediately. He had missed the end of the release which instructed that passports would be issued in an orderly fashion when government offices opened the next day.

This surprising information about free passage spread throughout East Berlin, East Germany and, indeed, around the world like a lightning bolt. Massive crowds gathered near-instantaneously and celebrated at the heavily guarded Wall gates which, in a party-like atmosphere amid total confusion, were opened by the hard-core communist yet totally outmanned border police, who normally had orders to shoot-to-kill anyone attempting to escape. A floodgate was opened and an unstoppable flood of freedom-seeking humanity passed through, unimpeded.

Shortly thereafter, the people tore down the Wall with every means available. The clarion bell had been sounded and the reaction across communist Eastern Europe was swift. Communist governments fell like dominoes.

The Wall itself was a glaring symbol of totalitarian communist repression and the chains that bound satellite countries to the communist Soviet Union. But the “bureaucratic error” of a low-level East German functionary was the match needed to set off an explosion of freedom that had been years in-the-making throughout the 1980s. And that is critical to understanding just why the Cold War came to an end, precipitously and symbolically, with the fall of the Wall.

With the election of Ronald Reagan to the presidency of the United States, Margaret Thatcher to Prime Minister of Great Britain, and the Polish cardinal Karol Wojtyła becoming Pope John Paul II of the Roman Catholic Church, the foundation was laid in the 1980s for freedom movements in Soviet Communist-dominated Eastern Europe to evolve and grow. Freedom lovers and fighters had friends in high places who believed deeply in their cause. These great leaders of the West understood the enormous human cost of communist rule and were eager to fight back in their own unique and powerful way, leading their respective countries and allies in the process.

Historic figures like labor leader Lech Walesa, head of the Polish Solidarity movement, and Czech playwright Vaclav Havel, an architect of the Charter 77 call for basic human rights, had already planted the seeds for historic change. Particularly in Poland, where the combination of Solidarity and the Catholic Church was supported staunchly in the non-communist world by Reagan and Thatcher, anti-communism flourished despite repression and brutal crackdowns.

And then there was a new General Secretary of the Communist Party of the Soviet Union, Mikhail Gorbachev. When he came to power in 1985, he sought to exhort workers to increase productivity in the economy, stamp out the resistance to Soviet occupation in Afghanistan via a massive bombing campaign, and keep liquor stores closed till 2:00 pm. However, exhortation didn’t work and the economy continued to decline; Americans gave Stinger missiles to the Afghan resistance and the bombing campaign failed; and liquor stores were regularly broken into by angry citizens not to be denied their vodka. The Afghan war was a body blow to a Soviet military that was supposedly ‘always victorious,’ and Soviet mothers protested their sons coming back in body bags. The elites (“nomenklatura”) were taken aback and demoralized by what was viewed as a military debacle in a then Fourth World country. “Aren’t we supposed to be a superpower?”

Having failed at run-of-the-mill Soviet responses to problems, Gorbachev embarked on a bold-for-the-USSR effort to restructure the failing Soviet economy via Perestroika which sought major reform but within the existing burdensome central-planning bureaucracy. On the political front, he introduced Glasnost, opening discussion of social and economic problems heretofore forbidden since the USSR’s beginning. Previously banned books were published. Working and friendly relationships with President Reagan and Margaret Thatcher were also initiated.

In the meantime, America under President Reagan’s leadership was not only increasing its military strength in an accelerated and expensive arms race but was also opposing Soviet-backed communist regimes and their so-called “wars of national liberation” all over the world. The Cold War turned hot under the Reagan Doctrine. President Reagan also pushed “Star Wars,” an anti-ballistic missile system that could potentially neutralize Soviet long-range missiles. Star Wars, even if far off in the future, worried the military and communist leadership of a Soviet Union that lagged badly in electronics and computer technology.

Competing economically and militarily with a resurgent anti-communist American engine firing on all cylinders became too expensive for the economically and technologically disadvantaged Soviet Union. There are those who say the USSR collapsed of its own weight, but they are wrong. If that were so, a congenitally overweight USSR would have collapsed a lot earlier. Gorbachev deserves a lot of credit, to be sure, but there should be no doubt: he and the USSR were encouraged to shift gears and change course. Unfortunately for communist rulers, their reforms initiated a downward spiral in their ability to control their citizens. Totalitarian control was first diminished and then lost. Author’s note: A lesson which was not lost on the rulers of Communist China.

Summing up: A West with economic and military backbone plus spiritual leadership, combined with brave dissident and human rights movements in Eastern Europe and the USSR itself, forced changes in the behavior of the communist monolith. Words and deeds mattered. When Ronald Reagan called the Soviet Union an “evil empire” in 1983, the media and political opposition worldwide were aghast… but in the Soviet Gulag, political prisoners rejoiced. When President Reagan said “Mr. Gorbachev, tear down this wall,” consternation reigned in the West… but the people from East Germany to the Kremlin heard it loud and clear.

And so fell the Berlin Wall.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan’s presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral Fellow during Leonid Brezhnev’s time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Danny de Gracia

It’s hard to believe that this year marks thirty years since Saddam Hussein invaded Kuwait in August of 1990. In history, some events can be said to be turning points for civilization that set the world on fire, and in many ways, our international system has not been the same since the invasion of Kuwait.

Today, the Iraq that went to war against Kuwait is no more, and Saddam Hussein himself is long dead, but the battles that were fought, the policies that resulted, and the history that followed will haunt the world for many more years to come.

Iraq’s attempts to annex Kuwait in 1990 would bring some of the most powerful nations into collision, and would set in motion a series of events that would give rise to the Global War on Terror, the rise of ISIS, and an ongoing instability in the region that frustrates the West to this day.

To understand the beginning of this story, one must go back in time to the Iranian Revolution in 1979, when a crucial ally of the United States of America at the time – Iran – was in turmoil because of public discontent with the leadership of its shah, Mohammad Reza Pahlavi.

Iran’s combination of oil resources and strategic geographic location made it highly profitable for the shah and his allies in the West over the years, and a relationship emerged where Iran’s government, flush with oil money, kept America’s defense establishment in business.

For years, the shah had been permitted to purchase nearly any American weapons system he pleased, no matter how advanced or powerful it might be, and Congress was only too pleased to give it to him.

The Vietnam War had broken the U.S. military and hollowed out the resources of the armed services, but the defense industry needed large contracts if it was to continue to support America.

Few people realize that Iran, under the Shah, was one of the most important client states in the immediate post-Vietnam era, making it possible for America to maintain production lines of top-of-the-line destroyers, fighter planes, engines, missiles, and many other vital elements of the Cold War’s arms race against the Soviet Union. As an example, the Grumman F-14A Tomcat, America’s premier naval interceptor of 1986 “Top Gun” fame, would never have been produced in the first place if it were not for the commitment of the Iranians as a partner nation in the first batch of planes.

When the Iranian Revolution occurred, an embarrassing ulcer to American interests emerged in Western Asia, as one of the most important gravity centers of geopolitical power had slipped out of U.S. control. Iran, led by an ultra-nationalistic religious revolutionary government, and armed with what was at the time some of the most powerful weapons in the world, had gone overnight from trusted partner to sworn enemy.

Historically, U.S. policymakers typically prefer to contain and buffer enemies rather than directly opposing them. Iraq, which had also gone through a regime change in July of 1979 with the rise of Saddam Hussein in a bloody Baath Party purge, was a rival to Iran, making it a prime candidate to be America’s new ally in the Middle East.

The First Persian Gulf War: A Prelude

Hussein, a brutal, transactional-minded leader who rose to power through a combination of violence and political intrigue, never failed to exploit an opportunity. Recognizing Iran’s potential to overshadow a region he deemed himself alone worthy to dominate, Hussein used the historical disagreement over ownership of the strategic yet narrow Shatt al-Arab waterway that divided Iran from Iraq to start a war on September 22, 1980.

Iraq, flush with over $33 billion in oil profits, had become formidably armed with a modern military that was supplied by numerous Western European states and, bizarrely, even the Soviet Union as well. Hussein, like Nazi Germany’s Adolf Hitler, had a fascination for superweapons and sought to amass a high-tech military force that could not only crush Iran, but potentially take over the entire Middle East.

Hussein’s bizarre arsenal would eventually include everything from modified Soviet ballistic missiles (the “al-Hussein”) to Dassault Falcon 50 corporate jets modified to carry anti-ship missiles, a nuclear weapons program at Osirak, and even work on a supergun capable of firing telephone booth-sized projectiles into orbit, nicknamed Project Babylon.

Assured of a quick campaign against Iran and tacitly supported by the United States, Hussein saw anything but a decisive victory, and spent almost a decade in a costly war of attrition with Iran. Hussein, who constantly executed his own military officers for making tactical withdrawals or failing in combat, denied his military the ability to learn from defeats and handicapped his army by his own micromanagement.

Iraq’s Pokémon-like “gotta catch ‘em all” model of military procurement during the war even briefly put it at odds with the United States on May 17, 1987, when one of its heavily armed Falcon 50 executive jets, disguised on radar as a Mirage F1EQ fighter, accidentally launched a French-acquired Exocet missile against a U.S. Navy frigate, the USS Stark. President Ronald Reagan, though privately horrified at the loss of American sailors, still considered Iraq a necessary counterweight to Iran, and used the Stark incident to increase political pressure on Iran.

While Iraq had begun its war against Iran in the black, years of excessive military spending, meaningless battles, and rampant destruction of the Iraqi army had taken their toll. Hussein’s war had put the country over $37 billion in debt, much of which was owed to neighboring Kuwait.

Faced with a strained economy, tens of thousands of soldiers returning wounded from the war, and a military that was virtually on the brink of deposing Saddam Hussein just as he had deposed his predecessor Ahmed Hassan al-Bakr in 1979, Iraq had no choice but to end its war against Iran.

Both Iran and Iraq would ultimately submit to a UN-brokered ceasefire, but ironically, one of the decisive elements in bringing the first Persian Gulf War to a close would be not the militaries of either country, but the U.S. military, when it launched a crippling air and naval attack against Iranian forces on April 18, 1988.

Iran, which had mined important sailing routes of the Persian Gulf as part of its area denial strategy during the war, succeeded on April 14, 1988 in striking the USS Samuel B. Roberts, an American frigate deployed to the region to protect shipping.

In response, the U.S. military launched Operation: Praying Mantis, which hit Iranian oil platforms (since reconfigured as offensive gun platforms), naval vessels, and other military targets. The battle, so overwhelming in scope that it remains to this day the largest carrier and surface ship battle since World War II, resulted in the destruction of most of Iran’s navy and was a major contributing factor in de-fanging Iran for the next decade.

Kuwait and Oil

Saddam Hussein, claiming victory over Iran amidst the UN ceasefire, and now faced with a new U.S. president, George H.W. Bush, in 1989, felt that the time was right to consolidate his power and pull his country back from collapse. In Hussein’s mind, he had been the “savior” of the Arab and Gulf States, the one who had protected them during the Persian Gulf War against the encroachment of Iranian influence. As such, he sought from Kuwait a forgiveness of the debts incurred in the war with Iran, but would find no such sympathy. The 1990s were just around the corner, and Kuwait had ambitions of its own to grow in the new decade as a leading economic powerhouse.

Frustrated and outraged by what he perceived as a snub, Hussein once more reached into his playbook of leveraging territorial disputes for political gain and accused Kuwait of stealing Iraqi oil by means of horizontal slant drilling into the Rumaila oil fields of southern Iraq.

Kuwait found itself in the unenviable position of neighboring a heavily armed Iraq, and as talks over debt and oil continued, the mighty Iraqi Republican Guard appeared to be gearing up for war. Most political observers at the time, including many Arab leaders, felt that Hussein was merely posturing and that it was all a grand bluff to maintain his image as a strong leader. For Hussein to invade a neighboring Arab ally was unthinkable at the time, especially given Kuwait’s position as an oil producer.

On July 25, 1990, U.S. Ambassador to Iraq, April Glaspie, met with President Saddam Hussein and his deputy, Tariq Aziz on the topic of Kuwait. Infamously, Glaspie is said to have told the two, “We have no opinion on your Arab/Arab conflicts, such as your dispute with Kuwait. Secretary Baker has directed me to emphasize the instruction, first given to Iraq in the 1960s, that the Kuwait issue is not associated with America.”

While the George H.W. Bush administration obviously intended to take no side in a regional territorial dispute, Hussein, whose personality was direct and confrontational, likely interpreted the Glaspie meeting as America backing down.

In the Iraqi leader’s eyes, one always takes the initiative and always shows an enemy one’s dominance. For a powerful country such as the United States to tell Hussein that it had “no opinion” on an Arab/Arab conflict was, to him, most likely a sign of permission or even weakness that he had to exploit.

America, still reeling from the shadow of the Vietnam War failure and the disastrous Navy SEAL incident during Operation: Just Cause in Panama, may have appeared in that moment to Hussein as a paper tiger that could be out-maneuvered or deterred by aggressive action. Whatever the case, Iraq stunned the world when, just days later, on August 2, 1990, it invaded Kuwait.

The Invasion of Kuwait

American military forces and intelligence agencies had been closely monitoring the buildup of Iraqi forces for what appeared to be an invasion of Kuwait, but it was still believed right up to the moment of the attack that perhaps Saddam Hussein was only bluffing. United States Central Command had set WATCHCON 1 – Watch Condition One, the highest state of non-nuclear alertness – in the region just prior to Iraq’s attack, and was regularly employing satellites, reconnaissance aircraft, and electronic surveillance platforms to observe the Iraqi Army.

Nevertheless, if there is one mantra that encapsulates the posture of the United States and the European powers from the beginning of the 20th century to the present, it is “Western countries too slow to act.” As is often the result with aggressive nations that challenge the international order, Iraq plowed into Kuwait and savaged the local population.

While America and her allies have always had the best technologies, the best weapons, and the best early warning systems and sensors, for more than a century these have been rendered useless because the information they provide is often not acted upon to stop an attack or threat. Such was the case with Iraq, where all of the warning signs of an imminent attack were present, but no action was taken to stop it.

Kuwait’s military courageously fought Iraq’s invading army, and even notably fought air battles with their American-made A-4 Skyhawks, some of them launching from highways after their air bases were destroyed. But the Iraqi army, full of troops who had fought against Iran and equipped with the fourth largest military in the world at that time, was simply too powerful to overcome. 140,000 Iraqi troops flooded into Kuwait and seized one of the richest oil producing nations in the region.

As Hussein’s military overran Kuwait, sealed its borders, and began plundering the country and ravaging its civilian population, the worry of the United States immediately shifted from Kuwait to Saudi Arabia, for fear that the kingdom might be next. On August 7, 1990, President Bush commenced “Operation: Desert Shield,” a military operation to defend Saudi Arabia and prevent any further advance of the Iraqi army.

At the time Operation: Desert Shield commenced, I was living in Hampton Roads, Virginia, where my father was a lieutenant colonel assigned to Tactical Air Command headquarters at nearby Langley Air Force Base. Forty-eight F-15 Eagle fighter planes from that base immediately deployed to the Middle East in support of the operation. In the days that followed, our base became a flurry of activity, and I remember seeing a huge buildup of combat aircraft from all around the United States forming at Langley.

President Bush, himself a U.S. Navy pilot who fought in World War II, was all too familiar with what could happen when a megalomaniacal dictator started invading his neighbors. Whatever congeniality of convenience had existed between the U.S. and Iraq in opposing Iran was now a thing of the past in the wake of the occupation of Kuwait.

Having fought in World War II, Bush saw much of Adolf Hitler in Saddam Hussein, and immediately began comparing the Iraqi leader and his government to the Nazis in numerous speeches and public appearances as debates raged over what the U.S. should do regarding Kuwait.

As former members of previous presidential administrations urged caution and called for long-term sanctions on Iraq rather than a kinetic military response, the American public, still haunted by the Vietnam experience, largely felt that the matter in Kuwait was not a concern that should involve military forces. Protests began to break out across America, with crowds shouting “Hell no, we won’t go to war for Texaco” and others singing traditional protest songs of peace like “We Shall Overcome.”

Bush, persistent in his belief that Iraq’s actions were intolerable, made every effort to keep taking the moral case for action to the American public in spite of this pushback. As a leader seasoned by the horrors of war and combat, Bush must have known, as Henry Kissinger once said, that leadership is not about popularity polls, but about “an understanding of historical cycles and courage.”

On September 11, 1990, before a joint session of Congress, Bush gave a fiery address that to this day still stands as one of the most impressive presidential addresses in history.

“Vital issues of principle are at stake. Saddam Hussein is literally trying to wipe a country off the face of the Earth. We do not exaggerate,” President Bush would say before Congress. “Nor do we exaggerate when we say Saddam Hussein will fail. Vital economic interests are at risk as well. Iraq itself controls some 10 percent of the world’s proven oil reserves. Iraq, plus Kuwait, controls twice that. An Iraq permitted to swallow Kuwait would have the economic and military power, as well as the arrogance, to intimidate and coerce its neighbors, neighbors who control the lion’s share of the world’s remaining oil reserves. We cannot permit a resource so vital to be dominated by one so ruthless, and we won’t!”

Members of Congress erupted in roaring applause at Bush’s words, and he issued a stern warning to Saddam Hussein: “Iraq will not be permitted to annex Kuwait. And that’s not a threat, that’s not a boast, that’s just the way it’s going to be.”

Ejecting Saddam from Kuwait

As America prepared for action, another man in Saudi Arabia was also making promises to defeat Saddam Hussein and his military. Osama bin Laden, who had participated in the earlier war in Afghanistan as part of the Mujahideen that resisted the Soviet occupation, now offered his services to Saudi Arabia, pledging to wage jihad to force Iraq out of Kuwait in the same way that he had forced the Soviets out of Afghanistan. The Saudis, however, would hear none of it; having already secured the protection of the United States and its powerful allies, they brushed bin Laden aside as a useless bit player on the world stage.

Herein the seeds of a future conflict were sown: not only did bin Laden take offense at being rejected by the Saudi government, but he also saw the presence of American military forces on holy Saudi soil as blasphemous and a morally corrupting influence on the Saudi people.

In fact, the presence of female U.S. Air Force personnel in Saudi Arabia, seen without traditional cover or driving vehicles, prompted many Saudi women to begin petitioning their government – and even, in some instances, committing acts of civil disobedience – for more rights. This caused even more outrage among a number of fundamentalist groups in Saudi Arabia, and lent additional support, albeit covert in some instances, to bin Laden and other jihadist leaders.

Despite these cultural tensions boiling beneath the surface, President Bush successfully persuaded not only his own Congress but the United Nations as well to empower the formation of a global coalition of 35 nations to eject Iraqi occupying forces from Kuwait and to protect Saudi Arabia and the rest of the Gulf from further aggression.

On November 29, 1990, the die was cast when the United Nations passed Resolution 678, authorizing “Member States co-operating with the Government of Kuwait, unless Iraq on or before 15 January 1991 [withdraws from Kuwait] … to use all necessary means … to restore international peace and security in the area.”

Subsequently, on January 15, 1991, President Bush issued an ultimatum to Saddam Hussein to leave Kuwait. Hussein ignored the threat, believing that America was weak and that its public, scarred by the prior experience in Vietnam, was easily susceptible to knee-jerk reactions at the sight of losing soldiers. Hussein believed that he could not only cause the American people to back down, but that he could unravel Arab support for the UN coalition by enticing Israel to attack Iraq. As such, he persisted in occupying Kuwait and boasted that a “Mother of all Battles” was to commence, in which Iraq would emerge victorious.

History, however, shows us that this was not the case. Days later, on the evening of January 16, 1991, Operation: Desert Shield became Operation: Desert Storm when a massive aerial bombardment and air superiority campaign commenced against Iraqi forces. Unlike prior wars, which combined a ground invasion with supporting air forces, the start of Desert Storm was a bombing campaign consisting of heavy attacks by aircraft and naval-launched cruise missiles against Iraq.

The operational name “Desert Storm” may have been influenced in part by a war plan developed months earlier by Air Force planner Colonel John A. Warden, who conceived an attack strategy named “Instant Thunder” that used conventional, non-nuclear airpower in a precise manner to topple Iraqi defenses.

A number of elements from Warden’s top secret plan were integrated into the opening shots of Desert Storm’s air campaign, as U.S. and coalition aircraft knocked out Iraqi radars, missile sites, command headquarters, power stations, and other key targets in just the first night alone.

In contrast to the Vietnam air campaigns, which were largely political and gradual escalations of force, the Air Force, having suffered heavy losses in Vietnam, wanted, as General Chuck Horner would later explain, “instant” and “maximum” escalation so that its enemies would have no time to react or rearm.

This was precisely what happened, to the point that the massive Iraqi air force was either annihilated by bombs on the ground, shot down by coalition combat air patrols, or forced to flee into neighboring Iran.

A number of radical operations and new weapons were employed in the air campaign of Desert Storm. For one, the U.S. Air Force had secretly converted a number of nuclear AGM-86 Air Launched Cruise Missiles (ALCMs) into conventional, high-explosive precision-guided missiles and loaded them onto 57 B-52 bombers for a January 17 night raid called Operation: Senior Surprise.

Known internally and informally to the B-52 pilots as “Operation: Secret Squirrel,” the cruise missiles knocked out numerous Iraqi defenses and opened the door for more coalition aircraft to surge against Saddam Hussein’s military.

The Navy also employed covert strikes against Iraq, firing BGM-109 Tomahawk Land Attack Missiles (TLAMs) that had likewise been converted to carry high-explosive (non-nuclear) warheads. Because the early BGM-109s were guided and aimed by a primitive digital scene matching area correlator (DSMAC) that took digital photos of the ground below and compared them with pre-programmed topography in its terrain computer, the flat deserts of Iraq were thought to be problematic for cruise missile employment. The Navy came up with a highly controversial solution: secretly fire cruise missiles into Iran – a violation of Iranian airspace and international law – then turn them toward the mountain ranges as aiming points and fly them into Iraq.

The plan worked, however, and the Navy would ultimately rain down on Iraq some 288 TLAMs that destroyed hardened hangars, runways, parked aircraft, command buildings, and scores of other targets in highly accurate strikes.

Part of the air war came home personally to me when a U.S. Air Force B-52, serial number 58-0248, participating in a nighttime raid over Iraq, was accidentally fired upon by a friendly F-4G “Wild Weasel” that mistook the lumbering bomber’s AN/ASG-21 radar-guided tail gun for an Iraqi air defense platform. The Wild Weasel fired an AGM-88 High-speed Anti-Radiation Missile (HARM) at the B-52 that hit and exploded in its tail, but still left the aircraft in flyable condition.

At the time, my family had moved to Andersen AFB in Guam, and 58-0248 diverted to Guam to land for repairs. When the B-52 arrived, it was parked in a cavernous hangar, and crews immediately began patching up the aircraft. My father, always wanting to ensure that I learned something about the real world so I could gain an appreciation for America, brought me to the hangar to see the stricken B-52, which was affectionately given the nickname “In HARM’s Way.”

I never forgot that moment. It caused me to realize that the war was more than just some news broadcast we watched on TV, and that war had real consequences not only for those who fought in it, but for people back home as well. I remember feeling an intense surge of pride as I saw that B-52 parked in the hangar, and I felt that I was witnessing history in action.

Ultimately, the air war against Saddam Hussein’s military would go on for a brutal six weeks, leaving many of his troops shell-shocked, demoralized, and eager to surrender. In fighting Iran for a decade, the Iraqi army had never known the kind of destructive scale or deadly precision that coalition forces were able to bring to bear against them.

Once the ground campaign commenced against Iraqi forces on February 24, 1991, that portion of Operation: Desert Storm lasted a mere 100 hours before a cease-fire was called – not because Saddam Hussein had pleaded for survival, but because back in Washington, D.C., national leaders watching the war on CNN began to see a level of carnage that they were not prepared for.

Gen. Colin Powell, seeing that most of the coalition’s UN objectives had essentially been achieved, personally lobbied for the campaign to wrap up, feeling that further destruction of Iraq would be “unchivalrous” and fearing the loss of any more Iraqi or American lives. It was also feared that if America actually tried to make a play for regime change in Iraq in 1991, the Army would be left holding the bag in securing and rebuilding the country, something that not only would be costly, but might turn the Arab coalition against America. On February 28, 1991, the U.S. officially declared a cease-fire.

The Aftermath

Operation: Desert Storm successfully accomplished the UN objectives that were established for the coalition forces and it liberated Kuwait. But a number of side effects of the war would follow that would haunt America and the rest of the world for decades to come.

First, Saddam Hussein remained in power. As a result, the U.S. military would remain in the region for years as a defensive contingent, not only continuing to inflame existing cultural tensions in Saudi Arabia, but also becoming a target for jihadist terrorist attacks, including the Khobar Towers bombing on June 25, 1996 and the USS Cole bombing on October 12, 2000.

Osama bin Laden’s al Qaeda terrorist group would ultimately change the modern world as we knew it when his men hijacked commercial airliners and flew them into the Pentagon and World Trade Center on September 11, 2001. It should not be lost on historical observers that 15 of the 19 hijackers that day were Saudi citizens, a strategic attempt by bin Laden to drive a wedge between the United States and Saudi Arabia to get American military forces out of the country.

9/11 would also provide an opportunity for George H.W. Bush’s son, President George W. Bush, to attempt to take down Saddam Hussein. Many members of the new Bush Administration were veterans of the previous one during Desert Storm, and felt that the elder Bush’s decision not to “take out” the Iraqi dictator had been a mistake. And while the 2003 campaign against Iraq was indeed successful in ending Baath Party rule and changing the regime, it allowed many disaffected former Iraqi officers and jihadists to rise up against the West, which ultimately led to the rise of ISIS in the region.

It is my hope that the next generation of college and high school students who read this essay and reflect on world affairs will understand that history is often complex and that every action taken leaves ripples in our collective destinies. A Holocaust survivor once told me, “There are times when the world goes crazy, and catches on fire. Desert Storm was one such time when the world caught on fire.”

What can we learn from the invasion of Kuwait, and what lessons can we take forward into our future? Let us remember always that allies are not always friends; victories are never permanent; and sometimes even seemingly unrelated personalities and forces can lead to world-changing events.

Our young people, especially those who wish to enter into national service, must study history and seek to possess, as the Bible says in the book of Revelation 17:9 in the Amplified Bible translation, “a mind to consider, that is packed with wisdom and intelligence … a particular mode of thinking and judging of thoughts, feelings, and purposes.”

Indeed, sometimes the world truly goes crazy and catches on fire, and some may say that 2020 is such a time. Let us study the past now, and prepare for the future!

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologist, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of the novel American Kiss: A Collection of Short Stories.

Guest Essayist: Tony Williams
Ronald Reagan Speech, Brandenburg Gate & Berlin Wall 1987

“Mr. Gorbachev, tear down this wall!” – Ronald Reagan

After World War II, a Cold War erupted between the world’s two superpowers – the United States and the Soviet Union. Germany was occupied and then divided after the war as was its capital, Berlin. The Soviet Union erected the Berlin Wall in 1961 as a symbol of the divide between East and West in the Cold War and between freedom and tyranny.

During the 1960s and 1970s, the superpowers entered into a period of détente, or decreasing tensions. However, the Soviet Union took advantage of détente, using revenue from rising oil prices and arms sales to engage in a massive arms build-up, support communist insurrections in developing nations around the globe, and invade Afghanistan.

Ronald Reagan was elected president in 1980 during a time of foreign-policy reversals including the Vietnam War and the Iranian Hostage Crisis. He blamed détente for strengthening and emboldening the Soviets and sought to improve American strength abroad.

As president, Reagan instituted a tough stance towards the Soviets that was designed to reverse their advances and win the Cold War. His administration supported the Polish resistance movement known as Solidarity, increased military spending, started the Strategic Defense Initiative (SDI), and armed resistance fighters around the world, including the mujahideen battling a Soviet invasion in Afghanistan.

Reagan had a long history of attacking communist states and the idea of communism itself that shaped his strategic outlook. In the decades after World War II, like many Americans, he was concerned about Soviet dominance in Eastern Europe spreading elsewhere. In 1952, Reagan compared communism to Nazism and other forms of totalitarianism characterized by a powerful state that limited individual freedoms.

“We have met [the threat] back through the ages in the name of every conqueror that has ever set upon a course of establishing his rule over mankind,” he said. “It is simply the idea, the basis of this country and of our religion, the idea of the dignity of man, the idea that deep within the heart of each one of us is something so godlike and precious that no individual or group has a right to impose his or its will upon the people.”

In a seminal televised speech in 1964 called “A Time for Choosing,” Reagan stated that he believed there could be no accommodation with the Soviets. “We cannot buy our security, our freedom from the threat of the bomb by committing an immorality so great as saying to a billion human beings now in slavery behind the Iron Curtain, ‘Give up your dreams of freedom because to save our own skins, we are willing to make a deal with your slave-masters.’”

Reagan targeted the Berlin Wall as a symbol of communism in a 1967 televised town hall debate with Robert Kennedy. “I think it would be very admirable if the Berlin Wall should…disappear,” Reagan said. “We just think that a wall that is put up to confine people, and keep them within their own country…has to be somehow wrong.”

In 1978, Reagan visited the wall and heard the story of Peter Fechter, one of hundreds who were shot by East German police while trying to escape to freedom over the Berlin Wall. As a result, Reagan told an aide, “My idea of American policy toward the Soviet Union is simple, and some would say simplistic.  It is this: We win and they lose.”

As president, he continued his unrelenting attack on the idea of communism according to his moral vision of the system. In a 1982 speech to the British Parliament, he predicted that communism would end up “on the ash heap of history,” and that the wall was “the signature of the regime that built it.” When he visited the wall during the same trip, he stated that “It’s as ugly as the idea behind it.” In a 1983 speech, he referred to the Soviet Union as an “evil empire.”

Reagan went to West Berlin to speak during a ceremony commemorating the 750th anniversary of the city and faced a choice. He could confront the Soviets about the wall, or he could deliver a speech without controversy.

In June 1987, many officials in his administration and in West Germany were opposed to any provocative words or actions during the anniversary speech. Many Germans also did not want Reagan to deliver his speech anywhere near the wall and feared anything that might be perceived as an aggressive signal. Secretary of State George Shultz and Chief of Staff Howard Baker questioned the speech and asked the president and his speechwriters to tone down the language. Deputy National Security Advisor Colin Powell and other members of the National Security Council wanted to alter the speech and offered several revisions. Reagan demanded to speak next to the Berlin Wall and determined that he would use the occasion to confront the threat the wall posed to human freedom.

Reagan and his team arrived in West Berlin on June 12. He spoke to reporters and nervous German officials, telling them, “This is the only wall that has ever been built to keep people in, not keep people out.” Meanwhile, in East Berlin, the German secret police and Russian KGB agents cordoned off an area a thousand yards wide opposite the spot where Reagan was to speak on the other side of the wall. They wanted to ensure that no one could hear the message of freedom.

Reagan spoke at the Brandenburg Gate with the huge, imposing wall in the background. “As long as this gate is closed, as long as this scar of a wall is permitted to stand, it is not the German question alone that remains open, but the question of freedom for all mankind.”

Reagan challenged Soviet General Secretary Mikhail Gorbachev directly, stating, “If you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization: Come here to this gate! Mr. Gorbachev, open this gate! Mr. Gorbachev, tear down this wall!”

He finished by predicting the wall would not endure because it stood for oppression and tyranny. “This wall will fall. For it cannot withstand faith; it cannot withstand truth. The wall cannot withstand freedom.” No one imagined that the Berlin Wall would fall only two years later on November 9, 1989, as communism collapsed across Eastern Europe.

A year later, Reagan was at a summit with Gorbachev in Moscow and addressed the students at Moscow State University. “The key is freedom,” Reagan boldly and candidly told them. “It is the right to put forth an idea, scoffed at by the experts, and watch it catch fire among the people. It is the right to dream – to follow your dream or stick to your conscience, even if you’re the only one in a sea of doubters.” Ronald Reagan believed that he had a responsibility to bring an end to the Cold War and eliminate nuclear weapons, to the benefit of both the United States and the world, for an era of peace. He dedicated himself to achieving this goal. Partly due to these efforts, the Berlin Wall fell by 1989, and communism collapsed in the Soviet Union by 1991.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 

Guest Essayist: Scot Faulkner

The election of Ronald Reagan on November 4, 1980 was one of the two most important elections of the 20th Century. It was a revolution in every way.

In 1932, Franklin Roosevelt (FDR) decisively defeated one-term incumbent Herbert Hoover by 472-59 electoral votes. His election ushered in the era of aggressive liberalism, expanded the size of government, and established diplomatic relations with the Soviet Union. Roosevelt’s inner circle, his “brain trust,” were dedicated leftists, several of whom conferred with Lenin and Stalin on policy issues prior to 1932.

In 1980, Ronald Reagan decisively defeated one-term incumbent Jimmy Carter by 489-49 electoral votes. His election ended the liberal era, shrank the size of government, and rebuilt America’s military, diplomatic, economic, and intelligence capabilities. America reestablished its leadership in the world, ending the Soviet Empire and the Soviet Union itself.

Reagan was a key leader in creating and promoting the conservative movement, whose policy and political operatives populated and guided his administration. He was a true “thought leader” who defined American conservatism in the late 20th Century. Through his writings, speeches, and radio program, Reagan laid the groundwork, and shaped the mandate, for one of the most impactful Presidencies in American history.

The road from Roosevelt’s “New Deal” to Reagan’s Revolution began in 1940.

FDR, at the height of his popularity, chose to run for an unprecedented third term. Roosevelt steered ever more leftward, selecting Henry Wallace as his running mate. Wallace would later run as a socialist under the Progressive Party banner in 1948. Republican Wendell Willkie was the first private sector businessman to become a major party’s nominee.

Willkie had mounted numerous legal challenges to Roosevelt’s regulatory overreach. While he lost those challenges, Willkie’s legacy inspired a generation of economists and activists to unite against big government.

As the allied victory in World War II became inevitable, the Willkie activists, along with leading conservative economists from across the globe, established policy organizations, “think tanks,” and publications to formulate and communicate an alternative to Roosevelt’s New Deal.

Human Events, the premiere conservative newspaper, began publishing in 1944. The Foundation for Economic Education was founded in 1946.

In 1947, conservative, “free market,” anti-regulatory economists met at the Mont Pelerin resort near Montreux, Switzerland. The greatest conservative minds of the 20th Century, including Friedrich Hayek, Ludwig von Mises, and Milton Friedman, organized the “Mont Pelerin Society” to counter the globalist economic policies arising from the Bretton Woods Conference. The Bretton Woods economists had met at the Mount Washington Hotel, at the base of Mount Washington in New Hampshire, to launch the World Bank and International Monetary Fund.

Conservative writer and thinker William F. Buckley Jr. founded National Review on November 19, 1955. His publication, more than any other, would serve to define, refine, and consolidate the modern Conservative Movement.

The most fundamental change was realigning conservatism with the international fight against the Soviet Union, which was leading global Communist expansion. Up until this period, American conservatives had tended to be isolationist. National Review’s array of columnists developed “Fusionism,” which provided the intellectual justification for conservatives favoring limited government at home while aggressively fighting Communism abroad. In 1958, the American Security Council was formed to focus the efforts of conservative national security experts on confronting the Soviets.

Conservative Fusionism was politically launched by Senator Barry Goldwater (R-AZ) during the Republican Party Platform meetings for the 1960 National Convention. Conservative forces prevailed. This laid the groundwork for Goldwater to run for and win the Republican Party Presidential nomination in 1964.

The policy victories of Goldwater and Buckley inspired the formation of the Young Americans for Freedom, the major conservative youth movement. Meeting at Buckley’s home in Sharon, Connecticut on September 11, 1960, the group adopted a manifesto that became the Fusionist Canon. The conservative movement added additional policy centers, such as the Hudson Institute, founded on July 20, 1961.

Goldwater’s campaign was a historic departure from traditional Republican politics. His plain-spoken assertion of limited government and aggressive action against the Soviets inspired many, but scared many more. President John F. Kennedy’s assassination had catapulted Vice President Lyndon B. Johnson into the Presidency. LBJ had a vision of an even larger Federal Government, designed to mold urban minorities into perpetually being beholding to Democrat politicians.

Goldwater’s alternative vision was trounced on election day, but the seeds for Reagan’s Conservative Revolution were sown.

Reagan was unique in American politics. He was a pioneer in radio broadcasting and television. His movie career made him famous and wealthy. His tenure as President of the Screen Actors Guild thrust him into the headlines as Hollywood confronted domestic communism.

Reagan’s pivot to politics began when General Electric hired him to host their popular television show, General Electric Theater. His contract included touring GE plants to speak about patriotism, free market economics, and anti-communism. His new life within corporate America introduced him to a circle of conservative businessmen who would become known as his “Kitchen Cabinet.”

The Goldwater campaign reached out to Reagan to speak on behalf of their candidate on a television special during the last week of the campaign. On October 27, 1964, Reagan drew upon his GE speeches to deliver “A Time for Choosing.” His inspiring address became a political classic, which included lines that would become the core of “Reaganism”:

“The Founding Fathers knew a government can’t control the economy without controlling people. And they knew when a government sets out to do that, it must use force and coercion to achieve its purpose. So, we have come to a time for choosing … You and I are told we must choose between a left or right, but I suggest there is no such thing as a left or right. There is only an up or down. Up to man’s age-old dream—the maximum of individual freedom consistent with order—or down to the ant heap of totalitarianism.”

The Washington Post declared Reagan’s “Time for Choosing” “the most successful national political debut since William Jennings Bryan electrified the 1896 Democratic convention with his Cross of Gold speech.” It immediately established Reagan as the heir to Goldwater’s movement.

The promise of Reagan fulfilling the Fusionist vision of Goldwater, Buckley, and a growing conservative movement inspired the formation of additional groups, such as the American Conservative Union in December 1964.

In 1966, Reagan trounced two-term Democrat incumbent Pat Brown to become Governor of California, winning 57.5 percent of the vote. Reagan’s two terms became the epicenter of successful conservative domestic policy, attracting top policy and political operatives who would serve him throughout his Presidency.

Retiring after two terms, Reagan devoted himself full time to being the voice, brain, and face of the Conservative Movement. This included a radio show that was followed by over 30 million listeners.

In 1976, the ineffectual moderate Republicanism of President Gerald Ford led Reagan to mount a challenge. Reagan came close to unseating his Party’s incumbent, which would have been unprecedented. His concession speech on the last night of the Republican National Convention became another political classic. It launched his successful march to the White House.

Reagan’s 1980 campaign was now aided by a more organized, broad, and capable Conservative Movement. Reagan’s “California Reaganites” were linked to Washington, DC-based “Fusionists,” and conservative grassroots activists who were embedded in Republican Party units across America. The Heritage Foundation had become a major conservative policy center on February 16, 1973. A new hub for conservative activists, The Conservative Caucus, came into existence in 1974.

Starting in 1978, Reagan’s inner circle, including his “Kitchen Cabinet,” worked seamlessly with this vast network of conservative groups: The Heritage Foundation, Kingston, Stanton, Library Court, Chesapeake Society, Monday Club, Conservative Caucus, American Legislative Exchange Council, Committee for the Survival of a Free Congress, the Eagle Forum, and many others. They formed a unified and potent political movement that overwhelmed Republican moderates to win the nomination and then buried Jimmy Carter and the Democrat Party in November 1980.

After his landslide victory, which also swept in the first Republican Senate majority since 1954, Reaganites and Fusionists placed key operatives into Reagan’s transition. They identified over 17,000 positions that affected Executive Branch operations. A separate team identified the key positions in each cabinet department and major agency that had to be under Reagan’s control in the first weeks of his presidency.

On January 21, 1981, Reagan’s personnel team immediately removed every Carter political appointee. These Democrat functionaries were walked out the door, their identification badges taken, their files sealed, and their security clearances terminated. The Carter era’s impotent foreign policy and intrusive domestic policy ended completely and instantaneously.

Reagan went on to lead one of the most successful Presidencies in American history. His vision of a “shining city on a hill” continues to inspire people around the world to seek better lives through freedom, open societies, and economic liberty.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.

Guest Essayist: Scot Faulkner
Iranian Students Climb Wall of U.S. Embassy, Tehran, Nov. 1979

The long, tragic road to the September 11, 2001 terror attacks began with President Jimmy Carter, his administration’s involvement in the Iranian Revolution, and its fundamentally weak response to the Iranian Hostage Crisis.

The Iranian Hostage Crisis was the most visible act of the Iranian Revolution. Starting on November 4, 1979, and lasting for 444 days, 52 Americans were imprisoned in brutal conditions. The world watched as the Carter Administration repeatedly failed to free the hostages, both through poor diplomacy and the rescue attempt fiasco.

The result was the crippling of U.S. influence throughout the Middle East and the spawning of radical Islamic movements that terrorize the world to this day.

Islam’s three major sects, Sunni, Shiite, and Sufi, all harbor the seeds of violence and hatred. In 1881 a Sufi mystic ignited the Mahdi Revolt in the Sudan leading to eighteen years of death and misery throughout the upper Nile. During World War II, the Sunni Grand Mufti of Jerusalem befriended Hitler and helped Heinrich Himmler form Islamic Stormtrooper units to kill Jews in the Balkans.

After World War II, Islam secularized as mainstream leaders embraced Western economic interests to tap their vast oil and gas reserves.

Activists became embroiled in the Middle East’s Cold War chess board, aiding U.S. or Soviet interests.

The Iranian Revolution changed that. Through the success of the Iranian Revolution, Islamic extremists of all sects embraced the words of Shiite Ayatollah Ruhollah Khomeini:

“If the form of government willed by Islam were to come into being, none of the governments now existing in the world would be able to resist it; they would all capitulate.”

Islamic dominance became an end in and of itself.

This did not have to happen at all.

Iran has been a pivotal regional player for 2,500 years. The Persian Empire was the bane of ancient Greece. As the Greek Empire withered, Persia, later Iran, remained a political, economic, and cultural force. This is why their 1979 Revolution and subsequent confrontation with the U.S. inspired radicals throughout the Islamic world to become the Taliban, ISIS and other terrorists of today.

Iran’s modern history began as part of the East-West conflict following World War II. The Soviets heavily influenced and manipulated Iran’s first elected government. On August 19, 1953, British and America intelligence toppled that government and returned Shah Mohammad Reza to power.

“The Shah,” as he became known globally, was reform-minded. He launched his “White Revolution” to build a modern, pro-West, pro-capitalist Iran in 1963. The Shah’s “Revolution” built the region’s largest middle class, and broke centuries of tradition by enfranchising women. It was opposed by many traditional powers, including fundamentalist Islamic leaders like the Ayatollah Ruhollah Khomeini. Khomeini’s agitation for violent opposition to the Shah’s reforms led to his arrest and exile.

Throughout his reign, the Shah was vexed by radical Islamic and communist agitation. His secret police brutally suppressed fringe dissidents. This balancing act between western reforms and control worked well, with a trend towards more reforms as the Shah aged. The Shah enjoyed warm relationships with American Presidents of both parties and was rewarded with lavish military aid.

That was to change in 1977.

From the beginning, the Carter Administration expressed disdain for the Shah. President Carter pressed for the release of political prisoners. The Shah complied, allowing many radicals the freedom to openly oppose him.

Not satisfied with the pace or breadth of the Shah’s human rights reforms, Carter envoys began a dialogue with the Ayatollah Khomeini, first at his home in Iraq and more intensely when he moved to a Paris suburb.

Indications that the U.S. was souring on the Shah emboldened dissidents across the political spectrum to confront the regime. Demonstrations, riots, and general strikes began to destabilize the Shah and his government. In response, the Shah accelerated reforms. This was viewed as weakness by the opposition.

The Western media, especially the BBC, began to promote the Ayatollah as a moderate alternative to the Shah’s “brutal regime.” The Ayatollah assured U.S. intelligence operatives and State Department officials that he would only be the “figurehead” for a western parliamentary system.

During the fall of 1978, strikes and demonstrations paralyzed the country. The Carter Administration, led by Secretary of State Cyrus Vance and U.S. Ambassador to Iran William Sullivan, coalesced around abandoning the Shah and helping install Khomeini, whom they viewed as a “moderate clergyman” who would be Iran’s “Gandhi-like” spiritual leader.

Time and political capital were running out. On January 16, 1979, the Shah, after arranging for an interim government, resigned and went into exile.

The balance of power now rested with the Iranian military.

While the Shah was preparing for his departure, General Robert Huyser, Deputy Commander of NATO, and his top aides arrived in Iran. They were there to neutralize the military leaders. Using ties of friendship, promises of aid, and assurances of safety, Huyser and his team convinced the Iranian commanders to allow the transitional government to finalize arrangements for Khomeini becoming part of the new government.

Many of these Iranian military leaders, and their families, were slaughtered as Khomeini and his Islamic Republican Guard toppled the transitional government and seized power during the Spring of 1979.  “It was a most despicable act of treachery, for which I will always be ashamed,” admitted one NATO general years later.

While Iran was collapsing, so were America’s intelligence capabilities.

One of President Carter’s earliest appointments was placing Admiral Stansfield Turner in charge of the Central Intelligence Agency (CIA). Turner immediately eviscerated the Agency’s human intelligence and clandestine units. He felt they had gone “rogue” during the Nixon-Ford era. He also thought electronic surveillance and satellites could do as good a job.

Turner’s actions led to “one of the most consequential strategic surprises that the United States has experienced since the CIA was established in 1947” – the Embassy Takeover and Hostage Crisis.

The radicalization of Iran occurred at lightning speed. Khomeini and his lieutenants remade Iran’s government and society into a totalitarian fundamentalist Islamic state. Anyone who opposed their Islamic Revolution was driven into exile, imprisoned, or killed.

Khomeini’s earlier assurances of moderation and working with the West vanished. Radicalized mobs turned their attention to eradicating all vestiges of the West. This included the U.S. Embassy.

The first attack on the U.S. Embassy occurred on the morning of February 14, 1979. Coincidentally, this was the same day that Adolph Dubs, the U.S. ambassador to Afghanistan, was kidnapped and fatally shot by Muslim extremists in Kabul. In Tehran, Ambassador Sullivan surrendered the U.S. Embassy and was able to resolve the occupation within hours through negotiations with the Iranian Foreign Minister.

Despite this attack, and the bloodshed in Kabul, nothing was done to close the Tehran Embassy, reduce its personnel, or strengthen its defenses. During the takeover, Embassy personnel failed to burn sensitive documents as their furnaces malfunctioned; they had installed cheaper paper shredders instead. During the 444-day occupation, rug weavers were employed to reconstruct the sensitive shredded documents, creating global embarrassment for America.

Starting in September 1979, radical students began planning a more extensive assault on the Embassy. This included daily demonstrations outside the U.S. Embassy to trigger an Embassy security response. This allowed organizers to assess the size and capabilities of the Embassy security forces.

On November 4, 1979, one of the demonstrations erupted into an all-out assault at the Embassy’s public visa-processing entrance. The assault leaders deployed approximately 500 students. Female students hid metal cutters under their robes, which were used to breach the Embassy gates.

Khomeini was in a meeting outside of Tehran and did not have prior knowledge of the takeover. He immediately issued a statement of support, declaring it “the second revolution” and the U.S. Embassy an “American spy den in Tehran.”

What followed was an unending ordeal of terror and deprivation for the 66 hostages, who, through various releases, were reduced to a core of 52. The 2012 film “Argo” chronicled the audacious escape of six Americans who had been outside the U.S. Embassy at the time of the takeover.

ABC News began a nightly update on the hostage drama. This became “Nightline.” During the 1980 Presidential campaign, it served as a nightly reminder of the ineffectiveness of President Carter.

On April 24, 1980, trying to break out of this chronic crisis, Carter initiated an ill-conceived and poorly executed rescue mission called Operation Eagle Claw. It ended with crashed helicopters and eight dead servicemen at the remote desert staging area in Iran, designated Desert One. Another attempt was made through diplomacy as part of a hoped-for “October Surprise,” but the Iranians cancelled the deal just as planes were being mustered at Andrews Air Force Base.

Carter paid the price for his Iranian duplicity. On November 4, 1980, Ronald Reagan obliterated Carter in the worst defeat suffered by an incumbent President since Herbert Hoover in 1932.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.

Guest Essayist: The Honorable Don Ritter

The election of Ronald Reagan in 1980 marked THE crucial turning point in winning the Cold War against Russia-dominated Communism, the USSR.

Reagan’s rise to national prominence began with the surge in communist insurgencies and revolutions worldwide that followed the fall of Saigon, and of all South Vietnam, to the communists on April 30, 1975. After 58,000 American lives and trillions in treasure lost over the tenures of five American Presidents, the United States left the Vietnam War and South Vietnam to the communists.

Communist North Vietnam, in league with fellow communist governments in Russia and China, accurately saw the weakening of a new American President, Gerald Ford, and a new ‘anti-war’ Congress as a result of the ‘Watergate’ scandal and President Richard Nixon’s subsequent resignation. In the minds of the communists, it was a signal opportunity to forcibly “unify,” read invade, the non-communist South with magnum force, armed to the teeth by both the People’s Republic of China and the USSR. President Nixon’s secret letter to South Vietnamese President Thieu, pledging all-out support of U.S. air and naval power if the communists broke the Paris Peace Agreement and invaded, was irrelevant, as Nixon was gone. With the communist invasion beginning, seventy-four new members of Congress, all anti-war Democrats, guaranteed the “No” vote on the Ford Administration’s bill to provide $800 million in ammunition and fuel so the South Vietnamese military could roll its tanks and fly its planes. That bill lost in Congress by only one vote. The fate of South Vietnam was sealed. The people of South Vietnam, in what seemed then like an instant, were abandoned by their close American ally of some 20 years. Picture that.

Picture the ignominy of it all. Helicopters rescuing Americans and some chosen Vietnamese from rooftops while U.S. Marines staved off the desperate South Vietnamese who had worked with us for decades. Picture Vietnamese people clinging to helicopter skids and airplane landing gear in desperation, falling to their deaths as the aircraft ascended. Picture drivers of South Vietnamese tanks and pilots of fighter planes unable to engage for want of fuel. Picture famous South Vietnamese generals committing suicide rather than face certain torture and death in Re-Education Camps, read Gulags with propaganda lessons. Picture perhaps hundreds of thousands of “Boat People,” having set out on nearly anything that floated to escape the wrath of their conquerors, at the bottom of the South China Sea. Picture horrific genocide in Cambodia, where Pol Pot and his henchmen murdered nearly one-third of the population to establish communism… and through it all, the West, led by the United States, stayed away.

Leonid Brezhnev, Secretary General of the Communist Party of the Soviet Union, and his Politburo colleagues could picture it… all of it. The Cold War was about to get hot.

The fall of the non-communist government in South Vietnam and the election of President Jimmy Carter were followed by a U.S. Congress bent on emasculating the American military and intelligence services. Many in the Democratic Party took the side of the insurgents. I remember well Sen. Tom Harkin of Iowa claiming that the Sandinista Communists (in Nicaragua) were more like “overzealous Boy Scouts” than hardened Communists. Amazing.

Global communism with the USSR in the lead and America in retreat, was on the march.

In just a few years, in Asia, Africa and Latin America, repressive communist-totalitarian regimes had been foisted on the respective peoples by small numbers of ideologically committed, well-trained and well-armed (by the Soviet Union) insurgencies. “Wars of national liberation” and intensive Soviet subversion raged around the world. Think Angola and Southern Africa, Ethiopia and Somalia in the Horn of Africa. Think the Middle East and the Philippines, Malaysia and Afghanistan (there a full-throated Red Army invasion) in Asia.

Think Central America in our own hemisphere, and Nicaragua, where the USSR and its right hand in the hemisphere, communist Cuba, took charge along with a relatively few committed Marxist-Leninist Nicaraguans, even creating a Soviet-style Politburo and Central Committee! On one of my several trips to the region, I personally met with Tomas Borge, the Stalinist leader of the Nicaraguan Communist Party, and his colleagues. Total Bolsheviks. To make things even more dangerous for the United States, these wars of national liberation were also ongoing in El Salvador, Honduras and Guatemala.

A gigantic airfield that could land Soviet jumbo transports was being completed under the Grenadian communist government of Maurice Bishop. Warehouses with vast storage capacity for weapons to fuel insurgency in Latin America were built. I personally witnessed these facilities and found the diary of one leading Politburo official, Liam James, who was on the payroll of the Soviet Embassy at the time. They all were, but he, being the Treasurer of the government, actually wrote it down! These newly-minted communist countries and other ongoing insurgencies, with Marxist-Leninist values in direct opposition to human freedom and the interests of the West, were being funded and activated by Soviet intelligence agencies, largely the KGB, and were supplied by the economies of the Soviet Union and its Warsaw Pact empire in Eastern and Central Europe. Many leaders of these so-called “Third World” countries were on Moscow’s payroll.

In the words of one KGB general, “The world was going our way.” Christopher Andrew, ‘The KGB and the Battle for the Third World’ (based on the Mitrokhin archives). These so-called wars of national liberation didn’t fully end until some ten years later, when the weapons and supplies from the Soviet Union dried up as the Soviet Empire began to disintegrate, thanks to a new U.S. President who led the way during the 1980s.

Enter Ronald Wilson Reagan. To the chagrin of the Soviet communists and their followers worldwide, it was the beginning of the end of their glory days when, in January of 1981, Ronald Reagan, having beaten the incumbent President, Jimmy Carter, the previous November, was sworn in as President of the United States. Reagan was no novice in the subject matter. He had been an outspoken critic of communism for over three decades. He had written and given speeches on communism and the genuinely evil nature of the Soviet Union. He was a committed lover of human freedom, human rights, and free markets. As Governor of California, he had gained executive experience in a large bureaucracy and during that time had connected with a contingent of like-minded political and academic conservatives. The mainstream media was ruthless with him, characterizing him as an intellectual dolt and a warmonger who would bring on World War III. He would prove his detractors so wrong. He would prove to be the ultimate Cold Warrior, yet a sweet man with an iron fist when needed.

When his first National Security Advisor, Richard Allen, asked the new President Reagan about his vision of the Cold War, Reagan’s response was, “We win, they lose.” Moral clarity rarely so plainly enunciated.

To the end of his presidency, he continued to be disparaged by the mainstream media, although less aggressively. However, the American people grew to appreciate and even love the man as he and his team, more than anyone, became responsible for winning the Cold War and bringing down a truly “Evil Empire.” Just ask those who suffered most: the Polish, Czech, Hungarian, Ukrainian, Romanian, Baltic, and yes, the Russian people themselves. To this very day, his name is revered by those who suffered and still suffer under the yoke of communism.

Personally, I have often pondered that had Ronald Reagan not been elected President of the United States in 1980, the communist behemoth USSR would still be standing strong today, and the Cold War would have ended with communism the victor.

The Honorable Don Ritter, Sc.D., served seven terms in the U.S. Congress from Pennsylvania, including both terms of Ronald Reagan’s presidency. Dr. Ritter speaks fluent Russian and lived in the USSR for a year as a National Academy of Sciences post-doctoral Fellow during Leonid Brezhnev’s time. He served in Congress as Ranking Member of the Congressional Helsinki Commission and was a leader in Congress in opposition to the Soviet invasion and occupation of Afghanistan.


Guest Essayist: Joerg Knipprath
President Nixon Farewell Speech to White House Staff, August 9, 1974

On Thursday, August 8, 1974, a somber Richard Nixon addressed the American people in a 16-minute speech via television to announce that he was planning to resign from the Presidency of the United States. He expressed regret over mistakes he made about the break-in at the Democratic Party offices at the Watergate Hotel and the aftermath of that event. He further expressed the hope that his resignation would begin to heal the political divisions the matter had exacerbated. The next day, having resigned, he boarded a helicopter and, with his family, left Washington, D.C.

Nixon had won the 1972 election against Senator George McGovern of South Dakota with over 60% of the popular vote and an electoral vote of 520-17 (one vote having gone to a third candidate). Yet less than two years after one of the most overwhelming victories in American electoral history, Nixon was politically dead. Nixon has been described as a tragic figure, in a literary sense, due to his struggle to rise to the height of political power, only to be undone when he had achieved the pinnacle of success. The cause of this astounding change of fortune has been much debated. It resulted from a confluence of factors: political, historical, and personal.

Nixon was an extraordinarily complex man. He was highly intelligent, even brilliant, yet was the perennial striver seeking to overcome, by unrelenting work, his perceived limitations. He was an accomplished politician with a keen understanding of political issues, yet socially awkward and personally insecure. He was perceived as the ultimate insider, yet, despite his efforts, was always somehow outside the “establishment,” from his school days to his years in the White House. Alienated from the social and political elites, who saw him as an arriviste, he emphasized his marginally middle-class roots and tied his political career to that “silent majority.” He could arouse intense loyalty among his supporters, yet equally intense fury among his opponents. Nixon infamously kept an “enemies list,” the only surprise of which is that it was so incomplete. Though seen by the Left as an operative of what is today colloquially called the “Deep State,” he rightly mistrusted the bureaucracy and its departments and agencies, and preferred to rely on White House staff and hand-picked loyal individuals. Caricatured as an anti-Communist ideologue and would-be right-wing dictator, Nixon was a consummately pragmatic politician who was seen by many supporters of Senator Barry Goldwater and Governor Ronald Reagan as insufficiently in line with their world view.

The Watergate burglary and attempted bugging of the Democratic Party offices in June, 1972, and investigations by the FBI and the General Accounting Office that autumn into campaign finance irregularities by the Committee to Re-Elect the President (given the unfortunate acronym CREEP by Nixon’s opponents) initially had no impact on Nixon and his comprehensive political victory. In January, 1973, the trial of the operatives before federal judge John Sirica in Washington, D.C., revealed possible White House involvement. This piqued the interest of the press, never Nixon’s friends. These revelations, now spread before the public, caused the Democratic Senate majority to appoint a select committee under Senator Sam Ervin of North Carolina for further investigation. Pursuant to an arrangement with Senate Democrats, Attorney General Elliot Richardson named Democrat Archibald Cox, a Harvard law professor and former Kennedy administration solicitor general, as special prosecutor.

Cox’s efforts uncovered a series of missteps by Nixon, as well as actions that were viewed as more seriously corrupt and potentially criminal. Some of these sound rather tame by today’s standards. Others are more problematic. Among the former were allegations that Nixon had falsely backdated a gift of presidential papers to the National Archives to get a tax credit, not unlike Bill Clinton’s generously-overestimated gift of three pairs of his underwear in 1986 for an itemized charitable tax deduction. Another was that he was inexplicably careless in preparing his tax return. Given the many retroactively amended tax returns and campaign finance forms filed by politicians, such as the Clintons and their eponymous foundations, this, too, seems of slight import. More significant was the allegation that he had used the Internal Revenue Service to attack political enemies. Nixon certainly considered that, although it is not shown that any such actions were undertaken. Another serious charge was that Nixon had set up a secret structure to engage in political intelligence and espionage.

The keystone to the impeachment was the discovery of a secret taping system in the Oval Office that showed that Nixon had participated in a cover-up of the burglary and obstructed the investigation. Nixon, always self-reflective and sensitive to his position in history, had set up the system to provide a clear record of conversations within the Oval Office for his anticipated post-Presidency memoirs. It proved to be his downfall. When Cox became aware of the system, he sought a subpoena to obtain nine of the tapes in July, 1973. Nixon refused, citing executive privilege relating to confidential communications. That strategy had worked when the Senate had demanded the tapes; Judge Sirica had agreed with Nixon. But Judge Sirica rejected that argument when Cox sought the information, a decision upheld 5-2 by the federal Circuit Court for the District of Columbia.

Nixon then offered to give Cox authenticated summaries of the nine tapes. Cox refused. After a further clash between the President and the special prosecutor, Nixon ordered Attorney General Richardson to remove Cox. Both Richardson and Deputy Attorney General William Ruckelshaus refused and resigned. However, by agreement between these two and Solicitor General Robert Bork, Cox was removed by Bork in his new capacity as Acting Attorney General. It was well within Nixon’s constitutional powers as head of the unitary executive to fire his subordinates. But what the President is constitutionally authorized to do is not the same as what the President politically should do. The reaction of the political, academic, and media elites to the “Saturday Night Massacre” was overwhelmingly negative, and precipitated the first serious effort at impeaching Nixon.

A new special prosecutor, Democrat Leon Jaworski, was appointed by Bork in consultation with Congress. The agreement among the three parties was that, though Jaworski would operate within the Justice Department, he could not be removed except for specified causes and with notification to Congress. Jaworski also was specifically authorized to contest in court any claim of executive privilege. When Jaworski again sought various specific tapes, and Nixon again claimed executive privilege, Jaworski eventually took the case to the Supreme Court. On July 24, 1974, Chief Justice Warren Burger’s opinion in the 8-0 decision in United States v. Nixon (William Rehnquist, a Nixon appointee who had worked in the White House, had recused himself) overrode the executive privilege claim. The justices also rejected the argument that this was a political intra-branch dispute between the President and a subordinate that rendered the matter non-justiciable, that is, beyond the competence of the federal courts.

At the same time, in July, 1974, with bipartisan support, the House Judiciary Committee voted out three articles of impeachment. Article I charged obstruction of justice regarding the Watergate burglary. Article II charged him with violating the Constitutional rights of citizens and “contravening the laws governing agencies of the executive branch,” which dealt with Nixon’s alleged attempted misuse of the IRS, and with his misuse of the FBI and CIA. Article III charged Nixon with ignoring congressional subpoenas, which sounds remarkably like an attempt to obstruct Congress, a dubious ground for impeachment. Two other proposed articles were rejected. When the Supreme Court ordered Nixon to release the tapes, the tape of June 23, 1972, showed obstruction of justice: the President instructing his staff to use the CIA to end the Watergate investigation. The tape was released on August 5. Nixon was then visited by a delegation of Republican Representatives and Senators who informed him of the near-certainty of impeachment by the House and of his extremely tenuous position to avoid conviction by the Senate. The situation having become politically hopeless, Nixon resigned, making his resignation formal on Friday, August 9, 1974.

The Watergate affair produced several constitutional controversies. First, the Supreme Court addressed executive privilege to withhold confidential information. Nixon’s opponents had claimed that the executive lacked such a privilege because the Constitution did not address it, unlike the privilege against self-incrimination. Relying on consistent historical practice going back to the Washington administration, the Court found instead that such a privilege is inherent in the separation of powers and necessary to protect the President in exercising the executive power and others granted under Article II of the Constitution. However, unless the matter involves state secrets, that privilege could be overridden by a court, if warranted in a criminal case, and the “presumptively privileged” information ordered released. While the Court did not directly consider the matter, other courts have agreed with Judge Sirica that, based on long practice, the privilege will be upheld if Congress seeks such confidential information. The matter then is a political question, not one for courts to address at all.

Another controversy arose over the President’s long-recognized power to fire executive branch subordinates without restriction by Congress. This is essential to the President’s position as head of the executive branch. For example, the President has inherent constitutional authority to fire ambassadors as Barack Obama and Donald Trump did, or to remove U.S. Attorneys, as Bill Clinton and George W. Bush did. Jaworski’s appointment under the agreement not to remove him except for specified cause interfered with that power, yet the Court upheld that limitation in the Nixon case.

After Watergate, in 1978, Congress passed the Ethics in Government Act that provided a broad statutory basis for the appointment of special prosecutors outside the normal structure of the Justice Department. Such prosecutors, too, could not be removed except for specified causes. In Morrison v. Olson, in 1988, the Supreme Court, by 7-1, upheld this incursion on executive independence over the lone dissent of Justice Antonin Scalia. At least as to inferior executive officers, which the Court found special prosecutors to be, Congress could limit the President’s power to remove, as long as the limitation did not interfere unduly with the President’s control over the executive branch. The opinion, by Chief Justice Rehnquist, was in many ways risible from a constitutional perspective, but it upheld a law that became the starting point for a number of highly-partisan and politically-motivated investigations into actions taken by Presidents Ronald Reagan, George H.W. Bush, and Bill Clinton, and by their subordinates. Only once the last of these Presidents was being subjected to such oversight did opposition to the law become sufficiently bipartisan to prevent its reenactment.

The impeachment proceeding itself rekindled the debate over the meaning of the substantive grounds for such an extraordinary interference with the democratic process. While treason is defined in the Constitution and bribery is an old and well-litigated criminal law concept, the third basis, of “high crimes and misdemeanors,” is open to considerable latitude of meaning. One view, taken by defenders of the official under investigation, is that this phrase requires conduct amounting to a crime, an “indictable offense.” The position of the party pursuing impeachment, Republican or Democrat, has been that this phrase more broadly includes unfitness for office and reaches conduct which is not formally criminal but which shows gross corruption or a threat to the constitutional order. The Framers’ understanding appears to have been closer to the latter, although the much greater number and scope of criminal laws today may have narrowed the difference. However, what the Framers considered sufficiently serious impeachable corruption likely was more substantial than what has been proffered recently. They were acutely aware of the potential for merely political retaliation and similar partisan mischief that a low standard for impeachment would produce. These and other questions surrounding the rather sparse impeachment provisions in the Constitution have not been resolved. They continue to be, foremost, political matters addressed on a case-by-case basis, as demonstrated the past twelve months.

As has often been observed, Nixon’s predicament was not entirely of his own making. In one sense, he was the victim of political trends that signified a reaction against what had come to be termed the “Imperial Presidency.” It had long been part of the progressive political faith that there was “nothing to fear but fear itself” from broadly exercised executive power, as long as the presidential tribune wielding “a pen and a phone” was subject to free elections. Actions routinely done by Presidents such as Franklin Roosevelt, Harry Truman, and Nixon’s predecessor, Lyndon Johnson, now became evidence of executive overreach. For example, those presidents, as well as others going back to at least Thomas Jefferson, had impounded appropriated funds, often to maintain fiscal discipline over profligate Congresses. Nixon claimed that his constitutional duty “to take care that the laws be faithfully executed” was also a power that allowed him to exercise discretion as to which laws to enforce, not just how to enforce them. In response, the Democratic Congress passed the Congressional Budget and Impoundment Control Act of 1974. The Supreme Court in Train v. City of New York rejected the asserted impoundment power and limited the President’s authority to impound funds to whatever extent was permitted by Congress in statutory language.

In military matters, the elites’ reaction against the Vietnam War, shaped by negative press coverage and antiwar demonstrations on elite college campuses, gradually eroded popular support. The brunt of the responsibility for the vast expansion of the war lay with Lyndon Johnson and the manipulative use of a supposed North Vietnamese naval attack on an American destroyer, which resulted in the Gulf of Tonkin Resolution. At a time when Nixon had ended the military draft, drastically reduced American troop numbers in Vietnam, and agreed to the Paris Peace Accords signed at the end of January, 1973, Congress enacted the War Powers Resolution of 1973 over Nixon’s veto. The law limited the President’s power to engage in military hostilities to specified situations, in the absence of a formal declaration of war. It also basically required pre-action consultation with Congress for any use of American troops and a withdrawal of such troops unless Congress approved within sixty days. It also, somewhat mystifyingly, purported to disclaim any attempt to limit the President’s war powers. The Resolution has been less than successful in curbing presidential discretion in using the military and remains largely symbolic.

Another restriction on presidential authority occurred through the Supreme Court. In United States v. United States District Court in 1972, the Supreme Court rejected the administration’s program of warrantless electronic surveillance for domestic security. This was connected to the Huston Plan of warrantless searches of mail and other communications of Americans. Warrantless wiretaps were placed on some members of the National Security Council and several journalists. Not touched by the Court was the President’s authority to conduct warrantless electronic surveillance of foreigners or their agents for national security-related information gathering. On the latter, Congress nevertheless in 1978 passed the Foreign Intelligence Surveillance Act, which, ironically, has expanded the President’s power in that area. Because it can be applied to communications of Americans deemed agents of a foreign government, FISA, along with the President’s inherent constitutional powers regarding foreign intelligence-gathering, can be used to circumvent the Supreme Court’s decision. It has even been used in the last several years to target the campaign of then-candidate Donald Trump.

Nixon’s use of the “pocket veto” and his imposition of price controls also triggered resentment and reaction in Congress, although once again his actions were hardly novel. None of these various executive policies, by themselves, were politically fatal. Rather, they demonstrate the political climate in which what otherwise was just another election-year dirty trick, the Watergate burglary, could result in the historically extraordinary resignation from office of a President who had not long before received the approval of a large majority of American voters. Nixon’s contemplated use of the IRS to audit “enemies” was no worse than the Obama Administration’s actual use of the IRS to throttle conservative groups’ applications for tax exemption. His support of warrantless wiretaps under his claimed constitutional authority to target suspected domestic troublemakers, while unconstitutional, is hardly more troubling than Obama’s use of the FBI and CIA to manipulate the FISA system into spying on a presidential candidate to assist his opponent. Nixon’s wiretapping of NSC officials and several journalists is not dissimilar to Obama’s search of phone records of various Associated Press reporters and spying on Fox News’s James Rosen. Obama’s FBI also accused Rosen of having violated the Espionage Act. The Obama administration brought more than twice as many prosecutions against leakers, including under the Espionage Act, as all prior Presidents combined. That was in his first term.

There was another, shadowy factor at work. Nixon, the outsider, offended the political and media elites. Nixon himself disliked the bureaucracy, which had increased significantly over the previous generation through the New Deal’s “alphabet agencies” and the demands of World War II and the Cold War. The Johnson Administration’s Great Society programs sped up this growth. The agencies were staffed at the upper levels with left-leaning members of the bureaucratic elite. Nixon’s relationship with the press was poisoned not only by their class-based disdain for him, but by the constant flow of leaks from government insiders who opposed him. Nixon tried to counteract that by greatly expanding the White House offices and staffing them with members who he believed were personally loyal to him. His reliance on those advisers rather than on the advice of entrenched establishment policy-makers threatened the political clout and personal self-esteem of the latter. What has been called Nixon’s plebiscitary style of executive government, relying on the approval of the voters rather than on that of the elite administrative cadre, also was a threat to the existing order. As Senator Charles Schumer warned President Trump in early January, 2017, about the intelligence “community,” “Let me tell you: You take on the intelligence community — they have six ways from Sunday at getting back at you.” Nixon, too, lived that reality.

Once out of office, Nixon generally stayed out of the limelight. The strategy worked well. As seems to be the custom for Republican presidents, once they are “former,” many in the press and among other “right-thinking people” came to see him as the wise elder statesman, much to be preferred to the ignorant cowboy (and dictator) Ronald Reagan. Who, of course, then came to be preferred to the ignorant cowboy (and dictator) George W. Bush. Who, of course, then came to be preferred to the ignorant reality television personality (and dictator) Donald Trump. Thus, the circle of political life continues. It ended for Nixon on April 22, 1994. His funeral five days later was attended by all living Presidents. Tens of thousands of mourners paid their respects.

The parallel to recent events should be obvious. That said, a comparison between the seriousness of the Watergate Affair that resulted in President Nixon’s resignation and the Speaker Nancy Pelosi/Congressman Adam Schiff/Congressman Jerry Nadler impeachment of President Trump brings to mind what may be Karl Marx’s only valuable observation, that historic facts appear twice, “the first time as tragedy, the second time as farce.”

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/


Guest Essayist: Danny de Gracia

The story of how men first set foot on the Moon one fateful day on July 20, 1969, will always be enshrined as one of America’s greatest contributions to history. When the first humans looked upwards to the night sky thousands of years ago, they must have marveled at the pale Moon looming in the heavens, set against the backdrop of countless stars. Inspired by the skies, and driven by a natural desire for exploration, humans must have wondered what was out there, and if it would be somehow possible to ever explore the distant heavens above.

Indeed, even the Bible tells us that the patriarch of faith, Abraham, was told by God in Genesis 15:5, “Look now toward heaven, and count the stars if you are able to number them. So shall your descendants be.”

The word given to Abraham may have been more than just an impressive way of promising an elderly man well past the age of fathering children that he would have many descendants; it reads more like an invitation: mankind’s destiny belongs not merely on Earth, but among the stars of the limitless cosmos, as a spacefaring civilization.

Early Beginnings

For most of mankind’s history, space travel was relegated to wild myths, hopeless dreams, and fanciful science fiction. The first hurdle in reaching for the stars would be mastering staying aloft in Earth’s atmosphere, which by itself was no easy task. Observing birds, humans for millennia had tried to emulate organic wings with little to no success, not truly understanding the science of lift or the physics of flight.

Like Icarus of Greek mythology, the 11th century English Benedictine monk Eilmer of Malmesbury attempted to foray into the skies by fashioning wings as a kind of primitive glider, but he only succeeded in flying a short distance before he crashed, breaking his legs. Later, Jean-François Pilâtre de Rozier would give mankind a critical first in flight when he took off aboard the Montgolfier hot air balloon in 1783.

Ironically, it would not be benevolent inspiration that freed mankind from his millennia-old ties to the ground beneath his feet, but the pressing demands of war and the increasing militarization of the planet. As the Industrial Age began, so also arose the age of industrialized warfare, and men knew from countless battles that whoever held the high ground could defend any stronghold or defeat any army. And what greater high ground could afford victory than the heavens themselves?

Once balloons had been proven an effective and stable means of flight, militaries began to use them as spotting platforms to observe enemy movements from a distance and provide accurate targeting for artillery. Notably, during the American Civil War, balloons served as a kind of early air force for both the Union and the Confederacy.

When the Wright Brothers at last mastered controlled, powered flight in a fixed-wing aircraft on December 17, 1903, the airplane was barely a decade old before the First World War erupted, and aircraft and airships would become crucial weapons in deciding the outcome of battles.

Germany’s defeat, which was seen by many Germans as something that should not have happened and should never happen again, stirred people like the former army lance corporal Adolf Hitler to pursue more advanced aerial weapons as a means of establishing military superiority.

Even as propeller planes were seen as the ultimate form of aircraft by most militaries of the time, in the late 1930s, German engineers Eugen Sänger and Irene Bredt were already envisioning spacecraft to attack enemies from orbit. In 1941, they conceived plans for the Silbervogel (“Silver Bird”), a rocket-powered space bomber that could take off into low Earth orbit, descend, and bounce off the outer atmosphere like a tossed stone skipping across a pond to reach an enemy target even half a world away.

Fortunately for the United States, the Silbervogel would never be produced, but other German scientists would be working on wonder weapons of their own, one of them being Wernher von Braun, an engineer who had childhood dreams of landing men on the Moon with rockets.

Working at the Peenemünde Army Research Center, von Braun infamously gave Nazi Germany the power to use V-2 rockets, a kind of early ballistic missile that could deliver a high-explosive warhead hundreds of miles away. One such V-2 rocket, MW 18014, test launched on June 20, 1944, became the first man-made object to cross above the Kármán line – Earth’s atmospheric edge of space – when it reached an apogee of 176 kilometers in flight.

While these weapons did not win the war for Nazi Germany, they aroused the interest of both the United States and the Soviets, and as the victorious Allies reclaimed Europe, a frantic effort to capture German scientists for their aerospace knowledge would become the prelude to a coming Cold War.

The Nuclear Age and Space

The use of the Fat Man and Little Boy atomic bombs against Japan brought to light a realization among planners in both the United States and the Soviet Union: The next battleground for control of the planet would be space. Between the difficulty in intercepting weapons like the V-2 rocket, and the destructive capability of the atom bomb, the nations that emerged victorious in WWII all saw potential in combining these technologies together.

At the end of WWII, both the Soviet Union and the United States brought back to their countries numerous German scientists and unused V-2 rockets for the purposes of creating their own next-generation of missiles.

The early V-2 rockets developed by von Braun for Nazi Germany were primitive and inaccurate weapons, but they had demonstrated the capability to carry objects, such as an explosive warhead, in high ballistic arcs over the earth. Early atomic bombs were bulky and extremely heavy, which meant that in order to deliver these weapons of mass destruction across space, larger rockets would need to be developed.

It is no accident then that the early space launchers of both the Soviet Union and the United States were, in fact, converted intercontinental ballistic missiles (or ICBMs) meant for delivering nuclear payloads. The first successful nuclear ICBM was the Soviet R-7 Semyorka (NATO reporting name SS-6 “Sapwood”), which would be the basis for the modified rocket 8K71PS No. M1-1PS, that sent Sputnik, the world’s first artificial satellite, into orbit on October 4, 1957.

The success of the Soviets in putting the first satellite into orbit awed the entire world, but it was disturbing to President Dwight D. Eisenhower’s White House, because it was not lost on the U.S. military that this accomplishment was more or less a demonstration of Soviet nuclear delivery capabilities.

And while the United States in 1957 had an overwhelming superiority in nuclear weapons relative to the Soviets, the nuclear doctrine of the early Cold War was structured around a bluff of “massive retaliation,” created by Secretary of State John Foster Dulles, which was intended to minimize the proliferation of new conflicts – including in space – by threatening atomic use as the default response.

“If an enemy could pick his time and place and method of warfare,” Dulles had said in a dinner before the Council on Foreign Relations in January 1954, “and if our policy was to remain the traditional one of meeting aggression by direct and local opposition, then we needed to be ready to fight in the Arctic and in the Tropics; in Asia, the Near East; and in Europe; by sea, by land, and by air; with old weapons, and with new weapons.”

A number of terrifying initial conclusions emerged from the success of Sputnik. First, it showed that the Soviets had reached the ultimate high ground before U.S./NATO forces, and that their future ICBMs could potentially put any target in the world at risk for nuclear bombardment.

To put things into perspective, a jet bomber like the American B-47, B-52, or B-58 of the time took upwards of eight hours cruising through the stratosphere to strike a target from its airbase. But an ICBM, which can reach speeds of Mach 23 or faster in its terminal descent, can hit any target in the world within about 35 minutes of launch. This destabilizing development whittled down the U.S. advantage, as it gave the Soviets the possibility of firing first in a surprise attack to “decapitate” any superior American or NATO forces that might be used against them.

The second, and more alarming, perception created by the Soviet entry into space was that America had dropped the ball and been left behind, not only technologically but historically. In the Soviet Union, Nikita Khrushchev sought to test the resolve of both the United States and the NATO alliance by showcasing novel technological accomplishments, such as the Sputnik launch, to cast a long shadow over Western democracies and to imply that communism would be the wave of the future.

In a flurry of briefings and technical research studies that followed the Sputnik orbit, von Braun and other scientists in the U.S. determined that while the Soviets had beaten the West into orbit, the engineering and industrial capabilities of America would ultimately make it feasible for the U.S. over the long term to accomplish a greater feat, in which a man could be landed on the Moon.

Texas Senator Lyndon B. Johnson, later to be vice president to the young, idealistic John F. Kennedy, would be one of the staunchest drivers behind the scenes in pushing for America’s landing on the Moon. The early years of the space race were tough to endure, as NASA, America’s fledgling new civilian space agency, seemed – at least in public – to always be one step behind the Soviets in accomplishing space firsts.

Johnson, a rough-around-the-edges, technocratic leader who saw the necessity of preventing a world “going to sleep by the light of a communist Moon,” pushed to keep America in the space fight even when it appeared, to some, as though American space rockets “always seemed to blow up.” His leadership added resolve to the Kennedy administration to stay the course, and arguably helped ensure that America would be the first and only nation to land men on the Moon.

The Soviets would score another blow to America when on April 12, 1961, cosmonaut Yuri Gagarin became the first human in space when he made a 108-minute orbital flight, launched on the Vostok-K 8K72K rocket, another R-7 ICBM derivative.

But a month later, on May 5, 1961, NASA began to catch up with the Soviets when Alan Shepard and his Freedom 7 space capsule successfully made it into space, brought aloft by the Mercury-Redstone rocket, which was adapted from the U.S. Army’s PGM-11 Redstone short-range nuclear ballistic missile.

Each manned launch and counter-launch between the two superpowers was more than just a demonstration of scientific discovery; they were suggestions of nuclear launch capabilities, specifically, the warhead throw weight power of either country’s missiles, and a thinly veiled competition of who, at any given point in time, was winning the Cold War.

International Politics and Space

President Kennedy, speaking at Rice University on September 12, 1962, just one month before the Cuban Missile Crisis, hinted to the world that the Soviet advantage in space was not quite what it seemed to be, and that perhaps some of their “less public” space launches had been failures. Promising to land men on the Moon before the decade ended, Kennedy’s “Moon speech” at Rice has been popularly remembered as the singular moment when America decided to come together and achieve the impossible, but this is not the whole story.

In truth, ten days after giving the Moon speech, Kennedy privately reached out to Khrushchev, pleading with him to make the landing a joint affair, only to be rebuffed, and then to find himself on October 14 of that same year ambushed by the Soviets with offensive nuclear missiles pointed at the U.S. from Cuba.

Kennedy thought himself to be a highly persuasive, flexible leader who could peaceably talk others into agreeing to make political changes, which set him at odds with the more hard-nosed, realpolitik-minded members of both his administration and the U.S. military. It also invited testing of his mettle by the salty Khrushchev, who saw the youthful American president – “Profiles in Courage” aside – as inexperienced, pliable, and a pushover.

Still, while the Moon race was a crucial part of keeping America and her allies encouraged amidst the ever-chilling Cold War, the Cuban Missile Crisis deeply shook Kennedy and brought him face-to-face with the possibility of a nuclear apocalypse.

Kennedy had already nearly gone to nuclear war once before during the now largely forgotten Berlin Crisis of 1961 when his special advisor to West Berlin, Lucius D. Clay, responded to East German harassment of American diplomatic staff with aggressive military maneuvers, but the Cuba standoff had become one straw too heavy for the idealistic JFK.

Fearing the escalating arms race, experiencing sticker shock over the growing cost of the Moon race he had committed America to, and ultimately wanting to better relations with the Soviet Union, a year later on September 20, 1963 before the United Nations, Kennedy dialed his public Moon rhetoric back and revisited his private offer to Khrushchev when he asked, albeit rhetorically, “Why, therefore, should man’s first flight to the Moon be a matter of national competition?”

The implications of a joint U.S.-Soviet Moon landing may have tickled the ears of world leaders throughout the General Assembly, but behind the scenes, it agitated both Democrats and Republicans alike, who not-so-secretly began to wonder if Kennedy was “soft” on communism.

Even Kennedy’s remarks to the press over the developing conflict in Vietnam during his first year as president were especially telling about his worldview amidst the arms race and space race of the Cold War: “But we happen to live – because of the ingenuity of science and man’s own inability to control his relationships with one another – we happen to live in the most dangerous time in the history of the human race.”

Kennedy’s handling of the Bay of Pigs, Berlin, the Cuban Missile Crisis, and his more idealistic approaches to the openly belligerent Soviet Union began to shake the political establishment, and the possibility of ceding the Moon to a kind of squishy, joint participation trophy embittered those who saw an American landing as a crucial refutation of Soviet advances.

JFK was an undeniably formidable orator, but in the halls of power, he was beginning to develop a reputation in his presidency as eroding America’s post-WWII advantages as a military superpower and leader of the international system. His rhetoric made some nervous, and suggestions of calling off an American Moon landing put a question mark over the future of the West for some.

Again, the Moon race wasn’t just about landing men on the Moon; it was about showcasing the might of one superpower over the other, and Kennedy’s attempts to roll back America’s commitment to space in favor of acquiescing to a Moon shared with the Soviets could have potentially cost the West the outcome of the Cold War.

As far back as 1961, NASA had already sought the assistance of the traditionally military-oriented National Reconnaissance Office (NRO) to gain access to top secret, exotic spy technologies which would assist them in surveying the Moon for future landings, and would later enter into memorandums of agreement with the NRO, Department of Defense, and Central Intelligence Agency. This is important, because the crossover between the separations of civilian spaceflight and military/intelligence space exploitation reflects how the space race served strategic goals rather than purely scientific ones.

On August 28, 1963, Secretary of Defense Robert McNamara and NASA Administrator James Webb had signed an MOA titled “DOD/CIA-NASA Agreement on NASA Reconnaissance Programs” (Document BYE-6789-63) which stated “NRO, by virtue of its capabilities in on-going reconnaissance satellite programs, has developed the necessary technology, contractor resources, and management skills to produce satisfactory equipments, and appropriate security methods to preserve these capabilities, which are currently covert and highly sensitive. The arrangement will properly match NASA requirements with NRO capabilities to perform lunar reconnaissance.”

Technology transfers also went both ways. The Gemini space capsules, developed by NASA as part of the efforts to master orbital operations such as spacewalks, orbital docking, and other aspects deemed critical to an eventual Moon landing, would even be considered by the United States Air Force for a parallel military space program on December 16, 1963. Adapting the civilian Gemini design into an alternate military version called the “Gemini-B,” the Air Force intended to put crews in orbit to a space station called the Manned Orbiting Laboratory (MOL), which would serve as a reconnaissance platform to take pictures of Soviet facilities.

While the MOL program would ultimately be canceled in its infancy by the Richard Nixon Administration in 1969, before ever actually going online, it was yet another demonstration of the close-knit relationship between civilian and military space exploration in pursuit of the same interests.

Gold Fever at NASA

Whatever President Kennedy’s true intentions may have been moving forward on the space race, his death two months after his UN speech at the hands of assassin Lee Harvey Oswald in Dallas on November 22, 1963, would be seized upon as a justification by the establishment to complete the original 1962 Rice University promise of landing an American on the Moon first, before the end of the decade.

Not surprisingly, one of Johnson’s very first actions in assuming the presidency after the death of Kennedy was to issue Executive Order 11129 on November 29, 1963, re-naming NASA’s Launch Operations Center in Florida as the “John F. Kennedy Space Center,” a politically adroit maneuver which ensured the space program was now seen as synonymous with the fallen president.

In world history, national icons and martyrs – even accidental or involuntary ones – are powerful devices for furthering causes that would ordinarily burn out and lose momentum if left to private opinion alone. Kennedy’s death led to a kind of “gold fever” at NASA in defeating the Soviets, and many stunning advances in space technology would be won in the aftermath of his passing.

So intense was the political pressure and organizational focus at NASA that some began to worry that corners were being cut and that there were serious issues that needed to be addressed.

On January 27, 1967, NASA conducted a “plugs out test” of their newly developed Apollo space capsule, in which launch conditions would be simulated on the launch pad with the spacecraft running on internal power. The test mission, designated AS-204, had been strongly cautioned against by the spacecraft’s manufacturer, North American Aviation, because it would take place at sea level with a pure-oxygen atmosphere pressurized above normal atmospheric pressure. Nevertheless, NASA proceeded with the test.

Astronauts Roger B. Chaffee, Virgil “Gus” Grissom, and Ed White, who crewed the test mission, would perish when an electrical malfunction sparked a fire that spread rapidly in the pure-oxygen atmosphere of the capsule. Their deaths threatened to bring the entire U.S. space program to a screeching halt, but NASA was able to rise above the tragedy, adding the loss of its astronauts as yet another compelling case for making it to the Moon before the decade would end.

On January 30, 1967, the Monday that followed the “Apollo 1” fire, NASA flight director Eugene F. Kranz gathered his staff together and gave an impromptu speech that would change the space agency forever.

“Spaceflight will never tolerate carelessness, incapacity, and neglect,” he began. “Somewhere, somehow, we screwed up. It could have been in design, build, or test. Whatever it was, we should have caught it.”

He would go on to say, “We did not do our job. We were rolling the dice, hoping that things would come together by launch day, when in our hearts we knew it would be a miracle. We were pushing the schedule and betting that the Cape would slip before we did. From this day forward, Flight Control will be known by two words: Tough and Competent. ‘Tough’ means we are forever accountable for what we do or what we fail to do. We will never again compromise our responsibilities. Every time we walk into Mission Control, we will know what we stand for.”

“‘Competent’ means we will never take anything for granted. We will never be found short in our knowledge and in our skills; Mission Control will be perfect. When you leave this meeting today, you will go back to your office and the first thing you will do there is to write ‘Tough and Competent’ on your blackboards. It will never be erased. Each day when you enter the room, these words will remind you of the price paid by Grissom, White, and Chaffee. These words are the price of admission to the ranks of Mission Control.”

And “tough and competent” would be exactly what NASA would become in the days, months, and years to follow. The U.S. space agency in the wake of the Apollo fire would set exacting standards of professionalism, quality, and safety, even as they continued to increase in mastery of the technology and skills necessary to make it to the Moon.

America’s Finest Hour

Unbeknownst to U.S. intelligence agencies, the Soviets had already fallen far behind in their own Moon program, and their N1 rocket, which was meant to compete with the U.S. Saturn V, was by no means ready for manned use. Unlike NASA, the Soviet space program had become completely dependent on a volatile combination of personalities and politics, which bottlenecked innovation, slowed necessary changes, and, in the end, made it impossible to adapt appropriately in the race for the Moon.

On December 21, 1968, the U.S. leapt into first place in the space race when Apollo 8 entered history as the first crewed spacecraft to leave Earth, orbit the Moon, and return. Having combined decades of military and civilian science, overcome terrible tragedies, and successfully applied lessons learned into achievements won, NASA could at last go on to attain mankind’s oldest dream of landing on the Moon with the Apollo 11 mission, launched on July 16, 1969 from the Kennedy Space Center launch complex LC-39A.

Astronauts Neil A. Armstrong, Edwin E. “Buzz” Aldrin Jr., and Michael Collins would reach lunar orbit on July 19, where they would survey their target landing site at the Sea of Tranquility and begin preparations for separation from the Command Module, Columbia, and landing in the Lunar Module, Eagle.

On Sunday, July 20, Armstrong and Aldrin would leave Collins behind to pilot the Command Module and begin their descent to the lunar surface below. Discovering their landing area strewn with large boulders, Armstrong took the Lunar Module out of computer control and manually steered the lander on its descent while searching for a suitable location, finding himself with a mere 50 seconds of fuel left. But at 4:17 p.m. EDT, Armstrong would touch down safely, declaring to a distant planet Earth, “Houston, Tranquility Base here. The Eagle has landed.”

Communion on the Moon

As if to bring humanity full circle, two hours after landing on the surface of the Moon, Aldrin, a Presbyterian, quietly and unknown to NASA back on Earth, would remove from his uniform a small 3” x 5” notecard with a hand-written passage from John 15:5. Taking Communion on the Moon, Aldrin would read within the Lunar Module, “As Jesus said: I am the Vine, you are the branches. Whoever remains in Me, and I in Him, will bear much fruit; for you can do nothing without Me.”

Abraham, the Bible’s “father of faith,” could almost be said to have been honored by Aldrin’s confession of faith. In a sense, the landing of a believing astronaut on a distant heavenly object was like a partial fulfillment of the prophecy of Genesis 15:5, in which Abraham’s descendants would be like the stars in the sky.

Later, when Armstrong left the Lunar Module and scaled the ladder down to the Moon’s dusty surface, he would radio back to Earth, “That’s one small step for a man; one giant leap for mankind.” Due to a 35-millisecond interruption in the signal, listeners would not hear the “one small step for a man” but instead, “one small step for man,” leaving the entire world with the impression that the NASA astronauts had won not just a victory for America, but for humankind, as a whole.

After the planting of Old Glory, the flag of the United States of America, in the soft lunar dust, the Moon race had officially been won, and the Soviets, having lost the initiative, would scale back their space program to focus on other objectives, such as building space stations and attempting to land probes on other planets. The Soviets not only lost the Moon race; their expensive investment, which produced no propaganda success, would ultimately help cost them the Cold War as well.

America would go on to land men on the Moon a total of six times, with twelve different astronauts walking its surface, between July 20, 1969 (Apollo 11) and December 11, 1972 (Apollo 17). The result of the U.S. winning the Moon race was to assure the planet that the Western world would not be overtaken by the communist bloc, and many technologies developed for the U.S. civilian space program or for military aerospace applications would later find their way into commercial, everyday use.

While other nations and organizations, including Russia, the European Union, Japan, India, China, Luxembourg, and Israel, have successfully sent unmanned probes to the Moon, to this date only the United States holds the distinction of having placed humans on the Moon.

Someday, hopefully soon, humans will return once again to the Moon, and even travel from there to distant planets, or even distant stars. But no matter how far humanity travels, the enduring legacy of July 20, 1969 will be that freedom won the 20th century because America, not the Soviets, won the Moon race.

Landing on the Moon was a global victory for humanity, but getting there first will forever be a uniquely American accomplishment.

Dr. Danny de Gracia, Th.D., D.Min., is a political scientist, theologian, and former committee clerk to the Hawaii State House of Representatives. He is an internationally acclaimed author and novelist who has been featured worldwide in the Washington Times, New York Times, USA Today, BBC News, Honolulu Civil Beat, and more. He is the author of American Kiss: A Collection of Short Stories.


Guest Essayist: Tony Williams
USS Maddox, 1960s

On March 12, 1947, President Harry Truman delivered a speech advocating assistance to Greece and Turkey to resist communism as part of the early Cold War against the Soviet Union. The speech enunciated the Truman Doctrine, which marked a departure from the country’s traditional foreign policy toward a more expansive role in global affairs.

Truman said, “I believe that it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Protecting the free world against communist expansion became the basis for the policy of Cold War containment.

The United States fought a major war in Korea in the early 1950s to halt the expansion of communism in Asia especially after the loss of China in 1949. Although President Dwight D. Eisenhower had resisted the French appeal to intervene in Vietnam at Dien Bien Phu in 1954, the United States gradually increased its commitment and sent thousands of military advisers and billions of dollars in financial assistance over the next decade.

In the summer of 1964, President Lyndon B. Johnson was in the midst of a presidential campaign against Barry Goldwater and pushing his Great Society legislative program through Congress. He did not want to allow foreign affairs to imperil either and downplayed increased American involvement in the war.

Administration officials were quietly considering bombing North Vietnam or sending ground troops to interdict the Viet Cong insurgency in South Vietnam. Meanwhile, the United States Navy was running covert operations in the waters off North Vietnam in the Gulf of Tonkin.

On August 2, the destroyer USS Maddox and several U.S. fighter jets from a nearby carrier exchanged fire with some North Vietnamese gunboats. The U.S. warned North Vietnam that further “unprovoked” aggression would have “grave consequences.” The USS Turner Joy was dispatched to patrol with the Maddox.

On August 4, the Maddox picked up multiple enemy radar contacts in severe weather, but no solid proof confirmed the presence of the enemy. Whatever the uncertainty surrounding the event, the administration proceeded as if a second attack had definitely occurred. It immediately ordered a retaliatory airstrike and sought a congressional authorization of force. President Johnson delivered a national television address and said, “Repeated acts of violence against the armed forces of the United States must be met… we seek no wider war.”

On August 7, Congress passed the Tonkin Gulf Resolution, which authorized the president “to take all necessary measures to repel any armed attack against the forces of the United States and to prevent further aggression.” The House passed the joint resolution unanimously, and the Senate passed it with only two dissenting votes.

The Tonkin Gulf Resolution became the basis for fighting the Vietnam War. World War II remained the last congressional declaration of war.

President Johnson had promised the electorate that he would not send “American boys to fight a war Asian boys should fight for themselves.” However, the administration escalated the war over the next several months.

On February 7, 1965, the Viet Cong launched an attack on the American airbase at Pleiku. Eight Americans were killed and more than one hundred wounded. President Johnson and Secretary of Defense Robert McNamara used the incident to expand the American commitment significantly but sought a piecemeal approach that would largely avoid a contentious public debate over American intervention.

Within a month, American ground troops were introduced into Vietnam as U.S. Marines went ashore and were stationed at Da Nang to protect an airbase there. The president soon authorized deployment of thousands more troops. In April, he approved Operation Rolling Thunder which launched a sustained bombing campaign against North Vietnam.

It did not take long for the Marines to establish offensive operations against the communists. The Marines initiated search and destroy missions to engage the Viet Cong. They fought several battles with the enemy, requiring the president to send more troops.

In April 1965, the president finally explained his justification for escalating the war, which included the Cold War commitment to the free world. He told the American people, “We fight because we must fight if we are to live in a world where every country can shape its own destiny. And only in such a world will our own freedom be finally secure.”

As a result, Johnson progressively sent more and more troops to fight in Vietnam until there were 565,000 troops in 1968. The Tet Offensive in late January 1968 was a profound shock to the American public, which had received repeated promises of progress in the war. Even though U.S. forces recovered from the initial shock and won an overwhelming military victory that effectively neutralized the Viet Cong and devastated North Vietnamese Army forces, President Johnson was ruined politically and announced he would not run for re-election. His “credibility gap” contributed to growing distrust of government and concern about an unlimited and unchecked “imperial presidency,” soon made worse by Watergate.

The Vietnam War contributed to profound division on the home front. Hundreds of thousands of Americans from across the spectrum protested American involvement in Vietnam. Young people from the New Left were at the center of teach-ins and demonstrations on college campuses across the country. The Democratic Party was shaken by internal convulsions over the war, and conservatism dominated American politics for a generation culminating in the presidency of Ronald Reagan.

Eventually, more than 58,000 troops were lost in the war. The Cold War consensus on containment suffered a dislocation, and a Vietnam syndrome affected morale in the U.S. military and contributed to significant doubts about the projection of American power abroad. American confidence recovered in the 1980s as the United States won the Cold War, but policymakers have struggled to define the purposes of American foreign policy with the rise of new global challenges in the post-Cold War world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Dan Morenoff

It took almost a century for Congress, and President Lyndon B. Johnson, a Democrat from Texas, to enact the Civil Rights Act of 1964, putting America back on the side of defending the equality before the law of all U.S. Citizens. That act formally made segregation illegal. It legally required states to stop applying facially neutral election laws differently, depending on the race of the citizen trying to register and vote. If the Civil Rights Act of 1957 had raised expectations by showing what was now possible, the Civil Rights Act of 1964 again dramatically raised expectations to actual equal treatment by governments.

But the defenders of segregation were not yet done. They continued to pursue the “massive resistance” to integration that emerged in the year between Brown I and Brown II.[1] They continued to refuse to register black voters, to use “literacy” tests (which tested esoteric knowledge, rather than literacy) only to deny black citizens the chance to register, and to murder those who didn’t get the message.

Jimmie Lee Jackson was one such victim of last-ditch defiance. In February 1965, Jackson, an Alabama church deacon, led a demonstration in favor of voting rights in his hometown of Marion, Alabama; when state troopers attacked the marchers, a trooper shot Jackson, who died of his wounds days later. The Southern Christian Leadership Conference (in apparent coordination with the White House) responded by organizing a far larger march for voting rights, one that would cover the 54 miles from Selma, Alabama, to the capitol in Montgomery. On March 7, 1965, that march reached the Edmund Pettus Bridge in Selma, where national and international television cameras captured (and broadcast into living rooms everywhere) Alabama state troopers gassing and beating unarmed demonstrators. When the SCLC committed to continuing the march, others flocked to join them. Two days later, as a federal court considered enjoining further state action against the demonstrators, a mob fatally beat James Reeb, a Unitarian minister from Boston who had flown in to join the march.

Johnson Returns to Congress

Less than a week later, President Johnson called Congress into a special session and began it with a nationally televised Presidential address to a Joint Session.[2] Urging “every member of both parties, Americans of all religions and of all colors, from every section of this country” to join him in working “for the dignity of man and the destiny of democracy,” President Johnson, the heavily accented man-of-the-South whom Senator Richard Russell, a Democrat from Georgia, once had connived to get into the Presidency, compared the historical “turning point” confronting the nation to other moments “in man’s unending search for freedom” including the battles of “Lexington and Concord” and the surrender at “Appomattox.” President Johnson defined the task before Congress as a “mission” that was “at once the oldest and the most basic of this country: to right wrong, to do justice, to serve man.” The President identified the core issue – that “of equal rights for American Negroes” – as one that “lay bare the secret heart of America itself[,]” a “challenge, not to our growth or abundance, or our welfare or our security, but rather to the values, and the purposes, and the meaning of our beloved nation.”

He said more. President Johnson recognized that “[t]here is no Negro problem. There is no Southern problem. There is no Northern problem. There is only an American problem. And we are met here tonight as Americans — not as Democrats or Republicans. We are met here as Americans to solve that problem.”  And still more:

“This was the first nation in the history of the world to be founded with a purpose. The great phrases of that purpose still sound in every American heart, North and South: ‘All men are created equal,’ ‘government by consent of the governed,’ ‘give me liberty or give me death.’ Well, those are not just clever words, or those are not just empty theories. In their name Americans have fought and died for two centuries, and tonight around the world they stand there as guardians of our liberty, risking their lives.

“Those words are a promise to every citizen that he shall share in the dignity of man. This dignity cannot be found in a man’s possessions; it cannot be found in his power, or in his position.  It really rests on his right to be treated as a man equal in opportunity to all others. It says that he shall share in freedom, he shall choose his leaders, educate his children, provide for his family according to his ability and his merits as a human being. To apply any other test – to deny a man his hopes because of his color, or race, or his religion, or the place of his birth is not only to do injustice, it is to deny America and to dishonor the dead who gave their lives for American freedom.

“Every American citizen must have an equal right to vote.

“There is no reason which can excuse the denial of that right.  There is no duty which weighs more heavily on us than the duty we have to ensure that right.

“Yet the harsh fact is that in many places in this country men and women are kept from voting simply because they are Negroes. Every device of which human ingenuity is capable has been used to deny this right. The Negro citizen may go to register only to be told that the day is wrong, or the hour is late, or the official in charge is absent. And if he persists, and if he manages to present himself to the registrar, he may be disqualified because he did not spell out his middle name or because he abbreviated a word on the application. And if he manages to fill out an application, he is given a test. The registrar is the sole judge of whether he passes this test. He may be asked to recite the entire Constitution, or explain the most complex provisions of State law. And even a college degree cannot be used to prove that he can read and write.

“For the fact is that the only way to pass these barriers is to show a white skin. Experience has clearly shown that the existing process of law cannot overcome systematic and ingenious discrimination. No law that we now have on the books – and I have helped to put three of them there – can ensure the right to vote when local officials are determined to deny it. In such a case our duty must be clear to all of us. The Constitution says that no person shall be kept from voting because of his race or his color. We have all sworn an oath before God to support and to defend that Constitution. We must now act in obedience to that oath.

“We cannot, we must not, refuse to protect the right of every American to vote in every election that he may desire to participate in. And we ought not, and we cannot, and we must not wait another eight months before we get a bill. We have already waited a hundred years and more, and the time for waiting is gone.

“But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and State of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too.  Because it’s not just Negroes, but really it’s all of us, who must overcome the crippling legacy of bigotry and injustice.

“And we shall overcome.

“The real hero of this struggle is the American Negro. His actions and protests, his courage to risk safety and even to risk his life, have awakened the conscience of this nation. His demonstrations have been designed to call attention to injustice, designed to provoke change, designed to stir reform.  He has called upon us to make good the promise of America.  And who among us can say that we would have made the same progress were it not for his persistent bravery, and his faith in American democracy.

“For at the real heart of [the] battle for equality is a deep[-]seated belief in the democratic process. Equality depends not on the force of arms or tear gas but depends upon the force of moral right; not on recourse to violence but on respect for law and order.

“And there have been many pressures upon your President and there will be others as the days come and go. But I pledge you tonight that we intend to fight this battle where it should be fought – in the courts, and in the Congress, and in the hearts of men.”

The Passage and Success of the Voting Rights Act

Congress made good on the President’s promises and fulfilled its oath.  The Voting Rights Act, the crowning achievement of the Civil Rights Movement, was signed into law in August 1965, less than five (5) months after those bloody events in Selma.

The VRA would allow individuals to sue in federal court when their voting rights were denied. It would allow the Department of Justice to do the same. And, recognizing that “voting discrimination … on a pervasive scale” justified an “uncommon exercise of congressional power[,]” despite the attendant “substantial federalism costs[,]” it required certain states and localities, for a limited time, to obtain the approval (or “pre-clearance”) of either DOJ or a federal court sitting in Washington, DC before making any alteration to their voting laws, from registration requirements to the location of polling places.[3]

And it worked.

The same Alabama Governor and Democrat, George Wallace, who (after losing his first race for governor) had promised himself never to be “out-segged” again and who, on taking office as governor in 1963, had proclaimed “segregation today, segregation tomorrow, segregation forever[!]”, would win election again in 1982 by seeking and obtaining the majority support of Alabama’s African Americans. By 2013, “African-American voter turnout exceeded white voter turnout in five of the six States originally covered by [the pre-clearance requirement], with a gap in the sixth State of less than one half of one percent;”[4] the rate of preclearance submissions drawing DOJ objections had dropped about 100-fold between the first decade under pre-clearance and 2006.[5]

At long last, with only occasional exceptions (themselves addressed through litigation under the VRA), American elections were held consistent with the requirements of the Constitution and the equality before the law of all U.S. Citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Southern states might now be required by law to integrate their public schools, but, by and large, they didn’t yet do so. That would follow around 1970, when a pair of events forced the issue: (a) a University of Southern California football team led by fullback Sam Cunningham drubbed the University of Alabama in the Crimson Tide’s 1970 home opener – so allowing Alabama Coach Bear Bryant to finally convince Alabama Governor George Wallace that the state must choose between having competitive football or segregated football; and (b) President Nixon quietly confronted the Southern governments that had supported his election with the conclusion of the American intelligence community that their failure to integrate was costing America the Cold War – they must decide whether they hated their black neighbors more than they hated the godless Communists. However ironically, what finally killed Jim Crow was a love of football and a hatred of Marxism.

[2] See, https://www.americanrhetoric.com/speeches/lbjweshallovercome.htm.

[3] Shelby County v. Holder, 570 U.S. 529, 133 S.Ct. 2612, 2620 and 2624 (2013) (each citing South Carolina v. Katzenbach, 383 U.S. 301, 308 and 334 (1966)); and at 2621 (citing Northwest Austin Municipal Util. Dist. No. One v. Holder, 557 U.S. 193, 202-03 (2009)), respectively.

[4] Id. at 2626.

[5] Id.

Guest Essayist: Dan Morenoff

For a decade after the Civil War, the federal government sought to make good its promises and protect the rights of the liberated as American citizens. Most critically, in the Civil Rights Act of 1866, Congress created U.S. Citizenship and, in the Civil Rights Act of 1875, Congress guaranteed all American Citizens access to all public accommodations. Then, from 1877 through most of the century that followed the Civil War, the federal government did nothing to ensure that those rights were respected. Eventually, in Brown v. Board of Education, the Supreme Court started to admit that this was a problem, a clear failure to abide by our Constitution. But the Supreme Court (in Brown II) also made clear that it wouldn’t do anything about it.

So things stood, until a man in high office made it his business to get the federal government again on the side of right, equality, and law. That man was Lyndon Baines Johnson. And while this story could be told in fascinating, exhaustive detail,[1] these are its broad outlines.

Jim Crow’s Defenders

Over much of the century following the Civil War’s close, the American South was an accepted aberration, where the federal government turned a blind eye to government mistreatment of U.S. Citizens (as well as to the systematic failure of governments to protect U.S. Citizens from mob rule and racially tinged violence), and where the highest office White Southerners could realistically dream of attaining was a seat in the U.S. Senate from which such a Southerner could keep those federal eyes blind.[2], [3] So, for the decades when it mattered, Southern Senators used their seniority and the procedures of the Senate (most prominently the filibuster, pioneered by South Carolina’s John C. Calhoun in the early 1800s) to block any federal ban on lynching, to protect the region’s racial caste system from federal intrusion, and to steadily steer federal money into the rebuilding of their broken region. For decades, the leader of these efforts was Senator Richard Russell, a Democrat from Georgia and an avowed racist, if one whose insistence on the prerogatives of the Senate and leadership on other issues nonetheless earned him the unofficial title, “the Conscience of the Senate.”

LBJ Enters the Picture

Then Lyndon Baines Johnson got himself elected to the Senate as a Democrat from Texas in 1948. He did so through fraud in a hotly contested election. The illegal ballots counted on his behalf turned a narrow defeat into an 87-vote victory that prompted his Senate colleagues to call him “Landslide Lyndon” for the rest of his career.

By that time, LBJ had established a number of traits that would remain prominent throughout the rest of his life. Everywhere he went, LBJ consistently managed to convince powerful men to treat him as if he were their professional son. For one example, LBJ had convinced the president of his college to treat him, alone among decades of students, as a preferred heir. For another, as a Congressman, he managed to convince Sam Rayburn, a Democrat from Texas and Speaker of the House for 17 of 21 years, a man before whom everyone else in Washington cowered, to allow LBJ to regularly walk up to him in large gatherings and kiss his bald head. And everywhere he went, LBJ consistently managed to identify positions no one else wanted that held potential leverage and therefore could be made focal points of enormous power. When LBJ worked as a Capitol Hill staffer, he turned a “model Congress,” in which staffers played at being their bosses, into a vehicle to move actual legislation through the embarrassment of his rivals’ bosses. On a less positive note, everywhere he went, LBJ demonstrated (time and again) an enthusiasm for verbally and emotionally abusing those subject to his authority, such as staffers, girlfriends, and his wife, sometimes in the service of good causes and other times entirely in the name of his caprice and meanness.

In the Senate, LBJ followed form. He promptly won the patronage of Richard Russell, convincing the arch-segregationist both that he was the Southerner capable of taking up Russell’s mantle after him and that Russell should teach him everything he knew about Senate procedure. Arriving at a time when everyone else viewed Senate leadership positions as thankless drudgery, LBJ talked his way into being named his party’s Senate Whip in only his second Congress in the chamber. Four years later, having impressed his fellow Senators with his ability to accurately predict how they would vote, even as they grew to fear his beratings and emotional abuse, LBJ emerged as Senate Majority Leader. And in 1957, seizing on the support of President Dwight D. Eisenhower, a Republican from Kansas, for such a measure and on the Supreme Court’s recent issuance of Brown, LBJ managed to convince Russell both that the Senate must pass the first Civil Rights Act since Reconstruction – a comparatively weak bill, palatable to Russell as a way to prevent the passage of a stronger one – and that Russell should help him pass it to advance LBJ’s chances of winning the Presidency in 1960 as a loyal Southerner. Substantively, that 1957 Act created the U.S. Civil Rights Commission, a clearinghouse for ideas for further reforms, but one with no enforcement powers. The Act’s real power, though, wasn’t in its substance. Its real power lay in what it demonstrated was suddenly possible: where a weak act could pass, a stronger one was conceivable. And where one was conceivable, millions of Americans long denied equality, Americans taught by Brown, in the memorable phraseology of the Reverend Martin Luther King, Jr., that “justice too long delayed is justice denied,” would demand the passage of the possible.

The Kennedy Years

Of course, Johnson didn’t win the Presidency in 1960. But, in part thanks to Rayburn and Russell’s backing, he did win the Vice Presidency. There, he could do nothing, and did. President John F. Kennedy, a Democrat from Massachusetts, didn’t trust him, the Senate gave him no role, and Bobby Kennedy, the President’s in-house proxy and functional Prime Minister, officially serving as Attorney General, openly mocked and dismissed Johnson as a washed-up, clownish figure. So, as the Civil Rights Movement pressed for action to secure the equality long denied, as students were arrested at lunch counters and Freedom Riders were viciously beaten, LBJ could only take the Attorney General’s abuse, silently sitting back and watching the President commit the White House to pushing for a far more aggressive Civil Rights Act, even as it had no plan for how to get it passed over the opposition of Senator Russell and his bloc of Southern Senators.

Dallas, the Presidency, and How Passage Was Finally Obtained

Not long before his assassination in 1963, President Kennedy proposed stronger legislation and said the nation “will not be fully free until all its citizens are free.” But when an assassin’s bullet struck down President Kennedy on a Dallas street, LBJ became the new president of the United States. The man with a knack for finding leverage and power where others saw none suddenly sat center stage, with every conceivable lever available to him. And he wasted no time deploying those levers. Uniting with the opposition party’s Senate leader, Everett Dirksen, a Republican from Illinois – the key man, who delivered the support of eighty-two percent (82%) of his party’s Senators – President Johnson employed every tool available to the chief magistrate to procure passage of the stronger Civil Rights Act he had once promised Senator Russell the 1957 Act would forestall.

The bill he now backed, like the Civil Rights Act of 1875, would outlaw discrimination based on race, color, religion, or national origin in public accommodations through Title II. It would do more. Title I would forbid the unequal application of voter registration laws to different races. Title III would bar state and local governments from denying access to public facilities on the basis of race, color, religion, or national origin. Title IV would authorize the Department of Justice to bring suits to compel the racial integration of schools. Title VI would bar discrimination on the basis of race, color, or national origin by federally funded programs and activities. And Title VII would bar employers from discriminating in hiring or firing on the basis of race, color, religion, sex, or national origin.

This was the bill approved by the House of Representatives after the President engineered a discharge petition to force the bill out of committee. This was the bill filibustered by 18 Senators for a record 60 straight days. This was the bill whose filibuster was finally broken on June 10, 1964, the first filibuster of any kind defeated since 1927. After that lengthy Democrat filibuster, the Senate passed the Civil Rights bill 73-27 on June 19, 1964. The House promptly re-passed it as amended by the Senate.

On July 2, 1964, President Johnson signed the Civil Rights Act into law on national television. Finally, on the same day that John Adams had predicted 188 years earlier would be forever commemorated as a “Day of Deliverance” with “Pomp and Parade, with Shews, Games, Sports, Guns, Bells, Bonfires and Illuminations from one End of this Continent to the other[,]” the federal government had restored the law abandoned with Reconstruction in 1876. Once more, the United States government would stand for the equality before the law for all its Citizens.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.


[1] Robert Caro has, so far, written four (4) books over as many decades telling this story over thousands of pages.  The author recommends them, even as he provides this TLDR summation.  Caro’s books on the subject are: The Years of Lyndon Johnson: The Path to Power, The Years of Lyndon Johnson: Means of Ascent, The Years of Lyndon Johnson: Master of the Senate, and The Years of Lyndon Johnson: The Passage of Power.

[2] Black Southerners, almost totally barred from voting, could not realistically hope for election to any office over this period.  It is worth noting, however, that the Great Migration saw a substantial portion of America’s Black population move North, where this was not the case and where such migrants (and their children) could and did win elective office.

[3] The exception proving the rule is President Woodrow Wilson.  Wilson, the son of a Confederate veteran, was born in Virginia and raised mostly in South Carolina.  Yet he ran for the Presidency as the Governor of New Jersey, a position he acquired as a result of his career at Princeton University (and the progressive movement’s adoration of the “expertise” that an Ivy League President seemed to promise).  Even then, Wilson could only win the Presidency (which empowered him to segregate the federal workforce) when his two predecessors ran against each other and split their shared, super-majority support.

Guest Essayist: Robert L. Woodson, Sr.

When President Lyndon B. Johnson announced the launch of a nationwide War on Poverty in 1964, momentary hope arose that it would uplift the lives of thousands of impoverished Americans and their inner-city neighborhoods. But the touted antipoverty campaign of the 60s is a classic example of injury with the helping hand.

Regardless of intention—or mantras—the ultimate measure of any effort to reduce poverty is the impact it has on its purported beneficiaries. After more than 60 years and the investment of $25 trillion of tax-payers’ money, poverty numbers have virtually remained the same, while conditions in low-income neighborhoods have spiraled downward.

While impoverished Americans may not be rising up, what has become a virtual “poverty industry” and the bureaucracy of the welfare system have prospered, expanding to 89 separate programs spread across 14 government departments and agencies. In sum, 70% of anti-poverty funding has not reached the poor but has been absorbed by those who serve the poor. As a consequence, the system has made a commodity of the poor, with perverse incentives to maintain people in poverty as dependents. The operative question became not which problems are solvable, but which ones are fundable.

I had first-hand experience of power and money grabs that followed the launch of Johnson’s antipoverty agenda. As a young civil rights leader at the time of its introduction, I was very hopeful that, at long-last, policies would be adopted that would direct resources to empower the poor to rise. I was working for the summer in Pasadena, California, leading a work project with the American Friends Service Committee in the year after the Watts riots and the government’s response with the War on Poverty.

Initially, the anti-poverty money funded grassroots leaders in high-crime, low-income neighborhoods who had earned the trust and confidence of local people and had their best interests at heart. But many of the local grassroots leaders who were paid by the program began to raise questions about the functions of the local government and how it was assisting the poor. These challenges from the residents became very troublesome to local officials and they responded by appealing to Washington to change the rules to limit the control that those grassroots leaders could exercise over programs to aid their peers.

One of the ways the Washington bureaucracy responded was to institute a requirement that all outreach workers had to be college-educated as a condition of their employment. Overnight, committed and trusted workers on the ground found themselves out of a job. In addition, it was ruled that the allocation and distribution of all incoming federal dollars was to be controlled by a local anti-poverty board of directors that represented three groups: 1/3 local officials, 1/3 business leaders and 1/3 local community leaders. I knew from the moment those structural changes occurred that the poverty program was going to be a disaster and that it would serve the interests of those who served the poor with little benefit to its purported beneficiaries.

Since only a third of the participants on the board would be from the community, the other two-thirds were careful to ensure that the neighborhood residents would be ineffective and docile representatives who would ratify the opportunistic and often corrupt decisions they made. In the town where I was engaged in civil rights activities, I witnessed local poverty agencies awarding daycare contracts to business members on the board who would lease space at three times the market-value rate.

Years of such corruption throughout the nation were later followed by many convictions and the incarceration of people who were exploiting the programs and hurting the poor. When they were charged with corruption, many of the perpetrators used the issue of race to defend themselves. The practice of using race as a shield of defense against charges for corrupt activity continues to this day. The disgraced former Detroit Mayor Kwame Kilpatrick received a 28-year sentence for racketeering, bribery, extortion and tax crimes. Last year, more than 40 public and private officials were charged as part of a long-running and expanding federal investigation into public corruption in metro Detroit, including fifteen police officers, five suburban trustees, millionaire moguls and a former state senator. Much of the reporting about corruption in the administration of poverty programs never rose to the level of public outrage or indignation and was treated as a collection of local issues.

Yet the failure of the welfare system and the War on Poverty is rooted in something deeper than the opportunistic misuse of funds. Its most devastating impact is in undermining pillars of strength that have empowered the black community to survive and thrive in spite of oppression: a spirit of enterprise and mutual cooperation, and the sustaining support of family and community.

In the past, even during periods of legalized discrimination and oppression, a spirit of entrepreneurship and agency permeated the black community. Within the first 50 years after the Emancipation Proclamation, black Americans had accumulated a personal wealth of $700 million. They owned more than 40,000 businesses and more than 930,000 farms. Black commercial enclaves in Durham, North Carolina, and the Greenwood Avenue section of Tulsa, Oklahoma, were known as the Negro Wall Street. When blacks were barred from white establishments and services, they created their own thriving alternative transit systems. When whites refused to lend money to blacks, they established more than 103 banks and savings and loan associations and more than 1,000 inns and hotels. When whites refused to treat blacks in hospitals, they established 230 hospitals and medical schools throughout the country.

In contrast, within the bureaucracy of the burgeoning poverty industry, low-income people were defined as the helpless victims of an unfair and unjust society. The strategy of the liberal social engineers was to right this wrong through the redistribution of wealth, facilitated by the social services bureaucracy in the form of cash payments or equivalent benefits. The cause of a person’s poverty was assumed to be beyond their power and ability to control; therefore, resources were given with no strings attached, and there was no assumption of the possibility of upward mobility toward self-sufficiency. The empowering notions of personal responsibility and agency were decried as “blaming the victim” and, with the spread of that mentality and the acceptance of a state of dependency, the rich heritage of entrepreneurship in the black community fell by the wayside.

Until the mid-60s, two parents were raising their children in 85% of all black families. Since the advent of the Welfare State, more than 75% of black children have been born to single mothers. The system included penalties for marriage and work, through which benefits would be decreased or terminated. As income was detached from work, the role of fathers in the family was undermined and dismissed. The dissolution of the black family was considered necessary collateral damage in a war being waged in academia against capitalism in America, led by Columbia University professors Richard Cloward and Frances Fox Piven, who promoted a massive rise in dependency with the goal of overloading the U.S. public welfare system and eliciting “radical change.”

Reams of research have found that youths in two-parent families are less likely to become involved in delinquent behavior and drug abuse or suffer depression, and more likely to succeed in school and pursue higher education. As generations of children grew up on the streets of the inner city, drug addiction and school drop-out rates soared. When youths turned to gangs for identity, protection, and a sense of belonging, entire neighborhoods became virtual killing fields of warring factions. Statistics from Chicago alone bring home the tragic toll that has been taken. Over one Father’s Day weekend, 104 people were shot across the city, 15 of them, including five children, fatally. Within a three-day period of the preceding week, a three-year-old child was shot and killed in the South Austin community, the third child under the age of 10 to be shot.

In the midst of this tragic scenario, the true casualties of the War on Poverty have been its purported beneficiaries.

Robert L. Woodson, Sr. founded the Woodson Center in 1981 to help residents of low-income neighborhoods address the problems of their communities. A former civil rights activist, he has headed the National Urban League Department of Criminal Justice, and has been a resident fellow at the American Enterprise Institute for Public Policy Research. Referred to by many as the “godfather” of the neighborhood empowerment movement, for more than four decades Woodson has had a special concern for the problems of youth. In response to an epidemic of youth violence that has afflicted urban, rural and suburban neighborhoods alike, Woodson has focused much of the Woodson Center’s activities on an initiative to establish Violence-Free Zones in troubled schools and neighborhoods throughout the nation. He is an early MacArthur “genius” awardee and the recipient of the 2008 Bradley Prize, the Presidential Citizens Award, and a 2008 Social Entrepreneurship Award from the Manhattan Institute.


Guest Essayist: Andrew Langer

We are going to assemble the best thought and broadest knowledge from all over the world to find these answers. I intend to establish working groups to prepare a series of conferences and meetings—on the cities, on natural beauty, on the quality of education, and on other emerging challenges. From these studies, we will begin to set our course toward the Great Society. – President Lyndon Baines Johnson, Ann Arbor, MI, May 22, 1964

In America in 1964, the seeds of the later discontent of the 1960s were being planted. The nation had just suffered the horrific assassination of an enormously charismatic president, John F. Kennedy; we were in the midst of an intense national conversation on race and civil rights; and we were just starting to get mired in a military conflict in Southeast Asia.

We were also heading into a presidential election, and while tackling poverty in America wasn’t a centerpiece of the campaign, President Johnson started giving a series of speeches about transforming the United States into a “Great Society”—a concept that would become the most massive series of social welfare reforms since Franklin Roosevelt’s post-Depression “New Deal” of the 1930s.

At that time, there was serious debate over whether the federal government even had the power to engage in what had traditionally been state-level social support work—or, before that, private charitable work. The debate centered on the Constitution’s “general welfare” clause, the actionable part of the Constitution building on the Preamble’s “promote the general welfare” language. Article I, Section 8, Clause 1 provides that, “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;” (emphasis added)

Proponents of an increased federal role in social service spending have argued that “welfare” for this purpose means just what politicians today proffer that it does: that “welfare” means social service spending, and that because the Constitution grants Congress this power, such power is expansive (if not unlimited).

But this flies in the face of the whole concept of the Constitution itself—the idea of a federal government of limited, carefully-enumerated powers. The founders were skeptical of powerful, centralized government (and had fought a revolution over that very point), and the debate over just how powerful and how centralized that government should be was at the core of the Constitutional Convention’s deliberations.

Constitutional author (and later president) James Madison said this in Federalist 41:

It has been urged and echoed, that the power “to lay and collect taxes, duties, imposts, and excises, to pay the debts, and provide for the common defense and general welfare of the United States,’’ amounts to an unlimited commission to exercise every power which may be alleged to be necessary for the common defense or general welfare. No stronger proof could be given of the distress under which these writers labor for objections, than their stooping to such a misconstruction. Had no other enumeration or definition of the powers of the Congress been found in the Constitution, than the general expressions just cited, the authors of the objection might have had some color for it; though it would have been difficult to find a reason for so awkward a form of describing an authority to legislate in all possible cases.

In 1831, he also said, more plainly:

With respect to the words “general welfare,” I have always regarded them as qualified by the detail of powers connected with them. To take them in a literal and unlimited sense would be a metamorphosis of the Constitution into a character which there is a host of proofs was not contemplated by its creators.

This was, essentially, the interpretation of the clause that stood for nearly 150 years—only to be largely gutted in the wake of FDR’s New Deal programs. As discussed in the essay on FDR’s first 100 days, there was great back and forth within the Supreme Court over the constitutionality of the New Deal—with certain members of the court eventually appearing to succumb to the pressure of a proposed plan to “pack” the Supreme Court with newer, younger members.

A series of cases, starting with United States v. Butler (1936) and then Helvering v. Davis (1937), essentially ruled that Congress’ power to spend was non-reviewable by the Supreme Court… that there could be no constitutional challenge to spending plans, that if Congress said a spending plan was to “promote the general welfare” then that’s what it was.

Madison was right to be fearful—when combined with an expansive interpretation of the Commerce Clause, this reading gives the federal government near-unlimited power. Either something is subject to federal regulation because it’s an “item in or related to commerce,” or it’s subject to federal spending because it “promotes the general welfare.”

Building on this, LBJ moved forward with the Great Society in 1964, creating a series of massive spending and federal regulatory programs whose goal was to eliminate poverty and create greater equity in social service programs.

Problematically, LBJ created a series of “task forces” to craft these policies—admittedly because he didn’t want public input or scrutiny that would lead to criticism of the work his administration was doing.

Normally, when the executive branch engages in policymaking, those policies are governed by a series of rules aimed at ensuring public participation—both so that the public can offer its ideas about possible solutions and to ensure that the government isn’t abusing its powers.

Here, the Johnson administration did no such thing—creating, essentially, a perfect storm of problematic policymaking: a massive upheaval of government policy, coupled with massive spending proposals, coupled with little public scrutiny.

Had they allowed for greater public input, someone might have pointed out what the founders knew: that there was a reason such social support had traditionally been either the purview of local governance or private charity, and that such programs are much more effective when they are locally driven and/or community-based. Local services work because the people who provide them better understand the challenges their communities face.

And private charities provide more effective services because they not only have a vested interest in the outcomes, but that vested interest is driven by building relationships centered on faith and hope. If government programs are impersonal, programs whose management is far removed from the local communities they serve are far worse.

The end result is two-fold:  faceless entitlement bureaucracies whose only incentive is self-perpetuation (not solving problems), and people who have little incentive to move themselves off of these programs.

Thus, Johnson’s Great Society was a massive failure. Not only did it not end poverty, it created a devastating perpetual cycle of it. It left behind enormous bureaucratic programs which still exist today—and which, despite pressures at various points in time (the work of President Bill Clinton and the GOP-led Congress after the 1994 election to reform the nation’s welfare programs, as one example), seem largely resistant to change or improvement.

The founders knew that local and private charity did a better job at promoting “the general welfare” of a community than a federal program would. They knew the dangers of expansive government spending (and the power that would accrue with it). Once again, as Justice Sandra Day O’Connor said in New York v. United States (1992), the “Constitution protects us from our own best intentions.”

Andrew Langer is President of the Institute for Liberty. He teaches in the Public Policy Program at the College of William & Mary.


Guest Essayist: Joshua Schmid

The Cold War was a time of immense tension between the world’s superpowers, the Soviet Union and the United States. However, the two never came into direct conflict during the first decade and a half of the struggle, choosing instead to pursue proxy wars in order to dominate the geopolitical landscape. The Cuban Missile Crisis of October 1962 threatened to reverse this course by turning the war “hot” as the leader of the free world and the leader of the world communist revolution squared off in a deadly game of nuclear cat and mouse.

At the beginning of the 1960s, some members of the Soviet Union’s leadership desired more aggressive policies against the United States. The small island of Cuba, located a mere 100 miles off the coast of Florida, provided Russia with an opportunity. Cuba had recently undergone a communist revolution, and its leadership was happy to accept Soviet intervention if it would minimize American harassment like the failed Bay of Pigs invasion in 1961. Soviet Premier Nikita Khrushchev offered to place nuclear missiles on Cuba, which would put nearly any target in the continental U.S. within striking range. The Cubans accepted, and work on the missile sites began during the summer of 1962.

Despite an elaborate scheme to disguise the missiles and the launch sites, American intelligence discovered the Soviet scheme by mid-October. President John F. Kennedy immediately convened a team of security advisors, who suggested a variety of options. These included ignoring the missiles, using diplomacy to pressure the Soviets to remove the missiles, invading Cuba, blockading the island, and strategic airstrikes on the missile sites. Kennedy’s military advisors strongly suggested a full-scale invasion of Cuba as the only way to defeat the threat. However, the president ultimately overrode them and decided any attack would only provoke greater conflict with the Russians. On October 22, Kennedy gave a speech to the American people in which he called for a “quarantine” of the island under which “all ships of any kind bound for Cuba, from whatever nation or port, will, if found to contain cargoes of offensive weapons, be turned back.”

The Russians appeared unfazed by the bravado of Kennedy’s speech, and announced they would interpret any attempts to quarantine the island of Cuba as an aggressive act. However, as the U.S. continued to stand by its policy, the Soviet Union slowly backed down. When Russian ships neared Cuba, they broke course and moved away from the island rather than challenging the quarantine. Despite this small victory, the U.S. still needed to worry about the missiles already installed.

In the ensuing days, the U.S. continued to insist on the removal of the missiles from Cuba. As the haggling between the two nations continued, the nuclear launch sites became fully operational. Kennedy began a more aggressive policy that included a threat to invade Cuba. Amidst these tensions, the most harrowing event of the entire Cuban Missile Crisis occurred. The Soviet submarine B-59 neared the blockade line and was harassed by American warships dropping depth charges. The submarine had lost radio contact with the rest of the Russian navy and could not surface to refill its oxygen. The captain of B-59 decided that war must have broken out between the U.S. and Soviet Union and proposed that the submarine launch its nuclear torpedo. This action required a unanimous vote by the top three officers onboard. Fortunately, the executive officer cast the lone veto vote against what surely would have been an apocalyptic action.

Eventually, Khrushchev and Kennedy reached an agreement that brought an end to the crisis. The Russians removed the missiles from Cuba and the U.S. promised not to invade the island. Additionally, Kennedy removed missiles stationed near the Soviet border in Turkey and Italy as a show of good faith. A brief cooling period between the two superpowers would ensue, during which time a direct communication line between the White House and the Kremlin was established. And while the Cold War would continue for three more decades, never again would the two blocs be so close to nuclear annihilation as they were in October 1962.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.


Guest Essayist: Tony Williams

The Cold War between the United States and Soviet Union was a geopolitical struggle around the globe characterized by an ideological contest between capitalism and communism, and a nuclear arms race. An important part of the Cold War was the space race which became a competition between the two superpowers.

Each side sought to be the first to achieve milestones in the space race and used the achievements for propaganda value in the Cold War. The Soviet launch of the satellite, Sputnik, while a relatively modest accomplishment, became a symbolically important event that triggered and defined the dawn of the space race. The space race was one of the peaceful competitions of the Cold War and pushed the boundaries of the human imagination.

The Cold War nuclear arms race helped lead to the development of rocket technology that made putting humans into space a practical reality in a short time. Only 12 years after the Russians launched a satellite into orbit around the Earth, Americans sent astronauts to walk on the moon.

The origins of Sputnik and spaceflight lie in the decades before World War II, with the pioneering flights of liquid-fueled rockets in the United States and Europe. American Robert Goddard launched one from a Massachusetts farm in 1926 and continued to develop the technology on a testing range in New Mexico in the 1930s. Meanwhile, Goddard’s research influenced the work of German rocketeer Hermann Oberth, who fired the first liquid-fueled rocket in Europe in 1930 and dreamed of spaceflight. In Russia, Konstantin Tsiolkovsky developed the idea of rocket technology, and his ideas influenced Sergei Korolev in the 1930s.

The greatest advance in rocket technology took place in Nazi Germany, where Wernher von Braun led efforts to build the V-2 and other rockets that could hit England and terrorize civilian populations when launched from continental Europe. Hitler’s superweapons never delivered the decisive victory he hoped for, but the rockets had continuing military and civilian applications.

At the end of the war, Soviet and Western Allied forces raced into Germany as the Nazi regime collapsed in the spring of 1945. Preferring to surrender to the Americans because of the Red Army’s well-deserved reputation for brutality, von Braun and his team famously surrendered to Private Fred Schneikert and his platoon. They turned over 100 unfinished V-2 rockets and 14 tons of spare parts and blueprints to the Americans, who whisked the scientists, rocketry, and plans away just days before the Soviet occupation of the area.

In Operation Paperclip, the Americans secretly brought thousands of German scientists and engineers to the United States, including more than 100 rocket scientists from von Braun’s team. The operation was controversial because of the scientists’ Nazi Party affiliations, but few were rabid devotees of Nazi ideology, and their records were cleared. The Americans did not want them contributing to Soviet military production and brought them instead to Texas and then to Huntsville, Alabama, to develop American rocket technology as part of the nuclear arms race, building immense rockets to carry nuclear warheads. Within a decade, both sides had intercontinental ballistic missiles (ICBMs) in their arsenals.

During the next decade, the United States developed various missile systems producing rockets of incredible size, thrust, and speed that could travel large distances. Interservice rivalry meant that the U.S. Army, Navy, and Air Force developed and built their own competing rocket systems, including the Redstone, Vanguard, Jupiter-C, Polaris, and Atlas rockets. Meanwhile, the Soviets were secretly building their own R-7 missile, designed as a cluster of engines rather than a staged rocket.

On October 4, 1957, the Russians shocked Americans by successfully launching a satellite into orbit. Sputnik was a metal sphere weighing 184 pounds that emitted a beeping sound to Earth, one embarrassingly picked up by U.S. global tracking stations. The effort was not only part of the Cold War, but also of the International Geophysical Year, in which scientists from around the world formed a consortium to share information on highly active solar flares and a host of other scientific knowledge. However, both the Soviets and Americans were highly reluctant to share any knowledge that might relate to military technology.

While American intelligence had predicted the launch, Sputnik created a wave of panic and near hysteria. Although President Dwight Eisenhower was publicly unconcerned because the United States was preparing its own satellite, the American press, the public, and Congress were outraged, fearing the Russians were spying on them or could rain down nuclear weapons from space. Moreover, it seemed as if the Americans were falling behind the Soviets. Henry Jackson, a Democratic senator from the state of Washington, called Sputnik “a devastating blow to the United States’ scientific, industrial, and technical prestige in the world.” Sputnik initiated the space race between the United States and Soviet Union as part of the Cold War superpower rivalry.

A month later, the Soviets sent a dog named Laika into space aboard Sputnik II. Although the dog died because it only had life support systems for a handful of days, the second successful orbiting satellite—this one carrying a living creature—further humiliated Americans even if they humorously dubbed it “Muttnik.”

The public relations nightmare was further exacerbated by the explosion on December 6 of a Vanguard rocket carrying a Navy satellite at the Florida Missile Test Range at Cape Canaveral. The event was aired on television and watched by millions. The launch was supposed to restore pride in American technology, but it was an embarrassing failure. The press had a field day and labeled it “Kaputnik” and “Flopnik.”

On January 31, 1958, Americans finally had reason to cheer when a Jupiter-C rocket lifted off and placed into orbit a thirty-one-pound satellite named Explorer. The space race was now on, with each side competing to be the first to accomplish each new goal. The space race also had significant impacts upon American society.

In 1958, Congress passed the National Defense Education Act to spend more money to promote science, math, and engineering education at all levels. To signal its peaceful intentions, Congress also created the National Aeronautics and Space Administration (NASA) as a civilian organization to lead the American efforts in space exploration, whereas the Russian program operated as part of the military.

In December 1958, NASA announced Project Mercury, with the purpose of putting an astronaut in space; it would be followed by Projects Gemini and Apollo, culminating in Neil Armstrong and Buzz Aldrin walking on the moon. The space race was an important part of the Cold War, but it was also about the spirit of human discovery and pushing the frontiers of knowledge and space.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence. 


Guest Essayist: Gary Porter

While speaking on June 14, 1954, Flag Day, President Dwight D. Eisenhower talked about the importance of reaffirming religious faith in America’s heritage and future, saying that doing so would “constantly strengthen those spiritual weapons which forever will be our country’s most powerful resource, in peace or in war.” In 1864, during the Civil War, the phrase “In God We Trust” first appeared on U.S. coins. On July 30, 1956, “In God We Trust” became the nation’s motto when President Eisenhower signed into law a bill declaring it so and providing for the motto, printed in capital letters, to appear on every denomination of United States paper currency.

“The Hand of providence has been so conspicuous in all this, that he must be worse than an infidel that lacks faith, and more than wicked, that has not gratitude enough to acknowledge his obligations.” George Washington, 1778.[i]

“It becomes a people publicly to acknowledge the over-ruling hand of Divine Providence and their dependence upon the Supreme Being as their Creator and Merciful Preserver . . .” Samuel Huntington, 1791.[ii]

“We are a religious people whose institutions presuppose a Supreme Being.” Associate Justice William O. Douglas, 1952.[iii]

One of the most enduring battles in American politics has been over the question of whether America is or ever was a Christian Nation. For Supreme Court Associate Justice David Brewer the answer was simple: yes. The United States was formed as and, in Brewer’s 1892 opinion at least, still was, a Christian Nation. The Justice said as much in Church of the Holy Trinity vs. United States. But his simple answer did not go unsupported.

“[I]n what sense can [the United States] be called a Christian nation? Not in the sense that Christianity is the established religion or the people are compelled in any manner to support it…Neither is it Christian in the sense that all its citizens are either in fact or in name Christians. On the contrary, all religions have free scope within its borders. Numbers of our people profess other religions, and many reject all…Nevertheless, we constantly speak of this republic as a Christian Nation – in fact, as the leading Christian Nation of the world. This popular use of the term certainly has significance. It is not a mere creation of the imagination. It is not a term of derision but has substantial basis – one which justifies its use. Let us analyze a little and see what is the basis.”[iv]

Brewer went on, of course, to do just that.

Regrettably, it lies beyond the scope of this short essay to repeat Brewer’s arguments. In 1905, Brewer re-assembled them into a book: The United States a Christian Nation. It was republished in 2010 by American Vision and is worth the read.[v]  For the purposes of this essay I will stipulate, with Brewer, that America is a Christian nation. If that be the case, it should come as no surprise that such a nation would take the advice of Samuel Huntington and openly acknowledge its trust in God on multiple occasions and in a variety of ways: on its coinage, for instance. How we came to do that as a nation is an interesting story stretching over much of our history.

Trusting God was a familiar concept to America’s settlers – they spoke and wrote of it often. Their Bibles, at least one in every home, contained many verses encouraging believers to place their trust in God,[vi] and early Americans knew their Bible.[vii] Upon surviving the perilous voyage across the ocean, their consistent first act was to thank the God of the Bible for their safety.

Benjamin Franklin’s volunteer Pennsylvania militia of 1747-1748 reportedly had regimental banners displaying “In God We Trust.”[viii] In 1776, our Declaration of Independence confirmed the signers had placed “a firm reliance on the protection of divine Providence.”[ix] In 1814, Francis Scott Key penned his famous poem which eventually became our national anthem. The fourth stanza contains the words: “Then conquer we must, when our cause is just, and this be our motto: ‘In God is our trust.’”

In 1848, construction began on the first phase of the Washington Monument (it was not completed until 1884). “In God We Trust” sits among Bible verses chiseled on the inside walls and “Praise God” (“Laus Deo” in Latin) can be found on its cap plate. But it would be another thirteen years before someone suggested putting a “recognition of the Almighty God” on U.S. coins.

That someone, Pennsylvania minister M. R. Watkinson, wrote to Salmon P. Chase, Abraham Lincoln’s Secretary of the Treasury, and suggested that such a recognition of the Almighty God would “place us openly under the Divine protection we have personally claimed.” Watkinson suggested the words “PERPETUAL UNION” and “GOD, LIBERTY, LAW.” Chase liked the basic idea but not Watkinson’s suggestions. He instructed James Pollock, Director of the Mint at Philadelphia, to come up with a motto for the coins: “The trust of our people in God should be declared on our national coins. You will cause a device to be prepared without unnecessary delay with a motto expressing in the fewest and tersest words possible this national recognition” (emphasis mine).

Secretary Chase “wordsmithed” Director Pollock’s suggestions a bit and came up with his “tersest” words: “IN GOD WE TRUST,” which was ordered to be so engraved by an Act of Congress on April 22, 1864. First to bear the words was the 1864 two-cent coin.

The following year, another Act of Congress allowed the Mint Director to place the motto on all gold and silver coins that “shall admit the inscription thereon.” The motto was promptly placed on the gold double-eagle coin, the gold eagle coin, and the gold half-eagle coin. It was also minted on silver coins, and on the nickel three-cent coin beginning in 1866.

One might guess that the phrase has appeared on all U.S. coins since 1866 – one would be wrong.

The U.S. Treasury website explains (without further details) that “the motto disappeared from the five-cent coin in 1883, and did not reappear until production of the Jefferson nickel began in 1938.” The motto was also “found missing from the new design of the double-eagle gold coin and the eagle gold coin shortly after they appeared in 1907. In response to a general demand, Congress ordered it restored, and the Act of May 18, 1908, made it mandatory on all coins upon which it had previously appeared” [x] (emphasis added). I’m guessing someone got fired over that disappearance act. Since 1938, all United States coins have borne the phrase. No others have had it “go missing.”

The year 1956 was a watershed. As you read in the introduction to this essay, that year President Dwight D. Eisenhower signed a law (P.L. 84-140) which declared “In God We Trust” to be the national motto of the United States. The bill had passed the House and the Senate unanimously and without debate. The following year the motto began appearing on U.S. paper currency, beginning with the one-dollar silver certificate. The Treasury gradually included it as part of the back design of all classes and denominations of currency.

Our story could end there – but it doesn’t.

There is no doubt Founding Era Americans would have welcomed the phrase on their currency had someone suggested it, but it turns out some Americans today have a problem with it – a big problem.

America’s atheists continue to periodically challenge the constitutionality of the phrase appearing on government coins. The first challenge occurred in 1970; Aronow v. United States would not be the last. Additional challenges were mounted in 1978 (O’Hair v. Blumenthal) and 1979 (Madalyn Murray O’Hair v. W. Michael Blumenthal). Each of these cases was decided at the circuit court level against the plaintiff, with the court affirming that the “primary purpose of the slogan was secular.”

Each value judgment under the Religion Clauses must therefore turn on whether particular acts in question are intended to establish or interfere with religious beliefs and practices or have the effect of doing so. [xi]

Having the national motto on currency neither established nor interfered with “religious beliefs and practices.”

In 2011, in case some needed a reminder, the House of Representatives passed a new resolution reaffirming “In God We Trust” as the official motto of the United States by a 396–9 vote (recall that the 1956 vote had been unanimous; here in the 21st century it was not).

Undaunted by the courts’ previous opinions on the matter, atheist activist Michael Newdow brought a new challenge in 2019 and lost in the Eighth Circuit. The Supreme Court (on April 23, 2020) declined to hear the appeal. By my count, Newdow is now 0-5. His 2004 challenge[xii] that the words “under God” in the Pledge of Allegiance violated the First Amendment was a bust, as was his 2009 attempt to block Chief Justice John Roberts from including the phrase “So help me God” when administering the presidential oath of office to Barack Obama. He tried to stop the phrase from being recited in the 2013 and 2017 inaugurations as well – each time unsuccessfully.

In spite of atheist challenges, or perhaps because of them, our national motto is enjoying a bit of a resurgence of late, at least in the more conservative areas of the country:

In 2014, the Mississippi legislature voted to add the words “In God We Trust” to its state seal.

In 2015, Jefferson County, Illinois decided to put the national motto on their police squad cars. Many other localities followed suit, including York County, Virginia, and Bakersfield, California, in 2019.

In March 2017, Arkansas required its public schools to display posters which included the national motto. Similar laws were passed in Florida (2018), Tennessee (2018), South Dakota (2019) and Louisiana (2019).

On March 3, 2020, the Oklahoma House of Representatives passed a bill that would require all public buildings in the state to display the motto. Kansas, Indiana, and Oklahoma are considering similar bills.

But here is the question which lies at the heart of this issue: Does America indeed trust in God?

I think it is clear that America’s Founders, by and large, did – at least they said and acted as though they did. But when you look around the United States today, outside of some limited activity on Sunday mornings and on the National Day of Prayer, does America actually trust in God? There is ample evidence we trust in everything and anything but God.

Certainly we seem to trust in science, or what passes for science today.  We put a lot of trust in public education, it would seem, even though the results are quite unimpressive and the curriculum actually works to undermine trust in God. Finally, we put a lot of trust in our elected officials even though they betray that trust with alarming regularity.[xiii]

Perhaps citizens of the United States need to see our motto on our currency and on school and courtroom walls simply to remind us of what we should be doing, and doing more often.

“America trusts in God,” we declare. Do we mean it?

“And those who know your name put their trust in you, for you, O Lord, have not forsaken those who seek you.” Psalm 9:10 ESV

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).

[i] Letter to Thomas Nelson, August 20, 1778.

[ii] Samuel Huntington was a signer of the Declaration of Independence; President of Congress; Judge; and Governor of Connecticut. Quoted from A Proclamation for a Day of Fasting, Prayer and Humiliation, March 9, 1791.

[iii] Zorach v. Clauson, 343 U.S. 306 (1952).

[iv] Church of the Holy Trinity v. United States, 143 U.S. 457 (1892).

[v] https://store.americanvision.org/collections/books/products/the-united-states-a-christian-nation

[vi] Examples include: Psalm 56:3, Isaiah 26:4, Psalm 20:7, Proverbs 3:5-6 and Jeremiah 17:7.

[vii] “Their many quotations from and allusions to both familiar and obscure scriptural passages confirms that [America’s Founders] knew the Bible from cover to cover.” Daniel L. Dreisbach, Reading the Bible with the Founding Fathers (Oxford University Press, 2017), p. 1.

[viii] See https://historynewsnetwork.org/article/161178

[ix] Thomas Jefferson, Declaration of Independence, July 1776.

[x] https://www.treasury.gov/about/education/Pages/in-god-we-trust.aspx

[xi] https://openjurist.org/432/f2d/242/aronow-v-united-states

[xii] Newdow v. United States, 328 F.3d 466 (9th Cir. 2004)

[xiii] https://en.wikipedia.org/wiki/List_of_American_federal_politicians_convicted_of_crimes

Guest Essayist: Tony Williams

In 1919, Dwight Eisenhower was part of a U.S. Army caravan of motor vehicles traveling across the country as a publicity stunt. The convoy encountered woeful and inadequate roads in terrible condition. The journey took two months by the time it was completed.

When Eisenhower was in Germany after the end of World War II, he was deeply impressed by the Autobahn because of its civilian and military applications. The experiences were formative in shaping Eisenhower’s thinking about developing a national highway system in the United States. He later said, “We must build new roads,” and asked Congress for “forward looking action.”

As president, Eisenhower generally held to the postwar belief called “Modern Republicanism.” This meant that while he did not support a massive increase in spending on the federal New Deal welfare state, he would also not roll it back. He was a fiscal conservative who supported decreased federal spending and balanced budgets, but he advocated a national highway system as a massive public infrastructure project to facilitate private markets and economic growth.

The postwar consumer culture was dominated by the automobile. Americans loved their large cars replete with large tail fins and abundant amounts of chrome. By 1960, 80 percent of American families owned a car. American cars symbolized their geographical mobility, consumer desires, and global industrial predominance. They needed a modern highway system to get around the sprawling country. By 1954, President Eisenhower was ready to pitch the idea of a national highway system to Congress and the states. He called it “the biggest peacetime construction project of any description ever undertaken by the United States or any other country.”

In July, Eisenhower dispatched his vice-president, Richard Nixon, to the Governors’ Conference to win support. The principle of federalism was raised, with many states in opposition to federal control and taxes.

That same month, the president asked his friend, General Lucius Clay, who was an engineer by training and had supervised the occupation of postwar Germany, to manage the planning of the project and present it to Congress. He organized the President’s Advisory Committee on a National Highway Program.

The panel held hearings and spoke to a variety of experts and interests including engineers, financiers, construction and trucking companies, and labor unions. Based upon the information it amassed, the panel put together a plan by January 1955.

The plan proposed 41,000 miles of highway construction at an estimated cost of $101 billion over ten years. It recommended the creation of a federal highway corporation that would use 30-year bonds to finance construction. There would be a gas tax but no tolls or federal taxes. A bill was written based upon the terms of the plan.

The administration sent the bill to Congress the following month, but a variety of interests expressed opposition to the bill. Southern members of Congress, for example, were particularly concerned about federal control because it might set a precedent for challenging segregation. Eisenhower and his allies pushed hard for the bill and used the Cold War to sell the bill as a means of facilitating evacuation from cities in case of a nuclear attack. The bill passed the Senate but then stalled in the House where it died during the congressional session.

The administration reworked the bill and sent it to Congress again. The revised proposal created a Highway Trust Fund that would be funded and replenished with taxes primarily on gasoline, diesel oil, and tires. No federal appropriations would be used for interstate highways.

The bill passed both houses of Congress in May and June 1956, and the president triumphantly signed the bill into the law creating the National System of Interstate and Defense Highways on June 29.

The interstate highway system transformed the landscape of the United States in the postwar period. It linked the national economy, markets, and large cities together. It contributed to the growth of suburban America as commuters could now drive their cars to work in cities or consumers could drive to shopping malls. Tourists could travel expeditiously to vacations at distant beaches, national parks, and amusement parks like Disneyland. Cheap gas, despite the taxes to fund the highways, was critical to travel along the interstates.

The interstate highway system later became entwined in national debates over energy policy in the 1970s when OPEC embargoed oil to the United States. Critics said gas-guzzling cars should be replaced by more efficient cars or public transportation, that American love of cars contributed significantly to the degradation of the environment, and that America had reached an age of limits.

The creation of the interstate highway system was a marvel of American postwar prosperity and contributed to its unrivaled affluence. It also symbolized some of the challenges Americans faced. Both the success of  completing the grand public project and the ability to confront and solve new challenges represented the American spirit.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Dan Morenoff

You can count on one hand the number of Supreme Court decisions that normal people can identify by name and subject. Brown is one of them (and, arguably, the most widely and accurately known). Ask any lawyer what the most important judicial decision in American history is, and they will almost certainly tell you, with no hesitation, Brown v. Board of Education. It’s the case that, for decades, Senators have asked every judicial nominee to explain why it is right.

Its place in the public mind is well-deserved, even if it should be adjusted to reflect more accurately its place in modern American history.

Backstory: From Reconstruction’s Promise to Enshrinement of Jim Crow in Plessy

Remember the pair of course reversals that followed the Civil War.

Between 1865 and 1876, Congress sought to make good the Union’s promises to the freedmen emancipated during the war. In the face of stiff, violent resistance by those who refused to accept the war’s verdict, America amended the Constitution three (3) times, with: (a) the Thirteenth Amendment banning slavery; (b) the Fourteenth Amendment: (i) affirmatively acting to create and bestow American citizenship on all those born here, (ii) barring states from “abridg[ing] the privileges or immunities of citizens of the United States[,]” and (iii) guaranteeing the equal protection of the laws; and (c) the Fifteenth Amendment barring states from denying American citizens the right to vote “on account of race, color, or previous condition of servitude.” Toward the same end, Congress passed the Civil Rights Acts of 1866 and 1875, the Enforcement Acts of 1870 and 1871, and the Ku Klux Klan Act. They created the Department of Justice to enforce these laws and supported President Grant in his usage of the military to prevent states from reconstituting slavery under another name.

Until 1876. To solve the constitutional crisis of a Presidential election with no clear winner, Congress (and President Hayes) effectively, if silently, agreed to end all that, abruptly. The federal government removed troops from former Confederate states and stopped trying to enforce federal law. And the states “redeemed” by the violent forces of retaliation amended their state constitutions and passed the myriad laws creating the “Jim Crow” regime of American apartheid. Under Jim Crow, races were separated, the public services available to an American came to differ radically depending on that American’s race, and the rights of disfavored races became severely curtailed. Most African Americans were disenfranchised, then disarmed, and then subjected to mob violence to incentivize compliance with the “redeemer” community’s wishes.

One could point to a number of crystallizing moments as the key point when the federal government made official that it and national law would do nothing to stop any of this. But the most commonly cited is the Plessy v. Ferguson decision of the Supreme Court, issued in 1896. It was a case arising out of New Orleans and its business community, multi-hued even then. There, predictably, were companies and entrepreneurs that hated these laws interfering with their businesses and their ability to provide services to willing buyers on the (racially integrated) basis they preferred. A particularly hated law passed by the State of Louisiana compelled railroads (far and away the largest industry of the day) to separate customers into different cars on the basis of race. With admirable truth in advertising, the Citizens Committee to Test the Constitutionality of the Separate Car Law formed and went to work to rid New Orleans of this government micromanagement. Forgotten in the long sweep of history, the Committee (acting through the Pullman Company, one of America’s largest manufacturers at the time) actually won its first case at the Louisiana Supreme Court, which ruled that any state law requiring separate accommodations in interstate travel violated the U.S. Constitution (specifically, Article I’s grant of power to Congress alone to regulate interstate commerce).

The Committee then sought to invalidate application of the same law to train travel within Louisiana as a violation of the Fourteenth Amendment. With coordination between the various actors involved, Homer Plessy (a man with seven “white” and one “black” great-grandparents) purchased and used a seat, which the train company was happy to sell him, in the state-law-required “white” section of a train; the Committee then ensured that a state official knew he was there, was informed of his racial composition, and would willingly arrest Mr. Plessy to create the test case the Committee wanted. It is known to us as Plessy v. Ferguson.[1] This time, though, things didn’t go as planned: the trial court ruled the statute enforceable and the Louisiana Supreme Court upheld its application to Mr. Plessy. The Supreme Court of the United States accepted the case, bringing the national spotlight onto this specific challenge to the constitutionality of the states’ racial-caste-enforcing laws. In 1896, over the noteworthy, highly praised, sole dissent of Justice John Marshall Harlan, the Supreme Court agreed that, due to its language requiring “equal, but separate” accommodations for the races (and without ever really considering whether the accommodations provided actually were “equal”), the separate car statute was consistent with the U.S. Constitution; the Justices added that the Fourteenth Amendment was not intended “to abolish distinctions based upon color, or to enforce social … equality … of the two races.”

For decades, the Plessy ruling was treated as the federal government’s seal of approval for the continuation of Jim Crow.

Killing Jim Crow

Throughout those decades, African Americans (and conscientious whites) continued to object to American law treating races differently as profoundly unjust. And they had ample opportunities to note the intensity of the injustice. A sampling (neither comprehensive, nor fully indicative of the scope) would include: Woodrow Wilson’s segregation of the federal work force, the resurgence of lynchings following the 1915 rebirth of the Ku Klux Klan (itself an outgrowth of the popularity of Birth of a Nation, the intensely racist film that Woodrow Wilson made the first ever screened at the White House), and the spate of anti-black race riots surrounding America’s participation in World War I.

For the flavor of those riots, consider the fate of the African American community living in the Greenwood section of Tulsa, Oklahoma. In the spring of 1921, Greenwood’s professional class had done so well that it became known as “Negro Wall Street” or “Black Wall Street.” On the evening of May 31, 1921, a mob gathered at the Tulsa jail and demanded that an African American man accused of attempting to assault a white woman be handed over to them. When African Americans, including World War I veterans, came to the jail in order to prevent a lynching, shots were fired and a riot began. Over the next 12 hours, at least three hundred African Americans were killed. In addition, 21 churches, 21 restaurants, 30 grocery stores, two movie theaters, a hospital, a bank, a post office, libraries, schools, law offices, a half dozen private airplanes, and a bus system were utterly destroyed. The Tulsa race riot (perhaps better styled a pogrom, given the active participation of the national guard in these events) has been called “the single worst incident of racial violence in American history.”[2]

But that is far from the whole story of these years. What are today described as Historically Black Colleges and Universities graduated generations of students, who went on to live productive lives and better their communities (whether racially defined or not). They saw the rise of the Harlem Renaissance, where African American luminaries like Duke Ellington, Langston Hughes, and Zora Neale Hurston acquired followings across the larger population and, indeed, the world. The Negro Leagues demonstrated through the national pastime that the athletic (and business) skills of African Americans were equal to those of any others;[3] the leagues developed into some of the largest black-owned businesses in the country and developed fan-followings across America. Eventually, these years saw Jackie Robinson, one of the Negro Leagues’ brightest stars, sign a contract with the Brooklyn Dodgers in 1945 and “break the color barrier” in 1947 as the first black Major Leaguer since Cap Anson successfully pushed for their exclusion in the 1880s.[4] He would be: (a) named Major League Baseball’s Rookie of the Year in 1947; (b) voted the National League MVP in 1949; and (c) voted by fans as an All Star six (6) times (spanning each of the years from 1949-1954). Robinson also led the Dodgers to the World Series in four (4) of those six (6) years.

For the main plot of our story, though, the most important reaction to the violence of Tulsa (and elsewhere)[5] was the “newfound sense of determination” that “emerged” to confront it.[6] Setting aside the philosophical debate that raged across the African American community over the broader period on the best way to advance the prospects of those most impacted by these laws,[7] the National Association for the Advancement of Colored People (the “NAACP”) began to plan new strategies to defeat Jim Crow.[8] The initial architect of this challenge was Charles Hamilton Houston, who joined the NAACP and developed and implemented the framework of its legal strategy after graduating from Harvard Law School in 1922, the year following the Tulsa race riot.[9]

Between its founding in 1940, under the leadership of Houston disciple Thurgood Marshall,[10] and 1955, the NAACP Legal Defense and Education Fund brought a series of cases designed to undermine Plessy. Houston had believed from the outset that unequal education was the Achilles’ heel of Jim Crow, and the LDF targeted that weak spot.

The culmination of these cases came with a challenge to the segregated public schools operated by Topeka, Kansas. While schools were racially segregated in many places, the LDF specifically chose to bring its signature case against the Topeka Board of Education precisely because Kansas was not Southern, had no history of slavery, and institutionally praised John Brown;[11] the case highlighted that its issues were national, not regional, in scope.[12]

LDF, through Marshall and Greenberg, convinced the Supreme Court to reverse Plessy and declare Topeka’s school system unconstitutional. On May 17, 1954, Chief Justice Earl Warren handed down the unanimous opinion of the Court. Due to months of wrangling and negotiation of the final opinion, there were no dissents and no concurrences. With a single voice the Supreme Court proclaimed that:

…in the field of public education the doctrine of “separate but equal” has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment.

These sweeping tones are why the decision holds the place it does in our collective imagination. They are why Brown is remembered as the end of legal segregation. They are why Brown is the most revered precedent in American jurisprudence.

One might have thought that they would mean an immediate end to all race-based public educational systems (and, indeed, to all segregation by law in American life). Indeed, as Justice Marshall told his biographer Dennis Hutchison in 1979, he thought just that: “the biggest mistake [I] made was assuming that once Jim Crow was deconstitutionalized, the whole structure would collapse – ‘like pounding a stake in Dracula’s heart[.]’”

But that was not to be. For the Court to get to unanimity, the Justices needed to avoid ruling on the remedy for the violation they could jointly agree to identify. So they asked the parties to return and reargue the question of what to do about it the following year. When they again addressed the Brown case, the Supreme Court reiterated its ruling on the merits from 1954, but as to what to do about it, ordered nothing more than that the states “make a prompt and reasonable start toward full compliance” and get around to “admit[ting children] to public schools on a racially nondiscriminatory basis with all deliberate speed.”

So the true place of Brown in the story of desegregation is best reflected in Justice Marshall’s words (again, to Dennis Hutchison in 1979): “…[i]n the twelve months between Brown I and Brown II, [I] realized that [I] had yet to win anything….  ‘In 1954, I was delirious. What a victory!  I thought I was the smartest lawyer in the entire world. In 1955, I was shattered.  They gave us nothing and then told us to work for it. I thought I was the dumbest Negro in the United States.’”

Of course, Justice Marshall was far from dumb, however he felt in 1955.  But actual integration didn’t come from Brown. That would have to wait for action by Congress, cajoling by a President, and the slow development of the cultural facts-on-the-ground arising from generations of white American children growing up wanting to be like, rooting for, and seeing the equal worth in men like Duke Ellington, Langston Hughes, Jackie Robinson, and Larry Doby.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.

[1] In the terminology of the day, Mr. Ferguson was a “Carpetbagger.”  A native of Massachusetts who had married into a prominent abolitionist family, Mr. Ferguson studied law in Boston before moving to New Orleans in 1865.  He was the same judge who, at the trial court level, had ruled that Louisiana’s separate cars act could not be constitutionally applied to interstate travel.  Since Plessy’s prosecution also was initially conducted in Mr. Ferguson’s courtroom, he became the named defendant, despite his own apparent feelings about the propriety of the law.

[2] All Deliberate Speed: Reflections on the First Half-Century of Brown v. Board of Education, by Charles J. Ogletree, Jr. W.W. Norton & Company (2004).

[3] In 1936, Jesse Owens did the same on an amateur basis at the Berlin Olympics.

[4] Larry Doby became the first black American League player just weeks later (the AL had not existed in the 1880s).

[5] There were parallel riots in Omaha and Chicago in 1919.

[6] See, All Deliberate Speed, in Fn. 2, above.

[7] The author recommends delving into this debate. Worthy samples of contributions to it that the reader might consider include: (a) Booker T. Washington’s 1895 Address to Atlanta’s Cotton States and International Exposition (http://historymatters.gmu.edu/d/39/); and (b) W.E.B. Du Bois’s The Souls of Black Folk.

[8]  See, All Deliberate Speed, in Fn. 2, above.

[9] Houston was the first African American elected to the Harvard Law Review and has been called “the man who killed Jim Crow.”

[10] Later a Justice of the U.S. Supreme Court himself, Justice Marshall was instrumental in the NAACP’s choice of legal strategies. But LDF was not a one-man shop. Houston had personally recruited Marshall and Oliver Hill, the first- and second-ranked students in the Law School Class of 1933 at Howard University – itself a historically black institution founded during Reconstruction – to fight these legal battles. Later, Jack Greenberg was Marshall’s Assistant Counsel and hand-chosen successor to lead the LDF.

[11] The Kansas State Capitol, in Topeka, has featured John Brown as a founding hero since the 1930s (https://www.kshs.org/places/capitol/graphics/tragic_prelude.jpg).

[12] This was all the more true when the case was argued before the Supreme Court, because the Supreme Court had consolidated Brown for argument with other cases from across the nation.  Those cases were Briggs v. Elliot (from South Carolina), Davis v. County School Board of Prince Edward County (from Virginia), Belton (Bulah) v. Gebhart (from Delaware), and Bolling v. Sharpe (District of Columbia).

Juneteenth’s a celebration of Liberation Day
When word of emancipation reached Texas slaves they say.
In sorrow were we brought here to till a harvest land.
We live and died and fought here
‘Til freedom was at hand.

They tore apart our families
They stole life’s nascent breath.
Turned women into mammies
And worked our men to death.

They shamed the very nation
Which fostered freedom’s birth
It died on the plantation
Denying man his worth.

But greed and misplaced honor
Brought crisis to a head
And Justice felt upon her
The weight of Union Dead.

They fought to save a nation.
And yet they saved its soul
From moral condemnation
And made the country whole.

But when the war was waning
And the battle was in doubt.
The soldiers were complaining
And many dropping out.

There seemed but one solution
Which might yet save the day.
Although its execution
Loomed several months away.

The Congress was divided.
The Cabinet as well.
Abe did his best to hide it.
And no one did he tell.

He meant to sign an order
To deal the South a blow.
The Mason Dixon border
And the Rebel states below

Would now have to contend with
The Freedman on their land.
For slavery had endeth
For woman, child and man.

The time 18 and 63
The first day of the year.
But June of 65 would be
The time we would hold dear.

For that would be when Freedom’s thought
First saw full light of day.
And justified why men had fought
And died along the way.

Now every June we celebrate
What Lincoln had in mind
The day he did emancipate
The bonds of all mankind.

Copyright All rights reserved  www.thecoleportersociety.org

Noah Griffin, America 250 Commissioner, is a lifelong student of history and is founder and artistic director of the Cole Porter Society.

Guest Essayist: The Honorable Don Ritter

Little focused the public’s mind in the early 1950s like the atom bomb and the potential for vast death and destruction in the event of nuclear war with the Soviet Union. Who can forget the classroom drills where students dropped to the floor and hid under their desks ostensibly to reduce exposure to an exploding atomic bomb? It was a prevailing subject of discussion amongst average people as well as elites in government, media and the arts.

The Soviet Union had attained “the bomb” in 1949, four years after Hiroshima and Nagasaki. With the atom bomb at their disposal, the leadership of the Soviet Union was likely emboldened to accelerate its deeply felt ideological imperative to spread communism opportunistically. Getting an A-bomb gave the Soviets a military equality with the United States that greatly reduced the threat of nuclear retaliation against their superior land armies in the event of an East-West military confrontation. The blatant 1950 invasion of U.S.-supported South Korea by communist North Korea, with total Soviet political and logistical commitment and, indeed, encouragement, was likely an outcome of the Soviets possessing the atomic bomb.

In January of 1950, British intelligence, acting on information provided by the FBI, arrested the German-born, British-educated atomic scientist and British citizen Klaus Fuchs, who was spying for the Soviet Union. Fuchs had worked at the very highest level at Los Alamos on the American project to develop an atom bomb and was passing secrets to American Communist Party members who were also spying for the Soviet Union. He admitted his espionage and provided names of his American collaborators at Los Alamos. Those connections led to the arrest of Julius Rosenberg in June of 1950 on suspicion of espionage and, two months later, his wife Ethel on the same charge.

Julius Rosenberg, an electrical engineer, and his wife Ethel were dedicated members of the Communist Party USA and had been working for years for Soviet Military Intelligence (GRU), delivering secret American work on advanced weaponry such as radar detection, jet engines and guided missiles. In hindsight, that information probably exceeded the value of the atomic secrets given to the Soviet Union, although the consensus is that the Rosenbergs’ bomb design information confirmed the direction of Soviet bomb development. Ethel Rosenberg’s brother, David Greenglass, was working at Los Alamos, and evidence brought to light over the years strongly suggests that Ethel was the one who recruited her brother to provide atom bomb design secrets to her husband and that she worked hand-in-glove with him in his espionage activities.

The Rosenbergs, never admitting their crimes, were tried and convicted on the charge of “conspiracy to commit espionage.” The Death Penalty was their sentence. They professed their innocence until the very end when in June 1953, they were electrocuted at Sing Sing prison.

Politically, there was another narrative unfolding. The political Left in the United States and worldwide strongly supported the Rosenbergs’ claims of innocence, reminiscent of their support for former State Department official Alger Hiss, who was tried in 1949 and convicted in 1950 of perjury rather than espionage, as the statute of limitations on espionage had expired. The world-renowned Marxist intellectual Jean-Paul Sartre called the Rosenberg trial a “legal lynching.” On execution day, there was a demonstration of several hundred outside Sing Sing paying their last respects. For decades to follow, the Rosenbergs’ innocence became a rallying cry of the political Left.

Leaders on the political and intellectual Left blamed anti-communist fervor drummed up by McCarthyism for the federal government’s pursuit of the Rosenbergs and others accused of spying for the Soviet Union. At the time, there was great sympathy on the Left with the ideals of communism and America’s former communist ally, the Soviet Union, which had experienced great loss in WW II in defeating hated Nazi fascism. They fervently believed the Rosenbergs’ plea of innocence.

When the Venona Project, a secret archive of intercepted Soviet messages, was made public in the mid-1990s with unequivocal information pointing to the Rosenbergs’ guilt, the political Left’s fervor for the Rosenbergs was greatly diminished. Likewise with material copied from Soviet KGB archives (the Vassiliev Notebooks) in 2009. However, some said (paraphrasing), “OK, they did it, but U.S. government Cold War mentality and McCarthyism were even greater threats” (e.g., The Nation magazine, popular revisionist historian Howard Zinn).

Since then, the Left (and not only the Left), led by the surviving sons of the Rosenbergs, has focused on the unfairness of the sentences, particularly Ethel Rosenberg’s, arguing that she should not have received the death penalty. Federal prosecutors likely hoped that such a sentence would get the accused to talk, implicate others and provide insights into Soviet espionage operations. It did not. The Rosenbergs became martyrs to the Left and, as martyrs, likely continued to serve the Soviet communist cause better than a prison sentence would have. Perhaps that was even their reason for professing innocence.

Debate continues to this day. But these days it’s over the severity of the sentence as just about all agree the Rosenbergs were spies for the Soviet Union. In today’s climate, there would be no death sentence but at the height of the Cold War…

However, there is absolutely no doubt that they betrayed America by spying for the Soviet Union at a time of great peril to America and the world.

Don Ritter is President and CEO Emeritus (after serving eight years in that role) of the Afghan American Chamber of Commerce (AACC) and a 15-year founding member of the Board of Directors. Since 9-11, 2001, he has worked full time on Afghanistan and has been back to the country more than 40 times. He has a 38-year history in Afghanistan.

Ritter holds a B.S. in Metallurgical Engineering from Lehigh University and a Masters and Doctorate from MIT in physical-mechanical metallurgy. After MIT, where his hobby was Russian language and culture, he was a NAS-Soviet Academy of Sciences Exchange Fellow in the Soviet Union in the Brezhnev era for one year doing research at the Baikov Institute for Physical Metallurgy on high temperature materials. He speaks fluent Russian (and French), is a graduate of the Bronx High School of Science and recipient of numerous awards from scientific and technical societies and human rights organizations.

After returning from Russia in 1968, he spent a year teaching at California State Polytechnic University, Pomona, where he was also a contract consultant to General Dynamics in their solid-state physics department. He then returned, as a member of the faculty and administration, to his alma mater, Lehigh University. At Lehigh, in addition to his teaching, research and industry consulting, Dr. Ritter was instrumental in creating a university-wide program linking disciplines of science and engineering to the social sciences and humanities with the hope of furthering understanding of the role of technology in society.

After 10 years at Lehigh, Dr. Ritter represented Pennsylvania’s 15th district, the “Lehigh Valley,” from 1979 to 1993 in the U.S. House of Representatives, where he served on the Science and Technology and Energy and Commerce Committees. Ritter’s main mission as a ‘scientist congressman’ was to work closely with the science, engineering and related industry communities to bring a greater science-based perspective to the legislative, regulatory and political processes.

In Congress, as ranking member on the Congressional Helsinki Commission, he fought for liberty and human rights in the former Soviet Union. The Commission was Ritter’s platform to gather congressional support to the Afghan resistance to the Soviet invasion and occupation during the 1980s. Ritter was author of the “Material Assistance” legislation and founder and House-side Chairman of the “Congressional Task Force on Afghanistan.”

Dr. Ritter continued his effort in the 1990’s after Congress as founder and Chairman of the Washington, DC-based Afghanistan Foundation. In 2003, as creator of a six million-dollar USAID-funded initiative, he served as Senior Advisor to AACC in the creation of the first independent, free-market oriented Chamber of Commerce in the history of the country. Dr. Ritter presently is part of AACC’s seminal role in assisting the development of the Afghan market economy to bring stability and prosperity to Afghanistan. He is also a businessman and investor in Afghanistan.

Guest Essayist: The Honorable Don Ritter

World War II ended in 1945 but the ideological imperative of Soviet communism’s expansion did not. By 1950, the Soviet Union (USSR) had solidified its empire by conquest and subversion in all Central and Eastern Europe. But to Stalin & Co., there were other big fish to fry. At the Yalta Conference in February 1945 between Stalin, Roosevelt and Churchill, the USSR was asked to participate in ending the war in the Pacific against Japan. Even though Japan’s defeat was not in doubt, the atom bomb would not be tested until July and it was not yet known to our war planners if it would work.

An invasion of Japan’s home islands was thought to mean huge American and allied casualties, perhaps half a million, a conclusion reached given the tenacity with which Japanese soldiers had defended islands like Iwo Jima and Okinawa. So much blood was yet to be spilled… they were fighting to the death. The Soviet Red Army, so often oblivious to casualties in its onslaught against Nazi Germany, would share in the burden of an invasion of Japan.

Japan had controlled Manchuria (as the puppet state of Manchukuo). The Korean peninsula had been dominated by Japan historically and was actually annexed early in the 20th century. Islands taken from Czarist Russia in the Russo-Japanese War of 1905 were also in play.

The communist USSR’s presence at the very end of the war in Asia was solidified at Yalta, and that is how Stalin got to create a communist North Korea.

Fast forward to April of 1950: Kim Il Sung traveled to Moscow to discuss how communist North Korea might take South Korea and unify the peninsula under communist rule for the communist world. South Korea, the Republic of Korea (ROK), was dependent on the United States. The non-communist ROK was in the middle of the not-abnormal chaos of establishing a democracy, an economy, and a new country. Its military was far from ready. Neither was that of the U.S.

Kim and Stalin concluded that the South was weak and ripe for adding a new realm to their communist world. Stalin gave Kim the go-ahead to invade and pledged full Soviet support. Vast quantities of supplies, artillery and tanks would be provided to the Army of North Korea for a full-fledged attack on the South. MiG-15 fighter aircraft, flown by Soviet pilots masquerading as Koreans, would be added. Close by was Communist China, for whom the USSR had done yeoman service in the communists’ taking of power. That was one large insurance policy should things go wrong.

On June 25, 1950, a North Korean blitzkrieg came thundering down on South Korea. Closely spaced large artillery firing massive barrages, followed by tanks and troops, a tactic perfected in the Red Army’s battles with the Nazis, wreaked havoc on the overpowered South Korean forces. Communist partisans who had infiltrated the South joined the fray against the ROK. The situation was dire, as it looked like the ROK would collapse.

President Harry Truman decided that an expansionist Soviet communist victory in Korea was not only unacceptable but that it would not stop there. He committed the U.S. to fight back, and fight back we did. In July of 1950, the Americans got support from the UN Security Council to form a UN Command (UNC) under U.S. leadership. As many as 70 countries would get involved eventually, but U.S. troops bore the brunt of the war, with Great Britain and Commonwealth troops a very distant second.

It is contested to this day why the USSR under Stalin was not there at the Security Council session to veto the engagement of the UN, with the U.S. leading the charge. The Soviets had walked out in January and did not return until August. Was it a grand mistake, or did Stalin want to embroil America in a war in Asia so he could more easily deal with his new and possibly expanding empire in Europe? Were the Soviets so confident of a major victory in Korea, one that would embarrass the U.S. and signal to others that America would be weakened by a defeat in Korea and thus be unable to lead the non-communist world?

At a time when ROK and U.S. troops were reeling backwards, when the communist North had taken Seoul, the capital of the country, and much more, Supreme UN Commander General Douglas MacArthur had a plan for a surprise attack. He would attack at Inchon, a port twenty-five miles from Seoul, using the American 1st Marine Division as the spearhead of an amphibious operation landing troops, tanks and artillery. That put UNC troops north of the North Korean forces, in a position to sever the enemy’s supply lines and inflict severe damage on their armies. Seoul was retaken. The bold Inchon landing changed the course of the Korean War and put America back on offense.

MacArthur rapidly led the UNC all the way to the Yalu River bordering China, but when Communist China entered the war, everything changed. MacArthur had over-extended his own supply lines and apparently had not fully considered the potential for a military disaster if China entered the war. The Chinese People’s Liberation Army (PLA) counterattacked. MacArthur was sacked by Truman. There was a debate in the Truman administration over the use of nuclear weapons to counter the Chinese incursion.

Overwhelming numbers of Chinese forces employing sophisticated tactics, and a willingness to take huge casualties, pushed the mostly American troops back to the original dividing line between the north and south, the 38th parallel (38 degrees latitude)… which, ironically, after two more years of deadly stalemate, is where we and our South Korean allies stand today.

Looking back, airpower was our ace in the hole and a great equalizer given the disparity in ground troops. B-29 Superfortresses blasted targets in the north incessantly. Jet fighters like the legendary F-86 Sabre dominated the Soviet MiG-15s. But if you discount nuclear weapons, wars are won by troops on the ground, and on the ground, we ended up where we started.

33,000 Americans died in combat. Other UNC countries lost about 7,000. South Korea, 134,000. North Korea, 213,000. The Chinese lost an estimated 400,000 troops in combat! Civilians all told: 2.7 million, a staggering number.

The Korean War ended in 1953, when Dwight D. Eisenhower was the U.S. President. South Korea has evolved from a nation of rice paddies to a modern industrial power with strong democratic institutions and world-class living standards. North Korea, under communist dictatorship, is one of the poorest and most repressive nations on earth, yet it develops nuclear weapons. China, still a communist dictatorship but having adopted capitalist economic principles, has surged in its economic and military development to become a great power with the capacity to threaten the peace in Asia and beyond.

Communist expansion was halted by a hot war in Korea from 1950 to 1953 but the Cold War continued with no letup.

A question for the reader: What would the world be like if America and its allies had lost the war in Korea?

Guest Essayist: The Honorable Don Ritter

When Time Magazine was in its heyday as the dominant ‘last word’ in American media, Whittaker Chambers was, over a ten-year period, its greatest writer and editor. He was later an editor of National Review alongside its founder, William F. Buckley. He received the Presidential Medal of Freedom posthumously from President Ronald Reagan in 1984. His memoir, Witness, is an American classic.

But all that was a vastly different world from his earlier life as a card-carrying member of the Communist Party in the 1920s and spy for Soviet Military Intelligence (GRU) in the 1930s.

We remember Chambers today for the nation’s focus on his damning testimony in the Alger Hiss congressional investigations and spy trials of 1948-50, and for a trove of documents called the Pumpkin Papers.

Alger Hiss came from wealth and was a member of the privileged class; he attended Harvard Law School and was upwardly mobile in the State Department, reaching high-ranking positions with access to extremely sensitive information. He was an organizer of the Yalta Conference between Stalin, Roosevelt and Churchill. He helped create the United Nations and, in 1949, was President of the prestigious Carnegie Endowment for International Peace.

In Congress in 1948, based on FBI information, a number of Americans were being investigated for spying for the Soviet Union, dating back to the early 1930s and during WW II, particularly in the United States Department of State. These were astonishing accusations at the time. When an American spy for the Soviets, Elizabeth Bentley, defected and accused Alger Hiss and a substantial group of U.S. government officials in the Administration of Franklin Roosevelt of spying for the Soviet Union, Hiss vehemently denied the charges. Handsome and sophisticated, Hiss was for a lifetime well-connected, well-respected and well-spoken. He made an extremely credible witness before the House Un-American Activities Committee. Plus, most public figures involved in media, entertainment and academe came to his defense.

Whittaker Chambers, by then a successful editor at Time, was subpoenaed to testify before HUAC; he did so reluctantly, fearing retribution by the GRU. He accused Hiss of secretly being a communist and of passing secret documents to him for transfer to Soviet Intelligence. He testified that he and Hiss had been together on several occasions. Hiss denied it. Chambers was a product of humble beginnings and divorced parents; his brother committed suicide at 22, and Chambers himself was accused of having psychological problems. All this was prequel to his adoption – “something to live for and something to die for” – of the communist cause. His appearance, dress, voice and demeanor, no less than his stinging message, were considered less than attractive. The comparison to the impression that Hiss made was stark, and Chambers was demeaned and derided by Hiss’ supporters.

Then came the trial in 1949. During the pre-trial discovery period, Chambers eventually released large quantities of microfilm he had kept hidden as insurance against any GRU reprisal, including murder. Eliminating defectors was not uncommon in GRU practice then… and unfortunately persists to this day.

A then little-known Member of Congress and member of HUAC, one Richard Nixon, had gained access to the content of Chambers’ secret documents and adamantly pursued the case before the Grand Jury. Nixon at first refused to give the actual evidence to the Grand Jury but later relented. Two HUAC investigators went to Chambers’ farm in Westminster, Maryland, and from there, guided by Chambers, to his garden. There, in a capped and hollowed-out orange gourd (not a pumpkin!), were the famous “Pumpkin Papers.” Contained in the gourd were hundreds of documents on microfilm, including four hand-written pages by Hiss, implicating him in spying for the Soviet Union.

Hiss was tried and convicted of perjury as the statute of limitations on espionage by then had run out. He was sentenced to two five-year terms and ended up serving three and a half years total in federal prison.

Many on the political Left refused to believe that Alger Hiss was guilty, and to this day there are some who still support him. However, the Venona papers released by the U.S. National Security Agency in 1995, which contained intelligence intercepts from the Soviet Union during Hiss’ time as a spy, showed conclusively that Hiss was indeed working for the Soviets. The U.S. government at the highest levels knew all along that Hiss was a spy, but in order to keep the Venona Project a secret and to keep gathering intelligence from the Soviet Union during the nuclear standoff and the Cold War, it could not divulge publicly what it knew.

Alger Hiss died at the ripe old age of 92, Whittaker Chambers at the relatively young age of 61. Many believe that stress from his life as a spy, and later the pervasive and abusive criticism he endured, weakened his heart and led to his early death.

The Hiss case is seminal in the history of the Cold War and its impact on America because it led to a taking of sides politically on the Left and on the Right: a surge in anti-communism on the Right and a reaction to anti-communism on the Left. At the epicenter of the saga is Whittaker Chambers.

Author’s Postscript:

To me, this is really the story of Whittaker Chambers, whose brilliance as a thinker and as a writer alone did more to unearth and define the destructive nature of communism than any other American of his time. His memoir, Witness, a best-seller published in 1952, is one of the most enlightening works of non-fiction one can read. It reflects a personal American journey through a dysfunctional family background and depressed economic times when communism and Soviet espionage were ascendant, making the book both an educational experience and a page-turning thriller. In Witness, as a former Soviet spy who became disillusioned with communism’s murder and lies, Chambers intellectually and spiritually defined its tyranny and economic incompetence to Americans in a way that previously only those who experienced it personally could understand. It gave vital insights into the terrible and insidious practices of communism to millions of Americans.

Don Ritter, Sc.D., served in the United States House of Representatives for the 15th Congressional District of Pennsylvania. As founder of the Afghanistan-American Foundation, he was senior advisor to the Afghan-American Chamber of Commerce (AACC) and the Afghan International Chamber of Commerce (AICC). Congressman Ritter currently serves as president and CEO of the Afghan-American Chamber of Commerce. He holds a B.S. in Metallurgical Engineering from Lehigh University and an M.S. and Sc.D. (Doctorate) in Physical Metallurgy and Materials Science from the Massachusetts Institute of Technology (M.I.T.). For more information about the work of Congressman Don Ritter, visit http://www.donritter.org/

Guest Essayist: The Honorable Don Ritter

It was a time when history hung in the balance. The outcome of a struggle between free and controlled peoples – democratic versus totalitarian rule – was at stake.

Here’s the grim picture in early 1948. Having fought for four years against the Nazis in history’s biggest and bloodiest battles, victorious Soviet communist armies have thrown back the Germans across all of Eastern and Central Europe, and millions of Soviet troops are either occupying their ‘liberated’ lands or have installed oppressive communist governments. Soviet army and civilian losses in WW II are unimaginable: soldiers killed number around 10 million, perhaps 20 million when civilians are included. Josef Stalin, the murderous Soviet communist dictator, is dead set on not giving up one inch.

Czechoslovakia has just succumbed to communist control in February under heavy Soviet pressure. Poland fell to the communists back in 1946 when Stalin, reneging on his promise of free elections made to American President Roosevelt and British Prime Minister Churchill at Yalta, instead installed a Soviet puppet government while systematically eradicating the Polish opposition. Churchill had delivered his public-awakening “Iron Curtain” speech two years earlier. The major Allies, America, Great Britain and France, are extremely worried about Stalin and the Red Army’s next moves.

Under agreements between the Soviet Union and the allies – Americans, British and French – Germany is divided into four occupation zones, one controlled by each of the four countries. The Allies control the western half and the Soviet Union (USSR) the eastern. Berlin itself, once the proud capital of Germany, is now a wasteland of rubble, poverty and hunger after city-shattering house-to-house combat between Nazi and Soviet soldiers. There is barely a building left standing, and hardly any men left in the city; they have either been killed in battle or taken prisoner by the Red Army. Berlin, a hundred miles inside the Soviet-controlled zone in eastern Germany, is likewise divided between the Allies and the USSR.

That’s the setting for what is to take place next in the pivotal June of 1948.

The Allies had for some time decided that a democratic, western-oriented Germany would be the best defense against further Soviet communist expansion westward. Germany, in a short period of time, had made substantial progress toward democratization and rebuilding. This unnerved Stalin, who all along had planned for a united Germany in the communist orbit, and the Soviets were gradually increasing pressure on transport in and out of Berlin.

The Allies announced on June 1, 1948, the addition of the French zone to the already unified British and American zones. Then, on June 18, the Allies announced the creation of a common currency, the Deutschmark, to stimulate economic recovery across the three allied zones. Stalin and the Soviet leadership, seeing in these actions the potential for a new, vital, non-communist western Germany, decided on June 24 to blockade Berlin’s rails, roads and canals to choke off what had become a western-nation-allied West Germany and West Berlin.

Stalin’s chess move was to starve the citizens of the city by cutting off their food supply, their electricity, and their coal to heat homes, power remaining factories and rebuild. His plan also was to make it difficult to resupply allied military forces. This was a bold move to grab West Berlin for the communists. Indeed, there were some Americans and others who felt that Germany, because of its crimes against humanity, should never again be allowed to be an industrial nation and that we shouldn’t stand up for Berlin. But that opinion did not hold sway with President Truman.

What Stalin and the Soviet communists didn’t count on was the creativity, ingenuity, perseverance and capacity of America and its allies.

Even though America had nuclear weapons at the time and the Soviet Union did not, America had largely demobilized after the war. So, rather than fight the Red Army – firmly dug in with vast quantities of men, artillery and tanks in eastern Germany – and risk another world war, the Allies chose to counter the blockade with an airlift. The greatest airlift of all time. Food, supplies and coal would be transported to the people of Berlin, mainly on American C-54s flown by American, British, French and other allied pilots. But only America had the numbers of aircraft, the amount of fuel and the logistical resources to actually do what looked to Stalin and the Soviets to be impossible.

One can only imagine the enormity of the 24-7 activity. Nearly 300,000 flights were made from June 24, 1948, until September 30, 1949. Flights were coming in every 30 seconds at the height of the airlift. It was a truly amazing logistical achievement to work up to the delivery of some three and a half thousand tons daily to meet the city’s needs. Think of the energy and dedication of the pilots and mechanics, those involved in the supply chains and the demanding delivery schedules… the sheer complexity of such an operation is mind-boggling.
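
As a rough check on the scale of those figures, the short sketch below works through the arithmetic using only the essay’s own round numbers (about 300,000 flights, 3,500 tons a day, and the June 24, 1948 to September 30, 1949 span); it is purely illustrative, not an official tally.

```python
# Back-of-the-envelope arithmetic for the Berlin Airlift figures cited above.
# Uses only the essay's round numbers; actual operational totals varied.
from datetime import date

total_flights = 300_000                                   # approximate total flights
tons_per_day = 3_500                                       # approximate daily tonnage target

days = (date(1949, 9, 30) - date(1948, 6, 24)).days        # length of the airlift in days
flights_per_day = total_flights / days
minutes_between = 24 * 60 / flights_per_day                # average gap between landings
tons_per_flight = tons_per_day / flights_per_day

print(f"{days} days, about {flights_per_day:.0f} flights per day")
print(f"one landing every {minutes_between:.1f} minutes on average, around the clock")
print(f"roughly {tons_per_flight:.1f} tons delivered per flight")
```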

Stalin, seeing the extent of Allied perseverance and capability over a year’s time, and meanwhile suffering an enormous propaganda defeat worldwide, relented.

Think of the Americans who led this history-making endeavor, all the men and women, from the Generals to the soldiers, airmen and civilians, and their achievement on behalf of creating a free and prosperous Germany. A free Germany that sat side-by-side in stark contrast with the brutal communist east. To them, known as “the greatest generation,” we owe our everlasting gratitude for victory in this monumental first ‘battle’ of the Cold War.

Don Ritter, Sc.D., served in the United States House of Representatives for the 15th Congressional District of Pennsylvania. As founder of the Afghanistan-American Foundation, he was senior advisor to the Afghan-American Chamber of Commerce (AACC) and the Afghan International Chamber of Commerce (AICC). Congressman Ritter currently serves as president and CEO of the Afghan-American Chamber of Commerce. He holds a B.S. in Metallurgical Engineering from Lehigh University and an M.S. and Sc.D. (Doctorate) in Physical Metallurgy and Materials Science from the Massachusetts Institute of Technology (M.I.T.). For more information about the work of Congressman Don Ritter, visit http://www.donritter.org/

Guest Essayist: Tony Williams

The fall of 1939 saw dramatic changes in world events that would alter the course of history. On September 1, 1939, Nazi Germany invaded Poland, triggering the start of World War II in Europe, though imperial Japan had already been ravaging Manchuria and China for nearly a decade. Even though the United States was officially neutral in the world war, President Franklin Roosevelt had an important meeting in mid-October.

Roosevelt met with his friend, Alexander Sachs, who hand-delivered a letter from scientists Albert Einstein and Leo Szilard. They informed the president that German scientists had recently discovered nuclear fission and might possibly be able to build a nuclear bomb. The warning prompted Roosevelt to initiate research into the subject in order to beat the Nazis to such a weapon.

The United States entered the war after Japan bombed Pearl Harbor on December 7, 1941, and the Roosevelt administration began the highly secretive Manhattan Project in October 1942. The project had facilities in far-flung places and employed the efforts of more than half a million Americans across the country. The weapons research laboratory resided in Los Alamos, New Mexico, under the direction of J. Robert Oppenheimer.

As work progressed on a nuclear weapon, the United States waged a global war in the Pacific, North Africa, and Europe. The Pacific witnessed a particularly brutal war against Japan. After the Battle of Midway in June 1942, the Americans launched an “island-hopping” campaign. They were forced to eliminate a tenacious and dug-in enemy in heavy jungles in distant places like Guadalcanal. The Japanese forces gained a reputation for suicidal banzai charges and fighting to the last man.

By late 1944, the United States was closing in on Japan and invaded the Philippines. The U.S. Navy won the Battle of Leyte Gulf, but the Japanese desperately launched kamikaze attacks that inflicted a heavy toll, sinking and damaging several ships and causing thousands of American casualties. The nature of the attacks helped confirm the American belief that they were fighting a fanatical enemy.

The battles of Iwo Jima and Okinawa greatly shaped American views of Japanese barbarism. Iwo Jima was a key island, providing airstrips for the bombing of Japan and protection for U.S. naval assets as they built up for the invasion of Japan. On February 19, 1945, the Fourth and Fifth Marine Divisions landed mid-island after a massive preparatory bombardment. After a dreadful slog against an entrenched enemy, the Marines took Mt. Suribachi and famously raised an American flag on its heights.

The worst was yet to come against the nearly 22,000-man garrison in a complex network of tunnels. The brutal fighting was often hand-to-hand. The Americans fought for each yard of territory by using grenades, satchel charges, and flamethrowers to attack pillboxes. The Japanese fought fiercely and sent waves of hundreds of men in banzai charges. The Marines and Navy lost 7,000 dead and nearly one-third of the Marines who fought on the island were casualties. Almost all the defenders perished.

The battle for Okinawa was just as bloody. Two Marine and two Army divisions landed unopposed on Okinawa on April 1 after another relatively ineffective bombardment and quickly seized two airfields. The Japanese built nearly impregnable lines of defense, but none was stronger than the southern Shuri line of fortresses where 97,000 defenders awaited.

The Marines and soldiers attacked in several frontal assaults and were ground up by mine fields, grenades, and pre-sighted machine-guns and artillery covering every inch. For their part, the Japanese launched several fruitless attacks that bled them dry. The war of attrition finally ended with 13,000 Americans dead and 44,000 wounded. On the Japanese side, more than 70,000 soldiers and tens of thousands of Okinawan civilians were killed. The naval battle in the waters surrounding the island witnessed kamikaze and bombing attacks that sank 28 U.S. ships and damaged an additional 240.

Okinawa was an essential staging area for the invasion of Japan and additional proof of the fanatical nature of the enemy. Admiral Chester Nimitz, General Douglas MacArthur, and the members of the Joint Chiefs of Staff were planning Operation Downfall—the invasion of Japan—beginning with Operation Olympic in southern Japan in the fall of 1945 with fourteen divisions and twenty-eight aircraft carriers, followed by Operation Coronet in central Japan in early 1946.

While the U.S. naval blockade and aerial bombing of Japan were very successful in grinding down the enemy war machine, Japanese resistance was going to be even stronger and more fanatical than at Iwo Jima and Okinawa. The American planners expected to fight a horrific war against the Japanese forces, kamikaze attacks, and a militarized civilian population. Indeed, the Japanese reinforced Kyushu with thirteen divisions of 450,000 entrenched men by the end of July and had an estimated 10,000 aircraft at their disposal. Japan was committed to a decisive final battle defending its home. Among U.S. military commanders, only MacArthur underestimated the difficulty of the invasion, as he was wont to do.

Harry Truman succeeded to the presidency when Roosevelt died on April 12, 1945. Besides the burdens of command decisions in fighting the war to a conclusion, holding together a fracturing alliance with the Soviets, and shaping the postwar world, Truman learned about the Manhattan Project.

While some of the scientists who worked on the project expressed grave concerns about the use of the atomic bomb, most decision-makers expected that it would be used if it were ready. Secretary of War Henry Stimson headed the Interim Committee that considered the use of the bomb. The committee rejected the idea of a demonstration or a formal warning to the Japanese, fearing that a failed demonstration would only strengthen Japanese resolve.

On the morning of July 16, the “gadget” nuclear device was successfully exploded at Alamogordo, New Mexico. The test was code-named “Trinity,” and word was immediately sent to President Truman, then at the Potsdam Conference negotiating the postwar world. He was ecstatic and tried to use the news to impress Stalin, who received it impassively because he had several spies in the Manhattan Project. The Allies issued the Potsdam Declaration demanding that Japan surrender unconditionally or face “prompt and utter destruction.”

After possible targets were selected, the B-29 bomber Enola Gay carried the uranium atomic bomb nicknamed Little Boy from Tinian Island. On August 6, 1945, the Enola Gay dropped Little Boy over Hiroshima, where the blast and resulting firestorm killed some 70,000 people and grievously injured and sickened tens of thousands of others. The Japanese government still adamantly refused to surrender.

On August 9, another B-29 dropped the plutonium bomb Fat Man over Nagasaki, which was a secondary target. Heavy cloud cover meant that the bomb was dropped in a valley that somewhat restricted the effect of the blast. Still, approximately 40,000 were killed. The dropping of the second atomic bomb and the simultaneous Soviet invasion of Manchukuo led Emperor Hirohito to announce Japan’s surrender on August 15. The formal surrender took place on the USS Missouri on September 2.

General MacArthur closed the ceremony with a moving speech in which he said,

It is my earnest hope, and indeed the hope of all mankind, that from this solemn occasion a better world shall emerge out of the blood and carnage of the past—a world founded upon faith and understanding, a world dedicated to the dignity of man and the fulfillment of his most cherished wish for freedom, tolerance, and justice…. Let us pray that peace now be restored to the world, and that God will preserve it always. These proceedings are closed.

World War II had ended, but the Cold War and the atomic age had begun.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Joshua Schmid

‘A Date Which Will Live in Infamy’

The morning of December 7, 1941 was another day in paradise for the men and women of the U.S. armed forces stationed at the Pearl Harbor Naval Base on Oahu, Hawaii. By 7:30 am, the air temperature was already a balmy 73 degrees. A sense of leisure was in the air as sailors enjoyed the time away from military duties that Sundays offered. Within the next half hour, the serenity of the island was shattered. Enemy aircraft streaked overhead, marked only by a large red circle. The pilots—who had been training for months for this mission—scanned their surroundings and set their eyes on their target: Battleship Row. The eight ships—the crown of the United States’ Pacific fleet—sat silently in harbor, much to the delight of the oncoming Japanese pilots, who began their attack.

Since the Japanese invasion of Manchuria in 1931, the relationship between the United States and Japan had significantly deteriorated. Over the course of the ensuing decade, the U.S. imposed embargoes on strategic materials such as oil and froze Japanese assets to deter the Empire of the Rising Sun’s continual aggressions in the Pacific. For many in the American political and military leadership, it became a question not of whether violent conflict would erupt between the two nations but of when. Indeed, throughout the month of November 1941, the two military commanders at Pearl Harbor—Admiral Husband Kimmel and Lieutenant General Walter Short—received multiple warnings from Washington, D.C. that conflict with Japan somewhere in the Pacific would very soon be a reality. In response, Kimmel and Short ordered that aircraft be moved out of their hangars at Pearl Harbor and lined up on runways in order to prevent sabotage. Additionally, radar—a new technology that had not yet reached its full capabilities—began operation a few hours a day on the island of Oahu. Such a lackluster response to war warnings can largely be attributed to the fact that American intelligence suspected that the initial Japanese strike would fall on U.S. bases in the remote Pacific such as the Philippines or Wake Island. The logistical maneuvering it would take to carry out a large-scale attack on Pearl Harbor—nearly 4,000 miles from mainland Japan—seemed all but impossible.

Such beliefs, of course, were immediately drowned out by the wails of the air raid sirens and the repeated message, “Air raid Pearl Harbor. This is not a drill” on the morning of what turned out to be perhaps the most momentous day of the entire twentieth century. The Japanese strike force launched attacks from aircraft carriers in two waves. Torpedo and dive bombers attacked hangars and the ships anchored in the harbor while fighters provided strafing runs and air defense. In addition to the eight American battleships, a variety of cruisers, destroyers, and support ships were at Pearl Harbor.

A disaster quickly unfolded for the Americans. Many sailors had been granted leave that day given it was a Sunday. These men were not at their stations as the attack began—a fact that Japanese planners likely expected. Members of the American radar teams did in fact spot blips of a large array of aircraft before the attack. However, when they reported it to their superiors, they were told it was a flight of incoming American planes. The American aircraft that were lined up in clusters on runways to prevent sabotage now made easy targets for the Japanese strike force. Of the 402 military aircraft at Pearl Harbor and the surrounding airfields, 188 were destroyed and 159 damaged. Only a few American pilots were able to take off—those who did bravely took on the overwhelming swarm of Japanese aircraft and successfully shot a few down. Ships in the harbor valiantly attempted to get under way despite being undermanned, but with little success. The battleship Nevada attempted to lumber its way out of the narrow confines of the harbor, but after suffering multiple bomb hits her captain purposefully grounded her to avoid blocking the channel. All eight of the battleships took some form of damage, and four were sunk. In the most infamous event of the entire attack, a bomb struck the forward magazine of the battleship Arizona, causing a mass explosion that literally ripped the ship apart. Of the nearly 2,500 Americans killed in the attack on Pearl Harbor, nearly half were sailors onboard the Arizona. In addition to the battleships, a number of cruisers, destroyers, and other ships were also sunk or severely damaged. In contrast, only 29 Japanese planes were shot down during the raid. The Japanese fleet immediately departed and moved to conduct other missions against British, Dutch, and U.S. holdings in the Pacific, believing that they had achieved the great strike that would incapacitate American naval power in the Pacific for years to come.

On the morning of the attack at Pearl Harbor, the aircraft carrier U.S.S. Saratoga was in port at San Diego. The other two carriers in the Pacific fleet were also noticeably absent from Pearl Harbor when the bombs began to fall. Japanese planners thought little of it in the ensuing weeks—naval warfare theory at the time was fixated on the idea of battleships dueling each other from long range with giant guns. Without their battleships, how could the Americans hope to stop the Japanese from dominating in the Pacific? Within a year, however, American carriers would win a huge victory at the Battle of Midway, helping turn the tide in the Pacific in favor of the Americans and making it a carrier war.

The victory at Midway would boost the morale of an American people already hard at work since December 7, 1941, mobilizing their entire society for war in one of the greatest human efforts in history. Of the eight battleships damaged at Pearl Harbor, all but the Arizona and Oklahoma were salvaged and returned to battle before the end of the war. In addition, the U.S. produced thousands of ships between 1941 and 1945 as part of a massive new navy. In the end, rather than striking a crushing blow, the Japanese task force had merely awakened a sleeping giant that eagerly sought to avenge its wounds. As for the men and women who fought and died on December 7, 1941—a date that President Franklin Roosevelt declared would “live in infamy”—they will forever be enshrined in the hearts and minds of Americans for their courage and honor on that fateful day.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.

Guest Essayist: Andrew Langer

In 1992, United States Supreme Court Justice Sandra Day O’Connor enunciated an axiomatic principle of constitutional governance, that the Constitution “protects us from our own best intentions,” dividing power precisely so that we might resist the temptation to concentrate that power as “the expedient solution to the crisis of the day.”[1] It is a sentiment that echoes through American history, as there has been a constant “push-pull” between the demands of the populace and the divisions and restrictions on power as laid out by the Constitution.

Before President Franklin Delano Roosevelt’s first term, the concept of a 100-Day agenda simply didn’t exist. But since 1933, incoming administrations have been measured by that arbitrary standard—what they plan on accomplishing in those first hundred days, and what they actually accomplish.

The problem, of course, is that public policy decision making should not only be a thorough and deliberative process, but in order to protect the rights of the public, must allow for significant public input. Without that deliberation, without that public participation, significant mistakes can be made. This is why policy made in a crisis is almost always bad policy—and thus Justice O’Connor’s vital warning.

FDR came into office with America in the grip of its most significant crisis since the Civil War. Nearly three and a half years into an economic disaster, nearly a quarter of the workforce was out of work, banks and businesses were failing, and millions of Americans were completely devastated and looking for real answers.

The 1932 presidential election was driven by this crisis. Incumbent President Herbert Hoover was seen as a “do-nothing” president whose efforts at stabilizing the economy through tariffs and tax increases hadn’t stemmed the economic tide of the Great Depression. FDR had built a reputation for action as governor of New York, and on the campaign trail laid out a series of ambitious plans that he intended to enact, which he called “The New Deal.” Significant portions of this New Deal were to be enacted during those first 100 days in office.

This set a standard that later presidents would be held to: what they wanted to accomplish during those first hundred days, and how those goals might compare to the goals laid out by FDR.

At the core of those enactments was the creation of three major federal programs: the Federal Deposit Insurance Corporation, the Civilian Conservation Corps, and the National Recovery Administration. Of these three, the FDIC remains in existence today, with its mission largely unchanged: to guarantee the deposit accounts of bank customers and, in doing so, ensure that banks aren’t forced to close because customers suddenly withdraw all their money and close their accounts.

This had happened with great frequency following the stock market crash of 1929—and such panicked activity was known, popularly, as a “bank run.”[2]

FDR was inaugurated on March 4, 1933. On March 6, he closed the entire American banking system! Three days later, on March 9, Congress passed the Emergency Banking Act—which essentially created the FDIC. Three days later, on Sunday, March 12, FDR gave the first of his “fireside chats,” assuring the nation that when the banks re-opened the following day, the federal government would be protecting Americans’ money.

But there were massive questions over the constitutionality of many of FDR’s New Deal proposals, and many of them were challenged in federal court. At the same time, a number of states were also attempting their own remedies for the nation’s economic morass—and in challenges to some of those policies, the Supreme Court upheld them, citing a new and vastly expanded understanding of government power in an emergency, with sweeping ramifications.

In the Blaisdell case[3], the Supreme Court upheld a Minnesota law that essentially suspended the ability of mortgage holders to collect mortgage payments or pursue remedies when payments had not been made. The court said that, due to the severe national emergency created by the Great Depression, government had vast and enormous power to deal with it.

But critics have understood the serious and longstanding ramifications of such decisions. Richard Epstein, an adjunct scholar at the libertarian-leaning Cato Institute and an NYU law professor, said of Blaisdell that it “trumpeted a false liberation from the constitutional text that has paved the way for massive government intervention that undermines the security of private transactions. Today the police power exception has come to eviscerate the contracts clause.”

In other words—in a conflict between the rights of private parties under the contracts clause and the power of government under the commerce clause, when it comes to emergencies, the power of government wins.

Interestingly enough, after a series of New Deal programs had been ruled unconstitutional by the Supreme Court, FDR in 1937 attempted to change the make-up of the court in what became known as the “court-packing scheme.” The proposal essentially called for remaking the balance of the court by appointing an additional justice (up to six in total) for every sitting justice over the age of 70 years and 6 months.

Though the legislation languished in Congress, pressure was brought to bear on the Supreme Court, and Associate Justice Owen Roberts began casting votes in support of FDR’s New Deal programs—fundamentally shifting the direction of federal power toward concentration. That shift continued until the early 1990s, when the high court began issuing decisions (like New York v. United States) that limited the power of the federal government and the expansive interpretation of the commerce clause.

But it is the sweeping power of the federal government to act within a declared emergency, and the impact of the policies created within that crisis, that remain of continued concern. Much as the lack of deliberation during FDR’s first 100 days led to programs that had sweeping and lasting impact on public life and created huge unintended consequences, we are seeing those same mistakes played out today—the declaration of a public emergency, sweeping policies created without any real deliberation or public input, and massive (and devastating) consequences for businesses, jobs, and society in general.

If we are to learn anything from those first hundred days, it should be that we shouldn’t let a deliberative policy process be hijacked, and certainly not for political reasons. Moreover, when policies are enacted without deliberation, we should be prepared for the potential consequences of those policies… and adjust them accordingly when new information presents itself (and when the particular crisis has passed). Justice O’Connor was correct—the Constitution does protect us from our own best intentions.

We should rely on it, especially when we are in a crisis.

Andrew Langer is President of the Institute for Liberty.  He teaches in the Public Policy program at the College of William and Mary.

[1] New York v. US, 505 US 144 (1992)

[2] Bank runs were so ingrained in the national mindset that Frank Capra dramatized one in his famous film, It’s A Wonderful Life. In it, the Bedford Falls bank is the victim of a run and “saved” by the film’s antagonist, Mr. Potter. Potter offers to “guarantee” the Bailey Building and Loan, but, knowing it would give Potter control, the film’s hero, George Bailey, uses his own money to keep his firm intact.

[3] Home Building and Loan Association v Blaisdell, 290 US 398 (1934)

Guest Essayist: John Steele Gordon

Wall Street, because it tries to discern future values, is usually a leading indicator. It began to recover, for instance, from the financial debacle of 2008 in March of the next year. But the economy didn’t begin to grow again until June of 2009.

But sometimes Wall Street separates from the underlying economy and loses touch with economic reality. That is what happened in 1929 and brought about history’s most famous stock market crash.

The 1920s were a prosperous time for most areas of the American economy and Wall Street reflected that expansion. But rural America was not prospering. In 1900, one-third of all American crop land had been given over to fodder crops to feed the country’s vast herd of horses and mules. But by 1930, horses had largely given way to automobiles and trucks while the mules had been replaced by the tractor.

As more and more agricultural land was turned over to growing crops for human consumption, food prices fell and rural areas began to fall into depression. Rural banks began failing at the rate of about 500 a year.

Because the major news media, then as now, was highly concentrated in the big cities, this economic problem went largely unnoticed. Indeed, while the overall economy rose 59 percent in the 1920s, the Dow Jones Industrial Average increased 400 percent.

The Federal Reserve, in the fall of 1928, raised the discount rate from 3.5 percent to 5 percent and began to slow the growth of the money supply, in hopes of getting the stock market to cool off.

But by then, Wall Street was in a speculative bubble. Fueling that bubble was a very misguided policy by the Fed. It allowed member banks to borrow at the discount window at five percent. The banks, in turn, loaned the money to brokerage houses at 12 percent. The brokers then loaned the money to speculators at 20 percent. The Fed tried to use “moral suasion” to get the banks to stop borrowing in this way. But if a bank can make 7 percent on someone else’s money, it is going to do so. The Fed should have simply closed the window for those sorts of loans, but didn’t.
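
To make that spread concrete, here is a minimal illustrative calculation using the rates cited above; the dollar figure is hypothetical, chosen only to show the arithmetic.

```python
# Illustrative arithmetic for the 1928-29 call-money chain described above.
# The rates are those cited in the essay; the principal is a hypothetical amount.
principal = 1_000_000        # hypothetical sum borrowed at the Fed's discount window

discount_rate = 0.05         # banks borrow from the Fed at 5 percent
broker_rate = 0.12           # banks lend to brokerage houses at 12 percent
call_rate = 0.20             # brokers lend to speculators at 20 percent

bank_spread = broker_rate - discount_rate     # the 7 points a bank earns on borrowed money
broker_spread = call_rate - broker_rate       # the 8 points a broker earns on top of that

print(f"bank spread:   {bank_spread:.0%} -> ${principal * bank_spread:,.0f} a year per $1,000,000")
print(f"broker spread: {broker_spread:.0%} -> ${principal * broker_spread:,.0f} a year per $1,000,000")
```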

By Labor Day, 1929, the American economy was in a recession but Wall Street still had not noticed. On the day after Labor Day, the Dow hit a new all-time high at 381.17. It would not see that number again for 25 years. Two days later the market began to wake up.

A stock market analyst of no great note, Roger Babson, gave a talk that day in Wellesley, Massachusetts, and said, “I repeat what I said at this time last year and the year before, that sooner or later a crash is coming.” When news of this altogether unremarkable prophecy crossed the broad tape at 2:00 that afternoon, all hell broke loose. Prices plunged (US Steel fell 9 points, AT&T 6) and volume in the last two hours of trading was a fantastic two million shares.

Remembered as the Babson Break, it was like a slap across the face of an hysteric, and the mood on the Street went almost in an instant from “The sky’s the limit” to “Every man for himself.”

For the next six weeks, the market trended downwards, with some plunges followed by weak recoveries. Then on Wednesday, October 23rd, selling swamped the market on the second highest volume on record. The next morning there was a mountain of sell orders in brokerage offices across the country and prices plunged across the board. This set off a wave of margin calls, further depressing prices, while short sellers put even more pressure on prices.

A group of the Street’s most important bankers met at J. P. Morgan and Company, across Broad Street from the exchange.

Together they raised $20 million to support the market and entrusted it to Richard Whitney, the acting president of the NYSE.

At 1:30, Whitney strode onto the floor and asked the price of US Steel. He was told that it had last traded at 205 but that it had fallen several points since, with no takers.

“I bid 205 for 10,000 Steel,” Whitney shouted. He then went to other posts, buying large blocks of blue chips. The market steadied as shorts closed their positions and some stocks even ended up for the day. But the volume had been an utterly unprecedented 13 million shares.

The rally continued on Friday but there was modest profit taking at the Saturday morning session. Then, on Monday, October 28th, selling resumed as rumors floated around that some major speculators had committed suicide and that new bear pools were being formed.

On Tuesday, October 29th, remembered thereafter as Black Tuesday, there was no stopping the collapse in prices. Volume reached 16 million shares, a record that would stand for nearly 40 years, and the tape ran four hours late. The Dow was down a staggering 23 percent over those two days and closed nearly 40 percent below its September high.

Prices trended downwards for more than another month, but by the spring of 1930 the market, badly oversold by December, had recovered about 45 percent of its autumn losses. Many thought the recession was over. But then the federal government and the Federal Reserve began making a series of disastrous policy blunders that would turn an ordinary recession into the Great Depression.

John Steele Gordon was born in New York City in 1944 into a family long associated with the city and its financial community. Both his grandfathers held seats on the New York Stock Exchange. He was educated at Millbrook School and Vanderbilt University, graduating with a B.A. in history in 1966.

After college he worked as a production editor for Harper & Row (now HarperCollins) for six years before leaving to travel, driving a Land-Rover from New York to Tierra del Fuego, a nine-month journey of 39,000 miles. This resulted in his first book, Overlanding. Altogether he has driven through forty-seven countries on five continents.

After returning to New York he served on the staffs of Congressmen Herman Badillo and Robert Garcia. He has been a full-time writer for the last twenty years. His second book, The Scarlet Woman of Wall Street, a history of Wall Street in the 1860’s, was published in 1988. His third book, Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt, was published in 1997. The Great Game: The Emergence of Wall Street as a World Power, 1653-2000, was published by Scribner, a Simon and Schuster imprint, in November, 1999. A two-hour special based on The Great Game aired on CNBC on April 24th, 2000. His latest book, a collection of his columns from American Heritage magazine, entitled The Business of America, was published in July, 2001, by Walker. His history of the laying of the Atlantic Cable, A Thread Across the Ocean, was published in June, 2002. His next book, to be published by HarperCollins, is a history of the American economy.

He specializes in business and financial history. He has had articles published in, among others, Forbes, Forbes ASAP, Worth, the New York Times and The Wall Street Journal Op-Ed pages, the Washington Post’s Book World and Outlook. He is a contributing editor at American Heritage, where he has written the “Business of America” column since 1989.

In 1991 he traveled to Europe, Africa, North and South America, and Japan with the photographer Bruce Davidson for Schlumberger, Ltd., to create a photo essay called “Schlumberger People,” for the company’s annual report.

In 1992 he was the co-writer, with Timothy C. Forbes and Steve Forbes, of Happily Ever After?, a video produced by Forbes in honor of the seventy-fifth anniversary of the magazine.

He is a frequent commentator on Marketplace, the daily Public Radio business-news program heard on more than two hundred stations throughout the country. He has appeared on numerous other radio and television shows, including New York: A Documentary Film by Ric Burns, Business Center and Squawk Box on CNBC, and The News Hour with Jim Lehrer on PBS. He was a guest in 2001 on a live, two-hour edition of Booknotes with Brian Lamb on C-SPAN.

Mr. Gordon lives in North Salem, New York. His email address is jsg@johnsteelegordon.com.

Guest Essayist: James C. Clinger

An admirer of the inventors Bell and Edison and of Einstein’s theories, scientist and inventor Philo T. Farnsworth designed the first electronic television based on an idea he sketched in a high school chemistry class. He studied earlier experiments and learned that some success had been achieved in transmitting and projecting images. While plowing fields, Farnsworth realized that television could work as a system of horizontal lines, breaking an image apart and reassembling it electronically into a complete picture. Despite attempts by competitors to impede his original inventions, Farnsworth presented his television to reporters in California in 1928, launching him into further work that would revolutionize moving pictures.

On September 3, 1928, Philo Farnsworth, a twenty-two-year-old inventor with virtually no formal credentials as a scientist, demonstrated his wholly electronic television system to reporters in California. A few years later, a much-improved television system was demonstrated to larger crowds of onlookers at the Franklin Institute in Philadelphia, proving to the world that this new medium could broadcast news, entertainment, and educational content across the nation.

Farnsworth had come far from his boyhood roots in northern Utah and southern Idaho. He was born in a log cabin lacking indoor plumbing or electrical power. His family moved to a farm outside of Rigby, Idaho, when Farnsworth was a young boy. For the first time, Farnsworth could examine electrical appliances and electric generators in action. He quickly learned to take electrical gadgets apart and put them back together again, often making adaptations to improve their function. He also watched each time the family’s generator was repaired. Soon, still a young boy, he could do those repairs himself. Farnsworth was a voracious reader of science books and magazines, but also devoured what is now termed science fiction, although that term was not in use during his youth. He became a skilled violinist, possibly because of the example of his idol, Albert Einstein, who also played the instrument.[1]

Farnsworth excelled in his classes in school, particularly in mathematics and other sciences, but he did present his teachers and school administrators with a bit of a problem when he repeatedly appealed to take classes intended for much older students. According to school rules, only high school juniors and seniors were supposed to enroll in the advanced classes, but Farnsworth determined to find courses that would challenge him intellectually. The school resisted his entreaties, but one chemistry teacher, Justin Tolman, agreed to tutor Philo and give him extra assignments both before and after school.

One day, Farnsworth presented a visual demonstration of an idea that he had for transmitting visual images across space. He later claimed that he had come up with the basic idea for this process one year earlier, when he was only fourteen. As he was plowing a field on his family farm, Philo had seen a series of straight rows of plowed ground. Farnsworth thought it might be possible to represent visual images by breaking up the entire image into a sequence of lines of various shades of light and dark. The images could be projected electronically and re-assembled as pictures made up of a collection of lines, placed one on top of another. Farnsworth believed that this could be accomplished based on his understanding of Einstein’s path-breaking work on the “photoelectric effect,” which had shown that a particle of light, called a photon, striking a metal plate would displace electrons, transferring some residual energy to a free-roaming negative charge called a photoelectron.[2] Farnsworth had developed a conceptual model of a device that he called an “image dissector” that could break the images apart and transmit them for reassembly at a receiver. He had no means of creating this device with the resources he had at hand, but he did develop a model representation of the device, along with mathematical equations to convey the causal mechanisms. He presented all of this on the blackboard of a classroom in the high school in Rigby. Tolman was stunned by the intellectual prowess of the fifteen-year-old standing in front of him. He thought Farnsworth’s model might actually work, and he copied down some of the drawings from the blackboard onto a piece of paper, which he kept for years.[3] It is fortunate for Farnsworth that Tolman held on to those pieces of paper.
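
The scan-line idea itself is simple enough to sketch in a few lines of code. The toy example below is only an illustration of the concept, with made-up brightness values; it has no connection to Farnsworth’s actual circuitry or to the electronics of the image dissector.

```python
# Toy illustration of raster scanning: read a picture out one horizontal line
# at a time as a serial stream, then rebuild it line by line at the receiver.
image = [
    [0, 0, 9, 9, 0, 0],
    [0, 9, 0, 0, 9, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 9, 0, 0, 9, 0],
]
width = len(image[0])

# "Dissect": scan each row left to right into one long stream of brightness values.
signal = [pixel for row in image for pixel in row]

# "Reassemble": cut the stream back into rows of the known width, stacked in order.
received = [signal[i:i + width] for i in range(0, len(signal), width)]

assert received == image          # the picture survives the round trip
for row in received:
    print("".join("#" if pixel else "." for pixel in row))
```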

Farnsworth was accepted into the United States Naval Academy but very soon was granted an honorable discharge under a provision permitting new midshipmen to leave the university and the service to care for their families after the death of a parent. Farnsworth’s father had died the previous year, and Farnsworth returned to Utah, where his family had relocated after the sale of the farm. Farnsworth enrolled at Brigham Young University but worked at various jobs to support himself, his mother, and his younger siblings. As he had in high school, Farnsworth asked to be allowed to register in advanced classes rather than take only freshman-level course work. He quickly earned a technical certificate but no baccalaureate degree. While in Utah, Farnsworth met, courted and eventually married his wife, “Pem,” who would later help in his lab creating and building instruments. One of her brothers would also provide lab assistance. One of Farnsworth’s jobs during his time in Utah was with the local Community Chest. There he met George Everson and Leslie Gorrell, regional Community Chest administrators who were experienced in fund-raising. Farnsworth explained his idea about electronic television to them, something he had never before done for anyone except his father, now deceased, and his high school teacher, Justin Tolman. Everson and Gorrell were impressed with Farnsworth’s idea, although they barely understood most of the science behind it. They invited Farnsworth to travel with them to California to discuss his research with scientists from the California Institute of Technology (a.k.a. Cal Tech). Farnsworth agreed to do so, and made the trek to Los Angeles to meet first with scientists and then with bankers to solicit funds to support his research. When discussing his proposed electronic television model, Farnsworth became transformed from a shy, socially awkward, somewhat tongue-tied young man into a confident and articulate advocate of his project. He was able to explain the broad outline of his research program in terms that lay people could understand. He convinced Gorrell and Everson to put up some money and a few years later got several thousand more dollars from a California bank.[4]

Philo and Pem Farnsworth re-located first to Los Angeles and then to San Francisco to establish a laboratory. Farnsworth believed that his work would progress more quickly if he were close to a number of other working scientists and technical experts at Cal Tech and other universities. Farnsworth also wanted to be near to those in the motion picture industry who had technical expertise. With a little start-up capital, Farnsworth and a few other backers incorporated their business, although Farnsworth did not create a publicly traded corporation until several years later. At the age of twenty-one, in 1927, Farnsworth filed the first two of his many patent applications. Those two patents were approved by the patent office in 1930. By the end of his life he had three hundred patents, most of which dealt with television or radio components. As of 1938, three-fourths of all patents dealing with television were by Farnsworth.[5]

When Farnsworth began his work in California, he and his wife and brother-in-law had to create many of the basic components for his television system. There was very little they could simply buy off the shelf and assemble into the device Farnsworth had in mind, so much of their time was devoted to soldering wires and creating vacuum tubes, as well as testing materials to determine which performed best. After a while, Farnsworth hired some assistants, many of them graduate students at Cal Tech or Stanford. One of his assistants, Russell Varian, would later make a name for himself as a physicist in his own right and would become one of the founders of Silicon Valley. Farnsworth’s lab also had many visitors, including Hollywood celebrities such as Douglas Fairbanks and Mary Pickford, as well as a number of scientists and engineers. One visitor was Vladimir Zworykin, a Russian émigré with a PhD in electrical engineering who worked for Westinghouse Corporation. Farnsworth showed Zworykin not only his lab but also examples of most of his key innovations, including his image dissector. Zworykin expressed admiration for the devices that he observed, and said that he wished that he had invented the dissector. What Farnsworth did not know was that a few weeks earlier, Zworykin had been hired away from Westinghouse by David Sarnoff, then the managing director and later the president of the Radio Corporation of America (a.k.a. RCA). Sarnoff grilled Zworykin about what he had learned from his trip to Farnsworth’s lab and immediately set him to work on television research. RCA was already a leading manufacturer of radio sets and would soon become the creator of the National Broadcasting Company (a.k.a. NBC). After government antitrust regulators forced RCA to divest itself of some of its broadcasting assets, the divested network became the American Broadcasting Company (a.k.a. ABC), a separate company.[6] RCA and Farnsworth would remain competitors and antagonists for the rest of Farnsworth’s career.

In 1931, Philco, a major radio manufacturer and electronics corporation, entered into a deal with Farnsworth to support his research. The company was not buying out Farnsworth’s company, but was purchasing non-exclusive licenses for Farnsworth’s patents. Farnsworth then moved with his family and some of his research staff to Philadelphia. Ironically, RCA’s television lab was located in Camden, New Jersey, just a few miles away. On many occasions, Farnsworth and RCA could each receive the experimental television broadcasts transmitted from the rival’s lab. Farnsworth and his team were working at a feverish pace to improve their inventions and make them commercially feasible. The Federal Radio Commission, later known as the Federal Communications Commission, classified television as a merely experimental communications technology, rather than one that was commercially viable and subject to license. The commission wished to create standards for picture resolution and frequency bandwidth. Many radio stations objected to television licensing because they believed that television signals would crowd out the bandwidth available for their broadcasts. Farnsworth developed the capacity to transmit television signals over a narrower bandwidth than any competing television transmission.

Personal tragedy struck the Farnsworth family in 1932 when Philo and Pem’s young son, Kenny, still a toddler, died of a throat infection, an ailment that today could easily have been treated with antibiotics. The Farnsworths decided to have the child buried back in Utah, but Philco refused to allow Philo time off to go west to bury his son. Pem made the trip alone, causing a rift between the couple that would take months to heal. Farnsworth was struggling to perfect his inventions, while at the same time RCA devoted an entire team to television research and engaged in a public relations campaign to convince industry leaders and the public that it had the only viable television system. At this time, Farnsworth’s health was declining. He was diagnosed with ulcers and he began to drink heavily, even though Prohibition had not yet been repealed. He finally decided to sever his relationship with Philco and set up his own lab in suburban Philadelphia. He soon also took the dramatic step of filing a patent infringement complaint against RCA in 1934.[7]

Farnsworth and his friend and patent attorney, Donald Lippincott, presented their argument before the patent examination board that Farnsworth was the original inventor of what was now known as electronic television and that Sarnoff and RCA had infringed on patents approved in 1930. Zworykin had some important patents prior to that time but had not patented the essential inventions necessary to create an electronic television system. RCA went on the offensive by claiming that it was absurd to think that a young man in his early twenties with no more than one year of college could create something that well-educated scientists had failed to invent. Lippincott responded with evidence of the Zworykin visit to the Farnsworth lab in San Francisco. After leaving Farnsworth, Zworykin had returned first to the labs at Westinghouse and had duplicates of Farnsworth’s tubes constructed on the spot. Then researchers were sent to Washington to make copies of Farnsworth’s patent applications and exhibits. Lippincott also was able to produce Justin Tolman, Philo’s old and by then retired teacher, who appeared before the examination board to testify that the basic idea of the patent had been developed when Farnsworth was a teenager. When queried, Tolman removed a yellowed piece of notebook paper with a diagram that he had copied off the blackboard in 1922. Although the document was undated, the written document, in addition to Tolman’s oral testimony, may have convinced the board that Farnsworth’s eventual patent was for a novel invention.[8]

The examining board took several months to render a decision. In July of 1935, the examiner of interferences from the U.S. Patent Office mailed a forty-eight-page document to the parties involved. After acknowledging the significance of inventions by Zworykin, the patent office declared that those inventions were not equivalent to what was understood to be electronic television. Farnsworth’s claims had priority. The decision was appealed in 1936, but the result remained unchanged. Beginning in 1939, RCA began paying royalties to Farnsworth.

Farnsworth and his family, friends, and co-workers were ecstatic when the patent infringement case was decided. For the first time, Farnsworth was receiving the credit, and the promise of the money, that he thought he was due. However, the price he had already paid was very high. Farnsworth’s physical and emotional health was declining. He was perpetually nervous and exhausted. As unbelievable as it may sound today, one doctor advised him to take up smoking to calm his nerves. He continued to drink heavily and his weight dropped. His company was re-organized as the Farnsworth Television & Radio Corporation and had its initial public offering of stock in 1939. Whether out of necessity or personal choice, Farnsworth’s role in running his lab and his company diminished.

While vacationing in rural northern Maine in 1938, the Farnsworth family came across a plot of land that reminded Philo of his home and farm outside of Rigby. Farnsworth bought the property, rebuilt an old house, constructed a dam on a small creek, and erected a building that could house a small laboratory. He spent most of the next few years on the property. Even though RCA had lost several patent infringement cases against Farnsworth, the company was still engaging in public demonstrations of television broadcasts in which it claimed that David Sarnoff was the founder of television and that Vladimir Zworykin was its sole inventor. The most significant of these demonstrations was at the World’s Fair at Flushing Meadows, New York. Many reporters accepted the propaganda that was distributed at that event and wrote glowing stories of the supposedly new invention. Only a few years before, Farnsworth had demonstrated his inventions at the Franklin Institute, but the World’s Fair was a much bigger venue with a wider media audience. In 1949, NBC launched a special televised broadcast celebrating the 25th anniversary of the creation of television by RCA, Sarnoff, and Zworykin. No mention was made of Farnsworth at all.[9]

The FCC approved television as a commercial broadcast enterprise, subject to licensure, in 1939. The commission also set standards for broadcast frequency and picture quality. However, the timing to start off a major commercial venture for the sale of a discretionary consumer product was far from ideal. In fact, the timing of Farnsworth’s milestone accomplishments left much to be desired. His first patents were approved shortly after the nation entered the Great Depression. His inventions created an industry that was already subject to stringent government regulation focused on a related but potentially rival technology: radio. Once television was ready for mass marketing, the nation was poised to enter World War II. During the war, production of televisions and many other consumer products ceased and resources were devoted to war-related materiel. Farnsworth’s company and RCA both produced radar and other electronics equipment. Farnsworth’s company also produced wooden ammunition boxes. Farnsworth allowed the military to enjoy free use of his patents for radar tubes.[10]

Farnsworth enjoyed royalties from his patents for the rest of his life. However, his two most important patents were his earliest inventions. Those patents were approved in 1930 for a duration of seventeen years, and in 1947 they became part of the public domain. It was really only in the late 1940s and 1950s that television exploded as a popular consumer good, but by that time Farnsworth could receive no royalties for his initial inventions. Other, less fundamental components that he had patented did provide him with some royalty income. Before the war, Farnsworth’s company had purchased the Capehart Company of Fort Wayne, Indiana, and eventually closed down its Philadelphia-area facility and moved its operations entirely to Indiana. A devastating wildfire swept through the countryside in rural Maine, burning down the buildings on Farnsworth’s property, only days before his property insurance policy was activated. Farnsworth’s company fell upon hard times as well, and eventually was sold to International Telephone and Telegraph. Farnsworth’s health never completely recovered, and he took a disability retirement pension at the age of sixty and returned to Utah. In his last few years, Farnsworth devoted little time to television research, but did develop devices related to nuclear fusion, which he hoped to use to produce abundant electrical power for the whole world to enjoy. Fusion has not yet become a viable source of electric power, but Farnsworth’s device has proved useful in neutron production and the making of medical isotopes.

Farnsworth died in 1971 at the age of sixty-four. At the time of his death, he was not well known outside of scientific circles. His hopes and dreams of television as a cultural and educational beacon to the whole world had not been realized, but he did find value in at least some of what he saw on the screen. About two years before he died, Philo and Pem, along with millions of other people around the world, watched Neil Armstrong set foot on the moon. At that moment, Philo turned to his wife and said that he believed all of his work had been worthwhile.

Farnsworth’s accomplishments demonstrated that a largely solitary inventor, with the help of a few friends, family members, and paid staff, could create significant and useful inventions that made a mark on the world.[11] In the long run, corporate product development by rivals such as RCA surpassed what he could do to make his brainchild marketable. Farnsworth had neither the means nor the inclination to compete with major corporations in all respects. But he did wish to have at least some recognition and some financial reward for his efforts. Unfortunately, circumstances often wiped out what gains he received. Farnsworth also demonstrated that individuals lacking paper credentials can accomplish significant achievements. With relatively little schooling and precious little experience, Farnsworth developed devices that older and better-educated competitors could not. Sadly, Farnsworth’s experiences also illustrate how seemingly chance events can curb personal success. Had he developed his inventions a bit earlier or later, avoiding most of the Depression and the Second World War, he might have gained much greater fame and fortune. None of us, of course, chooses the time into which we are born.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] Schwartz, Evan I. 2002. The Last Lone Inventor: A Tale of Genius, Deceit, and the Birth of Television. New York: HarperCollins.

[2] https://www.scientificamerican.com/article/einstein-s-legacy-the-photoelectric-effect/

[3] Schwartz, op. cit.

[4] Schwartz, op. cit.

[5] Jewkes, J. “Monopoly and Economic Progress.” Economica, New Series, 20, no. 79 (1953): 197-214.

[6] Schwartz, op. cit.

[7] Schwartz, op. cit.

[8] Schwartz, op. cit.

[9] Schwartz, op. cit.

[10] Schwartz, op. cit.

[11] Lemley, Mark A. 2012. “The Myth of the Sole Inventor.” Michigan Law Review 110 (5): 709–60.

 

Guest Essayist: Tony Williams

Americans have long held the belief that they are exceptional and have a providential destiny to be a “city upon a hill,” a beacon of democracy for the world.

Unlike the French revolutionaries, who believed they were bound to destroy monarchy and feudalism everywhere, the American revolutionaries laid down the principle of serving as an example to the world rather than imposing their principles on other countries.

In 1821, Secretary of State John Quincy Adams probably expressed this idea best during a Fourth of July address, when he asserted as a principle of American foreign policy that:

Wherever the standard of freedom and independence has been or shall be unfurled, there will her heart, her benedictions and her prayers be. But she goes not abroad in search of monsters to destroy. She is the well-wisher to the freedom and independence of all. She is the champion and vindicator only of her own.

While the Spanish-American War raised a debate over the nature of American expansionism and foundational principles, the reversal of the course of American diplomatic history found its fullest expression in the progressive presidency of Woodrow Wilson.

Progressives such as President Wilson embraced the idea that a perfect world could be achieved through the spread of democracy and the adoption of a broader international outlook that placed world peace above national interests. As president, Wilson believed that America had a responsibility to spread democracy around the world by destroying monarchy and enlightening people in self-government.

When World War I broke out in August 1914 after the assassination of Austrian Archduke Franz Ferdinand, Wilson declared American neutrality and asked a diverse nation of immigrants to be “impartial in thought as well as in action.”

American neutrality was tested in many different ways. Many first-generation American immigrants from different countries still had strong attachments and feelings toward their nations of origin. Americans also sent arms and loans to the Allies (primarily Great Britain, France, and Russia), which undermined claims of U.S. neutrality. After the sinking of the liner Lusitania by a German U-boat (submarine) in May 1915 killed nearly 1,200 people, including 128 Americans, Secretary of State William Jennings Bryan resigned because he thought the U.S. should protest the British blockade of Germany as vigorously as it protested German actions in the Atlantic.

Throughout 1915 and 1916, German U-boats sank several more American vessels, though Germany apologized and promised no further incidents against neutral merchant vessels. By late 1916, however, more than two years of trench warfare and stalemate on the Western Front had led to millions of deaths, and the belligerents sought ways to break the stalemate.

On February 1, 1917, the German high command decided to launch a policy of unrestricted U-boat warfare in which all shipping was subject to attack. The hope was to knock Great Britain out of the war and attain victory before the United States could enter the war and make a difference.

Simultaneously, and curiously, Germany sent a secret diplomatic message to Mexico offering territory in Texas, New Mexico, and Arizona in exchange for entering the war against the United States. British intelligence intercepted this foolhardy Zimmermann Telegram and shared it with the Wilson administration. Americans were predictably outraged when news of the telegram became public.

On April 2, President Wilson delivered a message to Congress asking for a declaration of war. He focused on what he labeled the barbaric and cruel inhumanity of attacking neutral ships and killing innocents on the high seas. He spoke of American freedom of the seas and neutral rights but primarily painted a stark moral picture of why the United States should go to war with the German Empire which had violated “the most sacred rights of our Nation.”

Wilson took an expansive view of the purposes of American foreign policy that reshaped American exceptionalism. He had a progressive vision of remaking the world by using the war to spread democratic principles and end autocratic regimes. In short, he thought, “The world must be made safe for democracy.”

Wilson argued that the United States had a duty as “one of the champions of the rights of mankind.” It would not merely defeat Germany but free its people. Americans were entering the war “to fight thus for the ultimate peace of the world and for the liberation of its peoples, the German peoples included: for the rights of nations great and small and the privilege of men everywhere to choose their way of life.”

Wilson believed that the United States had larger purposes than merely defending its national interests. It was now responsible for world peace and the freedom of all.  “Neutrality is no longer feasible or desirable where the peace of the world is involved and the freedom of its peoples, and the menace to that peace and freedom lies in the existence of autocratic governments backed by organized force which is controlled wholly by their will, not by the will of their people.”

At the end of the war and during the Versailles conference, Wilson further articulated this vision of a new world with his Fourteen Points and proposal for a League of Nations to prevent future wars and ensure a lasting world peace.

Wilson’s vision failed to come to fruition. The Senate refused to ratify the Treaty of Versailles because it was committed to defending America’s national sovereignty over the power to declare war. The great powers remained more dedicated to their national interests than to world peace. Moreover, the next twenty years saw the spread of totalitarian, communist, and fascist regimes rather than progressive democracies. Finally, World War II shattered his vision of remaking the world.

Wilson’s ideals were not immediately adopted, but in the long run helped to reshape American foreign policy. The twentieth and twenty-first centuries saw increasing Wilsonian appeals by American presidents and policymakers to go to war to spread democracy throughout the world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: The Honorable Michael Warren

Before the outbreak of the American Revolution, the colonies were deeply embedded in the patriarchal traditions and customs shared by the entire world. All cultures and civilizations had placed women in a subordinate position in the political and social realm. However, the Declaration of Independence raised the consciousness of at least some women and men about the inequality embedded in the legal and cultural regimes. Women became serious contributors to the Revolutionary War effort, and some, such as Abigail Adams (wife of the “Colossus of Independence” and later President John Adams), questioned why they should not be entitled to the equality declared in the Declaration.

Unfortunately, the idea of gender equality was scoffed at by most men and women alike. For the most part, women were supreme in their social sphere of family and housekeeping, but they were to have no direct political or legal power.

The political patriarchy did not consider women to possess the temperament, stamina, or talents to be full participants in the American experiment. Justice Joseph Bradley of the United States Supreme Court, in a concurring opinion upholding Illinois’ exclusion of women from the practice of law, epitomized these sentiments:

[T]he civil law, as well as nature herself, has always recognized a wide difference in the respective spheres and destinies of man and woman. Man is, or should be, woman’s protector and defender. The natural timidity and delicacy which belongs to the female sex evidently unfits it for many occupations of civil life.

However, the hunger for freedom and equality could not be contained. With the strengthening of the abolitionist movement came a renewed interest in women’s suffrage. A groundbreaking women’s suffrage conference – the first of its kind in the world – was organized by Elizabeth Cady Stanton and others in Seneca Falls, New York, in 1848. At the heart of the conference was the Declaration of Sentiments and Resolutions, written by Stanton and adopted by the conference on July 20, 1848. The statement parallels the Declaration of Independence, and its power is best understood by simply reading a key passage:

We hold these truths to be self-evident: that all men and women are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness; that to secure these rights governments are instituted, deriving their just powers from the consent of the governed. Whenever any form of government becomes destructive of these ends, it is the right of those who suffer from it to refuse allegiance to it, and to insist upon the institution of a new government, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness. Prudence, indeed, will dictate that governments long established should not be changed for light and transient causes; and accordingly all experience hath shown that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same object, evinces a design to reduce them under absolute despotism, it is their duty to throw off such government, and to provide new guards for their future security. Such has been the patient sufferance of the women under this government, and such is now the necessity which constrains them to demand the equal station to which they are entitled. The history of mankind is a history of repeated injuries and usurpations on the part of man toward woman, having in direct object the establishment of an absolute tyranny over her. To prove this, let facts be submitted to a candid world.

 

Now, in view of this entire disfranchisement of one-half the people of this country, their social and religious degradation–in view of the unjust laws above mentioned, and because women do feel themselves aggrieved, oppressed, and fraudulently deprived of their most sacred rights, we insist that they have immediate admission to all the rights and privileges which belong to them as citizens of the United States.

The Seneca Falls conference and declaration were just the beginning. During the lead-up to and aftermath of the Civil War, the women’s suffrage movement gathered strength and momentum. The Fifteenth Amendment, which gave all men the right to vote regardless of race or prior servitude, was bittersweet. The ratification of the amendment split the suffragist and abolitionist movements – some within both movements wanted women to be included in the amendment, and others did not want to jeopardize its passage by including women in light of the overwhelming bias against women’s suffrage at that time. The suffragists lost, and the Fifteenth Amendment gave all men – but not women – their due.

It took several more generations of determined suffragists to enact constitutional change through the adoption of the Nineteenth Amendment. The territory of Wyoming in 1869 was the first to give women the right to vote. It would take over 50 more years before women’s right to vote became a constitutional right. The change came about only through the great tenacity, persistence, brilliance, and courage of the women and men suffragists who slowly but surely turned the nation toward universal suffrage. Parades, protests, hunger strikes, speaking tours, book tours, and countless other tactics were used to change the tide.

The Nineteenth Amendment was passed by Congress on June 4, 1919, ratified by the States on August 18, 1920, and effective on August 26, 1920.  It simply provides:

The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of sex. Congress shall have power to enforce this article by appropriate legislation.

Short, but revolutionary. Honor the sacrifices of generations before us and defend – and exercise – the right to vote for women and all Americans.

Michael Warren serves as an Oakland County Circuit Court Judge and is the author of America’s Survival Guide, How to Stop America’s Impending Suicide by Reclaiming Our First Principles and History. Judge Warren is a constitutional law professor, and co-creator of Patriot Week www.PatriotWeek.org.


Guest Essayist: Scot Faulkner

On March 3, 1917, 162 words changed the course of World War I and the history of the 20th Century.

Germany officially admitted to sending the “Zimmermann Telegram,” exposing a complex web of international intrigue designed to keep America out of World War I.  It was this, and not the sinking of the Lusitania on May 7, 1915, that led to the U.S. entering the European war.

The Zimmermann Telegram was a message sent by Arthur Zimmermann, a senior member of the German Foreign Office in Berlin, to Ambassador Heinrich von Eckardt in the German Embassy in Mexico City. It outlined Germany’s plans to support Mexico in a war with the United States should America enter the European War:

We intend to begin on the first of February unrestricted submarine warfare. We shall endeavor in spite of this to keep the United States of America neutral. In the event of this not succeeding, we make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona. The settlement in detail is left to you. You will inform the President of the above most secretly as soon as the outbreak of war with the United States of America is certain, and add the suggestion that he should, on his own initiative, invite Japan to immediate adherence and at the same time mediate between Japan and ourselves. Please call the President’s attention to the fact that the ruthless employment of our submarines now offers the prospect of compelling England in a few months to make peace.

The story of how this telegram became the pivotal document of World War I reads like a James Bond movie.

America was neutral during the early years of the “Great War.” It also managed the primary transatlantic telegraph cable. European governments, on both sides of the war, were allowed to use the American cable for diplomatic communications with their embassies in North and South America. On a daily basis, messages flowed, unfettered and unread, between diplomatic outposts and European capitals.

Enter Nigel de Grey and his “Room 40” codebreakers.

British Intelligence monitored the American Atlantic cable, violating its neutrality. On January 16, 1917, the Zimmermann Telegram was intercepted and decoded. De Grey and his team immediately understood the explosive impact of its contents. Such a documented threat might force the U.S. into declaring war on Germany. At the time, the “Great War” was a bloody stalemate, and unrest in Russia was tilting the outcome in favor of Germany.

De Grey’s challenge was how to get the telegram to American officials without exposing British espionage operations or the breaking of the German codes. He and his team created an elaborate ruse. They would invent a “mole” inside the German Embassy in Mexico City. This “mole” would steal the Zimmermann Telegram and send it, still encrypted, to British intelligence.

The encryption would be an older version, which the Germans would attribute to a mistake and assume was such an old code that it had already been broken. American-based British spies confirmed that the older code, and its decryption, was already in the files of the American Telegraph Company.

On February 19, 1917, British Foreign Office officials shared the older encoded version of the Zimmermann Telegram with U.S. Embassy officials.  After American officials decoded it and confirmed its authenticity, it was sent on to the White House staff.

President Woodrow Wilson was enraged and shared it with American newspaper reporters on February 28. At a March 3, 1917 news conference, Zimmermann confirmed the telegram, stating, “I cannot deny it. It is true.” German officials tried to rationalize the telegram as only a contingency plan, legitimately protecting German interests should America enter the war against Germany.

On April 2, President Wilson finally went before a Joint Session of Congress requesting a Declaration of War against Germany. The Senate approved the Declaration on April 4 and the House on April 6.  It took forty-four days for American public opinion to coalesce around declaring war.

Why the delay?

Americans were deeply divided on intervening in the “European War.” Republicans were solidly isolationist. They had enough votes in the Senate to filibuster a war resolution. They were already filibustering the “Armed Ship Bill” which authorized the arming of American merchant ships against German submarines. German Americans, a significant voter segment in America’s rural areas and small towns, were pro-German and anti-French. Irish Americans, a significant Democratic Party constituency in urban areas, were anti-English. There was also Wilson’s concern over Mexican threats along America’s southern border.

Germany was successful in exploiting America’s division and its isolationism. At the same time, Germany masterfully turned Mexico into a credible threat to America.

The Mexican Revolution provided the perfect environment for German mischief. Germany armed various factions and promoted the “Plan of San Diego,” which detailed Mexico’s reclaiming of Texas, New Mexico, Arizona, and California. Even before the outbreak of the “Great War,” Germany orchestrated media stories and planted disinformation among Western intelligence agencies to create the impression that Mexico was planning an invasion of Texas. German actions and rumors helped spark a bloody confrontation between U.S. forces and Mexican troops at Veracruz in April 1914.

After years of preparation, German agents funded and inspired Pancho Villa’s March 9, 1916 raid on Columbus, New Mexico. In retaliation, on March 14, 1916, President Woodrow Wilson ordered General John “Black Jack” Pershing, along with 10,000 soldiers and an aviation squadron, to invade northern Mexico and hunt down Villa. Over the next ten months, U.S. forces fought twelve battles on Mexican soil, including several with Mexican government forces.

The costly and unsuccessful pursuit of Villa diverted America’s attention away from Europe and soured U.S.-Mexican relations.

Germany’s most creative method for keeping America out of World War I was a fifteen-part “Preparedness Serial” called “Patria.” In 1916, the German Foreign Ministry convinced William Randolph Hearst to produce this adventure story about Japan helping Mexico reclaim the American Southwest.

“Patria” was a major production. It starred Irene Castle, one of the early “mega-stars” of Hollywood and Broadway. Castle’s character uses her family fortune to thwart the Japan-Mexico plot against America. The movie played to packed houses across America and ignited paranoia about the growing menace on America’s southern border.  Concerns over Mexico, and opposition to European intervention, convinced Wilson to run for re-election on a “He kept us out of war” platform.  American voters narrowly re-elected Wilson, along with many new isolationist Congressional candidates.

“Patria,” and other German machinations, clouded the political landscape and kept America neutral until April 1917. Foreign interference in the 1916 election, along with the pursuit of Pancho Villa, might have kept America out of WWI entirely had British Intelligence not intercepted the Zimmermann Telegram, which outlined Germany’s next move. It awakened Americans to a real threat.

Words really do matter.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: Amanda Hughes

Prior to World War I, oceanic travel between the Atlantic and Pacific Oceans required dangerous passages around the southern tip of South America. Ideas for connecting the Atlantic Ocean to the Pacific had been considered for centuries. More recent among these were survey expeditions ordered by Ulysses S. Grant, beginning in 1869. Grant, who as an Army captain in 1852 had crossed the Isthmus of Panama on a military journey marked by disease and other tragedies, wrote, “The horrors of the road in the rainy season are beyond description.” One of the surveys Grant commissioned covered Panama, and the route it proposed was nearly the same as the one the Panama Canal eventually followed.

Other efforts followed. Count Ferdinand de Lesseps of France, who had built Egypt’s Suez Canal, led the charge to begin construction on a canal across the Isthmus of Panama in 1880. By 1888, challenges such as yellow fever and malaria, along with constant rain and mudslides, had ended the French effort.

Further attempts to shorten the lengthy and costly trek began with a negotiation between the United States and Great Britain, conducted by Sir Henry Lytton Bulwer, the British minister to Washington, and United States Secretary of State John M. Clayton. A canal connecting the Atlantic and Pacific coasts was viewed as necessary in order to maintain American strength throughout the world. The resulting Clayton-Bulwer Treaty of 1850 provided for joint control of any such canal by the United States and Britain, quelling rivalry over a proposed canal through Nicaragua, northwest of Panama. Because decades passed with no actual movement toward construction, requests arose that the United States take sole charge of the canal’s construction and control. That led to the Hay-Pauncefote Treaty, begun under President William McKinley’s administration and concluded in 1901 by United States Secretary of State John Hay and British Ambassador Lord Julian Pauncefote, in which Britain agreed that the United States should take control of the project.

In June 1902, Congress passed H.R. 3110, the Hepburn Bill, named for Representative William Peters Hepburn, which also became known as the Panama Canal Act of 1902 and the Spooner Act. The bill approved construction of a canal through Nicaragua connecting the waters of the Atlantic and Pacific Oceans. Senator John Coit Spooner offered an amendment to the bill authorizing the president, who at that time was Theodore Roosevelt, to purchase the rights, assets, and construction site of the French company that had abandoned construction in the late 1800s, at a cost not to exceed $40,000,000, so that the United States could instead build a canal through Panama on land owned by Colombia.

By 1902, Congress viewed an isthmian canal as especially imperative in order to improve United States defense. The USS Maine had exploded in Cuba, and the USS Oregon, stationed on the West Coast, had needed two long months to reach the Atlantic side near the Caribbean to aid in the Spanish-American War. In a Senate speech, Senator Spooner mentioned:

“I want…a bill to be passed here under which we will get a canal. There never was greater need for it than now. The Oregon demonstrated [that] to our people.”

Conflicts regarding sovereignty over Panama continued despite the earlier agreements. The United States first sought rights to the canal route through the Hay-Herrán Treaty of 1903, signed by United States Secretary of State John Hay and Colombian Foreign Minister Tomás Herrán, but Colombia’s congress would not accept it. Later in 1903, the United States aided a revolution that helped Panama gain independence from Colombia, establishing the Republic of Panama.

That same year, Secretary Hay and Philippe Bunau-Varilla, representing Panamanian interests, signed the Hay-Bunau-Varilla Treaty of 1903, which the Senate ratified in 1904. While tensions remained, the new agreement recognized Panama’s independence and allowed the United States to build and use a canal without limit. It enlarged the Canal Zone and gave the United States, in effect, sovereign power, including the authority to maintain order in the affected area.

Finally able to go ahead with the project, President Roosevelt selected an Isthmian Canal Commission, consisting of a governor and seven members, to see the canal through. Such a commission had previously been arranged by President William McKinley, who was assassinated in 1901. Of the presidents involved, William Howard Taft, who served as Roosevelt’s Secretary of War and later succeeded him as President, visited the canal most often and participated over the longest period. The commission under Roosevelt was arranged to include a representative each from the Army and the Navy, and the group reported to the Secretary of War. United States Army engineers were involved in the planning, supplies, and construction throughout. President Roosevelt argued that “the War Department ‘has always supervised the construction of the great civil works and…been charged with the supervision of the government of all the island possessions of the United States.’”

Approved, yet fraught with construction challenges, the Panama Canal under United States control began in 1904 with excavation at the bottom of the Culebra Cut, later renamed the Gaillard Cut, where 160 miles of track were laid. The track had to be moved continually to keep up with the shoveling of the surrounding ground and to keep construction materials arriving along the canal route on hundreds of locomotives. Locomotives hauled dirt along the route in what were called dirt trains. Wet slides caused by rain and the slipping of softer earth were among many hindrances to construction. Other slides, some of which occurred during dry seasons, were caused by faults in the earth exposed by cuts in the sidewalls of the canal. The slides expanded into the cuts, but the workers kept at their tasks. Rock drills were used to set dynamite shots, and six million pounds of explosives per year were used to blast the nine-mile cut. The first water to enter the Panama Canal flowed in from the Chagres River; Gatun Lake, which fills the Chagres basin, was formed by the man-made Gatun Dam on the Atlantic end of the canal.

Led by Lt. Col. George Washington Goethals of the United States Army Corps of Engineers, the American engineers improved how the canal would work. They redesigned the canal with two sets of three locks, one set at the entrance on the Pacific side and the other on the Atlantic side. It was the largest canal lock system built up to that time. The lock chambers were 1,000 feet long, 110 feet wide, and up to 81 feet tall, equipped with gates, and allowed two-way traffic. The width of the locks accommodated large ships and their cargo. The locks were designed to raise and lower ships using water controlled by dams and spillways. The locks-and-dam system was an engineering marvel that proved cost effective, saving money and construction time while providing safety. An earth dam with a man-made lake was designed to limit excavation; it was also the largest dam in the world at the time of construction and was intended to maintain the elevation of the water level. The dam allowed millions of gallons of water to be released daily through the canal, with a spillway that offered protection from flooding.

Col. William C. Gorgas, who later served as Surgeon General of the Army during World War I, worked to prevent disease and death during construction of the canal. Col. Gorgas worked to eradicate the major threats of yellow fever and malaria, the latter of which he viewed as a far greater threat than all other diseases combined. He mentioned during a 1906 medical conference that “malaria in the tropics is by far the most important disease to which tropical populations are subject,” because “the amount of incapacity caused by malaria is very much greater than that due to all other diseases combined.”

The total cost of the canal to America, as completed in 1914, is estimated at $375,000,000. The total included $10,000,000 paid to Panama and $40,000,000 paid to the French company. Fortifying the canal cost an additional $12,000,000. Thousands of workers from many countries were employed throughout construction. The jobs were often dangerous, but those overseeing the project made efforts to protect workers from injury and loss of life.

In 1964, Panama protested United States control over the canal. The dispute eventually led to two agreements: the Permanent Neutrality Treaty, which Panama sought in order to make the canal open to all nations, and the Panama Canal Treaty, which provided for joint control of the canal by the United States and Panama. These treaties were signed in September 1977 by President Jimmy Carter and Panamanian leader Brig. Gen. Omar Torrijos Herrera. Complete control over the Panama Canal was transferred to Panama in 1999.

Engineers who put forth unprecedented technological construction ideas overcame seemingly insurmountable odds in cutting fifty miles of canal through mountains and jungle. Completed and opened on August 15, 1914, the Panama Canal offered a waterway through the Isthmus of Panama connecting the oceans, creating a fifty-mile passage between the seas. The American cargo and passenger ship SS Ancon was the first to officially pass through the Panama Canal in 1914. A testament to American innovation and ingenuity, the Panama Canal has been recognized by the American Society of Civil Engineers as one of the seven wonders of the modern world.

Amanda Hughes serves as Outreach Director, and 90 Day Study Director, for Constituting America. She is author of Who Wants to Be Free?, and a story contributor for the anthologies Loving Moments, and Moments with Billy Graham.


Guest Essayist: Gary Porter

Under the United States Constitution as adopted in 1788, senators were elected by state legislatures to protect the states from a federal government inclined to increase its own power. Problems related to the election of senators later resulted in lengthy Senate vacancies. A popular-vote movement arose as a solution, but it failed to consider the importance of the separation of powers designed by the Framers to protect liberty and maintain stability in government. The popular vote was an attempt to hamper the more deliberative body that is the United States Senate and to yield to the more passionate, immediate will of the people. On April 8, 1913, the Seventeenth Amendment to the U.S. Constitution was adopted.

We’ve all heard the phrase “shooting oneself in the foot.” Grammarist.com reminds us: “To shoot oneself in the foot means to sabotage oneself, to make a silly mistake that harms yourself in some fashion. The phrase comes from a phenomenon that became fairly common during the First World War. Soldiers sometimes shot themselves in the foot in order to be sent to the hospital tent rather than being sent into battle.”[i]

Can a state, one of the United States, be guilty of “shooting itself in the foot?” How about multiple states? How about thirty-six states all at once? Not only can they be, I believe they have been guilty, particularly as regards the Seventeenth Amendment. Let me explain.

“Checks and balances, checks and balances,” we hear the refrain often and passionately these days. The phrase “checks and balances” is part of every schoolchild’s introduction to the Constitution. In May 2019, when President Donald Trump asserted executive privilege to prevent the testimony before Congress of certain White House advisors, NBC exclaimed: “Trump’s subpoena obstruction has fractured the Constitution’s system of checks and balances.”[ii] I’m not certain the Framers of the Constitution would agree with NBC, as asserting executive privilege has been part of our constitutional landscape since George Washington,[iii] and if asserting it “fractures” the Constitution, the document would have fallen into pieces long, long ago. As we will see, a significant “fracturing” of the Constitution’s system of checks and balances did occur in this country, but it occurred more than a hundred years before President Donald Trump took office.

The impeachment power is intended to check a rogue President. The Supreme Court checks a Constitution-ignoring Congress, as does the President’s veto. Congress can check (as in limit) the appellate jurisdiction of the Supreme Court, and reduce or expand the number of justices at will. There are many examples of checks and balances in the Constitution. The framers of the document, distrustful as they were of human nature, were careful to give us this critical, power-limiting feature.[iv] But which was more important: the checks or the balances?

Aha, trick question. They are equally important (in my opinion at least). And sometimes a certain feature works as both a check and a balance. The one I have in mind is the original feature whereby Senators were to be appointed by their state legislatures.

We all know the story of how the Senate came into being as the result of Roger Sherman’s great compromise. It retained the “one-state-one-vote” equality the small states had enjoyed with the large states under the Articles of Confederation, while also creating a legislative chamber, the House, where representation was based on a state’s population. Senators’ six-year terms allowed them to take “a more detached view of issues coming before Congress.”[v] But how should these new Senators be selected: by the people, as in the House, or otherwise?

On June 7, 1787, the Constitutional Convention unanimously adopted a proposal by John Dickinson and Roger Sherman that the state legislatures elect this “Second Branch of the National Legislature.” Why not the people? Alexander Hamilton explains:

“The history of ancient and modern republics had taught them that many of the evils which those republics suffered arose from the want of a certain balance, and that mutual control indispensable to a wise administration. They were convinced that popular assemblies are frequently misguided by ignorance, by sudden impulses, and the intrigues of ambitious men; and that some firm barrier against these operations was necessary. They, therefore, instituted your Senate.”[vi] (Emphasis added)

The Senate was to avoid the “impulses” of popularly-elected assemblies and provide a “barrier”  to such impulses when they might occur in the other branch.

James Madison explains in Federalist 62 who particularly benefits from this arrangement:

It is … unnecessary to [expand] on the appointment of senators by the State legislatures. Among the various modes which might have been devised for constituting this branch of the government, that which has been proposed by the convention is probably the most congenial with the public opinion. It is recommended by the double advantage of favoring a select appointment, and of giving to the State governments such an agency in the formation of the federal government as must secure the authority of the former, and may form a convenient link between the two systems.”[vii] (Emphasis added)

Appointment by the state legislatures gave the state governments a direct voice in the workings of the federal government. Madison continues:

“Another advantage accruing from this ingredient in the constitution of the Senate is, the additional impediment it must prove against improper acts of legislation. No law or resolution can now be passed without the concurrence, first, of a majority of the people (in the House), and then, of a majority of the States. It must be acknowledged that this complicated check on legislation may in some instances be injurious as well as beneficial; ….” (Emphasis added)

For those with lingering doubt as to who the Senators were to represent, Robert Livingston explained in the New York Ratifying Convention: “The senate are indeed designed to represent the state governments.”[viii] (Emphasis added)

Perhaps sensing the potential to change the mode of electing Senators in the future, Hamilton cautioned: “In this state (his own state of New York) we have a senate, possessed of the proper qualities of a permanent body: Virginia, Maryland, and a few other states, are in the same situation: The rest are either governed by a single democratic assembly (ex: Pennsylvania), or have a senate constituted entirely upon democratic principles—These have been more or less embroiled in factions, and have generally been the image and echo of the multitude.”[ix] Hamilton refers here to those states where the state senators were popularly elected.

The careful balance of this system worked well until the end of the 19th century and the beginnings of the Progressive Era.

Gradually there arose a “feeling” that some senatorial appointments in the state legislatures were being “bought and sold.”  Between 1857 and 1900, Congress investigated three elections over alleged corruption. In 1900, the election of Montana Senator William A. Clark was voided after the Senate concluded that he had “purchased” eight of his fifteen votes.

Electoral deadlocks became another issue. Occasionally a state couldn’t decide on one or more of their Senators. One of Delaware’s Senate seats went unfilled from 1899 until 1903.

Neither of these problems was serious, but they both provided fodder for those enamored with “democracy.” But bandwagons being what they are, some could not resist. Some states began holding non-binding primaries for their Senate candidates.

Under mounting pressure from Progressives, by 1910 thirty-one state legislatures were asking Congress for a constitutional amendment allowing direct election of senators by the people. That same year, several Republican senators who opposed such reform lost their bids for re-election, which served as a “wake-up call” to others who remained opposed. Twenty-seven of the thirty-one states requesting an amendment also called for a constitutional convention to meet on the issue, only four states shy of the threshold that would require Congress to act.

Finally, on May 13, 1912, Congress responded. A resolution to require direct election of Senators by the citizens of each state was introduced, and it quickly passed. In less than a year it had been ratified by three-quarters of the states, and it was declared part of the Constitution by Secretary of State William Jennings Bryan on May 31, 1913, shortly after President Woodrow Wilson took office.

The Seventeenth Amendment has been cheered by the Left as a victory for populism and democracy, and bemoaned by the Right as a loss for states’ rights or “The Death of Federalism!” Now, millions in corporate funding pour into Senate election campaigns. Senators no longer consult with their state legislatures regarding pending legislation. Why should they? They now represent their state’s citizens directly. The interests of the state governments need not be considered.

For the states to actually ask Congress for this change seems incredibly near-sighted. Much of the encroachment by the Federal Government on policy matters which were traditionally the purview of the states can, I believe, be traced to the Seventeenth Amendment.

We repealed the Eighteenth Amendment. What about repealing the Seventeenth?  Many organizations and individuals have called for it. While in office, Senator Zell Miller of Georgia repeatedly called for its repeal. A brief look at who supports repeal and who opposes it reveals much. In support of repeal are the various Tea Party organizations, National Review magazine, and others on the Right. Opposed, predictably enough, sit the LA Times and other liberal organizations. Salon magazine called the repeal movement “The surprising Republican movement to strip voters of their right to elect senators.” Where this supposed right originates is not explained in the article.

The wisdom of America’s Founders continues to amaze us more than 200 years later. Unfortunately, the carefully balanced framework of government they devised has been slowly chipped away by Supreme Court decisions and structural changes, like the Seventeenth Amendment. Seeing that the states willingly threw away their direct voice in the federal government, my sympathy for them is limited, but repeal of this dreadful amendment is long overdue.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


[i] https://grammarist.com/idiom/shoot-oneself-in-the-foot/

[ii] https://www.nbcnews.com/think/opinion/trump-s-subpoena-obstruction-has-fractured-constitution-s-system-checks-ncna1002101

[iii] https://supreme.findlaw.com/legal-commentary/a-brief-history-of-executive-privilege-from-george-washington-through-dick-cheney.html

[iv] See Federalist 51

[v] Bybee, Jay S. (1997). “Ulysses at the Mast: Democracy, Federalism, and the Sirens’ Song of the Seventeenth Amendment”. Northwestern University Law Review. Northwestern University School of Law. p. 515.

[vi] Alexander Hamilton, speech to the New York Ratifying Convention, 1788

[vii] James Madison, Federalist 62

[viii] Robert Livingston, New York Ratifying Convention, 24 Jun 1788.

[ix] Alexander Hamilton, speech to the New York Ratifying Convention, 1788

Guest Essayist: Tony Williams

During the summer of 1896, twenty-five-year-old Orville Wright was recovering from typhoid fever in his Dayton, Ohio home. His brother, Wilbur Wright, was reading to Orville accounts of a German glider enthusiast named Otto Lilienthal who was killed in a crash flying his glider. The brothers started reading several books about bird flight and even applying the mechanics of it to powered human flight.

Despite the dreams of several visionaries who were studying human flight, the Washington Post proclaimed, “It is a fact that man can’t fly.” The Wright Brothers were amateurs who might just be able to prove the newspaper wrong. They had tinkered with mechanical inventions since they were boys. They had owned a printing press and now ran a bicycle shop, and they were highly skilled mechanics. They did not have the advantages of great wealth or a college education, but they had excellent work habits and perseverance. They were enthusiastically dedicated and disciplined in pursuit of their goal.

On May 30, 1899, Wilbur wrote a letter to the Smithsonian Institution in Washington, D.C. He stated, “I have been interested in the problem of mechanical and human flight ever since I [was] a boy.” He added, “My observations since have only convinced me more firmly that human flight is possible and practicable.” He requested any reading materials that the Smithsonian might be willing to send. He received a packet full of recent pamphlets and articles including those of Samuel Pierpont Langley who was the Secretary of the Smithsonian.

The Wright brothers voraciously read the Smithsonian materials and books about bird flight. Based upon their study, they first built a glider that allowed them to acquire vast knowledge about the mechanics necessary to fly. They knew that this was a necessary step toward powered flight.

The Wright brothers next found a suitable location to test out their glider flights in Kitty Hawk, North Carolina, because that site had the right combination of steady, strong winds and soft sand dunes on which to crash land. They even flew kites and studied the flight of different birds to measure the air flow in the test area.

On October 19, 1900, Wilbur climbed aboard a glider and flew nearly 30 miles per hour for about 400 feet. They made several more test flights and carefully recorded data about them. Armed with this knowledge and experience, they returned to Dayton and made alterations to the glider during the winter. They returned to Kitty Hawk the following summer for additional testing.

The Wright brothers spent the summer of 1901 acquiring mounds of new data, taking test flights, and tinkering constantly on the design. The brothers experienced their share of doubts that they would be successful. During one low moment, Wilbur lamented that “not in a thousand years would man ever fly.” However, they encouraged each other, and Orville stated that “there was some spirit that carried us through.” During that winter, they even built a homemade wind tunnel and continued to re-design the glider based upon their practical discoveries in Kitty Hawk and theoretical experiments in Dayton.

During the fall of 1902, they made their annual pilgrimage to their camp at Kitty Hawk where they worked day and night. During one sleepless night, Orville developed an idea for a movable rear rudder for better control. They installed a rudder, and the modification helped them achieve even greater success with the glider flights. They knew they were finally ready to test a motor and dared to believe that they might fly through the air in what would become an airplane.

The Wright brothers spent hundreds of hours over the next year testing motors, developing propellers, and finding solutions to countless problems. Orville admitted, “Our minds became so obsessed with it that we could do little other work.” In December 1903, they reached Kitty Hawk and unpacked their powered glider for reassembly at their camp.

On December 17, five curious locals braved the freezing cold to watch Orville and Wilbur Wright as they prepared their flying machine. Wilbur set up their camera on its wooden tripod a short distance from the plane. Dressed in a suit and tie, Orville climbed aboard the bottom wing of the bi-plane and strapped himself in while the motor was warming up.

At precisely 10:35 a.m., Orville launched down the short track while Wilbur ran beside, helping to steady the plane. Suddenly, the plane lifted into the air, and Orville became the first person to pilot a machine that flew under its own power. He flew about 120 feet for nearly twelve seconds. It was a humble yet historic flight.

When Orville was later asked if he was scared, he joked, “Scared?  There wasn’t time.” They readied the plane for another flight and a half-hour later, Wilbur joined his brother in history by flying “like a bird” for approximately 175 feet. They flew farther and farther that day, and Wilbur went nearly half a mile in 59 seconds. They sent their father a telegram sharing the news of their success, and as he read it, he turned to their sister and said, “Well, they’ve made a flight.”

One witness of the Wright Brothers’ first flight noted the character that made them successful. “It wasn’t luck that made them fly; it was hard work and common sense; they put their whole heart and soul and all their energy into an idea, and they had faith.” President William Howard Taft also praised the hard work that went into the Wright Brothers’ achievement. “You made this discovery,” he told them at an award ceremony, “by keeping your noses right at the job until you had accomplished what you had determined to.” The Wright brothers’ flight was part of a long train of technological innovations that resulted from American ingenuity and spirit.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Tony Williams

In late January 1898, President William McKinley dispatched the U.S.S. Maine to Cuban waters to protect American citizens and business investments during ongoing tensions between Spain and its colony, Cuba. The deployment set in motion events that sparked a war, dramatically culminating a century of expansion and leading Americans to debate the purposes of American foreign policy at the dawn of the twentieth century.

Events only ninety miles from American shores were increasingly involving the United States in Cuban affairs during the late 1890s.

Cuban revolutionaries had fought a guerrilla war against imperialist Spain starting in 1895, and Spain had responded by brutally suppressing the insurgency. General Valeriano Weyler, nicknamed “the butcher,” forced Cubans into relocation camps to deny the countryside to the rebels. Tens of thousands perished, and Cuba became a cause célèbre for many Americans.

Moreover, William Randolph Hearst, Joseph Pulitzer, and other newspaper moguls publicized the atrocities committed by Spain’s military and encouraged sympathy for the Cuban people. Hearst knew the power he held over public opinion, telling one of his photographers, “You furnish the pictures. I’ll furnish the war.”

During the evening of February 15, all was quiet as the Maine sat at anchor in Havana harbor. At 9:40 p.m., an explosion shattered the silence and tore the ship open, killing 266 sailors and Marines aboard. Giant gouts of flames and smoke flew hundreds of feet into the air. The press immediately blamed Spain and called for war with the sensationalist style of reporting called “yellow journalism.” The shocked public clamored for war with the popular cry, “Remember the Maine!” Hearst had also recently printed an insulting private letter from the Spanish ambassador to the United States, Don Enrique Dupuy de Lôme, that called McKinley “weak.”

President McKinley had sought alternatives to war for years and continued to seek a diplomatic solution despite the war fervor. However, Assistant Secretary of the Navy Theodore Roosevelt repositioned naval warships close to Cuba and ordered Commodore George Dewey to prepare to attack the Spanish fleet in Manila Bay in the Philippines in the event of war. Roosevelt thought McKinley had “no more backbone than a chocolate éclair.” Despite McKinley’s best efforts, Congress declared war on April 25. Roosevelt quickly resigned and received approval to raise a cavalry regiment, nicknamed the “Rough Riders.”

Roosevelt felt it was his patriotic duty to serve his country. “It does not seem to me that it would be honorable for a man who has consistently advocated a warlike policy not to be willing himself to bear the brunt of carrying out that policy.” Moreover, Roosevelt praised “the soldierly virtues” and sought the strenuous life for himself and the country, which he believed had gone soft with the decadence of the Gilded Age. He wanted to test himself in battle and win glory.

Roosevelt went to Texas to train the eclectic First Volunteer Cavalry regiment of tough western cowboys and American Indians alongside patriotic Ivy League athletes. Commander Roosevelt felt comfortable with both groups of men because he had attended Harvard and owned a North Dakota ranch. His regiment trained in the dusty heat of San Antonio, in the shadow of the Alamo, under him and Congressional Medal of Honor winner Colonel Leonard Wood.

The regiment loaded their horses and boarded trains bound for the embarkation point at Tampa, Florida. On June 22, the Rough Riders and thousands of other American troops landed unopposed at Daiquirí on the southern coast of Cuba. Many Rough Riders were without their horses and started marching toward the Spanish army at the capital of Santiago.

The Rough Riders and other U.S. troops were suffering from the tropical heat and forbidding jungle terrain. On June 24, hidden Spanish troops ambushed the Americans near Las Guasimas village. After a brief exchange resulting in some casualties on both sides, the Spanish withdrew to their fortified positions on the hills in front of Santiago. By June 30 the Americans had made it to the base of Kettle Hill, where the Spanish were entrenched and had their guns sighted on the surrounding plains.

American artillery was brought forward to bombard Kettle Hill, and Spanish guns answered. Several Rough Riders and men from other units were cut down by flying shrapnel. Roosevelt himself was wounded slightly in the arm. He and the entire army grew impatient as they awaited orders to attack.

When the order finally came, a mounted Roosevelt led the assault. The Rough Riders were flanked on either side by the African American "Buffalo Soldiers" of the regular Ninth and Tenth Cavalry regiments, whose officers included Lieutenant John "Black Jack" Pershing. The American troops charged up the incline while firing at the enemy; Roosevelt eventually dismounted and led the charge on foot. The Spanish fired into the American ranks and killed or wounded dozens, but soon the defenders were driven off. When Spaniards atop adjacent San Juan Hill fired on the Rough Riders, Roosevelt prepared his men to attack that hill as well.

After much confusion in the initial charge, Roosevelt rallied his troops. Finally, he jumped over a fence and again led the charge with the support of rattling American Gatling guns. The Rough Riders and other regiments successfully drove the Spaniards off the hill and gave a great cheer. They dug into their positions and collapsed, exhausted after a day of strenuous fighting. The Americans took Santiago relatively easily, forcing the Spanish fleet to take to sea where it was destroyed by U.S. warships. The Spanish capitulated on August 12.

Roosevelt became a national hero and used his fame to become governor of New York, vice president, and, after McKinley was assassinated in 1901, president. Although the 1898 Teller Amendment guaranteed Cuban sovereignty and independence, the United States gained significant control over Cuban affairs with the Platt Amendment in 1901 and the Roosevelt Corollary to the Monroe Doctrine in 1904. The United States also built the Panama Canal for trade and national security.

In the Philippines, Commodore Dewey sailed into Manila Bay and wiped out the Spanish fleet there on May 1, 1898. However, the Filipinos, led by Emilio Aguinaldo, rebelled against American control just as they had against the Spanish. The insurrection resulted in the loss of thousands of American and Filipino lives. The United States established control there after suppressing the revolt in 1902.

The Spanish-American War was a turning point in American history because the nation assumed global responsibilities for a growing empire that included Cuba, the Philippines, Puerto Rico, and Guam (as well as Hawaii, annexed separately). The war sparked a sharp debate between imperialists and anti-imperialists in the United States over the course of American foreign policy and global power. The debate continued throughout the twentieth century, which became known as the "American Century" because of the United States' power and influence around the world.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Joerg Knipprath

On February 15, 1898, an American warship, U.S.S. Maine, blew up in the harbor of Havana, Cuba. A naval board of inquiry reported the following month that the explosion had been caused by a submerged mine. That conclusion was confirmed in 1911, after a more exhaustive investigation and careful examination of the wreck. What was unclear, and remains so, is who set the mine. During the preceding decade, tensions with Spain had been rising over that country's handling of a Cuban insurgency against Spanish rule. The newspaper chains of William Randolph Hearst and Joseph Pulitzer had long competed for circulation by sensationalist reporting. The deteriorating political conditions in Cuba and the harshness of Spanish attempts to suppress the rebels provided fodder for the newspapers' "yellow" journalism. Congress had pressured the American government to do something to resolve the crisis, but neither President Grover Cleveland nor President William McKinley had taken the bait thus far.

With the heavy loss of life that accompanied the sinking, “Remember the Maine” became a national obsession. Although Spain had very little to gain from sinking an American warship, whereas Cuban rebels had much to gain in order to bring the United States actively to their cause, the public outcry was directed against Spain. The Spanish government previously had offered to change its military tactics in Cuba and to allow Cubans limited home rule. The offer now was to grant an armistice to the insurgents. The American ambassador in Spain believed that the Spanish government would even be willing to grant independence to Cuba, if there were no political or military attempt to humiliate Spain.

Neither the Spanish government nor McKinley wanted war. However, the latter proved unable to resist the new martial mood and the aroused jingoism in the press and Congress. On April 11, 1898, McKinley sent a message to Congress that did not directly call for war, but declared that he had “exhausted every effort” to resolve the matter and was awaiting Congress’s action. Congress declared war. A year later, McKinley observed, “But for the inflamed state of public opinion, and the fact that Congress could no longer be held in check, a peaceful solution might have been had.” He might have added that, had he been possessed of a stiffer political spine, that peaceful solution might have been had, as well.

The “splendid little war,” in the words of the soon-to-be Secretary of State, John Hay, was exceedingly popular and resulted in an overwhelming and relatively easy American victory. Only 289 were killed in action, although, due to poor hygienic conditions, many more died from disease. Psychologically, it proved cathartic for Americans after the national trauma of the Civil War. One symbolic example of the new unity forged by the war with Spain was that Joe Wheeler and Fitzhugh Lee, former Confederate generals, were generals in the U.S. Army.

Spain signed a preliminary peace treaty in August. The treaty called for the surrender of Cuba, Puerto Rico, and Guam. The status of the Philippines was left for final negotiations. The ultimate treaty was signed in Paris on December 10, 1898. The Philippines, wracked by insurrection, were ceded to the United States for $20 million. The administration believed that it would be militarily advantageous to have a base in the Far East to protect American interests.

The war may have been popular, but the peace was less so. The two-thirds vote needed for Senate approval of the peace treaty was a close-run matter. There was a militant group of “anti-imperialists” in the Senate who considered it a betrayal of American republicanism to engage in the same colonial expansion as the European powers. Americans had long imagined themselves to be unsullied by the corrupt motives and brutal tactics that such colonial ventures represented in their minds. McKinley, who had reluctantly agreed to the treaty, reassured himself and Americans, “No imperial designs lurk in the American mind. They are alien to American sentiment, thought, and purpose.” But, with a nod to Rudyard Kipling’s urging that Americans take on the “white man’s burden,” McKinley cast the decision in republican missionary garb, “If we can benefit those remote peoples, who will object? If in the years of the future they are established in government under law and liberty, who will regret our perils and sacrifices?”

The controversy around an “American Empire” was not new. Early American republicans like Thomas Jefferson, Alexander Hamilton, and John Marshall, among many others, had described the United States in that manner and without sarcasm. The government might be a republic in form, but the United States would be an empire in expanse, wealth, and glory. Why else acquire the vast Louisiana territory in 1803? Why else demand from Mexico that huge sparsely-settled territory west of Texas in 1846? “Westward the Course of Empire Takes Its Way,” painted Emanuel Leutze in 1861. Manifest Destiny became the aspirational slogan.

While most Americans cheered those developments, a portion of the political elite had misgivings. The Whigs opposed the annexation of Texas and the Mexican War. To many Whigs, the latter especially was merely a war of conquest and the imposition of American rule against the inhabitants’ wishes. Behind the republican facade lay a more fundamental political concern. The Whigs’ main political power was in the North, but the new territory likely would be settled by Southerners and increase the power of the Democrats. That movement of settlers would also give slavery a new lease on life, something much reviled by most Whigs, among them a novice Congressman from Illinois, Abraham Lincoln.

Yet, by the 1890s, the expansion across the continent was completed. Would it stop there or move across the water to distant shores? One omen was the national debate over Hawaii that culminated in the annexation of the islands in 1898. Some opponents drew on the earlier Whig arguments and urged that, if the goal of the continental expansion was to secure enough land for two centuries to realize Jefferson’s ideal of a large American agrarian republic, the goal had been achieved. Going off-shore had no such republican fig leaf to cover its blatant colonialism.

Other opponents emphasized the folly of nation-building and trying to graft Western values and American republicanism onto alien cultures that neither wanted them nor were sufficiently politically sophisticated to make them work. They took their cue from John C. Calhoun, who, in 1848, had opposed the fanciful proposal to annex all of Mexico, "We make a great mistake in supposing that all people are capable of self-government. Acting under that impression, many are anxious to force free Governments on all the people of this continent, and over the world, if they had the power…. It is a sad delusion. None but a people advanced to a high state of moral and intellectual excellence are capable in a civilized condition, of forming and maintaining free Governments …."

With peace at hand, the focus shifted to political and legal concerns. The question became whether or not the Constitution applied to these new territories ex proprio vigore: "Does the Constitution follow the flag?" Neither President McKinley nor Congress had a concrete policy. The Constitution, having been formed by thirteen states along the eastern slice of a vast continent, was unclear. The Articles of Confederation had provided for the admission of Canada and other British colonies, such as the West Indies, but that document was moot. The matter was left to the judiciary, and the Supreme Court provided a settlement of sorts in a series of cases over two decades called the Insular Cases.

Cuba was easy. Congress’s declaration of war against Spain had been clear: “The United States hereby disclaims any disposition or intention to exercise sovereignty, jurisdiction, or control over said island except for the pacification thereof, and asserts its determination, when that is accomplished, to leave the government and control of the island to its people.” In Neely v. Henkel (1901), the Court unanimously held that the Constitution did not apply to Cuba. Effectively, Cuba was already a foreign country outside the Constitution. Cuba became formally independent in 1902. In similar manner, the United States promised independence to the Philippine people, a process that took several decades due to various military exigencies. Thus, again, the Constitution did not apply there, at least not tout court, as the Court affirmed beginning in Dorr v. U.S. in 1904. That took care of the largest overseas dominions, and Americans could tentatively congratulate themselves that they were not genuine colonialists.

More muddled was the status of Puerto Rico and Guam. In Puerto Rico, social, political, and economic conditions did not promise an easy path to independence, and no such assurance was given. The territory was not deemed capable of surviving on its own. Rather, the peace treaty expressly provided that Congress would determine the political status of the inhabitants. In 1900, Congress passed the Foraker Act, which set up a civil government patterned on the old British imperial system with which Americans were familiar. The locals would elect an assembly, but the President would appoint a governor and executive council. Guam was in a similar state of dependency.

In Downes v. Bidwell (1901), the Court established the new status of Puerto Rico as neither outside nor entirely inside the United States. Unlike Hawaii or the territories that were part of Manifest Destiny, there was no clear determination that Puerto Rico was on a path to become a state and, thus, was already incorporated into the entity called the United States. It belonged to the United States, but was not part of the United States. The Constitution, on its own, applied only to states and to territory that was expected to become part of the United States. Puerto Rico was more like, but not entirely like, temporarily administered foreign territory. Congress determined the governance of that territory by statute or treaty, and, with the exception of certain “natural rights” reflected in particular provisions of the Bill of Rights, the Constitution applied only to the extent to which Congress specified.

These cases adjusted constitutional doctrine to a new political reality inaugurated by the sinking of the Maine and the war that event set in motion. The United States no longer looked inward to settle its own large territory and to resolve domestic political issues relating to the nature of the union. Rather, the country was looking beyond its shores and was emerging as a world power. That metamorphosis would take a couple of generations and two world wars to complete, the last of which was triggered by another surprise attack on American warships.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/

Guest Essayist: Val Crofts

The Massacre at Wounded Knee, part of the Ghost Dance War, marked the last of the Indian Wars and the end of one of the bloodiest eras in American history: the systematic and deliberate destruction of Native American peoples and their way of life. It was an American Holocaust. In the four centuries after European contact, millions of Native Americans perished from disease, warfare, and forced removal as settlers and, later, citizens of the United States pushed west in the name of manifest destiny and destroyed the Native American homelands that had existed for thousands of years. These events will never occupy a prominent place in our history books, but they must never lose their place in our national memory.

Armed conflict between the U.S. Army and the Native American population was still prevalent in the American West in the 1880s, even after most of the tribes there had been displaced or had seen their populations drastically reduced. The Battle of Little Bighorn in 1876 had been the fiercest engagement of the wars with the Sioux, which dated back to the mid-1850s. Chiefs Sitting Bull and Crazy Horse had gone to war to defend the Black Hills after the U.S. violated the treaty it had signed recognizing the land as the property of the Sioux. After the Battle of Little Bighorn, Sioux forces were gradually worn down, and Crazy Horse surrendered in 1877.

The remaining Sioux were spread out among reservations in the Dakota Territory, where many eventually began practicing a ritual known as the Ghost Dance. The dance was supposed to drive the white men from Native American territory and restore peace and tranquility to the region. Settlers were frightened by the dance, which they said had a "ghostly aura" to it, thus giving it its name.

In response to the settlers' fears, U.S. commanders ordered the arrest of several Sioux leaders, including Chief Kicking Bear and Chief Sitting Bull, who was killed during the attempt to arrest him.

Two weeks after Sitting Bull's death, U.S. troops demanded that all the Sioux immediately turn over their weapons. As they were peacefully doing so, one deaf Sioux warrior did not understand the command to turn over his rifle. As his rifle was being taken from him, a shot went off in the crowd, and the soldiers panicked and opened fire on everyone in the area.

As the smoke cleared, some 300 dead Lakota and 25 dead U.S. soldiers lay on the ground. Many more Lakota were later killed by U.S. troops as they fled the reservation. The massacre ended the Ghost Dance movement and was the last major engagement of the Indian Wars. Twenty U.S. soldiers were later awarded the Congressional Medal of Honor for their actions during this campaign. The National Congress of American Indians has called on the U.S. government to rescind some or all of these medals, but the government has not yet done so.

The American public's reaction to the massacre was positive at first, but as the scale and gravity of the massacre were revealed, the American people began to understand the brutal injustice that had occurred during this encounter. Today, we need to remember the Massacre at Wounded Knee for its human cost and to make sure that events like this never happen again in our nation. We also need to honor and remember all Americans and their histories, even when they are not easy to read about or take responsibility for. For how can we truly be a nation where all are created equal if our histories are not treated equally?

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, and has been there 16 years. He teaches AP U.S. Government and Politics, U.S. History and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses for them. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.

Guest Essayist: Paul Israel

By the time Thomas Edison began his effort to develop an incandescent electric light in September 1878, researchers had been working on the problem for forty years. While many of them developed lamps that worked in the laboratory and for short-term demonstrations, none had been able to devise a lamp that would last in long-term commercial use.  Edison was able to succeed where others had failed because he understood that developing a successful commercial lamp also required him to develop an entire electrical system. With the resources of his laboratory, he and his staff were able to design not only a commercially successful lamp but the system that made it possible.

At the time, Edison's work on telegraphs and telephones largely defined the limits of his knowledge of electrical technology. Unlike some of his contemporaries, he did not even have experience with arc lights and dynamos. Yet he confidently predicted that he could solve the problem, and after only a few days of experiment during the second week of September 1878, he announced that he had "struck a bonanza." He believed he had solved the problem of creating a long-lasting lamp by designing regulators that would prevent the lamp filament (he was then using platinum and related metals) from melting. Edison reached this solution by thinking of electric lights as analogous to telegraph instruments and lamp regulators as a form of electromechanical switch similar to those he used in telegraphy. Edison's regulators used the expansion of metals or air heated by the electric current to divert current from the incandescing element in order to prevent it from being destroyed by overheating. Edison was soon designing lamp regulators in the same fertile manner in which he had previously varied the relays and circuits of his telegraph designs.

Edison was also confident that his insights regarding high-resistance lamps and parallel circuits would be key to designing a commercial electric lighting system. Because the regulator temporarily removed the lamp from the circuit, he realized that he had to place the lamps in parallel circuits so that each individual lamp could be turned on and off without affecting any others in the circuit. This was also desirable for customers used to independently operated gas lamps. Even more important was Edison's grasp of basic electrical laws. He was virtually alone in understanding how to produce an economical distribution system. Other researchers had been stymied by the cost of the copper conductors, which would require a very large cross section to reduce energy lost as excess heat in the system. However, large copper conductors would make the system too expensive. Edison realized that by using high-resistance lamps he could deliver the same power at a higher voltage and a lower current, and thus reduce the size and cost of the conductors.
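To make that economy concrete, here is a rough back-of-the-envelope illustration (a sketch of the underlying arithmetic, not a calculation from Edison's notebooks). For a lamp consuming power $P$ at voltage $V$, the current drawn is $I = P/V$, so the power wasted in mains of resistance $R_{\text{line}}$ is

\[
P_{\text{loss}} = I^{2} R_{\text{line}} = \frac{P^{2}}{V^{2}}\, R_{\text{line}}.
\]

Doubling the operating voltage therefore cuts the loss in a given conductor to one quarter, or allows a conductor of roughly one quarter the cross-section (and copper cost) for the same loss. The high-resistance filament is what makes the higher voltage practical: since $R_{\text{lamp}} = V^{2}/P$, a 100-volt lamp of a given wattage needs one hundred times the resistance of a 10-volt lamp of the same wattage while drawing only one tenth the current.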

Edison initially focused his work on the lamp because he saw it as the critical problem and thought that standard arc-lighting dynamos could easily meet the requirements of an incandescent lighting system.  However, after experimenting with one of these dynamos, Edison began to doubt their suitability for his purposes. With the expectation of funds from the newly formed Edison Electric Light Company, he ordered other machines and began to design his own generators as well. By January 1879, Edison’s understanding of generators had advanced sufficiently “after a few weeks hard study on magneto electric principles,” for him to start his machinists building a new design. Edison’s ability to experiment with generators was greatly enhanced by his new financial resources that enabled him to build a large machine shop that could produce not only fine instruments like telegraphs and lamps, but also generators—”in short all the means to set up & test most deliberately every point of the Electric Light.” With the new facilities, machinery, and assistants made possible by his financial backers, Edison could pursue research on a broad front. In fact, by the end of May, he had developed his standard generator design. It would take much longer to develop a commercial lamp.

Just as the dynamo experiments marked a new effort to build up a base of fundamental knowledge, so too did lamp experiments in early 1879 begin to reflect this new spirit of investigation. Instead of continuing to construct numerous prototypes, Edison began observing the behavior of platinum and other metals under the conditions required for incandescence. By studying his filaments under a microscope, he soon discovered that the metal seemed to absorb gases during heating, suggesting that the problem lay less in the composition of the metal than in the environment in which it was heated. The most obvious way to change the environment was to use a vacuum. By improving the existing vacuum-pump technology with the assistance of an experienced German glassblower, Edison was able to better protect his filaments and by the end of the summer he had done away with his complicated electromechanical regulators. The improved vacuum pumps developed by the laboratory staff helped to produce a major breakthrough in the development of a commercial lamp.

Although Edison no longer required a regulator for his platinum filaments and the lamps lasted longer, they were too expensive for commercial use. Not only was platinum a rare and expensive metal, but platinum filaments did not produce the high resistance he needed for his distribution system. With much better vacuum technology capable of preventing the oxidation of carbon filaments, Edison decided to try experimenting with a material that was not only much cheaper and more abundant, but which also produced high-resistance filaments.

The shift to carbon was a product of Edison’s propensity for working on several projects at once. During the spring and summer of 1879, telephone research at times overshadowed the light as Edison sought to improve his instrument for the British market. A crucial element of Edison’s telephone was the carbon button used in his transmitter. These buttons were produced in a little shed at the laboratory complex where day and night kerosene lamps were burned and the resulting carbon, known as lampblack, was collected and formed into buttons. The reason for turning to this familiar material lies in another analogy. Almost from the beginning of the light research, Edison had determined that the most efficient form for his incandescing element would be a thin wire spiral which would allow him to decrease radiating surface so as to reduce the energy lost through radiation of heat rather than light. The spiral form also increased resistance. It was his recognition that the lampblack could be rolled like a wire and then coiled into a spiral like platinum that led Edison to try carbon as a filament material.

Although Edison's basic carbon-filament lamp patent, filed on November 4, 1879, still retained the spiral form, the laboratory staff had great difficulty in actually winding a carbon spiral. Instead, Edison turned to another form of carbon "wire"–a thread. During the night of October 21–22, the laboratory staff watched as a cotton-thread filament burned for 14 1/2 hours with a resistance of around 100 ohms. The date of this experiment would later be associated with the invention of the electric light, but at the time Edison treated it not as a finished invention but rather as the beginning of a new experimental path. The commercial lamp would require another year of research.

Nonetheless, by New Year's Day 1880, Edison was able to demonstrate his system to the public. Over the course of the next year, he and his staff worked feverishly to bring his system to a state of commercial introduction. In the process, he turned the Menlo Park laboratory into an R&D center, with an emphasis on development. By spring, the staff, which previously consisted of some twelve or fifteen experimenters and machinists, was greatly expanded, at times reaching as many as sixty men. Work on the various components was delegated to new members of the staff and over the course of the year, work progressed on each element of the system, including the generator, meter, underground conductors, safety fuses, lamp fixtures and sockets, and the commercial bamboo-carbon filament. By the time all these ancillary components were developed and manufacturing underway in the spring of 1881, Edison had spent over $200,000 on research and development. Commercial introduction required several thousand additional dollars of research as well as $500,000 to install the first central station system in downtown New York City, which opened on September 4, 1882, four years after Edison first began his research. Though he claimed merely to "have accomplished all I promised," Edison had done even more by starting a new industry and reorganizing the process of invention.

Historian, Dr. Paul Israel, a former Californian, moved East to NJ over 30 years ago to do research for a book on Thomas Edison & the electric light. Today he is the Director and General Editor of the Thomas A. Edison Papers at Rutgers University, the New Jersey State University. 

The Thomas A. Edison Papers Project, a research center at Rutgers School of Arts and Sciences, is one of the most ambitious editing projects ever undertaken by an American university.

Guest Essayist: Paul Israel

In mid-July 1877, while working to develop an improved telephone for the Western Union Telegraph Company, Thomas Edison conceived the idea of recording and reproducing telephone messages. Edison came up with this extraordinary idea because he thought about the telephone as a form of telegraph, even referring to it as a “speaking telegraph.” Thus, on July 18, he tried an experiment with a telephone “diaphragm having an embossing point & held against paraffin paper moving rapidly.” Finding that sound “vibrations are indented nicely” he concluded, “there’s no doubt that I shall be able to store up & reproduce automatically at any future time the human voice perfectly.”

At the time Edison was also working on his repeating telegraph known as the translating embosser. This device recorded an outgoing message as the operator sent it, enabling automatic, rapid retransmission of the same message on other lines. This would be particularly desirable for long press-wire articles that required very skilled operators to transmit and receive. The incoming, high-speed message recorded by the embosser at each receiving station could be transcribed at a slower speed by an operator using a standard sounder. Edison thought his telephone recorder could be used in a similar fashion by allowing the voice message to be “reproduced slow or fast by a copyist & written down.”

Busy with telephone and translating embosser experiments, Edison put this idea aside until August 12, when he drew a device he labeled "Phonograph," which looked very much like an automatic telegraph recorder he had developed a few years earlier. For many years, researchers were fooled by another drawing containing the inscription "Kreuzi Make This Edison August 12/77." However, the inscription was added around the 40th anniversary of the invention to make that drawing, which had been published twice in the mid-1890s without the inscription, appear to be the one from which machinist John Kruesi constructed the first phonograph.

Over the next few months, Edison periodically experimented with "apparatus for recording & reproducing the human voice," using various methods to record on paper tape. The first design for a cylinder recorder, apparently still using paper to record on, appeared in a notebook entry of September 21. However, it was not until November 5 that he first described the design that John Kruesi would begin making at the end of the month. "I propose having a cylinder 10 threads or embossing grooves to the inch cylinder 1 foot long on this tin foil of proper thickness." As Edison noted, he had discovered after "various experiments with wax, chalk, etc." that "tin foil over a groove is the easiest of all= this cylinder will indent about 200 spoken words & reproduce them from same cylinder." On November 10, he drew a rough sketch of this new tinfoil cylinder design. This drawing looks very similar to the more careful sketch he later inscribed "Kreuzi Make This Edison August 12/77." It also resembles the large drawing Edison made on November 29, which may have been used by Kruesi while he was making the first phonograph during the first six days of December.

These drawings of the tinfoil cylinder phonograph looked very much like those for the cylinder version of Edison’s translating embosser while a disc design was based on another version of his translating embosser. The disc translating embosser can be found today at the reconstructed Menlo Park Laboratory at the Henry Ford Museum in Dearborn, Michigan.  This device also became part of the creation myth for the phonograph when it appeared in the 1940 Spencer Tracy movie Edison the Man. In the movie an assistant accidentally starts the embosser with a recording on it, resulting in a high-pitched sound that leads Edison to the idea of recording sound.

Although Edison did not have a working phonograph until December, he had drafted his first press release to announce the new invention on September 7. Writing in the third person he claimed that

“Mr. Edison the Electrician has not only succeeded in producing a perfectly articulating telephone.…far superior and much more ingenious than the telephone off Bell…but has gone into a new and entirely unexplored field of acoustics which is nothing less than an attempt to record automatically the speech of a very rapid speaker upon paper; from which he reproduces the same Speech immediately or year's afterwards or preserving the characteristics of the speakers voice so that persons familiar with it would at once recognize it.”

This text and its drawings of a paper-tape phonograph would become the basis for a letter to the editor by Edison’s associate Edward Johnson that appeared in the November 17 issue of Scientific American. Not surprisingly, when this was republished in the newspapers it was met with skepticism.

On December 7, the day after Kruesi finished making the first tinfoil cylinder phonograph, Edison took the machine to Scientific American’s offices in New York City, accompanied by Johnson and laboratory assistant Charles Batchelor. He amazed the staff when he placed the little machine on the editor’s desk and turned the handle to reproduce a recording he had already made. As described in an article in the December 22 issue, “the machine inquired as to our health, asked how we liked the phonograph, informed us that it was very well, and bid us a cordial good night.”

By the New Year, Edison had an improved phonograph that he exhibited at Western Union headquarters, where it attracted the attention of the New York newspapers. These first public demonstrations produced a trickle of articles that soon turned into a steady stream and by the end of March had become a veritable flood. Edison soon became as famous as his astounding invention. Reports soon began calling Edison “Inventor of the Age,” the “Napoleon of Invention,” and most famously “The Wizard of Menlo Park.”

Edison had grand expectations for his invention as did the investors in the newly formed Edison Speaking Phonograph Company. However, Edison and his associates were unable to turn the tinfoil phonograph from a curiosity suitable for exhibitions and lectures into a consumer product. The phonograph’s real drawback was not the mechanical design on which they focused their efforts but the tinfoil recording surface.  Compared to later wax recording surfaces developed in the 1880s, tinfoil recordings had very poor fidelity and also deteriorated rapidly after a single playback. As a result, for the next decade the phonograph remained little more than a scientific curiosity.

Historian, Dr. Paul Israel, a former Californian, moved East to NJ over 30 years ago to do research for a book on Thomas Edison & the electric light. Today he is the Director and General Editor of the Thomas A. Edison Papers at Rutgers University, the New Jersey State University. 

The Thomas A. Edison Papers Project, a research center at Rutgers School of Arts and Sciences, is one of the most ambitious editing projects ever undertaken by an American university.

Guest Essayist: Dan Morenoff

Usually, breaking down history into chapters requires imposing arbitrary separations. Every once in a while, though, the divisions are clear and real, providing a hard-stop in the action that only makes sense against the backdrop of what it concludes, even if it explains what follows.

For reasons having next-to-nothing to do with the actual candidates,[1] the Presidential election of 1876 provided that kind of page-break in American history. It came on the heels of the Grant Presidency, during which the victor of Vicksburg and Appomattox sought to fulfill the Union's commitments from the war (including those embodied in the post-war Constitutional Amendments) and encountered unprecedented resistance. It saw that resistance taken to a whole new level, which threw the election results into chaos and created a Constitutional crisis. And by the time Congress had extricated itself from that crisis, it had fixed the immediate mess only by creating a much larger, much more costly, much longer-lasting one.

Promises Made

To understand the transition, we need to start with the backdrop.

Jump back to April 1865. General Ulysses S. Grant takes Richmond, the Confederate capital. The Confederate government collapses in retreat, with its Cabinet going its separate ways.[2] Before it does, Confederate President Jefferson Davis issues his final order to General Robert E. Lee and the Army of Northern Virginia: keep fighting! He tells Lee to take his troops into the countryside, fade into a guerrilla force, and fight on, making governance impossible. Lee, of course, refuses and surrenders at Appomattox Courthouse. A celebrating Abraham Lincoln takes a night off for a play, where a Southern sympathizer from Maryland murders him.[3] Before his passing, Abraham Lincoln had freed the slaves and won the war (in part, thanks to the help of the freedmen who had joined the North's army), thus saving the Union. His assassination signified a major theme of the next decade: the refusal of some to accept the war's results left them willing to cast aside the rule of law and employ political violence to resist the establishment of new norms.

Leaving our flashback: Andrew Johnson succeeded Lincoln in office, but in nothing else. Super-majorities in both the House and Senate hated him and his policies and established a series of precedents enhancing Congressional power, even while failing to establish the one they wanted most.[4] Before his exit from the White House, despite Johnson’s opposition: (a) the States had ratified the Thirteenth Amendment (banning slavery); (b) Congress had passed the first Civil Rights Act (in 1866, over his veto); (c) Congress had proposed and the States had ratified the Fourteenth Amendment (“constitutionalizing” the Civil Rights Act of 1866 by: (i) creating federal citizenship for all born in our territory; (ii) barring states from “abridg[ing] the privileges or immunities of citizens of the United States[;]” (iii) altering the representation formula for states in Congress and the electoral college, and (iv) guaranteeing the equal protection of the laws); and (d) in the final days of his term, Congress formally proposed the Fifteenth Amendment (barring states from denying or abridging the right to vote of citizens of the United States “on account of race, color, or previous condition of servitude.”).

Notice how that progression, at each stage, was made necessary by the resistance of some Southerners to what preceded it. Ratification of the Thirteenth Amendment abolishing slavery? Nathan Bedford Forrest, the famed Confederate cavalry general, responded by reversing Lee's April decision: he became the first Grand Wizard of the Ku Klux Klan, founded in December 1865 (effectively, Confederate forces reborn), to wage the clandestine war against the U.S. government and the former slaves it had freed, which Lee had refused to fight. Their efforts (and, after Johnson recognized them as governments, the efforts of the Southern states to recreate slavery under another name) triggered passage of the Civil Rights Act of 1866 and the Fourteenth Amendment. Southern states nonetheless continued to disenfranchise black Americans. So, Congress passed the Fifteenth Amendment to stop them. Each step required the next.

And the next step saw America, at its first chance, turn to its greatest hero, Ulysses S. Grant, to replace Johnson with someone who would put the White House on the side of fulfilling Lincoln's promises. Grant tried to do so. He (convinced Congress to authorize and then) created the Department of Justice; he backed, signed into law, and had DOJ vigorously prosecute violations of the Enforcement Act of 1870 (banning the Klan and, more generally, the domestic terrorism it had pioneered to prevent black people from voting), the Enforcement Act of 1871 (allowing federal oversight of elections, where requested), and the Ku Klux Klan Act (criminalizing the Klan's favorite tactics and making state officials who denied Americans either their civil rights or the equal protection of law personally liable for damages). He readmitted to the Union the last states of the old Confederacy still under military government, while conditioning readmission on their recognition of the equality before the law of all U.S. citizens. Eventually, he signed into law the Civil Rights Act of 1875, guaranteeing all Americans access to all public accommodations.

And over the course of Grant’s Presidency, these policies bore fruit.  Historically black colleges and universities sprang up. America’s newly enfranchised freedmen and their white coalition partners elected governments in ten (10) states of the former Confederacy. These governments ratified new state constitutions and created their states’ first public schools. They saw black Americans serve in office in significant numbers for the first time (including America’s first black Congressmen and Senators and, in P.B.S. Pinchback, its first black Governor).

Gathering Clouds

But that wasn’t the whole story.

While the Grant Administration succeeded in breaking the back of the Klan, the grind of entering a second decade of military tours in the South shifted enough political power in the North to slowly sap support for continued, vigorous, federal action defending the rights of black Southerners. And less centralized terrorist forces functioned with increasing effectiveness. In 1873, in the aftermath of a state election marred by thuggery and fraud, one such "militia" massacred an untold number of victims in Colfax, Louisiana. Federal prosecution of the perpetrators foundered when the Supreme Court gutted the Enforcement Acts as beyond Congress's power to enact.

That led to more such "militias," often openly referring to themselves as "the military arm of the Democratic Party," flowering across the country. And their increasingly brazen attacks on black voters and their white allies allowed those styling themselves "Redeemers" of the region to replace, one by one, the freely elected governments of Reconstruction first in Louisiana, then in Mississippi, then in South Carolina… with governments expressly dedicated to restoring the racial caste system. "Pitchfork" Ben Tillman, a leader of a parallel massacre of black Union army veterans living in Hamburg, South Carolina, used his resulting notoriety to launch a political career spanning decades. Eventually, he reached both the governor's mansion and the U.S. Senate, along the way becoming the father of America's gun-control laws, because it was easier to terrorize and disenfranchise the disarmed.

By 1876, with such "militias" enjoying a clear playbook and, in places, support from their state governments, the stage was set for massive fraud and intimidation aimed at swinging a presidential election. Attacks on black voters and their allies, intended to prevent a substantial percentage of the electorate from voting, unfolded on a regional scale. South Carolina, while pursuing such illegal terror, simultaneously claimed to have counted more ballots than it had registered voters. The electoral vote count it eventually sent to the Senate was certified by no one; the count for Louisiana was certified by a gubernatorial candidate holding no office. Meanwhile, Oregon sent two different sets of electoral votes, cast for different Presidential candidates: one certified by the Secretary of State, the other by the Governor.

The Mess

The Twelfth Amendment requires states’ electors to: (a) meet; (b) cast their votes for the President and Vice President; (c) compile a list of vote-recipients for each (to be signed by the electors and certified); and (d) send the sealed list to the U.S. Senate (to the attention of the President of the Senate). It then requires the President of the Senate to open the sealed lists in the presence of the House and Senate to count the votes.

Normally, the President of the Senate is the Vice President. But Grant's Vice President, Henry Wilson, had died in 1875, and the Twenty-Fifth Amendment's mechanism to fill a Vice Presidential vacancy was still almost a century away. That left, in 1876, the Senate's President Pro Tempore, Thomas W. Ferry (R-MI), to serve as the acting President of the Senate. But given the muddled state of the records sent to the Senate, Senate Democrats did not trust Ferry to play this role. Since the filibuster was well established by the 1870s, the Senate could do nothing without their acquiescence. Moreover, they could point to Johnson-Administration precedents enhancing Congressional authority to demand that resolution of disputed electoral votes be reached jointly by both chambers of Congress, which they preferred, because Democrats had taken a majority of the lower House in 1874.

No one agreed which votes to count. No one agreed who could count them. And the difference between sets was enough to deliver the majority of the electoral college to either major party’s nominees for the Presidency and Vice Presidency. And all of this came at the conclusion of an election already marred by large-scale, partisan violence.

Swapping Messes

It took Congress months to find its way out of this morass. Eventually, it did so through an unwritten deal. On March 2, 1877, Congress declared Ohio Republican Rutherford B. Hayes President of the United States over Democratic candidate Samuel J. Tilden of New York. Hayes, in turn, embraced so-called "Home Rule," removing all federal troops from the old Confederacy and halting the federal government's efforts to either enforce the Civil Rights Acts or make real the promises of the post-war Constitutional Amendments.

With the commitments of Reconstruction abandoned, the “Redeemers” promptly completed their “Redemption” of the South from freely, lawfully elected governments. They rewrote state constitutions, broadly disenfranchised those promised the vote by the Fifteenth Amendment, and established the whole Jim-Crow structure that ignored (really, made a mockery of) the Fourteenth Amendment’s guaranties.

Congress solved the short-term problem by creating a larger, structural one that would linger for a century.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.

[1] Rutherford B. Hayes, the Republican nominee, was the Governor of Ohio at the time who had served in the Union army as a Brigadier General; Samuel J. Tilden, the Democratic nominee, was the Governor of New York at the time, and earlier had been among the most prominent anti-slavery, pro-union Democrats to remain in the party in 1860. Indeed, in 1848, Tilden was a founder of Martin Van Buren's Free Soil Party, which attacked the Whigs (which Hayes then supported) as too supportive of slave-power.

[2] CSA President Jefferson Davis broke West, with the intention of reaching Texas and Arkansas (both unoccupied by the Union and continuing to claim authority from that rump-Confederacy). His French-speaking Louisianan Secretary of State Judah P. Benjamin broke South, pretending to be a lost immigrant peddler as he wound his way to Florida, then took a raft to Cuba as a refugee. He won asylum there with the British embassy and eventually rode to London with the protection of the British Navy. Alone among leading Confederates, Benjamin had a successful Second Act, in which he became a leading British barrister and the author of a celebrated treatise on the law of sales.

[3] To this day, it is unclear to what degree John Wilkes Booth was a Confederate operative.  He certainly spied for the CSA.  No correspondence survives to answer whether his assassination of Lincoln was a pre-planned CSA operation or freelancing after Richmond’s fall.

[4] He survived impeachment by a single vote.

Guest Essayist: Tony Williams

In the mid-nineteenth century, the providential idea of Manifest Destiny drove Americans to move west. They traveled along various overland trails and railroads to Oregon, California, Colorado, and the Dakota Territory in search of land and gold. Native Americans who lived and hunted in the West were alarmed at white encroachment on their lands, which were usually protected by treaties. The conflict led to several violent clashes throughout the West.

Tensions with Native Americans simmered in the early 1870s. The Transcontinental Railroad contributed to western development and the integration of national markets, but it also intruded on Native American lands. Two military expeditions were dispatched to Montana to protect the railroad and its workers in 1872. In 1874, gold was reportedly discovered in the Black Hills in modern-day South Dakota within the Great Sioux Reservation. By the end of 1875, 15,000 prospectors and miners were in the Black Hills searching for gold.

Some Indians resisted. Sitting Bull was an elite Lakota Sioux war leader who had visions and dreams. Moreover, Oglala Lakota supreme war chief Crazy Horse had a reputation as a fierce warrior. They resisted the reservations and American encroachment on their lands and were willing to unite and fight against it.

In early November 1875, President Ulysses S. Grant met with General Philip Sheridan and others on Indian policy at the White House. They issued an ultimatum for all Sioux outside the reservation to go there by January 31, 1876 or be considered hostile. The Sioux ignored it. Sitting Bull said, “I will not go to the reservation. I have no land to sell. There is plenty of game for us. We have enough ammunition. We don’t want any white men here.”

That spring, the Cheyenne and the Lakota Sioux bands in the area, including the Oglala, decided to unite through the summer and fight the Americans. Sitting Bull called for warriors to assemble at his village for war, and nearly two thousand gathered, many of them armed with repeating rifles.

That spring, Sitting Bull had visions of victory over the white man. In mid-May, he fasted and purified himself in a ritual called the Sun Dance. After 50 small strips of flesh had been cut from each arm, he had a vision of whites coming into their camp and suffering a great defeat.

After discovering the approximate location of Sitting Bull's village, General Alfred Terry met with Lieutenant Colonel George Custer and Colonel John Gibbon on the Yellowstone River to formulate a plan. They agreed upon a classic hammer and anvil attack in which Custer would proceed down the Rosebud River and attack the village, while Terry and Gibbon went down the Yellowstone and Little Bighorn Rivers to block any escape. Custer had 40 Arikara scouts with him to find the enemy.

On June 23 and 24, the Arikara scouts found evidence that Sitting Bull’s village had recently occupied the area. The exhausted Seventh Cavalry stopped for the night at 2 a.m. on June 25. The scouts meanwhile sighted a massive herd of ponies and sent a message to wake Custer. When a frightened scout, Bloody Knife, warned they would “find enough Sioux to keep us fighting two or three days,” Custer arrogantly replied, “I guess we’ll get through them in one day.” His greater fear was that the village would escape his clutches. He ordered his men to form up for battle.

Around noon, Custer led the Seventh into the valley and divided his men as he had during the Battle of Washita. He sent Captain Frederick Benteen to the left with 120 men to block any escape, while Custer and Major Marcus Reno advanced on the right along the Sun Dance Creek.

Custer and Reno spotted 40 to 50 warriors fleeing toward the main village. Custer further divided his army, sending Reno in pursuit while he himself continued along the right flank. Prodding his men with some bluster, Custer told them, "Boys, hold your horses. There are plenty of them down there for all of us."

Reno’s men crossed the Little Bighorn and fired at noncombatants. Hundreds of Indian warriors started arriving to face Reno. Reno downed a great deal of whiskey and ordered his soldiers to dismount and form a skirmish line. They were outnumbered and were quickly overwhelmed by the Native Americans’ onslaught and running low on ammunition.

Reno’s men retreated to some woods along the bank of the river to find cover but were soon flushed out, though fifteen men remained there, hidden and frightened. The warriors routed Reno’s troops and killed several during their retreat back across the river. Reno finally organized eighty men on a hill and fought off several charges.

Benteen soon reinforced Reno, as did the fifteen men from the thicket, who also made it to what is now called Reno Hill; the pack train with ammunition and supplies arrived as well. No one knew where Custer was. The men built entrenchments made of ammunition and hardtack boxes, saddles, and even dead horses. For more than three hours in the 100-degree heat, they fought off a continuous stream of attacks by hundreds of warriors and were saved only by the arrival of darkness. Reno's exhausted and thirsty men continued to dig in and fortify their barricades.

The attacks resumed at dawn the next day and lasted all morning. Benteen and Reno organized charges that momentarily pushed back the Sioux and Cheyenne, and a few men sneaked down to the Little Bighorn for water. The fighting lasted until mid-afternoon, when the warriors broke off to follow the large dust cloud of the departing village. The soldiers on the hill feared a trick and kept watch all night for the enemy's return.

General Terry’s army was camped to the north when his Crow scouts reported to him at sunrise on June 26 that they had found the battlefield where two hundred men of the Seventh Regiment had been overwhelmed and killed making a last stand on a hill. The next day, Terry arrived at Last Stand Hill and morosely confirmed that Custer and his men were dead. The bad news sobered the celebration of the United States’ centennial when it arrived in the East.

Despite the destruction of Custer and his men at Little Bighorn, the Indian Wars of the late nineteenth century were devastating for Native American tribes and their cultures. Their populations suffered heavy losses, and they lost their tribal grounds for hunting and agriculture. By the end of the nineteenth century, the U.S. government had restricted most Indians to reservations as Americans settled the West. Many Americans saw the reservation system as a more humane alternative to war, but it wrought continued damage to Native American cultures.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: James C. Clinger

An attorney representing Alexander Graham Bell and his business partner, Gardiner Hubbard, filed a patent application for an invention entitled an "Improvement in Telegraphy" on February 14, 1876. That same day, Elisha Gray, a prominent inventor from Highland Park, Illinois, filed a patent caveat for a similar invention with the same office. On March 7, Bell's patent was approved by the patent office, and the battle over the rights to the invention that we now know as the telephone began. The eventual outcome would shape the development of a major industry and the opportunities for communication and social interaction for the entire country.

The invention came from an unlikely source. Alexander Graham Bell was a Scottish-born teacher of elocution and tutor to the deaf whose family had migrated to Ontario, Canada, after the death of two of Bell's siblings. Alexander Melville Bell, Alexander Graham Bell's father, believed that their new home in Ontario offered them a better, more healthful climate. The elder Bell was a student of phonetics who had developed a system of "Visible Speech" to allow deaf people the chance to speak intelligibly. Melville Bell lectured periodically at the Lowell Institute in Boston, Massachusetts, and his son Alexander moved to Boston permanently to assume a teaching position at Boston University.[1]

Though trained in acoustics and the science behind the sounds of the human voice, Bell did not have a strong understanding of electrical currents or electromagnetism. But early on he realized that the magnetic field of an electrical current was capable of vibrating objects, such as a tuning fork, which could create audible sounds. As he was learning how electrical currents could be used for sound production, other researchers such as Joseph Stearns and Thomas Edison were developing a system of telegraph transmission in which multiple signals could be sent over the same wire at the same time. These systems relied upon sending series of dots and dashes at different frequencies. Bell joined that line of research in hopes of finding a better “multiplex” telegraph. To develop what he called a “harmonic telegraph,” Bell needed more funds for his lab. Much of his funding came from a notable Boston attorney, Gardiner Hubbard, who hired Bell as a teacher for his daughter, Mabel, who had become deaf after a bout with scarlet fever. Hubbard, who had a dislike for Western Union’s dominance in long-distance telegraph service, encouraged and subsidized Bell’s research on telegraphy. Hubbard and another financial backer, Thomas Sanders, formed a partnership with Bell, with an agreement that all three hold joint ownership of the patent rights for Bell’s inventions. Bell made significant progress in his research, and he found success in his private life as well. Mabel Hubbard, who was his student, became his betrothed. Despite the initial objections of her father, Mabel married Bell shortly after her eighteenth birthday.[2]

Other inventors were hard at work on similar lines of research. Daniel Drawbaugh, Antonio Meucci, Johann Philipp Reis, and especially Elisha Gray all were developing alternative versions of what would soon be known as the telephone while Bell was hard at work on his project. Most of these models involved a variable resistance method of modifying the electrical current by dipping wires into a container of liquid, often mercury or sulfuric acid, to alter the current flowing to a set of reeds or a diaphragm that would emit various sounds. Most of these researchers knew more about electrical currents and devices than Bell did. But Bell had a solid understanding of the human voice. Even though his research began as an effort to improve telegraphy, Bell realized that the devices he created could be designed to replicate speech. His patent application in February of 1876 was for a telephone transmitter that employed a magnetized reed attached to a membrane diaphragm and activated by an undulating current. The device described in the patent application could transmit sounds but not actual speech. Weeks later, however, Bell’s instrument was improved sufficiently to allow him to convey a brief, audible message to his assistant, Thomas A. Watson, who was in another part of his laboratory. In the summer of 1876, Bell demonstrated the transmission of audible speech to an amazed crowd at the Centennial Exhibition in Philadelphia. Elisha Gray attempted to demonstrate his version of the telephone at the same exhibition, but was unable to convey the sound of human voices. The following year, Bell filed and received a patent for his telephone receiver, assuring his claim to devices that would both transmit and receive voice communications.[3]

In 1877, Bell and his partners formed the Bell Telephone Company, the corporation that would later be known as American Telephone and Telegraph (AT&T). The corporation and Bell personally were soon involved in a number of lawsuits alleging patent infringement and, in one case, patent cancellation. There were many litigants over the years, but the primary early adversary was Western Union, which had purchased the rights to Elisha Gray’s telephone patent. The United States federal government also was involved in a suit for patent cancellation, alleging that Bell gained his patents fraudulently by stealing the inventions of others. The lawsuit with Western Union was settled in 1879 when the corporation forfeited claims on the invention of the telephone in return for twenty percent of Bell’s company’s earnings for the duration of the patent.[4] The other lawsuits meandered through multiple courts over several years until several were consolidated before the United States Supreme Court. Ultimately, a divided court ruled in favor of Bell’s position in each case. The various opinions and appendices were so voluminous that when compiled they filled an entire volume of the United States Reports, the official publication of Supreme Court opinions.[5]

The court decisions ultimately granted vast scope to the Bell patent and assigned an enormously profitable asset to Bell’s corporation. The firm that became AT&T grew into one of the largest corporations in the world.[6] Years earlier, the telegraph had transformed communication, with huge impacts on the operation of industry and government. But although the telegraph had an enormous impact upon the lives of ordinary Americans, it was not widely used by private individuals for their personal communications. Almost all messages were sent by businesses and government agencies. Initially, telephone usage followed the same pattern. But with the dawn of the twentieth century, telephones became widely used by private individuals. Phones appeared in homes, not just in offices. Unlike telegrams, which were charged by the word, local telephone service was priced at a flat monthly rate. As a result, telephone service came to be enjoyed as a means of communication for social purposes, not just commercial activities. Within a hundred years of Bell’s initial patent, telephones could be found in almost every American home.[7]

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] Billington, David P. “Bell and the Telephone.” In Power, Speed, and Form: Engineers and the Making of the Twentieth Century, 35-56. Princeton; Oxford: Princeton University Press, 2006.

[2] Billington, op. cit.

[3] Stone, Alan. “Protection of the Newborn.” In Public Service Liberalism: Telecommunications and Transitions in Public Policy, 51-83. Princeton, NJ: Princeton University Press, 1991.

[4] MacDougall, Robert. “Unnatural Monopoly.” In The People’s Network: The Political Economy of the Telephone in the Gilded Age, 92-131. University of Pennsylvania Press, 2014.

[5] The Telephone Cases, 126 U.S. 1 (1888).

[6] Beauchamp, Christopher. “Who Invented the Telephone? Lawyers, Patents, and the Judgments of History.” Technology and Culture 51, no. 4 (2010): 854-78.

[7] MacDougall, Robert. “Visions of Telephony.” In The People’s Network: The Political Economy of the Telephone in the Gilded Age, 61-91. University of Pennsylvania Press, 2014.

 

Guest Essayist: Scot Faulkner

Our National Parks are the most visible manifestation of why America is exceptional.

America’s Parks are the physical touchstones that affirm our national identity. The historical Parks preserve our collective memory of the events that shaped our nation, and the natural Parks preserve the environment that shaped us.

National Parks are open for all to enjoy, learn, and contemplate. This concept of preserving a physical space for the sole purpose of public access is a uniquely American invention. It further affirms why America remains an inspiration to the world.

On March 1, 1872, President Ulysses S. Grant signed the law creating Yellowstone as the world’s first National Park.

AN ACT to set apart a certain tract of land lying near the headwaters of the Yellowstone River as a public park. Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, That the tract of land in the Territories of Montana and Wyoming … is hereby reserved and withdrawn from settlement, occupancy, or sale under the laws of the United States, and dedicated and set apart as a public park or pleasuring ground for the benefit and enjoyment of the people; and all persons who shall locate, or settle upon, or occupy the same or any part thereof, except as hereinafter provided, shall be considered trespassers and removed therefrom …

The Yellowstone legislation launched a system that now encompasses 419 National Parks with over 84 million acres. Inspired by Grant’s act, Australia, Canada, and New Zealand established their own National Parks during the following years.

Yellowstone was not predestined to be the first National Park.

In 1806, John Colter, a member of Lewis and Clark’s Corps of Discovery, joined fur trappers to explore several Missouri River tributaries. Colter entered the Yellowstone area in 1807 and later reported on a dramatic landscape of “fire and brimstone.”  His description was rejected as too fanciful and labeled “Colter’s Hell.”

Over the years, other trappers and “mountain men” shared stories of fantastic landscapes of water gushing out of the ground and rainbow-colored hot springs. They were all dismissed as fantasy.

After America’s Civil War, formal expeditions were launched to explore the upper Yellowstone River system. Settlers and miners were interested in the economic potential of the region.

In 1869, Charles Cook, David Folsom, and William Peterson led a privately financed survey of the region. Their journals and personal accounts provided the first believable descriptions of Yellowstone’s natural wonders.

Reports from the Cook-Folsom Expedition encouraged the first official government survey in 1870. Henry Washburn, the Surveyor General of the Montana Territory, led a large team known as the Washburn-Langford-Doane Expedition to the Yellowstone area. Nathaniel P. Langford, who co-led the team, was a friend of Jay Cooke, a major investor in the Northern Pacific Railway. Washburn was escorted by a U.S. Cavalry unit commanded by Lt. Gustavus Doane. Their team, including Folsom, followed a course similar to that of the 1869 Cook-Folsom excursion, extensively documenting their observations of the Yellowstone area. They explored numerous lakes and mountains and observed wildlife. The Expedition chronicled the Upper and Lower Geyser Basins. They named one geyser Old Faithful, as it erupted once every 74 minutes.

Upon their return, Cook combined Washburn’s and Folsom’s journals into a single version. He submitted it to the New York Tribune and Scribner’s for publication. Both rejected the manuscript as “unreliable and improbable” even with the military’s corroboration. Fortunately, another member of Washburn’s Expedition, Cornelius Hedges, submitted several articles about Yellowstone to the Helena Herald newspaper from 1870 to 1871. Hedges would become one of the original advocates for setting aside the Yellowstone area as a National Park.

Langford, who would become Yellowstone’s first park superintendent, reported to Cooke about his observations. While Cooke was primarily interested in how Yellowstone’s wonders and resources could attract railroad business, he supported Langford’s vision of establishing a National Park. Cooke financed Langford’s Yellowstone lectures in Virginia City, Helena, New York, Philadelphia, and Washington, D.C.

On January 19, 1871, geologist Ferdinand Vandeveer Hayden attended Langford’s speech in Washington, D.C. He was motivated to conduct his next geological survey in the Yellowstone region.

In 1871, Hayden organized the first federally funded survey of the Yellowstone region. His team included photographer William Henry Jackson and landscape artist Thomas Moran. Hayden’s reports on the geysers, sulfur springs, waterfalls, canyons, lakes and streams of Yellowstone verified the earlier accounts. Jackson’s and Moran’s images provided the first visual proof of Yellowstone’s unique natural features.

The various expeditions and reports built the case for preservation instead of exploitation.

In October 1865, acting Montana Territorial Governor Thomas Francis Meagher became the first public official to recommend that the Yellowstone region be protected. In an 1871 letter from Jay Cooke to Hayden, Cooke wrote that his friend, Congressman William D. Kelley, was suggesting “Congress pass a bill reserving the Great Geyser Basin as a public park forever.”

Hayden became another leading advocate for establishing Yellowstone as a National Park. He was concerned the area could face the same fate as the overly developed and commercialized Niagara Falls area. Yellowstone, he argued, should “be as free as the air or water.” In his report to the Committee on Public Lands, Hayden declared that if Yellowstone were not preserved, “the vandals who are now waiting to enter into this wonder-land, will in a single season despoil, beyond recovery, these remarkable curiosities, which have required all the cunning skill of nature thousands of years to prepare.”

Langford and a growing number of park advocates promoted the Yellowstone bill in late 1871 and early 1872. They raised the alarm that “there were those who would come and make merchandise of these beautiful specimens.”

Their proposed legislation drew upon the precedent of the Yosemite Act of 1864, which barred settlement and entrusted preservation of the Yosemite Valley to the state of California.

Park advocates faced spirited opposition from mining and development interests who asserted that permanently banning settlement of a public domain the size of Yellowstone would depart from the established policy of transferring public lands to private ownership (in the 1980s, $1 billion of exploitable deposits of gold and silver were discovered within miles of the Park).  Developers feared that the regional economy would be unable to thrive if there remained strict federal prohibitions against resource development or settlement within park boundaries. Some tried to reduce the proposed size of the park so that mining, hunting, and logging activities could be developed.

Fortunately, Jackson’s photographs and Moran’s paintings captured the imagination of Congress. These compelling images, and the credibility of the Hayden report, persuaded the United States Congress to withdraw the Yellowstone region from public auction. The legislation establishing the park quickly passed both chambers and was sent to President Grant for his signature.

Grant, an early advocate of preserving America’s unique natural features, enthusiastically signed the bill into law.

On September 8, 1978, Yellowstone and Mesa Verde were the first U.S. National Parks designated as UNESCO World Heritage Sites. Yellowstone was deemed a “resource of universal value to the world community.”

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: Brian Pawlowski

The stories of our history connect generations across time in remarkable ways. The same giddy fascination Presidents Abraham Lincoln and Ulysses S. Grant held for the potential of the railroad in the nineteenth century is present in countless children today. They tear through books like Locomotive by Brian Floca until the pages nearly fall out from constant re-reading. It is a wonderful book that conveys both the magnitude and the majesty of the transcontinental railroad in an accessible way. A more thorough treatment, Nothing Like It in the World: The Men Who Built the Transcontinental Railroad 1863-1869 by historian Stephen Ambrose, perhaps summarized it best by noting that, “Next to winning the Civil War and abolishing slavery, building the first transcontinental railroad, from Omaha, Nebraska, to Sacramento, California, was the greatest achievement of the American people in the 19th century.”[1] Making this achievement all the more remarkable is the fact that it was hatched as the Civil War was raging: a project to connect a continent that was at war with itself.

In 1862, only a few months after the Union victory at Shiloh and a little over two months before the Battle of Antietam, Abraham Lincoln signed the Pacific Railway Act into law. It called for the construction of a railway from Omaha, Nebraska, to Sacramento, California. It appropriated government lands and bonds to corporations that would do the work, the first time government dollars were granted to any entity other than states. The companies, the Union Pacific starting in Omaha and the Central Pacific starting in Sacramento, were in direct competition to lay as much track as possible and complete the nearly 2,000 miles that would be necessary for the railroad.

Construction technically began in 1863, but the war demanded men and material in such quantities that no real progress was made until 1865. After the war, the railroads became engines of economic development that attracted Union veterans and Irish immigrants in droves to the Union Pacific’s efforts. The Central Pacific sought a similar workforce, but the population of Irish immigrants in California at the time was not a sustainable source of labor. Instead, thousands of Chinese immigrants sought employment with the railroad. Initially there was resistance to Chinese workers. Notions of racial inferiority pervaded much of California at that time, and many felt the Chinese were listless and lazy. These prejudices dissipated quickly, however, as the Chinese worked diligently, with skill and ingenuity that allowed them to push through the Sierra Nevada mountains. Before it was done, nearly 20,000 Chinese laborers took part in building the railroad, employing new techniques and utilizing new materials like nitroglycerin to carve a path for the tracks in areas where no one thought it could be done.

In the summer of 1867, the Central Pacific finally made it through the mountains. While the entire effort represented a new level of engineering brilliance and innovation for its time, the Central Pacific’s thrust through the mountains surpassed expectations. Charting a course for rail through granite, an impediment that until then had been crossed only on horseback or on foot, ushered in a new era of more rapid continental movement. Before the railroad era, it took four or five months to get from the east coast to the west. Once the railroad was complete, however, the trip could take as little as three and a half days.[2] Absent the ability to go through the mountains, this would not have been possible.

Throughout 1867 and 1868, both rail companies worked feverishly to lay more track than their counterpart. Government subsidies for the work increased, and more track laid meant more money earned. The subsidies were paid by the mile and varied with the terrain, reflecting the difficulty the Central Pacific faced in conquering the mountains. Without mountainous terrain to contend with, the Union Pacific made incredible progress and reached Wyoming by 1867. But the Union Pacific had challenges of a different sort. Rather than conquering nature, it had to conquer people.

The Native American plains tribes, the Sioux, Cheyenne, and Arapaho, knew the railroad would be a permanent feature on land that was prime hunting ground for the buffalo. They saw the construction as an existential threat. As the railroad continued into the plains, new settlements sprang up in its shadow, on territory the tribes claimed as their own.[3] There was bound to be a fight. The railway companies called on the government to send the army to pacify the territory, warning that construction could not continue without this aid. The government complied, and as work resumed, army soldiers protected the crews along the construction route.

As the summer of 1869 approached, a standoff occurred between the companies over where they would join the railroad together. Ulysses S. Grant, by then the President, threatened to cut off federal funding until a meeting place was agreed to, and ultimately, with the help of a congressional committee and the cold, hard reality of needing cash, they agreed on Promontory Summit, Utah. On May 10, 1869, a 17.6-karat golden spike was hammered home, finishing the railway and connecting the coasts.

The completion of the transcontinental railway brought about an era of unprecedented western expansion, economic development, and population migration. At the same time, it caused more intense conflict between those moving and developing the west and the Native American Indian tribes. Years of conflict would follow, but the settlement of the west continued. And with the new railroad in place, it continued at a rapid pace as more and more people boarded mighty locomotives to head west toward new lands and new lives. As Daniel Webster, a titan of the era, remarked nearly twenty years earlier, the railway “towers above all other inventions of this or the preceding age,” and it now had continental reach and power.[4] America endured the scourge of Civil War and achieved the most magnificent engineering effort of the era only four years after the guns fell silent at Appomattox.

Brian Pawlowski holds an MA in American History, is a member of the American Enterprise Institute’s state leadership network, and served as an intelligence officer in the United States Marine Corps. 


[1] Stephen E. Ambrose, Nothing Like It in the World: The Men Who Built the Transcontinental Railroad 1863-1869 (New York: Simon and Schuster Paperbacks, 2000), 17.

[2] History.com, Transcontinental Railroad, September 11, 2019, https://www.history.com/topics/inventions/transcontinental-railroad.

[3] H.W. Brands, Dreams of El Dorado: A History of the American West (New York: Basic Books, 2019), 295.

[4] Ambrose, 357.

Guest Essayist: Kyle A. Scott

The Seventeenth Amendment was passed by Congress on May 13, 1912 and ratified on April 8, 1913. Secretary of State William Jennings Bryan certified the ratification on May 31, 1913. Once the Amendment was added to the U.S. Constitution, citizens had the right to cast ballots directly for their state’s two senators. The Amendment changed Article I, Section 3, clauses 1 and 3 of the Constitution, which had previously stipulated that senators were to be elected by state legislatures. By allowing for the direct election of senators, the Amendment removed a barrier between the people and the government, moving the U.S. closer to democracy and away from a republican form of government.

At the time the U.S. Constitution was being drafted, there was a clear apprehension toward monarchy but also an aversion toward democracy. The founders were suspicious of the capricious tendencies of the majority and considered democracy to be mob rule. With direct election of members of the House of Representatives every two years, representatives could be swept into and out of office with great efficiency and would therefore bow to the will of the majority. If some interest ran counter to the common good but nonetheless gained the favor of the majority, the House would have little incentive to look after the common good. The Senate, as it was not directly elected by the people, could be a check on the passions of the majority. In Federalist Paper #63 Publius wrote, “an institution may be sometimes necessary as a defense to the people against their own temporary errors and delusions.”

Prior to the formation of the United States, it was assumed that republics could only be small in scale. James Madison refuted such luminaries as Baron de Montesquieu by offering a solution to the problem of scale: multiple layers of checks and balances within a federal regime, including a check on democratic rule at the national level. With the Senate serving as a bulwark against the threat of tyranny by the majority, every viable interest could be given representation in the national debate. Now that this bulwark has been replaced with democratic elections, the president is the only elected official at the national level shielded, at least somewhat, from public opinion, as the president is elected not by the people directly but through the Electoral College.

At the time the Constitution was being drafted, there was a strong push for states’ rights among some in Philadelphia. The Articles of Confederation provided a weak central government, and the new states were reluctant to give up their power to a central body, as they had just thrown off the yoke of tyranny foisted upon them by a centralized governing body. The election of senators by state legislatures was one way to assuage those concerns. In Federalist Paper #62, Publius writes, “Among the various modes which might have been devised for constituting this branch of the government, that which has been proposed by the convention is probably the most congenial to public opinion. It is recommended by the double advantage of favoring a select appointment and of giving to the State governments such an agency in the formation of the federal government.” It was thought that the states’ interests would be represented in the Senate and popular interests would be represented in the House. With these sets of interests competing, the popular good would be reflected in any bill that was able to make its way through both chambers of Congress. This was the very core of the theory of our constitutional government as envisioned and understood by James Madison and Alexander Hamilton. Both Madison and Hamilton argued that ambition should be made to counteract ambition, and that through the competition of ambitions the common good would be realized. It is only in republican government that the negative effects of faction can be mitigated and the positive aspects funneled into the realization of the common good. “The two great points of difference between a democracy and a republic are: first, the delegation of the government, in the latter, to a small number of citizens elected by the rest; secondly, the greater number of citizens, and greater sphere of country, over which the latter may be extended.” (Federalist Paper #10).

The risks associated with democratization threaten the balance and principles republican regimes aspire to. Democracy aspires to nothing but its own will. Direct democracy offers too few safeguards against whimsy and caprice. In democracies, individuals are left to put their interests above all others and be guided by little more than immediate need as long-term planning is disincentivized.

Understanding the ramifications of further democratization is a timely topic, as it is likely to be widely discussed in popular media in the upcoming presidential election. We see in every election a push for eliminating the Electoral College, and with every passing election the cries for reform grow louder. Those who value republican principles should equip themselves to defend those principles and institutions with evidence and theory, not with self-interest, cliché, or partisan allegiance. If interested, reread the Federalist Papers, but also go back and read the press clippings from 1912-1913 and look for parallels to today. What you will find is a sense of connectedness with previous eras, a reminder that these are permanent questions worth taking seriously.

Kyle Scott, PhD, MBA, currently works in higher education administration and has taught American politics, Constitutional Law, and political theory for more than a decade at the university level. He is the author of five books and more than a dozen peer-reviewed articles. His most recent book is The Federalist Papers: A Reader’s Guide. Kyle can be contacted at kyle.a.scott@hotmail.com.


Guest Essayist: David Shestokas

RUSSIA!!! RUSSIA!!! RUSSIA!!!

It’s difficult to determine a precise moment when Russia came to dominate recent American news and politics. The event that spun Russia into America’s daily discussions was likely the accusation of Russian involvement in the 2016 elections.

Since then, Russia and the villainous Vladimir Putin have been a daily part of our political discourse. Even as the novel coronavirus raged, mentions of Russian Ambassador Sergey Kislyak and Lt. General Michael Flynn broke through coverage of the pandemic.

RUSSIA AND THE UNITED STATES, THE EARLY DAYS

Russia has not always been a mortal enemy in the American story. America’s Founders reached out to Russia in our earliest days. In December, 1780, the United States sent its first envoy to St. Petersburg, then Russia’s capital. The envoy, Francis Dana, brought a secretary with him.

The secretary was fourteen-year-old John Quincy Adams. Dana could speak no word of French, the language of the Russian court, and so John Adams[1] had lent Dana his son, who was fluent in French. Young John Quincy thus became a diplomatic interpreter.

Dana’s mission was to secure aid and support from Tsarina Catherine the Great for the American Revolution against England. The mission was unsuccessful, but for three years John Quincy became familiar with the workings of Russia’s ruling class. Catherine’s grandson was about the royal court during John Quincy’s tenure in St. Petersburg. In 1801, the grandson would become Tsar Alexander I.

Twenty-six years later, after John Quincy’s father had been President of the United States and Alexander’s father had preceded him as tsar, John Quincy returned to St. Petersburg. In November, 1809, Tsar Alexander I received John Quincy Adams as the first official United States Ambassador to Russia.

The Tsar greeted Adams warmly: “Monsieur, je suis charmé d’avoir le plaisir de vous voir ici.”[2] After this warm welcome, Ambassador Adams and Tsar Alexander spoke at length of trade, European politics, wars and Napoleon, the sometime ally, sometime adversary of both Russia and the United States.

The Tsar became ever more comfortable with his American guest, ultimately confiding the difficulties of managing his empire. “Its size is one of its greatest evils,” Alexander mused to John Quincy of his own country. “It is difficult to hold together so great a body as this empire.”

Ambassador Adams included the Tsar’s comment in his diplomatic dispatch to the State Department and President James Madison. It became part of the institutional memory of the United States, a country that only six years before had bought Louisiana from Napoleon.[3]

RUSSIA’S CLAIM TO ALASKA

In July, 1741, Vitus Bering, a Russian naval officer, culminated years of Russian eastward exploration with the sighting of Mount Saint Elias (a.k.a. Boundary Peak) on the North American mainland. The coming years would see periodic trips by Russian hunters and trappers.

In 1781 the Northeastern Company was established to organize and administer Russian colonies in North America. Company operations were directed from Kithtak (now Kodiak Island) and later from Novo-Arkhangelsk (now Sitka, Alaska).

For the next 70 years, Russian traders and trappers inhabited coastal Alaska. The Russian colonists regularly had violent confrontations with Native Alaskans. The Russian colonies grew increasingly dependent upon both American and British traders for supplies. Alaska was not a profitable undertaking for Russia.

THE CRIMEAN AND CIVIL WARS

Then came the Crimean War (1853-1856), with Russia facing off against France, England and Turkey. The war did not end well for Russia; the loss was great in both blood and treasure. The monarchy needed to replenish its coffers, and the possible sale of Alaska seemed to hold an answer. Beyond the tsar’s need for cash, the Russians felt unable to defend Alaska should either the British or Americans move to take it.[4]

Even after the Crimean War, England remained a Russian adversary.  England was undesirable as an Alaskan neighbor, so the best scenario would be a sale to the United States.

In 1855, Alexander II became emperor of Russia. He was the nephew of Alexander I, who had confided to John Quincy Adams the difficulties of managing his distant empire. Given post-Crimean War realities, and aware of his uncle’s wisdom, Alexander II concluded that a sale of Alaska to the United States was the best Russian course of action.

Russian Ambassador Eduard de Stoeckl was directed to actively pursue a sale in 1859. Stoeckl’s overtures included discussions with, among others, then-New York Senator William H. Seward. In 1861, the United States Civil War erupted. Discussions of Russia’s sale of Alaska to the United States fell silent.

THE RUSSIAN FLEET AND THE CIVIL WAR

The United States Civil War was not only of interest to the Union and the Confederacy. Both parties were actively involved in seeking either international support or neutrality. The Confederacy had courted both the British and French. The Rebels had a fair chance of receiving aid from either.

In Europe, relations remained tense between the Crimean War foes. Although Russia had lost that war, it had taken a devastating toll on all participants. In the short three-year span of the war, Britain had lost 22,000 soldiers. Against this background, the French and English entertained Confederate overtures.

The Russians viewed a strong, unified United States as a counterweight to their European foes and considered a Union victory to be in Russia’s interests. In the summer of 1863, the Russian Baltic Fleet set sail for New York. The Russian Far East Fleet journeyed to San Francisco. President Abraham Lincoln sent the First Lady, Mary Todd Lincoln, to greet the Russians in New York in September, 1863.

Overt assistance for the Confederacy by England or France disappeared. The Russians had tilted the playing field in favor of the Union. In 1865, the Civil War ended with a Union victory and no active participation by the French, English or Russians.

THE ALASKAN TALKS RESUME AND A BREAKTHROUGH

After the Civil War, Alexander II resumed pursuit of an Alaskan sale. Stoeckl began negotiating with William Seward, who had become Secretary of State under Abraham Lincoln. Over the next two years, issues of Reconstruction, Lincoln’s assassination, and mid-nineteenth-century communications hampered U.S.-Russia negotiations.

Things changed the evening of March 29, 1867, while Secretary Seward was at his Washington, D.C. home, playing whist with his family. The whist game was interrupted by a knock at the door. It was Baron Eduard de Stoeckl, Minister Plenipotentiary and Ambassador of the Russian Tsar.

Stoeckl advised Seward: “I have a dispatch from my government. The Emperor gives his consent to the cession. Tomorrow, if you like, I will come to the department and we can enter upon the treaty.”

Seward, with a smile of satisfaction responded: “Why wait until tomorrow? Let us make the treaty tonight.”

Stoeckl demurred: “But your department is closed, you have no clerks and my secretaries are scattered about the town!”

“Never mind that,” responded Seward, “if you can muster your legation together before midnight you will find me awaiting you there at the department, which will be open and ready for business.”

Carriages were dispatched around Washington, and by 4 AM on March 30, 1867, the treaty was signed, engrossed, and ready for presidential transmittal to the United States Senate for ratification.

RATIFICATION, FUNDING AND TRANSFER

President Andrew Johnson submitted the treaty to the Senate that same Saturday.

Seward had known that Congress was scheduled to adjourn for two months. Due to receipt of the treaty, the Senate set a special session for Monday, April 1, 1867. The Senate Foreign Relations Committee held hearings that entire week, then reported the treaty favorably.

Senator Charles Sumner of Massachusetts, Chairman of the Foreign Relations Committee, spoke in favor of the purchase for three hours. On April 9, 1867, just ten days after Stoeckl interrupted Seward’s whist game, the Senate ratified the treaty 37-2.

Though drafting and ratification took only ten days, it came nearly 58 years after Alexander I mentioned the problems of managing his empire to John Quincy Adams.

There remained the appropriation by the full Congress of the $7.2 million for the purchase, which did not take place until July 27, 1868.

On October 18, 1867, in a formal ceremony, the United States took possession of Alaska. For the nearly 600,000 square miles, the United States paid about 2 cents an acre. Alaska was a better deal than Louisiana. October 18 is celebrated annually as a state holiday in Alaska.
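
The per-acre figures are easy to verify. Below is a minimal arithmetic sketch in Python, using only the rounded areas and prices cited in this essay and its notes (not exact survey data); the 640-acres-per-square-mile conversion is the standard one:

    # Rough cost-per-acre check for the Alaska and Louisiana purchases,
    # using the rounded figures cited in this essay.
    ACRES_PER_SQUARE_MILE = 640

    alaska_dollars, alaska_sq_miles = 7_200_000, 600_000
    louisiana_dollars, louisiana_sq_miles = 15_000_000, 828_000

    alaska_cents = 100 * alaska_dollars / (alaska_sq_miles * ACRES_PER_SQUARE_MILE)
    louisiana_cents = 100 * louisiana_dollars / (louisiana_sq_miles * ACRES_PER_SQUARE_MILE)

    print(f"Alaska: about {alaska_cents:.1f} cents per acre")        # roughly 1.9 cents
    print(f"Louisiana: about {louisiana_cents:.1f} cents per acre")  # roughly 2.8 cents

The result, roughly 2 cents an acre for Alaska versus roughly 3 cents for Louisiana, matches the comparison above.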

ALASKA TODAY

The value of the natural resources and strategic location that came with the purchase cannot be overstated. The gold rush that the Russians feared in 1855 came to pass in 1896. Alaska contributes to the American economy through its riches in oil, minerals, precious metals, seafood, timber, and tourism. Alaska is the second largest crude oil producer in the country, and the salmon run in Alaska’s Bristol Bay basin is the largest in the world.

America’s Founders recognized the value of a cordial relationship with Russia and began working on it in 1780. The benefits of that effort came to fruition on October 18, 1867 with the peaceful acquisition of Alaska.

While the connotation of the mantra RUSSIA!!! RUSSIA!!! RUSSIA!!! differs greatly from 1780 to 2020, it has been part of the American lexicon through our entire history.

David Shestokas, J.D., is a former Cook County, Illinois State’s Attorney and author of Constitutional Sound Bites and Creating the Declaration of Independence. Follow him on Twitter, @shestokas, and join his Facebook group, Dave Shestokas on the Constitution


[1] The Second President and First Vice President had been the fledgling country’s envoy to France from 1777 to 1779, and his son, John Quincy, had been with him.

[2] “Sir, I am delighted to have the pleasure of seeing you here.”

[3] Napoleon needed money for war in Europe, France’s Louisiana territory was 828,000 square miles, and the $15 million purchase price was about 3 cents/acre.

[4] Alaska would prove to hold a wealth of natural resources, including gold. Counterintuitively, this was a real concern for Russia: the 1848 gold discovery in California had brought in more than 300,000 people in seven years, and the Russians had no ability to manage a similar event in Alaska. They were better off selling the land than having it taken.

Guest Essayist: James C. Clinger

On September 5, 1867, the first Texas cattle were shipped from the railhead in Abilene, Kansas, with most of the livestock ending their journey in a slaughterhouse in Chicago, Illinois. These cattle made a long, none too pleasant journey from south Texas to central Kansas. Their hardships were shared by the cowboys and cattlemen who drove their herds hundreds of miles to find a better market for their livestock. For almost two decades, cattle drives from Texas were undertaken by beef producers who found that the northern markets were much more lucrative than those back home. These drives ended after a combination of economic, legal, and technological changes made the long drives impractical or infeasible.

After the end of the Civil War, much of the economy of the old Confederacy was in shambles. In Texas, the rebels returning home often found their livestock scattered and their ranches and farms unkempt and overgrown. Once the Texas ranchers reassembled their herds, they found that the local market for beef was very limited. The ranchers’ potential customers in the region had little money with which to buy beef, and there was no way to transport livestock to distant markets except by ships that sailed off the Gulf coast. The major railroad lines did not reach Texas until the 1870s. Many of the cattlemen were “cattle rich but cash poor,” and it did not appear that there was any easy way to remedy their situation.[1]

Several cattlemen, cattle traders, and cattle buyers developed a solution.   Rather than sell locally, or attempt to transport cattle by water at high costs, cattle were to be driven along trails to railheads up north. Among the first of these was Abilene, Kansas, but other “cow towns,” such as Ellsworth and Dodge City, quickly grew from small villages to booming cities. This trek required very special men, horses, and cattle.   The men who drove the cattle were mostly young, adventurous, hardened cowhands who were willing to work for about $30 per month in making a trek of two months or more. Several Texas-bred horses were supplied for each cowhand to ride, for it was common for the exertions of each day to wear out several horses. The cattle themselves differed significantly from their bovine brethren in other parts of the continent.   These were Texas Longhorns, mostly steers, which were adorned with massive horns and a thick hide. The meat of the Longhorns was not considered choice by most connoisseurs, but these cattle could travel long distances, go without water for days, and resist many infectious diseases that would lay low cattle of other breeds.

Most cattle drives followed one of a number of trails from Texas through the Indian Territory (present-day Oklahoma) into Kansas. The most famous of the trails was the Chisholm Trail, named after Jesse Chisholm, a trader of cattle and other goods with outposts along the North Canadian River in the Indian Territory and along the Little Arkansas in southern Kansas. The drives ended in a variety of Kansas towns, notably Abilene, after some entrepreneurial cattle buyers, such as Joseph McCoy and his brothers, promoted the obscure train stop as a place where Texas cattle could be shipped by rail to market.[2] The cattle drives had emerged as an entrepreneurial solution to desperate circumstances where economic gains were blocked by geographic, technological, and legal obstacles.

The cow towns grew rapidly in size and prosperity, although many faltered after the cattle drives ended. The cattle and cowboys were not always welcomed. Many Kansas farmers and homesteaders believed that the Longhorns brought diseases such as “Texas Fever” that would infect and kill their own cattle. The disease, known today as babesiosis, is caused by parasites carried by ticks that attached themselves to Texas steers. The Longhorn cattle had developed an immunity to the disease, but the northern cattle had not. The ticks on the hides of the Texas cattle often traveled to the hides of the livestock in Kansas, with lethal results.[3] The Kansas farmers demanded and gained a number of state laws prohibiting the entry of Texas cattle. These laws were circumvented or simply weakly enforced until the 1880s. At first glance, these laws might appear to conflict with the commerce clause of the U.S. Constitution, Article I, Section 8, Clause 3, which authorized Congress, not the states, to regulate commerce among the states. Yet the United States Supreme Court ruled in 1886 in the case of Morgan’s Steamship Company v. Louisiana Board of Health that quarantine laws and general regulation of public health were permissible exercises of the states’ police powers, although they could be preempted by an act of Congress.[4]

The cattle drives faced many hazards on their long treks to the north.   Harsh terrain, inclement weather, hostile Indians, rustlers, and unwelcoming Kansas farmers often made the journey difficult.   Nevertheless, for about twenty years the trail drives continued and were mostly profitable. Even after the railroads reached Fort Worth, Texas, many cattlemen still found it more profitable to make the long journey to Kansas to ship their beef. Cattle prices were higher in Abilene, and the costs of rail shipment from Fort Worth were, at least in the 1870s, too high to justify ending the trips to Kansas.[5] Eventually the drives did end, although there is some dispute among historians about when and why the cattle drives ceased. By the 1880s, barbed wire fencing blocked the cattle trails at some points. The new railheads in Texas offered alternative routes to livestock markets. Finally, Kansas enacted a strict quarantine law to keep out Texas cattle in 1885. Of course, past quarantine laws had been weakly enforced. State officials seemed to take the 1885 law more seriously. Perhaps economic incentives encouraged stricter quarantine enforcement. The cattle herds of the northern plains had been growing gradually over the years. After the Battle of Little Bighorn in 1876, the United States Army largely pacified hostile tribes in the Rocky Mountain states, with the result that the cattle industry thrived in Wyoming and Montana. With bigger and more carefully bred livestock available to the Kansas cattle buyers, the need to buy Texas cattle diminished. Enforcing the quarantine laws became less costly to the cattle traders and certainly pleased many of the Kansas farmers who voted in state elections. The end came relatively abruptly. In 1885, approximately 350,000 cattle were driven from Texas to Kansas. The following year, in 1886, there were no drives at all.[6] The cattle drives had emerged in the 1860s as an entrepreneurial solution to desperate circumstances where economic gains were blocked by geographic, technological, and legal obstacles. In the 1880s, the marketplace had been transformed. New barriers to the cattle drive had appeared, but by then the cattlemen in Texas had safer and more cost-effective  means to bring their livestock to market.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] Specht, Joshua. “Market.” In Red Meat Republic: A Hoof-to-Table History of How Beef Changed America, 119-73. Princeton, NJ: Princeton University Press, 2019.

[2] Gard, Wayne. “Retracing the Chisholm Trail.” The Southwestern Historical Quarterly 60, no. 1 (1956): 53-68.

[3] Hutson, Cecil Kirk. “Texas Fever in Kansas, 1866-1930.” Agricultural History 68, no. 1 (1994): 74-104.

[4] 118 U.S. 455

[5] Galenson, David. “The Profitability of the Long Drive.” Agricultural History 51, no. 4 (1977): 737-58.

[6] Galenson, David. “The End of the Chisholm Trail.” The Journal of Economic History 34, no. 2 (1974): 350-64.

 

Guest Essayist: Kyle A. Scott

Notwithstanding the controversy over the causes of the U.S. Civil War, we do know that one of the outcomes was ending slavery through the Thirteenth Amendment. Congress passed the proposed Thirteenth Amendment on January 31, 1865 and it was subsequently ratified on December 6, 1865 by three-fourths of the state legislatures. Upon its ratification, the Thirteenth Amendment made slavery unconstitutional.

Unlike amendments before it, this amendment deserves special consideration due to the unconventional proposal and ratification process.

The proposed amendment passed the Senate on April 8, 1864 and the House on January 31, 1865, with President Abraham Lincoln approving the Joint Resolution to submit the proposed amendment to the states on February 1, 1865. But it was not until April 9, 1865 that the U.S. Civil War effectively ended at Appomattox Court House, when Robert E. Lee surrendered his army to Ulysses S. Grant. This means that all congressional action up to this point took place without the participation of any state in the Confederacy, as those states were not represented in Congress.

However, once the war ended, the success of the amendment required that some of the former Confederate states ratify it in order to meet the constitutionally mandated minimum proportion of states. Article V of the Constitution requires three-fourths of the state legislatures to ratify a proposed amendment before it can become part of the Constitution. There were only twenty-five Union and border states, which meant at least two of the eleven states that comprised the Confederacy had to ratify.

The effort to get states to ratify was led by Andrew Johnson, who assumed the presidency after Lincoln’s assassination in April 1865. There were thirty-six states in total, which meant at least twenty-seven state legislatures had to ratify. It was not guaranteed that all the states that remained in the Union would ratify. For instance, New Jersey, Delaware and Kentucky all initially rejected the amendment.

This is where things get complicated. When Johnson assumed the presidency in April, he ordered his generals to summon new conventions in the Southern states, which would be forced to revise their constitutions and elect new state legislatures before being admitted back into the Union. This was essentially reform through military injunction, casting doubt on the sovereignty of the states and the free will of the people.

Further complicating the issue of ratification was the Thirty-ninth Congress, which refused to seat representatives from all the Southern states except Tennessee. So, while Congress did not recognize the former Confederate states (except Tennessee) as states, all of them were considered valid for purposes of ratification, as determined by Secretary of State William Seward. Thus, we are presented with a constitutional predicament in which an amendment was ratified by states recognized by the executive branch but not by the legislative branch.

No resolution was formally adopted, nor reconciliation made, that could bring clarity to this constitutional crisis. Reconstruction continued, and the Thirteenth Amendment was added to the Constitution along with the Fourteenth and Fifteenth Amendments, collectively known as the Reconstruction Amendments.

The ratification of the Reconstruction Amendments is most aptly characterized as a Second Founding. The Amendments were ratified in a manner outside any reasonable interpretation of Article V or republican principles of representation. Imagine the following scenario. Armed guards move into 51% of American voters’ homes and force them to vote for Candidate A in the next presidential election. If the homeowners do not agree, the armed guards stay. If the homeowners agree, and vote for Candidate A, the armed guards leave. This is what occurred during Reconstruction in the South, as a state’s inclusion in Congress, and the removal of Union troops, was predicated upon that state’s acquiescence to the demands of the Union, which included ratifying the Thirteenth, Fourteenth and Fifteenth Amendments.

Because the amendments were passed in an extra-constitutional manner, we cannot say that they were a continuation of what was laid out in Philadelphia several decades before. This creates an ethical dilemma for historians and legal scholars to consider. Do the ends justify the means, or should the higher good be subservient to the letter of the law? To state it more simply: Is ending slavery worth violating the Constitution? Or should slavery have remained legal until an amendment could be ratified in a manner consistent with Article V and generally accepted principles of representation?

These questions are meant to be hard and they will not be resolved here. What I do propose is that in 1865 the United States decided that the pursuit of the higher good justified a violation of accepted procedures and those who accept the validity of the Reconstruction Amendments today must, at least tacitly, endorse the same.

Kyle Scott, PhD, MBA, currently works in higher education administration and has taught American politics, Constitutional Law, and political theory for more than a decade at the university level. He is the author of five books and more than a dozen peer-reviewed articles. His most recent book is The Federalist Papers: A Reader’s Guide. Kyle can be contacted at kyle.a.scott@hotmail.com.


Guest Essayist: Gary Porter

Only five days after Confederate General Robert E. Lee’s surrender at Appomattox, ending the Civil War, President Abraham Lincoln was assassinated in a theater in Washington, D.C. John Wilkes Booth, a Confederate supporter, shot the president, who succumbed to his wounds the next day. Vice President Andrew Johnson took Lincoln’s place and was less supportive of Lincoln’s anti-slavery policies, diluting the abolition of slavery Lincoln envisioned. Johnson favored policies that further disenfranchised free blacks, setting a political course that would weaken the nation’s unity.

Imagine if President Donald Trump were to choose Senator Bernie Sanders as his running mate in November 2020. Would that shock you?

Americans of 1864 must have been shocked to see President Abraham Lincoln, leader of the Republican Party, choose Andrew Johnson, a Democrat, as his running mate.

Nothing in the Constitution prohibited it, of course, and once before, America had witnessed a President and Vice President from different parties. In 1796 it was accidental; this time it was on purpose.

Andrew Johnson was probably the most politically-qualified VP Lincoln could have chosen. Though totally unschooled, Johnson was the consummate politician. He started political life at age 21 as a Greenville, Tennessee alderman in 1829 and would hold elective office almost continuously for the next thirty-five years, serving as a state legislator, Congressman, two-term Governor of Tennessee and finally Senator from Tennessee.[1] When the Civil War began and Tennessee left the Union, Johnson chose to leave his state rather than break with the Union. Lincoln promptly appointed him Military Governor of Tennessee.

Heading into the 1864 election, the Democratic Party was bitterly split between War Democrats and Peace Democrats. Wars tend to do that. They tend to force people into one camp or the other. To bridge the gap and hopefully unify the party, Democrats found a compromise:  nominate pro-war General George B. McClellan for president and anti-war Representative George H. Pendleton for Vice President. The ticket gathered early support.

Lincoln thought a similar “compromise ticket” was needed. Running once again with Vice President Hannibal Hamlin was out. Hamlin was nice enough, a perfect gentleman who even volunteered for a brief stint in his Maine militia unit during the war, but Hamlin had not played a very prominent role in Lincoln’s administration during the first term. Hamlin had to go. Johnson was in.

To complicate electoral matters further, a group of disenchanted “Radical Republicans” who thought Lincoln too moderate formed the Radical Democracy Party a month before the Republican Convention and nominated their own candidates. They nominated former Senator John C. Fremont of California for President and General John Cochrane of New York for Vice President. Two Johns on one ticket, two Georges on another, and two men on a third whose first names began with “A.” Coincidence? I don’t think so.

Choosing, finally, to not play the spoiler, Fremont withdrew his nomination barely two months before the election. Under the slogan “Don’t change horses in the middle of a stream,” Republicans were able to sweep the Lincoln/Johnson ticket to victory. The two men easily defeated “the two Georges” by a wide margin of 212 to 21 electoral votes.

In his second inaugural address, Lincoln uttered some of his most memorable lines ever:

“…the judgments of the Lord, are true and righteous altogether.” With malice toward none; with charity for all; with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in; to bind up the nation’s wounds; to care for him who shall have borne the battle, and for his widow, and his orphan—to do all which may achieve and cherish a just and lasting peace, among ourselves, and with all nations.

And then disaster hit. A little over a month after he delivered these memorable lines, Lincoln was shot in the head by Southern sympathizer John Wilkes Booth on the night of April 14 while enjoying a play at Ford’s Theatre in Washington, D.C. Lincoln died the following morning. Booth’s conspiracy had planned to take out not only Lincoln, but his Vice President and Secretary of State William Seward as well. Seward was critically injured, but survived. Johnson also survived when his would-be assassin, George Atzerodt, got drunk and had a change of heart. Later that day, two and a half hours after Lincoln drew his last breath, Johnson was sworn in as the seventeenth President of the United States.

Booth was tracked down by Union troops and, after a twelve-day manhunt, killed while resisting capture. The rest of the conspirators were soon captured and the ringleaders hanged, including Mary Surratt, the first woman ever executed by the U.S. government.

Faced with the unenviable task of Reconstruction after a devastating war, Johnson’s administration started well, but quickly went downhill. The Radical Republicans were out for southern blood and Johnson did not share their thirst.

Although Lincoln is well-known for his wartime violations of the U.S. Constitution, Johnson is best known for sticking to it.

One story illustrates Johnson’s affinity for strict constructionism: As a U.S. Representative, Johnson had voted against a bill to give federal aid to Ireland in the midst of a famine. In a debate during his subsequent run for Governor of Tennessee, his opponent criticized this vote. Johnson responded that people, not government, had the responsibility of helping their fellow men in need. He then pulled from his pocket a receipt for the $50 he had sent to the hungry Irish. “How much did you give, sir?” His opponent had to confess he had given nothing. The audience went wild. Johnson later credited this exchange with helping him win the election.

Johnson recognized the legitimacy of the Thirteenth Amendment, but he did not believe blacks deserved the right to vote. He vetoed the Civil Rights Act of 1866, which would have given citizenship and extended civil rights to all regardless of race, but Congress overrode the veto. When the constitutionality of the Civil Rights Act was challenged, the Fourteenth Amendment was proposed, and Johnson opposed that as well. The Radical Republicans then passed the Reconstruction Act of 1867. Johnson vetoed it and the Republicans overrode his veto. Republicans then threatened reluctant southern states with a continuance of military governance unless they ratified the Amendment. An unnamed Republican at the time called this “ratification at the point of a bayonet.” Johnson’s reluctance to support the Radical Republican agenda did not endear him to them.

The “straw that broke the camel’s back” came when Johnson tried to remove Edwin Stanton as Secretary of War despite the Tenure of Office Act which ostensibly, and unconstitutionally in Johnson’s view, prevented such action. Johnson fired Stanton. Threatened with impeachment, Johnson replied, “Let them impeach and be damned.” Congress promptly did just that – impeach, that is. After the House impeachment, the Senate trial resulted in acquittal. Johnson retained his office by a single vote, but still gained the notoriety of being the first United States President to be impeached.

The events surrounding President Lincoln’s assassination on April 15, 1865 changed the political landscape following the Civil War, making it a significant date to learn about in America’s history.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


[1] After leaving the presidency, Johnson was even elected Senator from Tennessee once again in 1875.

Guest Essayist: Tony Williams

On March 4, 1865, President Abraham Lincoln delivered his Second Inaugural Address, a model of reconciliation and moderation for restoring the national Union. He ended with the appeal:

With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation’s wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.

Later that month, Lincoln visited with Generals Ulysses S. Grant and William T. Sherman during the siege of Petersburg in Virginia near the Confederate capital of Richmond. As they talked, the president reflected on his plan to treat the South with respect. “Treat them liberally all around,” he said. “We want those people to return to their allegiance to the Union and submit to the laws.”

The Civil War was coming to an end as a heavily outnumbered Confederate General Robert E. Lee withdrew his army from Petersburg and abandoned Richmond to its fate. On April 5, 1865, Lee marched his starving, exhausted men across the swollen Appomattox River to Amelia Court House in central Virginia. Lee was disappointed to discover that the boxcars on the railroad did not contain the expected rations of food. The Union cavalry under General Philip Sheridan was closing in and burning supply wagons. Lee ordered his men to continue their march to the west without food. Increasing numbers were deserting the Army of Northern Virginia. Lee realized that he would have to surrender soon.

On April 6, Union forces led by General George Custer cut off Lee’s army at Sayler’s Creek. The two sides skirmished for hours and then engaged each other fiercely. Lee lost a quarter of his army to casualties and capture. He reportedly cried out, “My God! Has the army been dissolved?”

The next day, the Rebels retreated to Farmville, where rations awaited, but Union forces were close behind. The hungry Confederates barely had time to eat before fleeing again toward the expected supplies at the Appomattox railroad station.

That day, Grant wrote to Lee asking for his surrender to prevent “any further effusion of blood.” Grant signed the letter, “Very respectfully, your obedient servant.” Lee responded that he did not think his position was as hopeless as Grant indicated, but he asked what terms the Union Army would offer. When one of his generals suggested accepting the surrender, Lee informed him, “I trust it has not come to that! We certainly have too many brave men to think of laying down our arms.” Nevertheless, Grant made clear that nothing short of surrender would end the fighting.

On April 8, Lee’s army straggled into the town of Appomattox Court House, but Sheridan had already seized his supplies. Lee knew the end had come. He was hopelessly outnumbered six-to-one and had very little chance of resupply or reinforcement. Lee conferred with his generals to discuss surrender. When one of his officers suggested melting away and initiating a guerrilla war, Lee rejected it out of hand. “You and I as Christian men have no right to consider only how this would affect us. We must consider its effect on the country as a whole.”

Lee composed a message to Grant asking for “an interview at such time and place as you may designate, to discuss the terms of the surrender of this army.”  Grant was suffering a migraine while awaiting word from Lee. He was greatly relieved to receive this letter. His headache and all the tension within him immediately dissipated. While puffing on his cigar, he wrote back to Lee and magnanimously offered to meet his defeated foe “where you wish the interview to take place.” The ceremony would take place at the home of Wilmer McLean, who had moved to Appomattox Court House to escape the war after a cannonball blasted into his kitchen during the First Battle of Bull Run in 1861. Now, the war’s final act would occur in his living room.

Lee cut a fine figure, impeccably dressed in his new gray uniform, adorned with a red sash, shiny boots, and his sword in a golden scabbard, as he awaited Grant. The Union general was shabbily dressed in a rough uniform with muddy boots and felt self-conscious. He thought that Lee was “a man of much dignity, with an impassible face.” Grant respectfully treated his worthy adversary as an equal and felt admiration for him, if not for his cause. They shook hands and exchanged pleasantries.

Grant sat down at a small table to compose the terms of surrender and personally stood and handed them to Lee rather than have a subordinate do it. Grant graciously allowed the Confederate officers to keep their side arms, horses, and baggage. Lee asked that all the soldiers be allowed to keep their horses since many were farmers, and Grant readily agreed. Grant also generously agreed to feed Lee’s hungry men. Their business completed, the two generals shook hands, and Lee departed with a bow to the assembled men.

As Lee slowly rode away, Grant stood on the porch and graciously lifted his hat in salute, which Lee solemnly returned. The other Union officers and soldiers followed their general’s example. Grant was so conscious of being respectful that when the Union camp broke out into a triumphal celebration, Grant rebuked his men and ordered them to stop. “We did not want to exult over their downfall,” he later explained. For his part, Lee tearfully rode back into his camp, telling his troops, “I have done the best I could for you.” He continued, “Go home now, and if you make as good citizens as you have soldiers, you will do well, and I shall always be proud of you.”

On April 12, the Union formally accepted the Confederate surrender in a solemn ceremony. Brigadier General Joshua Chamberlain, the hero of Gettysburg, oversaw a parade of Confederate troops stacking their weapons. As the Army of Northern Virginia began the procession, Chamberlain ordered his men to raise their muskets to their shoulders as a salute of honor to their fellow Americans. Confederate Major General John Gordon returned the gesture by saluting with his sword. Chamberlain described his feelings at witnessing the dramatic, respectful ceremony: “How could we help falling on our knees, all of us together, and praying God to pity and forgive us all.”

At the end of the dreadful Civil War, in which 750,000 men died, the Americans on both sides of the war demonstrated remarkable respect for each other. Grant demonstrated great magnanimity toward his vanquished foe, following Lincoln’s vision in the Second Inaugural. That vision tragically did not survive the death of the martyred Lincoln a few days after the events at Appomattox.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Val Crofts

Abraham Lincoln is usually considered one of our nation’s greatest presidents. But what many people may not know is that Lincoln was not a very popular president during his first term, and he nearly was not reelected in 1864. For many months leading into the presidential election of that year, Lincoln resigned himself to the simple fact that he was not going to be reelected. He told a visitor to the White House in the fall of 1864, “I am going to be beaten…and unless some great change takes place, badly beaten.”

Lincoln’s administration had presided over hundreds of thousands of young men killed and wounded in the then three-year-old struggle to give our nation, as Lincoln declared at Gettysburg, its “new birth of freedom.” The year 1864 had been the bloodiest of the war so far, and Union armies were being decimated as Union General Ulysses S. Grant made his final push to destroy the army of Confederate General Robert E. Lee and bring the war to an end. At the same time, General William T. Sherman was driving through Georgia in the summer of 1864, hoping to destroy the Confederate armies in that region as well.

Lincoln was a tremendously unpopular president in 1864, inside and outside of his own political party. Democrats hated Lincoln and blamed him for the longevity of the war. Radical Republicans did not feel that he went far enough to extend equal rights to African-Americans. The war was unpopular and seemed unwinnable for the Union. The Emancipation Proclamation had also turned many Northern voters against Lincoln; they believed that equality for former slaves would follow, and they were not ready for it.

Between the Emancipation Proclamation and the casualty numbers of the Union army, Lincoln felt as though his administration would be leaving the White House in 1865. He urged his cabinet members to cooperate with the new president to make the transition of power easier, which would hopefully bring the nation back together more quickly. Meanwhile, a series of events was unfolding in the Western theater of the war, where one of Lincoln’s generals was about to present him with two gifts in 1864: the city of Atlanta and the reelection of his administration.

General Sherman met President Lincoln in 1861, at the beginning of the war, and was not overly impressed with him. He felt President Lincoln’s attitude toward the South was naive and could damage the Union’s early response to the war. Lincoln was not particularly impressed with Sherman at their first meeting either. But those attitudes would change as the war progressed.

Sherman had achieved great success fighting in the Western theater of the war, from Shiloh to Chattanooga, and was poised to strike a lethal blow into the heart of the Confederacy by marching his armies through the state of Georgia and capturing Atlanta. The capture of Atlanta would destroy a vital rail center and supply depot, as well as demoralize the Confederacy.

Sherman and his 100,000 troops left Chattanooga in May of 1864 and by July, Sherman and his army had reached the outskirts of Atlanta. On September 1, Confederate forces evacuated the city. The Northern reaction to the taking of Atlanta and victories in Virginia at the same time was jubilation. Instead of feeling the war was lost, the exact opposite opinion was now prevalent. It now seemed that the Lincoln administration would be the first reelected since Andrew Jackson in 1832.

President Lincoln won the 1864 election by receiving over 55 percent of the popular vote and winning the electoral vote 212 to 21 over his Democratic opponent, former general George B. McClellan. He was then able to manage the end of the Civil War and the passage of the 13th Amendment to the U.S. Constitution, banning slavery in the United States forever. His presidency would be remembered as the reason why our nation is still one nation, under God, and dedicated to “the proposition that all men are created equal.” Washington created our nation, Jefferson and Madison gave it life and meaning with their ideas and words, and Lincoln saved it. He might not have had the chance to do so without the military success of General Sherman and his armies in 1864.

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, and has been there 16 years. He teaches AP U.S. Government and Politics, U.S. History and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses for them. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.


Guest Essayist: Daniel A. Cotter

The Anaconda Plan of the Civil War, crafted by U.S. General-in-Chief Winfield Scott, was designed to split and defeat the Confederacy by closing off the coasts to the east and south, controlling the Mississippi River, and then attacking from all sides. Union Major General Ulysses S. Grant pressed to take Vicksburg, Mississippi, one of the last Confederate strongholds on the river, and secure control of the Mississippi. President Abraham Lincoln believed taking Vicksburg was the key to victory. The campaign against Vicksburg would be the longest of the Civil War. Vicksburg surrendered on July 4, 1863.

President Lincoln said of Vicksburg, “See what a lot of land these fellows hold, of which Vicksburg is the key! The war can never be brought to a close until that key is in our pocket. We can take all the northern ports of the Confederacy, and they can defy us from Vicksburg.” Lincoln well summarized the importance of Vicksburg, Mississippi, with both the Union and Confederacy determined to control the city. Along the Mississippi River, Vicksburg was one of the main strongholds remaining for the Confederacy. If the Union could capture this stronghold, it would cut off the Confederate states west of the Mississippi from those east of it.

The location was ideal for defense: protected on the north by bayou swamps and situated on high bluffs along the river, the city earned the name “Gibraltar of the South.”

General Grant developed a plan. He assumed command of the Union forces near Vicksburg on January 30, 1863; the Union had been campaigning to take the city since the spring of 1862, and Grant himself had been part of a failed attempt to capture it that winter. In the spring of 1863, he tried again. This time, given the location of Vicksburg, he took the bold approach of marching south along the west side of the Mississippi River and then crossing over south of the city. He led his troops some 30 miles below Vicksburg and ferried them across the river at Bruinsburg with the help of the Union fleet.

Once east of the river, Grant headed northeast. On May 2, his troops took Port Gibson, abandoning their supply lines and sustaining themselves from the surrounding countryside. Grant arrived outside Vicksburg on May 18, where Confederate General John Pemberton was waiting with his 30,000 troops. Two major Union assaults, on May 19 and 22, failed. Grant regrouped, and his troops dug trenches, enclosing Pemberton and his men.

Pemberton was boxed in with few provisions and diminishing ammunition. Many Confederate soldiers became sick and were hospitalized. In late June, Union troops dug mines underneath the Confederate lines and, on June 25, detonated the explosives. On July 3, Pemberton sent a note to Grant suggesting peace. Grant responded that only unconditional surrender would suffice. Pemberton formally surrendered on July 4. The nearly 30,000 Confederate troops were paroled, Grant not wanting the burden of dealing with so many prisoners. The Union won at Port Hudson five days later.

Lincoln, who had noted how important Vicksburg was to the Union and the war, stated upon hearing of the surrender, “The Father of Waters again goes unvexed to the sea.”

With the Siege of Vicksburg, Scott’s Anaconda Plan, designed at the beginning of the Civil War with the goal of blockading the southern ports and cutting the South in two by advancing down the Mississippi River, was complete.

The Siege of Vicksburg was a major victory for the Union, giving it control of the Mississippi River. Together with the victory at the Battle of Gettysburg around the same time, it marked a turning point for the Union in the Civil War. July 4, 1863, the date of Pemberton’s surrender of Vicksburg, is an important date in our nation’s history.

Dan Cotter is an Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (published April 2019, Twelve Tables Press). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.


Guest Essayist: Tony Williams

President Abraham Lincoln faced an important decision point in the summer of 1862. Lincoln was opposed to slavery and sought a way to end the immoral institution that was at odds with republican principles. However, he had a reverence for the constitutional rule of law and an obligation to follow the Constitution. He discovered a means of ending slavery, saving the Union, and preserving the Constitution.

President Lincoln had reversed previous attempts by his generals to free the slaves because of their dubious constitutionality and because they would drive border states such as Missouri, Kentucky, and Maryland into the arms of the Confederacy. He reluctantly signed the First and Second Confiscation Acts but doubted their constitutionality as well and did little to enforce them. He offered compensated emancipation to the border states, but none took him up on his offer.

On July 22, Lincoln met with the members of his Cabinet and shared his idea with them. He presented a preliminary draft of the Emancipation Proclamation on two pages of lined paper. It would free the slaves in the Confederate states as a “military necessity” by weakening the enemy under his constitutional presidential war powers.

The cabinet agreed with his reasoning even if some members were lukewarm. Some feared the effects on the upcoming congressional elections and that it would cause European states to recognize the Confederacy to protect their sources of cotton. Secretary of State William H. Seward counseled the president to issue the proclamation from a position of strength after a military victory.

The early victories of the year in the West, including Grant’s at the Battle of Shiloh and the capture of New Orleans, were dimmed by a more recent defeat in the eastern theater. Union General George McClellan’s Peninsular Campaign driving toward Richmond was thwarted by his defeat in the Seven Days’ Battles. Nor did Lincoln get the victory he needed the following month when Union armies under General John Pope were routed at the Second Battle of Bull Run.

In September, General Robert E. Lee invaded the North to defeat the Union army on northern soil and win European diplomatic recognition. He swept up into Maryland. Even though two Union soldiers discovered a copy of Lee’s orders wrapped around three cigars lying on the ground, McClellan did not capitalize on his advantage. The two armies converged at Sharpsburg near Antietam Creek.

At dawn on September 17, Union forces under General Joseph Hooker on the Union right attacked Confederates on the left side of their lines under Stonewall Jackson. The opposing armies clashed at West Woods, Dunker Church, and a cornfield. The attack faltered, and thousands were left dead and wounded.

Even that fighting could not compare to the carnage in the middle of the lines that occurred later in the morning. Union forces attacked several times and were repulsed. The battle shifted to a sunken road with horrific close-in fighting. Thousands more men became casualties at this “Bloody Lane.”

The final major stage of the day’s battle occurred further down the line when Union General Ambrose Burnside finally attacked. The Confederate forces here held a stone bridge across Antietam Creek that Burnside decided to cross rather than have his men ford the creek. The Confederates held a strong defensible position that pushed back several Union assaults. After the bridge was finally taken at great cost, the advancing tide of Union soldiers was definitively stopped by the recently arrived Confederate General A.P. Hill.

The battle resulted in the grim casualty figures of 12,400 for the Union armies and 10,300 for the Confederate armies. The losses were much heavier proportionately for the much smaller Confederate army. General McClellan failed to pursue the bloodied Lee the following day and thereby allowed him to escape and slip back down into the South. While Lincoln was furious with his general, he had the victory he needed to release the Emancipation Proclamation.

On September 22, Lincoln issued the preliminary Emancipation Proclamation. It read that as of January 1, 1863, “All persons held as slaves within any state, or designated part of a state, the people whereof shall then be in rebellion against the United States shall be then, thenceforward, and forever free.” If the states ended their rebellion, then the proclamation would have no force there.

Since the proclamation only applied to the states that joined the Confederacy, the border states were exempt, and their slaves were not to be freed by it. Lincoln did this for two important reasons. One, the border states might have seceded and joined the Confederacy. Two, Lincoln had no constitutional authority under his presidential war powers to free slaves in states that remained in the Union.

None of the Confederate states accepted the offer. On January 1, 1863, Lincoln issued the Emancipation Proclamation as promised. The proclamation declared nearly 3.5 million slaves free, though obviously the Union had to win the war to make it a reality. It was arguably Lincoln’s least eloquent document and was, in the words of one historian, about as exciting as a bill of lading.

Lincoln understood that the document had to be an exacting legal document because of the legal and unofficial challenges it would face. Moreover, he knew that a constitutional amendment was necessary to end slavery everywhere. He knew the proclamation’s significance and called it “the central act of my administration,” and “my greatest and most enduring contribution to the history of the war.”

One eloquent line in the Emancipation Proclamation aptly summed up the republican and moral principles that were the cornerstone of the document and Lincoln’s vision: “And upon this act, sincerely believed to be an act of justice, warranted by the Constitution, upon military necessity, I invoke the considerate judgment of mankind, and the gracious favor of Almighty God.”

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: James C. Clinger

On July 2, 1862, President Abraham Lincoln signed into law the Land-Grant Agricultural and Mechanical College Act, widely known today as the Morrill Act. The act was the culmination of work over many years by many legislators, notably the legislation’s author and chief sponsor, Justin Morrill of Vermont, one of the longest-serving members of Congress during the 19th century. Congress had passed an earlier version of Morrill’s bill in 1857, but the bill was vetoed by President James Buchanan. An earlier bill sponsored by Henry Clay that would have used federal land revenues to support education and internal improvements was also vetoed by President Andrew Jackson. In each veto case, an argument was made that the federal government had no business involving itself in educational matters or other issues that were properly the province of state governments.[1]

The Morrill Act permitted participating states to make use of the proceeds from the sale, rental, and/or royalties of property granted to the states by the federal government. If a state did not have sufficient federal land situated within its borders, the state would be granted scrip representing proceeds from federal land in other states or territories. Somewhat similar land grants had been used by the federal government for a variety of purposes, but many of those programs did not attract a great deal of interest or cooperation from state governments. The Morrill Act required participating governments to produce annual reports, from both the state governors and the recipient colleges and universities, regarding the use of funds.[2]

Morrill pushed for a land grant program that would support education, a cause to which he was devoted for much of his career. Morrill had relatively little formal education himself, but he was dedicated to the effort to provide higher education to people of humble station. He also favored a very particular kind of higher education, one supporting agriculture and the “mechanic arts” (today generally known as engineering).[3] At the time, many colleges and universities in America and Europe largely emphasized the classics and humanities to the exclusion of more applied fields of study. Studies focusing on seemingly more practical and career-related topics were given little attention. The successful 1862 legislation, unlike the 1857 bill, also indicated that the funding would support the teaching of “military tactics.” In light of the ongoing Civil War, the emphasis on military training broadened the bill’s appeal.[4] The bill gained a co-sponsor in the Senate in the person of Ben Wade of Ohio, who later would serve as president pro tempore of the chamber. Wade is remembered today as the man who would have become acting president if the Senate had voted to remove President Andrew Johnson in his impeachment trial in 1868.

Opponents of the legislation included those who believed that education matters were solely the responsibility of the states. But many who took this position in 1857 were no longer in Congress in the 1860s. In fact, many were southerners who left Congress when their states seceded from the Union. Other legislators, particularly from the western states, objected to the fact that land situated in their states was sometimes used to provide revenue to states in the east that lacked substantial amounts of federal land. Some objected to the legislation because they feared that the new funding would support new institutions that would compete with existing colleges and universities. As various iterations of the Morrill bill moved through Congress between 1860 and 1862, amendments were approved to appease the objections of some critics. Notably, the bill was amended to exclude mineral lands and to limit the amount of federal land that could be sold within any given state. Final passage was also delayed so that settlers granted federal land under the recently enacted Homestead Act could have their first choice of land.[5]

After enactment, many states used the land grant funds to support existing colleges and universities, both public and private. While the land grant system overwhelmingly has favored public institutions, a few private schools, such as the Massachusetts Institute of Technology (MIT), Cornell University, and Brown University, received land grant money for some considerable time, and some continue to operate as land-grant institutions to this day.[6] In most states, however, entirely new institutions were created, generally with some reference to agriculture in their titles. These institutions became what are now known as land-grant colleges, even though the total number of schools that receive land-grant support is far greater than most people realize.

The impact of the Morrill Act is hard to overestimate. It was not the first federal grant program offering aid to state governments, but it was one of the most important and enduring. In comparison to the categorical aid programs that became popular in the 1960s and later, the program attached few strings with which the recipient governments had to comply. However, in comparison to its predecessors, the land-grant act imposed significant requirements upon its beneficiaries, particularly regarding reporting obligations and the formal commitment of resources to particular fields of study.

The act caused a great increase in the number of higher education institutions in the country, and greatly increased the accessibility of college for many Americans of limited income who often lived far removed from population centers or the locations of extant colleges and universities. The Morrill Act was amended in 1890 by new legislation that prohibited grants to states that excluded students from higher education on the basis of race. Recipient states were also required to create universities intended to serve African-Americans. Today these schools are generally called Historically Black Colleges and Universities (HBCUs).[7]

The land-grant program had a huge impact on agriculture, engineering, and military science. The land-grant institutions conducted agricultural research and trained agricultural students. These institutions became part of the Department of Agriculture’s extension service, which has disseminated research findings throughout the country.[8] By the early twentieth century, the military tactics classes that were supported by the land grants had evolved into the Reserve Officer Training Corps (ROTC) program.[9] The “mechanic arts” emphasis in the land-grant colleges built up the knowledge base and the professional identity of the engineering profession.[10] The economic implications of this support for both basic and applied science have been significant, although their exact magnitude is disputed. Research by Isaac Ehrlich, Adam Cook, and Yong Yin indicates that the returns from the land-grant schools had made the United States into an economic superpower by the early twentieth century, surpassing countries such as the United Kingdom that followed a much different higher education model.[11] In short, the Morrill Act and subsequent legislation regarding the land-grant colleges have had an astounding impact upon educational quality and access, economic growth and opportunity, and leadership in the nation’s military.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] Duemer, Lee S. 2007. “The Agricultural Education Origins of the Morrill Land Grant Act of 1862.” American Educational History Journal 34 (1): 135–46.

[2] Lieberman, Carl. “The Constitutional and Political Bases of Federal Aid to Higher Education, 1787-1862.” International Social Science Review 63, no. 1 (1988): 3-13.

[3] Key, Scott. 1996. “Economics or Education: The Establishment of American Land-Grant Universities.” Journal of Higher Education 67 (March): 196–220.

[4] Benson, Michael T., and Hal Robert Boyd. 2018.  College for the Commonwealth: A Case for Higher Education in American Democracy.  Lexington: University Press of Kentucky.

[5] Lieberman, Carl. “The Constitutional and Political Bases of Federal Aid to Higher Education, 1787-1862.” International Social Science Review 63, no. 1 (1988): 3-13.

[6] Carstensen, Victor. 1962. “Century of the Land-Grant Colleges.” Journal of Higher Education 33 (January): 30–37.

[7] Wheatle, Katherine I. E. 2019. “Neither Just nor Equitable.” American Educational History Journal 46 (1/2): 1–20.

[8] Duemer, Lee S. 2007. “The Agricultural Education Origins of the Morrill Land Grant Act of 1862.” American Educational History Journal 34 (1): 135–46.

[9] Benson, Michael T., and Hal Robert Boyd. 2018.  College for the Commonwealth: A Case for Higher Education in American Democracy.  Lexington: University Press of Kentucky.

[10]Nienkamp, Paul. 2010. “Land-Grant Colleges and American Engineers.” American Educational History Journal 37 (1/2): 313–30.

[11] Ehrlich, Isaac, Adam Cook, and Yong Yin. 2018. “What Accounts for the US Ascendancy to Economic Superpower by the Early Twentieth Century? The Morrill Act-Human Capital Hypothesis.” Journal of Human Capital 12 (2): 233–81.

Guest Essayist: Daniel A. Cotter

The Homestead Act of 1862 encouraged westward expansion by opening federal land to settlement and farming. Heads of households could receive up to 160 acres by farming the land for five years, or could purchase it after six months. If homesteaders were unable to farm successfully, the land went back to the government to be offered again to another homesteader. Pro-slavery groups feared a homestead act would give more power to anti-slavery families moving into the new territories, which could then become free states, so they fought its passage.

The Civil War, by May 1862, was just over a year old, having begun with the Battle of Fort Sumter. The Homestead Act of 1862 was a way for the Union to expand westward, and in some ways fulfilled the promise contained in President Abraham Lincoln’s message to Congress on July 4, 1861, when he wrote in part:

On the side of the Union it is a struggle for maintaining in the world that form and substance of government whose leading object is to elevate the condition of men; to lift artificial weights from all shoulders; to clear the paths of laudable pursuit for all; to afford all an unfettered start and a fair chance in the race of life. Yielding to partial and temporary departures, from necessity, this is the leading object of the Government for whose existence we contend.

According to the National Park Service website, the Act brought to life the “fair chance” to which Lincoln referred and the Act “was one of the most significant and enduring events in the westward expansion of the United States.”

The 1862 Act was not the first effort to expand westward, but prior efforts had been met with resistance from Southern Democrats, who feared European immigrants might inhabit the west. The Act was intended to make it easier for interested persons to move west, without the Preemption Act of 1841’s requirement that “squatters” on federal lands pay per acre to retain the property.

The Act had minimal requirements to qualify. Any adult citizen, or intended citizen, who had never borne arms against the United States could apply for the grant. Applicants were required to improve the land by building a dwelling and cultivating crops; after five years, they obtained the deed to the 160 acres. Alternatively, an inhabitant had the option of a six-month residency with minor improvements and payment of $1.25 per acre, the same price as under the Preemption Act.

Many could not afford to build a farm and cultivate the land effectively, which required obtaining the necessary tools as well as crops and livestock. One of the first registered homesteaders was Daniel Freeman, whose claim is the site of the Homestead National Monument of America. Freeman is said to have filed his claim ten minutes after the Act became effective on January 1, 1863.

With the Homestead Act of 1862, the westward expansion truly commenced. Over its history, more than 2 million individuals filed claims, with approximately 780,000 obtaining title to the lands. More than 270 million acres were granted while the law was in effect.

Lincoln’s signing of the Homestead Act on May 20, 1862 was an important day in our nation’s history.

Dan Cotter is an Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (published April 2019, Twelve Tables Press). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.


Guest Essayist: Val Crofts

Abraham Lincoln traveled East in February of 1860. He was asked to deliver an address at the Cooper Institute in New York City on the momentous topic of the era, slavery. Lincoln had been a popular orator and politician in Illinois, but had yet to solidify himself as a national politician. His sense of humor, frontier charm and folksy wit appealed to his political and debate audiences in the West, but if he was going to attract a national following and possibly earn the nomination from the fledgling Republican Party as their presidential candidate, he needed to appeal to voters in different areas of the country.

Before he gave his Cooper Institute speech, Lincoln made his way to the New York studio of photographer Mathew Brady. He sat for a portrait that would introduce him to the American people. Brady’s portrait shows a confident, 51-year-old Lincoln staring into the camera with his left hand resting on two books. He pulled his collar up in the portrait to partially obscure his long neck. He looks distinguished, but his hair is a bit disheveled as he stands ready to make, in a few hours, arguably the most important speech of his life.

An audience of around 1,500 people crowded into the Cooper Institute on the night of February 27, 1860 to hear this Republican orator from the West deliver a carefully researched and crafted speech explaining to the nation why it should not fear a Republican president and why Republican views on slavery mirrored those of the Founding Fathers. Lincoln was about to reinvent himself as an orator and to establish himself as a national politician and serious contender for the presidency.

Some eyewitnesses expressed disappointment when Lincoln first stood to address the crowd. His tall frame, with its long arms and legs (“so tall,” as someone said), gave him an awkward appearance, and some in the crowd felt pity for how Lincoln looked that night. But then, he began to speak.

Lincoln began by informing his audience that 21 of the 39 signers of the Constitution believed the federal government should be able to control slavery in the territories of the United States, and that the Constitution itself verified this. The Republican Party had pledged to stop slavery from spreading into the Western territories, and Lincoln argued that this position rested on our legal cornerstone, the Constitution of the United States.

He then denied that the Republicans were a Northern political party intent on inciting slave rebellions. He explained that John Brown, the abolitionist who had attempted to start a slave rebellion in Virginia, was no Republican, and he urged the South to understand that the Republican Party was an American party, not a sectional one. He was attempting to explain to the South that Republicans were allies and not enemies. He further argued that for the South to threaten to secede if a Republican president were elected was akin to an “armed robbery” of the Union.

He then urged fellow Republicans to leave the South alone and to convince the South that they would continue to do so. Southern fears of Republican interference were fanning the flames of rebellion, and Lincoln urged that they be put to rest. Lincoln felt that if Republicans were not able to stop slavery where it existed, because the Constitution did not give them the power to do so, then they must stop it from spreading into the Western territories. He ended one of his longest public speeches by saying, “Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it.”

Lincoln laid out what he perceived to be the fears of the South and had done his best to calm them. He had also given his opinions on what Republicans could do to stop the further escalation of the division between the two regions. The speech was a huge success.

To capitalize on the speech and its success, Mathew Brady began to circulate the photo in several sizes for people to purchase. Harper’s Weekly converted the photo into a full-page drawing of Lincoln, which accompanied its story of the Cooper Institute speech and Lincoln’s success there. The image became the public’s first encounter with this rising star in the Republican Party.

Lincoln’s Cooper Institute speech was considered one of his greatest successes. If he had failed to engage and impress his New York audience, he might not have received the nomination for president in 1860. Had that not happened, he might have returned to Illinois to live out his days as a lawyer in Springfield, and the history of our nation would have been very different. Lincoln credited Brady and the Cooper Institute speech with helping him secure the Republican nomination for president and ultimately putting him in the White House. Those two very important events in New York City in February of 1860 may have ultimately helped to preserve the Union.

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin, and has been there 16 years. He teaches AP U.S. Government and Politics, U.S. History and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses for them. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.


Guest Essayist: Daniel A. Cotter

In November 1860, Abraham Lincoln was elected President of the United States. Shortly after, South Carolina became the first state to secede, doing so on December 20, 1860. Mississippi and Florida followed, with Alabama, Georgia, Louisiana and Texas joining them.  On April 12, 1861, the Civil War officially began at the Battle of Fort Sumter.

The South Carolina militia bombarded Fort Sumter, an island fortification near Charleston, South Carolina; the Confederate Army had not yet officially formed. The attack began in the early morning hours of April 12, 1861, when Lieutenant Henry S. Farley fired a mortar round over the fort as the signal for the militia batteries to open fire. The militia, led by General P.G.T. Beauregard, had the upper hand. The fort, commanded by Major Robert Anderson, had been designed and fortified to respond to, and defend against, naval attack, but over the ensuing battle it proved to be no match for land bombardment.

Fort Sumter had a total of 60 guns, but many of them were mounted on top, where Anderson’s troops would be most exposed to incoming fire from the militia. The twenty-one guns on the lower level did not allow the trajectory needed to hit the attackers’ artillery. Anderson and his troops did what they could, engaging in an exchange of fire that lasted thirty-four hours. Because President Lincoln’s efforts to resupply Fort Sumter had not been successful, Anderson and his troops were short on ammunition despite having moved as many supplies into the fort as possible, and they reduced the number of guns being deployed.

The militia began to fire heated shot at the fort, hitting some of the wooden structures within. Fires ensued, but on April 12th a rain shower put them out. On April 13th, the heated shot barrage resumed and did substantial damage to the fort.

Early in the morning of April 13, 1861, Gustavus V. Fox arrived, leading the supply relief expedition that President Lincoln had ordered, but his arrival would do little to change the equation. Around 1 PM on April 13, the fort’s central flagpole was knocked down. Colonel Louis Wigfall, an aide to Beauregard, rowed a skiff to the island fort without authorization and met with Anderson. He is reported to have said, “You have defended your flag nobly, Sir. You have done all that it is possible to do, and General Beauregard wants to stop this fight. On what terms, Major Anderson, will you evacuate this fort?”

Anderson agreed to a truce, proud of his troops and of not having lost any of his men to the bombardment. After Beauregard’s initial contingent disavowed Wigfall’s settlement terms, Beauregard saw the flag of truce and sent a second contingent, which offered terms similar to Wigfall’s. With that, Anderson surrendered, and the South had won its first battle. For nearly the next four years, Confederate troops would hold Fort Sumter, withstanding several Union attacks.

The Battle of Fort Sumter caused many Northerners to strongly advocate the assembly of volunteer troops to recapture the fort and preserve the Union. The war would go on for four years, costing approximately 750,000 lives.

Dan Cotter is an Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (published April 2019, Twelve Tables Press). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.


Guest Essayist: Scot Faulkner

America’s bloodiest day also produced the most geopolitically significant battle of the Civil War.

On September 17, 1862, twelve hours of battle along Antietam Creek, near Sharpsburg, Maryland, resulted in 23,000 Union and Confederate dead or wounded. Its military outcome was the retreat of General Robert E. Lee and his Army of Northern Virginia back into Virginia. Its political outcome reshaped global politics and doomed the Southern cause.

The importance of Antietam begins with President Abraham Lincoln weighing how to characterize the Civil War to both domestic and international audiences. Lincoln chose to make “disunion” the issue instead of slavery. His priority was retaining the border states (Delaware, Kentucky, Maryland, and Missouri) within the Union. [1]

The first casualties of the Civil War occurred on April 19, 1861 on the streets of Baltimore. The 6th Massachusetts Regiment was attacked by pro-Southern demonstrators while changing trains. Sixteen dead soldiers and citizens validated Lincoln’s choice of making the Civil War about reunification. Eastern Maryland was heavily pro-slavery. Had Maryland seceded, Washington, D.C. would have been an island within the Confederacy. This would have spelled disaster for the North.

To affirm the “war between the states” nature of the Civil War, Lincoln’s Secretary of State, William Seward, issued strict instructions to American envoys to avoid referencing slavery when discussing the Civil War. [2]

Explaining to foreign governments that the conflict was simply a “war between the states” had a downside. England and France were dependent on Southern cotton for their textile mills. “Moral equivalency” of the combatants allowed political judgments to be based on economic concerns. [3]

On April 27, 1861, Lincoln and Seward further complicated matters by announcing a blockade of Southern ports. While this was vital to depriving the South of supplies, it forced European governments to determine whether to comply. There were well established international procedures for handling conflicts between nations and civil wars. Seward ignored these conventions, igniting fierce debate in foreign governments over what to do with America. [4]

England and France opted for neutrality, which officially recognized the blockade, but with no enforcement. Blockade runners gathered in Bermuda, and easily avoided the poorly organized Union naval forces, while conducting commerce with Southern ports. [5]

Matters got worse. On November 8, 1861, a Union naval warship stopped the Trent, a neutral British steamer travelling from Havana to London. Captain Charles Wilkes removed two Confederate Government Commissioners, James Mason and John Slidell, who were on their way for meetings with the British Government. [6]

The “Trent Affair” echoed the British stopping neutral American ships during the Napoleonic Wars. Those acts were the main reason for America initiating the War of 1812 with England.

British Prime Minister Lord Palmerston issued an angry ultimatum to Lincoln demanding the immediate release of the Commissioners. He also moved 11,000 British troops to Canada to reinforce its border with America. Lincoln backed down, releasing the Commissioners and stating, “One war at a time.” [7]

While war with England was forestalled, economic issues were driving a wedge between the Lincoln Administration and Europe.

The 1861 harvest of Southern cotton had shipped just before war broke out. In 1862, the South’s cotton exports were disrupted by the war. Textile owners clamored for British intervention to force a negotiated peace.

In the early summer of 1862, bowing to political and economic pressure, Lord Palmerston drafted legislation to officially recognize the Confederate government and press for peace negotiations. [8]

During the spring of 1862, Lincoln’s view of the Civil War was shifting. Union forces were attracting escaped slaves wherever they entered Southern territory. Union generals welcomed the slaves as “contraband,” prizes of war similar to captured enemy weapons. This gave Lincoln a legal basis for establishing a policy of emancipating slaves in the areas of conflict.

Union victories had solidified the border states’ attachment to the North. Disunion was therefore no longer as important a justification for military action. In fact, shedding blood solely for reunification seemed to be souring Northern support for the war.

Lincoln and Seward realized emancipating slaves could rekindle Northern support for the war, critical for winning the Congressional elections in November 1862. Emancipation would also place the conflict on firm moral grounds, ending European support for recognition and intervention. England had abolished slavery throughout its empire in 1833. It would not side with a slave nation if the goal of the war became emancipation. Lincoln embraced this geopolitical chessboard: “Emancipation would weaken the rebels by drawing off their laborers, would help us in Europe, and convince them that we are incited by something more than ambition.” [9]

On July 22, 1862, Lincoln called a Cabinet meeting to announce his intention to issue the Emancipation Proclamation. It was framed as an imperative of war, “by virtue of the power in me vested as Commander-in-Chief, of the Army and Navy of the United States in time of actual armed rebellion against the authority and government of the United States, and as a fit and necessary war measure for suppressing said rebellion.” [10]

Seward raised concerns over the timing of the Proclamation. He felt recent Union defeats outside the Confederate capital of Richmond, Virginia, might make its issuance look like an act of desperation, “our last shriek, on the retreat.” [11] It was decided to wait for a Northern victory so that the Emancipation could be issued from a position of strength.

Striving for a game-changing victory became the priority for both sides. The summer of 1862 witnessed a series of brilliant Confederate victories. British Prime Minister Palmerston agreed to finally hold a Cabinet meeting to formally decide on recognition and mediation. [12]

General Lee wished to tip the scales further by engineering a Confederate victory on Northern soil. [13] Lee wanted a victory like the 1777 Battle of Saratoga, which had brought French recognition and aid to America. [14]

The race was on. General Stonewall Jackson annihilated General John Pope’s Army in the Second Battle of Manassas (August 28-30, 1862). Lee saw his opportunity, consolidated his forces, and invaded Maryland on September 4, 1862.

After entering Frederick, Maryland, Lee divided his forces to eliminate the large Union garrison in Harpers Ferry, which was astride his supply lines. Lee planned to draw General George McClellan and his “Army of the Potomac” deep into western Maryland. Far from Union logistical support, McClellan’s forces could be destroyed, delivering a devastating blow to the North. [15]

A copy of Special Order No. 191, which outlined Lee’s plans and troop movements, was lost by the Confederates and found by a Union patrol outside of Frederick. [16] On reading the Order, McClellan, famous for his slow and ponderous actions in the field, sped up his pursuit of Lee.

Now there was a deadly race: could Lee and Jackson neutralize Harpers Ferry and reunite before McClellan’s army pounced? This turned the siege of Harpers Ferry (September 12-15, 1862), the Battle of South Mountain (September 14, 1862), and Antietam (September 17, 1862) into the Civil War’s most important series of battles.

While Antietam was tactically a draw, heavy losses forced Lee and his army back into Virginia. This was enough for Lincoln to issue his Preliminary Emancipation Proclamation five days after the battle, on September 22, 1862. When news of the Confederate retreat reached England, support for recognition collapsed, extinguishing “the last prospect of European intervention.” [17] News of the Emancipation Proclamation launched “Emancipation Meetings” throughout England. Support for a Union victory rippled through even pacifist anti-slavery groups, who asserted that abolition “was possible only in a united America.” [18]

There were many more battles to be fought, but Europe’s alignment against the Confederacy sealed its fate. European nations flocked to embrace Lincoln and his emancipation crusade. One vivid example was Czar Alexander II, who had emancipated Russia’s serfs and became a friend of Lincoln. In the fall of 1863, he sent Russian fleets to New York City and San Francisco to support the Union cause. [19]

Unifying European nations against the Confederacy and opening the way to ending slavery in the South made America’s bloodiest day one of the world’s pivotal events.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


REFERENCES

[1] McPherson, James, Battle Cry of Freedom (Oxford University Press, New York, 1988) pp. 311-312.

[2] Foreman, Amanda, A World on Fire: Britain’s Crucial Role in the American Civil War (Random House, New York, 2010) p. 107.

[3] Op. cit., McPherson, p. 384.

[4] Op. cit., Foreman, p. 80.

[5] Op. cit., McPherson, pp. 380-381.

[6] Ibid., pp. 389-391.

[7] Ibid.

[8] Op. cit., Foreman, p. 293.

[9] Op. cit., McPherson, p. 510.

[10] Carpenter, Francis, How the Emancipation Proclamation was Drafted; Political Recollections; Anthology – America: Great Crises in Our History Told by its Makers, Vol. VIII (Veterans of Foreign Wars, Chicago, 1925) pp. 160-161.

[11] Op. cit., McPherson, p. 505.

[12] Op. cit., Foreman, p. 295.

[13] Op. cit., McPherson, p. 555.

[14] McPherson, James, “The Saratoga That Wasn’t: The Impact of Antietam Abroad,” in This Mighty Scourge: Perspectives on the Civil War (Oxford University Press, New York, 2007) pp. 65-77.

[15] Sears, Stephen W., Landscape Turned Red (Ticknor & Fields, New York, 1983) pp. 66-67.

[16] Ibid., pp. 112-113.

[17] Op. cit., Foreman, p. 322.

[18] Ibid., p. 397.

[19] The Russian Navy Visits the United States (Naval Historical Foundation, Annapolis, 1969).

Guest Essayist: Scot Faulkner

On the evening of Oct. 16, 1859, John Brown and his raiders unleashed 36 hours of terror on the federal armory in Harpers Ferry, Virginia (now West Virginia).

Brown’s raid marked a cataclysmic moment of change for America and the world. It ranks up there with Sept. 11, the Dec. 7, 1941 attack on Pearl Harbor, and the shots fired on Lexington Common and Concord Bridge during the momentous day of April 19, 1775. Each of these days marked a point when there was no turning back. Contributing events may have been prologue, but once these fateful days took place, America was forever changed.

Americans at the time knew that the raid was not the isolated work of a madman. Brown was the well-financed and supported “point of the lance” for the abolition movement.

He was a major figure among the leading abolitionists and intellectuals of the time. These included Gerrit Smith, the second wealthiest man in America and a business partner of Cornelius Vanderbilt. Among other ventures, Smith was a patron of Oberlin College, where Brown’s father served as a trustee. Thus was born a 20-year friendship between Smith and Brown.

Through Smith, Brown moved among America’s elite, conversing with Henry David Thoreau, Ralph Waldo Emerson, journalists, religious leaders and politicians.

Early on, Brown came to believe that the only way to end slavery was through armed rebellion. His vision was to create a southern portal for the Underground Railroad in the Blue Ridge Mountains.

The plan was to raise a small force and attack the armory in Harpers Ferry. There Brown would obtain additional weapons and then move into the Blue Ridge to establish his mountain sanctuary for escaping slaves.

Brown anticipated that escaped slaves would swell his rebellious ranks and protect his sanctuary. He planned to acquire hundreds of metal-tipped pikes as the weapon of choice.

The idea of openly rebelling against slavery was an extreme position in the 1850s. Abolitionist leaders either believed slavery would become economically obsolete or had faith that their editorials would shame the federal government into ending the practice.

For years Brown remained the lone radical voice in the elite salons of New York and Boston. He looked destined to remain on the fringes of the anti-slavery movement when a series of events shook the activists’ trust in working within the system and shifted sentiment toward Brown’s solution.

The Fugitive Slave Act of 1850 forced local public officials in free states to help recover escaped slaves. The federally sanctioned intrusion of slavery into the North began tipping the scales in favor of Brown’s agenda.

The Kansas-Nebraska Act of 1854 destroyed decades of carefully crafted compromises that limited slavery’s westward expansion. The Act ignited a regional civil war as pro-slavery and anti-slavery settlers fought each other prior to a referendum on the territory’s status – slave or free. Smith enlisted the help of several of the more active abolitionists to underwrite Brown’s guerrilla war against slavery in Kansas in 1856-57. This group, including some of America’s leading intellectuals, went on to become the Secret Six, who pledged to help Brown with his raid.

The Supreme Court’s Dred Scott decision in 1857, and its striking down of Wisconsin’s opposition to the Fugitive Slave Act in March 1859, set the Secret Six and Brown on their collision course with Harpers Ferry.

Today, Harpers Ferry is a scenic town of 300 people, but in 1859 it was one of the largest industrial complexes south of the Mason-Dixon Line. The Federal Armory and Rifle Works were global centers of industrial innovation and invention. The mass production of interchangeable parts, the foundation of the modern industrial era, was perfected along the banks of the Potomac and Shenandoah Rivers.

Brown moved into the Harpers Ferry area on July 3, 1859, establishing his base at the Kennedy Farm a few miles north in Maryland. The broader abolitionist movement remained divided over armed struggle. During August 19-21, 1859, a unique debate occurred: at a quarry outside Chambersburg, Pennsylvania, Frederick Douglass and Brown spent hours debating whether anyone had a moral obligation to take up arms against slavery. Douglass refused to join Brown but remained silent about the raid. Douglass’ aide, Shields Green, was so moved by Brown’s argument that he joined the raid; he was captured, tried, and executed.

The American Civil War began the moment Brown and his men walked across the B&O Railroad Bridge and entered Harpers Ferry late on the evening of October 16, 1859. Brown’s raiders secured the bridges and the armories as planned. Hostages were collected from surrounding plantations, including Col. Lewis W. Washington, great-grandnephew of George Washington. A wagon filled with “slave pikes” was brought into town; Brown planned to arm freed slaves with the pikes, assuming they had little experience with firearms.

As Brown and his small force waited for additional raiders with wagons to remove the federal weapons, local militia units arrived and blocked their escape. Militia soldiers and armed townspeople methodically killed Brown’s raiders, who were arrayed throughout the industrial complexes.

Eventually, the surviving raiders and their hostages retreated into the Armory’s Fire Engine House for their last stand. Robert E. Lee and a detachment of U.S. Marines from Washington, D.C. arrived on October 18. The Marines stormed the Engine House, killing or capturing Brown and his remaining men, and freeing the hostages. The raid was over.

Brown survived the raid. His trial became a national sensation as he chose to save his cause instead of himself. Brown rejected an insanity plea in favor of placing slavery on trial. His testimony, and subsequent newspaper interviews while awaiting execution on Dec. 2, 1859, created a fundamental emotional and political divide across America that made civil war inevitable.

Fearing that abolitionists were planning additional raids or slave revolts, communities across the South formed their own militias and readied for war. There was no going back to pre-October America.

Edmund Ruffin, one of Virginia’s most vocal pro-slavery and pro-secession leaders, acquired several of Brown’s “slave pikes.” He sent them to the governors of slave-holding states, each labeled “Sample of the favors designed for us by our Northern Brethren.”  Many of the slave pikes were publicly displayed in southern state capitols, further inflaming regional emotions.

On April 12, 1861, Ruffin lit the fuse on the first cannon fired at Fort Sumter in Charleston, South Carolina. The real fuse had been lit months earlier by Brown at Harpers Ferry.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


Guest Essayist: Tony Williams

In the 1850s, the United States was deeply divided over the issue of slavery and its expansion into the West. Northerners and southerners had been arguing over the expansion of slavery into the western territories for decades. The Missouri Compromise of 1820 had divided the Louisiana Territory at 36°30′, with new states north of the line entering as free states and those south of the line as slave states. The delicate compromise held until the Mexican War.

The territory acquired in the Mexican War (1846-1848) triggered the sectional debate again. In 1850, Senator Henry Clay of Kentucky engineered the Compromise of 1850 to settle the dispute with the help of Stephen Douglas. But in 1854, the Kansas-Nebraska Act permitted settlers to decide whether new states would be free or slave according to the principle of “popular sovereignty.” Pro- and anti-slavery settlers rushed to Kansas, and violence and murder erupted in “Bleeding Kansas.” Meanwhile, southerners spoke of secession and observers warned of civil war.

The United States faced this combustible situation when Chief Justice Roger B. Taney sat down in late February 1857 to write the infamous opinion in the case of Dred Scott v. Sandford, an opinion that would go down as a travesty of constitutional interpretation and one of the greatest injustices ever committed by the Supreme Court.

Dred Scott was a slave who had been owned by different masters in the slave states of Virginia and Missouri. Dr. John Emerson, an Army surgeon and one of those owners, brought Scott to the free state of Illinois for three years and then to the free Wisconsin Territory. Scott even married another slave while on free soil. Emerson moved back to Missouri, bringing Scott with him, shortly before he died. Scott sued Emerson’s widow for his freedom because he had lived in Illinois and Wisconsin, where slavery was prohibited.

Southern and Northern state laws and courts had long recognized the “right of transit” for slaveowners to bring their slaves while briefly traveling through free states/territories or remaining for short durations. However, they also recognized that residence in a free state or territory established freedom for slaves who moved there. In fact, Missouri’s long-standing judicial rule was “once free, always free.” Many former slaves who returned to Missouri after living in a free state or territory had successfully sued in Missouri courts to establish their freedom. The Dred Scott case made its way through the Missouri and federal courts, and finally reached the Supreme Court.

The attorneys presented oral arguments to Taney and the other justices in February 1856. The justices met in chambers but simply could not come to a consensus. They asked the lawyers to re-argue the case the following December, which delayed the decision until after the contentious presidential election and allowed the Court to maintain a semblance of neutrality. But Taney sought to remove the issue from the messy arena of democratic politics and settle the sectional dispute over slavery in the Court.

After hearing the case argued a second time, the justices met in mid-February 1857 to consider it. They nearly agreed to a narrow legal opinion addressing only Dred Scott’s status as a slave who had resided in a free state. However, they selected Chief Justice Taney to write the opinion, and he used the opportunity to write an expansive one that he believed would avert a possible civil war.

On the morning of March 6, Taney read the shocking opinion to the Court for nearly two hours. Taney, speaking for seven members of the Court, declared that no African-Americans—slave or free—had been U.S. citizens at the time of the founding, and that they could not become citizens. He asserted that the founders thought that blacks were an inferior class of humans and “had no rights which the white man was bound to respect,” and had no right to sue in federal court. This was not only a misreading of the history of the American founding but a gross act of injustice toward African Americans. Taney could have stopped there, but he believed this decision could end the sectional conflict over the expansion of slavery. He declared that the Missouri Compromise was unconstitutional because Congress had no power to regulate slavery in the territories, despite Article IV, section 3 giving Congress the “power to dispose of and make all needful rules and regulations respecting the territory or other property belonging to the United States.” According to his reasoning, slavery could become legal throughout the nation. Finally, Taney pronounced that Dred Scott, despite his residence in a free state and a free territory, which had allowed other slaves to claim their freedom, was still a slave.

The Dred Scott decision was not unanimous; Justices Benjamin Curtis and John McLean wrote dissenting opinions. Curtis’s painstakingly detailed research in U.S. history demonstrated that Taney was wrong on several points. First, Curtis convincingly showed that free African-Americans had been citizens and even voters in several states at the time of the founding. He wrote that slavery was fundamentally “contrary to natural right.” Furthermore, Curtis pointed out that by settled practice and the Constitution, Congress did indeed have power to legislate regarding slavery. He provided evidence that Congress had legislated with respect to slavery more than a dozen times before the 1820 Missouri Compromise.

The Dred Scott decision was supposed to calm sectional tensions in the United States, but it worsened them. Northerners expressed great moral outrage, and southerners doubled down on the Court’s decision that African Americans had no rights and Congress could not regulate slavery’s expansion. Indeed, the Court’s decision greatly exacerbated tensions and contributed directly to events leading to the Civil War. Instead of leaving the issue to the people’s representatives who had successfully negotiated important compromises in Congress to preserve the Union, Taney and other justices arrogantly thought they could settle the issue.

Taney’s understanding of American republican government was that only the white race enjoyed natural rights and consensual self-government. Abraham Lincoln continually attacked the decision in his speeches and debates. Lincoln stood for a Union rooted upon natural rights for all humans. He did not believe that the country could survive indefinitely “half slave, half free.” He argued that the Declaration of Independence “set up a standard maxim for free society” of self-governing individuals. Lincoln also opposed the Dred Scott decision because of its impact on democracy. If the Court’s majority gained the final say on political decisions, Lincoln thought “the people will have ceased to be their own rulers.”

The different views of slavery, its expansion, and the principles of republican self-government were at the core of the Civil War that ensued three years later. During the war, President Lincoln freed the slaves in Confederate states with the Emancipation Proclamation and laid down the moral vision of the American republic in the Gettysburg Address. He wrote: “Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.” The bloody Civil War was fought so that “this nation, under God, shall have a new birth of freedom — and that government of the people, by the people, for the people, shall not perish from the earth.”

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. He is currently writing a book on the Declaration of Independence.


Source: Collected Works of Abraham Lincoln, edited by Roy P. Basler et al.

Mr. President and fellow citizens of New York: –

The facts with which I shall deal this evening are mainly old and familiar; nor is there anything new in the general use I shall make of them. If there shall be any novelty, it will be in the mode of presenting the facts, and the inferences and observations following that presentation.

In his speech last autumn, at Columbus, Ohio, as reported in “The New-York Times,” Senator Douglas said:

“Our fathers, when they framed the Government under which we live, understood this question just as well, and even better, than we do now.”

I fully indorse this, and I adopt it as a text for this discourse. I so adopt it because it furnishes a precise and an agreed starting point for a discussion between Republicans and that wing of the Democracy headed by Senator Douglas. It simply leaves the inquiry: “What was the understanding those fathers had of the question mentioned?”

What is the frame of government under which we live?

The answer must be: “The Constitution of the United States.” That Constitution consists of the original, framed in 1787, (and under which the present government first went into operation,) and twelve subsequently framed amendments, the first ten of which were framed in 1789.

Who were our fathers that framed the Constitution? I suppose the “thirty-nine” who signed the original instrument may be fairly called our fathers who framed that part of the present Government. It is almost exactly true to say they framed it, and it is altogether true to say they fairly represented the opinion and sentiment of the whole nation at that time. Their names, being familiar to nearly all, and accessible to quite all, need not now be repeated.

I take these “thirty-nine,” for the present, as being “our fathers who framed the Government under which we live.”

What is the question which, according to the text, those fathers understood “just as well, and even better than we do now?”

It is this: Does the proper division of local from federal authority, or anything in the Constitution, forbid our Federal Government to control as to slavery in our Federal Territories?

Upon this, Senator Douglas holds the affirmative, and Republicans the negative. This affirmation and denial form an issue; and this issue – this question – is precisely what the text declares our fathers understood “better than we.”

Let us now inquire whether the “thirty-nine,” or any of them, ever acted upon this question; and if they did, how they acted upon it – how they expressed that better understanding?

In 1784, three years before the Constitution – the United States then owning the Northwestern Territory, and no other, the Congress of the Confederation had before them the question of prohibiting slavery in that Territory; and four of the “thirty-nine” who afterward framed the Constitution, were in that Congress, and voted on that question. Of these, Roger Sherman, Thomas Mifflin, and Hugh Williamson voted for the prohibition, thus showing that, in their understanding, no line dividing local from federal authority, nor anything else, properly forbade the Federal Government to control as to slavery in federal territory. The other of the four – James M’Henry – voted against the prohibition, showing that, for some cause, he thought it improper to vote for it.

In 1787, still before the Constitution, but while the Convention was in session framing it, and while the Northwestern Territory still was the only territory owned by the United States, the same question of prohibiting slavery in the territory again came before the Congress of the Confederation; and two more of the “thirty-nine” who afterward signed the Constitution, were in that Congress, and voted on the question. They were William Blount and William Few; and they both voted for the prohibition – thus showing that, in their understanding, no line dividing local from federal authority, nor anything else, properly forbids the Federal Government to control as to slavery in Federal territory. This time the prohibition became a law, being part of what is now well known as the Ordinance of ’87.

The question of federal control of slavery in the territories, seems not to have been directly before the Convention which framed the original Constitution; and hence it is not recorded that the “thirty-nine,” or any of them, while engaged on that instrument, expressed any opinion on that precise question.

In 1789, by the first Congress which sat under the Constitution, an act was passed to enforce the Ordinance of ’87, including the prohibition of slavery in the Northwestern Territory. The bill for this act was reported by one of the “thirty-nine,” Thomas Fitzsimmons, then a member of the House of Representatives from Pennsylvania. It went through all its stages without a word of opposition, and finally passed both branches without yeas and nays, which is equivalent to a unanimous passage. In this Congress there were sixteen of the thirty-nine fathers who framed the original Constitution. They were John Langdon, Nicholas Gilman, Wm. S. Johnson, Roger Sherman, Robert Morris, Thos. Fitzsimmons, William Few, Abraham Baldwin, Rufus King, William Paterson, George Clymer, Richard Bassett, George Read, Pierce Butler, Daniel Carroll, James Madison.

This shows that, in their understanding, no line dividing local from federal authority, nor anything in the Constitution, properly forbade Congress to prohibit slavery in the federal territory; else both their fidelity to correct principle, and their oath to support the Constitution, would have constrained them to oppose the prohibition.

Again, George Washington, another of the “thirty-nine,” was then President of the United States, and, as such approved and signed the bill; thus completing its validity as a law, and thus showing that, in his understanding, no line dividing local from federal authority, nor anything in the Constitution, forbade the Federal Government, to control as to slavery in federal territory.

No great while after the adoption of the original Constitution, North Carolina ceded to the Federal Government the country now constituting the State of Tennessee; and a few years later Georgia ceded that which now constitutes the States of Mississippi and Alabama. In both deeds of cession it was made a condition by the ceding States that the Federal Government should not prohibit slavery in the ceded territory. Besides this, slavery was then actually in the ceded country. Under these circumstances, Congress, on taking charge of these countries, did not absolutely prohibit slavery within them. But they did interfere with it – take control of it – even there, to a certain extent. In 1798, Congress organized the Territory of Mississippi. In the act of organization, they prohibited the bringing of slaves into the Territory, from any place without the United States, by fine, and giving freedom to slaves so brought. This act passed both branches of Congress without yeas and nays. In that Congress were three of the “thirty-nine” who framed the original Constitution. They were John Langdon, George Read and Abraham Baldwin. They all, probably, voted for it. Certainly they would have placed their opposition to it upon record, if, in their understanding, any line dividing local from federal authority, or anything in the Constitution, properly forbade the Federal Government to control as to slavery in federal territory.

In 1803, the Federal Government purchased the Louisiana country. Our former territorial acquisitions came from certain of our own States; but this Louisiana country was acquired from a foreign nation. In 1804, Congress gave a territorial organization to that part of it which now constitutes the State of Louisiana. New Orleans, lying within that part, was an old and comparatively large city. There were other considerable towns and settlements, and slavery was extensively and thoroughly intermingled with the people. Congress did not, in the Territorial Act, prohibit slavery; but they did interfere with it – take control of it – in a more marked and extensive way than they did in the case of Mississippi. The substance of the provision therein made, in relation to slaves, was:

First. That no slave should be imported into the territory from foreign parts.

Second. That no slave should be carried into it who had been imported into the United States since the first day of May, 1798.

Third. That no slave should be carried into it, except by the owner, and for his own use as a settler; the penalty in all the cases being a fine upon the violator of the law, and freedom to the slave.

This act also was passed without yeas and nays. In the Congress which passed it, there were two of the “thirty-nine.” They were Abraham Baldwin and Jonathan Dayton. As stated in the case of Mississippi, it is probable they both voted for it. They would not have allowed it to pass without recording their opposition to it, if, in their understanding, it violated either the line properly dividing local from federal authority, or any provision of the Constitution.

In 1819-20, came and passed the Missouri question. Many votes were taken, by yeas and nays, in both branches of Congress, upon the various phases of the general question. Two of the “thirty-nine” – Rufus King and Charles Pinckney – were members of that Congress. Mr. King steadily voted for slavery prohibition and against all compromises, while Mr. Pinckney as steadily voted against slavery prohibition and against all compromises. By this, Mr. King showed that, in his understanding, no line dividing local from federal authority, nor anything in the Constitution, was violated by Congress prohibiting slavery in federal territory; while Mr. Pinckney, by his votes, showed that, in his understanding, there was some sufficient reason for opposing such prohibition in that case.

The cases I have mentioned are the only acts of the “thirty-nine,” or of any of them, upon the direct issue, which I have been able to discover.

To enumerate the persons who thus acted, as being four in 1784, two in 1787, seventeen in 1789, three in 1798, two in 1804, and two in 1819-20 – there would be thirty of them. But this would be counting John Langdon, Roger Sherman, William Few, Rufus King, and George Read each twice, and Abraham Baldwin, three times. The true number of those of the “thirty-nine” whom I have shown to have acted upon the question, which, by the text, they understood better than we, is twenty-three, leaving sixteen not shown to have acted upon it in any way.

Here, then, we have twenty-three out of our thirty-nine fathers “who framed the government under which we live,” who have, upon their official responsibility and their corporal oaths, acted upon the very question which the text affirms they “understood just as well, and even better than we do now;” and twenty-one of them – a clear majority of the whole “thirty-nine” – so acting upon it as to make them guilty of gross political impropriety and willful perjury, if, in their understanding, any proper division between local and federal authority, or anything in the Constitution they had made themselves, and sworn to support, forbade the Federal Government to control as to slavery in the federal territories. Thus the twenty-one acted; and, as actions speak louder than words, so actions, under such responsibility, speak still louder.

Two of the twenty-three voted against Congressional prohibition of slavery in the federal territories, in the instances in which they acted upon the question. But for what reasons they so voted is not known. They may have done so because they thought a proper division of local from federal authority, or some provision or principle of the Constitution, stood in the way; or they may, without any such question, have voted against the prohibition, on what appeared to them to be sufficient grounds of expediency. No one who has sworn to support the Constitution can conscientiously vote for what he understands to be an unconstitutional measure, however expedient he may think it; but one may and ought to vote against a measure which he deems constitutional, if, at the same time, he deems it inexpedient. It, therefore, would be unsafe to set down even the two who voted against the prohibition, as having done so because, in their understanding, any proper division of local from federal authority, or anything in the Constitution, forbade the Federal Government to control as to slavery in federal territory.

The remaining sixteen of the “thirty-nine,” so far as I have discovered, have left no record of their understanding upon the direct question of federal control of slavery in the federal territories. But there is much reason to believe that their understanding upon that question would not have appeared different from that of their twenty-three compeers, had it been manifested at all.

For the purpose of adhering rigidly to the text, I have purposely omitted whatever understanding may have been manifested by any person, however distinguished, other than the thirty-nine fathers who framed the original Constitution; and, for the same reason, I have also omitted whatever understanding may have been manifested by any of the “thirty-nine” even, on any other phase of the general question of slavery. If we should look into their acts and declarations on those other phases, as the foreign slave trade, and the morality and policy of slavery generally, it would appear to us that on the direct question of federal control of slavery in federal territories, the sixteen, if they had acted at all, would probably have acted just as the twenty-three did. Among that sixteen were several of the most noted anti-slavery men of those times – as Dr. Franklin, Alexander Hamilton and Gouverneur Morris – while there was not one now known to have been otherwise, unless it may be John Rutledge, of South Carolina.

The sum of the whole is, that of our thirty-nine fathers who framed the original Constitution, twenty-one – a clear majority of the whole – certainly understood that no proper division of local from federal authority, nor any part of the Constitution, forbade the Federal Government to control slavery in the federal territories; while all the rest probably had the same understanding. Such, unquestionably, was the understanding of our fathers who framed the original Constitution; and the text affirms that they understood the question “better than we.”

But, so far, I have been considering the understanding of the question manifested by the framers of the original Constitution. In and by the original instrument, a mode was provided for amending it; and, as I have already stated, the present frame of “the Government under which we live” consists of that original, and twelve amendatory articles framed and adopted since. Those who now insist that federal control of slavery in federal territories violates the Constitution, point us to the provisions which they suppose it thus violates; and, as I understand, they all fix upon provisions in these amendatory articles, and not in the original instrument. The Supreme Court, in the Dred Scott case, plant themselves upon the fifth amendment, which provides that no person shall be deprived of “life, liberty or property without due process of law;” while Senator Douglas and his peculiar adherents plant themselves upon the tenth amendment, providing that “the powers not delegated to the United States by the Constitution” “are reserved to the States respectively, or to the people.”

Now, it so happens that these amendments were framed by the first Congress which sat under the Constitution – the identical Congress which passed the act already mentioned, enforcing the prohibition of slavery in the Northwestern Territory. Not only was it the same Congress, but they were the identical, same individual men who, at the same session, and at the same time within the session, had under consideration, and in progress toward maturity, these Constitutional amendments, and this act prohibiting slavery in all the territory the nation then owned. The Constitutional amendments were introduced before, and passed after the act enforcing the Ordinance of ’87; so that, during the whole pendency of the act to enforce the Ordinance, the Constitutional amendments were also pending.

The seventy-six members of that Congress, including sixteen of the framers of the original Constitution, as before stated, were pre-eminently our fathers who framed that part of “the Government under which we live,” which is now claimed as forbidding the Federal Government to control slavery in the federal territories.

Is it not a little presumptuous in any one at this day to affirm that the two things which that Congress deliberately framed, and carried to maturity at the same time, are absolutely inconsistent with each other? And does not such affirmation become impudently absurd when coupled with the other affirmation from the same mouth, that those who did the two things, alleged to be inconsistent, understood whether they really were inconsistent better than we – better than he who affirms that they are inconsistent?

It is surely safe to assume that the thirty-nine framers of the original Constitution, and the seventy-six members of the Congress which framed the amendments thereto, taken together, do certainly include those who may be fairly called “our fathers who framed the Government under which we live.” And so assuming, I defy any man to show that any one of them ever, in his whole life, declared that, in his understanding, any proper division of local from federal authority, or any part of the Constitution, forbade the Federal Government to control as to slavery in the federal territories. I go a step further. I defy any one to show that any living man in the whole world ever did, prior to the beginning of the present century, (and I might almost say prior to the beginning of the last half of the present century,) declare that, in his understanding, any proper division of local from federal authority, or any part of the Constitution, forbade the Federal Government to control as to slavery in the federal territories. To those who now so declare, I give, not only “our fathers who framed the Government under which we live,” but with them all other living men within the century in which it was framed, among whom to search, and they shall not be able to find the evidence of a single man agreeing with them.

Now, and here, let me guard a little against being misunderstood. I do not mean to say we are bound to follow implicitly in whatever our fathers did. To do so, would be to discard all the lights of current experience – to reject all progress – all improvement. What I do say is, that if we would supplant the opinions and policy of our fathers in any case, we should do so upon evidence so conclusive, and argument so clear, that even their great authority, fairly considered and weighed, cannot stand; and most surely not in a case whereof we ourselves declare they understood the question better than we.

If any man at this day sincerely believes that a proper division of local from federal authority, or any part of the Constitution, forbids the Federal Government to control as to slavery in the federal territories, he is right to say so, and to enforce his position by all truthful evidence and fair argument which he can. But he has no right to mislead others, who have less access to history, and less leisure to study it, into the false belief that “our fathers who framed the Government under which we live” were of the same opinion – thus substituting falsehood and deception for truthful evidence and fair argument. If any man at this day sincerely believes “our fathers who framed the Government under which we live,” used and applied principles, in other cases, which ought to have led them to understand that a proper division of local from federal authority or some part of the Constitution, forbids the Federal Government to control as to slavery in the federal territories, he is right to say so. But he should, at the same time, brave the responsibility of declaring that, in his opinion, he understands their principles better than they did themselves; and especially should he not shirk that responsibility by asserting that they “understood the question just as well, and even better, than we do now.”

But enough! Let all who believe that “our fathers, who framed the Government under which we live, understood this question just as well, and even better, than we do now,” speak as they spoke, and act as they acted upon it. This is all Republicans ask – all Republicans desire – in relation to slavery. As those fathers marked it, so let it be again marked, as an evil not to be extended, but to be tolerated and protected only because of and so far as its actual presence among us makes that toleration and protection a necessity. Let all the guarantees those fathers gave it, be, not grudgingly, but fully and fairly, maintained. For this Republicans contend, and with this, so far as I know or believe, they will be content.

And now, if they would listen – as I suppose they will not – I would address a few words to the Southern people.

I would say to them: – You consider yourselves a reasonable and a just people; and I consider that in the general qualities of reason and justice you are not inferior to any other people. Still, when you speak of us Republicans, you do so only to denounce us as reptiles, or, at the best, as no better than outlaws. You will grant a hearing to pirates or murderers, but nothing like it to “Black Republicans.” In all your contentions with one another, each of you deems an unconditional condemnation of “Black Republicanism” as the first thing to be attended to. Indeed, such condemnation of us seems to be an indispensable prerequisite – license, so to speak – among you to be admitted or permitted to speak at all. Now, can you, or not, be prevailed upon to pause and to consider whether this is quite just to us, or even to yourselves? Bring forward your charges and specifications, and then be patient long enough to hear us deny or justify.

You say we are sectional. We deny it. That makes an issue; and the burden of proof is upon you. You produce your proof; and what is it? Why, that our party has no existence in your section – gets no votes in your section. The fact is substantially true; but does it prove the issue? If it does, then in case we should, without change of principle, begin to get votes in your section, we should thereby cease to be sectional. You cannot escape this conclusion; and yet, are you willing to abide by it? If you are, you will probably soon find that we have ceased to be sectional, for we shall get votes in your section this very year. You will then begin to discover, as the truth plainly is, that your proof does not touch the issue. The fact that we get no votes in your section, is a fact of your making, and not of ours. And if there be fault in that fact, that fault is primarily yours, and remains until you show that we repel you by some wrong principle or practice. If we do repel you by any wrong principle or practice, the fault is ours; but this brings you to where you ought to have started – to a discussion of the right or wrong of our principle. If our principle, put in practice, would wrong your section for the benefit of ours, or for any other object, then our principle, and we with it, are sectional, and are justly opposed and denounced as such. Meet us, then, on the question of whether our principle, put in practice, would wrong your section; and so meet it as if it were possible that something may be said on our side. Do you accept the challenge? No! Then you really believe that the principle which “our fathers who framed the Government under which we live” thought so clearly right as to adopt it, and indorse it again and again, upon their official oaths, is in fact so clearly wrong as to demand your condemnation without a moment’s consideration.

Some of you delight to flaunt in our faces the warning against sectional parties given by Washington in his Farewell Address. Less than eight years before Washington gave that warning, he had, as President of the United States, approved and signed an act of Congress, enforcing the prohibition of slavery in the Northwestern Territory, which act embodied the policy of the Government upon that subject up to and at the very moment he penned that warning; and about one year after he penned it, he wrote LaFayette that he considered that prohibition a wise measure, expressing in the same connection his hope that we should at some time have a confederacy of free States.

Bearing this in mind, and seeing that sectionalism has since arisen upon this same subject, is that warning a weapon in your hands against us, or in our hands against you? Could Washington himself speak, would he cast the blame of that sectionalism upon us, who sustain his policy, or upon you who repudiate it? We respect that warning of Washington, and we commend it to you, together with his example pointing to the right application of it.

But you say you are conservative – eminently conservative – while we are revolutionary, destructive, or something of the sort. What is conservatism? Is it not adherence to the old and tried, against the new and untried? We stick to, contend for, the identical old policy on the point in controversy which was adopted by “our fathers who framed the Government under which we live;” while you with one accord reject, and scout, and spit upon that old policy, and insist upon substituting something new. True, you disagree among yourselves as to what that substitute shall be. You are divided on new propositions and plans, but you are unanimous in rejecting and denouncing the old policy of the fathers. Some of you are for reviving the foreign slave trade; some for a Congressional Slave-Code for the Territories; some for Congress forbidding the Territories to prohibit Slavery within their limits; some for maintaining Slavery in the Territories through the judiciary; some for the “gur-reat pur-rinciple” that “if one man would enslave another, no third man should object,” fantastically called “Popular Sovereignty;” but never a man among you is in favor of federal prohibition of slavery in federal territories, according to the practice of “our fathers who framed the Government under which we live.” Not one of all your various plans can show a precedent or an advocate in the century within which our Government originated. Consider, then, whether your claim of conservatism for yourselves, and your charge of destructiveness against us, are based on the most clear and stable foundations.

Again, you say we have made the slavery question more prominent than it formerly was. We deny it. We admit that it is more prominent, but we deny that we made it so. It was not we, but you, who discarded the old policy of the fathers. We resisted, and still resist, your innovation; and thence comes the greater prominence of the question. Would you have that question reduced to its former proportions? Go back to that old policy. What has been will be again, under the same conditions. If you would have the peace of the old times, readopt the precepts and policy of the old times.

You charge that we stir up insurrections among your slaves. We deny it; and what is your proof? Harper’s Ferry! John Brown!! John Brown was no Republican; and you have failed to implicate a single Republican in his Harper’s Ferry enterprise. If any member of our party is guilty in that matter, you know it or you do not know it. If you do know it, you are inexcusable for not designating the man and proving the fact. If you do not know it, you are inexcusable for asserting it, and especially for persisting in the assertion after you have tried and failed to make the proof. You need to be told that persisting in a charge which one does not know to be true, is simply malicious slander.

Some of you admit that no Republican designedly aided or encouraged the Harper’s Ferry affair, but still insist that our doctrines and declarations necessarily lead to such results. We do not believe it. We know we hold to no doctrine, and make no declaration, which were not held to and made by “our fathers who framed the Government under which we live.” You never dealt fairly by us in relation to this affair. When it occurred, some important State elections were near at hand, and you were in evident glee with the belief that, by charging the blame upon us, you could get an advantage of us in those elections. The elections came, and your expectations were not quite fulfilled. Every Republican man knew that, as to himself at least, your charge was a slander, and he was not much inclined by it to cast his vote in your favor. Republican doctrines and declarations are accompanied with a continual protest against any interference whatever with your slaves, or with you about your slaves. Surely, this does not encourage them to revolt. True, we do, in common with “our fathers, who framed the Government under which we live,” declare our belief that slavery is wrong; but the slaves do not hear us declare even this. For anything we say or do, the slaves would scarcely know there is a Republican party. I believe they would not, in fact, generally know it but for your misrepresentations of us, in their hearing. In your political contests among yourselves, each faction charges the other with sympathy with Black Republicanism; and then, to give point to the charge, defines Black Republicanism to simply be insurrection, blood and thunder among the slaves.

Slave insurrections are no more common now than they were before the Republican party was organized. What induced the Southampton insurrection, twenty-eight years ago, in which, at least three times as many lives were lost as at Harper’s Ferry? You can scarcely stretch your very elastic fancy to the conclusion that Southampton was “got up by Black Republicanism.” In the present state of things in the United States, I do not think a general, or even a very extensive slave insurrection is possible. The indispensable concert of action cannot be attained. The slaves have no means of rapid communication; nor can incendiary freemen, black or white, supply it. The explosive materials are everywhere in parcels; but there neither are, nor can be supplied, the indispensable connecting trains.

Much is said by Southern people about the affection of slaves for their masters and mistresses; and a part of it, at least, is true. A plot for an uprising could scarcely be devised and communicated to twenty individuals before some one of them, to save the life of a favorite master or mistress, would divulge it. This is the rule; and the slave revolution in Hayti was not an exception to it, but a case occurring under peculiar circumstances. The gunpowder plot of British history, though not connected with slaves, was more in point. In that case, only about twenty were admitted to the secret; and yet one of them, in his anxiety to save a friend, betrayed the plot to that friend, and, by consequence, averted the calamity. Occasional poisonings from the kitchen, and open or stealthy assassinations in the field, and local revolts extending to a score or so, will continue to occur as the natural results of slavery; but no general insurrection of slaves, as I think, can happen in this country for a long time. Whoever much fears, or much hopes for such an event, will be alike disappointed.

In the language of Mr. Jefferson, uttered many years ago, “It is still in our power to direct the process of emancipation, and deportation, peaceably, and in such slow degrees, as that the evil will wear off insensibly; and their places be, pari passu, filled up by free white laborers. If, on the contrary, it is left to force itself on, human nature must shudder at the prospect held up.”

Mr. Jefferson did not mean to say, nor do I, that the power of emancipation is in the Federal Government. He spoke of Virginia; and, as to the power of emancipation, I speak of the slaveholding States only. The Federal Government, however, as we insist, has the power of restraining the extension of the institution – the power to insure that a slave insurrection shall never occur on any American soil which is now free from slavery.

John Brown’s effort was peculiar. It was not a slave insurrection. It was an attempt by white men to get up a revolt among slaves, in which the slaves refused to participate. In fact, it was so absurd that the slaves, with all their ignorance, saw plainly enough it could not succeed. That affair, in its philosophy, corresponds with the many attempts, related in history, at the assassination of kings and emperors. An enthusiast broods over the oppression of a people till he fancies himself commissioned by Heaven to liberate them. He ventures the attempt, which ends in little else than his own execution. Orsini’s attempt on Louis Napoleon, and John Brown’s attempt at Harper’s Ferry were, in their philosophy, precisely the same. The eagerness to cast blame on old England in the one case, and on New England in the other, does not disprove the sameness of the two things.

And how much would it avail you, if you could, by the use of John Brown, Helper’s Book, and the like, break up the Republican organization? Human action can be modified to some extent, but human nature cannot be changed. There is a judgment and a feeling against slavery in this nation, which cast at least a million and a half of votes. You cannot destroy that judgment and feeling – that sentiment – by breaking up the political organization which rallies around it. You can scarcely scatter and disperse an army which has been formed into order in the face of your heaviest fire; but if you could, how much would you gain by forcing the sentiment which created it out of the peaceful channel of the ballot-box, into some other channel? What would that other channel probably be? Would the number of John Browns be lessened or enlarged by the operation?

But you will break up the Union rather than submit to a denial of your Constitutional rights.

That has a somewhat reckless sound; but it would be palliated, if not fully justified, were we proposing, by the mere force of numbers, to deprive you of some right, plainly written down in the Constitution. But we are proposing no such thing.

When you make these declarations, you have a specific and well-understood allusion to an assumed Constitutional right of yours, to take slaves into the federal territories, and to hold them there as property. But no such right is specifically written in the Constitution. That instrument is literally silent about any such right. We, on the contrary, deny that such a right has any existence in the Constitution, even by implication.

Your purpose, then, plainly stated, is that you will destroy the Government, unless you be allowed to construe and enforce the Constitution as you please, on all points in dispute between you and us. You will rule or ruin in all events.

This, plainly stated, is your language. Perhaps you will say the Supreme Court has decided the disputed Constitutional question in your favor. Not quite so. But waiving the lawyer’s distinction between dictum and decision, the Court have decided the question for you in a sort of way. The Court have substantially said, it is your Constitutional right to take slaves into the federal territories, and to hold them there as property. When I say the decision was made in a sort of way, I mean it was made in a divided Court, by a bare majority of the Judges, and they not quite agreeing with one another in the reasons for making it; that it is so made as that its avowed supporters disagree with one another about its meaning, and that it was mainly based upon a mistaken statement of fact – the statement in the opinion that “the right of property in a slave is distinctly and expressly affirmed in the Constitution.”

An inspection of the Constitution will show that the right of property in a slave is not “distinctly and expressly affirmed” in it. Bear in mind, the Judges do not pledge their judicial opinion that such right is impliedly affirmed in the Constitution; but they pledge their veracity that it is “distinctly and expressly” affirmed there – “distinctly,” that is, not mingled with anything else – “expressly,” that is, in words meaning just that, without the aid of any inference, and susceptible of no other meaning.

If they had only pledged their judicial opinion that such right is affirmed in the instrument by implication, it would be open to others to show that neither the word “slave” nor “slavery” is to be found in the Constitution, nor the word “property” even, in any connection with language alluding to the things slave, or slavery; and that wherever in that instrument the slave is alluded to, he is called a “person;” – and wherever his master’s legal right in relation to him is alluded to, it is spoken of as “service or labor which may be due,” – as a debt payable in service or labor. Also, it would be open to show, by contemporaneous history, that this mode of alluding to slaves and slavery, instead of speaking of them, was employed on purpose to exclude from the Constitution the idea that there could be property in man.

To show all this, is easy and certain.

When this obvious mistake of the Judges shall be brought to their notice, is it not reasonable to expect that they will withdraw the mistaken statement, and reconsider the conclusion based upon it?

And then it is to be remembered that “our fathers, who framed the Government under which we live” – the men who made the Constitution – decided this same Constitutional question in our favor, long ago – decided it without division among themselves, when making the decision; without division among themselves about the meaning of it after it was made, and, so far as any evidence is left, without basing it upon any mistaken statement of facts.

Under all these circumstances, do you really feel yourselves justified to break up this Government unless such a court decision as yours is, shall be at once submitted to as a conclusive and final rule of political action? But you will not abide the election of a Republican president! In that supposed event, you say, you will destroy the Union; and then, you say, the great crime of having destroyed it will be upon us! That is cool. A highwayman holds a pistol to my ear, and mutters through his teeth, “Stand and deliver, or I shall kill you, and then you will be a murderer!”

To be sure, what the robber demanded of me – my money – was my own; and I had a clear right to keep it; but it was no more my own than my vote is my own; and the threat of death to me, to extort my money, and the threat of destruction to the Union, to extort my vote, can scarcely be distinguished in principle.

A few words now to Republicans. It is exceedingly desirable that all parts of this great Confederacy shall be at peace, and in harmony, one with another. Let us Republicans do our part to have it so. Even though much provoked, let us do nothing through passion and ill temper. Even though the southern people will not so much as listen to us, let us calmly consider their demands, and yield to them if, in our deliberate view of our duty, we possibly can. Judging by all they say and do, and by the subject and nature of their controversy with us, let us determine, if we can, what will satisfy them.

Will they be satisfied if the Territories be unconditionally surrendered to them? We know they will not. In all their present complaints against us, the Territories are scarcely mentioned. Invasions and insurrections are the rage now. Will it satisfy them, if, in the future, we have nothing to do with invasions and insurrections? We know it will not. We so know, because we know we never had anything to do with invasions and insurrections; and yet this total abstaining does not exempt us from the charge and the denunciation.

The question recurs, what will satisfy them? Simply this: We must not only let them alone, but we must somehow, convince them that we do let them alone. This, we know by experience, is no easy task. We have been so trying to convince them from the very beginning of our organization, but with no success. In all our platforms and speeches we have constantly protested our purpose to let them alone; but this has had no tendency to convince them. Alike unavailing to convince them, is the fact that they have never detected a man of us in any attempt to disturb them.

These natural, and apparently adequate means all failing, what will convince them? This, and this only: cease to call slavery wrong, and join them in calling it right. And this must be done thoroughly – done in acts as well as in words. Silence will not be tolerated – we must place ourselves avowedly with them. Senator Douglas’ new sedition law must be enacted and enforced, suppressing all declarations that slavery is wrong, whether made in politics, in presses, in pulpits, or in private. We must arrest and return their fugitive slaves with greedy pleasure. We must pull down our Free State constitutions. The whole atmosphere must be disinfected from all taint of opposition to slavery, before they will cease to believe that all their troubles proceed from us.

I am quite aware they do not state their case precisely in this way. Most of them would probably say to us, “Let us alone, do nothing to us, and say what you please about slavery.” But we do let them alone – have never disturbed them – so that, after all, it is what we say, which dissatisfies them. They will continue to accuse us of doing, until we cease saying.

I am also aware they have not, as yet, in terms, demanded the overthrow of our Free-State Constitutions. Yet those Constitutions declare the wrong of slavery, with more solemn emphasis, than do all other sayings against it; and when all these other sayings shall have been silenced, the overthrow of these Constitutions will be demanded, and nothing be left to resist the demand. It is nothing to the contrary, that they do not demand the whole of this just now. Demanding what they do, and for the reason they do, they can voluntarily stop nowhere short of this consummation. Holding, as they do, that slavery is morally right, and socially elevating, they cannot cease to demand a full national recognition of it, as a legal right, and a social blessing.

Nor can we justifiably withhold this, on any ground save our conviction that slavery is wrong. If slavery is right, all words, acts, laws, and constitutions against it, are themselves wrong, and should be silenced, and swept away. If it is right, we cannot justly object to its nationality – its universality; if it is wrong, they cannot justly insist upon its extension – its enlargement. All they ask, we could readily grant, if we thought slavery right; all we ask, they could as readily grant, if they thought it wrong. Their thinking it right, and our thinking it wrong, is the precise fact upon which depends the whole controversy. Thinking it right, as they do, they are not to blame for desiring its full recognition, as being right; but, thinking it wrong, as we do, can we yield to them? Can we cast our votes with their view, and against our own? In view of our moral, social, and political responsibilities, can we do this?

Wrong as we think slavery is, we can yet afford to let it alone where it is, because that much is due to the necessity arising from its actual presence in the nation; but can we, while our votes will prevent it, allow it to spread into the National Territories, and to overrun us here in these Free States? If our sense of duty forbids this, then let us stand by our duty, fearlessly and effectively. Let us be diverted by none of those sophistical contrivances wherewith we are so industriously plied and belabored – contrivances such as groping for some middle ground between the right and the wrong, vain as the search for a man who should be neither a living man nor a dead man – such as a policy of “don’t care” on a question about which all true men do care – such as Union appeals beseeching true Union men to yield to Disunionists, reversing the divine rule, and calling, not the sinners, but the righteous to repentance – such as invocations to Washington, imploring men to unsay what Washington said, and undo what Washington did.

Neither let us be slandered from our duty by false accusations against us, nor frightened from it by menaces of destruction to the Government nor of dungeons to ourselves. LET US HAVE FAITH THAT RIGHT MAKES MIGHT, AND IN THAT FAITH, LET US, TO THE END, DARE TO DO OUR DUTY AS WE UNDERSTAND IT.

Source: Collected Works of Abraham Lincoln, edited by Roy P. Basler et al.

 

Guest Essayist: Joerg Knipprath

In 1834, Dr. Emerson, an Army surgeon, took his slave Dred Scott from Missouri, a slave state, to Illinois, a free state, and then, in 1836, to Fort Snelling in Wisconsin Territory. The latter was north of the geographic line at latitude 36°30′ established under the Missouri Compromise of 1820 as the division between free territory and that potentially open to slavery. In addition, the law that organized Wisconsin Territory in 1836 made the domain free. Emerson, his wife, and Scott and his family eventually returned to Missouri by 1840. Emerson died in Iowa in 1843. Ownership of Scott and his family ultimately passed to Emerson’s brother-in-law, John Sanford, of New York.

With financial assistance from the family of his former owner, the late Peter Blow, Scott sued for his freedom in Missouri state court, beginning in 1846. He argued that he was free due to having resided in both a free state and a free territory. After some procedural delays, the lower court jury eventually agreed with him in 1850, but the Missouri Supreme Court in 1852 overturned the verdict. The judges rejected Scott’s argument, on the basis that the laws of Illinois and Wisconsin Territory had no extraterritorial effect in Missouri once he returned there.

It has long been speculated that the case was contrived. Records were murky, and it was not clear that Sanford actually owned Scott. Moreover, Sanford’s sister Irene, the late Dr. Emerson’s widow, had remarried an abolitionist Congressman. Finally, the suit was brought in the court of Judge Alexander Hamilton, known to be sympathetic to such “freedom suits.”

Having lost in the state courts, in 1853 Scott tried again, in the United States Circuit Court for Missouri, which at that time was a federal trial court. The basic thrust of the case at that level was procedural sufficiency. Federal courts, as courts of limited and defined jurisdiction under Article III of the Constitution, generally can hear only cases between citizens of different states or if a claim is based on a federal statute or treaty, or on the Constitution. There being no federal law of any sort involved, Scott’s claim rested on diversity of citizenship. Scott claimed that he was a free citizen of Missouri and Sanford a citizen of New York. On the substance, Scott reiterated the position from his state court claim. Sanford sought a dismissal on the basis of lack of subject matter jurisdiction because, being black, Scott could not be a citizen of Missouri.

When Missouri sought admission to statehood in 1820, its constitution excluded free blacks from living in the state. The compromise law passed by Congress prohibited the state constitution from being interpreted to authorize a law that would exclude citizens of any state from enjoying the constitutional privileges and immunities of citizenship the state recognized for its own citizens. That prohibition was toothless, and Sanford’s argument rested on Missouri’s negation of citizenship for all blacks. Thus, Scott’s continued status as a slave was not crucial to resolve the case. Rather, his racial status, free or slave, meant that he was not a citizen of Missouri. Thus, the federal court lacked jurisdiction over the suit and could not hear Scott’s substantive claim. Instead, the appropriate forum to determine Scott’s status was the Missouri state court. As already noted, that was a dry well and could not water the fountain of justice.

In a confusing action, the Circuit Court appeared to reject Sanford’s jurisdictional argument, but the jury nevertheless ruled for Sanford on the merits, based on Missouri law. Scott appealed to the United States Supreme Court by writ of error, a broad corrective tool to review decisions of lower courts. The Court heard argument in Dred Scott v. Sandford (the added “d” in “Sandford” is a clerical error) at its February, 1856, term. The justices were divided on the preliminary jurisdictional issue. They bound the case over to the December, 1856, term, after the contentious 1856 election. There seemed to be a way out of the ticklish matter. In Strader v. Graham in 1850, the unanimous Supreme Court had held that a slave’s status rested finally on the decision of the relevant state court. The justices also had refused to consider independently the claim that a slave became free simply through residence in a free state. Seven of the justices in Dred Scott believed Strader to be on point, and Justice Samuel Nelson drafted an opinion on that basis. Such a narrow resolution would have steered clear of the hot political issue of extension of slavery into new territories that was roiling the political waters and threatening to tear apart the Union.

It was not to be. Several of the Southern justices were sufficiently alarmed by the public debate and affected by sectional loyalty to prepare concurring opinions to address the lurking issue of Scott’s status. Justice James Wayne of Georgia then persuaded his fellows to take up all issues raised by Scott’s suit. Chief Justice Roger Taney would write the opinion.

Writing for himself and six associate justices, Taney delivered the Court’s opinion on March 6, 1857, just a couple of days after the inauguration of President James Buchanan. In his inaugural address, Buchanan hinted at the coming decision through which the slavery question would “be speedily and finally settled.” Apparently having received advance word of the decision, Buchanan declared that he would support the decision, adding coyly, “whatever this may be.” Some historians have wondered if Buchanan actually appreciated the breadth of the Court’s imminent opinion or misunderstood what was about to happen. Of the seven justices who joined the decision that Scott lacked standing to sue and was still a slave, five were Southerners (Taney of Maryland, Wayne, John Catron of Tennessee, Peter Daniel of Virginia, and John Campbell of Alabama). Two were from the North (Samuel Nelson of New York and Robert Grier of Pennsylvania). Two Northerners (Benjamin Curtis of Massachusetts and John McLean of Ohio) dissented.

Taney’s ruling concluded that Scott was not a citizen of the United States, because he was black, and because he was a slave. Thus, the federal courts lacked jurisdiction, and by virtue of the Missouri Supreme Court’s decision, Scott was still a slave. Taney’s argument rested primarily on a complex analysis of citizenship. When the Constitution was adopted, neither slaves nor free blacks were part of the community of citizens in the several states. Thereafter, some states made citizens of free blacks, as they were entitled to do. But that did not affect the status of such individuals in other states, as state laws could not act extraterritorially. Only United States citizenship or state citizenship conferred directly under the Constitution could be the same in all states. Neither slaves nor free blacks were understood to be part of the community of citizens in the states in 1788 when the Constitution was adopted, the only time that state citizenship could have also conferred national citizenship. Thereafter, only Congress could extend national citizenship to free blacks, but had never done so. States could not now confer U.S. citizenship, because the two were distinct, which reflected basic tenets of dual sovereignty.

Taney rejected the common law principle of birthright citizenship based on jus soli, under which citizenship arose from the place where a person was born. Jus soli was not traditionally the only source of citizenship; the other was jus sanguinis, by which citizenship passed through the parents’ citizenship, apart from aliens who became naturalized under federal law. Since blacks were not naturalized aliens, and their parental lineage could not confer citizenship on them under Taney’s reasoning, the rejection of citizenship derived from birth in the United States meant that even free blacks were merely subordinate American nationals, owing obligations and allegiance to the United States but not enjoying the inherent political, legal, and civil rights of full citizenship. This was a novel status, but one that became significant several decades later when the United States acquired overseas dominions.

After the Civil War, the 14th Amendment was adopted. The very first sentence defines one basis of citizenship. National citizenship and state citizenship are divided, but the division is not identical to Taney’s version. To counter the Dred Scott Case and to affirm the citizenship of the newly-freed slaves, and, by extension, all blacks, national citizenship became rooted in jus soli. If one was born (or naturalized) in the United States and was subject to the jurisdiction of the United States, that is, one owed no loyalty to a foreign government, national citizenship applied. State citizenship was derivative of national citizenship, not independent of it, as Taney had held, and was based on domicile in that state.

The Chief Justice also rejected the idea that blacks were entitled to the same privileges and immunities of citizenship as whites. Because Taney viewed the Constitution’s privileges and immunities clause in Article IV broadly, recognizing blacks as full state citizens under the Constitution would have meant that Southern states could not enforce their laws restricting the rights of blacks regarding free speech, assembly, and the keeping and bearing of arms. That, in turn, would threaten the social order and the stability of the slave system.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.


Guest Essayist: Joerg Knipprath

Dred Scott lost his appeal for a second reason, his status as a slave. The Court’s original, since-abandoned, plan had been to decide the whole suit on the basis of the Strader precedent that Scott was a slave because the Missouri Supreme Court had so found. That approach still could have been used to deal summarily with this issue in the eventual opinion. But Taney struck a bolder theme. He analyzed the effect of Scott’s residence in Illinois and Wisconsin Territory on his status. This allowed Taney to challenge more broadly the prevailing idea that the federal government could interfere with the movement of slavery throughout the nation.

Taney opined that the federal government’s power to regulate directly the status of slavery in the territories, including its abolition, was derived from Article IV, Section 3, of the Constitution, which authorized Congress to “make all needful Rules and Regulations respecting the Territory … belonging to the United States” and to admit new states. However, Taney claimed, this provision applied only to the land that had been ceded to the United States by the several states under the Articles of Confederation. Thus, Congress could abolish slavery in the Northwest Ordinance of 1787, reenacted in 1789, because it applied to such ceded land. Any territory acquired by the United States thereafter, such as through the Louisiana Purchase or the Treaty of Guadalupe Hidalgo after the Mexican War, was held by the United States in trust for the whole people of the United States. Thus, white citizens who settled in those territories did not lose the rights they had acquired residing within their previous states. They were not “mere colonists, … to be governed by any laws [the general government] may think proper.” These rights would include that to property and extended to the property in slaves.

Lastly, Taney explained, the Fifth Amendment expressly protected against federal laws that sought to deprive a person of his life, liberty, or property without due process. Due process guaranteed not only a fair trial, but protected generally against arbitrary laws. A law that deprived a person of property, including slaves, simply because he moved into a territory controlled by the federal government, “could hardly be dignified with the name of due process of law.” This was an early, foundational example of the doctrine of substantive due process that has been invoked by the courts in more recent cases to strike down laws against abortion and same-sex marriage. Taney’s distinction between the constitutional rights of citizens and colonists and his postulate that the Constitution limited Congress’s power to administer the territories settled by Americans reappeared in modified form a half century later in cases dealing with Congress’s control over overseas territory acquired after the Spanish-American War.

Scott did not become free by residing in free territory, because the Missouri Compromise of 1820, which excluded slavery from Wisconsin Territory, was unconstitutional. That decision was radical because it upset a long constitutional custom of geographically dividing free from (potentially) slave territory, a custom that began with the Northwest Ordinance, though it had already been undermined by the Compromise of 1850 and the Kansas-Nebraska Act of 1854. Nor could Wisconsin’s territorial legislature abolish slavery, in Taney’s analysis, through the newly-minted doctrine of “popular sovereignty.” That legislature was merely an agent of Congress, and had no more power to destroy constitutional rights than did its principal.

“Popular sovereignty” lay at the core of the Compromise of 1850 and the Kansas-Nebraska Act. That doctrine, championed by Senators Henry Clay and Stephen Douglas, allowed slaveholders to bring their property into all parts of the politically unorganized territorial area. Under the Northern view, once organized as a territory, the people acting through a convention or through their territorial legislature might authorize or prohibit slavery. Under the Southern view, only states could abolish slavery, and any such prohibition had to await a decision of the people when seeking statehood or thereafter. The Court thus endorsed the Southern perspective, further inflaming sectional tensions because the two federal compromise laws had always been a bitter pill to swallow for many in the North.

Four of the concurring justices wrote opinions that reached the same result via various other doctrinal paths. Two dissented. The main dissent, by Benjamin Curtis—whose brother George Ticknor Curtis was one of Scott’s attorneys—relied on the theory that state citizenship was the source of national citizenship. Therefore, once someone resided in a state, and was not merely a sojourner, he acquired the rights of citizenship in that state. Scott, having resided in a free state, had shed his status as a slave and could not be reduced to that status merely by returning to Missouri. Once free, he was also entitled to all privileges and immunities of citizens, which included the right to travel freely to other states. Curtis’s theory, by focusing on states as the source of all citizenship, was even more inconsistent than Taney’s with the eventual language of the Fourteenth Amendment, which embodied a national supremacy approach.

From the beginning, the Dred Scott Case was received poorly by the public. Its controversial, and to us odious, result also tarnished the legacy of Roger Taney. Viewed from our more distant historical perspective, perhaps a more nuanced evaluation is possible. Judged by intellectual standards, Taney’s opinion showed considerable judicial craftsmanship. Taney himself was an accomplished and influential Chief Justice, whose Court addressed legal and constitutional matters significant for the country’s development.

Why then did Taney opt for an approach that destroyed the delicate balances worked out politically in the Congress, and would have nationalized the spread of slavery? After all, the narrower route of Strader lay open to the Court for the same result. Part of it was sympathy for the Southern cause, although Taney by then was not himself a slave owner. Indeed, while in law practice, Taney had vigorously denounced slavery when defending an abolitionist minister accused of inciting slave rebellions. Mostly, it was the perception that the political process was becoming unable to negotiate the hardening positions of both sides on the various facets of the slavery controversy. Those facets included protection of the “peculiar institution” in the existing slave states, expansion of slavery into new territory, and recapture of fugitive slaves from states hostile to such efforts.

The relatively successful compromises of the late 18th and early 19th centuries with their attendant comity among the states were in the distant past. Congressional efforts were increasingly strained and laborious, as experience with the convoluted process that led to the Compromise of 1850 had shown. Southerners’ paranoia about their section’s diminished political power and comparative industrial inadequacy, as well as Northerners’ moral self-righteousness and sense of political ascendancy eroded the mutual good will needed for compromise. Presidential leadership had proved counterproductive to sectional accommodation, as with James Polk and the controversy over potential expansion of slavery into territory from the Mexican War. Or, such executive efforts were ineffective, as with Franklin Pierce’s failed attempt to act while President like the compromise candidate that he had been at the Democratic convention. Worse yet, eventually such leadership was non-existent, as with James Buchanan.

There remained only the judicial solution to prevent the rupture of the political order that was looming. Legal decisions, unlike political ones, are binary and generally produce a basic clarity. One side wins, the other loses. Constitutional cases add to that the veneer of moral superiority. If the Constitution is seen as a collection of moral principles, not just a pragmatic collection of political compromises, the winner in a constitutional dispute has a moral legitimacy that the loser lacks. Hence, Taney decided to cut the Gordian knot and hope that the Court’s decision would be accepted even by those who opposed slavery. Certainly, President Buchanan, having received advance word of the impending decision, announced in his inaugural address that he would accept the Court’s decision and expected all good citizens to do likewise.

Unfortunately, matters turned out differently. At best, the decision had no impact on the country’s lurch toward violence. At worst, the decision hastened secession and war. Abraham Lincoln presented the moderate opposition to the decision. In a challenge to the Court, he defended the President’s independent powers to interpret the Constitution. In his first inaugural address, Lincoln disavowed any intention to resist the decision as it bound the parties to the case. He then declared, “At the same time, the candid citizen must confess that if the policy of the government upon vital questions affecting the whole people is to be irrevocably fixed by decisions of the Supreme Court, … the people will have ceased to be their own rulers, having to that extent practically resigned their government into the hands of that eminent tribunal.”

Scott and his family were freed by manumission in May, 1857, two months after the decision in his case. Scott died a year later.

In the eyes of many, the Court’s institutional legitimacy suffered from its attempt to solve undemocratically such a deep public controversy about a fundamental moral issue. A more recent analogue springs to mind readily. Many years after Dred Scott, partially dissenting in the influential abortion case Planned Parenthood v. Casey in 1992, Justice Antonin Scalia described a portrait of Taney painted in 1859: “There seems to be on his face, and in his deep-set eyes, an expression of profound sadness and disillusionment. Perhaps he always looked that way, even when dwelling upon the happiest of thoughts. But those of us who know how the lustre of his great Chief Justiceship came to be eclipsed by Dred Scott cannot help believing that he had that case—its already apparent consequences for the Court and its soon-to-be-played-out consequences for the Nation—burning on his mind.” Scalia’s linkage of Taney’s ill-fated undemocratic attempt to settle definitively the slavery question by judicial decree to the similar attempt by his own fellow justices to settle the equally morally fraught abortion issue was none-too-subtle. Lest someone miss the point, Scalia concluded: “[B]y foreclosing all democratic outlet for the deep passions this issue arouses, by banishing the issue from the political forum that gives all participants, even the losers, the satisfaction of a fair hearing and an honest fight, by continuing the imposition of a rigid national rule instead of allowing for regional differences, the Court merely prolongs and intensifies the anguish.”

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.


Guest Essayist: Scot Faulkner

The cascade of events leading to John Brown’s Harpers Ferry raid, and 700,000 dead on countless Civil War battlefields, began with a cynical ploy by Illinois Senator Stephen Douglas to help land speculators and political donors.

America’s founding was an intricately crafted series of compromises and rules of engagement to balance regional interests. One of the fundamental points of conflict was slavery.

Slavery was integral to the economic vitality and culture of southern states. As America expanded westward, increasingly complex arrangements maintained the North/South regional political balance.  Western settlers quickly became a third regional interest.

For decades, three titans of the U.S. Senate debated and compromised to keep America united: Daniel Webster representing Northern interests, John C. Calhoun representing Southern interests, and Henry Clay representing the West. Their agreements were tested as the Louisiana Purchase, and then the Mexican War, created vast land masses for settlement, economic development, and political power.

Slavery became the epicenter of regional rivalries. The South wanted to maintain parity in the Senate, balancing the admission of each new free state with a new slave state. The Missouri Compromise of 1820 and the Compromise of 1850 maintained the political balance while avoiding a confrontation over slavery. Most Americans, even Southerners, hated the institution. They hoped that slavery, if left alone, would somehow fade away over decades to come.

In the minority were Northern abolitionists, who wanted to end slavery in their lifetime. There were also Southern slavery advocates, who hoped to expand slavery westward and even southward by annexing Caribbean and Central American lands to bolster their power. The moderates held off both factions until the lure of land speculation, government contracts, and quick profits was added to the mix.

It began with the proposed transcontinental railroad to California. Southerners wanted the rail line to take a southern route. James Gadsden, President Franklin Pierce’s Ambassador to Mexico, negotiated the purchase of Mexican lands in what is now southern Arizona and New Mexico on December 30, 1853, to assure sufficient railroad rights-of-way through less mountainous terrain.

The North wanted a northern route that began at St. Louis, Missouri, and linked to Chicago, Baltimore, Philadelphia, and New York City. Most northern business leaders favored the northern route and felt that organization of the Nebraska Territory would facilitate this decision. However, rival business factions within Missouri wanted control of the route and the potential fortunes to be made from land speculation. Pro-slavery forces threatened to block any efforts to organize Nebraska because Missouri would then be surrounded on its west, east, and north by free states.

Senator Stephen Douglas was a key architect of the Compromise of 1850 and Chairman of the Senate Committee on Territories. Douglas already had presidential aspirations, having lost the 1852 Democratic Party nomination to Franklin Pierce. He was preparing for another run in 1856. He wanted to help his Missouri-based political and financial allies, while avoiding a confrontation with Southerners. [1]

On January 4, 1854, Douglas introduced the Kansas-Nebraska Act. This act repealed the Missouri Compromise of 1820 and undercut the settlement reached in the Compromise of 1850. It opened the entire territory to popular or “squatter” sovereignty for determining whether the territories would be free or slave. At this time the Nebraska Territory encompassed the entire Louisiana Purchase from the Missouri Compromise line to the Canadian border. Indiana Representative George Washington Julian, who would serve as the Chairman of the Committee on Organization for the 1856 Republican Convention, commented, “The whole question of slavery was thus re-opened.” [2]

The Congressional debate on the Kansas-Nebraska Act was tumultuous. Ohio Senator Salmon Chase published “The Appeal of the Independent Democrats in Congress to the People of the United States” in the New York Times on January 24, 1854. He declared the abandonment of the Missouri Compromise a “gross violation of a sacred pledge” and an “atrocious plot” to convert free territory into a “dreary region of despotism, inhabited by masters and slaves.” [3]

Northerners, and many Westerners, felt Southern politicians were dealing in bad faith. The “Nebraska Act” was viewed as a bold Southern power grab that threatened the nation’s future. Protests against the “Nebraska Act” spread throughout the North. Highly charged emotions fractured the Democratic Party, destroyed the Whig Party, and launched the Republican Party. After sixty-seven years, America’s civic culture was falling apart.

The Kansas-Nebraska Act passed the Senate in March and the House of Representatives in May. President Pierce signed the bill into law on May 30, 1854.

New York Senator William H. Seward responded to victorious Southern Senators by stating, “Since there is no escaping your challenge, I accept it in behalf of the cause of freedom. We will engage in a competition for the virgin soil of Kansas, and God give the victory to the side which is stronger in numbers as it is in right.” [4]

Both pro-slavery and anti-slavery forces moved into the Kansas territory, engaging in brutal guerrilla warfare over the next five years. This sporadic civil war became known as “Bleeding Kansas.” It even spilled into the U.S. Senate Chamber. On May 22, 1856, South Carolina pro-slavery Democrat Representative Preston Brooks assaulted Massachusetts anti-slavery Republican Senator Charles Sumner at his desk, bludgeoning him into unconsciousness. [5]

The regional civil war that erupted among Kansas settlers attracted the attention of John Brown, a key leader within the abolitionist movement. Wealthy and politically connected abolitionists funded and armed Brown, his many sons, and a growing number of paramilitary units to enter the Kansas maelstrom.

Kansas became a killing ground, and a proving ground for Brown’s violent approach to ending slavery. The nation had embarked on a path leading to the most cataclysmic event in American history.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


[1] Mayer, George H., The Republican Party 1854-1966 Second Edition (Oxford University Press, New York, 1966) page 25.

[2] Julian, George Washington, Political Recollections; Anthology – America; Great Crises in Our History Told by its Makers; Vol. VII (Veterans of Foreign Wars, Chicago, 1925) page 212.

[3] McPherson, James, Battle Cry of Freedom (Oxford University Press, New York, 1988) page 124.

[4] Ibid., page 145.

[5] Ibid., page 150.

Guest Essayist: James D. Best

On the evening of February 27, 1860, New Yorkers paid an exorbitant twenty-five cents to listen to a commonplace politician from some prairie state. The man had a reputation as a storyteller extraordinaire. Everyone expected to be entertained; few took the speaker seriously as a presidential candidate. Abraham Lincoln had earned a modicum of fame from his debates with Senator Stephen Douglas two years earlier, but he had lost that race, and most believed the fledgling Republican Party would never nominate a loser. In fact, many wondered how this roughhewn storyteller had wangled an invitation to a lecture series meant to expose serious candidates to the New York elite. Homespun yarns might draw crowds in the bucolic West, but New York City demanded a more elevated style of speechmaking.

The Republican Party was new and had failed in running John C. Frémont, a national hero, for president in 1856. Lincoln’s chances of ascending to the presidency under the Republican banner were slight. Lincoln, of course, had other plans. That afternoon, he had his photograph taken by Mathew Brady, and he intended his evening speech to be historic. It was. Lincoln often said that Brady’s photograph and his Cooper Union address propelled him to the presidency.

What was so great about his speech at the Cooper Union? It was earth-moving because it was highly unusual. It was a call for his party to stand on principle—God’s principles, the Founders’ principles, and the founding principle of the Republican Party—the abolition of slavery.

Lincoln wanted to debunk the Democrats’ claim that the Founders would have supported the extension of slavery into the territories, so he presented a scholarly review of the voting records of the Constitution’s thirty-nine signers, showing that twenty-one of them had voted for bills restricting slavery in the territories, and sixteen had left no record. Only two had cast votes that supported the Democrats’ contention.

His speech had started slowly, but as it picked up momentum, the energy in the hall lifted until the excited audience waited on the edge of their seats for the next opportunity to clap, yell, and bang out a rhythm with their shoes. Lincoln gave them plenty of opportunities. Below is a highly abridged selection from his speech.

We hear that you will not abide the election of a Republican president! In that event, you say you will destroy the Union; and then, you say, the great crime of having destroyed it will be upon us!

That is cool. A highwayman holds a pistol to my ear, and mutters through his teeth, ‘Stand and deliver, or I shall kill you and then you will be a murderer!’

What the robber demands of me—my money—is my own; and I have a clear right to keep it; but my vote is also my own; and the threat of death to me to extort my money and the threat to destroy the Union to extort my vote can scarcely be distinguished.

What will convince slaveholders that we do not threaten their property? This and this only: cease to call slavery wrong and join them in calling it right. Silence alone will not be tolerated—we must place ourselves avowedly with them. We must suppress all declarations that slavery is wrong, whether made in politics, in presses, in pulpits, or in private. We must arrest and return their fugitive slaves with greedy pleasure. The whole atmosphere must be disinfected from all taint of opposition to slavery before they will cease to believe that all their troubles proceed from us.

All they ask, we can grant, if we think slavery right. All we ask, they can grant if they think it wrong.

Right and wrong is the precise fact upon which depends the whole controversy.

Thinking it wrong, as we do, can we yield? Can we cast our votes with their view and against our own? In view of our moral, social, and political responsibilities, can we do this?

Let us not grope for some middle ground between right and wrong. Let us not search in vain for a policy of don’t care on a question about which we do care. Nor let us be frightened by threats of destruction to the government.

Prolonged applause kept Lincoln silent for several minutes before he delivered his final sentence.

“Let us have faith that right makes might, and in that faith, let us, to the end, dare to do our duty as we understand it!”

When Lincoln stepped back from the podium, the Cooper Union Great Hall exploded with noise and motion. Everybody stood. The staid New York audience cheered, clapped, and stomped their feet. Many waved handkerchiefs and hats.

Great leaders speak and act on principle. People will not only follow a principled leader; they will labor mightily in a principled cause.

Abraham Lincoln went on to win the Republican nomination, the presidency, the Civil War, and the abolition of slavery.

Click here to read the text of Abraham Lincoln’s Address At Cooper Union.

(To learn more, I recommend Lincoln at Cooper Union by Harold Holzer.)

James D. Best, author of Tempest at Dawn, a novel about the 1787 Constitutional Convention; Principled Action: Lessons From the Origins of the American Republic; and the Steve Dancy Tales.


Guest Essayist: Scot Faulkner

In the 1850s, America’s civic culture was crumbling. Decades of political compromise and avoidance on the issue of slavery had maintained an uneasy peace. The Mexican-American War (1846-1848) added over 500,000 square miles to the U.S. and rekindled sectional competition. Ralph Waldo Emerson prophesied, “The United States will conquer Mexico, but it will be as the man swallows the arsenic, which brings him down in turn. Mexico will poison us.” [1]

The carefully orchestrated balance between Northern/Free states and Southern/Slave states in the U.S. Senate had only been maintained by tightly controlling the admission of new states to the Union. In 1820, Missouri was ready to be admitted as a “slave” state. Its Senate votes were to be offset by carving the new “free” state of Maine out of the northern district of Massachusetts. A key part of this Missouri Compromise of 1820 was to limit expansion of “slave states” to below a line, parallel 36°30′ north. However, with the annexation of Texas and then the Mexican War, California and many other potential states clamored for admission into the Union, reawakening the slumbering sectional strife and the “free” versus “slave” state controversy.

In 1850, a new Compromise was approved. This was a package of five separate bills that maintained the North/South balance in the Senate by allowing California to join the Union as a free state, even though its southern border dipped below the 1820 slave demarcation line. In exchange, the New Mexico and Utah territories were organized with no congressional restriction on slavery, leaving the question to their settlers. Other provisions balanced ending the slave trade in Washington, D.C. with establishing the Fugitive Slave Act, which required local officials in the North to aid in the capture and return of escaped slaves.

The Compromise of 1850 was the last great moment for the Whig Party. This party rose as a counter to the Jacksonian Democrats in the late 1830s. It thrived by broadly promoting westward expansion without a conflict with Mexico, supporting transportation infrastructure projects, and protecting fledgling American businesses with tariffs. The Whigs also benefited from having stellar leaders in the U.S. Senate, like Henry Clay and Daniel Webster, and attracting popular war heroes to run as their presidential candidates. The reawakening of sectional competition ended their brief moment of political ascendancy.

In 1848, the Whig Party split over slavery into pro-freedom/anti-Mexican War “Conscience Whigs” and pro-slavery “Cotton Whigs” (“lords of the lash” allied with “lords of the loom”). [2] They still stumbled across the 1848 Presidential finish line with Mexican War hero Zachary Taylor. Unfortunately, food poisoning led to Taylor’s death on July 9, 1850, ushering in the Presidency of anti-immigrant Millard Fillmore and the “Know-Nothing” nativist movement. In 1852, the highly divided Whig Party needed 53 roll call votes to nominate another war hero, Winfield Scott, only to lose in a landslide to pro-slavery Democrat Franklin Pierce. Rep. Alexander Stephens, a “Cotton Whig,” pronounced, “the Whig Party is dead.” [3]

The implosion of the Whigs, and the new sectional rivalry, launched new parties and factions within parties. These reflected the wide range of opinions on slavery, from zealous support for extending slavery everywhere possible to demands for its immediate abolition everywhere. In the middle were factions that wanted to maintain the Union through various forms of compromise, allowing slavery in some places, but not others.

This cauldron of factionalism came to a boil in 1854 with consideration of the Kansas-Nebraska Act, which was intended to void the carefully crafted Compromises of 1820 and 1850.

Anti-slavery “Free Soil” party activists, along with anti-slavery “Conscience Whigs” and “Barnburner” Democrats, held anti-Nebraska meetings and rallies across the North. One of the organizers of these anti-Nebraska protests was Alvan E. Bovay.

Bovay (July 12, 1818 – January 13, 1903) was a successful New York lawyer and an early abolitionist. He and his wife moved to Ripon, Wisconsin, in 1850, where he helped establish Ripon College. [7]

Bovay was an active Whig, but was disappointed in the Party’s disarray over slavery. He felt Party leaders had lost their way. Only a new party, uniting anti-slavery factions across the political spectrum, would resolve the divisiveness facing America. In 1852, Bovay visited his friend, Horace Greeley, the editor of the New York Tribune, to discuss a new party. They agreed a new party deserved a new name – Republican. [8] Launching this new party would have to wait until the Nebraska bill ignited widespread calls for strategic political change.

On March 1, 1854, Bovay announced an anti-Nebraska protest meeting in the local Ripon newspaper:

NEBRASKA. A meeting will be held at 6:30 o’clock this Wednesday evening at the Congregational church in the village of Ripon to remonstrate against the Nebraska swindle. [9]

After the meeting, Bovay posted in newspapers:

THE NEBRASKA BILL. A bill expressly intended to extend slavery will be the call to arms of a Great Northern Party, such as the country has not hitherto seen, composed of Whigs, Democrats, and Freesoilers; every man with a heart in him united under the single banner cry of “Repeal!” “Repeal!” [10]

Wisconsin anti-slavery activists became further inflamed when, on March 9, 1854, protesters, led by abolitionist Sherman Booth, stormed a Milwaukee, Wisconsin jail to rescue Joshua Glover. Glover was an escaped slave awaiting extradition under the Fugitive Slave Act of 1850. A federal judge had refused to hear his appeal. [11]

Bovay then organized a second protest meeting:

The Nebraska bill. A bill expressly intended to extend and strengthen the institution of slavery has passed the Senate by a very large majority, many northern Senators voting for it and many more sitting in their seats and not voting at all. It is evidently destined to pass the House and become law, unless its progress is arrested by a general uprising of the north against it.

Therefore, We, the undersigned, believing the community to be nearly quite unanimous in opposition to the nefarious scheme, would call upon the public meeting of citizens of all parties to be held at the school house in Ripon on Monday evening, March 20, at 6:30 o’clock, to resolve, to petition, and to organize against it. [12]

Bovay and sixteen others met at the schoolhouse and decided to organize a Wisconsin state convention to endorse candidates for state and federal office. Bovay worked with anti-slavery Democrat Edwin Hurlbut to develop a platform for the “Republican Party.” They organized and managed a state convention in Madison, Wisconsin, on July 13, 1854. That convention nominated the first slate of Republican candidates for that fall’s local elections. [13]

The Republican Party spread across America, coalescing diverse factions into a new political movement that would dominate American politics for the next 76 years, winning 14 of the next 19 Presidential elections. It also signaled the end of 36 years of political obfuscation on the issue of slavery in America, ultimately leading to the Civil War.

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.


NOTES

[1] McPherson, James, Battle Cry of Freedom (Oxford University Press, New York, 1988) page 51.

[2] Ibid., page 60.

[3] Ibid., page 118.

[4] Mayer, George H., The Republican Party 1854-1966 Second Edition (Oxford University Press, New York, 1966) page 25.

[5] Julian, George Washington, Political Recollections; Anthology – America; Great Crises in Our History Told by its Makers; Vol. VII (Veterans of Foreign Wars, Chicago, 1925) page 212.

[6] Op. Cit., McPherson, page 124.

[7] https://www.jsonline.com/story/opinion/crossroads/2016/09/24/messitte-republican-return-ripon/90978702/

[8] Pedrick, Samuel M., The life of Alvan E. Bovay, founder of the Republican Party in Ripon, Wis., March 20, 1854 (Commonwealth Partners, 1950).

[9] Unity Weekly; Unity Publishing Company, Chicago, Illinois; Volume LXI, Number 16, June 18, 1908, pages 245-246.

[10] Ibid.

[11] Legler, Henry E., Leading Events of Wisconsin History. Milwaukee: Sentinel, 1898. Pages 226-229.

[12] Ibid.

[13] Op. Cit., Mayer, page 26.

Guest Essayist: Daniel A. Cotter

Sent by President Millard Fillmore, Commodore Matthew C. Perry went on an expedition to Japan in 1853 to persuade, even pressure, Japan to end its policy of isolation and become open to trade and diplomacy with the United States. Japan signed a treaty with the U.S. in 1854, agreeing to trade and an American consulate. The Treaty of Kanagawa was the first by Japan with a Western nation. Among many accomplishments, Commodore Perry devised a naval apprentice system, assisted the Naval Academy, worked to develop naval officers to their fullest potential, and helped found the New York Naval Lyceum.

Commodore Perry had a long and distinguished career in the United States Navy, serving in the War of 1812 and commanding ships in the Mexican-American War of 1846-1848, and at the end of that career he was instrumental in opening Japan to the West.

Perry was chosen for the mission in 1852, when President Fillmore directed him to open the ports of Japan to American trade. Perry embarked on his voyage on November 24, 1852, from Norfolk, Virginia. After various stops along the way, including at Cape Town and Hong Kong, Perry and his contingent arrived at Uraga on July 8, 1853. The Japanese demanded that he proceed to Nagasaki, the only port open to foreigners, but Perry refused. He warned the Japanese that if forced to fight, they would suffer immense damage and the Americans would conquer them.

After some delays, caused by the illness of the Japanese shogun Tokugawa Ieyoshi and by debate over how to respond to Perry’s demands, the Japanese decided to accept his letter. Perry was allowed to land near Uraga, at Kurihama. He presented his letter to the Japanese delegates and departed for Hong Kong.

Perry returned approximately six months later, rather than after the year he had promised, and following negotiations the Convention of Kanagawa was signed on March 31, 1854. Perry signed on behalf of America. Signed under threat of force, the Convention contained twelve articles, including a provision that it be ratified within eighteen months. The treaty was written in English, Japanese, Chinese, and Dutch; the text was eventually ratified by Emperor Komei, and ratification was completed on February 21, 1855.

Perry earned the title “Father of the Steam Navy” for his advocacy of modernizing the United States Navy and of wider use of the steam engine.

Perry’s effort to open Japan to the West for trade and diplomatic relations after many years of isolation was an important achievement, and the day he landed and presented his letter, July 14, 1853, is an important date in American history.

Among other provisions, the treaty specified which ports would be opened and stipulated that Japan would extend to the United States any advantages it might grant to any other foreign nation in the future.

Perry returned to the United States in 1855 and Congress awarded him the sum of $20,000 for his work in Japan.

Dan Cotter is Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (published April 2019, Twelve Tables Press). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.


Guest Essayist: Dan Morenoff

The Treaty of Guadalupe Hidalgo was signed shortly after James Wilson Marshall discovered gold flakes in the area now known as Sacramento. Border disputes would continue, but the treaty ended the Mexican-American War (1846-1848) and added a large swath of western territory, broadly expanding the United States. The ceded lands included all of present-day California, Nevada, and Utah, most of Arizona and New Mexico, and parts of Colorado and Wyoming, and the treaty confirmed the American claim to Texas south to the Rio Grande. The new lands acquired from Mexico stirred sectional passions about the expansion of slavery in the West that, though temporarily settled by the Compromise of 1850, helped lead to the Civil War.

Americans almost never think about the Mexican-American War. We don’t often pause to consider its justifications or results. We may know that it served as the training ground of just about everyone who became famous in the Civil War, but details of how, where, or why they fought that prequel might as well be myth. Most of us have no idea how many of our place names have their roots in its participants. Outside of a little-understood line in the Marine Hymn, we almost never hear anything about how we won it. And almost no American knows anything about the Treaty of Guadalupe-Hidalgo that formally ended it: not its name, not its terms, not who signed it, and not the drama that went into creating a Mexican government willing to enter it.

We have good reason to studiously remember to forget these details.  Guadalupe-Hidalgo was an unjust treaty forced on a weaker neighbor to conclude our least-just war. It was also hugely consequential, and we spectacularly benefited from imposing it on Mexico.

The War Itself

The war’s beginnings lay in the unsettled details of Texas’s War of Independence. Yes, Mexican President Santa Anna had signed the so-called Treaties of Velasco while held prisoner after the Texian victory at San Jacinto in 1836, but Mexico both: (a) refused to ratify them (so never formally recognizing Texas’s independence); and (b) simultaneously argued that the unrecognized republic’s southern border lay at the Nueces River, about a hundred and fifty (150) miles north of the Rio Grande, the boundary Texas insisted the Treaties of Velasco had established.

All this came to matter when James K. Polk won the Presidency in 1844. He had campaigned on a series of promises that, for our purposes, included: (a) annexing Texas; and (b) obtaining California (and parts of five (5) other modern states) from Mexico.[1], [2] Part one came early and easily, as he negotiated Texas’s accession to the Union in 1845. But when the Mexican government refused to meet with his emissary sent to negotiate the purchase of the whole northern part of their country,[3] in pursuit of a fallback plan, President Polk sent an army south to resolve the remaining ambiguity of the Texas-Mexico border. That army (contemporaneously called, with greater honesty than later sources usually admit, the “Army of Occupation”) under the command of Zachary Taylor went to “guard” the northern shore of the Rio Grande. It took some doing, including the shelling of Matamoros (a major Mexican port city on the uncontested southern shore of the river), but Taylor eventually managed to provoke a Mexican response. Mexican forces crossed the river to drive back the attacking Americans, in the process prevailing in the Thornton Skirmish and besieging Fort Brown.[4]

President Polk styled these actions a Mexican invasion of America that had killed American soldiers on American soil. On that basis, he sought and received a Congressional declaration of war. Days after receiving it (and, in a time before even the telegraph, the distances involved and the coordination with naval forces in the Pacific make that timing revealing), American forces invaded Mexico in numerous theaters across the continent, seizing modern-day Colorado, New Mexico, Arizona, and Nevada; simultaneously, American settlers declared the “independence” of the Republic of California (an “independence” that lasted no more than the twenty-five (25) days between their ineffective declaration and the arrival of the American military in Sonoma Valley).

But achieving Polk’s aims didn’t end the war. Mexico didn’t accept the legitimacy or irreversibility of any of this. Polk ordered Taylor’s army south, to take Monterrey and press into the Mexican heartland. Eventually, when that, too, failed to alter Mexican recalcitrance, President Polk sent another army, under the command of Winfield Scott, to land at Veracruz and follow essentially the same route Cortez had taken toward Mexico City. Like the conquistadors of old, that force (including Marines) would head for “the Halls of Montezuma” (the Aztec emperor).

All this was controversial immediately. Abraham Lincoln condemned the entire escapade from the floor of Congress as “unnecessarily and unconstitutionally commenced.” Henry Clay called it an act of “unnecessary and [ ] offensive aggression.” Fresh out of West Point, Ulysses S. Grant fought in the war, but after his Presidency would reflect that “I do not think there was ever a more wicked war than that waged by the United States on Mexico.” And as the war raged, Washington, D.C. became consumed with the question of what all this new territory would do to the delicate balance established by the Missouri Compromise; was the whole war just a scheme to create new slave states? It was only 1846, the war’s outcome still unclear, when Pennsylvania Congressman David Wilmot sought to amend an appropriations bill (meant to authorize the funds to pay for peace) with “the Wilmot Proviso,” a bar on slavery in any formerly Mexican territory.[5]

Victory and Then What?

Wicked or not, the plan worked.  American forces promptly took Mexico City in September of 1847.

As Mexico City fell (its President fleeing and the Foreign Minister declared acting President), the collapsing Mexican government proposed terms of peace (under which Mexico would retain all of modern New Mexico, Arizona, Southern California, and much of Nevada), designed to further inflame Washington’s emerging divide by giving the U.S. only territory north of the free/slave line established by the Missouri Compromise. But the Mexican government’s ability to deliver even those terms (which would have reversed enormous battlefield losses) was highly suspect; Jefferson Davis, who had fought in several of the war’s battles before being appointed a Mississippi Senator, warned Polk that any Mexican emissaries coming to Washington to negotiate on that basis would see the talks outlast their government’s survival, and that the negotiators would be labelled traitors and murderers should they ever try to return home. And he was pretty clearly right: the people of Mexico were not happy about any of this: not with the loss of more than half their territory, not with the conquest of their capital, and not with the collapse of the government that was supposed to prevent anything of the sort from happening.

So how do you end a war and go home when there’s no one left to hand back the pieces to?

That was the rub. The Americans in Mexico City didn’t want to stay. The whole war had been controversial, and the prospect of America occupying the entirety of Mexico sat particularly poorly with the war’s opponents. The Wilmot Proviso hadn’t gone away either, and Senators were already arguing that it would need to be incorporated into any peace treaty with Mexico (whatever that meant). And there was no one with the ability to clearly speak for Mexico to negotiate anything anyway.

Eventual Treaty with Mexico’s Acting Government

It would take almost all of the remainder of Polk’s presidency to resolve these problems. Eventually, the acting Mexican government became willing to sign what America had decided the terms of peace required.  Those terms?  America would pay Mexico $15 million.[6]  America would assume and pay another $3.25 million of Mexico’s debts to Americans.[7]  As the GDP of California, alone, is estimated at $3.1 trillion for 2019, the purchase price was a bargain, to say the least. Mexico would renounce all rights to Texas and essentially all the rest of the modern American West.  All of California (down to the port of San Diego, but not Baja), Nevada, New Mexico, Utah, and Colorado (and almost all of Arizona)[8] would so change hands; the Republic of Texas’s territories were substantially larger than those of the modern State of Texas, so this renunciation also secured American title to parts of Oklahoma, Kansas, and Wyoming.

Aftermath

So America became the transcontinental republic Jefferson had dreamed of. Polk got to go home with his mission accomplished.[9]  America secured title to California weeks after settlers discovered what would become the world’s largest known gold deposits (to that date) at Sutter’s Mill, but before word of that discovery had made it to either Mexico City or Washington.

But America acquired something else along with vast territories and unforeseen riches. It also acquired a renewal of fights over what to do with seemingly limitless, unsettled lands and how to accommodate the evil of slavery within them. Those fights, already underway before Mexico City fell, would spill directly into Bleeding Kansas and the Civil War.

Dan Morenoff is Executive Director of The Equal Voting Rights Institute.

[1] The complete set also included: (a) resolving the border with Canada of America’s Oregon territory; (b) reducing taxes; (c) solving the American banking crises that had lingered for decades; and (d) leaving office without seeking reelection.  Famously, Polk followed through and completed the full set.

[2] The targeted territory included lands now lying in California, Nevada, Arizona, New Mexico, Utah, and Colorado.

[3] That was John Slidell, for whom the New Orleans suburb is named.

[4] Mexican forces thereby killed Major Jacob Brown, arguably the first American casualty of the war (for whom both the fort at issue and the town it became – Brownsville, Texas – were named).

[5] It failed in the House at the time, but exposed the raw power plays that success in (or perhaps entry into) the Mexican-American War would bring to the forefront of American politics.

[6] Depending on how one chooses to calculate the current value of that payment, it could be scored as the equivalent in modern dollars of $507.2 million, $4.04 billion, $8.86 billion, or $9.04 billion.  https://www.measuringworth.com/dollarvaluetoday/?amount=15000000&from=1848.

[7] The same approaches suggest this worked out to an assumption, in today’s dollars, of another $109.9 million, $876.1 million, $1.91 billion, or $1.96 billion in Mexican obligations.  https://www.measuringworth.com/dollarvaluetoday/?amount=3250000&from=1848.

[8] Later, the US would separately negotiate the Gadsden Purchase to acquire a mountain pass through which a railroad would one day run.

[9] He promptly died on reaching his home in Tennessee; some have noted that this means he was not only the sole President to do everything he promised, but also the perfect ex-President.

Guest Essayist: Tony Williams

On January 24, 1848, James Marshall was overseeing workers digging a millrace for a sawmill for his employer, John Sutter, along the South Fork of the American River in the Sierra foothills northeast of Sutter’s Fort (near modern Sacramento). While he was inspecting the project, the morning sun reflected off shiny pieces of yellow metal. Curious, he gathered a few pieces to examine and showed them to the workers.

The group ran some tests on the metal to determine whether it was gold. They hammered the malleable metal into thin sheets and then boiled it in lye to clean it. Marshall was sure he had found gold but kept his composure as he rode to share the news with Sutter, and the two tested it again with nitric acid and checked its density. Marshall smiled and told the group (which included a female cook), “Boys, I believe I have found a gold mine.”

Marshall and Sutter began hunting for gold but shockingly did not attempt to hide the discovery. Sam Brannan owned a general store near Sutter’s fort (modern Sacramento) and developed a scheme to get rich by selling provisions to miners. He filled a large jar with gold dust and nuggets and traveled to San Francisco.

Brannan went about the village showing its residents the contents of the jar and enticing them to become miners by yelling, “Gold! Gold! Gold from the American River!” They did not need much encouragement. In the words of one person: “A frenzy seized my soul; unbidden my legs performed some entirely new movements of polka steps—I took several….Piles of gold rose up before me at every step; castles of marble, dazzling the eye….In short, I had a very violent attack of the gold fever.” The village emptied as people dropped everything and raced for the river.

A little over a week after Marshall’s discovery, representatives of the United States and Mexico signed the Treaty of Guadalupe-Hidalgo. The treaty ended the Mexican-American War and delivered the West, including California, to the United States in exchange for $15 million. Because of American property rights, miners needed only to work the land to lay claim to the property, including its valuable minerals.

The miners used a variety of methods for finding gold. Most were inexperienced and initially relied on the simple method of panning. Others used a cradle, a similar device that could sift through more material. They soon built sluices, running water over dirt and collecting the dense gold that settled against the grates at the bottom. Later, enterprising individuals with the means introduced hydraulic mining, using pressurized water cannons to blast hillsides into slurry that ran through sluices. Miners became amateur geologists and searched for ancient streambeds that might hold massive gold deposits.

Gold fever induced a gold rush that gripped Americans as well as thousands of others from around the world. The telegraph, letter writers, and travelers spread the news quickly. The New York Herald announced the discovery of gold to readers with the astonishing news that, “There are cases of over a hundred dollars being obtained in a day from the work of one man” (at a time when workers made perhaps $500 a year).

In his December Annual Message to Congress, President James Polk added his voice to the frenzy when he stated, “The accounts of the abundance of gold in that territory are of such an extraordinary character as would scarcely command belief were they not corroborated by the authentic reports of officers.”

Gold seekers known as Argonauts traveled to the American River from all over the world. They came from such distant places as Mexico, Chile, France, Hawaii, China, and Australia. In many cases, they risked everything they had voyaging thousands of miles for the chance to become fabulously wealthy. Ship captains had to find ways to prevent their crews from joining the passengers rushing to the mining camps.

More than 80,000 Americans of diverse backgrounds but with the same goal in mind headed west as part of the gold rush. Many traveled overland for months along the Oregon Trail and other well-beaten paths, hunting buffalo, trading with Native Americans, and risking cholera and starvation. Those of greater means traveled aboard Yankee clippers or other ships that sailed 15,000 miles around Cape Horn, with its treacherous waters and storms. Still others sailed to Panama, crossed the isthmus, where tropical diseases claimed many, and then booked passage up the Pacific coast.

Besides the obsession with gold, the one thing the Argonauts had in common was that almost all of them planned to get rich and return home. Very few planned to stay and build a permanent settlement. Almost 90 percent of the Argonauts were men.

The reality of the gold camps rarely matched people’s dreams. Many found only modest amounts of gold or had to settle for manual labor.  Any wealth was rapidly consumed by goods sold for astronomically inflated prices. Tensions between Americans and foreigners rose to a fever pitch due to nativism.

San Francisco grew rapidly, though it had neither the government nor the civil institutions to handle such growth. Saloons and gambling houses, where gold easily acquired was just as easily lost, were ubiquitous. Crime, ethnic gangs, and vice dominated the streets of the city. Justice was handled by vigilance committees whose summary executions amounted to frontier justice that was little more than mob rule, with only the slightest pretense of due process.

California grew so rapidly due to the gold rush that it skipped the territorial stage and immediately applied for statehood. In September 1849, 48 delegates attended a constitutional convention in Monterey and drafted a state constitution and a bill of rights that banned slavery. The Pathfinder of the West, John Fremont, brought the constitution to Washington, D.C., where Congress considered it. Former Vice President and then-U.S. Senator John C. Calhoun led the southern opposition to it because it banned slavery. He argued that only Congress could decide the question.

The contentious issue was resolved only by the fragile Compromise of 1850 that included making California a free state and passing the Fugitive Slave Act. The gold rush thereby indirectly contributed to the growing sectionalism of the 1850s that led to the Civil War. The gold rush also helped create the American West and today’s prosperous sunbelt.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. He is currently writing a book on the Declaration of Independence.

Guest Essayist: David F. Forte

The tall, awkwardly boned, young Illinois legislator rose to speak. His thick hair, impervious to the comb, splayed over his head. The crowd at the Young Men’s Lyceum of Springfield leaned forward. They did not know it, but they were about to hear a prophet.

The title of Lincoln’s address was “The Perpetuation of Our Political Institutions.” It could have been subtitled, “Will this nation survive?” From the moment his high-pitched voice began to address the audience, Lincoln’s passionate embrace of the Constitution set his life out on an arc that would carry him a quarter century later to Gettysburg when he asked whether a nation conceived in liberty “can long endure.” He was asking it even now.

Yet 1838 was not such a bad year. True, the nation was struggling through the effects of the Panic of 1837, and it would be a few years before a recovery could take hold. But the wrenching crises of slavery and nullification that nearly severed the Union were in the past—at least many hoped so. The Missouri Compromise of 1820, it was thought, had settled the geographical boundary that separated the slave from the free. And in 1832-33, Andrew Jackson had squelched South Carolina’s attempt at nullification. The trio of Daniel Webster, John C. Calhoun, and most especially Henry Clay—Lincoln’s idol—had fashioned peace through compromise. The Annexation of Texas, the Mexican War, the Compromise of 1850, the Kansas-Nebraska Act, Dred Scott, and John Brown were still in the unknowable future.

But Lincoln was not mollified. He saw the nation falling apart right then and there. Vigilantism and brutal violence were everywhere rampant. There was a contagion of lynching. Lincoln reviewed what all knew had been happening. First, five gamblers in Vicksburg were strung up. “Next, negroes suspected of conspiring to raise an insurrection were caught up and hanged in all parts of the State; then, white men supposed to be leagued with the negroes; and finally, strangers from neighboring States, going thither on business, were in many instances subjected to the same fate. Thus went on this process of hanging, from gamblers to negroes, from negroes to white citizens, and from these to strangers, till dead men were seen literally dangling from the boughs of trees upon every roadside, and in numbers almost sufficient to rival the native Spanish moss of the country as a drapery of the forest.” A Negro named McIntosh, accused of murdering a white man, was tied to a tree and burnt to death. Abolitionist editors were slain and their printing presses thrown in the river. This, Lincoln said, was mob law.

The founders, who gave us a blessed government, were now gone– “our now lamented and departed race of ancestors,” he told his listeners. The last had only recently passed. (It was James Madison, the “father of the Constitution,” who died in 1836). What can we do without them? Lincoln asked. What then can save us from dissolution, from turning ourselves into a miserable race, nothing more than a vengeful mob?

Lincoln paused, and then declared, “The answer is simple. Let every American, every lover of liberty, every well-wisher to his posterity swear by the blood of the Revolution never to violate in the least particular the laws of the country, and never to tolerate their violation by others. As the patriots of seventy-six did to the support of the Declaration of Independence, so to the support of the Constitution and laws let every American pledge his life, his property, and his sacred honor — let every man remember that to violate the law is to trample on the blood of his father, and to tear the charter of his own and his children’s liberty.”

On that day, January 27, 1838, Lincoln set his course. It was Lincoln’s charter of the rest of his public life. The Constitution. Only reverence for it and for the law can keep us from becoming divided tribes ruled by our passions, he told the crowd in the Young Men’s Lyceum, and other crowds in the coming decades. Reverence for the Constitution saved the Union in the bitter election of 1800. It energized Henry Clay and his political enemy, Andrew Jackson, to keep the states tied together. It steeled Lincoln in his perseverance to see the Civil War through to victory.

It can save us yet.

David F. Forte is Professor of Law at Cleveland State University, Cleveland-Marshall College of Law, where he was the inaugural holder of the Charles R. Emrick, Jr. – Calfee Halter & Griswold Endowed Chair. He has been a Fulbright Distinguished Chair at the University of Warsaw and the University of Trento. In 2016 and 2017, Professor Forte was Garwood Visiting Professor at Princeton University in the Department of Politics. He holds degrees from Harvard College, Manchester University, England, the University of Toronto and Columbia University.

During the Reagan administration, Professor Forte served as chief counsel to the United States delegation to the United Nations and alternate delegate to the Security Council. He has authored a number of briefs before the United States Supreme Court and has frequently testified before the United States Congress and consulted with the Department of State on human rights and international affairs issues. His advice was specifically sought on the approval of the Genocide Convention, on world-wide religious persecution, and on Islamic extremism. He has appeared and spoken frequently on radio and television, both nationally and internationally. In 2002, the Department of State sponsored a speaking tour for Professor Forte in Amman, Jordan, and he was also a featured speaker to the Meeting of Peoples in Rimini, Italy, a meeting that gathers over 500,000 people from all over Europe. He has also been called to testify before numerous state legislatures across the country. He has assisted in drafting a number of pieces of legislation both for Congress and for the Ohio General Assembly dealing with abortion, international trade, and federalism. He has sat as acting judge on the municipal court of Lakewood, Ohio, and was chairman of the Professional Ethics Committee of the Cleveland Bar Association. He has received a number of awards for his public service, including the Cleveland Bar Association’s President’s Award, the Cleveland State University Award for Distinguished Service, the Cleveland State University Distinguished Teaching Award, and the Cleveland-Marshall College of Law Alumni Award for Faculty Excellence. He served as Consultor to the Pontifical Council for the Family under Pope Saint John Paul II and Pope Benedict XVI. In 2004, Dr. Forte was a Visiting Professor at the University of Trento. He has given over 300 invited addresses and papers at more than 100 academic institutions.

Professor Forte was a Bradley Scholar at the Heritage Foundation, Visiting Scholar at the Liberty Fund, and Senior Visiting Scholar at the Center for the Study of Religion and the Constitution at the Witherspoon Institute in Princeton, New Jersey. He has been President of the Ohio Association of Scholars, was on the Board of Directors of the Philadelphia Society, and is also adjunct Scholar at the Ashbrook Institute. He is Vice-Chair of the Ohio State Advisory Committee to the United States Commission on Civil Rights.

He writes and speaks nationally on topics such as constitutional law, religious liberty, Islamic law, the rights of families, and international affairs. He served as book review editor for the American Journal of Jurisprudence and has edited a volume entitled Natural Law and Contemporary Public Policy, published by Georgetown University Press. His book, Islamic Law Studies: Classical and Contemporary Applications, has been published by Austin & Winfield. He is Senior Editor of The Heritage Guide to the Constitution (2006; 2d edition 2014), published by Regnery, a clause-by-clause analysis of the Constitution of the United States.

Guest Essayist: Tony Williams

Remember the Alamo! The Battle of San Jacinto and Texan Independence

In December 1832, Sam Houston went to Texas. He had been a soldier, Indian fighter, state and national politician, and member of the Cherokee. Beset by several failures, he sought a better life in Texas. On the way, Houston traveled to the San Antonio settlement with the frontiersman and land speculator Jim Bowie.

During the 1820s, thousands of Americans had moved to Texas in search of land and opportunity. The Mexican republic had recently won independence and welcomed the settlers to establish prosperous settlements under leaders such as Stephen F. Austin. These settlers were required to become Mexican citizens, convert to Catholicism, and free their slaves. The colony thrived, but Mexican authorities suspected the settlers maintained their American ideals and loyalties; in 1830, Mexico banned further immigration and cracked down on the importation of slaves.

In 1833, Houston attended a convention of Texan leaders, who petitioned the Mexican government to grant them self-rule. Austin presented the petition in Mexico City, where he was imprisoned indefinitely. Meanwhile, the new Mexican president, Antonio López de Santa Anna, took dictatorial powers and sent General Martin Perfecto de Cos to suppress Texan resistance. When Austin was finally released in August 1835, he asserted, “We must and ought to become part of the United States.”

Texans were prepared to fight for independence, and violence erupted in October 1835. When Mexican forces attempted to disarm the Texans at Gonzales by reclaiming a small cannon, volunteers rushed to the spot, raised a banner over the gun reading “Come and Take It,” and fired it into the Mexican ranks. Texans described it as their Battle of Lexington, and with it, the war for Texas independence began.

In the wake of the initial fighting, the Texans began to organize their militias to defend their rights with the revolutionary slogan “Liberty or Death!” Houston appealed to the Declaration of Independence and was an early supporter of an independent Texas joining the American Union.

Houston was appointed commander-in-chief of Texan forces. His fledgling army was a ragtag group of volunteers who were ill-disciplined, highly individualistic, and democratic. His strategy was to avoid battle until he could raise a larger army to face the Mexican forces, but he could barely control his men. He opposed an attack on San Antonio, but his men launched one anyway. On December 5, Texans assaulted the town and the fortified mission at the Alamo. Texan sharpshooters and infantry closed in on General Cos’s army, and despite the arrival of reinforcements, Cos surrendered on the fourth day; his army was permitted to march home with its weapons.

Santa Anna then brought an army to San Antonio and besieged the Alamo, held by 200 Texans under William Travis. The Texans positioned their men and cannons around the fort and begged Houston for more troops. Travis pledged to fight to the last man. James Fannin launched an abortive relief expedition from Goliad, 100 miles away, but had to turn back for lack of supplies. The men at the Alamo were on their own, save for one more American who came to join them.

Davy Crockett was a colorful frontiersman and former member of Congress who said, “You can go to hell, I will go to Texas.” Crockett arrived in San Antonio in February and went to the Alamo. He proudly fought for liberty and roused the courage of the defenders.

Before dawn on March 6, the Mexican army assaulted the mission in four columns from different angles. The defenders slaughtered the enemy with cannon blasts but still they advanced. The Mexicans scaled ladders and were picked off by sharpshooters. Soon, the attackers established a foothold on the walls and overwhelmed the defenders. The Mexicans threw open the gates for their comrades, and the Texans and Crockett retreated into the chapel. They made a last stand until the door was knocked down and nearly all inside were killed.

Santa Anna made martyrs and heroes of the men who fought for Texan independence at the fort. “Remember the Alamo!” became a rallying cry that further unified the Texans. Only a few days before, on March 2, the territory’s government had met in convention at Washington-on-the-Brazos and declared Texas an independent republic in a statement modeled on the Declaration of Independence. The delegates appealed to the United States for diplomatic recognition and aid in the war.

Later that month, James Fannin’s garrison of about 400 men was trapped by the Mexican army. The Texans courageously repelled several cavalry charges and fought through the night until they ran low on water and ammunition. The following day, they were forced to surrender, and Mexican forces executed the unarmed prisoners by firing four volleys into their ranks. The atrocity led to another rallying cry: “Remember Goliad!”

Houston only had 400 soldiers remaining and refused to give battle. Santa Anna chased the Texan government from Gonzales and terrorized civilians throughout the area with impunity. However, hundreds of Texans enthusiastically flocked to Houston’s camp, and he learned that Santa Anna’s force had only 750 men. Houston moved his army to the confluence of the Buffalo Bayou and San Jacinto River, where he deployed his force in the woods.

On April 20, the two armies squared off and engaged in an artillery duel, with the Texans firing their cannons, nicknamed the “Twin Sisters.” A group of Texan cavalry sallied out, disregarding an order merely to scout the enemy positions. The cavalry exchanged fire with the Mexicans and narrowly escaped back to their lines. Both sides retired and prepared for battle the following day.

On the morning of April 21, General Cos arrived and doubled the size of Santa Anna’s army, but his men were exhausted from their march and took an afternoon nap. Houston seized the moment and formed up his army. The Texans moved silently across the open ground until they began yelling “Remember the Alamo! Remember Goliad!” The shocked Mexican army roused itself and quickly formed up. The Texans’ “Twin Sisters” cannons blasted away, and the infantry drove the Mexicans into the bayou while the cavalry flanked and surrounded them. In a little over 20 minutes, some 630 Mexican soldiers were killed and more than 700 captured. Santa Anna was taken prisoner and agreed to Texan independence. The new republic selected Houston as its president and approved annexation by the United States.

Americans were deeply divided over the question of annexation, however, because it meant opening hostilities with Mexico. Moreover, many northerners, such as John Quincy Adams and abolitionists, warned that annexation would strengthen southern “slave power” because Texas would come into the Union as a massive slave state or several smaller ones. Eight years later, in 1844, President John Tyler supported a resolution for annexation after the Senate had defeated an annexation treaty. Both houses of Congress approved the resolution after a heated debate, and Tyler signed the bill in his last few days in office in early March 1845.

Annexation led to war with Mexico in 1846. Throughout the annexation debate and contention over the Mexican War, sectional tensions raised by the westward expansion of slavery tore at the fabric of the Union. The tensions eventually led to the Civil War.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. He is currently writing a book on the Declaration of Independence.

Guest Essayist: Daniel A. Cotter

In early August 1831, Nat Turner, an African-American preacher and slave in Virginia, began planning and preparing a revolt against slavery. Beginning on August 21, Turner and those with him killed his master’s family, then mounted horses and moved from farm to farm, killing the slave-owning families they found. Afterward, the Virginia legislature received petitions urging that the menace of slavery be dealt with as a cause of political and economic failure.

Turner was born on October 2, 1800, in Southampton County, Virginia, a slave held by Benjamin Turner. At the time of the rebellion, Nat was owned by Benjamin’s son, Samuel Turner, who had inherited him when Benjamin died. Nat was intelligent, learning to read and write at a very early age, and he often read and preached from the Bible. He claimed to have visions that he believed came from God. Fellow slaves gave him the nickname “The Prophet.”

Turner credited one of these visions with motivating the rebellion. After postponing the revolt due to personal illness, he took an August 7 solar eclipse, the second in a six-month period, as the final sign that the vision was to be carried out. The rebellion began on August 21, 1831, and was small at first, consisting of Nat and a few of his trusted fellow slaves. The rebels traveled from home to home in the neighborhood, freeing the slaves they found at each house and killing any white people they encountered. The small group of rebels grew over the course of the short-lived rebellion.

Governor John Floyd received a note on August 23, 1831, alerting him “that an insurrection of the slaves in that county had taken place, that several families had been massacred and that it would take a considerable military force to put them down.”

The rebellion led by Nat Turner left almost 60 white people dead, including men, women, and children. To suppress it, whites formed militias that in turn killed approximately 200 black people, including men, women, and children, many of whom had no connection to the rebellion.

The rebellion lasted only a few days, but Nat avoided capture until October 30, when a farmer discovered him in hiding. Turner was tried, convicted, and sentenced to death on November 5, 1831. Less than a week later, he was hanged at Jerusalem, Virginia, his body flayed as an example to anyone who might be contemplating rebellion. Turner’s attorney during the trial, Thomas Ruffin Gray, wrote The Confessions of Nat Turner: The Leader of the Late Insurrection in Southampton, Va., a pamphlet that some have dismissed as inaccurate. Whatever its accuracy, nothing more detailed exists. One alleged account of the vision that appears in the pamphlet reads:

“I had a vision … I saw white and black spirits engaged in battle, and the sun was darkened … the thunder rolled in the heavens and flowed in the streams. I discovered drops of blood on the corn as though it were dew from heaven.”

Nat Turner’s appears to have been the only effective and sustained slave rebellion in the United States. Among its repercussions was a wave of oppressive laws that prohibited many slave activities, including education, movement, and assembly, and restrictions on free blacks expanded as well. Some states banned the possession of abolitionist publications, and in emancipation debates around the time of the rebellion and afterward, some defended slavery as a positive good.

Nat Turner has been considered a patriot by some, and Molefi Kete Asante listed him as one of the “100 Greatest African Americans.” Others have emphasized his violence and the many people he slaughtered. For example, historian Scot French told The New York Times:

“To accept Nat Turner and place him within the pantheon of American revolutionary heroes is to sanction violence as a means of social change. He has a kind of radical consciousness that to this day troubles advocates of a racially reconciled society. The story lives because it’s relevant today to questions of how to organize for change.”

His rebellion made it clear that slaves were not content with their enslavement, and August 21, 1831, is an important date in American History.

Dan Cotter is Attorney and Counselor at Howard & Howard Attorneys PLLC. He is the author of The Chief Justices (Twelve Tables Press, April 2019). He is also a past president of The Chicago Bar Association. The article contains his opinions and is not to be attributed to anyone else.

Guest Essayist: James S. Humphreys

The Indian Removal Act passed the United States House of Representatives by a vote of 102 to 97 and the U.S. Senate by a vote of 28 to 19. It was signed by President Andrew Jackson on May 28, 1830.  Jackson, a Tennessean, held slaves and belonged to the Democratic Party. He first attracted national attention during the War of 1812, when his forces decimated the Creek Indians and later successfully defended New Orleans against the onslaught of an experienced and well-trained British army. Jackson’s reputation as the hero of New Orleans assisted him in his rise to the presidency. He signed the Indian Removal Act fourteen months after assuming office.

The act, consisting of eight sections, broadly outlined the conditions under which Native Americans would relinquish claim to their tribal land within the United States in exchange for territory west of the Mississippi River. United States officials would provide assistance to Native Americans during removal and would guarantee forever the migrants’ right to their western homeland. The act called for the allocation of $500,000 to cover the expenses incurred by the implementation of the measure. It also declared that earlier treaties negotiated with the Native Americans remained in force, disavowing any coercion of Native Americans on the part of U.S. officials. Removal was to be voluntary, not forced.[1]

The act targeted Native American groups living in the southern region of the United States. Whites referred to these groups as the “Five Civilized Tribes” — Choctaw, Chickasaw, Creek, Seminole, and Cherokee — because the tribes had adopted many of the habits and practices of white Americans. Some members of the tribes were not full-blooded Native Americans. John Ross, a prominent Cherokee leader, for example, was only one-eighth Native American. Hoping to earn the respect of whites, the Cherokee developed a phonetic alphabet, printed a newspaper with articles in English and Cherokee, wrote a constitution, held slaves, and founded a capital in northern Georgia called New Echota. Many whites, nevertheless, coveted Cherokee land in Tennessee, Alabama, and Georgia and the other four “civilized” tribes’ territory in other southern states. The deep-seated racism of whites toward Native Americans; the admission into the United States of Louisiana, Mississippi, and Alabama, all of which held prime cotton-growing land; and the discovery of gold in Georgia in 1829 whetted whites’ appetite for acquiring Native American territory. That is not to say that removal attracted the overwhelming support of whites. The Indian Removal Act barely passed the House of Representatives. Many Whig party politicians, the most prominent of whom was Kentuckian Henry Clay, loathed Jackson and opposed the measure.

Andrew Jackson was not the first president to address the presence of large numbers of Native Americans living within the borders of the United States, but his removal policy stands out as the most aggressive strategy for dealing with them. Jackson viewed the policy as enlightened and benevolent, because, in his mind, the expansion of white civilization posed a lethal threat to Native American culture. The Native Americans, of course, viewed the policy differently, fearing that instead of saving their way of life, removal would destroy it. Cherokee leaders hoped to reach an agreement with U.S. officials allowing them to remain on their land, but Jackson’s unwillingness to yield eventually frustrated them. Cherokee leaders, therefore, sought redress in the federal courts, where they found judges sympathetic to their plight. Supreme Court Chief Justice John Marshall took up the Cherokees’ claims in Cherokee Nation v. Georgia and Worcester v. Georgia, and in Worcester the Court held that Cherokee lands belonged to the Cherokee, a fact U.S. officials were bound to respect. The court rulings failed to halt the implementation of the removal program, which dispensed with the earlier emphasis on voluntary migration.

The Choctaw, Creek, and Chickasaw Indians succumbed to pressure from federal officials to migrate to land west of the Mississippi River. The removal of the three Native American groups took place from 1831 to 1837. The most wrenching removal occurred in 1838 during the administration of Jackson’s successor, Martin Van Buren. A relatively small group of Cherokee agreed to removal terms outlined in the Treaty of New Echota. Members of the “Treaty Party,” believing removal was inevitable, accepted five million dollars from federal officials to relinquish all claim to Cherokee territory. The treaty’s provisions aroused the ire of Cherokees who opposed migrating. The resisters beseeched members of the Senate to reject the treaty, but to no avail. The Senate ratified the treaty by one vote. As white settlers increasingly overran their territory, sometimes with violence against Native Americans, the Cherokees held out bravely before being gathered into camps by U.S. troops prior to removal. The United States Army then oversaw the journey of twenty thousand Cherokees to the Oklahoma territory. The arduous trek, carried out during the winter, claimed the lives of four thousand Native Americans. The episode became known as the “Trail of Tears.” Seething with anger over what they considered betrayal, Cherokee resisters murdered several leaders of the “Treaty Party.” The reputation of Andrew Jackson, once considered a great president, has declined over time as a result of his role in Native American removal.

James S. Humphreys is a professor of United States history at Murray State University in Murray, Kentucky.  He is the author of a biography of the southern historian, Francis Butler Simkins, entitled Francis Butler Simkins: A Life (2008), published by the University Press of Florida.  He is also the editor of Interpreting American History: the New South (2018) and co-editor of the Interpreting American History series, published by the Kent State University Press.   

[1] “Indian Removal Act of 1830,” California History Social Science Project, accessed March 19, 2020, chssp.ucdavis.edu.

Guest Essayist: Gary Porter

Andrew Jackson started out as a lawyer and rose through politics. By the end of the War of 1812 between the United States and Britain, Jackson was a military hero of great influence. A Tennessean who had served as a congressman, senator, and judge, he defeated John Quincy Adams in 1828, became the seventh president and the first Democratic Party president, and helped found the Democratic Party.

Jackson’s biography reads larger than life. He was born in 1767 in a backwoods cabin, its precise location unknown. He was scarred by a British officer’s sword, orphaned at fourteen, and raised by uncles. He was admitted to the bar after reading law on his own, served one year as a Congressman before being elected to the U.S. Senate, and resigned that seat after only eight months. He was appointed as a circuit judge on the Tennessee superior court. He became a wealthy Tennessee landowner, and received a direct appointment as a major general in the Tennessee militia which led, after military success, to a direct appointment to the same rank in the U.S. Army. He was an underdog victor and national hero at the Battle of New Orleans, and conducted controversial military actions in the 1817 Seminole War. He experienced disappointment in the 1824 presidential election, but success four years later. He survived the first assassination attempt on a United States President and was the first President to have his Vice-President resign. He appointed Roger Taney (of Dred Scott v. Sandford) to the U.S. Supreme Court. President Jackson died in 1845 of lead poisoning from the two bullets from duels that he carried for years in his chest, one for forty years. You couldn’t make this biography up if you tried.

President Andrew Jackson is a constitutionalist’s dream. Few U.S. Presidents intersected the document of the U.S. Constitution as often or as forcefully during their terms as did “Old Hickory.” From the Nullification Crisis of 1832, to “killing” the Second National Bank, to his controversial “Trail of Tears” decision, Jackson seemed to attract constitutional crises like a magnet. When the Supreme Court handed down its opinion in Worcester v. Georgia, Jackson is purported to have said, “Well, John Marshall has made his decision; now let him enforce it.” It has not been reported whether Thomas Jefferson’s moldering corpse sat up at hearing those words, but I think it likely.

Jackson’s multiple rubs with the Constitution preceded his presidency. As the General in charge of defending New Orleans in late 1814, he suspended the writ of habeas corpus, which the Constitution gives only Congress the power to suspend,[1] unilaterally declaring martial law over the town and surrounding area. Habeas corpus, the “great and efficacious writ,”[2] enjoyed a heritage going back at least to Magna Carta in 1215, a fact Jackson found not compelling enough in the light of the civilian unrest he faced. As Matthew Warshauer has noted: “The rub was that martial law saved New Orleans and the victory itself saved the nation’s pride… Jackson walked away from the event with two abiding convictions: one, that victory and the nationalism generated by it protected his actions, even if illegal; and two, that he could do what he wanted if he deemed it in the nation’s best interest.”[3]

It would not be Jackson’s last brush as a military officer with arguably illegal actions. Three years later, during the First Seminole War, he found his incursion into Spanish Florida, conducted without military orders, under review by Congress. Later, when running for President, Jackson had to defend his actions: “it has been my lot often to be placed in situations of a critical kind” that “imposed on me the necessity of [v]iolating, or rather departing from, the constitution of the country; yet at no subsequent period has it produced to me a single pang, believing as I do now, & then did, that without it, security neither to myself or the great cause confided to me, could have been obtained.” (Abraham Lincoln would later offer a not dissimilar defense of his own unconstitutional suspension of Habeas Corpus in 1861).

After the ratification of the Adams–Onís Treaty in 1821, settling affairs with Spain, Jackson resigned from the army and, after a brief stint as the Governor of the Territory of Florida, returned to Tennessee. The next year he reluctantly allowed himself to be elected Senator from Tennessee in a bid (by others) to position him for the Presidency.

In the 1824 election against John Quincy Adams, Senator Jackson won a plurality of the electoral vote but, thanks to the Twelfth Amendment and the political maneuvering of Henry Clay, he was defeated in the subsequent contingent election in favor of “JQA.” Four years later, after weathering opposition newspapers’ charges that he was a “murderer, drunk, cockfighting, slave-trading cannibal,” the tide finally turned in Jackson’s favor and he won an Electoral College landslide.

As his inauguration day approached, I wonder how many Americans knew just how exciting the next eight years would be. On March 4, 1829, Jackson took the oath as the seventh President of the United States.

In an attempt to “drain the swamp,” he immediately began investigations into all executive Cabinet offices and departments, an effort that uncovered enormous fraud. Numerous officials were removed from office and indicted on charges of corruption.

Reflecting on the 1824 election, in his first State of the Union Address, Jackson called for abolition of the Electoral College, by constitutional amendment, in favor of a direct election by the people.

In 1831, he fired his entire cabinet.[4]

In July 1832, the issue became the Second National Bank of the United States, which was up for re-chartering. Jackson believed the bank to be unconstitutional as well as patently unfair in the terms of its charter. He accepted that there was precedent both for chartering it (McCulloch v. Maryland, 1819) and for rejecting a new charter (Madison, 1815), but, perhaps reflecting his reaction to Worcester v. Georgia earlier that year, he threw down the gauntlet in his veto message:

“The Congress, the Executive, and the Court must each for itself be guided by its own opinion of the Constitution. Each public officer who takes an oath to support the Constitution swears that he will support it as he understands it, and not as it is understood by others. It is as much the duty of the House of Representatives, of the Senate, and of the President to decide upon the constitutionality of any bill or resolution which may be presented to them for passage or approval as it is of the supreme judges when it may be brought before them for judicial decision. The opinion of the judges has no more authority over Congress than the opinion of Congress has over the judges, and on that point the President is independent of both. . .”[5] . (emphasis added)

Later that same year came Jackson’s most famous constitutional crisis: the Nullification Crisis. Vice President John C. Calhoun’s home state of South Carolina declared that the federal Tariffs of 1828 and 1832 were unconstitutional and therefore null and void within the sovereign boundaries of the state, thus “firing a shot across the bow” of Jackson’s view of federalism. The doctrine of nullification had first been proposed by none other than James Madison and Thomas Jefferson thirty-four years earlier, and it retains fans today. South Carolina eventually backed down, but not before Jackson’s Vice President, John C. Calhoun, resigned to accept appointment to the Senate and fight for his state in that venue, and not before Congress passed the Force Bill, which authorized the President to use military force against South Carolina.

In 1834, the House declined to impeach Jackson, knowing the votes for removal were not there in the Senate; the Senate instead censured him, a rebuke Jackson shrugged off.

Yet in 1835, Jackson sided with the Constitution and its First Amendment by refusing to block the delivery of inflammatory abolitionist mailings to the South, even while denouncing the abolitionists as “monsters.”

Today, some people compare our current President to Jackson, including President Donald Trump himself. Others disagree. There are indeed striking similarities, as well as great differences. Although coming from polar opposite backgrounds, both are populists who often make pronouncements upon the world of politics without the filter of “political correctness.”

Thanks to the great care taken by the men of 1787, the “American Experiment” has weathered many a controversial president, such as Andrew Jackson – and we will doubtless encounter, and hopefully weather, many more.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).

[1] U.S. Constitution; Article One, Section 9, Clause 2

[2] Sir William Blackstone, Commentaries on the Laws of England.

[3] https://ap.gilderlehrman.org/essay/andrew-jackson-and-constitution

[4] Secretary of State Martin Van Buren, who had suggested the firing, resigned as well to avoid the appearance of favoritism.

[5] http://www.americanyawp.com/reader/democracy-in-america/andrew-jacksons-veto-message-against-re-chartering-the-bank-of-the-united-states-1832/

Guest Essayist: Gary Porter

In 1817, construction on the Erie Canal began; it opened in October of 1825. Initially a 363-mile waterway, 40 feet wide and four feet deep, it connected the Great Lakes to the Atlantic Ocean, running from the Hudson River at Albany to Lake Erie at Buffalo, New York. The canal moved bulk commercial goods at a much lower cost, widely expanded agricultural development, and brought settlers into surrounding states by opening the free flow of goods through the Appalachian Mountains to the stretches of the Northwest Territory.

On Friday, July 13, 1787, “James Madison’s Gang,” otherwise known as the Constitutional Convention, approved a motion stating that until completion of the first census, showing exactly how many residents each state contained, direct taxes to the states would be proportioned according to the number of representatives the state had been assigned in Congress. A short time later, Gouverneur Morris of Pennsylvania and Pierce Butler of South Carolina had a rather heated exchange over the issue of slavery and how to account for slaves in determining the state’s representation.

That same day, ninety-five miles to the northeast in New York City, the Confederation Congress passed the Northwest Ordinance, creating the Northwest Territory and opening a significant new portion of the country to rapid settlement. The territory would go on to produce five new states and, more importantly to our story, tons and tons of grain in its fertile Ohio Valley. At the time, the only practical route to bring this produce to world markets was the long, 1,513-mile voyage down the Ohio and Mississippi Rivers from Cincinnati, Ohio, to New Orleans, a voyage that could take weeks and was quite expensive. A cheaper, more efficient method had to be found.

The idea of a canal that would tie the western settlements of the country to the ports on the East Coast had been discussed as early as 1724. Now that those settlements were becoming economically important, talk resumed in earnest.

The first problem encountered was geography. The most logical western route for a canal was from the east end of Lake Erie at Buffalo to Albany, New York, on the Hudson River, but Lake Erie sits 570 feet above sea level. Descending eastward from the lake to the Hudson River would be relatively easy, but canals had to allow traffic in both directions. Ascending 570 feet in elevation on the westbound trip meant one thing: locks, and lots of them. Lock technology at the time could only provide a lift of 12 feet. It was soon determined that fifty locks would be required along the 363-mile canal. Given the technology of the time, such a canal would be exorbitantly expensive to build; the cost was barely imaginable.

President Jefferson called the idea “little short of madness” and rejected any involvement of the federal government. This left it up to the State of New York and private investors. The project would not get any relief with a change of Presidents. On March 3, 1817, President James Madison vetoed “An act to set apart and pledge certain funds for internal improvements.” In his veto message, Madison wrote that he was “constrained by the insuperable difficulty [he felt] in reconciling the bill with the Constitution of the United States”:

“The legislative powers vested in Congress are specified and enumerated in the eighth section of the first article of the Constitution, and it does not appear that the power proposed to be exercised by the bill is among the enumerated powers, or that it falls by any just interpretation within the power to make laws necessary and proper for carrying into execution those or other powers vested by the Constitution in the Government of the United States…

“The power to regulate commerce among the several States” can not include a power to construct roads and canals, and to improve the navigation of water courses…”

Once again, no help would come from the federal government.

Two different routes were considered: a southern route, which was shorter but presented more challenging topography, and a northern route, which was longer but crossed much easier terrain. The northern route was selected.

Estimates of the number of workers involved in building the canal vary widely, from 3,000 to 50,000. A thousand men reportedly died building what skeptics dubbed “Clinton’s Folly,” after its champion, Governor DeWitt Clinton, the majority of them from canal wall collapses, drowning, careless use of gunpowder, and disease. Men dug, by hand, the 4-foot-deep by 40-foot-wide canal, aided occasionally by horses or oxen, explosives, and tree-stump-pulling machines. They were paid 50 cents a day, about $12 a month, and were sometimes provided meals and a place to sleep. The sides of the canal were lined with stone set in clay, a job that required bringing in hundreds of skilled German stonemasons.

The gamble paid off. Once the canal opened, tolls charged to barges paid off the construction debt within ten years. From 1825 to 1882, tolls generated $121 million, four times what it cost to operate the canal. When completed in 1825, it was the second longest canal in the world.

The Erie Canal’s early commercial success, combined with the engineering knowledge gained in building it, encouraged the construction of other canals across the United States. None, however, would come close to repeating the success of the Erie. Other projects became enmeshed in politics. They became more and more expensive to build and maintain. Many canals had to be closed in the winter, yet goods still needed to get to market, whatever the cost. Railroads soon began offering competitive rates.

But the Erie Canal left its mark. New York City is today the business and financial capital of America due largely to the success of the Erie Canal.

Today, the Erie Canal is a modest tourist attraction. Cheaper means are available to move cargo. You can still take a leisurely trip via small boat from the Great Lakes to the Hudson and beyond. Unfortunately, certain species of fish, mollusks, and plants can also use the canal and its boat traffic to make their way from the Great Lakes and “invade” New York’s inland lakes and streams, the Hudson River, and New York Harbor.

In 2017, Governor Andrew M. Cuomo established a “Reimagining the Canal task force” to determine the canal’s future. Addressing the environmental damage caused by invasive animal and plant life, the task force recommended permanently closing and draining portions of the canal in Rochester and Rome.

I grew up in Erie, Pennsylvania, an hour’s drive from the terminus of the canal at Buffalo, and I still recall the family visit we took to see it. My young brain didn’t really comprehend the labor and hardship faced by those thousands of workers over those eight years of construction. But now I can marvel at their ingenuity and perseverance in the face of amazing engineering challenges – a testament to the American Spirit.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


Guest Essayist: James C. Clinger

On May 24, 1844, Samuel Finley Breese Morse demonstrated his electro-magnetic telegraph in the Capitol building in Washington, DC, by transmitting a message to a railway station in Baltimore, Maryland, approximately thirty-eight miles away. The message transmitted over the telegraph line was “What hath God wrought?”, a biblical passage from Numbers 23:23. This demonstration convinced many in both government and industry of the viability and usefulness of the new technology.

Much had happened before that eventful day to bring the demonstration to fruition, and much would happen afterward to make it momentous in American history. Morse was not a scientist by training. He had already made his mark, if not his fortune, as an artist who preferred to paint epic, historical scenes but who often resorted to completing commissioned portraits of prominent figures as a way of making money. He became acquainted with advances in electro-magnetic telegraphy while on a voyage to Europe after the death of his first wife. His discussions with European scientists gave him some background knowledge of the scientific issues that had to be confronted. His own creativity and his dogged willingness to learn from the work of others led him to develop a model telegraphic device that proved to be practical and profitable. Among those who had worked in this field before were William Fothergill Cooke and Charles Wheatstone, who patented an electro-magnetic telegraph in Britain. Cooke and Wheatstone employed a variety of different circuits and electrical wires to transmit signals to a receiver. When a current was transmitted through different circuits, or different combinations of circuits, signals for different characters were recognized and a needle was turned to point to various letters of the alphabet. However, the Cooke-Wheatstone telegraph message was, in Morse’s words, “evanescent.” It left no permanent record.[1]

Morse developed a telegraph that operated on a single circuit, with a single wire between the transmitter and receiver. The transmitter employed a lever that would connect and disconnect an electrical current, starting and stopping a magnetic attraction within a mechanism at the receiver. The magnetic attraction would cause the arm of the device to mark a paper tape with dots and dashes, representing the lengths of time that the circuit was engaged. The innovation of marking the paper may have been the creation of Morse’s assistant, Alfred Vail. What is not in doubt is that Morse created the code that translated combinations of dots and dashes into letters of the alphabet and numerals. The code was essentially a form of binary language akin to what computer systems use today, except that the dots and dashes have been replaced by a series of ones and zeroes.[2]
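At bottom, the code is simply a lookup table from characters to patterns. The minimal sketch below uses the standard Morse patterns for a handful of letters and an invented encode helper; it is meant only to illustrate the idea, not to reproduce Morse and Vail’s apparatus:

```python
# A small illustrative lookup table: each letter maps to a unique pattern of dots and dashes,
# much as modern systems map characters to patterns of ones and zeroes.
MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-", "U": "..-", "W": ".--",
}

def encode(message: str) -> str:
    """Translate a message into dots and dashes, separating letters with spaces and words with ' / '."""
    words = message.upper().split()
    return " / ".join(" ".join(MORSE[ch] for ch in word if ch in MORSE) for word in words)

print(encode("What hath God wrought"))
# .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -
```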

Morse also had to tackle the difficulty of transmitting signals over great distances. The signal strength declined the farther along the line that the message traveled. Morse relied upon the assistance of the famous physicist, Dr. Joseph Henry, later the first secretary of the Smithsonian Institution. Henry showed Morse how a series of relays situated miles apart from each other could renew the signal strength for an indefinite distance. Morse acknowledged the contribution of Henry in private correspondence before perfecting his invention, but downplayed Henry’s role later during challenges to his patent.[3]

Morse demonstrated his invention in many venues, but had not yet shown its ability to work over a long distance. He approached Congress for an appropriation of $30,000 both to develop the telegraphic device and to construct a long-distance line to demonstrate its feasibility. When the appropriation bill was debated in the House of Representatives, some Congressmen ridiculed the proposal. A tongue-in-cheek amendment to the bill, proposing an appropriation to send messages through “mesmerism,” was discussed and voted on in the chamber. Ultimately, the appropriation narrowly passed the House, and then passed unanimously in the Senate. The funding could not have come at a better time for Morse, who later estimated that just after the vote he had only thirty-seven and a half cents to his name. Morse originally attempted to lay an insulated telegraph wire in a long trench stretching from Baltimore to Washington. Preliminary tests proved unsuccessful, so Morse quietly arranged to string the wires along a series of poles that held the lines above ground. Once the line was completed, Morse was able to send the famous “What hath God wrought?” message from Washington, D.C. to Vail in Baltimore. The onlooking Congressmen were amazed by the achievement, and were generous in their praise of Morse.[4]

Under Article I, Section 8, Clause 8 of the Constitution, Congress was empowered “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” The exclusive right to a discovery of a scientific nature takes the form of what is generally known as a patent. Morse had been denied a patent for his invention in the United States when he first applied in 1837, although he had received a “caveat” from the Patent Office that would give his claim priority over other applicants seeking a patent for an identical device. He was able to patent his creation in the United States by 1840, but had been denied a patent in Britain in 1838.[5] That same year he received a French patent, which was one reason the U.S. Supreme Court eventually limited the duration of his American patent to fourteen years from the date of his original French patent, October 30, 1838.[6] The Supreme Court also limited the scope of his patent to the telegraph device that Morse had designed. He was not able to enjoy exclusive rights to “use of electro-magnetism, however developed, for marking or printing intelligible characters, signs, or letters at any distances,” as Morse’s original patent claim held. If that overly broad claim had been accepted, Morse could plausibly have claimed patent rights covering modern-day fax transmissions and email messages.[7]

Morse was able to sell territorial licenses to his patent which permitted companies to run telegraph services in certain geographic areas but not nationwide. For a time, the telegraphy business was quite decentralized and competitive, but by the late 1860s, one company, Western Union, had achieved a dominant position in the industry. The new invention had major impacts on industrial development, military operations, and government regulation. The telegraph was used by both Union and Confederate forces during the Civil War. For the first time, commanders far distant from battlefields could provide specific orders to troops in combat. In some instances, President Abraham Lincoln skipped over the normal chain of command to send instructions directly to officers in the field through the telegraph.[8]

The telegraph proved very useful to industry, conveying almost instant information about price changes in products and securities, as well as news of events that might affect the supply and demand for products and the factors of production. No industry was more greatly affected by the telegraph than the railroads. The speed of communication over long distances allowed railroad management to coordinate the movement of trains moving over single tracks in opposite directions. Railroads aided the expansion of telegraph lines by granting rights-of-way to telegraph companies to set up poles and wires alongside the railroad tracks.[9]

In much of the world, telegraph services were owned and operated by the government. For the most part, this was not the case in the United States and Canada. Governments were still deeply involved in the growth of telegraph services, however, both by subsidizing the infrastructure and by regulating the service. States promoted the industry by granting rights-of-way and imposing penalties for damaging lines. They also regulated the industry by imposing penalties for refusing to receive messages sent from other telegraph companies, for transmitting dispatches out of the order in which they were received, and for disclosing private communications to third parties.[10] As telegraph lines crossed state lines, the federal government gained some jurisdiction. The Post Office operated some limited telegraph services for a time, and the Interstate Commerce Commission later regulated interstate services, although railroads, not the telegraph industry, were the primary focus of the ICC. Eventually the Federal Communications Commission gained jurisdiction, although by then telephony, radio, and broadcast television were the major concerns of that regulatory body.[11]

If Morse had never worked on telegraphy he would still be remembered today, at least by art historians, as an exceptionally fine painter. His work on the telegraph and, perhaps more importantly, on the Morse code was of monumental importance. Morse’s work on communicating messages across vast distances in minimal periods of time has had an enormous impact upon the way that America and the whole world have developed over the last century and a half.

James C. Clinger is a professor in the Department of Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] Wheeler, Tom. “The First Electronic Network and the End of Time.” In From Gutenberg to Google: The History of Our Future, 87-116. Washington, D.C.: Brookings Institution Press, 2019.

[2] Wheeler, op. cit.

[3] http://siarchives.si.edu/oldsite/siarchives-old/history/jhp/joseph20.htm

[4] Wheeler, op. cit.

[5] https://www.sparks.it/download/00993407.pdf

[6] O’Reilly v. Morse, 56 U.S. 15 at page 96

[7] Kappos, David J., and Christopher P. Davis. 2015. “Functional Claiming and the Patent Balance.” Stanford Technology Law Review 18 (2): 365–74.

[8] Wilhelm, Pierre. “The Telegraph: A Strategic Means of Communication During the American Civil War.” Revista De Historia De América, no. 124 (1999): 81-9

[9] Du Boff, Richard B. 1980. “Business Demand and the Development of the Telegraph in the United States, 1844-1860.” Business History Review 53 (4): 459-479.

[10] Nonnenmacher, Tomas. “State Promotion and Regulation of the Telegraph Industry, 1845-1860.” The Journal of Economic History 61, no. 1 (2001): 19-36.

[11] Goldin, H. H. “Governmental Policy and the Domestic Telegraph Industry.” The Journal of Economic History 7, no. 1 (1947): 53-68.

 

Guest Essayist: Joshua Schmid

Realpolitik and Idealism Coupled: A Brief History of the Monroe Doctrine

The Monroe Doctrine remains one of the most influential foreign policy statements in United States history. The principles it espoused were largely consistent with the doctrine of non-intervention in European affairs followed by the first U.S. presidents. Even in the twenty-first century, at times when U.S. geopolitical strategies were drastically different from those of the early nineteenth century, some foreign policy makers still invoked the Monroe Doctrine when prioritizing the spread of democracy and the well-being of the Western Hemisphere over other parts of the world. One of the key reasons for its longevity and success was that it established a foreign policy rooted in a combination of realism and idealism.

In the early days of the American republic, the U.S. remained largely uninvolved in the affairs of Europe and the rest of the world. As George Washington stated in his Farewell Address, “Europe has a set of primary interests which to us have none; or a very remote relation…Hence, therefore, it must be unwise in us to implicate ourselves by artificial ties in the ordinary vicissitudes of her politics, or the ordinary combinations and collisions of her friendships or enmities.” The young nation could ill afford to become involved in the squabbling affairs of the Old World, as it needed to focus on its own internal development.

Despite the best attempts by the U.S. in the late eighteenth and early nineteenth centuries to avoid entanglements with Europe, foreign policy issues abounded. Spain, Britain, France, and Russia all sought to lay claim to vast swaths of territory on the North and South American continents. A number of these colonies, especially Spain’s in Latin America, began revolutions in the early nineteenth century to claim self-governance. The so-called “Holy Alliance” of Prussia, Austria, and Russia emerged in 1815 after the Napoleonic Wars with the established purpose of protecting monarchy around the globe. After the Holy Alliance and France successfully reinstated the Spanish king following a revolution, they turned to putting down the liberal revolutions occurring in the New World. The U.S. hardly blinked at the Spanish restoration, as it had no direct impact on American affairs. However, the prospect of European armadas barging into the Western Hemisphere to put down movements that followed the spirit of the American Revolution posed a threat to U.S. security.

In response to potential intrusions into South America, U.S. President James Monroe decided to release a stunning declaration to the world. He laid out his policy in his Seventh Annual Message to Congress in December 1823. First, Monroe addressed the continual colonization of the New World by European powers, stating, “the American continents, by the free and independent condition which they have assumed and maintain, are henceforth not to be considered as subjects for future colonization by any European powers.” He then proceeded to discuss the attempts by European nations to interfere in Latin America’s revolutions. “With the movements in this hemisphere we are of necessity more immediately connected…The political system of [European monarchs] is essentially different…from that of America. We owe it, therefore, to candor and to the amicable relations existing between the United States and those powers to declare that we should consider any attempt on their part to extend their system to any portion of this hemisphere as dangerous to our peace and safety,” Monroe continued. While Monroe conceded that European powers would be allowed to keep the colonies they held at the time, this announcement had the potential to drastically alter geopolitical affairs in the New World. By framing his doctrine in this manner, Monroe established a policy that simultaneously protected American interests in the Western Hemisphere while also supporting self-government from absolutism.

As bold a declaration as the Monroe Doctrine was, it would have been largely ineffectual without the support of the British navy in enforcing it. The American navy did not have the resources to patrol the entire Western Hemisphere to prevent colonization by European powers. However, Britain—the most liberal of the major European powers of the time—did. It was using its navy to protect free trade around the world, and the prospect of additional New World colonies, with their closed markets, would have harmed this economic policy. In fact, Britain’s Foreign Minister George Canning approached U.S. officials earlier in 1823 with a plan to release a joint declaration to deter intervention in the New World. However, Secretary of State John Quincy Adams convinced Monroe that Britain had imperial motivations for wanting a joint declaration, and a unilateral proclamation was announced instead. Even then, British self-interest in a largely de-colonized Western Hemisphere ensured that the U.S. would have a partner in enforcing its new hegemony.

The Monroe Doctrine helped the U.S. remain distant from European affairs and allowed it to chart a path to dominance in the Western Hemisphere. While initially largely symbolic, it would eventually be invoked by presidents ranging from Ulysses Grant to Teddy Roosevelt to John F. Kennedy to justify U.S. military interventions in the Western Hemisphere. According to the Monroe Doctrine, the success of American ideals of liberty and self-government in the Western Hemisphere went hand-in-hand with U.S. security. This coupling was in large part what made the doctrine so successful and why it has lasted as a cornerstone of American foreign policy. By claiming a hegemony in the New World that would support liberal values against absolutism, the U.S. set out on the path to becoming, over the following two centuries, the leader of the Free World.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.


Guest Essayist: David Head

February 22, 1819, was, John Quincy Adams recorded in his diary, “perhaps the most important day of my life.”[1] On that day, the United States finalized a momentous treaty with Spain that acquired Florida for the United States and settled a border with Spain’s North American provinces that reached across the continent to give the United States a piece of Oregon on the Pacific Ocean.

The agreement was known officially as the Treaty of Amity, Settlement, and Limits Between the United States of America and His Catholic Majesty.[2] Today, it’s more commonly called the Transcontinental Treaty, to emphasize its geographic scope, or it’s known as the Adams–Onís Treaty, after its two architects, Secretary of State Adams and Spanish minister plenipotentiary Luis de Onís.[3]

Adams would have preferred the last title. Self-pitying and often depressed, he feared his role would “never be known to the public—and, if ever known, will be soon and easily forgotten.”[4]

Adams and Onís negotiated the final agreement, but the issues between their two nations ranged back to the earliest days of the American republic.

One source of discord was the border. Where, exactly, did it lie?

The Treaty of Paris that ended the American Revolution promised the United States British territory west to the Mississippi River and south to the 31st parallel (today the line separating Alabama from the Florida panhandle).[5]

Spain, though not a formal U.S. ally, fought against Britain in the war as an ally of France. In North America, fighting on the winning side brought Spain the British colonies of East and West Florida. The two Floridas were analogous to the modern state, but somewhat bigger, as West Florida’s panhandle extended along the Gulf of Mexico to the Mississippi River. On the northern side, Spain claimed the Floridas were even larger, as it did not recognize the 31st parallel as the proper boundary.

In 1795, Spain acquiesced to the American definition of its border with Florida in the Treaty of San Lorenzo (also called Pinckney’s Treaty). But new problems erupted eight years later when the United States acquired the Louisiana territory from France.[6]

The Purchase treaty was maddeningly vague on just what “Louisiana” was. Rather than stipulating geographic features like rivers—things that might appear on both a map and in reality—the document defined the land to be transferred only by reference to the Treaty of San Ildefonso in which Spain had ceded Louisiana back to France in 1800.[7]

With Louisiana moved from France to Spain, back to France, and then to the United States, it’s little wonder no one could say what “Louisiana” really was.

For their part, U.S. policymakers insisted they had purchased West Florida from the Mississippi River east to the Perdido River (the modern border between Alabama and Florida) as well as a chunk of Texas from the Mississippi to the Rio Grande.

Spain countered that no, Louisiana was never that big. In the gulf region, Louisiana was only ever a sliver between the Mississippi River in the east and the midpoint between the Mermentau and Calcasieu Rivers in the west (not far from the border between modern Louisiana and Texas). Everything else was Spanish.

Another category of contention revolved around money. The U.S. government felt Spain owed U.S. citizens compensation because of two Spanish violations of American rights.

The first kind of violation happened in the 1790s and, like the land squabbles, it involved Spain’s relationship with France and the geopolitics of European war. Some American ships attempting to trade with Britain had been seized by French privateers and then taken to Spanish ports to be condemned as prizes of war by a friendly admiralty court. Since the United States had declared its neutrality in the war, U.S. leaders said Spain had committed spoliations—unlawful destruction of a neutral’s property—and should make restitution to the American merchants.

The second kind of violation occurred in 1802 when Spain, which controlled New Orleans, prevented American merchants from selling their goods in the city, a right won for Americans trading down the Mississippi River by the Treaty of San Lorenzo. American merchants stuck with goods they couldn’t sell demanded that Spain make them whole.

Spain and the United States attempted to reconcile on several occasions in the following years. But no agreement survived Napoleon’s shifting ambitions, the emerging Spanish American push for independence, and the fading of Spain’s position as an imperial power.

France’s 1808 invasion of Spain touched off a crisis for the Spanish empire. Napoleon vanquished the king, installed his brother Joseph on the throne, and inspired a resistance government to form in Spain and independence movements to rise up in the Americas to fight against both French and Spanish rule.

The United States broke off negotiations with Spain as a result. Though Luis de Onís lived in Philadelphia, President James Madison kept him at arm’s length. Madison didn’t want the United States appearing to take sides in the Spanish resistance government’s battle with Napoleon or look like it was playing favorites between Spain and its rebelling colonies.

In 1814, King Fernando VII regained the Spanish crown in the wake of Napoleon’s defeat. Relations between the United States and Spain stayed chilly, however. President Madison didn’t trust Onís, whom he suspected of conspiring with Britain during the War of 1812, but more substantially, both nations believed their bargaining power would improve with time. As a consequence, both sides delayed negotiations.

In 1817, a new administration took office, with James Monroe as president and Adams as Secretary of State. The delays continued.[8]

Onís was officially received by Monroe, and talks began, but they went nowhere. Adams complained about his futile meetings with Onís, who, he said, “beat about the bush” and failed to “make any propositions at all.”[9]

Then, suddenly in 1818, two developments broke open negotiations and quickly produced a treaty.

First, General Andrew Jackson invaded Spanish Florida. Tasked to secure the Georgia–Florida border against Indian attacks and prevent slaves from running away, Jackson exceeded his orders, captured two Spanish forts, and executed two British subjects for assisting Natives. Following tense discussions inside the administration, Monroe decided to back Jackson. Onís, fearing the United States might seize Florida outright, backed down. A treaty sooner rather than later looked good.

Second, Spain found its hopes of European support for retaking its American colonies frustrated. Meeting at the Congress of Aix-la-Chapelle, Fernando’s fellow kings refused to help restore his empire. Spain was on its own in the Americas. A deal with the United States was imperative.

The treaty terms settled both the border and compensation issues. The United States received Florida, its top priority, and access to the Pacific via Oregon, its second concern.

Spain got the Texas border it wanted, and the United States agreed to take responsibility for paying the claims of U.S. citizens up to $5 million. (The contention that the United States bought Florida for $5 million is erroneous.)

Approved by the Senate, the treaty encountered a hiccup when it arrived in Spain. An error was discovered in one of the treaty’s terms having to do with land grants the king had made to various Spanish noblemen, and Spain tried to trade the lands back to the United States in exchange for American pressure on the new Spanish American republics.

The treaty spent two years in limbo until a rebellion of Spanish army officers, who refused to continue fighting in the Americas, forced Fernando to acquiesce to the wishes of the Spanish legislature on, among other issues, accepting the treaty with the United States.

Approved in Spain, the treaty was again ratified by the Senate on February 22, 1821—exactly two years after Secretary Adams’ most important day.[10]

The full significance of the treaty emerged only later in the nineteenth century. It cleared the way for U.S. expansion south into Florida and west to the Pacific Ocean. In time, Americans pushed against the border in the west, putting more pressure on an independent Mexico.

The pushing turned to war in 1846, and after the United States took the provinces once contemplated as the limits of American growth, a vast expansion of territory laid bare the nation’s ugliest disagreement: what to do about slavery?

Adams knew the true results of his accomplishment would only be realized in the future.

“What the consequences may be of the compact this day signed with Spain is known only to the all-wise and all beneficent Disposer of events,” he recorded in his diary.

But despite the unknown, Adams was cautiously hopeful. “May no disappointment embitter the hope which this event warrants us in cherishing,” he wrote. “May its future influence on the destinies of my country be as extensive and as favorable as our warmest anticipations can paint!”[11]

David Head teaches history at the University of Central Florida. He is the author of Privateers of the Americas: Spanish American Privateering from the United States in the Early Republic and A Crisis of Peace: George Washington, the Newburgh Conspiracy, and the Fate of the American Revolution. For more information visit www.davidheadhistory or follow him on Twitter @davidheadphd.


[1] John Quincy Adams, February 22, 1819, Memoirs of John Quincy Adams, ed. Charles Francis Adams, 12 vols. (Philadelphia: Lippincott, 1874–77), 4: 274. https://books.google.com/books?id=kLQ4AQAAMAAJ&newbks=1&newbks_redir=0&pg=PA274#v=onepage&q&f=false

[2] The text of the treaty can be found at https://avalon.law.yale.edu/19th_century/sp1819.asp.

[3] For overviews of the treaty, its context, and its negotiation, see J. C. A. Stagg, Borderlines in Borderlands: James Madison and the Spanish–American Frontier, 1776–1821 (New Haven: Yale University Press, 2009); James E. Lewis, Jr., The American Union and the Problem of Neighborhood: The United States and the Collapse of the Spanish Empire, 1783–1829 (Chapel Hill: The University of North Carolina Press, 1998); Samuel Flagg Bemis, John Quincy Adams and the Foundations of American Foreign Policy (New York: Alfred A. Knopf, 1969); Philip C. Brooks, Diplomacy in the Borderlands: The Adams–Onís Treaty of 1819 (1939; reprint, New York: Octagon, 1970).

[4] Adams, February 22, 1819, Memoirs, 4: 275. https://books.google.com/books?id=kLQ4AQAAMAAJ&newbks=1&newbks_redir=0&pg=PA275#v=onepage&q&f=false

[5] The text of the Treaty of Paris can be found at https://avalon.law.yale.edu/18th_century/paris.asp.

[6] David Head, Privateers of the Americas: Spanish American Privateering from the United States in the Early Republic (Athens: University of Georgia Press, 2015), 21–22.

[7] The Louisiana Purchase treaty can be found at https://avalon.law.yale.edu/19th_century/louis1.asp; the Treaty of San Ildefonso can be found at https://avalon.law.yale.edu/19th_century/ildefens.asp.

[8] Head, Privateers, 21–22, 25–29.

[9] Adams, January 10, 1818, Memoirs, 4: 37. https://books.google.com/books?id=kLQ4AQAAMAAJ&newbks=1&newbks_redir=0&pg=PA37#v=onepage&q&f=false

[10] Head, Privateers, 29–30.

[11] Adams, February 22, 1819, Memoirs, 4: 274. https://books.google.com/books?id=kLQ4AQAAMAAJ&newbks=1&newbks_redir=0&pg=PA274#v=onepage&q&f=false

Guest Essayist: Tony Williams

During the Napoleonic Wars of the early 1800s, the British Royal Navy stopped American ships and forcibly impressed their sailors into naval service, even after the Jefferson and Madison administrations attempted to use embargoes and trade sanctions to compel British respect for freedom of the seas. In June 1812, Congress declared war to defend American national sovereignty from repeated British violations. Most of the battles were fought at sea and around the Great Lakes.

However, in August 1814, the British fleet arrived in the Chesapeake Bay and landed 4,000 troops who humiliated U.S. forces at Bladensburg, Maryland. The British marched into Washington, D.C. and burned the capital in revenge for the American burning of York (Toronto). A few weeks later, British Admiral Alexander Cochrane and his officers decided to attack the nearby port city of Baltimore, because Cochrane thought the “town ought to be laid in ashes.”

Francis Scott Key was a prosperous D.C. attorney who had argued before the Supreme Court and had a large family. Like many Easterners, he opposed the war, whose main proponents were “war hawks” from the West and South. However, Key was appalled by the threats to the capital and then the burning of Washington, and he joined the local militia. Some friends persuaded him to help secure the release of Dr. William Beanes from British captivity. Key gained an audience with President James Madison and soon joined with prisoner of war agent John Skinner to seek Beanes’ release.

Meanwhile, Major General Samuel Smith of the Maryland militia prepared Baltimore’s defenses for a British assault. On the morning of September 12, 4,700 redcoats and royal marines disembarked along the Patapsco River for a fifteen-mile march to Baltimore. They were led by General Robert Ross, who promised to “sup in Baltimore tonight, or in hell.” A Royal Navy squadron sailed up the river to bombard Fort McHenry in the harbor and then the city itself.

The American militia was 3,000-strong and deployed along a narrow part of the peninsula to block the British advance. A rifleman among the forward skirmishers killed General Ross in the opening exchange. The British attacked after a brief but sharp artillery duel and forced the Americans back several times. The British pressed the attack the next day but suffered mounting casualties and were forced to withdraw. The Americans had held, and the British infantry assault on Baltimore had failed.

Meanwhile, at sunrise on Tuesday, September 13, the commander of Fort McHenry, Major George Armistead, and the commander of a volunteer artillery company from the city that had joined in the fort’s defense peered through their spyglasses into Baltimore Harbor. They saw five British bomb-ships maneuvering into position one and a half miles from the star-shaped fort.

Armistead’s soldiers in the 1,000-man garrison were up and preparing the fort’s 36 guns for its defense. Tension was high, and nerves were stretched to the limit. Suddenly, the bomb ship Volcano lobbed 200-pound explosive shells into the fort, and the other four bomb ships and the rest of the fleet joined the bombardment. Inaccurate but terrifying, screaming rockets were launched from the Erebus toward the fort.

Major Armistead ordered his soldiers to return fire, and several cannonballs scored direct hits on British ships. The American fire was unexpectedly severe and forced the British to move out of range of the fort’s guns; from that distance, however, their bomb ships could still hit Fort McHenry. Finally, Armistead ordered his men to take cover in a moat.

Among those observing the battle from the British fleet were Key and Skinner. They had rented a packet ship and sailed out to the enemy fleet to secure Beanes’ release. They had been stuck with the British fleet aboard the HMS Surprize for several days as it moved toward the harbor. They were transferred with Beanes back to their small vessel but were not allowed to depart until after the battle.

They shared the distress of the men in the fort, who were being bombarded and suffering casualties yet were powerless to return fire. A British shell crashed through the roof of the fort’s magazine, where 300 barrels of gunpowder were stored, but miraculously did not explode.

In the early afternoon, the sun disappeared and heavy rain fell from a nor’easter. The men in the fort lowered the American flag stitched by Mary Pickersgill and raised a storm flag because of the rain. The British fleet moved closer to fire broadsides from several warships. The Americans quickly answered with their own guns and caused severe damage to three warships, forcing the British back.

However, the British bomb ships continued to fire as darkness settled with the arrival of evening. Admiral Cochrane had thought he would force a surrender in less than two hours, leaving Baltimore vulnerable to a coordinated land-sea assault. But Armistead had no intention of surrendering.

The shelling continued through the night. Finally, the first light of dawn arrived with Fort McHenry still standing. From his vantage point, Key watched as the fort raised the immense 30-by-42-foot star-spangled banner while the soldiers stood at attention. Key pulled a letter from his pocket and started to jot down some words and notes for a song that came to mind. “O say can you see by the dawn’s early light . . .” it began, and ended with “O’er the land of the free, and the home of the brave.”

The American forces redeemed themselves at Baltimore and Fort McHenry after the national humiliation in Washington, D.C. Only a few months later, American commissioners including John Quincy Adams, Henry Clay, and Albert Gallatin signed the Treaty of Ghent, officially ending the War of 1812. Key’s words eventually became America’s national anthem, marking the great victory in what some have called the Second War for American Independence.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: James C. Clinger

On May 14, 1804, President Thomas Jefferson’s private secretary, Meriwether Lewis, and an army captain, William Clark, began an expedition exploring the territory stretching from the Mississippi River, along the Missouri River, all the way to the Pacific Ocean. But the origins of the expedition began long before this, even before Jefferson became president of the United States and well before the Louisiana Purchase took place. Only a few years after the Revolutionary War, shortly after sea captain Robert Gray had discovered the estuary of the Columbia River in present-day Oregon, Jefferson instructed Andre Michaux to “explore the country a[long] the Missouri, & thence Westwardly to the Pacific ocean.”[1] The enterprise was sponsored by the American Philosophical Society and included the support of subscribers such as George Washington and Benjamin Franklin. The expedition ended quickly, before the entourage even reached the Missouri River, when it was discovered that Michaux was acting as an agent of the French government. Other expeditions, led by explorers such as Zebulon Montgomery Pike, Thomas Freeman, William Dunbar, and Peter Custis, were also commissioned by Jefferson during his presidency to explore various sections of the American West, but none of them has gained the popular recognition that the Lewis and Clark Expedition retains today.

Meriwether Lewis, like Jefferson, was born into a family of Virginia planters. Lewis served for a time in the army during the Whiskey Rebellion, but saw no combat. For a period of about six months, Lewis served under the command of William Clark, the younger brother of the Revolutionary War commander George Rogers Clark. Lewis left the army as a captain to become the private secretary to President Jefferson, whom he advised on the capabilities and political loyalties of high-ranking officers in the military. Lewis availed himself of Jefferson’s personal library, which was considered one of the finest in the world. After his appointment as commander of the expedition to the west (also known as the “Corps of Discovery”), Lewis received instruction in astronomy, botany, and medicine from some of the leading scientists in the country to prepare him for his mission. Lewis later asked William Clark to join in the command with the rank of captain, although the initial budget for the expedition included pay for only one captain. Clark was eventually commissioned as a lieutenant, although this was not known to the crew.[2]

President Jefferson first proposed the Lewis and Clark expedition in a secret message to Congress on January 18, 1803, months before the Louisiana Purchase took place. The message stressed the need for the government to develop the fur trade along the Missouri River in what was then Spanish territory. Jefferson wished to provide new lands to compensate private fur traders who would be forced out as the federal government purchased Indian titles to land along the Mississippi River.   Jefferson hoped to induce Native-American tribes to take up agriculture and abandon their reliance upon hunting and fur-trading.[3] Although the primary purpose of the expedition was commercial in nature,  it should be understood that with that commercial development, significant expansion in government authority would necessarily take place. In addition to its implications for commerce and government, the expedition had other purposes and objectives. Jefferson’s specific marching orders to Lewis indicated that “the object of your mission is to explore the Missouri river, & such principal stream of it, as, by its course and communication with the waters of the Pacific ocean, whether the Columbia, Oregan, Colorado or any other river may offer the most direct & practicable water communication across this continent for the purposes of commerce. . . . Beginning at the mouth of the Missouri, you will take [careful] observations of latitude & longitude, at all remarkeable points on the river, & especially at the mouths of rivers, at rapids, at islands, & other places & objects distinguished by such natural marks & characters of a durable kind, as that they may with certainty be recognised hereafter….The interesting points of the portage between the heads of the Missouri, & of the water offering the best communication with the Pacific ocean, should also be fixed by observation, & the course of that water to the ocean, in the same manner as that of the Missouri.”[4]

Jefferson clearly had interest in the geographic and scientific discoveries that the expedition could make, and was particularly interested in learning if a water route from the Missouri to the Pacific could be found.  Jefferson also hoped to learn something about the life of the Native American tribes the expedition would encounter along the way.[5]

The original plan was for Lewis to lead a party of only a dozen or so men, since a larger party might be perceived as a military threat by the Native Americans encountered along the way. Lewis and Clark, however, decided to add some additional members to the expedition, while still keeping the numbers small enough to convince the Indians of their peaceful intentions. The expedition departed from the northern bank of the Missouri River, just north of St. Charles, Missouri, on May 21, 1804. Traveling in a heavily laden keelboat and two pirogues, the expedition covered only three and a half miles before stopping to camp for the night. Twenty-eight months later, the expedition returned to St. Louis. A total of forty-five men began the expedition. Others were added for a while on the journey but left before the expedition was completed. Thirty-three returned. One member died along the way of a “bilious colic,” which may have been appendicitis. The party included William Clark’s African-American slave, York, as well as a French-Canadian fur trader, Toussaint Charbonneau, and his Shoshone wife, Sacagawea, who was pregnant when she joined the expedition. Sacagawea was a talented interpreter of Indian languages and was also skilled in finding edible plants on the westward journey. Her very presence was beneficial, because Native American tribes did not believe a war party would have a woman in its midst; this provided convincing evidence that the Corps of Discovery had peaceful intentions. Throughout the entire expedition, the party had only one lethal encounter with Native Americans, when a band of Piegan Blackfeet Indians attacked the camp in an attempt to steal horses and weapons. Two of the Blackfeet were killed in the fight.

The Corps of Discovery made slow progress up the Missouri River and into the Rocky Mountains. The expedition had to proceed on foot and on horseback for much of the way after learning that there was no water route through the mountains to the ocean. The expedition reached the Pacific coast by December 1805, when it voted to spend the winter at Fort Clatsop. The entire party participated in the decision, including York and Sacagawea, perhaps marking the first time that an African-American slave and a Native American woman had participated formally in a decision of a federal governmental body.[6]

Lewis and Clark made their return in the spring of 1806. In July, Lewis took part of the company with him while Clark took the remainder to explore different paths within the territory of present-day Montana. The two groups re-joined one another in August in present-day North Dakota. The expedition proceeded back to St. Louis, where the party arrived on September 22. As Lewis scrambled out of his canoe, the first question that he had for a local resident was “When does the post leave?” Lewis was desperate to report to the president.[7]

Lewis had been directed by Jefferson to keep a journal of his discoveries. Clark also kept a journal, which he filled with descriptions of his observations, as well as fine illustrations of flora and fauna above, beneath, and beside his handwritten text. Lewis traveled to Washington, D.C., to report directly to the president. What was said at that meeting is unknown, but it is clear that Lewis pledged to write and publish a book that would report his findings. Unfortunately, the book was never written. Jefferson offered Lewis an appointment as the governor of the territory of Louisiana, no doubt expecting that the position would provide income and security for Lewis as he authored his book. However, the sedentary position did not suit Lewis, who seemed unable to master administrative duties once the expedition was completed. He suffered from serious drinking problems, indebtedness, and acute melancholy. He died from gunshot wounds, probably by his own hand, while staying at an inn on a trip to Washington.[8]

Because of Lewis’s death and his failure to complete his narrative about the expedition, many of the scientific, ethnographic, and geographic findings of the enterprise were not fully appreciated. Many of the plants and animals that the Corps discovered were, for a time, lost to science and only re-discovered many years later. Sergeant Patrick Gass, a member of the expedition, did compose a book-length narrative about the venture, but that volume did not contain much of the kind of discoveries that were of original interest to President Jefferson. Nicholas Biddle, later the president of the Second Bank of the United States, conducted an “audit” of the enterprise and then served as the primary, but uncredited, author of the official history of the expedition, which appeared in two volumes. The second volume was devoted to the botanical and zoological discoveries of Lewis and Clark, but Biddle was neither an expert scientist nor a first-hand observer of the phenomena he was to describe. Biddle was one of the great intellectuals of his age, but he was not scientifically trained, and he could have benefited greatly from the elaboration that Lewis could have offered him had he lived. For these reasons, the significance of the expedition was not recognized in the first century after the return of the Corps of Discovery as fully as it has been in more recent years, as more and more scholars have slowly uncovered evidence of the events that took place.[9] Today, we can see that the Corps of Discovery accomplished much in the way of learning about the terrain, climate, and physical environment of the trans-Mississippi West, and about the Native Americans who lived in that territory. This knowledge aided in the settlement and development of a massive land area on the North American continent. Obviously, those developments have had diverse implications for all involved, most notably for the Native Americans who lived there. Nevertheless, the Lewis and Clark expedition paved the way for the future transformation of much of what is now the United States.

James C. Clinger is a professor in the Department of  Political Science and Sociology at Murray State University. He is the co-author of Institutional Constraint and Policy Choice: An Exploration of Local Governance and co-editor of Kentucky Government, Politics, and Policy. Dr. Clinger is the chair of the Murray-Calloway County Transit Authority Board, a past president of the Kentucky Political Science Association, and a former firefighter for the Falmouth Volunteer Fire Department.


[1] https://www.monticello.org/thomas-jefferson/louisiana-lewis-clark/origins-of-the-expedition/jefferson-s-instructions-to-michaux/

[2] Ambrose, Stephen E.  1996. Undaunted Courage: Meriwether Lewis, Thomas Jefferson, and the Opening of the American West.   New York: Simon & Schuster. Pages 133-136.

[3] Guinness, Ralph B. 1933. “Purpose of the Lewis and Clark Expedition.” Mississippi Valley Historical Review 20 (January): 90–100.

[4] https://www.smithsonianmag.com/history/meriwether-lewis-gets-his-marching-orders-96463431/

[5] Ronda, James P.  1991.  ‘A Knowledge of Distant Parts’: The Shaping of the Lewis and Clark Expedition.   Montana: The Magazine of Western History, Vol. 41, No. 4 (Autumn): 4-19.

[6] Ambrose, Stephen E.  1996. Undaunted Courage: Meriwether Lewis, Thomas Jefferson, and the Opening of the American West.   New York: Simon & Schuster. Pages 313-316.

[7] Ambrose, op. cit., chapter 32.

[8] Ambrose, op. cit., chapter 39.

[9] Snow, Spencer. 2013. “Maps and Myths: Consuming Lewis and Clark in the Early Republic.” Early American Literature 48 (3): 671–708.

Guest Essayist: Gary Porter

The 1803 treaty signed in Paris brought the United States 828,000 square miles of territory, doubling the nation’s size. Constitutional questions stirred disputes over how best to divide the territory and keep the nation’s peace. At the same time, the Louisiana Purchase helped meet America’s growing need for agricultural land, for the free flow of commerce along the Mississippi, and for secure western regions.

On June 12, 1823, Thomas Jefferson wrote in a letter to William Johnson: “On every question of construction, let us carry ourselves back to the time when the Constitution was adopted, recollect the spirit manifested in the debates, and instead of trying what meaning may be squeezed out of the text, or invented against it, conform to the probable one in which it was passed.”

Twenty years earlier, Jefferson had been in a bit of a quandary concerning this very topic. As the third President of the United States, he was presented with an enormous opportunity: to nearly double the size of the nation by purchasing land offered by France at a bargain-basement price. But search the Constitution from top to bottom, side to side, Article 1 to Amendment 11, and he could find no explicit power given the President to make such a purchase. At first blush, Jefferson concluded an amendment to the Constitution was required. In an August 1803 letter to John Dickinson, he wrote: “The General Government has no powers but such as the Constitution gives it. It has not given it power of holding foreign territory, and still less of incorporating it into the Union. An amendment of the Constitution seems necessary for this.”

An amendment would have to be rushed through Congress. Jefferson’s Democratic-Republicans were tantalizingly close to the 2/3 majority needed to pass an amendment in the House (they controlled 67 of 103 seats, or 65%), but they held only 14 of 32 seats in the Senate (43%). The Federalists, still smarting from the drumming they took at the polls in 1800, were not at all interested in supporting Jefferson (sort of like today’s Democratic Party with President Donald Trump), so crossover votes were unlikely. Even if an amendment could get through Congress, it would take months to be ratified by the states; by then, would this “deal of a lifetime” slip through their grasp? Would Napoleon Bonaparte have found another buyer? Or would Jefferson be required to “squeeze” new meaning out of the Constitution? The President could not pass this deal up, but what was he to do?

Even the ownership of the land was in question. Totaling 828,000 square miles (530,000,000 acres), the land had been claimed at various times by Britain, France, and Spain, and its ownership in 1803 rested upon secret treaties and informal agreements. Everyone at the time could see that many states would eventually emerge from the acquisition (fifteen states, to be precise), but the chance to take control of both banks of the Mississippi, a river down which, to use Jefferson’s characterization, “three-eighths of our territory must pass to market,”[i] simply could not be passed up.

To solve his puzzle, Jefferson did the most practical thing he could think of: he consulted none other than the “Father of the Constitution” himself, James Madison. Fortunately for the President, Madison worked right down the street from the “President’s House.”[ii] Jefferson had wisely brought Madison into the administration as his Secretary of State.

Madison adroitly opined that the power of a nation to “extend its territories” was a power enjoyed by any nation “by treaties.” And the treaty power was, without question, enjoyed by the President of the United States.[iii]

James Monroe and Robert Livingston had previously been dispatched to France to negotiate the purchase of New Orleans and Florida. The day after Monroe arrived in France (Livingston was already there), the pair were summoned to the chambers of Talleyrand himself and offered the entirety of Louisiana. Unbeknownst to them, the day before, Napoleon had declared Louisiana “entirely lost” and ordered that it be offered to the Americans; he needed the cash for yet another war with Britain. After some crafty negotiations, Monroe and Livingston were able to get the price down to $15 million, still above the limits of their instructions, but “affordable.” The pair quickly signed the Louisiana Purchase Treaty on April 30, 1803, and hurried home.

On October 31, 1803, the last day the Senate could do so without nullifying it, the treaty was ratified, making Louisiana part of the United States.

Twenty-five years later, the Supreme Court would finally confirm Jefferson and Madison’s decision by stating: “The Constitution confers absolutely on the government of the Union, the powers of making war, and of making treaties; consequently, that government possesses the power of acquiring territory, either by conquest or by treaty.”[iv]

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


[i] April 18, 1802 letter to Robert Livingston.

[ii] The building would not be informally called the “White House” until shortly before the War of 1812 and not officially until action taken by President Theodore Roosevelt in 1901.

[iii] Article 2, Section 2.

[iv] American Insurance Co. v. Canter (1828).

Guest Essayist: Jeanne McKinney

At the beginning of the American Revolution, the fractured colonies were up against the British Empire like a David against Goliath. For 150 years, the colonists had experienced an unusual amount of freedom in managing their own affairs. This was not because Britain had granted such freedom, but because of Britain’s preoccupation with foreign conflicts, its administrative inefficiency, the distance imposed by geography, and plain neglect, all of which left the colonists to run their own local affairs.

It was not so much a planned rebellion as a continuing outrage against offenses, one that grew into a movement to protect and defend what the colonists had toiled hard for and held dear.

The American Revolution began in 1765. Some of the key causes were:

The Founding of the Colonies

French and Indian War

Taxes, Laws, and More Taxes

Protests in Boston

Intolerable Acts

Boston Blockade

Immediate and sometimes violent objections of the Americans to their new taxes both baffled and angered the British. The colonists’ position was that, while Parliament could legislate for the colonies, taxes could only be levied through their direct representatives, of whom they had none in Parliament. (1)

England sent troops to protect against mob violence when Americans refused to import the taxed goods in order to avoid the duties. The Americans’ argument grew bolder, asserting equality: their colonial assemblies, they claimed, stood on equal footing with Parliament. Their assertion was that they alone could legislate for the colonies, while Parliament legislated for Great Britain. (2) (Marines in the Revolution, Charles R. Smith)

When arguments and assertions transformed into a growing rebellion and King George’s colonial income stream began faltering, British naval and ground forces were placed strategically on the colonial chessboard. His radar was now switched on.

It was a never-before-tried undertaking: going up against a far greater enemy for a cause the world had never seen. A collective body of colonists gave birth to the concept of a government by the people. They risked all to obtain unalienable rights and individual freedoms given by God to all men.

Before 1775, the colonists had no central government arsenals stocked with weapons and ammo, no formal armies or navies. Growing unity among the colonies spurred a mobilization of the crudest kind. Whether farmer or shopkeeper, educated or uneducated, they grabbed their rifles to form small groups and state militias. The colonists adopted slogans like “Join or Die” and “Don’t Tread on Me.”

A fever swept through the patriots, who were willing to give what little they had for a rapidly evolving mission of independence from Britain.

America’s first Continental Army served under the command of a tall, unstoppable military leader. Recruitment offered meager rewards and much hardship. General George Washington inspired and led, on his white horse, America’s rag-tag patriot troops through defeat after defeat and horrific fighting conditions. While Washington was at the front lines, Thomas Jefferson, John Adams, Benjamin Franklin and other representatives in the Continental Congress waged their own fight to create an enduring foundation for a budding republic – a new way to live that would require external defense.

The Brits were well-equipped and had years of formal military training. Their success in pillaging and usurping foreign lands made them cocky, brutal, and unforgiving toward any attempt to defy their deity, King George. Their many warships blockaded the harbors of the colonies. The red and white lobsters, with cannons, bullets, and swords, would militarily take from the colonists what they claimed was theirs.

In the late 1700s, the British navy’s prowess rested on its ability to deliver expeditionary ground forces in numbers and to navigate waterways into the interior. Washington, Adams, even Benedict Arnold (before he turned) knew the Americans needed amphibious troop movement to expand their defenses and procure much-needed food, ammunition, and warm clothing. Washington’s pivotal victory crossing the Delaware in small boats to win the Battle of Trenton proved the point.

The birth of expeditionary power projection.

“On 10 November 1775, the Second Continental Congress authorized the raising of two battalions of [Continental] Marines. From this small beginning we have seen the United States Marine Corps grow into a powerful force for the nation’s security.” (Marine Corps History Division, Brigadier General E. H. Simmons History Center).

Newly commissioned Captains Samuel Nicholas and Robert Mullan supposedly organized the first Marine Corps muster at Tun Tavern, a popular watering hole in Philadelphia.

(The following excerpts are summarized from interviews with Gunnery Sergeant Jon Holmes and Sgt. William Rucker at Marine Corps Air Station (MCAS), Miramar, Feb 25, 27, 2020).

According to Rucker, the captains rounded everybody up, got them drunk and said we’ve got an offer – ‘you can come fight and protect your country, protect these ships.’ They recruited people to see if it was something they voluntarily wanted to do. Back then, that’s what distinguished the Marines from other branches.

“You weren’t mandated – [they said] you want to come join this fighting institution- come along with us,” says Rucker. One of the requirements was good moral character and rifle skills. They were supposed to be good shots.

Holmes states, “This isn’t a well-trained army that has been established for years. These are people who are tired of being taken advantage of – they don’t like the taxation without representation. They think they can achieve a better life for them and their families and for the future.”

When the leaders of the colonists formally declared their independence on 2 July 1776, John Adams believed it would be “the most memorable epocha in the history of America.” Essentially, the entire war against Britain was an open act of treason, with death warrants awaiting any who were overrun or captured. To top it all off – where was the funding for the fight coming from? For years, Britain’s East India Company had provided a well-established economy that funded Britain’s large navy.

“Then you have people who have been colonizing a frontier land; a farmer’s basic way of life and what they have is what they have to subsist with. The ammunition for their weapons, outside of their militias, is for hunting weapons,” says Holmes.

“Any ships they had were probably ‘tactically borrowed,’” he adds, “everything they had [to fight with] they had gotten from someone else…and they used it to great effect.”

On September 5, 1776, the Naval Committee published the Continental Marines uniform regulations specifying green coats with white facings (lapels, cuffs, and coat lining), with a high leather collar to protect against cutlass slashes and to keep a man’s head erect. The moniker “Leatherneck” stemmed from that first uniform. Though the green color is attributed to the traditional color of riflemen, Continental Marines carried muskets. More likely, green cloth was simply plentiful in Philadelphia, and it served to distinguish Marines from the red of the British or the blue of the Continental Army and Navy. (Wikipedia)

Holmes relates that the Continental Marines were originally tasked with security for the Navy. Other than safeguarding the ship during naval engagements, they were to specifically aim for officers on the enemy side.

“They were trained to remove those key positions of leadership so the enemy would fall apart,” says Holmes.

Today, leadership in the Marine Corps is passed down through the ranks. If one leader falls, the next highest rank takes over, and that dominoes down to the lowest rank in the various units. That way there is always a leader to complete the mission and keep the unit cohesive as a force. That wasn’t really something that was taught back then.

Yet, in those early days as now, the Marine Corps held that “every man is a rifleman” – which, one can surmise, was a safeguard to protect something so very precious, vital and new as a free republic. The Marine Corps is the only branch of America’s armed forces that requires all personnel to qualify with the rifle.

Marine amphibious doctrine grew during the eight-year fight for American independence.

“The infant American Marine Corps was a threat to be reckoned with.” (Marine Corps History Division)

The first amphibious assault by Colonial forces came against the British during the Raid of Nassau, Bahamas, on 3 March 1776. During this naval operation, the Marines sailed there, raided the island, and seized ammunition and other military supplies. Their main purpose was to get 200 barrels of gunpowder, but the Brits managed to get most of it off the island. The Marines, not to be dissuaded from their goal, conducted a second raiding party and got more.

In April they also captured HMS Hawk, a small 6-gun schooner, and a merchant vessel carrying guns and gunpowder. (Military History Now)

Nobody at the time thought the Colonists would be capable of putting up any defense – much less mounting a raid and attacking one of the most powerful naval forces in the world. The Raid of Nassau was the Marines’ first major victory, in which a very small organization went forth to take supplies from a much larger, superior force.

“I think the biggest reason for that… is because they [the Marines] were underestimated,” says Holmes.

The Battle of Nassau was the first major victory for the Continental Marines. In December 1776, the Continental Marines were tasked to join Washington’s army at Trenton to slow the progress of British troops southward through New Jersey. Washington, unsure what to do with the Marines, added them to a brigade of Philadelphia militia. Though they were unable to arrive in time to meaningfully affect the Battle of Trenton, they were able to fight at the Battle of Princeton. (Wikipedia)

One of the Continental Marines’ final acts was escorting gold bullion from King Louis of France to start the first bank and treasury of the United States.

The Continental Marines served throughout the Revolutionary War and were disbanded in 1783 after the Treaty of Paris. In all, there were 131 Continental Marine officers and probably no more than 2,000 enlisted Continental Marines.

The American Revolution finally won, the navy and the army were also largely disbanded. The few ships in the young American Navy were sold or turned into merchant vessels. America no longer had the protection of the British navy and had to defend its own interests abroad. The idea of an American Navy was the subject of much debate between the Federalists who favored a strong navy and the anti-federalists who felt the money required for a navy would be better spent elsewhere. Repeated threats from France and the Barbary states of North Africa gave cause to consider resorting to more forceful measures to procure the security of American shipping interests. (Wikipedia)

The United States Marine Corps we know today was re-established formally on July 11, 1798.

Despite this, Nov. 10, 1775 is still considered the “birthday” of the U.S. Marines. Enemies of America have been underestimating Marines ever since the organization was formed. They repeatedly shock the world with ‘running towards bullets,’ showing tenacious aggression and fearless force. Fueled by steel-clad brotherly love, their ethos of “no man left behind” is life itself.

These time-tested, elite warriors have established themselves throughout U.S. history as game changers on foreign battlefields.

Marines have participated in all wars of the United States, being in most instances first, or among the first, to fight. In addition, Marines have executed more than 300 landings on foreign shores and served in every major U.S. naval action since 1775. (Jan 2, 2020, Britannica.com).

1918: The Battle of Belleau Wood.

“Retreat, hell, we just got here!” was the war cry of Capt. Lloyd Williams during World War I, 1918. He was advised to withdraw with the French, who had suffered greatly from a massive German assault in their quest to take France and win the war. The Marine 5th and 6th Regiments (nicknamed ‘devil dogs’) stood their ground – forcing the Germans to withdraw to Belleau Wood and Bouresches. (1) They then launched their assault into oncoming machine gun fire to clear the woods and recapture French soil.

“The Battle of Belleau Wood did not win the war, but it prevented the Allies from losing it,” said Alan Axelrod, author of Miracle at Belleau Wood: The Birth of the Modern U.S. Marine Corps.

“The Marines advanced from shipboard guard or constabulary forces of the 19th century into the multi-purpose force-in-readiness of the 20th and 21st centuries.” (2) (warontherocks.com)

1942: The Battle of Guadalcanal.

“On August 7, 1942, in the Allies’ first major offensive in the Pacific, 6,000 U.S. Marines landed on Guadalcanal and seized the airfield, surprising the island’s 2,000 Japanese defenders. Both sides then began landing reinforcements by sea, and bitter fighting ensued in the island’s jungles.” (Britannica.com)

The Battle of Guadalcanal set a powerful precedent, noted for the operational interrelationship of a complex series of engagements on the ground, at sea, and in the air. This marked a turning point for the Allies in the Pacific. (history.com)

1945: Battle of Iwo Jima.

Shortly after its attack on Pearl Harbor in December 1941, Japan gained control over much of Southeast Asia and the central Pacific. General Henry (Hap) Arnold, commanding general of the U.S. Army Air Forces in World War II, wanted Iwo Jima in order to place fighters within favorable range of Tokyo, so that they could support B-29 bombing operations in the region. (1)

“There was no way we would have been able to make it to Japan and accomplish what we needed to do to win the war with the [Japanese-occupied] island chains. It was a constant threat for our aircraft [and a] potential threat for naval vessels, because the Japanese were using the islands to resupply and do strikes from.” (1b)

“With the partnership of the Navy, the Marines were able to go from island to island – seize it- go to the next objective – and take that. All along the way they are building supply lines for U.S. operations up to the enemy’s front lines.” (2b) (GySgt. Jon Holmes)

Adm. Chester Nimitz created a U.S. Joint Expeditionary Force of Navy and Marines to carry out Operation Detachment. On February 19, 1945, Marines began to land on the beach of Iwo Jima in intervals, and the rest is legend. The offensive was one of the deadliest conflicts in U.S. Marine Corps history.

Nearly 70,000 troops under the command of Maj. Gen Harry Schmidt were forced to kill the Japanese virtually to the last man because they refused to surrender. (2)

Marines twice raised the American flag on Suribachi’s summit. The second raising was photographed by Pulitzer Prize-winner Joe Rosenthal (AP), to become one of the most famous combat images of World War II. (3) (Britannica.com)

Nimitz said after the battle was won, “Of the Marines on Iwo Jima, uncommon valor was a common virtue.” (USMCU)

1968: The Tet Offensive.

During the Tet Offensive, 85,000 troops under the direction of the North Vietnamese government carried out attacks against five major South Vietnamese cities, dozens of military installations, and scores of towns and villages throughout South Vietnam. The enemy did this to foment rebellion among the South Vietnamese population against U.S. involvement in the war. The size and scope of the communist attacks caught the American and South Vietnamese allies completely by surprise. (Britannica.com)

The Marines, used to jungle and guerilla warfare, were not prepared for the fighting style of house to house, street to street.

“That definitely changed our mindset on how we train and changed America’s mindset on how we train…Gave us our motto, “We fight in any clime and place.” It took a scope back to the Tet Offensive [to see that].” (Sgt. William Rucker)

The offensive was a crushing tactical defeat for the North, but it struck a sharp psychological blow that eroded support for the war among the American public and political establishment.

2004: The Battle of Fallujah 1 and 2.

The First Battle of Fallujah, called “Operation Vigilant Resolve,” was a U.S. military campaign during the Iraq War. The Iraqi city of Fallujah was overrun with extremists and insurgents. Marines were tasked to pacify the city and find those responsible for the March 31 ambush and killing of four American contractors.

Marine Cpl. Stephen Berge was in an abandoned factory in Fallujah when he felt the tide turning. He didn’t know at the time that this second battle of Fallujah he was then in, also known as Operation Phantom Fury, would become known as the bloodiest battle of the Iraq War. (Marine Corps Times)

The Second Battle of Fallujah is notable for being the first major engagement of the Iraq War fought solely against insurgents rather than the forces of the former Ba’athist Iraqi government, which was deposed in 2003.

Earlier, in April 2004, when coalition forces (mostly U.S. Marines) fought into the center of the city, the Iraqi government had requested that control of the city be transferred to an Iraqi-run local security force, which then began stockpiling weapons and building complex defenses across the city through mid-2004, setting the stage for the second battle that fall.

Insurgency and counterinsurgency meant a new kind of enemy and a new kind of fighting. The insurgents hid in the shadows, wore no uniforms, used civilians as shields, and were in the business of malevolent terror. They were not afraid to die, observed no rules of engagement or sense of humanity in warfighting, and used the crudest methods to do the most explosive and lethal harm – then bragged about the innocent people they killed, including children. https://www.commdiginews.com/politics-2/history-insurgency-iraqs-enduring-defeat-counterinsurgency-fught-123192/

2010: The Battle for Sangin.

In October 2010, 3rd Battalion, 5th Marines (3/5) started clearing the Taliban insurgency from the Sangin District in the Helmand Province of Afghanistan.

Prior to that, a full third of all British casualties in Afghanistan had occurred in the Sangin District… Worn down by death and injury, the British gladly turned it over to U.S. Marines.

The Helmand Valley was significant geography: a fertile green zone where the opium drug trade was booming and the Taliban had nestled into a stronghold. The Sangin crossroads fingered out to the Kajaki Dam (electricity), Lashkar Gah (the Helmand capital), and Kandahar. Dominating Kandahar put the insurgents that much closer to the capital city, Kabul, just an RPG or two away from winning the war.

3/7, followed by 3/5 Marines, were deployed to clear districts, kill the insurgents that were killing them, stop the movement of weapons, ammunition and IED-making materials, open the roads, destroy weapons caches and insurgent hideouts, and carry out hundreds of other tasks. 3/5 lived within feet of the people who wanted them dead. The missions were measured out among platoons, squads, and teams placed on somewhat exposed, but strategically located, bases. Daily patrols demanded that each Marine take each step forward in exactly the same place as the man ahead, while sweepers tried to unearth the hidden bombs.

Marines earned the respect of the Taliban by showing force and taking the fight to them.

U.S. Marine Corps accomplishments and sacrifices merit an immortal place in America’s history.

Similar observations can be made about the enduring, courageous character of a U.S. Marine in many other legendary battles not mentioned. All of which should be sobering reminders that victorious ends have come about through bloody means.

These points of pride left indelible marks on the Marines fighting in them – not merely celebratory and triumphant accolades, but a sense of the harsh, inescapable reality that the environment of freedom can only exist when there are forces to sustain it. Marines need not prove who they are – their results do.

They are not only fearsome and lethal when they need to be, but also serve humanitarian missions in natural disasters and disease outbreaks. They adapt to the calls of their country in any clime or place. Marines build vital global relationships with partners and allies who look to them for training. Because of their generous nature, they have helped other nations build their armies and security forces to be ready, like them, for future conflicts. Marines have been one of the most enduring elements of the 20th and 21st centuries, shaping the world by winning its conflicts, securing stability, and building relationships among allies.

“The Army specialty is land warfare, the Air Force is sky space dominance, the Navy has the seas, and the Marine Corps focus on that amphibious ocean to [expeditionary] land aspect. WWII is a great example…if it’s impossible; Marines have found ways to do it.” (GySgt. Jon Holmes)

The Revolutionary War was the beginning of it. Even then they were looking at more than just sailors on ships. America’s various struggles and successes winning independence and sustaining it were and are transformational on the world stage. The Goliath, Britain, underestimated its David, America, slinging her way to freedom to become a shining example to all nations. America’s first Marines helped her do it and have been serving her best interests through generations.

Jeanne McKinney is an award-winning writer whose focus and passion is our United States active-duty military members and military news. Her Patriot Profiles offer an inside look at the amazing active-duty men and women in all Armed Services, including U.S. Marine Corps, Navy, Army, Air Force, Coast Guard, and National Guard. Reporting includes first-hand accounts of combat missions in Iraq and Afghanistan, the fight against violent terror groups, global defense, tactical training and readiness, humanitarian and disaster relief assistance, next-generation defense technology, family survival at home, U.S. port and border protection and illegal interdiction, women in combat, honoring the Fallen, Wounded Warriors, Military Working Dogs, Crisis Response, and much more. McKinney has won twelve San Diego Press Club “Excellence in Journalism Awards,” including seven First Place honors.


Guest Essayist: Joshua Schmid

The Making of King Cotton: Eli Whitney’s Dogged Inventiveness

When one considers the great inventors of nineteenth-century America, few surpass Eli Whitney in both personal tenacity and the broader impact their works had on industrialization. Born on December 8, 1765 in Massachusetts, Whitney came of age during the American Revolution. As a boy, he enjoyed tinkering in his family’s workshop. One story tells that he stayed home on a Sunday to take apart his father’s watch while the rest of his family went to church. At the age of 12, Whitney created a violin—an incredible feat at such a young age. However, his tinkering was no mere hobby. The American Revolution was taking a toll on the colonial economy as men left their jobs to fight in the war or dedicated their trades to creating military equipment. Whitney heard that farmers around his home needed nails, and he soon created a forge to meet the demand.

After the American Revolution, Whitney studied and worked hard to pass entrance exams and pay to attend Yale. His intelligence and skills caught the eye of officials at the college, one of whom helped Whitney secure a job as a tutor in the South. The young man left his home area for South Carolina after graduation, but the job ended up falling through. Fortunately, Whitney had met Catherine Greene, the widow of Revolutionary War General Nathanael Greene, on his way south and accepted a position on her plantation in Savannah, Georgia.

On the Greene plantation, the New Englander witnessed the troubles of Southern agriculture first-hand. The strain of cotton that was best adapted to grow throughout the South—known as green-seed cotton—was high quality but contained many seeds within the fiber. These seeds needed to be removed before the raw cotton could be sent to textile mills. Whitney discovered that no effective machinery existed to remove the seeds, requiring laborers to do the work by hand. The young man immediately set about finding a viable solution, recognizing that a machine that could minimize the workload of removing seeds would be hugely profitable. Whitney partnered with Phineas Miller, the manager of the plantation, in developing the machine. The two quickly produced a crude model of what became known as the cotton gin. The machine used comb-like teeth to draw the cotton fiber through a mesh whose openings were too narrow for the seeds to pass, so the fiber was pulled free while the seeds were left behind.

In October 1793, Whitney wrote to Secretary of State Thomas Jefferson to apply for a patent for his invention. Jefferson was very intrigued by the device, replying, “As the state of Virginia, of which I am, carries on household manufactures of cotton to a great extent, as I also do myself, and one of our great embarrassments is the clearing the cotton of the seed, I feel a considerable interest in the success of your invention.” Whitney and Miller received a patent in March 1794 and developed a plan to travel to plantations to use the gin on farmers’ cotton in exchange for a portion of the completed supply. However, the cotton gin was not difficult to replicate, and many began to steal the design for themselves despite the patent. A variety of litigation cases followed as Whitney attempted to uphold his patent rights. However, a loophole in the laws at the time prevented the inventor from winning any of his court battles until 1807, at which point only a single year remained on his patent. He pleaded to Congress for a patent renewal, requesting in a letter to be “admitted to a more liberal participation with his fellow citizens, in the benefits of [the cotton gin],” but was denied on two separate occasions.

The cotton gin inadvertently increased demand for slavery in the South to meet the demands for the production of “King Cotton,” increasing sectional tensions in the country over the issue of human bondage. Whereas many of the founders believed that slavery would die a natural death because it was being restricted to the South where tobacco stripped the land of nutrients, the cotton gin helped breathe new life into the “peculiar institution.” Cotton and slavery spread to the new southern states in the early nineteenth century, and the scourge of human bondage in the country would exist until after the Civil War.

Joshua Schmid serves as a Program Analyst at the Bill of Rights Institute.


Guest Essayist: John Steele Gordon

Before independence there was little financial activity in the American colonies. Britain had forbidden the creation of banks, and there were no corporations to issue stocks and bonds. And while merchants kept their books in pounds, shillings, and pence, the actual money supply was a hodgepodge of foreign coins, warehouse certificates, scrip, and IOUs.

With independence, that began to change. The Continental Congress issued paper money, called “continentals,” in massive amounts and this fiat money quickly depreciated into near worthlessness. Congress and the various states also issued bonds to help pay the costs of the Revolution. In 1782 the first bank was organized in Philadelphia. By 1790 there were three in operation under the new Constitution that had come into effect the previous year.

And Alexander Hamilton, the first Secretary of the Treasury, pushed through the new Congress a bill for the federal government to assume the debts of the states and to refund its own debt with bonds that were backed by revenue from the new tariff.

He also established the new Bank of the United States, modeled on the Bank of England to act as a central bank. It was to regulate the money supply, provide discipline for state-chartered banks, act as a depository for government funds, and loan money to the government.

Suddenly there were financial instruments—federal and state bonds as well as stock in new banks and insurance companies to be traded—and brokers began to do so.

A broker is someone who brings buyers and sellers together and takes a commission on the sale price. Only relatively recently has the word, unmodified, come to mean someone who handles financial instruments. In the 1790s, New York brokers were often involved in a number of different areas. A broker might be a partner in a private bank, sell insurance, or run a private lottery, as well as handle securities trading.

While state and federal bonds were the meat and potatoes of the new securities brokerage, the “hottest” security was the stock of the new Bank of the United States, capitalized at $10 million, a vast sum for that time and place. Twenty percent of the stock was to be held by the government while the rest was to be offered to the public.

Trading in the stock began on a when-issued basis in 1791. When it was issued in July of that year, it sold out almost immediately and began to rise, setting off the country’s first bull market. Short sales (the sale of borrowed stock in hopes of a decline in price), and puts and calls (the right to sell or buy a security at a certain price before a certain date) began at this time, greatly increasing the speculative possibilities.
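For readers unfamiliar with these instruments, a simple illustration may help (the figures are purely hypothetical, not historical prices). A call on Bank stock at $400 a share gives its holder the right to buy at $400 before the expiry date; if the price climbs to $450, exercising yields a $50 gain per share, less whatever was paid for the option. A put is the mirror image, paying off when the price falls below the agreed figure. A short seller likewise profits from a decline: he sells a borrowed share at $450 today, buys it back at $400 later to return it, and pockets the $50 difference – or absorbs the loss if the price rises instead.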

Early trading took place in coffee houses and taverns (as well as on the street in good weather), but brokers also began holding auctions in their offices. In early 1792, John Sutton, his partner Benjamin Jay, and several others decided to form a central auction at 22 Wall Street. Sellers would deposit the securities they wanted to sell, buyers would attend the auction, and the auctioneers would take a commission on the sale price.

But the system soon collapsed, as brokers would attend the auctions just to learn what the prices were and then offer the same securities elsewhere at a lower commission.

To fix that problem, on May 17th, 1792, a group of men gathered beneath a buttonwood tree (today, such trees are called sycamores) outside of 68 Wall Street and signed an agreement. (There is some doubt as to whether the agreement was actually signed beneath that tree, but it became a beloved Wall Street icon until it fell in a storm on June 14, 1865.)

The agreement read as follows: “We the Subscribers, Brokers for the Purchase and Sale of the Public Stock, do hereby solemnly promise and pledge ourselves to each other, that we will not buy or sell from this day for any person whatsoever, any kind of Public Stock, at a less rate than one quarter percent Commission on the Specie value and that we will give preference to each other in our Negotiations. In Testimony whereof we have set our hands this 17th day of May at New York, 1792.”

This is often regarded as the origin of the New York Stock Exchange, although the exchange would not be formally organized and given a constitution for another quarter of a century. Basically, the Buttonwood Agreement was a price-fixing arrangement among brokers not to undercut each other on commissions. And fixed commissions remained a feature of the Wall Street financial market until 1975, when the Securities and Exchange Commission abolished them, forcing brokers to compete in terms of price. The result was greatly reduced commissions and, therefore, greatly increased volume, bringing today’s Wall Street into being.
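To make the quarter-percent floor concrete (a hypothetical trade, for arithmetic only): on a sale with a specie value of $1,000, the minimum commission under the Buttonwood Agreement would be 0.0025 × $1,000 = $2.50.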

John Steele Gordon was born in New York City in 1944 into a family long associated with the city and its financial community. Both his grandfathers held seats on the New York Stock Exchange. He was educated at Millbrook School and Vanderbilt University, graduating with a B.A. in history in 1966.

After college he worked as a production editor for Harper & Row (now HarperCollins) for six years before leaving to travel, driving a Land-Rover from New York to Tierra del Fuego, a nine-month journey of 39,000 miles. This resulted in his first book, Overlanding. Altogether he has driven through forty-seven countries on five continents.

After returning to New York he served on the staffs of Congressmen Herman Badillo and Robert Garcia. He has been a full-time writer for the last twenty years. His second book, The Scarlet Woman of Wall Street, a history of Wall Street in the 1860’s, was published in 1988. His third book, Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt, was published in 1997. The Great Game: The Emergence of Wall Street as a World Power, 1653-2000, was published by Scribner, a Simon and Schuster imprint, in November, 1999. A two-hour special based on The Great Game aired on CNBC on April 24th, 2000. His latest book, a collection of his columns from American Heritage magazine, entitled The Business of America, was published in July, 2001, by Walker. His history of the laying of the Atlantic Cable, A Thread Across the Ocean, was published in June, 2002. His next book, to be published by HarperCollins, is a history of the American economy.

He specializes in business and financial history. He has had articles published in, among others, Forbes, Forbes ASAP, Worth, the New York Times and The Wall Street Journal Op-Ed pages, the Washington Post’s Book World and Outlook. He is a contributing editor at American Heritage, where he has written the “Business of America” column since 1989.

In 1991 he traveled to Europe, Africa, North and South America, and Japan with the photographer Bruce Davidson for Schlumberger, Ltd., to create a photo essay called “Schlumberger People,” for the company’s annual report.

In 1992 he was the co-writer, with Timothy C. Forbes and Steve Forbes, of Happily Ever After?, a video produced by Forbes in honor of the seventy-fifth anniversary of the magazine.

He is a frequent commentator on Marketplace, the daily Public Radio business-news program heard on more than two hundred stations throughout the country. He has appeared on numerous other radio and television shows, including New York: A Documentary Film by Ric Burns, Business Center and Squawk Box on CNBC, and The News Hour with Jim Lehrer on PBS. He was a guest in 2001 on a live, two-hour edition of Booknotes with Brian Lamb on C-SPAN.

Mr. Gordon lives in North Salem, New York. His email address is jsg@johnsteelegordon.com.


Guest Essayist: Gary Porter

In 1789, James Madison spoke on the House floor to introduce amendments to the U.S. Constitution, attempting to persuade Congress that a Bill of Rights would protect liberty and produce unity in the new government. Though opposed to a Bill of Rights at first, Madison held that the rights of mankind were built into the fabric of human nature by God, and that government had no power to alienate an individual’s rights. Having witnessed the states violating those rights, Madison realized that in order to safeguard America’s freedoms, Congress needed to remain mindful of its role and never take a position of power by force over the people it serves.

There was probably no American more interested in what was taking place in Richmond, Virginia that brisk December morning in 1791 than James Madison. Christmas had come and gone and now Congress entered the last week of the year. The second-term U.S. Congressman from Virginia’s 5th Congressional District could only sit patiently in Mrs. House’s boarding establishment in Philadelphia and wait for a dispatch-rider carrying news from his home state. It must have been frustrating.

In the waning days of 1789, the Virginia Senate had outright rejected four of the proposed amendments, an ominous sign. Ratification by ten states was necessary (the admission of Vermont in March 1791 bumped that up to eleven), and Virginia’s rejection did not bode well for Madison’s “summer project.”

Madison had single-handedly pushed the proposed amendments through a reluctant Congress during the summer of 1789, a Congress understandably focused on building a government from scratch. But push them through he did; a promise had to be kept.

Madison’s successful election to the First Congress under the new Constitution (by a mere 336 votes) had been largely due to a promise the future fourth President made to the Baptists of his native Orange County. “Vote for me and I’ll work to ensure your religious liberty is secured, not just here in Virginia but throughout the United States.” And vote for him they had.

Upon taking his seat in the First U.S. Congress, then meeting in New York,[1] “Jemmy” had encountered the ratification messages of the eleven states which had joined the new union. North Carolina and Rhode Island would join as well, eventually. Several of these ratification messages contained lengthy lists of proposed amendments, which became Madison’s starting point. He whittled down the list, discarding duplicates and those with absolutely no chance of success, and submitted nineteen proposed amendments to Congress. These were “wordsmithed” and combined, some good ones inexplicably discarded, and the lot reduced further to twelve, which were finally approved and submitted to the states for ratification on September 28, 1789.

Three states (New Jersey, Maryland and North Carolina) quickly ratified almost all of the amendments before the end of the year. [2] South Carolina, New Hampshire, Delaware, New York, Pennsylvania and Rhode Island ratified different combinations of amendments in the first six months of 1790. And then things came grinding to a halt. The remaining four states would take no further action for more than a year.

Massachusetts, Connecticut and Georgia were dragging their feet. The three states did not fully ratify what we know today as the Bill of Rights until its sesquicentennial in 1939! The new state of Vermont ratified all twelve articles in early November, 1791, but Congress would not learn of that for two months. Who would provide the ratification by “three-fourths of the said Legislatures” needed to place the proposed amendments into effect?

Here at the end of 1791, things were finally looking promising. On November 14th, President George Washington informed Congress that the Virginia House of Delegates had ratified the first article on October 25th, agreed to by the Senate on November 3rd. But what about the remaining eleven articles? Was that it? Although now ratified by Virginia, this first article still fell short of the eleven state ratifications required, so Virginia’s action had no immediate effect.

Unbeknownst to Madison and the rest of Congress, on December 5th, the Virginia House of Delegates had ratified “the second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth articles of the amendments proposed by Congress to the Constitution of the United States.” Ten days later, the Virginia Senate concurred. It would be another seven days before Assembly President Henry Lee sent off the official notice of his state’s ratification to Philadelphia and several more days before it arrived.  On December 30th, President Washington informed Congress of Virginia’s action.

But wait. Vermont’s ratification had still not made its way from that northernmost state. Congress pressed on with other urgent matters. Finally, on January 18, 1792, Vermont’s ratification arrived. With it, Congress realized that Virginia’s December ratification had indeed placed ten of the amendments into operation.

The rest of America symbolically shrugged its shoulders and went about its affairs. In the words of historian Gordon S. Wood, “After ratification, most Americans promptly forgot about the first ten amendments to the Constitution.”[3] It would be nearly 70 years before Americans even began referring to these first ten amendments as a “Bill of Rights.” Today, we seek their protections frequently, and vociferously.  Bravo, Mr. Madison!

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


[1] Congress moved from New York to Philadelphia between August and December of 1790.

[2] New Jersey did not ratify Article Two until 1992.

[3] Quoted in Richard E. Labunski, James Madison and the Struggle for the Bill of Rights (Oxford University Press, 2006), p. 258.

Guest Essayist: James D. Best

On September 25, 1789, the First Congress sent a group of amendments to the states for ratification. Seventeen amendments had been approved by the House, the Senate trimmed the list to twelve, and ten ended up being ratified by the states to become our revered Bill of Rights. With Virginia’s ratification on December 15, 1791, the first ten amendments were incorporated into the supreme law of the land.

Bills of rights were not new at the time of the Founding. The Magna Carta of 1215 and the English Bill of Rights of 1689 were early examples, and many American states had previously enacted declarations of rights into their state constitutions. Although the original Constitution did not include a Bill of Rights, the base document included a few rights interspersed throughout the text. The writ of habeas corpus could not be suspended—except when the country was under attack; no bill of attainder or ex post facto law could be passed at the national or state level; Americans were guaranteed a jury trial for criminal cases; there could be no religious test for federal office; no state law could impair the obligation of contracts; and the citizens of each state were entitled to the privileges and immunities of the citizens of every other state.

Individual rights were not a significant issue during the Constitutional Convention, but a Bill of Rights certainly became a major issue during ratification. The clamor for a Bill of Rights was an antifederalist political weapon against ratification. For many antifederalists, the real objection was that the Constitution gave too much power to the national government. That argument floundered, however, while the demand for a bill of rights gained enormous traction, so prominent antifederalists made vocal and repeated demands for one.

Despite the clamor for a Bill of Rights, most Federalists continued to insist that one was not needed because the national government’s powers were restricted, and most state constitutions already possessed declarations of rights. As Hamilton explained in Federalist 84, “Why declare that things shall not be done which there is no power to do?”

James Madison’s support for a bill of rights became crucial. At first, he objected, then became unsure, and finally became a forceful advocate. He came to believe that a Bill of Rights had become a political necessity. In his speech on June 8, 1789, he said, “It may be thought all paper barriers against the power of the community are too weak to be worthy of attention … yet, as they have a tendency to impress some … it may be one mean to control the majority from those acts to which they might be otherwise inclined.” Madison became a strong advocate for these amendments, but as these words reflect, he remained ambivalent philosophically.

Despite a modern perception that the first ten amendments bestow rights, it’s clear that the Bill of Rights is really a list of government prohibitions. The Founders did not believe in government benevolence and would never have accepted government as the arbiter of rights. Here are some of the restrictive clauses used in the first eight amendments:

Congress shall make no law

shall not be infringed

without the consent

shall not be violated

nor shall be compelled

the accused shall enjoy

nor be deprived

no fact tried by a jury, shall be otherwise re-examined

shall not be required

These phrases make clear that the Bill of Rights is a restraining order issued by the people against the national government. Natural rights are endowed by the Creator and the government is enjoined from interfering with these rights.

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” –Declaration of Independence

Because the Founders feared that a Bill of Rights might impede liberty due to sins of omission, the 9th Amendment provided that, “The enumeration in the Constitution of certain rights, shall not be construed to deny or disparage other rights retained by the people.” The 10th Amendment further stated that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”

These simple fifty words encapsulated the political philosophy of the Founders. Rights are not bestowed by the government, they are “endowed by their Creator” and reside with the people, and liberty depends on government operating within the restriction of enumerated powers delegated by a sovereign people.

Through the years, this sound philosophy has been diminished. The Supreme Court has succeeded in setting itself up as the arbiter of rights, so much so that many people have come to view government—specifically the Supreme Court—as the grantor and guarantor of rights. As the 9th Amendment states, rights exist that are not included in the first eight amendments, but the proper way to secure these rights from government interference is through laws at the state or national level or through the amendment process.

James D. Best is the author of Tempest at Dawn, a novel about the 1787 Constitutional Convention; Principled Action: Lessons From the Origins of the American Republic; and the Steve Dancy Tales.

Guest Essayist: Scot Faulkner

Alexander Hamilton was America’s first Chief Operating Officer (COO).

Along with James Madison, Hamilton crafted the best operating system ever devised in human history. The U.S. Constitution provided a framework for sharing power and resolving differences. Madison and Hamilton provided details for operationalizing the Constitution with their Federalist Papers essays. These Papers remain integral to interpreting original intent in court cases to this day.

America was blessed with George Washington, the most indispensable person in our nation’s history. However, Washington needed to augment his phenomenal leadership skills with Hamilton’s management acumen. During the American Revolution, Hamilton translated Washington’s military strategy into clear and concise orders to his commanders. Now as President, Washington needed Hamilton to translate the Founders’ vision, and his policies, into reality.

With Jefferson still conducting diplomacy in Europe, Hamilton was not just the first Treasury Secretary; he effectively functioned as Washington’s “prime minister.” Decisions and documents, down to minute detail, flowed from Hamilton’s pen, creating the Executive Branch.

Hamilton’s love of administrative detail was matched by his devotion to commerce.

He was the only “modern man” among the Founders. Hamilton grew up outside the American colonies and had a full appreciation of how nations interacted. As an accounting clerk for various trading companies in the West Indies, Hamilton developed a deep understanding of the inner workings of international trade and finance. He was America’s first “capitalist.” The systems and institutions he put in place laid the foundation for America becoming the greatest economic power in the world.

Hamilton’s greatest achievement was managing the onerous debts arising from the Revolutionary War. Each state had incurred debt, as its militia needed to be paid back wages. Both national and state soldiers were paid in bonds or “IOUs.” After the war, many cash-strapped soldiers sold these bonds/“IOUs” to speculators for a fraction of their worth. Countless suppliers of the armed forces sued for nonpayment. The paper currency issued during the war was “not worth a Continental,” and legions of war veterans, farmers, merchants, and craftsmen (like blacksmiths, barrel makers, and carpenters) demanded payment, declaring their Continental scrip to be “IOUs.”

The total debt was $79 million: $54 million owed by the national government and $25 million owed by the states. Hamilton saw repayment of this debt as a strategic and moral imperative: “States, like individuals, who observe their engagements are respected and trusted, while the reverse is the fate of those who pursue an opposite conduct.”

Without a debt repayment strategy, the IOUs and lawsuits would continue to cripple America’s economy with unbridled speculation and uncertainty. Trust in the Federal government’s ability to meet its obligations had to be restored. Something had to be done. Hamilton declared, “In nothing are appearances of greater moment than in whatever regards credit.”

Repayment of debts would allow America to enter into international agreements, borrow funds for investing in business ventures, and stimulate economic growth. Hamilton observed that the American economy was stagnating from a limited money supply, deflation of land values, and a lack of liquid capital. He also was concerned that if America were seen as financially broke and politically fragmented, foreign governments might lure individual states with separate debt financing arrangements.

The solution was to consolidate all public debt and set aside some of the steady federal revenue to service the interest and pay off the principal. These were revolutionary and futuristic concepts in 1790.

It was his conviction that, “an assumption of the debts of particular states by the union and a like provision for them as for those of the union will be a measure of sound policy and substantial justice.”

Hamilton determined that consolidating all the Revolutionary War debt would accomplish several things. [1] It would bring order from chaos with one large debt instead of thousands of smaller ones. [2] It would simplify the management and repayment of the debt. [3] It would establish loyalty among the creditors and bond/IOU holders who would promote the stability and success of the federal government to assure their claims were paid.

Another aspect of Hamilton’s solution rested on the fact that the U.S. Constitution gave the federal government the exclusive right to collect import duties. The federal government assuming state debt would prevent states from trying to return to the days of the Articles of Confederation, when states levied duties on interstate commerce. Hamilton wanted to unify America and forge a national economy.

The critical element in assuming all debt was to have a unified America attract foreign investment through issuing federal government bonds. Such bonded debt would create investment partners who would forge trade relationships that allowed the U.S. Government to raise the necessary revenue to meet its debt obligations.  Hamilton sought to create a web of economic loyalties and relationships that bound everyone to supporting everyone’s economic wellbeing. In doing so, Hamilton would establish America as a major player in the modern international financial system.

Hamilton’s vision, and how to implement it, was at the core of his fifty-one-page “Report on Public Credit” to the Congress. It was his hope that Congress would pass the necessary legislation to authorize implementation of this integrated plan. Any additions or subtractions would ruin his delicate balance between the various economic interests. Hamilton worried, “Credit is the entire thing. Every part of it has the nicest sympathy with every other part. Wound one limb and the whole tree shrinks and decays.”

Many in Congress rejected the plan as confusing and overly complex. Some saw it as too much like the way England financed its wars. Others declared it a bailout for speculators. Even Madison opposed it. As the assumption plan related to spending, its first test was in the House of Representatives.

The House debate was a sensation. Packed galleries watched Madison rail against the plan as a “betrayal of the American Revolution.” Hamilton, a member of the Executive Branch, mustered his votes behind the scenes. On April 12, 1790, the House defeated the debt assumption plan: 29 ayes to 31 nays.

The death of debt assumption found resurrection in the future location of the nation’s capital. Hamilton and northerners wanted the capital to remain in New York or return to Philadelphia. Southerners wanted it in the South and located outside an existing urban area. Jefferson saw this as a struggle between his vision of an agrarian nation versus the grime of industry. Madison and Henry Lee had purchased land along the Potomac River in the hopes that Jefferson would prevail. All sides wanted a final decision on the future of the Nation’s Capital, and symbolically the character of the nation. To break the stalemate, the key players, Jefferson, Hamilton, Madison, and several others gathered for dinner on June 20, 1790, at Jefferson’s townhouse in New York City.

After much food, libations, and discussion a deal was struck. The Nation’s Capital would be along the banks of the Potomac between Georgetown in Maryland and Alexandria in Virginia. In exchange for Hamilton convincing northerners to support this location, Jefferson and Madison would support passage of Hamilton’s Debt Assumption plan.

On July 10, 1790 the House passed the Residence Act moving the temporary Capital back to Philadelphia and designating a ten-square mile area along the Potomac as the permanent Capital. The House then passed the Assumption bill on July 26. The Senate approved the plan on August 4, 1790.

Senator Daniel Webster placed Hamilton’s achievement into historical perspective:

“The fabled birth of Minerva from the brain of Jove was hardly more sudden or perfect than the financial system of the United States as it burst forth from the conception of Alexander Hamilton.”

Scot Faulkner is Vice President of the George Washington Institute of Living Ethics at Shepherd University. He was the Chief Administrative Officer of the U.S. House of Representatives. Earlier, he served on the White House staff. Faulkner provides political commentary for ABC News Australia, Newsmax, and CitizenOversight. He earned a Master’s in Public Administration from American University, and a BA in Government & History from Lawrence University, with studies in comparative government at the London School of Economics and Georgetown University.

Guest Essayist: Tony Williams

On September 11, 1789, the Senate confirmed President George Washington’s appointment of Alexander Hamilton as Secretary of the Treasury. Hamilton wasted no time: he worked through the weekend to address immediate financial concerns and then spent the next few years formulating the financial policies for nation-building in the new republic.

As one of the primary authors of the Federalist and as a key delegate to the New York Ratifying Convention, Hamilton had been instrumental in winning ratification of the new Constitution, which strengthened the national government. During the 1790s, he would use the constitutional authority of that new government to build a lasting republic.

Hamilton’s visionary financial plan was the foundation of his nation-building in the 1790s. He wanted to establish the credit of the United States and encourage economic growth through a national bank. While he wanted to support a strong manufacturing base, he sought to integrate merchants, artisans, planters, farmers, and shippers from different sections of the country into a unified national economy. Strong economic growth would allow the young republic to build a strong national security state to survive in a world of contending empires. He believed that the soundness of the nation’s finances was essential for American prosperity and political stability, as well as for the national honor and future greatness.

The first part of the plan was to remedy the teetering financial footing of the new nation. President Washington thought the issue was of central importance to the new nation as he told Congress in his first State of the Union address. He said it was a “measure in which the character and permanent interests of the United States are so obviously and so deeply concerned.” In the fall, Congress had requested that the new treasury secretary submit a Report on Public Credit, which Hamilton did on January 14, 1790.

In the report, Hamilton wrote that the public debt totaled an estimated $79 million. He thought that it was a matter of national honor and natural law that the United States meet its financial obligations and the “punctual performance of contracts.” Practically, the good faith and respectability of the country was at stake.

A solid public credit in Hamilton’s estimation would result in many benefits. It would restore confidence in the United States. The country would enjoy lower interest rates and borrow on easier terms, freeing up capital for productive investment. The public credit would encourage domestic and foreign trade and thereby prosperity for all sectors of the economy. The public credit would “cement more closely the union of the states” and provide “security against foreign attack.”

The plan aroused a significant amount of opposition. The first major controversy was that some states had paid their Revolutionary War debts and others had not. Another source of contention was that many veterans had been paid in Continental securities but had sold the certificates when wartime inflation caused their value to drop. Speculators had bought them for ten or twenty cents on the dollar and would seemingly gain from gambling on the “distresses” of the soldiers.

Hamilton wanted to redeem the certificates of the current holders of the debt as a matter of contracts and justice. He also had a plan for the “assumption” of the state debts by the national government. He thought the costs of the war should be shared equally by all and wanted to empower the national government to collect the revenue to extinguish the debt gradually. He thought that “the proper funding of the present debt, will render it a national blessing” because it would restore the public credit and promote the productive engines of the American economy.

James Madison helped lead the opposition to the plan in the House of Representatives. He was particularly concerned by what he considered to be injustice against the Revolutionary War veterans who were supposedly victims of speculators. He also thought that a “public debt is a public curse.” Madison and other members of Congress such as Senator William Maclay and Representative James Jackson used revolutionary ideology to criticize the proposal as encouraging rapacious speculators, vice, corruption, and political centralization that threatened republican self-government.

In late June, Thomas Jefferson hosted a dinner for Hamilton and Madison at which they helped hammer out the Compromise of 1790: Hamilton won his financial plan, and southerners won a capital on the Potomac, the future Washington, D.C. In July, after much debate and controversy, Congress passed his plan for the federal government to assume the Revolutionary War debts of the states, along with the tariffs and excise taxes he wanted in order gradually to extinguish the debt.

In December, Hamilton submitted another major part of his financial vision for the country with his Report on a National Bank. Congress passed the national bank bill more easily; the bank would circulate currency and lend money to promote economic growth. Washington was unsure of the constitutionality of the bank and solicited opinions from his cabinet because he took seriously his presidential duty only to sign bills that were constitutional. He sided with Hamilton’s more expansive view of the Necessary and Proper Clause, which held that the bank was related to several other congressional powers in Article I, Section 8.

In a few short years, Hamilton’s program was vindicated by a thriving, dynamic economy. Hamilton successfully used the federal government to provide the stability and order in the financial system that allowed individuals to thrive in the private free market. In the 1790s, Hamilton and Washington established the finances of the new nation and shaped the American regime of republican liberty and self-government.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Craig Bruce Smith

From Federal Hall in New York City, the United States capital, Congress passed one of the earliest acts of the seven-month-old federal government: a pivotal piece of legislation for the defense of the new nation and its people.

Passed on September 29, 1789, and approved by President George Washington, the act legally formalized a national army. In so doing, the members of Congress formally set aside the deep Anglo-American fear of a standing army assuming dictatorial control.

Technically, this moment could be considered the birth of the modern US Army, as it occurred under the US Constitution of 1787 that we still follow today.[1] However, there were older variants of the army under the Continental Congress and the Articles of Confederation. So when was the US Army actually born?

Wordily titled, “An Act to recognize and adapt to the constitution of the United States, the establishment of the troops raised under the resolves of the United States in Congress assembled and for other purposes,” the 1789 bill voted into tangible existence the military alluded to in the US Constitution.[2] Ratified only the previous year, the Constitution frequently refers to war (and to a navy and militias), yet it never clearly creates an army. Article I, Section 8 empowers Congress “to raise and support Armies,” while Article II, Section 2, which outlines the powers of the executive branch, merely mentions: “The President shall be Commander in Chief of the Army and Navy of the United States, and of the Militia of the several States, when called into the actual Service of the United States.”[3]

Though legally created in 1789, the US Army’s true roots lie earlier. The colonies had long embraced a tradition of local militias and maintained the deep fear of standing armies that had haunted the British Empire since the time of Oliver Cromwell and the English Civil War. But in the American Revolution, an army became a necessity to defend the people against British hostility.

On April 19, 1775, fighting between the British army and colonial militias broke out in Massachusetts at Lexington and Concord. It didn’t take long before Massachusetts patriot and future major general Dr. Joseph Warren wrote to the newly convened Second Continental Congress sitting in Philadelphia that British designs “to ruin and destroy the inhabitants of this colony” had made “the establishment of an army indispensably necessary.” Warren pleaded with Congress “that a powerful Army, on the side of America” was “the only mean left to stem the rapid Progress of a tyrannical Ministry. Without a force, superior to our Enemies, we must reasonably expect to become the Victims of their relentless fury.”[4] Despite Warren clearly linking the fighting in his home colony to the “Cause of America,” there was still debate about committing military support and actually raising an army. Was this really America’s war?

As colonial militias became a “New England Army” and the city of Boston was besieged, Congress considered petitions of peace and questions of independence, while it also made recommendations to logistically aid the colonial forces in Massachusetts and encouraged other colonies to do the same.[5] The term Continental Army was used as early as June 3, 1775, though no such organization formally existed.[6] Men like Warren and Massachusetts delegate John Adams had been adamant that the New England militias needed Congressional support. Warren wrote, clearly understanding Anglo-American fears of the unchecked authority of a standing army, “we tremble at having an army (although consisting of our countrymen) established here without a civil power to provide for and control them.”[7]

Finally, on June 14, 1775, Congress created the Continental Army out of a growing sense of unity and necessity. Congress also created an oath for soldiers and officers that placed the army under civilian governmental control: “I have, this day, voluntarily enlisted myself, as a soldier, in the American continental army…And I do bind myself to conform, in all instances, to such rules and regulations, as are, or shall be, established for the government of the said Army.”[8]

A five-person committee (George Washington, Philip Schuyler, Silas Deane, Thomas Cushing, and Joseph Hewes) was promptly assembled to draft those rules and regulations. The next day, Colonel Washington, famed for his service in the French and Indian War, was “unanimously elected” and commissioned as the Continental Army’s commander-in-chief.[9] Though there were others who desired the post, Washington’s Virginian roots helped bridge a divide between the northern and southern colonies. Even while declaring his reluctance to accept, Washington donned his military uniform and appeared before Congress to declare he was ready and willing to serve. Washington affirmed he would submit himself to congressional authority and “enter upon the momentous duty, and exert every power I possess in their service and for the support of the glorious cause” of American liberty.[10]

Leaving immediately for Boston, Washington took command in Cambridge, Massachusetts, on July 3, 1775, shortly after the Battle of Bunker Hill. Almost from his first moment in command, Washington constantly promoted civilian supremacy over the army and demanded “a due observance of those articles of war, established for the government of the army.”[11] In surrendering his commission back to Congress at the conclusion of the war in 1783 (along with the near complete demobilization of the army itself), he alleviated fears of a dictator backed by a standing army and firmly established the legacy of civilian supremacy. This is arguably the greatest moment in American history.

A year later, on June 3, 1784, Congress resolved under the Articles of Confederation (America’s first constitution) to create a peacetime Regular Army composed of only 700 men (also known as the First American Regiment) for the purpose of “securing and protecting the northwestern frontiers” after acquiring new lands from Britain in the Peace of Paris.[12] But the military proved ineffective under the Articles because the states had more power than the federal government. Partly in response to the failure of the government and its forces to suppress rural uprisings, such as Shays’ Rebellion in western Massachusetts, a new constitutional convention was called.

The 1789 congressional act continued the intent of the army as outlined in 1784 and 1775 as it was designed to “protect the inhabitant of the frontier” and “be governed by the rules and articles of war which have been established by the United States in Congress.”[13]

The act also created the oath of service that spelled out civilian supremacy and marked the military’s loyalty to the Constitution and the government — not to an individual: “…I will support the constitution of the United States…against all enemies…and to observe and obey the orders of the President.”[14] A variation of this oath still guides the Army today.[15]

A force of 700 soldiers unsurprisingly proved ineffective for the growing nation. In 1792, the army was again structurally reformed and enlarged as the Legion of the United States before finally adopting the name the “army of the United States” or US Army in 1796.[16]

So when was the Army as we know it today founded? 1775, 1784, 1789, 1792, or 1796? The answer depends on whether you take a literal or a spiritual interpretation.

Though legally created in 1789 under the current Constitution, the US Army instead traces its birth to the spirit of liberty of the American Revolution and the creation of the Continental Army. After all, it is these very ideals of liberty that continue to guide its soldiers and officers, and the entire nation.

In 1956, President Dwight Eisenhower, former five-star general and Supreme Allied Commander, signed an executive order for the creation of a US Army flag that prominently featured the date “1775” and declared it a “suitable design and appropriate for adoption.”[17] Today, the US Army continues to celebrate June 14, 1775 as its official “birthday” with much fanfare, memorials, military balls, and cake.[18]

Craig Bruce Smith is a historian and the author of American Honor: The Creation of the Nation’s Ideals during the Revolutionary Era. For more information, visit www.craigbrucesmith.com or follow him on Twitter @craigbrucesmith. All views are that of the author and do not represent those of the Federal Government, the US Army, or Department of Defense.

[1] For more work on the history of the US Army, also used as references throughout this article: Matthew S. Muehlbauer and David J. Ulbrich. Ways of War: American Military History from the Colonial Era to the Twenty-First Century. (New York: Routledge, 2014); Richard Stewart ED., American Military History. Volume I. (Washington, DC: US Army Center of Military History, 2009); David Hackett Fischer, Washington’s Crossing.  (New York: Oxford University Press, 2004), Ch. 1; Richard H. Kohn. Eagle and Sword: The Federalists and the Creation of the Military Establishment in America, 1783-1802. (New York: Free Press, 1975); Allen R. Millett and Peter Maslowski. For the Common Defense: A Military History of the United States. (New York: Free Press, 2012).

[2] “An Act to recognize and adapt to the constitution of the United States, the establishment of the troops raised under the resolves of the United States in Congress assembled and for other purposes,” 29 September 1789. https://history.army.mil/books/RevWar/ss/repdoc.htm

[3] US Constitution, 1787, https://www.ourdocuments.gov/doc.php?flash=false&doc=9&page=transcript

[4] Joseph Warren to the Continental Congress, 3 May 1775, Journals of the Continental Congress.

https://memory.loc.gov/cgibin/query/r?ammem/hlaw:@field(DOCID+@lit(jc0026)):

[5] John Adams, Autobiography, June-August 1775, Massachusetts Historical Society. https://www.masshist.org/digitaladams/archive/doc?id=A1_20&rec=sheet&archive=&hi=&numRecs=&query=&queryid=&start=&tag=&num=10&bc=/digitaladams/archive/browse/autobio1.php

[6] “Saturday, June 3, 1775” and “Saturday, June 10, 1775,” Journals of the Continental Congress.

[7] Joseph Warren to the Continental Congress, 16 May 1775, Journals of the Continental Congress.  https://memory.loc.gov/cgi bin/query/r?ammem/hlaw:@field(DOCID+@lit(jc00225)):; Adams, Autobiography.

[8] “Wednesday June 14, 1775,” Journals of the Continental Congress, https://memory.loc.gov/cgi-bin/query/r?ammem/hlaw:@field(DOCID+@lit(jc00235)):

[9] “Thursday June 15, 1775,” Journals of the Continental Congress, https://memory.loc.gov/cgi-bin/query/r?ammem/hlaw:@field(DOCID+@lit(jc00236)):

[10] George Washington, “Address to the Continental Congress,” 16 June 1775, Founders Online. https://founders.archives.gov/?q=%20Author%3A%22Washington%2C%20George%22%20Dates-From%3A1775-06-14&s=1111311111&r=7

[11] General Orders, 4 July 1775, Founders Online. https://founders.archives.gov/?q=%20Author%3A%22Washington%2C%20George%22%20Dates-From%3A1775-06-14&s=1111311111&r=26

[12] “Thursday, June 3, 1784,” Journals of the Continental Congress, Vol. 27, p. 530

[13] “An Act to recognize…the establishment of the troops…,”

[14] First Congress, Session I, Ch. 27, Resolutions, 1789, in The Public Statutes at Large of the United States of America (Boston: Little and Brown, 1845), pp. 95-96.

[15] US Army, “Oath of Enlistment,” https://www.army.mil/values/oath.html and “Oath of Commissioned Officers,” https://www.army.mil/values/officers.html

[16] A.J. Birtle, “The Origins of the Legion of the United States,” The Journal of Military History, Vol. 67, No. 4 (October 2003), pp. 1249-1261; Fourth Congress, Session I, 1796, https://www.loc.gov/law/help/statutes-at-large/4th-congress/c4.pdf

[17] Dwight Eisenhower, 12 June 1956, Executive Order 10670, National Archives. https://www.archives.gov/federal-register/codification/executive-order/10670.html

[18] John R. Maass, “June 14th: The Birthday of the U.S. Army,” US Army Center of Military History, https://history.army.mil/html/faq/birth.html; “Army Birthdays,” https://history.army.mil/faq/branches.htm

Guest Essayist: Joerg Knipprath
George Washington, delegate to the First Continental Congress; Commander-in-Chief of the Continental Army during the American Revolutionary War; first President of the United States; painting by Gilbert Stuart, 1796.

Two weeks after the death of George Washington on December 14, 1799, his long-time friend General Henry “Light Horse Harry” Lee delivered a funeral oration to Congress that lauded the deceased: “First in war, first in peace, and first in the hearts of his countrymen, he was second to none in the humble and endearing scenes of private life; pious, just, humane, temperate and sincere; uniform, dignified and commanding, his example was as edifying to all around him, as were the effects of that example lasting.” The Jeffersonian newspaper of Philadelphia, The Aurora, held a rather different opinion than those countrymen did. Sounding like his current counterparts in their sentiments about today’s President, the publisher declared on Washington’s retirement from office in 1797, “[T]his day ought to be a jubilee in the United States … for the man who is the source of all the misfortunes of our country, is this day reduced to a level with his fellow citizens.”

Each author likely could point to examples to buttress his case. Washington wore many hats in his public life, and his last service, as President from 1789 to 1797, had its share of controversies. Washington kept his private life just that, to the best of his ability, with the result that it soon became mythologized. In public, Washington was reserved (or “dull,” to his detractors), dignified (or “stiff,” to his detractors), and self-disciplined. Yet his usually even temper occasionally flared, and few were willing to risk provoking it. According to Samuel Eliot Morison, during the Philadelphia Constitutional Convention, Alexander Hamilton bet Gouverneur Morris a dinner that the latter would not approach Washington, slap him on the back, and say, “How are you today, my dear General?” Morris, the convention’s jokester, took the bet, but after the look that Washington gave him upon the event, professed that he would never do so again for a thousand dinners. Washington’s formality had its limits. A Senate committee proposed that the official address to the President should be, “His Highness the President of the United States of America and the Protector of the Rights of the Same.” The Senate rejected this effusive extravagance, and Washington was simply addressed as Mr. President.

On a later occasion, Morris wrote Washington, “No constitution is the same on paper and in life. The exercise of authority depends on personal character. Your cool, steady temper is indispensably necessary to give firm and manly tone to the new government.” This is a correct observation about constitutions in general: a formal charter, the “Constitution” as law, is not all that describes how the political system actually operates, that is, the “constitution” as custom and practice. The observation is particularly true of Article II of the Constitution of 1787, which establishes the executive branch and delineates most of its powers. While some of those powers are set out precisely, others are ambiguous, such as the “executive power” and “commander-in-chief” clauses.

In several contributions to The Federalist, most thoroughly in No. 70, Hamilton explained how the Constitution created a unitary executive. He stressed the need for energy and for clarity of accountability that comes from such a system. In No. 67, he ridiculed “extravagant” misrepresentations and “counterfeit resemblances” by which opponents had sought to demonize the President as a potentate with royal prerogatives. Still, it has often been acknowledged that the Constitution sets up a potentially strong executive-style government. Justice Robert Jackson in Youngstown Sheet & Tube Co. v. Sawyer in 1952 described the President’s real powers, “The Constitution does not disclose the measure of the actual controls wielded by the modern presidential office. That instrument must be understood as an Eighteenth-Century sketch of a government hoped for, not as a blueprint of the Government that is…. Executive power has the advantage of concentration in a single head in whose choice the whole Nation has a part, making him the focus of public hopes and expectations…. By his prestige as head of state and his influence upon public opinion, he exerts a leverage upon those who are supposed to check and balance his power which often cancels their effectiveness.” Washington was keenly aware of his groundbreaking role and used events during his time in office to define the constitutional boundaries of Article II and to shape the office of the President from this “sketch.”

Washington’s actions in particular controversies helped shape the contours of various ambiguous clauses in Article II of the Constitution. He shored up the consolidation of the executive branch into a “unitary” entity headed by the President and guarded its independence from the Congress. From the start, Washington was hamstrung by the absence of an administrative apparatus. The Confederation had officers and agents, but due to its circumscribed powers and lack of financial independence, it relied heavily on state officials to administer peacetime federal policy. The new Congress established various administrative departments, which quickly produced a controversy over the removal of federal officers. Would the President have this power exclusively, or would he have to receive Senate consent, as a parallel to the appointment power? The Constitution was silent. After much debate over the topic in the bill to establish the Departments of State and War, a closely-divided Congress assigned that power to the President alone. Some opponents of the law objected that the President already had that power as chief executive, and that the statute could be read as giving him that power only as a matter of legislative grace, to be withdrawn as Congress saw fit.

Even if this removal power was the President’s alone by implication from the executive power in Article II, the same analysis might not apply to other officers. Congress had been clear to note that the departments in the statute were closely tied to essential attributes of executive power, that is, foreign relations and control over the military during war. The position of the Treasury Secretary, on the other hand, was constitutionally much more ambiguous, given Congress’s preeminent role in fiscal matters. The Treasury Secretary was a sort of go-between who straddled Congress’s power over the purse and the President’s power to direct the administration of government. The law that created the Treasury Department required the Secretary to report to Congress and to “perform all such services relative to the finances, as he shall be directed to perform.”

This implied that the Secretary was responsible to Congress rather than the President. If followed for other departments, this would move the federal government in the direction of a British-style parliamentary system and blur the separation of powers between the branches. Washington resisted that trend, but his victory in the removal question was incomplete. It was not until the Andrew Jackson administration that the matter was settled. Jackson removed two Treasury Secretaries who had refused his order to transfer government funds from the Second Bank of the United States. While the Senate censured him for assuming unconstitutional powers, Jackson’s position ultimately prevailed and the censure was later rescinded. Still, controversy over the removal of cabinet heads without Senatorial consent flared up again after the Civil War with the Tenure of Office Act of 1867 and led to the impeachment of President Andrew Johnson in 1868. It was not until 1926 in Myers v. U.S. that the Supreme Court acknowledged the President’s inherent removal power over executive officers.

A matter of much greater immediate controversy during the Washington administration was the President’s Neutrality Proclamation in 1793. The country was in no position, militarily, to get between the two European powers fighting each other, Great Britain and the French Republic. To stave off pressure from both sides, and from their American partisans, to join their cause, Washington declared the United States to be neutral. Domestic critics charged that this invaded the powers of Congress. Hamilton, ever eager to defend executive power, wrote public “letters” under the appropriately clever pen name “Pacificus.” He set forth a very broad theory of implied powers derived from elastic clauses in Article II, primarily the executive power clause. In light of those powers and the President’s position as head of the executive branch, the President could do whatever he deemed necessary for the well-being of the country and its people, unless the Constitution expressly limited him or gave the claimed power to Congress. In this instance, until Congress declared war, Washington could declare peace.

Hamilton’s position made sense, especially as Congress met only a few weeks each year, while the President could respond to events more quickly. However, Hamilton did not go unchallenged. At the urging of Jefferson, a reluctant James Madison wrote his “Helvidius” letters that presented a much more constrained view of those same constitutional clauses. Hamilton’s asseverations have generally carried the day, although political struggles between Congress and the President over claimed executive excesses have punctuated our constitutional history and continue to serve as flashpoints today. Hamilton’s theory, and Washington’s application thereof, cemented the “unitary executive” conception of the presidency.

While generally silent on foreign affairs, the Constitution does address treaties. The power to make treaties was part of the federative power of the British monarch. Thus, at least from Hamilton’s perspective, the President could conduct foreign affairs and make treaties as the sole representative of the country. However, constitutional limits must be observed. Thus, the Senate has an “advice and consent” role. Originally, this was understood to require the President to consult with the Senate on negotiating treaties before he actually made one.

Washington tried this approach early in his administration. He and Secretary of War Henry Knox appeared before the Senate to discuss pending treaty negotiations with the Creek Indians. Rather than engaging the President and Knox, the Senate referred the matter to a committee. Washington angrily left, declaring, “This defeats every purpose of my coming here.” Twice more he sent messages to get advice on negotiations. Receiving no responses, Washington gave up even those efforts. Since then, Presidents have made treaties without prior formal consultation with the Senate. The Senate’s role now is to approve or reject treaties through its “consent” function. Of course, informal discussions with individual Senators may occur. The Senate’s similar formal advice role for appointments of federal officers likewise has atrophied.

Washington also used constitutional tools to participate effectively in domestic policy. For one, the Constitution obliged the President to deliver to Congress from time to time information on the state of the union and to recommend proposals. Washington used this opportunity for an annual report that he presented in person at the opening of each session of Congress. Presidents have continued this tradition, although, beginning with Jefferson, they no longer appeared personally until Woodrow Wilson revived the practice.

Another such tool was the President’s qualified veto over legislation. Though a potentially powerful mechanism for executive dominance, the veto was used sparingly by early Presidents. The controversy was over the permissible basis of a veto. Could it be used for any reason, such as political disagreement with the legislation’s policy, or only for constitutional qualms? Washington sympathized with the latter position, advocated by Jefferson. On that ground, he first vetoed an apportionment of the House of Representatives in 1792 that he believed violated the Constitution’s prohibition against giving a state more than one representative for every 30,000 inhabitants. Andrew Jackson eventually used the veto for purely political reasons, which has become the modern practice.

One more constitutional evolution that Washington set in motion involved government secrecy and the President’s right to withhold information from Congress and the courts, a doctrine known as “executive privilege.” It appears nowhere in the Constitution, but was recognized under the common law. There are two broad aspects to this doctrine. One is to protect the confidentiality of communications between the President and his executive branch subordinates. The other is to guard state secrets in the interest of national and military security. Again, under Hamilton’s implied powers, the President needs such privilege to carry out the duties of his office and to protect the independence of the executive branch. Two events during Washington’s administration gave an early shape to this doctrine.

In October, 1791, General Arthur St. Clair, the governor of the Northwest Territory, took 2,000 men, including the entire regular army plus several hundred militia, to build a fort to counter attacks by an alliance of Indian tribes supported by the British. On November 4, St. Clair’s force, down to about 920 from desertion and illness, was surprised by the Indians and suffered 900 casualties in the rout, the great majority of them killed. The Indians also killed some 200 camp followers, including wives and children, in what became the worst defeat of the American army by Indians. To no one’s surprise, the House ordered an inquiry and sought various documents from the War Department relating to the campaign.

Washington consulted his cabinet in what was perhaps the first meeting of the entire body. With the cabinet’s agreement, Washington refused to turn over most of the requested documents on the ground that they must be kept secret for the public good. Thus was the state secrets doctrine incorporated into American constitutional government. A committee in the House eventually exonerated St. Clair and blamed the rout on poor planning and equipping of the force. The defeat of St. Clair was reversed by General Anthony Wayne with a larger force of 2,000 regulars and 700 militia in August, 1794, at the Battle of Fallen Timbers. That victory produced a peace treaty, which ended the Indian threat.

The second event occurred when the House demanded that the administration disclose to it the instructions Washington had given to American negotiators regarding the unpopular Jay Treaty of 1794 with Great Britain. The President declined on grounds of confidentiality, relying on the Constitution’s placement of the treaty power in the President and Senate. The flaw with Washington’s argument was that the House had to appropriate funds required by the treaty. The House insisted on receiving the documents to carry out its constitutional appropriations function. Washington stood his ground, and the House grudgingly dropped the matter.

Any overview of the Washington administration requires at least a brief mention of the influence of Alexander Hamilton. Hamilton had long enjoyed Washington’s support, well before he became Secretary of the Treasury. His influence was well-earned. It is not uncommon for historians to refer to the United States of the 1790s as Hamilton’s Republic. Perhaps his signal achievements were his reports on the public credit and on manufactures, which Congress had asked him to prepare. The former, which he submitted on January 14, 1790, recommended that the foreign and domestic debt of the United States be paid off at full value, rather than at the depreciated levels at which the notes were then trading. As well, the United States would assume the states’ outstanding debts. The entirety would be funded at par by newly-issued bonds paying 6% interest. Import duties and excise taxes imposed under Congress’s new taxing power would provide the source to pay the interest and principal. Congress narrowly approved Hamilton’s proposal after he struck a deal with Jefferson that would place the new national capital in the South in 1800. The foreign debt was paid off in 1795 and the domestic debt forty years later.

The plan also established the Bank of the United States, modeled broadly on the Bank of England and the abortive Bank of North America, a venture by Robert Morris and Hamilton under the Articles of Confederation. Among other functions, the Bank would curb monetary excesses and protect American credit. Congress approved the Bank Bill in February, 1791. Hamilton’s recommendations in his Report on Manufactures, presented at the end of 1791, were not accepted by Congress. They eventually became the foundation for protectionist policies in favor of nascent domestic industries in the nineteenth century.

Washington’s last contribution to American constitutional development was his refusal to serve more than two terms. He had agreed only reluctantly even to that second term. His retirement was not the first time he had left office voluntarily even though he had sufficient standing to retain power. Years earlier, he had surrendered his command of the Army to Congress at the end of the Revolutionary War. The Constitution was silent on presidential term limits. Indeed, Hamilton had argued against them in The Federalist. By leaving the Presidency after eight years, Washington established the two-term custom that was not broken until Franklin Roosevelt won a third term in the 1940 election. Fear of such “third-termites,” made worse by FDR’s election to a fourth term, soon produced the 22nd Amendment, which formalized the two-term custom.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.

Guest Essayist: James D. Best
Signing of the Constitution - Independence Hall in Philadelphia on September 17, 1787, painting by Howard Chandler Christy, on display in the east grand stairway, House wing, United States Capitol.

James Madison took extensive notes during the Constitutional Convention. Monday, September 17, 1787, would be his last entry because the signing ceremony would be the final act of the convention. Many had doubted that the day would ever arrive.

For four months, delegates had been locked in a hot, closed room full of sweaty, overdressed men. Swarms of horseflies frequently added to the discomfort and heightened disputes. Acrimonious would be a polite description of the proceedings. Despite the hardships and ill-temper, the delegates stayed and stayed until they eventually hammered out a compromise they could accept.

A reverential spirit suffused the assembly that Monday. The chamber remained hushed as the secretary read the engrossed constitution in its entirety. At the conclusion, Franklin gave a short speech before declaring, “I move the constitution signed.”

Washington formally called on the delegates to sign the Constitution. For this momentous occasion, the secretary had set out the Syng inkstand used to sign the Declaration of Independence. Washington walked around the green baize-covered table to sign first. He then called the states from north to south. The delegates remained silent and respectful as they approached the low dais to apply their signatures to a document they hoped would permanently bind the country. Ratification was far from a certainty. As one of the delegates pointed out, the country’s thinking remained where it had been on May 25th, when the convention opened, while the delegates’ views had evolved through endless debates until they reached a consensus. When revealed to the general populace, the Constitution would come as a surprise.

Two Benjamin Franklin anecdotes have symbolized the ceremony for countless generations. Both are documented. The first appears in Madison’s notes and the second is described in the diary of Dr. James McHenry, one of Maryland’s delegates to the Convention.

Despite his illness, Franklin had remained standing after he signed, shaking hands with delegates and whispering an occasional aside.

While the last members were signing, Franklin raised his voice. “Gentlemen, have you observed the half sun painted on the back of the President’s chair? Artists find it difficult to distinguish a rising from a setting sun. In these many months, I have been unable to tell which it was. Now, I’m happy to exclaim that it is a rising, not a setting sun.”

Once the last signature was in place, everyone was anxious to leave the chamber that had dominated their life for so many months. Besides, one of the delegates was hosting a celebratory dinner at the City Tavern.

Because of the momentous day, an enfeebled Franklin had dispensed with the rented prisoners who normally carried him to and from the chamber. He insisted on walking out of the State House. Washington took a point position in front of Franklin, who was helped by delegates at each elbow.

As the sentries threw open the doors, the delegates were assaulted by bright sunlight and a deafening roar. Hundreds of people cheered, clapped, and whistled at the sight of General George Washington framed by the great double doors of the State House. The sentries had skipped down the three steps and joined arms to hold back the surge of people. A rambunctious session on Saturday had informed Philadelphians that the convention had concluded its business.

As Franklin followed in Washington’s footsteps, the people continued to cheer and applaud. A woman leaned in to yell, “Dr. Franklin, what is it to be? A republic or a monarchy?”

His answer came in a firm, loud voice. “A republic—if you can keep it.”

James Madison wrote, “The infant periods of most nations are buried in silence, or veiled in fable, and perhaps the world has lost little it should regret. But the origins of the American Republic contain lessons of which posterity ought not to be deprived.”

Throughout history, new nations have come into being because of conquering armies, internal rebellion, or the edict of a great power. Although the United States of America was conceived in revolt, our governing institutions were born in calm reason. Our Constitution comes from a convention and ratification process where reasoned debate eventually led to a decision by a large segment of the population to put a new government in place.

The Founding of this great nation was unique. Until 1776, with a few brief exceptions, world history was about rulers and empires. The American experiment shook the world. Not only did we break away from the biggest and most powerful empire in history, we took the musings of the brightest thinkers of the Enlightenment and implemented them. Our Founding was simultaneously an armed rebellion against tyranny, and a revolution of ideas—ideas that changed the world.

That is why we still care about America’s founding and the Framers of our Constitution.

James D. Best, author of Tempest at Dawn, a novel about the 1787 Constitutional Convention; Principled Action: Lessons From the Origins of the American Republic; and the Steve Dancy Tales

Guest Essayist: Joerg Knipprath

In perhaps its most significant legislative action, the Congress of the Articles of Confederation passed the Northwest Ordinance on July 13, 1787. This landmark law was an act of institutional strength during a period of marked institutional weakness, a reminder of a national will that had been battered by fears of disunion, and a source of constitutional principles that defined parts of the fundamental charter that would replace the Articles a year later.

The domain ceded by the British to the states under the Treaty of Paris of 1783 extended to the Mississippi River, well westward of the main area of settlement and even of the “backcountry” areas such as the Piedmont regions of Virginia and the Carolinas. The Confederation and its component states were land-rich and cash-poor. The answer would appear to be to open up this land for settlement by selling tracts to bona fide purchasers and to encourage immigration from Europe. But matters were not that easy.

For some years before independence, there had been a gradual stream of westward migration past the Allegheny and Cumberland Mountains. Shocked into action by the ferocity of Pontiac’s War, which flared even as the war with the French in North America was winding down, the British sought to end this movement. Accordingly, King George III issued the Proclamation of 1763 in October of that year, which prohibited colonial governments from granting land titles to Whites beyond the sources of rivers that flow into the Atlantic Ocean. Nor could White squatters occupy this land. The objective was to pacify the Indians, secure the existing frontier of White settlement, reduce speculation in vast tracts of land, divert immigration to British Canada, and protect British commerce and importation of British goods by a population concentrated near the coast.

While the policy initially succeeded in damping western settlement, in the longer term it alienated the Americans and helped trigger the move to independence. Ironically, during the Revolutionary War, those who actually had moved to the western settlements often considered themselves aggrieved and politically marginalized by the colonial assemblies, provincial congresses, and early state legislatures that were controlled by the eastern counties. Westerners were more likely to sit out the war, flee to the off-limits lands, or even align with the British.

Over time, the policy increasingly was ignored. Subsequent treaties moved the line of settlement westward. Squatters, land speculators and local governments evaded that revision, too. The historian Samuel Eliot Morison describes the actions of George Washington and his partner William Crawford in obtaining deeds from the colonial government of Pennsylvania to a large tract of land that lay west of the Proclamation line. In a letter to Crawford, Washington expressed his conviction that the proclamation was temporary and bound to end in a few years. “Any person therefore who neglects the present opportunity of hunting out good lands and in some measure marking … them for their own (in order to keep others from settling them) will never regain it…. The scheme [of marking the claim must be] snugly carried out by you under the pretense of hunting other game.” Washington’s secretive “scheme” was standard practice.

Washington was a comparatively minor participant. Speculators included a who’s who of colonial (and British) politicians and upper class merchants. While the British government vetoed some of the more flagrant schemes that involved many millions of acres, the practice continued under the Articles of Confederation and the Constitution of 1787. With independence a reality, Americans need no longer be influenced by British imperial policy. The new governments could accede to the popular clamor to open up the western lands.

However, three issues needed to be resolved: the conflicting state claims to western land, by having the states cede the contested areas to the Confederation; the orderly disposition of public lands, by surveying, selling, and granting legal title; and the creation of a path to statehood for this unorganized wilderness. The Articles of Confederation addressed none of these. The first was accomplished by Congress in 1779 and 1780 through resolutions urging the states to turn over such disputed land claims to the Confederation as public land. Most did. Unlike other actions by Congress under the Articles that required assent by the state legislatures, these public lands would be administered directly by the Congress. During the later debate on the Constitution of 1787, James Madison and others used Congress’s control over the western lands as an example of the dangers of unchecked unenumerated powers. This was quite in contrast to their usual complaints about the Confederation’s weakness. To be fair to Madison, he admitted that he supported what Congress had done. Congress solved the second issue on May 20, 1785, when it legislated a system of surveying the new public lands, dividing them into townships, and selling the surveyed land by public auction. The third resulted in the Northwest Ordinance.

The catalyst for this last solution was the Ohio Company, one of the land speculation syndicates. General Rufus Putnam and various New England war veterans organized the company to purchase 1.5 million acres for $1 million in depreciated Continental currency with an actual value of about one-eighth of the face amount. Even with the potential to raise money for the Confederation’s empty coffers, Congress barely met its quorum when eight states met to consider the proposal. As a condition of the deal, the Ohio Company wanted the Northwest Ordinance in order to make their land sales more attractive to investors. Rufus King and Nathan Dane of Massachusetts drafted the Ordinance. All eight states represented approved the law, with all but one of the 18 delegates in favor. Ultimately, the Ohio Company was able to raise only half the amount promised and purchased 750,000 acres. However, the Ordinance applied throughout the unorganized territory north of the Ohio River.

The Ordinance did not spring spontaneously from the effort of King and Dane. Congress in 1780 had declared in its earlier resolution that the lands ceded to the Confederation would be administered directly by the Congress with the goal that they would be “settled and formed into distinct republican states, which shall become members of the Federal Union.” Four years later, Thomas Jefferson presented a proposal to Congress, which, with some amendments, was adopted as the Land Ordinance of 1784. It provided for division of the territory into ten eventual states, the establishment of a territorial government when the population reached 20,000, and statehood when the population reached the same as that of the smallest of the original thirteen.

The Ordinance had three important components. First, of course, the statute provided for the political organization of the territory. The whole territory was divided into three “districts.” A territorial assembly would be established for a portion of the territory as soon as that area had at least 5,000 male inhabitants. Congress would appoint a governor, and a territorial court would be established. All of these officials had to meet various property requirements consisting of freehold estates between 200 and 1,000 acres. Voting, too, required ownership of an estate of at least 50 acres. Once the population reached 60,000, the area could apply to Congress for admission to statehood on equal terms with the original states. Eventually, five states (Ohio, Indiana, Illinois, Michigan, and Wisconsin) emerged from the Northwest Territory. The process of colonization and decolonization established under the Ordinance became the model followed in its general terms through the admission of Alaska and Hawaii in 1959.

Another critical feature of the Ordinance was the inclusion of an embryonic bill of rights in the first and second articles. The first protected the free exercise of religion. The second was more expansive and singled out, among others, various natural rights, such as the protection against cruel and unusual punishments, against uncompensated takings, and against retroactive interference with vested contract rights. The enumeration of specific restrictions on government power was consistent with constitutional practice at the state level. It also bolstered the demand of critics of the original Constitution of 1787 that a bill of rights be included in that document.

As a final matter, the Ordinance addressed the controversial question of slavery. Article VI both prohibited slavery itself in the territory and required that a fugitive slave escaping from one of the original states be “conveyed to the person claiming his, or her labor, or service ….” While this compromise was not ideal for Southern slave states, their delegations acquiesced because the Ordinance did not cover the territory most consequential to them, which extended westward from Virginia, North Carolina, and Georgia. The compromise also established a geographic line for the exclusion of slavery, which approach was not challenged until the debate over the admission of Missouri to statehood in 1819-1820. The eventual Missouri Compromise retained that solution, although a different geographic line was drawn. The fugitive slave provision and its successors were generally enforced until the 1830s, when the issue began to vex American politics and pit various states against each other and the federal government.

Article III of the Ordinance declared, “Religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools, and the means of education shall forever be encouraged.” This affirmation reflected republican theory of the time. John Adams would write in 1798, “Our Constitution was made only for a moral and religious People.” George Washington made a similar point in his Farewell Address on September 19, 1796, “Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports. . . . And let us with caution indulge the supposition that morality can be maintained without religion.” In that same speech, Washington tied religion and morality to human happiness and to popular and free government. Article III thus embodied a classic conception of the path to human fulfillment (“happiness”) and virtuous citizenship. Training in these necessary virtues must start early. Thus, schools were needed. Unlike for us and our modern sensibilities, there was no scruple that this would be an improper establishment of religion. The earlier Ordinance of 1785 had provided that in each surveyed township a certain area would be set aside to build schools. This article called for the spirit that would animate their physical structure.

The Confederation’s greatest achievement proved to be its last. The Northwest Ordinance had to be renewed when the Constitution of 1787 replaced the Confederation. The new Congress did so, with minor changes, in 1789, and President Washington signed the bill into law on August 7 of that year. On May 26, 1790, the Southwest Ordinance was approved to organize the territory south of the Ohio River. The terms of that statute were similar to its northern counterpart, except in the crucial matter of slavery. The Southwest Ordinance prohibited Congress from making any laws within the territory that would tend to the emancipation of slaves. This signaled Congress’s willingness to permit the “peculiar institution” to be extended into new states, if the settlers wished. Taken together with the Northwest Ordinance, the statutes set the pattern for compromise on the slavery issue that lasted until the 1850s. Intended to organize the “Old Southwest,” the Southwest Ordinance ultimately governed only Tennessee’s passage to statehood. The Northwest Ordinance affected a much larger area and lasted longer, ending with the admission of Wisconsin to the union in 1848.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.

Guest Essayist: Tony Williams

After the Revolutionary War, Americans flooded the frontier beyond the Appalachian Mountains in search of land and greater opportunity. The path for settlement was rooted in republican ideals and resulted in one of the greatest successes of the government under the Articles of Confederation.

One of the most important developments in settling the West was the states ceding their western land claims to the nation. For example, in 1781, Virginia ceded its claims to the territory north of the Ohio River to Congress, and other states quickly followed.

Thomas Jefferson drafted the Ordinance of 1784, which was considered and adopted by the Congress. The land ordinance established the principles that new states formed from the western territories would enter the Union on equal footing with the original thirteen states and would be guaranteed a republican form of government.

Jefferson included a clause that would have forever banned slavery in the western territories, but it narrowly lost by a single vote. Reflecting on its failure, Jefferson wrote a few years later: “The voice of a single individual would have prevented this abominable crime; heaven will not always be silent; the friends to the rights of human nature will in the end prevail.”

The following year, Congress adopted the Land Ordinance of 1785 which specified how the land in the Northwest Territory would be disposed of and divided as a model of orderly western settlement. The ordinance stated that the land was to be surveyed and then divided into townships and farms to shape civil society and individual land ownership. Land purchases were to be paid to the national government to provide revenue, especially to help retire the national debt. Communities would establish public schools to educate the citizens in knowledge and the virtues of republican citizenship.

In July 1787, while delegates were meeting at the Constitutional Convention in Philadelphia, representatives of several land companies lobbied the Congress in New York for land grants to settle the Northwest Territory. New England minister Manasseh Cutler of the Ohio Company and New York speculator William Duer of the Scioto Company secured large tracts of land amounting to millions of acres.

On July 13, Congress adopted the Northwest Ordinance to establish government along republican principles for the territory. The document authorized the territory to be carved into three to five states. It provided a path to statehood and reaffirmed the idea that the new states would enter the Union equally with the other states.

The process for statehood started with Congress appointing a governor and council to govern a territory until the population for the territory reached 5,000. The people could then elect a representative assembly through free and frequent elections. When the population included 60,000 settlers, the territory could adopt a constitution and apply to Congress for statehood.

The Northwest Ordinance was rooted in republican government and natural rights as the foundation for just laws. “For extending the fundamental principles of civil and religious liberty, which form the basis whereon these republics, their laws and constitutions are erected; to fix and establish those principles as the basis of all laws, constitutions, and governments,” it declared.

Another republican measure included in the ordinance was the banning of primogeniture. The document thus prevented an aristocracy of land passed through the generations of first-born sons. Instead, it supported the principle of equality.

The ordinance specifically protected several individual liberties. Religious liberty was once again explicitly protected as an essential right. All citizens were protected from civil penalties for their “mode of worship or religious sentiments.”

The rights of the accused were firmly protected, including the right to habeas corpus, trial by jury, bail, protection against cruel and unusual punishments, and due process of law. The governments were also bound to protect property rights and the right to contract.

Perhaps most significantly, Article VI of the Northwest Ordinance banned slavery in the territory. While slavery was being abolished outright or gradually in most northern states at the time, the ordinance prevented slavery from spreading in three to five new states in the Northwest. It read, “There shall be neither slavery nor involuntary servitude in the said territory.” It did, however, provide for a fugitive slave clause for the recovery of escaped slaves.

By contrast, the Southwest Ordinance of 1790 protected the expansion of slavery: “Provided always that no regulations made or to be made by Congress shall tend to emancipate Slaves.” The roots of the sectional divide over the western expansion of slavery were thus laid early in the new nation.

The Northwest Ordinance promoted education and religion as the basis of good and virtuous citizenship, which was in turn the foundation of republican self-government as the Ordinance held that “Religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.”

The ordinance promised that justice, liberality, and good faith would always be practiced with the Indians in the territory. It also promised not to take their property without consent and not to disturb them. These good intentions were rarely practiced, and several battles would be fought over the ensuing decade for control of the area.

The Northwest Ordinance of 1787 was a seminal founding document. The republican and natural rights principles of the American founding shaped the ordinance and the creation of new states in that territory. That republican vision resulted in the dynamic growth of the continental American union and empire of liberty.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.


Guest Essayist: Val Crofts

George Washington deserves to be remembered as possibly the greatest figure in American history. He led the Continental Army to victory over the British in the American Revolution against unbelievable odds. He was the only president in U.S. history to be unanimously elected. Washington served as the first president of the United States for two terms, establishing the office and its precedents and customs for all future presidents to follow. We may not even have had a United States of America without Washington’s contributions. Twice during his life, Washington achieved great accomplishments by doing something very uncharacteristic. He gave up. More specifically, he gave up power.

Washington was honored and humbled to have been commissioned as commander-in-chief of the Continental Army by the 2nd Continental Congress in June of 1775. He did not think that he was adequate for the task given to him and even tried to avoid it, but Washington’s unending commitment to duty, honor, and his country prevailed. The humble statesman reluctantly accepted the position as commander-in-chief. General Washington proved himself an inspiring leader and innovative soldier as he commanded his men throughout the remainder of the war.

When military victory was secured at Yorktown in 1781, General Washington believed that he needed to stay in charge of the army until peace was secured. He could not step down until the British army left the United States, the American Revolution was totally resolved and the new nation was firmly standing on its own, ready to take its place in the world. Only then would he feel comfortable resigning the powers given to him by Congress in 1775.

General Washington was given great powers by the 2nd Continental Congress. The civil and military control he received was similar to that of a military dictator. He could simply have seized power and ruled over the United States as an absolute ruler or an “American King.” Some even felt that this was what Washington should do to maintain stability for the new government and nation. But, like the Roman general Cincinnatus, Washington gave his power back to the people, where he felt it belonged.

Washington would have been familiar with the classical story of Cincinnatus, a retired Roman general who was recalled and given military and political power when Rome was invaded. After repelling the invasion, Cincinnatus resigned his position and returned to his retirement. Washington longed to do the same thing. Because of the similarities between the two men, Washington is sometimes referred to as the “American Cincinnatus.”

Washington actually “retired” for the first time in 1758 and returned to his Virginia plantation, Mount Vernon, to be a farmer and gentleman for the rest of his life, or so he thought. But, as tension mounted in the American colonies in the 1770s, Washington came out of retirement and attended meetings of the 1st and 2nd Continental Congresses. Because of his military experience and reputation, he was appointed commander-in-chief of the Continental Army in 1775 and served in that capacity until 1783.

In late 1783, Washington felt it was time to retire again. British soldiers had left New York, the Treaty of Paris was signed, and peace appeared to be a reality in the new nation. He was anxious to go home to Mount Vernon again and live out his days in the company of his wife, family and friends. He decided to give his powers and position back to the Congress and the people that had granted them to him in 1775.

He arrived in Annapolis, Maryland, then the capital of the United States, in December of 1783 and delivered his remarks to the assembly present at the Maryland State House. He thanked Congress for their trust in him and stated his intent to resign from the service of his nation. Washington thanked the officers who had served with him throughout the war and whom he considered members of his family. Washington recommended Congress take notice of those officers and their service to the young nation. He then prayed that God would watch over the United States and its people. Washington followed by resigning his commission and departing to spend Christmas at Mount Vernon with his family.

Rarely in history do you find someone giving up power. The more power you possess, the tougher it may be to let it go. But, when you are selfless and think of what is best for others, specifically your nation, the decision may come easily. George Washington was this rare, selfless leader who had a tremendous love for the nation he helped create. He knew that if the nation were to move forward, he needed to give his power back to the people. By doing so, he helped to finish the final act of the war and to make the American Revolution a true revolution of power from kings to the people.

A quote from King George III of Great Britain in 1797 poetically and fittingly describes the impact of Washington’s selfless act of resigning his commission. When discussing the legacy of George Washington, the king said that Washington’s actions in giving up his commission made him the “most distinguished of any man living.” Now, we can add that Washington is one of the most distinguished men in history.

Val Crofts is a Social Studies teacher from Janesville, Wisconsin. He teaches at Milton High School in Milton, Wisconsin and has been there 16 years. He teaches AP U.S. Government and Politics, U.S. History and U.S. Military History. Val has also taught for the Wisconsin Virtual School for seven years, teaching several Social Studies courses for them. Val is also a member of the U.S. Semiquincentennial Commission celebrating the 250th Anniversary of the Declaration of Independence.


Guest Essayist: Gary Porter

In an example of unrivaled statesmanship, General George Washington resigned his military commission at the State House in Annapolis, Maryland on December 23, 1783 to return to his Mount Vernon, Virginia home as a private citizen. Washington’s resignation was pivotal for American history because he willingly gave up power. He later participated in the Constitutional Convention of 1787 in Philadelphia, and was unanimously elected president of the United States in 1789. He reluctantly accepted the presidency and rejected any form of kingship. In 1797, Washington again surrendered his position, allowing a fellow American to serve as president. The example Washington set for America’s republican form of government was that of a peaceful transfer of power, a precedent the nation would need if it was to be governed by leadership and freedom rather than by dictatorship.

On December 4, 1783, George Washington said goodbye to his Generals, a poignant moment captured in a piece of iconic artwork, Washington’s Farewell to His Officers in an engraving by Phillebrown, from a painting by Alonzo Chappel. “With a heart full of love and gratitude, I now take leave of you. I most devoutly wish that your latter days may be as prosperous and happy as your former ones have been glorious and honorable.”

The General then mounted his horse and turned towards Annapolis, Maryland. There was an appointment with destiny to keep. Washington was soon to become, in the words of King George III, “the greatest character of the age.”

The General and his entourage arrived in Annapolis on December 19, 1783. The normal 4-5 day trip had taken three times as long. They were feted along the 215 miles in every town and village they entered. Banquets, toasts, cannonades and the occasional militia demonstration had become familiar. Yet, this was no time for the General’s two aides to relax. Preparations and protocols had to be completed.

Promptly at noon on December 23, 1783, the highly scripted event began. Only twenty delegates from seven states were attending the Congress, greatly outnumbered by the Maryland Assembly whose larger chamber was borrowed for the event. The low attendance in Congress was not unusual. Three years later, little had changed. In a 1786 letter to Elbridge Gerry, Delegate Rufus King complained: “We are without money or the prospect of it in the Federal Treasury; and the States, many of them, care so little about the Union, that they take no measures to keep a representation in Congress.”

Historian Thomas Fleming explains what happened next:

“Washington took a designated seat in the assembly chamber, and his two aides sat down beside him. The three soldiers wore their blue and buff Continental Army uniforms. The doors of the assembly room were opened and Maryland’s governor and the members of the state’s legislature crowded into the room, along with, in the words of one eyewitness, “the principal ladies and gentlemen of the city.”

Other ladies filled every seat in a small gallery above the chamber. The President of Congress, Thomas Mifflin of Pennsylvania, began the proceedings: “Sir, the United States in Congress assembled are prepared to receive your communications.”

“Mr. President,” Washington began,

“The great events on which my resignation depended having at length taken place (the peace treaty with England) I now have the honor of offering my sincere congratulations to Congress and of presenting myself before them to surrender into their hands the trust committed to me, and to claim the indulgence of retiring from the service of my country.”

Washington’s voice faltered, but he quickly recovered his composure and proceeded:

“Happy in the confirmation of our independence and sovereignty, and pleased with the opportunity afforded the United States of becoming a respectable nation, I resign with satisfaction the appointment I accepted with diffidence.”

He thanked the country and the army for its support and added that he hoped Congress would acknowledge the “distinguished merits” of “the gentlemen who have been attached to my person during the war” — his aides. At the reference to his aides, Washington became so emotional that he reportedly had to grip the speech with both hands to hold it steady.

He continued:

“I consider it an indispensable duty to close this last solemn act of my official life by commending the interests of our dearest country to the protection of Almighty God and those who have the superintendence of them, to his holy keeping.”

Tears streamed down the General’s ruddy cheeks.

“Having now finished the work assigned me, I retire from the great theatre of Action; and bidding an Affectionate farewell to this August body under whose orders I have so long acted, I here offer my Commission, and take my leave of all the employments of public life.”

Washington handed his commission and a copy of his remarks to President Mifflin.

John Trumbull, himself a former aide-de-camp to Washington, would memorialize this great event in a painting commissioned by Congress in 1817, General George Washington Resigning His Commission, which hangs in the United States Capitol Rotunda. Trumbull called Washington’s resignation “one of the highest moral lessons ever given to the world.”

Now unencumbered by his commission, Private Citizen George Washington, accompanied by Col. David Humphreys, literally galloped the 47 remaining miles to his beloved Mount Vernon home, arriving in time for Christmas Eve.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes periodic essays published on several different websites, and appears in period costume as James Madison, explaining to public and private school students “his” (i.e., Madison’s) role in the creation of the Bill of Rights and the Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


Guest Essayist: Gary Porter

The siege of Yorktown, Virginia, which ended with the surrender of General Charles Cornwallis to General George Washington, was the last major battle of the American Revolution. Then, in 1783, the Treaty of Paris was signed after an appeal from the British for peace, and the American Revolutionary War was over.

In 1778, a full three years before his victory at Yorktown, General George Washington wrote: “The Hand of providence has been so conspicuous in all this, that he must be worse than an infidel that lacks faith, and more than wicked, that has not gratitude enough to acknowledge his obligations.”

Washington was not lacking in gratitude; more than once God’s hand of providence had appeared to save his beleaguered army. Whether it was the sudden fog that enveloped the East River in August 1776, allowing his army to safely retreat from the Brooklyn Heights, or the “false spring” nearly two years later that tricked shad into beginning an early run up the Delaware River to Valley Forge, Washington knew whom to thank. But at least one more act of providence lay ahead. On September 5th, 1781, a French fleet appeared providentially to defeat a slightly smaller British fleet, thus preventing the rescue of General Cornwallis and his army from their fortified but surrounded position at Yorktown.

Cornwallis had received conflicting and confusing orders from his commander back in New York, General Sir Henry Clinton, but, like a good soldier, had followed them as he understood them, believing that his exposed position at Yorktown would be remedied, if necessary, by the British Fleet. It was a gamble that unfortunately did not pay off. It did not help the British that their fleet commander, Admiral Thomas Graves, proved indecisive at a critical juncture while the French fleet under Admiral Francois Joseph Paul de Grasse did not let their disadvantaged position exiting the Chesapeake Bay lead to their downfall; the French attacked aggressively and decisively.  Historians have called the Battle of the Virginia Capes the most critical naval engagement in history!  It is said to have converted “the United States” from a possibility into a certainty.

Cornwallis was embarrassed, to say the least, by being left “flying in the wind” by Graves’ defeat. So embarrassed that after the surrender of his force had been negotiated for October 19th, he cited illness and had his second-in-command, Brigadier General Charles O’Hara, surrender the sword instead. In a final attempt to humiliate Washington, O’Hara had been instructed by Cornwallis to present his sword to the French General Rochambeau. Rochambeau politely directed the British officer to Washington who, seeing this, directed his own second-in-command, General Benjamin Lincoln, to accept the surrender, payback for Lincoln’s defeat the previous year at Charleston. What games these Generals play. The painting, Surrender of Lord Cornwallis, by John Trumbull is one of the eight large, iconic paintings located in the United States Capitol Rotunda.

There would be more fighting ahead – minor skirmishes at best – but Cornwallis’ surrender “took the wind from the sails” of the British force in America. Two years would elapse before a peace treaty was finally signed in Paris on September 3, 1783, formally ending the eight-year conflict, and nearly three more months would pass before the last British troops boarded ships to leave New York on November 25th, but it was a wait worth enduring.

Nine days later on December 4, 1783, George Washington said goodbye to his Generals, a poignant moment captured in another piece of iconic artwork, Washington’s Farewell to His Officers in an engraving by Phillebrown, from a painting by Alonzo Chappel. “With a heart full of love and gratitude, I now take leave of you. I most devoutly wish that your latter days may be as prosperous and happy as your former ones have been glorious and honorable.”

The General then mounted his horse and turned towards Annapolis, Maryland.

Gary Porter  is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people.   CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text.  Gary presents talks on various Constitutional topics, writes a weekly essay: Constitutional Corner which is published on multiple websites, and hosts a weekly radio show: “We the People, the Constitution Matters” on WFYL AM1140.  Gary has also begun performing reenactments of James Madison and speaking with public and private school students about Madison’s role in the creation of the Bill of Rights and Constitution.  Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).


Guest Essayist: Joerg Knipprath

The Declaration of Independence formalized the revolutionary action of the Second Continental Congress of the thirteen states, but it did not establish a plan of government at the highest level of this American confederacy. The members of that body understood that such a task needed to be done to help their assembly move from a revolutionary body to a constitutional one. A political constitution in its elemental form merely describes a set of widely shared norms about who governs and how the governing authority is to be exercised. A collection of would-be governors becomes constitutional when a sufficiently large portion of the population at least tacitly accepts that assemblage as deserving of political obedience. Such acceptance may occur over time, even as a result of resigned sufferance. Presenting a formal plan of government to the population may consolidate that new constitutional order more quickly and smoothly.

That process was well underway at the state level before July 4, 1776. Almost all colonies had provincial congresses by the end of 1774, which, presently, assumed the functions of the previous colonial assemblies and operated without the royal governors. In 1775, the remaining three colonies, New York, Pennsylvania, and Georgia, followed suit. Although they foreswore any design for independence, as a practical matter, these bodies exercised powers of government, albeit as revolutionary entities.

In 1776, the colonies moved to formalize their de facto status as self-governing entities by adopting constitutions. New Hampshire did so by way of a rudimentary document in January, followed in March by South Carolina. A Virginia convention drawn from the House of Burgesses drafted a constitution in May and adopted it in June. Rhode Island and Connecticut simply used their royal charters, with suitable amendments to take account of their new republican status. On May 10, still two months before the Declaration of Independence, the Second Continental Congress, somewhat late to the game, resolved that the colonies should create regular governments. These steps, completed in 1777 by the rest of the states, other than Massachusetts, established them as formal political sovereignties, although their continued viability was uncertain until the British military was evicted and the Treaty of Paris was signed in 1783.

At the level of the confederacy, the Second Continental Congress continued to act as a revolutionary assembly, but took steps to establish a formal foundation for that union beyond resolutions and proclamations. A committee of 13, headed by John Dickinson of Pennsylvania, the body’s foremost constitutional lawyer, completed an initial draft in July, 1776. That draft was rejected, because many members claimed it gave too much power to Congress at the expense of the states. Although time was of the essence to set up a government to run the war effort successfully, Congress could not agree to a plan until November 15, 1777, when they voted to present the Articles of Confederation to the states for their approval.

Ten states approved in fairly short order by early 1778, two within another year. Maryland held out until March 1, 1781, just a half year before the military situation turned decisively in favor of the Americans as a result of the Battle of Yorktown. Since the Articles required unanimous consent to go into effect, this meant that the war had been conducted without a formal governmental structure. But necessity makes its own rules, and the Congress acted all along as if the Articles had been approved. Such repeated and consistent action, accepted by all parties, established a de facto constitution. While the British might demur, at some point between the approval of the Articles in Congress and Maryland’s formal acceptance, the Congress ceased to be merely a revolutionary body of delegates and became a constitutional body. Maryland’s belated action merely formalized what already existed. The Continental Congress became the Confederation Congress, although it was still referred to colloquially by its former name.

One of the persistent arguments about the Articles questions their political status. Were they a constitution of a recognized separate sovereignty, or merely a treaty among essentially independent entities? There clearly are textual indicia of each. The charter was styled “Articles of Confederation and Perpetual Union,” a phrase repeated emphatically in the document. On the other hand, Article II assured each state that it retained its “sovereignty, freedom, and independence, and every Power, Jurisdiction, and right, which is not … expressly delegated to the United States.” Moreover, Article III expressly declared that the states were severally entering into “a firm league of friendship with each other, ….”

Article I provided, “The Stile of this confederacy shall be ‘The United States of America.’” That suggests a separate political entity beyond its component parts. Yet the document had numerous references to the “united states in congress assembled,” and defined “their” actions. This, in turn, suggests that the states were united merely in an operative capacity, and that an action by Congress merely represented those states’ collective choice. Indeed, the very word “congress” is usually attached to an assemblage of independent political entities, such as the Congress of Vienna.

As an interesting note, such linguistic nods to state independence continue in some fashion under the Constitution of 1787. Federal laws are still enacted by a “Congress.” More significant, each time that the phrase “United States” appears in the Constitution, where the structure makes the singular or plural form decisive, the plural form is used. For example, Article III, section 3 declares, “Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, ….”

The government established by the Articles had the structure of a classic confederation. Theoretical sovereignty remained in the states, and practical sovereignty nearly did. The Articles were a union of states, not directly of citizens. The state legislatures, as part of the corporate state governments, rather than the people themselves or through conventions, approved the Articles. Approval had to be unanimous, in that each state had to agree. The issue of state representation proved touchy, as it would later in the Philadelphia convention that drafted the Constitution of 1787. While the larger states wanted more power, based on factors such as wealth, population, and trade, this proved to be both too difficult to calculate and unacceptable politically to the smaller states. Due to the need to get something drafted during the war crisis, the solution was to continue with the system of state equality used in the Continental Congress and to leave further refinements for later. States were authorized, however, to send between two and seven delegates who would caucus to determine their state delegation’s vote. This state equality principle was also consistent with the idea of a confederation of separate sovereignties.

The Confederation Congress had no power to act directly on individuals, but only on the states. It was commonly described as a federal head acting on the body of the states. Congress also had no enforcement powers. They could requisition, direct, plead, cajole, and admonish, but nothing more. Much depended on good faith action by state politicians or on the threat of interstate retaliation if a state failed to abide by its obligations. Of course, such retaliation, done vigorously, might be the catalyst for the very evil of disunion that the Articles were designed to prevent.

From a certain perspective, the Congress was an administrative body over the operative political units, the states, at least as far as matters internal to this confederation. This was consistent with the “dominion theory” of the British Empire that Dickinson and others had envisioned for the colonies before the Revolution, where the colonies governed themselves internally and were administered by a British governor-general who represented the interests of the empire. Thus, Congress could not tax directly. Instead, it would direct requisitions apportioned on the basis of the assessed value of occupied land in each state, which the states were obligated to collect. With funds often uncollected and states frequently in arrears, Congress had to resort to borrowing funds from foreign sources and emitting “bills of credit,” that is, paper money unbacked by gold or silver. These issues of Continental currency quickly depreciated. “Not worth a continental” became a phrase synonymous with useless. Neither could Congress regulate commerce directly, although it could oversee disputes among states over commerce and other issues, by providing a forum to resolve them. Article IX provided a complex procedure for the selection of a court to resolve such “disputes and differences … between two or more states concerning … any cause whatever.”

It was easy for critics, then and more recently, to dismiss the Articles as weak and not a true constitution of an independent sovereign. The British foreign secretary Charles James Fox sarcastically advised John Adams, then American minister to London, when the latter sought a commercial treaty with Britain after independence, that ambassadors from the states needed to be present, since the Congress would not be able to enforce its terms. Yet, a union it was in many critical ways, as was recognized in the preamble to its successor: “We, the People of the United States, in Order to form a more perfect Union, ….” The indissolubility of this union was attested to by affirmations of its perpetuity. The Articles gave the Congress power over crucial matters of war and peace, foreign relations, control of the military, coinage, and trade and other relations with the Indians. Indeed, the states were specifically prohibited from engaging in war, conducting foreign relations, or maintaining naval or regular peacetime land forces, without consent from Congress. As to congressional consent, exceptions were made if the state was actually invaded by enemies or had received information that “some nation of Indians” was preparing to invade before Congress could address the matter. A state could also fit out vessels of war, if “such state be infested by pirates,” a matter that seems almost comical to us, but was of serious concern to Americans into the early 19th century.

The controversial matter of who controlled the western lands, Congress or the states, was not addressed. Nor did Congress have any power to force states to end their conflicting claims over such lands, except to provide a forum to settle disputes if a state requested that. Instead, Congress in 1779 and 1780 passed resolutions to urge the states to turn over such disputed land claims to Congress, which most eventually did. This very issue of conflicting territorial claims caused Maryland to refuse its assent to the Articles until 1781.

Yet, it was precisely on this issue of control over the unsettled lands where Congress unexpectedly showed it could act decisively. Despite lacking clear authority to do so, the Confederation Congress passed the Land Ordinance of 1785 and the even more important Northwest Ordinance of 1787. Those statutes opened up the western lands for organized settlement, a matter that had been dear to Americans since the British Proclamation of 1763 effectively put the Trans-Allegheny west off-limits to White settlers. Ironically, during the later debate on the Constitution of 1787, James Madison, in Federalist No. 38, theatrically used these acts of strength by Congress to point to the dangers of unchecked unenumerated powers. This was quite in contrast to the usual portrait of the Confederation’s weakness that Madison and others painted. To be fair, Madison conceded that Congress could not have done otherwise.

Significant also were the bonds of interstate unity that the Articles established. Article IV provided, “The better to secure and perpetuate mutual friendship and intercourse among the people of the different states in this union, the free inhabitants of each state shall be entitled to all privileges and immunities of free citizens in the several states; ….” These rights would include free travel and the ability to engage in trade and commerce. As well, that Article required that fugitives be turned over to the authorities of the states from which they had fled, and that each state give full faith and credit to the decisions of the courts in other states. These same three clauses were brought into Article IV of the Constitution of 1787.

The Articles were doomed by their perceived structural weakness. Numerous attempts to reform them had foundered on the shoals of the required unanimity of the states for amendments. Another factor that likely caused the Philadelphia Convention of 1787 to abandon its quest merely to amend the Articles was their complexity and prolixity, with grants of power followed by exceptions, restrictions, and reservations set out in excruciating detail. The Articles’ weak form of federalism was replaced by the stronger form of the Constitution of 1787, stronger in the sense that the latter represented a more clearly distinct entity of the United States, with its republican legitimacy derived from the same source as the component states, that is, the people.

All of that acknowledged, the victor writes the history. Defenders of the Articles at the time correctly pointed out that this early constitution, drafted under intense pressure at a critical time in the country’s history and intended to deal foremost with the exigencies of war, had been remarkably successful. It was, after all, under this maligned plan that the Congress had formed commercial and military alliances, raised and disciplined a military force, and administered a huge territory, all while defeating a preeminent military and naval power to gain independence.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.


Guest Essayist: The Honorable David L. Robbins

Most students of U.S. history know about the “Shot Heard ‘Round the World” that heralded the beginning of the American Revolution on April 19, 1775. The battles of Lexington and Concord, spurred on when British soldiers tried to confiscate the arms of the American colonists, were the first shots in the Revolutionary War. These battles, which caused the British soldiers to retreat to Boston with heavy losses, were the result of unrest among the colonists over harsh treatment by the British Crown. The battles showed the growing resistance to British rule and tyranny.

These battles followed the attempts of the First Continental Congress in 1774 to avoid war with Great Britain by addressing grievances of the colonists. Following the battles of Lexington and Concord, the Second Continental Congress was convened in Philadelphia (1775-1776). Again, the goal was to avoid war, but this Congress also established the Continental Army with George Washington as its commanding general. It also drafted and adopted the Declaration of Independence on July 4, 1776. Militias were organized to provide local protection, and these citizen soldiers supported the Continental Army in the Revolutionary War. The militias were the colonial fighters at the battles of Lexington and Concord, as there was no Continental Army yet.

By the early summer of 1777 the war for American independence was in full sway and the British believed they could break the resolve of the colonies following recent colonial losses at the Battle of Quebec. The British intended to cut off the New England colonies from the other colonies. Britain’s plan included sending three military columns to converge on Albany, New York, and hand the Continental Army a resounding defeat. British General John Burgoyne’s column of soldiers had been augmented by German troops and Native American fighters. After the Battle of Quebec, the British suffered a defeat at the Battle of Bennington that saw the loss of 1,000 men, and the Native American fighters all but abandoned the British. Burgoyne’s position was difficult. He needed to either retreat to Fort Ticonderoga or advance toward Albany. He decided to move down the Hudson River toward Albany.

With 6,500 fighters, Burgoyne positioned his men near Saratoga, at Freeman’s Farm, owned by a Loyalist. The American force, the Continental Army led by General Horatio Gates along with militia, numbered over 12,000 men. The first engagement of the Battle of Saratoga came on September 19, 1777 and lasted for several hours, with Burgoyne gaining a small tactical advantage against the much larger forces of General Gates and the militia. However, Burgoyne suffered significant casualties. For two weeks the British regrouped.

With the tactical advantage, Burgoyne attacked the American forces on October 7 in what is called the Battle of Bemis Heights; this was the second Battle of Saratoga. The Americans captured a portion of the British defenses and Burgoyne was forced to retreat. On October 17, with his troops surrounded and vastly outmanned, Burgoyne surrendered. General Gates accepted his surrender in a respectful manner, allowing most of the British soldiers to return to Great Britain. One American that factored into the success of this battle was Benedict Arnold. He would later become disenchanted with the American military, enter into secret negotiations with the British and narrowly escape capture by the Americans, fleeing to England.

The final battle of Saratoga was a major defeat for the British, and word of the British surrender further rallied troops in the Continental Army and the militias. Although the end of the war and full British surrender were years off, the Battle of Saratoga was a major turning point in the Revolutionary War. It brought the French to fully support the fledgling nation with desperately needed military aid, along with aid from Spain and other countries. Without such support the new nation might have failed to materialize. This battle and others emphasized the important role the militia would continue to play in America’s war for independence.

Almost ten years after the Battle of Saratoga, the American experiment was still in its infancy. After an attempt to bring the former colonies (now called states) together via the Articles of Confederation, it was determined that a more formal federal government was necessary. This brought about the Constitution of the United States. Strong memory of the excesses and tyranny of the British government pushed the framers to place limits on this new government in the Constitution and in some of its first ten amendments. The first ten amendments also guaranteed certain rights to the people and are referred to as the Bill of Rights. These rights include freedom of speech, freedom of religion, freedom of the press, the right to petition the government for redress of grievances, the right to bear arms, and rights of privacy and private property.

The founders of the United States recognized the need for a formal federal government, but also realized a federal government without limits could again grow to abuse and suppress its citizens. One of the means Great Britain used to control the colonies and prevent rebellion was the seizure of the people’s arms. This was the purpose of the British advance on Concord: to remove the weapons and powder stored there. The Second Amendment is an acknowledgement of the role played by the militias in gaining American independence and a guarantee that citizens could protect themselves from the kind of government overreach the colonies experienced from Great Britain.

David L. Robbins serves as Public Education Commissioner in New Mexico.


Guest Essayist: Tony Williams

In an 1825 letter to Henry Lee, Thomas Jefferson reflected on the making of the Declaration of Independence and its principles. Jefferson admitted that the Declaration was intended to be “an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion. All its authority rests then on the harmonizing sentiments of the day, whether expressed in conversation, in letters, [and] printed essays.”

The “harmonizing sentiments” of the American mind were present during the debate over British tyranny and taxes in the 1760s and 1770s. The American colonists drew on ancient history and philosophy, the English constitutional tradition, Protestant Christianity, and the Enlightenment ideas especially of John Locke in asserting their rights. They claimed the traditional rights of Englishmen and more importantly their inalienable natural rights and the republican ideal of governing themselves by their own consent.

In the wake of the Boston Tea Party and punitive parliamentary Coercive Acts, the Continental Congress met in 1774 as an expression of American unity. The delegates penned a declaration of rights that defended their natural rights and republican ideals. “That they are entitled to life, liberty, & property, and they have never ceded to any sovereign power whatever, a right to dispose of either without their consent.”

This natural rights republicanism continued to shape American thinking and the debate about independence. For example, a young Alexander Hamilton wrote in The Farmer Refuted, “The sacred rights of mankind are not to be rummaged for, among old parchments, or musty records. They are written, as with a sun beam, in the whole volume of human nature, by the hand of divinity itself; and can never be erased or obscured by mortal power.”

In July 1775, Jefferson helped to draft the Declaration of the Causes and Necessity of Taking Up Arms. He wrote, “The arms we have been compelled by our enemies to assume, we will, in defiance of every hazard, with unabating firmness and perseverence, employ for the preservation of our liberties; being with one mind resolved to die freemen rather than to live slaves.”

In 1776, Thomas Paine electrified the colonies with the best-selling pamphlet Common Sense, which firmly put republican principles and American independence at the center of the debate. Paine wrote that the rule of law, rather than the arbitrary will of a monarch, was the basis of guarding essential liberties: “LAW IS KING.” The purpose of the new government, he wrote, would be to protect liberty, property, and religious freedom.

The Continental Congress took up the question of independence that spring. On May 10, it adopted a resolution for the representative colonial assemblies and conventions of the people to “adopt such government as shall, in the opinion of the representatives of the people, best conduce to the happiness and safety of their constituents in particular and America in general.”

Five days later, Adams added his own even more radical preamble expressing republican principles. “It is necessary that the exercise of every kind of authority under the said Crown should be totally suppressed, and all the powers of government exerted under the authority of the people of the colonies, for the preservation of internal peace, virtue, and good order, as well as for the defense of their lives, liberties, and properties.” This bold declaration was essentially a break from British authority and declaration of American sovereignty and liberties. He wrote excitedly to Abigail that this measure was “independence itself.”

On June 7, Richard Henry Lee rose in Congress and offered a resolution for independence. “That these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved.” Congress appointed a committee to draft a Declaration of Independence while states such as Virginia wrote constitutions and their own declarations of rights.

On June 12, the Virginia Convention published the Virginia Declaration of Rights that asserted the Lockean idea of the rights of nature and maintained that the purpose of government was to protect those liberties. It read: “That all men are by nature equally free and independent and have certain inherent rights… cannot by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.”

The committee selected Jefferson to draft the Declaration of Independence because he was well-known for the elegance of his pen. In 1774, Jefferson had written the influential A Summary View of the Rights of British America. In that pamphlet, he described the natural rights basis of consensual republican government. The American colonists were “a free people claiming their rights, as derived from the laws of nature, and not as the gift of their chief magistrate.” The colonists argued for the “rights which God and the laws have given equally and independently to all.” He concluded with a reflection on rights embedded in human nature: “The God who gave us life gave us liberty at the same time; the hand of force may destroy, but cannot disjoin them.”

When Jefferson sat down to compose the Declaration, he probably did not have a copy of Locke’s Second Treatise of Government. However, he knew the ideas of that book well and had a copy of the Virginia Declaration of Rights, which had been printed in the Pennsylvania Gazette on June 12. Benjamin Franklin and John Adams edited the document lightly and submitted it to Congress.

On July 1, John Dickinson and Adams engaged in an epic debate over whether America should declare its independence. The next day, Congress voted for independence by passing Lee’s resolution. Adams wrote to his wife, Abigail, that, “The Second Day of July will be the most memorable Epocha, in the history of America….It ought to be solemnized with Pomp and Parade, with Shews, Games, Sports, Guns, Bells, Bonfires, and Illuminations from one End of this Continent to the other from this Time forward forever more.”

The Congress then considered and edited the document, much to Jefferson’s chagrin. It adopted the Declaration of Independence on July 4 and enunciated the natural rights principles of the American republic.

The Declaration claimed that the natural rights of all human beings were self-evident truths that were axiomatic and did not need to be proven. They were equally “endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

The equality of human beings meant that they were equal in giving consent to their representatives in a republic to govern. All authority flowed from the sovereign people equally. The purpose of that government was to protect the rights of the people. “That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed.” The people had the right to overthrow a government that violated the people’s rights with a long train of abuses.

The American constitutional regime would provide the framework—or “picture of silver” for the “apple of gold” (the Declaration) in Abraham Lincoln’s immortal phrase—for creating a lasting republic and “more perfect Union” to preserve those natural rights and liberties in 1787.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. He is currently writing a book on the Declaration of Independence.


Guest Essayist: Joerg Knipprath
Signing of the Declaration of Independence by John Trumbull, displayed in the United States Capitol Rotunda.

The adoption of the Declaration of Independence of “the thirteen united States of America” on July 4, 1776, formally ended a process that had been set in motion almost as soon as colonies were established in what became British North America. The early settlers, once separated physically from the British Isles by an immense ocean, in due course began to separate themselves politically, as well. Barely a decade after Jamestown was founded, the Virginia Company in 1619 acceded to the demands of the residents to form a local assembly, the House of Burgesses, which, together with a governor and council, would oversee local affairs. This arrangement eventually was recognized by the crown after the colony passed from the insolvent Virginia Company to become part of the royal domain. This structure then became the model of colonial government followed in all other colonies.

As the number and size of the colonies grew, the Crown sought to increase its control and draw them closer to England. However, those efforts were sporadic and of limited success during most of the 17th century, due to the isolation and the economic and political insignificance of the colonies, the power struggles between the King and Parliament, and the constitutional chaos caused in turn by the English Civil War, the Cromwell Protectorate, the Restoration, and the Glorious Revolution. There was, then, a period of benign neglect under which the colonies controlled their own affairs independent of British interference, save the inevitable local tussles between the assemblies and the royal governors jockeying for political position. Still, the increasingly imperial objectives of the British government and expansion of British control over disconnected territories eventually convinced the British of the need for more centralized policy.

This change was reflected in North America by a process of subordinating the earlier charter- or covenant-based colonial governments to more direct royal control, one example being the consolidation in the 1680s of the New England colonies, plus New York and New Jersey into the Dominion of New England. While the Dominion itself was short-lived, and some of the old colonies regained charters after the Glorious Revolution, their new governments were much more tightly under the King’s influence. Governors would be appointed by the King, laws passed by local assemblies had to be reviewed and approved by royal officials such as the Board of Trade, and trade restrictions under the Navigation Acts and related laws were enforced by British customs officials stationed in the colonies. William Penn and the other proprietors retained their possessions and claims, but the King, frequently allying himself with anti-proprietor sentiments among the settlers, forced them to make political concessions that benefited the Crown.

Trade and general imperial policy were dictated by Parliament and administered from London. Still, the colonial assemblies retained significant local control and, particularly in the decades between 1720 and 1760, took charge of colonial finance through taxation and appropriations and appointment of finance officers to administer the expenditure of funds. While direction of Indian policy, local defense, and intercolonial relations belonged to the Crown, in fact even these matters were left largely to local governments. The Crown’s interests were represented in the person of the royal governor. However strong the political position of those governors was in theory, in practice they were quite dependent on the colonial assemblies for financial support. The overall division of political authority between the colonial governments and the British government in London was not unlike the federal structure that the Americans adopted to define the state-nation relationship after independence.

A critical change occurred with the vast expansion of British control over North America and other possessions in the wake of the Seven Years’ War (the French and Indian War) in 1763. Britain was heavily indebted from the war, and its citizens labored under significant taxes. Thus, the government saw the lightly-taxed colonials as the obvious source of revenue to contribute to the cost of stationing a projected 10,000 troops to defend North America from hostilities from Indian tribes and from French or Spanish forces. Parliament’s actions to impose taxes and, after colonial protests, abandon those taxes, only to enact new ones, both emboldened and infuriated the Americans. This friction led to increasingly vigorous protests by various local and provincial entities and to “congresses” of the colonies that drew them into closer union a decade before the formal break. Colonials organized as the Sons of Liberty and similar grass-roots radicals destroyed British property and attacked royal officials, sometimes in brutal fashion. At the same time, British tactics against the Americans became more repressive, in ways economic, political, and, ultimately, military. That cycle began to feed on itself in a chain reaction that, by the early 1770s, was destined to lead to a break.

The progression from the protests of the Stamp Act Congress in 1765, to the Declaration of Resolves of the First Continental Congress and subsequent formation of the Continental Association to administer a collective boycott against importation of British goods in 1774, to the Declaration of the Causes and Necessity of Taking Up Arms issued by the Second Continental Congress in 1775, to the Declaration of Independence of 1776, shows a gradual but pronounced evolution of militancy in the Americans’ position. Protestations of loyalty to King and country and disavowal of a goal of independence were still common, but were accompanied by increasingly urgent promises of resistance to “unconstitutional” Parliamentary acts. American political leaders and polemicists advocated a theory of empire in which the local assemblies, along with a general governing body of the united colonies, would control internal affairs and taxation, subject only to the King’s assent. This “dominion theory” significantly reduced the role of Parliament, which would be limited to control of external commerce and foreign affairs. It was analogous to the status of Scotland within the realm, but was based on the constitutional argument that the colonies were in the King’s dominion, having emerged as crown colonies from the embryonic status of their founding as covenant, corporate, or proprietary colonies. Had the British government embraced such a constitutional change, as Edmund Burke and some other members urged Parliament to do, the resulting “British Commonwealth” status likely would have delayed independence until the next century, at least.

In early 1776, sentiment among Americans shifted decisively in the direction of the radicals. Continued military hostilities, the raising of American troops, the final organization of functioning governments at all levels, the realization, reflected in the withdrawal of British protection under the Prohibitory Act of 1775, that the British viewed them as a hostile population, and Thomas Paine’s short polemic Common Sense opened the eyes of a critical mass of Americans. They were independent already, in everything but name and military reality. Achieving those final steps now became a pressing, yet difficult, task.

The Declaration was the work of a committee composed of Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston. They were appointed on June 11, 1776, in response to a resolution introduced four days earlier by Richard Henry Lee, acting on the instruction of Virginia. Jefferson prepared the first draft, while Franklin and the others edited that effort to alter or remove some of the more inflammatory and domestically divisive language, especially regarding slavery. They completed their work by June 28 and presented it to Congress. On July 2, Congress debated Lee’s resolution on independence. The result was no foregone conclusion. Pennsylvania’s John Dickinson and Robert Morris, both of whom had long urged caution and conciliation, agreed to stay away so that the Pennsylvania delegation could vote for independence. The Delaware delegation was deadlocked until Caesar Rodney made a late appearance in favor. The South Carolina delegation, representing the tidewater-based political minority that controlled the state, was persuaded to agree. The New York delegates abstained until the end. Two days later, the Declaration itself was adopted. It was proclaimed publicly on July 8, ordered engrossed on July 19, and signed by most delegates on August 2.

Jefferson claimed that he did not rely on any book or pamphlet to write the Declaration. Yet the bill of particulars in the Declaration that accused King George of numerous perfidies is taken wholesale, and frequently verbatim, from Chapter II of the Virginia Declaration of Rights and Constitution proposed by a convention on May 6, 1776, and approved in two phases in June. Moreover, Jefferson’s Declaration clearly exposes its roots in John Locke’s Second Treatise of Government. It would be astounding if Jefferson, a Virginian deeply involved in the state’s affairs, was unaware of such a momentous event or was oblivious to the influence of Locke on the many debates and publications of his contemporaries.

Three fundamental ideas coalesced in the Declaration: 17th-century social compact and consent of the governed as the ethical basis of the state, a right of revolution if the government violates the powers it holds in trust for the people, and classic natural law/natural rights as the divinely-ordained origin of rights inherent in all humans. The fusion of these different strands of political philosophy showed the progression of ideas that had matured over the preceding decade from the at-times simplistic slogans about the ancient rights of Englishmen rooted in the king’s concessions to the nobles in Magna Charta and from the incendiary proclamations by the Sons of Liberty and other provocateurs.

The structure was that of a legal brief. The King was in the dock as an accused usurper, and he and the jury of mankind were about to hear the charges and the proposed remedy. At the heart of the case against the King were some fundamental propositions, “self-evident truths”: Mankind is created equal; certain rights are “unalienable” and come from God, not some earthly king or parliament; governments “derive their just powers from the consent of the governed” and exist to secure those rights; and, borrowing heavily from Locke, there exists a residual recourse to revolution against a “long train of abuses and usurpations.”

Once the legal basis of the complaint was set, supporting facts were needed. Jefferson’s list is emotional and provocative. As with any legal brief, it is also far from impartial or nuanced. Some of the nearly thirty accusations seem rather quaint and technical for a “tyrant,” such as having required legislative bodies to sit “at places unusual, uncomfortable, and distant from the depository of their public Records.” Others do not strike us as harshly today as they might have struck Americans at the time, such as King George having “endeavoured to prevent the population of these States; for that purpose obstructing the Laws for Naturalization of Foreigners; refusing to pass others to encourage their migrations hither.” At least one other, describing the warfare by “the merciless Indian Savages,” sounds politically incorrect to the more sensitive among our modern ears.

The vituperative tone of these accusations is striking and results in a gross caricature of the monarch. But this was a critical part of the Declaration. Having already, through prior proclamations and resolves, brushed aside Parliament’s legitimacy to control their affairs, the Americans needed to do the same to the King’s authority. King George was young, energetic, and politically involved; he had a handsome family and was generally popular with the British people. Many Americans, too, had favored him, based on their opinion, right or wrong, that he had been responsible for Parliament’s repeal of various unpopular laws, such as the Stamp Act. Moreover, as Hamilton remarked later at the constitutional convention in Philadelphia, the King was bound up in his person with the Nation, so it was emotionally difficult for many people to sever that common identity between themselves and the monarch. To “dissolve the political bands” finally, it would no longer suffice to blame various lords and ministers for the situation; the King himself must be made the villain.

Before the ultimate and extraordinary remedy of independence could be justified, it had to be shown, of course, that more ordinary relief had proved unavailing. Jefferson mentions numerous unsuccessful warnings, explanations, and appeals to the British government and “our British brethren.” Those having proved ineffective, only one path forward remained: “We, therefore, the Representatives of the united States of America … declare, That these United Colonies are … Independent States.”

The Declaration was a manifesto for change, not a plan of government. That second development, moving from a revolutionary to a constitutional system, would have to await the adoption of the Articles of Confederation and, eventually, the Constitution of 1787. True, since the early days of the Republic, various advocates of causes such as the abolition of slavery have held up the Declaration’s principles of liberty and equality as infusing the “spirit” of the Constitution. But this has always been more a projection by those advocates of their own fervent wishes than a measure of what most Americans in 1776 actually believed.

Being “created equal” was a political idea in that there would be no hereditary monarchy or aristocracy in a republic based on consent. It was also a religious idea, in that all were equal before God. It did not mean, however, that people were equal “in their possessions, their opinions, and their passions,” as James Madison would mockingly write in The Federalist No. 10. He and Jefferson, along with most others, were convinced that, if people were left to their own devices, the natural inequality among mankind would sort things out socially, politically, and economically. Even less did such formal equality call for affirmative action by government to cure inequality of condition. It was, after all, as Madison explained in that same essay, “a rage for paper money, for an abolition of debts, for an equal division of property” that were the “improper and wicked project[s]” against which the councils of government must be secured.

In the specific context of slavery, the Declaration trod carefully. Jefferson’s criticism of the British negation of colonial anti-slave trade laws in his original draft of the Declaration was quickly excised by cooler heads who did not want to stir that pot, especially since almost all of the states permitted slavery. Jefferson’s later lamentation regarding slavery that “I tremble for my country when I reflect that God is just” was a distinct minority view. Many Americans had escaped grinding poverty in Europe, had served years of indentured servitude, or lived under dangerous and hardscrabble frontier conditions. As a result, as the historian Forrest McDonald observed, few of them trembled with Jefferson. It remained for later generations and the crucible of the Civil War and Reconstruction to realize the promise of equality that the Declaration held for the opponents of slavery.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.


Guest Essayist: Jeff Truitt

For thousands of years, nations have looked to the sea as a “global commons” that provides a source of sustenance, a means to trade goods efficiently in mutually advantageous economic transactions, and a highway for the transport of armies. Since our nation’s earliest origins, the advantages of efficient commerce over the seas have contributed to our rise as an ascendant economic power, our internal freedom, and our ability to project power and stability around the globe.

In 1775, the thirteen American Colonies were under attack by hostile forces from across the Atlantic Ocean. The Navy celebrates October 13, 1775, as the birth of the United States Navy because that is the date on which the Continental Congress officially authorized the funding of two ships to interdict British forces. However, a month earlier, General George Washington, acting unilaterally, had deployed three schooners off the coast of Massachusetts and thereby provided the colonies with their first naval forces. Over the course of the Revolutionary War, more than 50 Continental vessels harassed the British, seized munitions, supplied the Continental Army, and engaged in international commerce with European allies like France.

The greatest naval successes of the Revolutionary War were secured by privateers, though the era’s most famous naval hero, John Paul Jones, whose remains rest in the crypt beneath the Chapel at the U.S. Naval Academy, sailed under a Continental Navy commission. A privateer was a private Sailor who was granted authority by a sovereign power, through a “Letter of Marque,” to intercept civilian merchant ships belonging to an enemy power. These “prize” ships were hauled into a court which had the authority to award a share of the spoils to the privateer and the ship’s owners. Some 1,700 privateers captured more than 2,200 enemy ships during the Revolutionary War, compared to perhaps 200 ships captured by the Continental Navy.

The game-changing event of the American Revolution was the defeat of the English forces at Yorktown in 1781. This forced surrender occurred because the French fleet had defeated the English fleet at the Battle of the Chesapeake and was thereby poised to annihilate the English columns with its powerful cannon. Command of the littoral waters enabled land-based forces to prevail, a pattern repeated often throughout history.

Quality Navy ships are expensive, and by 1785 the Continental Navy had been completely disbanded. During the decade that followed without a Navy, state-sponsored pirate regimes in North Africa prevented U.S. merchant vessels from engaging in free commerce in the Mediterranean. The Naval Act of 1794 created a standing Navy, featuring the commissioning of six technologically sophisticated vessels that could engage or outrun any ship they encountered. One of them was the USS Constitution, still docked in Boston today.

After restoring freedom of navigation to the Mediterranean, the U.S. Navy prevented the invasion of New York state by the British in the War of 1812. Soon after, the U.S. Navy helped stamp out piracy on the high seas in South America, Africa and the Pacific. Between 1819 and the start of the Civil War, the U.S. Navy operated an Africa squadron which suppressed the slave trade, capturing more than 36 slave ships during this time. The U.S. Navy played a critical role in choking off supplies to the South during the Civil War, again highlighting the power of international trade to shape world events.

Interestingly, although European powers outlawed privateering in the 1856 Declaration of Paris following the Crimean War, the United States declined to join this convention because we feared that our underdog Navy might need such assistance. In the 1880s, we invested in modern steel battleships and by 1900 had built the world’s fifth-largest Navy. Privateering was outlawed for good at the Hague Conference of 1907.

Hopefully, most Americans are still aware of the critical role that the U.S. Navy played in twice defending against the German threat in as many generations, as well as in defeating Imperial Japan in 1945.

Since World War II, the United States Navy has provided a safety umbrella on the oceans around the world for international shipping.  Whether it is the trade of wheat, oil, pork, steel, timber, or finished goods, global commerce is enabled by the protection afforded by the United States Navy. While U.S. taxes support a strong Navy, safety at sea is a collective benefit enjoyed by everyone.

It is critical that we continue to support navigational rights around the world. The right of innocent passage hearkens back hundreds of years and has contributed to the economic development of millions of souls.

Today, China has built a fleet that rivals the United States fleet in size. However, China does not vocally advocate for international freedom of the seas. To the contrary, it has claimed as its private domain most of the South China Sea, an area roughly the size of the Gulf of Mexico. This zone is bordered by a number of other coastal states with superior claims, according to an international tribunal that considered the matter in exhaustive detail.

In order to guarantee international freedom and economic prosperity, it is important that the United States continue to invest in a strong Navy and to support international allies who are committed to freedom of navigation on the high seas.

Jeff Truitt serves as a Captain in the U.S. Navy Reserve. He frequently leads small group seminars at the U.S. Naval War College in operational maritime law, and previously served on active duty as a submarine officer in the Cold War.


Guest Essayist: David B. Kopel

During the first six decades of the eighteenth century, the American colonies were mostly allowed to govern themselves. In exchange, they loyally fought for Great Britain in imperial wars against the French and Spanish. But in 1763, after the British and Americans won the French and Indian War, King George III began working to eliminate American self-government. The succeeding years saw a series of political crises provoked by the king and parliament. What turned the political dispute into a war was arms confiscation at Lexington and Concord, Massachusetts, on April 19, 1775.

In 1774, the British government had realized that because armed Americans were so numerous, they could not be frightened into compliance with British demands. So in the latter months of 1774, the King and his Royal Governors in America instituted a gun control program. All firearms and ammunition imports to the American colonies were forbidden. At the governors’ command, British soldiers began raiding American armories, which stored firearms for militiamen who could not afford their own, and also held large quantities of gunpowder. Because the raids were accomplished peacefully in surprise pre-dawn maneuvers, they caused outrage, but nothing more. Both sides knew that if the British attempted to seize arms by force, the Americans would fight.

Ever since 1768, Boston had been occupied by a British army. In April 1775, a spy informed British General Gage that the Americans had secreted a large quantity of gunpowder in Concord, Massachusetts. Gage ordered his army to seize the American powder. This time, the Americans found out in advance.

On the night of April 18, 1775, British warships conveyed Redcoats across Boston Harbor, so they could march to Concord. Meanwhile, Paul Revere and William Dawes rode from town to town, shouting the warning “The British are coming.” The alarm was spread far and wide by the ringing of church bells and firing of guns.

To get to Concord, the British would have to march through Lexington; while the men of Lexington prepared to meet the British, the women of Lexington assembled ammunition cartridges late into the night.

The American Revolution began at dawn on April 19, 1775, when 700 Redcoats commanded by Major John Pitcairn confronted 200 Lexington militia on the town green. The militiamen, consisting of almost all able-bodied men sixteen to sixty, supplied their own firearms, although a few poor men had to borrow a gun.

“Disperse you Rebels—Damn you, throw down your Arms and disperse!” ordered Major Pitcairn. American folklore remembers the perhaps apocryphal words of militia commander Captain John Parker: “Don’t fire unless fired upon! But if they want to have a war, let it begin here!” The American policy was to put the onus of firing first on the British. Yet someone pulled a trigger, and although the gun did not go off, the sight of the powder flash in the firing pan instantly prompted the Redcoats to mass fire. The Americans were quickly routed.

With a “huzzah” of victory, the Redcoats marched on to Concord. By one account, the first man in Concord to assemble after the sounding of the alarm was the Reverend William Emerson, gun in hand.

At Concord’s North Bridge, the town militia met with some of the British army, and after a battle of two or three minutes, drove off the Redcoats. As the Reverend’s grandson, poet Ralph Waldo Emerson, later recounted in the “Concord Hymn”:

By the rude bridge that arched the flood,
Their flag to April’s breeze unfurled,
Here once the embattled farmers stood,
And fired the shot heard round the world.

Notwithstanding the setback at the bridge, the Redcoats had sufficient force to search the town for arms and ammunition. But the main powder stores at Concord had been hauled to safety before the British arrived.

Having failed to get the gunpowder, the British began to withdraw to Boston. On the way, things got much worse for them as armed Americans swarmed in from nearby towns. Soon the Americans outnumbered the British two-to-one.

Some armed American women fought in the battle. So did men of color, including David Lamson, leading a group of elderly men who, like him, were too old to be in the militia, but intended to fight anyway.

Although some Americans cohered in militia units, many just fought on their own, taking sniper positions wherever the opportunity presented itself.

Rather than fight in open fields, like European soldiers, the Americans hid behind natural barriers, fired from ambush positions, and harried the Redcoats all the way back to Boston.

One British officer complained that the Americans acted like “rascals” and fought as “concealed villains” with “the cowardly disposition . . . to murder us all.” Another officer reported: “These fellows were generally good marksmen, and many of them used long guns made for Duck-Shooting.”

The British expedition was nearly wiped out. It was saved from annihilation by reinforcements from Boston—and by the fact that the Americans started running out of ammunition and gunpowder.

British Lieutenant-General Hugh Percy, who had led the rescue of the beleaguered expeditionary force, recounted:

“Whoever looks upon them as an irregular mob, will find himself much mistaken. They have men amongst them who know very well what they are about, having been employed as Rangers [against] the Indians & Canadians, & this country being much [covered with] wood, and hilly, is very advantageous for their method of fighting. Nor are several of their men void of a spirit of enthusiasm, as we experienced yesterday, for many of them concealed themselves in houses, & advanced within [ten yards] to fire at me & other officers, tho’ they were morally certain of being put to death themselves in an instant.”

At day’s end, there were 50 Americans killed, 39 wounded, and 5 missing. Among the British 65 were killed, 180 wounded, and 27 missing. On a per-shot basis, the Americans inflicted higher casualties than the British regulars.

That night, the Americans began laying siege to Boston where General Gage’s standing army was located. Soon, the British would begin confiscating guns in Boston. Reinforced by volunteers from other colonies, and commanded by General George Washington, the American forces would maintain the siege of Boston until the British gave up and sailed away on March 17, 1776.

Further reading: David B. Kopel, How the British Gun Control Program Precipitated the American Revolution, 38 Charleston Law Review 283 (2012).

David B. Kopel is adjunct professor of constitutional law at the University of Denver, Sturm College of Law.


Guest Essayist: Craig Bruce Smith

In a wooded clearing overlooking an imposing rock formation, roughly sixty-five miles outside modern day Pittsburgh, the face of North America would be irreparably altered. On May 28, 1754 this spot witnessed the first shot of the French and Indian War (or the Seven Years’ War around the world). The shot was fired under the order, or possibly even by the hand, of a twenty-two-year-old Virginian militia officer named George Washington. At the break of dawn and under the cover of the forest, British, French, and Native forces engaged in this brief (but globally impactful) battle that escalated the long-simmering tension over the contested lands of the Ohio Valley into a world war felt on five continents.

For generations there had been a tenuous stalemate in the territory west of the Appalachian Mountains and east of the Mississippi River between the French, British, and various Native American nations. Thinly settled by European colonists, there was a lack of clear authority or the means to impose it. It was a situation that allowed the Natives to pit the two colonial powers against each other. But as these European empires attempted to expand, this balance was shattered. All accused the others of encroaching upon their lands and sovereignty.

By 1753, the French began building a series of fortifications in the Ohio Valley. That same year, Virginia Lt. Governor Robert Dinwiddie tasked surveyor-turned-newly-appointed-militia-major George Washington (who spoke no French, contrary to the expectations of the eighteenth-century British gentleman) with carrying a message to the French commander, Captain Jacques Legardeur de Saint-Pierre, to withdraw from the contested lands. No retreat followed.

The prize of the region was the coveted strategic position at the intersection of the Ohio, Monongahela, and Allegheny Rivers. The British had previously established a small outpost, named Fort Prince George (or Trent’s Fort after Pennsylvania trader William Trent), to control trade and stake their own claim. On April 17, 1754, a sizeable French force under Captain Claude-Pierre Pécaudy, sieur de Contrecœur, drove the tiny overmatched garrison at Trent’s Fort under Ensign Edward Ward to surrender without a shot being fired. In its place rose Fort Duquesne (today’s Pittsburgh): a symbol of French authority that challenged not only the British but also the Mingo people (part of the Ohio Iroquois) and their leader Tanacharison (also known as Tanaghrisson or “Half-King”).

The fall of Trent’s Fort sparked alarm in the Virginia capital of Williamsburg, and before news even reached London, the now-Lieutenant Colonel Washington and his force of 159 militiamen were marching to the frontier to combat the French threat. From the standpoint of the British and their new Native allies, it could be asserted that French incursions had initiated hostilities, but what followed would escalate the conflict into a war.[i]

After Washington and his troops reached the Great Meadows (located in present-day Farmington, PA), Silver Heels, a Native scout and warrior, reported a band of some fifty French soldiers “hidden” in a nearby encampment in a small glen surrounded by the dense wilderness. Their intentions were clearly set on ambushing Washington and his men, at least according to Tanacharison. The Mingo chief may have let personal matters influence his assessment of the situation, as he was convinced the French meant to murder him and his family. He alleged that this patrol was there “to take and kill all the English they should meet.”[ii] Washington decided to act.

Under the cover of darkness and a torrent of rain, a mixed band of forty militiamen and twelve Natives crept single file through the woods and surrounded the unsuspecting French patrol. As night turned to morning, Washington stood atop a rocky hill, looked down upon his adversaries, gave the command to fire, and personally loosed the first shot (whether as a signal or with aim is unclear).[iii] A volley immediately followed his discharge. Washington claimed the startled French, commanded by Ensign Joseph Coulon de Villiers, Sieur de Jumonville, had “discovered” them and that the initial shots were to stop their mad dash to arm themselves. The French version differed, but regardless, multiple volleys flashed on both sides. The battle (probably better described as a skirmish) lasted only about fifteen minutes and ended as quickly as it began, with the French “routed” by British bullets and at least some of the retreating men meeting “their destiny by the Indian tomahawks” wielded by Tanacharison and his warriors. Just over twenty Frenchmen survived.[iv]

Their wounded commander, Jumonville, claimed he was on a diplomatic mission, much like Washington had been in 1753. If this were true, then under the rules of war and honor the French ambassador should not have been attacked, his “character being always sacred.”[v] But this was after the fact, and there was a clear communication problem between the two leaders: Jumonville spoke French and Washington understood only English. Tanacharison, having dealt with each colonial power, was fluent in both. Before Washington could make sense of what was happening, Tanacharison buried his tomahawk into Jumonville’s head, killing him on the spot. Removing his embedded hatchet, the Mingo leader turned to French officer Michel Pepin dit La Force and taunted him: “now I will let you see that the Six Nations [of Iroquois] can kill as well as the French.”[vi] As the chief raised his blood-drenched blade, the terrified La Force hid behind an undoubtedly shocked Washington, who intervened, saved the man’s life, and stopped any further slaughter.

Why had Tanacharison acted this way? Perhaps it was to escalate the conflict to a full-fledged war. Or perhaps it was to defend himself, his family, and his people from what he perceived as French aggression. Despite Tanacharison’s vehement assertions that the French “intentions were evil,” the affair ensured that Washington “never” again dealt with these Native allies or their leader.[vii]

Though the Virginian officer had not ordered the deathblow, his sense of honor, the possibility that the diplomatic mission was genuine, and the lack of quarter given to the wounded Jumonville likely troubled him deeply, and he feared the implications. Washington’s account of the incident to Dinwiddie glossed over it, saying, “amongst those that were killed was Monsieur De Jumonville the Commander.”[viii] Washington, who was in command, omitted Tanacharison’s execution of Jumonville, perhaps because it would have reflected a lack of control or authority on the part of the novice Virginian officer.

Still, the full magnitude of this event would not be felt until a few months later, in early July, when the again-promoted Col. Washington’s Virginia regiment (joined by Captain James Mackay’s South Carolina Independent Company) was besieged inside the wooden palisades of Fort Necessity on the nearby Great Meadows by a vastly superior French force. At the head of the 600 attackers was Captain Louis Coulon de Villiers, Jumonville’s older brother, who was tasked with seeking reprisals for the Battle of Jumonville Glen. Washington and Mackay were forced to surrender. Again plagued by his lack of French, Washington signed the Articles of Capitulation, improperly translated by Jacob Van Braam, his former fencing master, who possessed a limited grasp of French himself. He thought the document said “death” or “killing of,” which was technically accurate, but it actually declared that he had assassinated Jumonville, who was on a diplomatic mission.[ix] This was considered a violation of one of the technically inviolable rules of war. But the grievous language would only be revealed after the French published the document—shaming Washington and Britain before the world.

The young Virginian attempted to defend his honor by refusing to accept this version of events and continued to insist, to both himself and the world, that Jumonville was not a diplomat, but “only a simple petty French officer; an ambassador has no need of spies.” Considering the standards of gentility of the time, Jumonville’s dress, bearing, and actions, in Washington’s estimation, precluded his being an emissary—he didn’t look the part. Rather, he argued, the French diplomatic mission was simply “A plausible pretense to discover our camp, and to obtain the knowledge of our forces and our situation!”[x]

But regardless of Washington’s justifications, his signature allowed the French to cast him as an “assassin,” and while the British disregarded the charge, it gave King Louis XV a pretext for war. Despite initially drawing harsh British criticism, Washington’s reputation would survive and thrive as a “noble” hero based on his relationship with Dinwiddie and the influential aristocratic Fairfax family. Though he never altered his story, Washington would consider the lesson throughout his life, especially during the American Revolution, when he dealt with British Major John André (a participant in Benedict Arnold’s treason) as a spy, despite André’s looking the part of a gentleman.[xi]

While the Battle of Jumonville Glen may not be considered the start of the war from the British perspective, it resulted in an expanded colonial conflict engulfing the world in violence, which then began the rift between Britain and their colonists that set the stage for the American Revolution.

Craig Bruce Smith is a historian and the author of American Honor: The Creation of the Nation’s Ideals during the Revolutionary Era. For more information visit www.craigbrucesmith.com or follow him on Twitter @craigbrucesmith. All views are that of the author and do not represent those of the Federal Government, the US Army, or Department of Defense.


[i] For an excellent overview of the French and Indian War and its early battles see the following referenced throughout this article: Fred Anderson. The War that Made America: A Short History of the French and Indian War. (New York: Viking, 2005); David Preston. Braddock’s Defeat: The Battle of the Monongahela. (New York: Oxford University Press, 2015). For a brief overview of the incident at Jumonville Glen, also referenced throughout: Joseph F. Stoltz III, “Jumonville Glen Skirmish,” Digital Encyclopedia of George Washington, https://www.mountvernon.org/library/digitalhistory/digital-encyclopedia/article/jumonville-glen-skirmish/

[ii] George Washington, “Expedition to the Ohio,” 1754, Founders Online. https://founders.archives.gov/?q=jumonville&s=1111311111&sa=&r=1&sr=

[iii] “An Ohio Iroquois Warrior’s Account of the Jumonville Affair, 1754,” in Preston, Braddock’s Defeat, Appendix E and p. 25-28.

[iv] Washington, “Expedition to the Ohio,” 1754; “An Ohio Iroquois Warrior’s Account of the Jumonville Affair, 1754”; Stoltz, “Jumonville Glen Skirmish.”

[v] Washington, “Expedition to the Ohio,” 1754.

[vi] “An Ohio Iroquois Warrior’s Account of the Jumonville Affair, 1754.”

[vii] Washington, “Expedition to the Ohio,” 1754, Founders Online.

[viii] George Washington to Robert Dinwiddie, 29 May 1754, Founders Online.

https://founders.archives.gov/?q=jumonville&s=1111311111&sa=&r=2&sr=

[ix] “Articles of Capitulation,” [3 July 1754], Founders Online. https://founders.archives.gov/?q=jumonville&s=1111311111&sa=&r=10&sr=; Paul K. Longmore. The Invention of George Washington. (Charlottesville: University of Virginia Press, 1999), p. 22-24.

[x] Washington, “Expedition to the Ohio,” 1754, Founders Online; Craig Bruce Smith, American Honor: The Creation of the Nation’s Ideals during the Revolutionary Era. (Chapel Hill: University of North Carolina Press, 2018), p. 38-40.

[xi] Smith, American Honor, p. 38-40, 160.

Guest Essayist: Joerg Knipprath

“In the name of God, amen. We whose names are under written … [h]aving undertaken for the Glory of God, and advancement of the christian [sic] faith, and the honour of our King and country, a voyage to plant the first colony in the northern parts of Virginia; do by these presents solemnly and mutually, in the presence of God and one another, covenant and combine ourselves together into a civil body politick, for our better ordering and preservation, and furtherance of the ends aforesaid: And by virtue hereof, do enact, constitute and frame such just and equal laws, ordinances, acts, constitutions and officers, from time to time, as shall be thought most meet and convenient for the general good of the colony ….”

Thus pledged 41 men on board the ship Mayflower that day, November 11, 1620, having survived a rough 64-day sea voyage, and facing an even more grueling winter and a “great sickness” like what had ravaged the Jamestown colony in Virginia. These Pilgrim Fathers had sailed to the New World with their families from exile in Leyden, Holland, with a stop in England to secure consent from the Virginia Company to settle on the latter’s territory. They were delayed by various exigencies from leaving England until the fall of 1620. The patent from the Company permitted the Pilgrims to establish a “plantation” near the mouth of today’s Hudson River, at the northern boundary of the Company’s own grant.

For whatever reason, either a major storm, as the Pilgrims claimed, or intent to avoid the reach of English creditors’ claims on indentured servants, as some historians allege, the ship ended up at Cape Cod on November 9. Bad weather and the precarious state of the passengers made further travel chancy, and the Pilgrim leaders decided to find a nearby place for settlement. Cape Cod was deemed unsuitable for human habitation. Instead, the Pilgrims disembarked on December 16 at Plymouth, so named earlier by Captain John Smith of the Virginia Company during one of his explorations. Since they were now a couple of hundred miles outside the Virginia Company’s territory, their patent was worthless. It became necessary to establish a new binding basis for government of their society.

The result was the Mayflower Compact, infused with a remarkable confluence of religious and political theory. The Pilgrims, like the Puritans who settled Massachusetts Bay in 1630, were dissenters from the Church of England. The former opted to separate themselves from what they perceived as the corruption of the Church of England, whereas the less radical nonconformists, the Puritans, sought to reform that church from within. Both groups, however, found the political and religious climate under the Stuart monarchs to be unfriendly to dissenters.

As common historical understanding has it, both groups sought to escape to the New World to practice their religion freely. However, that meant their religion. They set out to establish their vision of the City of God in an earthly commonwealth. As the Compact stated, their move was “undertaken for the Glory of God, and advancement of the christian faith.” Neither group set out to establish a classically liberal secular society tolerant of diverse faiths or even a commonwealth akin to the Dutch Republic, with an established church, yet accepting of religious dissent. The corrosive effect of such dissent would have been particularly dangerous to the survival of the small Pilgrim community clinging precariously to their isolated new home in Plymouth. Indeed, once the colony became established and became focused on commerce and trade, more devout members disturbed by this turn to the material left to form new communities of believers.

The religious orientation of the Mayflower Compact grew out of the Pilgrims’ Calvinist faith. In contrast to the Roman Catholic Church and its successor establishment in the Church of England, Calvinists rejected centralized authority with its dogmas and traditions as having erected impious barriers and distractions to a personal relationship with God. Instead, the congregation of like-minded believers gathered in community. It was a community founded on consent of the participants and given meaning by their shared religious belief. Those who rejected significant aspects of that belief would leave (or be shunned).

In Europe, those religious communities operated within–and chafed under–hostile existing political orders, most of which still were organized on principles other than consent of the participants. Once transplanted across the Atlantic Ocean, the Pilgrims were free of such restraints and could organize their religious life together with their political commonwealth within the Calvinist congregational framework. Their brethren, the Puritans of Massachusetts Bay, established their colony on the same type of religious foundation, as did a number of later communities that spread from the original settlements. The successor to the Puritans and Pilgrims was the Congregational Church, organized along those communitarian lines based on consent. That church became the de facto established church of Massachusetts Bay Colony and the state of Massachusetts under a system of state tax support, a practice that survived until 1833.

On the political side, the Mayflower Compact was one of three types of constitutions among the colonies in British North America. The others were the joint stock company or corporation model of the Virginia Company and the Massachusetts Bay Company, and the proprietary grant model, the dominant 17th-century form used for the remaining colonies, such as the grant to Lord Calvert for Maryland and William Penn for Pennsylvania. Of the three, the Mayflower Compact most profoundly and explicitly rested on the consent of the governed. It provided the model for other early American “constitutions” in New England, such as the 1636 compact among Roger Williams and his followers in founding Providence, Rhode Island, the compacts among settlers that similarly established Newport and Portsmouth in Rhode Island and the New Haven Colony in 1639, and, most significantly, the Fundamental Orders of Connecticut. The Orders, in 1639, united the Connecticut River Valley towns of Hartford, Windsor, and Wethersfield and provided a formal frame of government. Like the Mayflower Compact, the Orders rested on the consent of the people to join in community, but in their structure they closely resembled the Massachusetts Bay Company agreement.

The political analogue to the congregational organization of the Calvinist denominations was the “social compact” theory, an ethical basis for the state that also rested on the consent of the governed. Classical Greek theory had held that the polis represented a progression of human association beyond family and clan and evolved as the consummate means conducive to human flourishing. In its medieval scholastic version epitomized by the writings of Thomas Aquinas, the state was ordained by God to provide for the welfare and happiness of its people within an ordered universe governed by God’s law. By contrast, the social compact theory rested on the will of the individuals that came together to found the commonwealth. It was a rejection of the static universal political (and religious) order that had governed Western Christendom and in which one’s status and privileges depended on one’s place in that order. After the Reformation, Protestant sects had many, sometimes conflicting, assumptions about the nature and the specifics of the relationship between the believer and God. In similar manner, social compact theory was not a unified doctrine, but varied widely in its details of the relationship between the individual and the state, depending on the particular proponent.

The two social compact theorists with the greatest influence on Americans of the Revolutionary Era were Thomas Hobbes and John Locke, with the latter’s postulates the more evident among American essayists and political leaders. Locke’s reflections on religion and politics were greatly influenced by the Puritanism of his upbringing. Although the governments established under the various state constitutions, as well as those created through the Articles of Confederation and the Constitution of 1787, more closely resembled the corporate structures of the colonial joint stock company arrangements, they were formed through the direct or indirect consent of the governed. The Constitution of 1787, for example, very conspicuously required that no state would become a member of the broader “united” community without its consent. In turn, such consent had to be obtained through the most “explicit and authentic act” of the state’s people practicable under the circumstances, that is, through a state convention.

To whatever concrete extent the Mayflower Compact’s foundation on consent may have found its way into the organizing of American governments during the latter part of the 18th century, it is the Declaration of Independence that most clearly incorporates the compact’s essence. The influence of Locke and his expositors on Thomas Jefferson’s text has been analyzed long and frequently. But it is worth noting some of the language itself. The Declaration asserted that Americans were no longer connected in any bond (that is, any obligation) to the people of Britain, just as the Pilgrims, having sailed to a wilderness not under the control of the Virginia Company, believed that they were not bound by the obligations of the patent they had received. The Americans would establish a government based on the “consent of the governed,” “laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness,” just as the signatories of the Mayflower Compact had pledged.

So it came about that a brief pledge, signed by 41 men aboard a cramped vessel in 1620, “with no friends to welcome them, no inns to entertain or refresh them, no houses, or much less towns to repair unto to seek for succour,” with “a mighty ocean which they had passed…and now separate[d] them from all the civil parts of the world” behind them, and with “a hideous and desolate wilderness, full of wilde beasts and wilde men” in front of them, deeply affected the creation of the revolutionary political commonwealth founded in the New World a century and a half later.

An expert on constitutional law, and member of the Southwestern Law School faculty, Professor Joerg W. Knipprath has been interviewed by print and broadcast media on a number of related topics ranging from recent U.S. Supreme Court decisions to presidential succession. He has written opinion pieces and articles on business and securities law as well as constitutional issues, and has focused his more recent research on the effect of judicial review on the evolution of constitutional law. He has also spoken on business law and contemporary constitutional issues before professional and community forums, and serves as a Constituting America Fellow. Read more from Professor Knipprath at: http://www.tokenconservative.com/.


Guest Essayist: Gary Porter


– The yearning for self-government springs eternal –

In the first Federalist essay, Alexander Hamilton famously observed: “It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.” Reflection and choice, or accident and force: which will it be? Fortunate indeed are those who get to choose.


The Virginia colony was off to a rocky start.

As April 26, 1607 dawned, the colonists spied the coastline of Virginia. Three weeks later they came ashore 40 miles upriver at Jamestown.

After surviving a harrowing five-month voyage from England, the intrepid Virginia colonists anxiously opened the sealed envelope that would identify the seven members who were to govern them. As they read off the names, one stood out: John Smith? Whoops! John Smith was being held on board their ship, securely in chains. There had been this little “incident” mid-voyage, you see.

The exceptionally slow voyage (a normal crossing took three months) allowed disease to spring up in the cramped quarters and factions to form among the colonists. This did not escape notice of the expedition’s leader: Captain Christopher Newport. When the expedition docked at the Canary Islands to take on supplies, Smith, a swashbuckling adventurer and soldier whose life story reads like a Hollywood script, was suddenly clapped in chains by Newport, charged with trying to “usurp the government, murder the council, and make himself king (of Virginia).” He would eventually be released to assume his place on the council, but suspicions persisted.

The plan of the Virginia Company was to govern the new colony through a 13-man council in England and a similar though smaller council in Jamestown. What the planners of the expedition did not count on were the austere and hazardous conditions the adventurers would encounter: within six months, 80% of the colonists were dead from illness, the seven-man council had been reduced to four, and the President of the Council, Edward Wingfield, had been impeached for maladministration. He was the one now in chains, perhaps the same ones that had restrained John Smith. Captain John Ratcliffe replaced Wingfield as President of the Council, but Smith would soon assume de facto command of the colony.

Unwilling to simply let the colony die, Smith enacted harsh measures, akin to martial law, to ensure that “gentlemen” and commoners alike contributed equally to the raising and hunting of food. Despite his efforts, the winter of 1609-10 became known as the “Starving Time.”

In an attempt to breathe new life into the colony, by then hanging on by a thread, a new charter was granted in May 1609. The new charter included a provision that the colony would now extend from “sea to sea,” a gesture which provided no help to the beleaguered settlers. The charter established a new corporation and a new governing council in London that became the permanent administrative body of the corporation. A new governing council was created at Jamestown as well. A “Governour” was given extensive powers including the right to enforce martial law, if necessary.

By 1612, things were beginning to turn around. Numerous replenishments of supplies and manpower accompanied by a tenuous peace with the local natives had turned the settlement into a profitable and growing venture. A new, third charter was granted that year, extending Virginia’s jurisdiction eastward from the shoreline to include islands such as Bermuda. New settlers were each granted 100 acres of land.

On Friday, July 30, 1619, the newly appointed Governor, Sir George Yeardley set in motion the concept of self-government in the colony. Under instructions from the Virginia Company, he called forth the first representative legislative assembly in America, establishing “the oldest continuous law-making body in the New World,” Virginia’s House of Burgesses (today, the Virginia Assembly). The group convened in the colony’s largest building, the Jamestown Church “to establish one equal and uniform government over all Virginia” which would provide “just laws for the happy guiding and governing of the people there inhabiting.” The Governor, six men forming a Council of State, and, initially, twenty burgesses, two from each of ten settlements — “freely elected by the inhabitants thereof” — prepared to get underway.

An eleventh settlement, that of Captain John Martin, was not immediately allowed seats. A clause in Martin’s land patent exempted his plantation from the authority of the colony.[1] There would thus be little point in including him as a Burgess; any laws he participated in creating would not apply to his own settlement. A secretary (former Member of Parliament John Pory) and a clerk (John Twine) were quickly appointed to their positions. Prayer was offered by Reverend Richard Buck, that “it would please God to guide and sanctifie all our proceedings to his owne glory and the good of this Plantation.”

An oath was then administered to all present. The Oath of Supremacy, first established in 1534, required any person taking public or church office in England to swear allegiance to the English monarch as Supreme Governor of the Church of England. Roman Catholics who refused to take the oath were dealt with harshly. In April 1534, Sir Thomas More, advisor to King Henry VIII, had refused to take the oath. He was imprisoned, tried for treason, and, despite his close relationship with the King, beheaded the following year. Oaths, at least back then, were serious stuff.

The ten settlements represented that day in 1619 included “James Citty, Charles Citty, Henricus, Kiccowtan, Smythe’s Hundred, Martin’s Hundred (a different Martin than John Martin), Argall’s Guiffe, Flowerdieu Hundred, Captain Lawne’s Plantation and Captaine Warde’s Plantation.”

The lead representative of Warde’s Plantation, none other than Captain Warde himself, was immediately challenged by another Burgess as having settled in the colony without proper authority from the Company in England. But due to the great efforts Warde had made towards the colony’s success, particularly in bringing in “a good quantity of fishe,” he and his lieutenant were allowed to take their seats.

Once again, the Burgesses turned their attention to the issue of Captain John Martin’s two representatives. After a review of Martin’s patent it was decided that the two Burgesses-in-waiting should leave until such time as Captain Martin himself appeared to discuss the matter. But the assembly was not quite done with Martin. The Burgesses were next presented with a complaint that an Ensign Harrison, under Martin’s employ, had forcibly taken corn from Indians who had refused to sell to him, leaving the Indians with some “copper beades and other trucking stuffe.” The Indians had complained to Chief Opchanacanough, who had complained to Governor Yeardley. False dealing with the Indians was a serious offense; the shaky, on again, off again peace with the various Indian tribes was fragile, easily broken. It was ordered that Captain Martin appear before the Burgesses forthwith. The order to appear began: “To our very loving friend, Captain John Martin, Esquire, Master of the ordinance.” Martin’s last title in the salutation might explain the gentle tone taken.

Next, the “greate Charter, or commission of privileges, order and laws,” sent from England in four books, was presented. It was decided that two committees would be commissioned to review the first two of the books to see if they contained anything “not perfectly squaring with the state of this Colony or any lawe which did presse or binde too harde, that we might by waye of humble petition, seeke to have it redressed.” The two committees gave their reports the following day.

The Burgesses composed six petitions to send to the Council in England. The first four dealt with administrative matters; the fifth asked the Council’s permission to build “a university and colledge” in the colony. This “colledge” would eventually be named Henricus College, which today lays claim to being the oldest college in North America. Its primary purpose? To educate the natives. The sixth petition asked permission to rename the Kiccowtan settlement.[2]

The next day, Sunday, August 1, one of the Burgesses, a Mr. Shelley, died unexpectedly.

On Monday, August 2nd, the infamous Captain Martin appeared before the Burgesses. He was asked whether he would disavow the stipulation in his patent that his settlement would be exempt from the established laws. He would not. Whereupon the assembly voted that his settlement’s representatives not be admitted. As to the charge that his employees had unfairly dealt with the natives, Martin acknowledged the charges as true and said he would put up a security bond to ensure it would never happen again.

The issues with Captain Martin thus settled, the Burgesses set about making some laws (why not?).

Laws against idleness, gaming, drunkenness and “excesse in apparel” were enacted. Settlers caught gaming at “dice and Cardes,” the winners at least, would forfeit their winnings; all the players would be fined “ten shillings a man.”

Not forgetting one of the main reasons for the settlement: the “propagating of Christian Religion to such People, as yet live in Darkness and miserable Ignorance of the true Knowledge and Worship of God,”[3] each settlement was to obtain “by just means” a number of the native children who would be educated by the settlers “in true religion and civile course of life.”

Each settler was required to plant six mulberry trees each year for seven years.

On Tuesday the 3rd of August, more laws.

On Wednesday the 4th of August, with many of his assembly coming down with malaria, Governor Yeardley decided that was enough for this session of the Burgesses and adjourned this first experiment in self-government. Many challenges lay ahead. While the 1619 House of Burgesses proved a turning point in the governing structure of Virginia, it did not end the economic difficulties brought on by crop failures, war with the Indians, disputes among factions and bad investments.

For instance, after several years of strained coexistence, Chief Opchanacanough and his Powhatan Confederacy decided to eliminate the colony once and for all. On the morning of March 22, 1622, he and his men attacked the outlying plantations and communities up and down the James River in what became known as the Indian Massacre of 1622. More than 300 settlers were killed, about a third of the colony’s population. The fledgling developments at Henricus and Wolstenholme Towne were essentially wiped out. Jamestown was spared only by the timely warning of a friendly Indian. Of the 6,000 people known to have come to the settlement between 1608 and 1624, only 3,400 would survive.

In 1624, King James I finally dissolved the Virginia Company’s charter and established Virginia as a royal colony. In 1776, when the Fifth Virginia Convention declared independence from Great Britain and Virginia became an independent commonwealth, the House of Burgesses was renamed the House of Delegates, which continues to serve as the lower house of Virginia’s General Assembly to this day.

Gary Porter is Executive Director of the Constitution Leadership Initiative (CLI), a project to promote a better understanding of the U.S. Constitution by the American people. CLI provides seminars on the Constitution, including one for young people utilizing “Our Constitution Rocks” as the text. Gary presents talks on various Constitutional topics, writes a weekly essay, Constitutional Corner, which is published on multiple websites, and hosts a weekly radio show, “We the People, the Constitution Matters,” on WFYL AM1140. Gary has also begun performing reenactments of James Madison and speaking with public and private school students about Madison’s role in the creation of the Bill of Rights and Constitution. Gary can be reached at gary@constitutionleadership.org, on Facebook or Twitter (@constitutionled).

[1] Martin had been a member of the original Ruling Council; how he had received such a unique patent has not been explained.

[2] It would eventually be renamed Elizabeth City, site of present-day Hampton, Virginia.

[3] Found in the First Charter of 1606.

Guest Essayist: Tony Williams

In 1619, the Virginia House of Burgesses, the first elected legislative body in America, met in the Jamestown Church.

In the early seventeenth century, gentlemen adventurers and common tradesmen voyaged to Jamestown and established the first permanent English settlement in North America. They were free and independent Englishmen who risked their lives and fortunes to brave the dangers of the New World for personal profit and the glory of England.

The settlement was part of the grand political, economic, and religious struggle among European nations for imperial preeminence. Unlike their Spanish counterparts, who received official financial backing, these enterprising individuals created an entrepreneurial joint-stock company.

In 1606, John Smith and other wealthy adventurers and merchants organized the Virginia Company and received a royal charter to colonize the territory. They were promised the rights of Englishmen “as if they had been abiding and born within our realm of England.” The crown charged them with the religious purpose of spreading the Protestant faith to the Native Americans. While primarily interested in getting wealthy from gold and silver and the discovery of the fabled Northwest Passage to Asia, the company received rights to the commodities it found.

Almost 150 adventurers and sailors crossed the Atlantic in a harrowing voyage that took some five months. They sailed on the Susan Constant, Godspeed, and Discovery. They suffered a variety of contrary winds and storms that impeded their progress and caused tensions to escalate aboard the ships. The contentious John Smith ran afoul of the leaders of the armada and was clapped in chains and nearly hanged in the Caribbean.

The ships finally sighted Virginia and over the next few days went ashore where they erected a cross, encountered several groups of Indians who alternately attacked and traded with them, and explored the James River. On May 14, 1607, they disembarked at Jamestown because they thought it bountiful and defensible against expected Spanish attacks. The instructions from the company were opened and the appointed leaders of the colony—including John Smith—were sworn into their offices.

While they had several peaceful trading encounters with the local Indians, the settlers suffered a large, deadly attack a few weeks later and decided to build a fort. That was only the beginning of the colony’s troubles. That summer, most of the company was sickened by drinking brackish water from the tidal James. They suffered a variety of maladies including salt poisoning, typhoid fever and dysentery. The settlers were mostly too sick to work or plant food. However, the gentlemen leaders of the colony believed that the colonists were being lazy. Moreover, disputes among the councilors resulted in the imprisonment of President Edward Maria Wingfield. The colony was in chaos.

The remedy was worse than the problems the colony faced. The leaders imposed draconian laws on the settlers, and Smith forced men to work or suffer punishment. The settlers did not enjoy the rights of Englishmen they were promised. They also had very little incentive to work because they did not own land or the fruits of their labor as they toiled for the company and consumed food from the common storehouse. They also completely depended on the goodwill of the Indians for food through trade or coercion at gunpoint.

The situation over the next few years did not improve because the colony was still governed poorly and based upon the wrong incentive structure. The colonists depended upon regular resupply from England but sent scant precious metals or valuable raw materials back in return.

In 1609, the company dispatched a fleet of ships with 500 settlers and supplies led by the flagship, Sea Venture. A massive hurricane dispersed the fleet and wrecked the Sea Venture on Bermuda, carrying with it the admiral of the fleet, the colony’s newly appointed leader, Sir Thomas Gates, its instructions, and most of the supplies destined for Jamestown. The shipwrecked survivors were stranded there for nearly a year.

Meanwhile, in Jamestown, the rest of the fleet had arrived with hundreds of tempest-tossed settlers but few supplies. In addition, people tired of John Smith, and he barely survived an assassination attempt and departed the colony. With the dearth of food and the leadership vacuum, the winter of 1609-1610 became known as the “Starving Time.” Desperate colonists ate rats, dogs, and snakes, and resorted to trying to eat leather goods and even each other. The colony was hanging by a thread.

In May 1610, Gates and the Bermuda castaways finally arrived in Jamestown but quickly decided to return to England before all starved to death. As they were sailing down the James, they encountered another supply fleet bringing the new governor, Lord De La Warr, who ordered the colonists to return to Jamestown. The governor attempted to rebuild the colony through the same methods that had failed the colony to date: martial law, harsh discipline, forced work, and communal ownership.

The colony barely survived over the next few years even with the arrival of tons of supplies and additional settlers to make up for the horrific death toll. Even the planting of tobacco did not fundamentally alter the structure of the colony or facilitate lasting success as commonly assumed.

Only in 1616 and 1617 did the colony find the path to permanent success and prosperity in Jamestown. The introduction of private property gave colonists the right incentive to grow crops including food and tobacco to sustain themselves. Moreover, the company finally guaranteed the traditional rights of Englishmen rooted in the common law including liberties and trial by jury. Most importantly, in 1619, the House of Burgesses—the first representative legislature in America—was created for just laws and good government.

Jamestown began to thrive over the next few years as opportunity beckoned despite the still frighteningly high death rate from disease. Approximately 4,000 settlers migrated to Virginia for greater opportunity. Women finally arrived in large numbers to support families and a lasting colony. The first Africans arrived in 1619 and had a largely obscure status until slavery was codified over the next several decades.

The settlement of Virginia had entrepreneurial origins that developed only in fits and starts and after almost a decade of failure. The introduction of private property, freedom, self-government, and a capitalist ethos laid the foundations of a successful colony and shaped the colonists’ thinking. Those ideals rested uneasily with the development of slavery, and this contradiction of slavery and freedom would continue for more than two centuries. However, the founding ideals of America were established along the James in Virginia.

Tony Williams is a Senior Fellow at the Bill of Rights Institute and is the author of six books including Washington and Hamilton: The Alliance that Forged America with Stephen Knott. Williams is currently writing a book on the Declaration of Independence.

Guest Essayist: Wilfred M. McClay

We Americans need to know our history. And we need to know it far better than we have in the past. We are not a people bound together primarily by blood and soil. Instead we are people with our origins in many bloods and many soils, linked by shared principles embodied in shared institutions, and embedded in a shared history, with its shared triumphs and shared sufferings. There is a growing danger that we have been failing to pass along that flame to our posterity, with untold consequences. We have neglected an essential element in the formation of good citizens when we fail to provide the young with an accurate, responsible, and inspiring account of their own country – an account that will inform and deepen their sense of identification with the land they inhabit and equip them for the privileges and responsibilities of citizenship.

“Citizenship” here encompasses something larger than the civics-class meaning. It means a vivid and enduring sense of one’s full membership in one of the greatest enterprises in human history: the astonishing, perilous, and immensely consequential story of one’s own country. That’s what the study of American history should provide.

We need this knowledge for the deepest of all reasons. For the human animal, meaning is not a luxury; it is a necessity. Without it, we perish. Historical consciousness is to civilized society what memory is to individual identity. Without memory, and without the stories by which our memories are carried forward, we cannot say who, or what, we are. A culture without memory will necessarily be barbarous and easily tyrannized, even if it is technologically advanced. The incessant waves of daily events will occupy all our attention and defeat all our efforts to connect past, present, and future, thereby diverting us from an understanding of the human things that unfold in time, including the paths of our own lives. The stakes were beautifully expressed in the words of the great Jewish writer Isaac Bashevis Singer: “When a day passes it is no longer there. What remains of it? Nothing more than a story. If stories weren’t told or books weren’t written, man would live like the beasts, only for the day. The whole world, all human life, is one long story.”

Singer was right. As individuals, as communities, as countries: we are nothing more than flotsam and jetsam without the stories in which we find our lives’ meaning. These are stories of which we are already a part, whether we know it or not. They are the basis of our common life, the webs of meaning in which our shared identities are suspended. Just as we need meaning, so we need a sense of belonging. Without them we cannot flourish. The pathologies that we see creeping steadily into our national life—rising suicides, youth depression, alcoholism, drug abuse, and, astonishingly, an overall decline in life expectancy—how can these not be related to a catastrophic loss of meaning, a sense of disconnection from others and from the great story to which, by all rights, every American belongs?

I wrote the book Land of Hope to try to begin to redress this problem, to be a fresh invitation to the American story. It does not pretend to be a complete and definitive telling of that story. Such an undertaking would be impossible in any event, because the story is ongoing and far from being concluded. But what it does try to do is present the skeleton of the story, its indispensable underlying structure, in a form particularly appropriate for the education of American citizens living under a republican form of government. There are other ways of telling the story, and my own choice of emphasis should not be taken to imply that the other aspects of our history are not worth studying. On the contrary, they contain immense riches that historians have only begun to explore. But one cannot do everything all at once. One must begin at the beginning, with the most fundamental structures, before one can proceed to other topics. The skeleton is not the whole of the body – but there cannot be a functional body without it.

Permit me, in concluding, to say a word about my choice of title, Land of Hope, which forms one of the guiding and recurrent themes of the book. As the book argues from the very outset, the western hemisphere was largely inhabited by people who had come from elsewhere, unwilling to settle for the conditions into which they were born and drawn by the prospect of a new beginning, the lure of freedom, and the space to pursue their ambitions in ways their respective Old Worlds did not permit. Hope has both theological and secular meanings, spiritual ones as well as material ones. Both these sets of meanings exist in abundance in America. In fact, nothing about America better defines its distinctive character than the ubiquity of hope, a sense that the way things are initially given to us cannot be the final word about them, that we can never settle for that. Even those who are exceptions to this rule, those who were brought to America in chains, have turned out to be some of its greatest poets of hope.

Of course, hope and opportunity are not synonymous with success. Being a land of hope will also sometimes mean being a land of dashed hopes, of disappointment. That is unavoidable. A nation that professes high ideals makes itself vulnerable to searing criticism when it falls short of them – sometimes far short indeed, as America often has. We should not be surprised by that, however; nor should we be surprised to discover that many of our heroes turn out to be deeply flawed human beings. All human beings are flawed, as are all human enterprises.

What we should remember, though, is that the history of the United States includes the activity of searching self-criticism as part of its foundational makeup. There is immense hope implicit in that process, if we go about it in the right way. That means approaching the work of criticism with constructive intentions and a certain generosity that flows from the mature awareness that none of us is perfect and that we should therefore judge others as we would ourselves wish to be judged, blending justice and mercy. One of the worst sins of the present – not just ours but any present – is its tendency to condescend toward the past, which is much easier to do when one doesn’t trouble to know the full context of that past or try to grasp the nature of its challenges as they presented themselves at the time. My small book is an effort to counteract that condescension and remind us of how remarkable were the achievements of those who came before us, how much we are indebted to them.

But there is another value to the study of American history. Many Americans, including perhaps a majority of young people, believe that the present is so different from the past that the past no longer has anything to teach us. This could not be more wrong. As I say in the book’s epigraph, borrowing from the words of John Dos Passos:

In times of change and danger when there is a quicksand of fear under men’s reasoning, a sense of continuity with generations gone before can stretch like a lifeline across the scary present and get us past that idiot delusion of the exceptional Now that blocks good thinking. That is why, in times like ours, when old institutions are caving in and being replaced by new institutions not necessarily in accord with most men’s preconceived hopes, political thought has to look backwards as well as forwards.

With the grounding provided by a sense of history, we need never feel imprisoned by the “idiot delusion of the exceptional Now,” or feel alone and adrift in a world without precedents, without ancestors, without guidelines. But we cannot have that grounding unless it is passed along to us by others. We must redouble our efforts to make that past our own, and then be about the business of passing it on.

This year’s Constituting America study is going to be particularly valuable in this regard, since it revolves around the study of particular moments in the American past when something highly consequential was decided. Dates, you say? What could be more boring? Ah, but we sometimes forget, to our detriment, that nothing in history is predetermined, and no outcome is pre-assured. History can turn on a dime, in a single moment, on a single date, and that’s why dates matter.

History is all about contingency, about the way that our positive outcomes depend not only on our big ideas but on our actions, our character, our courage, our determination—and on our good fortune, on forces beyond our control that somehow have seemed to work together for our good. Some people call this “good fortune” Providence. The American Founders certainly did. See if you don’t agree that they were on to something, when you hear the stories to come. They will make you think twice when you hear about “the blessings of Liberty” which our Constitution was designed to secure.

Wilfred M. McClay is the G. T. and Libby Blankenship Chair in the History of Liberty at the University of Oklahoma, and the Director of the Center for the History of Liberty. In the 2019-20 academic year he is serving as the Ronald Reagan Professor of Public Policy at Pepperdine University’s School of Public Policy. He served from 2002 to 2013 on the National Council on the Humanities, the advisory board for the National Endowment for the Humanities, and is currently serving on the U.S. Semiquincentennial Commission, which is planning for the 250th anniversary of the United States, to be observed in 2026. He has been the recipient of fellowships from the Woodrow Wilson International Center for Scholars, the National Endowment for the Humanities, and the National Academy of Education, among others. His book The Masterless: Self and Society in Modern America won the 1995 Merle Curti Award of the Organization of American Historians for the best book in American intellectual history. Among his other books are The Student’s Guide to U.S. History, Religion Returns to the Public Square: Faith and Policy in America, Figures in the Carpet: Finding the Human Person in the American Past, Why Place Matters: Geography, Identity, and Public Life in Modern America, and most recently Land of Hope: An Invitation to the Great American Story. He was educated at St. John’s College (Annapolis) and received his Ph.D. from Johns Hopkins University in 1987.

Signing of the Declaration of Independence by John Trumbull, displayed in the United States Capitol Rotunda.

The Declaration of Independence: A Transcription

From the National Archives website: http://www.archives.gov/exhibits/charters/declaration_transcript.html


IN CONGRESS, July 4, 1776.

The unanimous Declaration of the thirteen united States of America,

When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to the separation.

Read more