Categories
Michael Novakhov - SharedNewsLinks℠

What we know about weapon used by suspect in Charlie Kirk’s fatal shooting



Michael_Novakhov
shared this story.



Categories
Michael Novakhov - SharedNewsLinks℠

Has the Trump Light Switch Finally Turned On?



Michael_Novakhov
shared this story.


Jonathan Sweet

Army Col. (Ret.) Jonathan Sweet (@JESweet2022) served 30 years as a military intelligence officer. His background includes tours of duty with the 101st Airborne Division and the Intelligence and Security Command. He led the U.S. European Command Intelligence Engagement Division from 2012-14.


Mark Toth

Mark Toth (@MCTothSTL) writes on national security and foreign policy. Previously an economist and entrepreneur, he has worked in banking, insurance, publishing and global commerce. A former board member of the World Trade Center, St. Louis, he has lived in U.S. diplomatic and military communities around the world.


Categories
Michael Novakhov - SharedNewsLinks℠

Decadent Ideology, Decaying Fraternity


French Catholic philosopher Chantal Delsol, a member of France’s prestigious Académie des Sciences Morales et Politiques, is known for her searing accounts of totalitarian ideology and her penetrating works on modern European politics and culture that richly reward any reader who gives them close attention. Over the years, many of her books have been translated into English, including Icarus Fallen: The Search for Meaning in an Uncertain World and The Unlearned Lessons of the Twentieth Century: An Essay on Late Modernity. Her latest work to be translated into English, Prosperity and Torment in France, is an analysis of the current state of affairs in French politics, economics, and cultural life that reveals key lessons for modern democracies around the world.

In particular, Delsol examines the seeming paradox of a wealthy France whose people are dissatisfied with the current state of affairs despite the almost unrivaled free social services provided to its citizens. On one level, the book is a grim account of a nation that has become historically defined by various ideologies, turning even good ideas and political forms like republicanism into rigid concepts closed to further political development. In a style reminiscent of Tocqueville, Delsol considers how the French people are caught between the tremendous benefits provided by the government and their devotion to ideological abstractions like egalitarianism, individualism, and secularism.

“France,” Delsol declares, “is a country that is particularly smitten with ideologies. It prefers ideas to reality.” She remarks that “Marxism was so entrenched that it was necessary to wait until the fall of the Berlin Wall for it to fade away: only universal ridicule could put an end to it, but certainly not the lucidity of our brilliant brains.” Delsol describes how, in a widespread appeal to a “farcical Marxism,” French domestic politics was dominated from 1972 into the early 1980s by the “Programme commun,” or “Common Program,” signed by the French Socialist Party, the Communist Party, and the Radical Party of the Left.

France’s national motto, “Liberté, égalité, fraternité,” originated during the Revolution of 1789. In this book, Delsol describes in withering detail how these ideological formulae have become closed and seemingly incapable of answering the severe challenges that confront France in the twenty-first century. France cannot afford to stand still or act as if these new controversies can be dismissed with the stand-pat answers that it has developed for decades, if not centuries.

The author also notes that “it is so good to live in France.” French citizens are “pampered by a welfare state the likes of which exists nowhere else.” Its citizens “do not pay for health care, or schooling.” France has some of the highest levels of welfare spending in the world, but also an economy with a high GDP and a significant level of economic redistribution. She observes that “French grief is incomprehensible in the face of the ‘fortune’ and the abundance that can be objectively verified. However, if a self-governing people orients itself politically by modern ideologies of egalitarianism or humanitarianism, it will be perpetually disappointed. [She concludes that] … this grief comes from a propensity to expect perfection here below—the habit of an ideologue.” The author anticipates that many will find her analysis “pessimistic.” While she believes that a nation with France’s history and longevity can reform itself, it cannot do so without “lucid diagnostics.”


Delsol observes that France’s decline from being part of the leading cohort of nations to a mid-tier power has been difficult to endure. She also remarks that other values specific to “eternal” France are fading. Delsol notes that the weight of its celebrated national education system now burdens the nation with abundant mediocrity and declining performance. She attributes these suboptimal results to top-down control and to the promise of free education at all levels delivered with minimal cost consciousness. Yet to critique the system, or to openly suggest that the stringent limits placed on private education should be lifted, is to risk public censure. The social justice good that public schools supposedly serve endures in the national psyche, despite years of poor results. This, despite the angling and maneuvering by those with means to get into the best government schools or pull strings and gain admission to one of the few private school slots.

Likewise, the vaunted republican ideal in France seems stilted, even ideological, in a country of tremendous individualism. France is starting to resemble other Western countries, and this has given the French the impression that their national substance is being stolen from them. Less republican, egalitarian, and exceptional, what then will be left of France?

Delsol argues that republicanism requires generous actions, not ideological control over people. If citizens do not freely choose to place the country first, then forcing such actions smacks of authoritarianism. Yet French citizens now freely indulge in a technologically driven individualism, making it almost impossible to envisage the civic fraternity needed to make the republican ideal possible. No one, though, will admit to a reduced identification with republicanism. Delsol wonders why the former trappings of republicanism no longer captivate French hearts. She answers that republicanism as a government ideal has become ideologically corrupted.

Republicanism in France replaced the socialist ideal with the collapse of the Soviet Union. Until that moment, Delsol informs us, the national imagination was socialist. Republicanism became something of a substitute for Marxism among leading intellectuals. This can be seen in how republicanism, which is always tied to a concrete place, a country, and a specific people and their virtues, was reconceived as a humanitarian universalist program. Republicanism was for the world; thus the spectacle, Delsol reports, of French youth rushing to various ports of entry to protest on behalf of migrants entering France. The French Republic must automatically accept them. But from socialist histrionics to republican humanitarianism run the “moments of great hope and moments of great bitterness.” France loves “the union of hearts in comparison with people’s freedom.” True, it seems, but in Delsol’s analysis the French nation suffers from a profound deficit of encountering reality on truthful terms, choosing instead to fill reality with an exaggerated political longing.

Delsol argues that, in a period of individualism, the republican form must incorporate a “high degree of democracy.” The ideology of French republicanism is failing because it lauds an abstract sense of social justice and the common good at the expense of concrete local, religious, and even racial commitments; Delsol suggests that the solution is to extend citizens a greater freedom of choice than that currently offered by the French state. Only in this manner can the union of republicanism be open to a citizenry that no longer wants to ask the state for permission to engage in various commercial, educational, and consumer pursuits. However, this argument for greater individual liberty raises another paradox: the French citizenry’s love for equality is necessarily threatened by allowing more options and choices in healthcare, schooling, work, and commerce. Can the French publicly admit that more substantial options are needed beyond those provided by the government? The remedy itself is a threat to still-cherished but hardly flourishing social settlements.

Delsol notes that France is a disciple of the sixteenth-century thinker Jean Bodin, who emphasized the centralized sovereignty of the state, rather than of Bodin’s German contemporary Johannes Althusius, who articulated a thesis of federalism and subsidiarity grounded in the actions of people in intermediate groups and associations. The French rationalistic conception of state control leads to remarkably different outcomes from the subsidiarity model, Delsol concludes. One of those outcomes is an isolating individualism that results from the existence of only two entities: man and the state. “Centralization increasingly produces the need for the state.”

The individual becomes less, losing agency and direction, and requires increasing assistance from the state. Such is the direction of French politics, Delsol argues, which evolved dramatically in the age of the twentieth-century welfare state, whose existence continues unabated into the present day. The result is maternalistic government. Delsol argues that this path can be tied directly to the French Revolution, which consisted of a regicide and then a coalescence “around the symbol of Marianne, the mother of the republic.” The arts of association were never possible in post-revolutionary France, as associations and corporations were abolished, leaving individuals solitary and reduced to their own capacities. Delsol pointedly asks, “What else can the individual do, without the right to associate when acting?” Consequently, citizens are reduced to dependence on the welfare state, begging a maternalistic government to meet every need. The citizens have an “infantile attitude,” constantly demanding more resources from a state that is always giving and promising. The state as mother and the citizen as infant need one another.

Delsol states that such servility to the government undergirds the French preference for “equality to liberty: they prefer everyone to be dealt with in the same manner.” The republican ideal in France requires one standard for everyone to maintain a proper political union. In the face of declining quality of public services and mounting debt, most French citizens still want the state “to decide for everyone about minimum wages, working hours, school curricula, retirement age, and so on.” The French would prefer the unemployed to be cared for equally by the state on a generous scale rather than be thrust into competitive employment situations, with the risk of inequality, or work in “little jobs.” Delsol’s damning observation is that freedom and responsibility for one’s life are thereby removed from any conception of citizenship.


Such egalitarianism extends even to philanthropy, where giving is done quietly for fear of offending people with displays of financial inequality. State subsidies are the first-order method for helping various causes. This even extended to donations to repair Notre-Dame Cathedral after the 2019 fire, when the public turned against anyone “chasing after glory” by making significant donations. Again, state grants for such repairs were foremost in their minds. The Pinault family (owner of luxury-goods and fashion houses, Christie’s auction house, travel companies, and vineyards, among many other holdings) announced that it would not seek a tax deduction for its considerable gift to the repair of the cathedral. One of the wealthiest families in France wanted to announce its equal status with everyone else.

Yet considerable threats to the French political model continue to mount. The left-right divide now faces both communist and reactionary populist elements that capitalize on the deep mistrust in French society of the justice system, unions, corporations, Parliament, and the rich. A system built on corporatism at the expense of the human person’s freedom is now seen as corrupt and governing at the public’s expense. The left-populist elements offer more government services and programs, more statism. The right-populist element proposes to make the French nation the center of its rule. One struggles to see either populist movement undertaking the economic, educational, and welfare-state reforms Delsol describes to raise the quality of services, increase choice, and spark growth.

The secularism of French life faces a challenge posed by the millions of Muslims who have been given entrance to the country and who report, in polling data and behavior, great loyalty to the Koran and a much lower level of belief in the French constitution. In a less emphatic vein, rising numbers of young French citizens, despite the pressures placed on them to abandon their Catholic faith or at least keep it private, remain loyal to the church. While Catholicism is undoubtedly weak in French life, by dint of political atheism dominating the country for over a century, perhaps the laity are rebounding in their faith as they watch former national promises come apart at the seams.

But the most dramatic pressure comes from the challenge to French humanitarian values that undergird the European Union project. The populist right counteroffer is home, a concrete place called France, which can’t be rearranged by migratory flows, climate change regulation, and a flattening of human life without any sense of history, loyalty, and love. French President Emmanuel Macron surely senses this when he refers to the National Rally party, formerly the National Front party, as the “enemy.” Delsol notes that Macron isn’t engaging in democratic politics, which makes way for alternatives, but in a style that leads to “a war against all.” But if the French economic and social model is under duress, if not on the brink of collapse, and its defenders refuse to change, then the next metamorphic change in political leadership in France will likely have to be won decisively.

The only question is whether the populists have a program that can restore common sense by giving voice to French citizens who want their country’s sovereignty enforced and by expanding freedom and virtue in a manner suitable to the restoration of the nation. The French nation, so heavily defined by an interwoven collection of ideologies, will have to become less. A nation of independent citizens, creatures, family members, and workers will have to become more.


Categories
Michael Novakhov - SharedNewsLinks℠

The Equity Trap


In 2002, the US Census Bureau published a report showing that college graduates earned nearly $1 million more over a lifetime than high school graduates—a gap approaching $2 million in 2025 dollars. The widely cited report reframed inequality as a credentials gap. If more people earned degrees, the logic went, wage gaps would close.
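
As a rough sanity check on that inflation adjustment (using assumed CPI-U annual averages of roughly 180 for 2002 and 320 for 2025; these index values are approximations, not figures from the report):

\[
\$1\,\text{million} \times \frac{\mathrm{CPI}_{2025}}{\mathrm{CPI}_{2002}} \approx \$1\,\text{million} \times \frac{320}{180} \approx \$1.8\,\text{million},
\]

which is indeed a gap approaching $2 million in today’s dollars.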

But correlation isn’t causation. The data didn’t prove that degrees caused higher earnings—only that degree holders tended to earn more. Still, the message stuck: underserved groups held fewer degrees, so policymakers assumed the solution was to remove barriers to college.

The harder path—improving K–12 academic preparation—would have required sustained investment in curriculum, teacher quality, and school accountability. Instead, leaders chose the shortcut: eliminate placement tests, ban remedial courses, and expand access by mandate. It was easier to legislate and more politically attractive. But it didn’t build capability. It just lowered the bar.

What this logic missed was how college creates value in the first place. Institutions don’t conjure skills out of thin air—they select for readiness and refine it through competition and academic rigor. Admissions standards exist to match students with programs they’re prepared to complete. Remove those filters, and the meaning of the credential collapses.

In 2009, President Barack Obama declared that “every American will need to get more than a high school diploma … by 2020.” College completion became a civic obligation and a macroeconomic strategy. If credentials alone created prosperity, we could solve inequality with a printing press. But degrees don’t create skills—they signal them. And when standards collapse, the signal fades. So does public trust. 

Removing placement tests didn’t eliminate academic screening—it just delayed it. Students hit barriers in coursework instead of admissions. Those aiming for high-value degrees were quietly diverted when they couldn’t keep up.

In fields like business, engineering, and health sciences, the first required math course often assumes years of preparation. At San José State University, for example, business majors must complete Business Calculus, which requires precalculus, which requires college algebra. For a student who hasn’t mastered Algebra II in high school, that’s a three-course ladder they can’t climb.

Engineering is even more demanding. Calculus I is the entry point—and it defeats many students who passed AP Calculus in high school. Success in these courses isn’t about cramming. It’s about a decade of structured math—long nights at the kitchen table, mastering foundations from elementary school onward.

Students who can’t keep pace in high-demand majors aren’t dismissed. They’re redirected into fields with lower expectations and weaker economic returns. Engineering becomes business. Business becomes psychology, communications, or justice studies. Institutions call it “flexibility.” But the outcome is the same: students land in programs with minimal quantitative demand and limited economic payoff.


Research confirms this shift disproportionately affects Black and Hispanic students: after California Assembly Bill 705 effectively eliminated remedial courses, they were more likely to be routed into SLAM (statistics and quantitative reasoning) rather than BSTEM (algebra and calculus) pathways—regardless of academic preparation—suggesting a new form of racialized tracking under the banner of equity.

One of the most influential studies driving this trend—Jo Boaler’s “Railside” project—claimed that de-tracked, collaborative math instruction improved outcomes for low-income students. It was widely cited and helped justify the elimination of eighth-grade Algebra I in California. But when independent researchers identified the school and examined public data, the reported gains vanished. Standardized test scores and college readiness outcomes didn’t improve. The lesson is stark: when feel-good pedagogy replaces real preparation, students are told they’re succeeding—right up until they hit the wall. 

We see the pattern in degree production. Between 2001 and 2022, annual bachelor’s degree awards rose by roughly 770,000—a nearly 40% increase after adjusting for US population growth. Low-return majors surged: psychology degrees rose 76%, criminal justice 126%, and interdisciplinary studies 194%. These fields are easy to scale, light on math, and often disconnected from clear career pathways. This surge wasn’t driven by student choice—it was institutional triage. Faced with waves of underprepared students, colleges expanded programs unlikely to screen them out.

The belief that every American should earn a bachelor’s degree was a costly mistake. Four years of college is expensive—not just in tuition, but in lost wages and delayed entry into productive work. And most jobs in the economy don’t require it.

The bachelor’s degree was designed for pursuits that demand sustained intellectual training—law, medicine, engineering, and education, for example. It rewards abstract reasoning, structured inquiry, and disciplinary depth. That model has real value—but only when such cognitive demands are central to the task. Not every domain of human activity calls for this kind of formal abstraction, just as not every person is built to lift their body weight in the heat or crawl through a 36-inch coal seam. Recognizing differences in skills and abilities contributes to specialization—something Adam Smith, in 1776, identified as essential to the wealth of nations.

So why try to universalize it? Not because the labor market demanded it, but because the politics of inequality did. As credentialed professionals pulled ahead and working-class wages stagnated, policymakers embraced a seductive narrative: if degrees equal earnings, then more degrees must mean more mobility.

Rather than address structural inequality directly, they offered a workaround: “learn to code.” At a 2014 White House event, President Obama urged students, “Don’t just play on your phone—program it.” It sounded empowering. But it blurred the line between cultural aspiration and practical workforce preparation.

The Hour of Code didn’t rebuild the trades or close wage gaps. It dressed inequality in borrowed tuition and vague tech dreams. Most jobs in America still rely on applied skill, not theory. Training, not abstraction. There’s nothing wrong with saying college isn’t for everyone. What’s wrong is pretending it is—and calling that equity. 

We didn’t just lower the bar—we raised expectations and sold students a story. Young people are the unwitting pawns in a larger political script. They’re told the “good people” have secured their seat at the table—and if they pull up a chair, economic mobility awaits. Just work hard.

But what they aren’t told is that the system doesn’t bend to support them—it bends to preserve itself. When students fall short in high-demand majors, they aren’t expelled. They’re taught a lesson in bait and switch. Colleges steer them into programs with lower academic demands and weaker labor market alignment. The institution meets its enrollment targets. If students refuse to switch majors, they drop out—no degree, plenty of debt—only to realize too late they weren’t prepared. Either way, the result is the same: a system that avoids accountability while the student shoulders all the risk.

At San José State University, roughly 27% of bachelor’s degrees are awarded in these low-math, low-ROI fields. These aren’t pipelines to professional careers. They’re pressure valves—used to keep students enrolled after hitting academic obstacles.

Most students don’t choose these majors out of passion. They choose them because they were redirected—and no one told them the economic tradeoffs. The tuition is the same. The time is the same. But the return is radically lower. This is not guidance. It’s enrollment management. And it’s funded by students who believed they were preparing for their futures.

Survey after survey confirms that students attend college for economic mobility. Steering them into debt-financed credentials with limited value isn’t equity—it’s a betrayal of the public trust. Higher education has a solemn mission. It should elevate students, not quietly reroute them to protect enrollment targets.

Real equity doesn’t require lowering expectations. It requires telling the truth. Even Mad Magazine understood the problem back in 1975.

The great philosopher Tom Koch wrote of guidance counselors:

Most counselors take pride in their Vocational Guidance techniques, which consist of signing you up for all the courses you’ll ever need to launch a career that you don’t want and they don’t understand. But even after you’ve taken every course and graduated with every honor, a Guidance Counselor is seldom ever able to find you a job as a New York Disc Jockey or a Hollywood Talent Scout or a Boston Symphony Conductor. More likely, his Placement Service will offer you work as a Super Market Box Boy or a Steel Mill Furnace Stoker or a Shepherd (Mad Magazine #175, June 1975).

That was satire from my old comic book collection, but for many students today, it’s reality.

Students deserve honest, structured, and data-grounded guidance. This isn’t ancillary—it’s a core function of public education. Programs must disclose entrance requirements, academic demands, graduation rates, and labor market outcomes. But real equity requires more than transparency—it requires honesty. Students need a clear-eyed assessment of where, how, and whether they fit before investing years and debt. That might mean being told college isn’t the right path for them—or it might reveal where they’re a great match. There’s no shortage of meaningful work in this country. What’s missing is the guidance to help students find their place in it.

Advising should be anchored in objective data: curriculum catalogs, NCES and Scorecard outcomes, and Department of Labor wage statistics. Placement must reflect demonstrated readiness—not race, zip code, or inflated transcripts. Passion matters, but it must be matched with preparation. Telling every student they can be whatever they want, regardless of academic record, isn’t guidance. It’s false hope disguised as empowerment.

Technology makes bias-resistant, transparent guidance possible. All that’s missing is the leadership to make it real. Pretending all majors are equally accessible and equally valuable isn’t guidance—it’s misdirection. 
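
The kind of transparent, data-grounded advising described above is straightforward to sketch in software. The minimal Python example below is illustrative only: the program names, prerequisite ladders, earnings, and graduation rates are hypothetical placeholders, where a real tool would load prerequisites from curriculum catalogs and outcomes from NCES, College Scorecard, and Department of Labor data.

# Minimal sketch of data-grounded program advising. All names and
# figures below are hypothetical placeholders; a real tool would load
# prerequisite ladders from curriculum catalogs and outcomes from
# NCES, College Scorecard, and Department of Labor statistics.

PROGRAMS = {
    "Business": {
        "math_ladder": ["College Algebra", "Precalculus", "Business Calculus"],
        "median_earnings": 65_000,   # hypothetical outcome figure
        "grad_rate": 0.62,           # hypothetical completion rate
    },
    "Engineering": {
        "math_ladder": ["Precalculus", "Calculus I", "Calculus II"],
        "median_earnings": 80_000,
        "grad_rate": 0.55,
    },
    "Justice Studies": {
        "math_ladder": [],           # minimal quantitative demand
        "median_earnings": 42_000,
        "grad_rate": 0.70,
    },
}

def advise(completed: set[str], program: str) -> str:
    """Report the gap between a student's demonstrated math preparation
    and a program's requirements, next to the program's outcome data."""
    info = PROGRAMS[program]
    remaining = [c for c in info["math_ladder"] if c not in completed]
    ladder = " -> ".join(remaining) if remaining else "none"
    return (
        f"{program}: math courses remaining: {ladder}; "
        f"median earnings ${info['median_earnings']:,}; "
        f"graduation rate {info['grad_rate']:.0%}"
    )

# A transcript that stops at Algebra II faces the full three-course
# ladder in Business, the pattern described earlier in this essay.
student = {"Algebra II"}
for name in PROGRAMS:
    print(advise(student, name))

Run on this toy data, the report puts the length of each ladder and the economic stakes in front of the student before enrollment, rather than after a quiet rerouting.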

Higher education was never designed to prepare students for every job in the economy. Its value lies in preparing students for fields that genuinely require deep academic preparation—and in being honest when that preparation is lacking. Turning college into a universal credentialing system is a fool’s errand: it dilutes purpose and erodes credibility.

Rather than making college the goal, our education system’s mission should be to prepare students for a rewarding adult life. That might mean technical training, on-the-job experience, or, for some, bachelor’s degrees.

A just system doesn’t sort by race, wealth, or prestige. It aligns knowledge, skills, and abilities with opportunity—and respects every path that leads to productive work. But in the name of equity, we lowered standards to raise degree totals. The result wasn’t mobility—it was misdirection. Unprepared students were steered into college full of hope, only to land in majors with low academic demands and limited value.

Preparation and selectivity weren’t obstacles to justice—they were its foundation. Real equity doesn’t mean rerouting ambition into academic dead ends. It means telling students the truth, honoring all forms of work, and making sure that when a degree is awarded, it actually means something.


Categories
Michael Novakhov - SharedNewsLinks℠

Generic Equivalents to Natural Law


While routinely invoked by Protestant Reformers during the Reformation, natural law ethics did not have a good twentieth century among Protestant theologians, particularly those in the Reformed (or “Calvinistic”) tradition. Karl Barth famously blasted natural law ethics. Cornelius Van Til blasted both Karl Barth and natural law ethics. More recently, albeit more irenically, Calvin College philosophy professor James K. A. Smith detailed “What’s Wrong with Natural Law?”

Despite the criticism, or perhaps because of it, the pendulum started swinging back earlier this century. Interest in natural law ethics revived not only among distinctly Reformed and Lutheran scholars, but among Evangelical scholars more generally. This May, Zondervan Academic published Natural Law: Five Views. The book seeks to introduce today’s Protestants to the renewed attention to natural law. To do so, it includes contributions by Catholic as well as Reformed and Lutheran commentators on natural law. It also includes a chapter by one “anti-natural law” theologian.

For the most part, the book’s editors and the four “pro” natural law authors each provide a helpful précis for their particular tradition’s view of natural law. These views are the classical view, the Lutheran view, the Reformed view, and the “new natural law” (à la Germain Grisez, John Finnis, and others).

Each chapter, and the back-and-forth discussion between the contributors that follows each chapter, accomplishes the editors’ and authors’ goal: each contributor provides a brief, chapter-length summary of their tradition’s distinctive emphases and provides sources and citations for the interested reader to follow up.

All that is useful enough. Yet a thought kept nagging me while reading the short volume: The editors and the contributors (except for the “anti-natural law” contributor, of course) seem distinctly interested in promoting “natural law” as a distinctive label or brand name. Pressing the brand name, however, risks muddying waters that the editors and authors aspire to clarify, and losing at least two distinct groups that natural lawyers presumably would count as allies or cobelligerents.

On the one hand, there is a set of scholars and commentators who, like natural lawyers, are moral realists yet who reject or find inadequate some elements of natural law systems. Moral realists hold that moral requirements are objectively true, as opposed to moral requirements being subjective or relative. In the main, all natural lawyers are moral realists, but not all moral realists are natural lawyers.

On the other hand, there are scholars and commentators who actually, if implicitly, apply or draw on forms of natural law methodologies in their arguments, but who want to reject the natural law brand name for one reason or another.

We’ll take this second set first.

Natural Law in Name vs. Natural Law in Substance

C. S. Lewis, whom the editors and several of the contributors cite with approval, exemplifies brand-name ecumenism in his book, The Abolition of Man.

The thing which I have called for convenience the Tao, and which others may call Natural Law or Traditional Morality or the First Principles of Practical Reason or the First Platitudes, is not one among a series of possible systems of value. It is the sole source of all value judgments. If it is rejected, all value is rejected. … The effort to refute it and raise a new system of value in its place is self-contradictory.

What is critical in Lewis’s view is not whether one applies the “natural law” label to one’s view, but the substantive commitment to moral realism, whatever one terms it.

Lewis, for example, would welcome Ronald Dworkin’s echo of the “reductio” Lewis asserts in The Abolition of Man regarding the self-refuting nature of moral skepticism. In his book Law’s Empire, Dworkin writes that if a person “really believes, in an internally skeptical way, that no moral judgment is really better than any other, he cannot then add that in his opinion slavery is unjust.” This is a form of the “reductio” that Lewis employs, even if the affirmative content of the objective morality each asserts ultimately differs at signal points.

Yet while Dworkin devoted much of his career to arguing that morality cannot be separated from law, he nonetheless resisted the claim that his theory reflected some version of natural law theory. He did so not because it wasn’t true, but rather because he didn’t deem it “a very important objection.” He observed that labeling his theory a natural law theory merely “suggests a different way of reporting” what his theory is about.

Or consider Michigan Law Professor Scott Hershovitz, who argues a different version of Dworkin’s thesis in his recent book, Law Is a Moral Practice. Hershovitz, too, does not deny that he engages in natural-law reasoning. Rather, he rejects the label as just not “helpful” in identifying the nature of his argument, given the wide variation in theories that go under the label.


To be sure, natural law jurisprudence should not be identified with natural law ethics, but the two sets of literature do intersect. Both Dworkin and Hershovitz advance a form of moral realism in their arguments, although neither thought it necessary to anchor their moral realism in a deeper metaphysical system. (Dworkin, for instance, did not believe in God.)

On the other hand, the “anti-natural law” contributor to the “Five Views” volume, Peter Leithart (with whom I co-edited a book), argues that “natural law” does not apply to his thought because he believes it necessary to posit a deeper metaphysical system of thought to account for knowledge of law, and this requires divine rather than natural revelation. (More on this point below.) Despite rejecting the natural law label in application to his view, the volume’s editors wonder whether Leithart is “truly ‘Anti-Natural Law.’”

It seems to me that many who reject the natural law label nonetheless either apply a natural law methodology or assert a form of moral realism that rejects moral relativism, an outcome the editors of the Five Views book suggest is realized uniquely by natural law theories.

Natural Law Methodologies vs. the Natural Law Brand Name

The irony is that many scholars and commentators who ignore or reject the natural law label nonetheless employ one or another form of natural law methodology. The question then is whether it is worth the effort to persuade these scholars to apply the brand name to their product, or whether self-identified natural law aficionados can, like C. S. Lewis, simply declare victory and focus on the substantive debate over the content of natural law principles.

To make this argument, we first need to identify what “natural law” methodologies are. Here, the heterogeneity of natural law theories can be a problem. I suspect that some scholars reject the natural law label because they think it requires a commitment to a methodology they do not employ or to which they object. Without any claim to exhaustiveness, and with the proviso that methodologies can overlap, I would generally follow Russell Kirk, with some differences by way of emphasis, and count at least four basic types of natural law methodologies:

  • Connaturalism and/or an intuitive commitment to some form of moral realism as self-evident (cf. Aquinas, ST I-II. q. 91, a3).
  • Principles and actions that promote achieving the human teleology, that is, achieving the human end or “nature” in the Aristotelian sense (see, for example, Aristotle’s Politics I.2, 1252b30-34 or Nicomachean Ethics I.7, 1097b24-29).
  • Rejection of self-refuting propositions (see, for example, John Finnis; Aquinas, ST I-II. q. 94, a2).
  • Empirically observed universal, or near-universal, human beliefs and/or behavior (see, for example, Edward O. Wilson’s empirical/biological argument, or Aquinas, ST I-II. q. 94, a3, ad.2, or Lewis’s argument in The Abolition of Man).

As I mentioned, these types can overlap. For example, Finnis’s argument in Natural Law and Natural Rights asserts both that the “basic goods” he identifies are “self-evident” and asserts the claim that to reject any of the basic goods he identifies is self-refuting.

The larger point of the exercise, however, is that any number of commentators who reject the label “natural law” nonetheless implicitly employ natural law methodologies. Dworkin and Hershovitz, for example, seem to employ a form of moral intuitionism.

The question is how much energy natural law advocates want or need to invest in persuading these folk expressly to apply a natural law nomenclature to their work, versus the alternative that C. S. Lewis modeled: simply recognizing that those who are not against us are for us, and focusing attention and resources on substantive questions.


Consider the difference between Aristotle’s and Aquinas’s teleologies. Both Aristotle and Aquinas conceive of a human nature through what it means for a human to be wholly mature or fully flourishing. But Aristotle identifies flourishing with “an activity of the soul in accord with virtue,” while Aquinas identifies it with realizing the beatific vision. To be sure, there can be overlap between these two views, perhaps substantial overlap. But they are not necessarily the same thing.

The point of observing this is to note that anyone who reasons from a human telos, an image of human flourishing, implicitly engages in this sort of natural law methodology, even though the content or conclusions of their natural law theory will diverge depending on the distinctive telos they reason from.

Whether the image of human flourishing is that of Maslow’s hierarchy of needs, the freedom of the Jeffersonian yeoman, or overcoming Karl Marx’s alienation, all posit a telos that can be understood to identify a human “nature” to which the Aristotelian methodology can apply.

I want to emphasize that I am not suggesting that it doesn’t matter what we posit as the human telos or our view of human flourishing. It matters critically; analysts will argue over which image of human flourishing is correct or appropriate.

The point is that deriving moral or political implications from a concept of human flourishing—any concept of human flourishing—is a natural law methodology, whether one calls it that or not. That Aristotle and Aquinas (or others) disagree about the ultimate nature of the human telos does not mean that one or the other is therefore not engaging in natural law reasoning. Nonetheless, the promiscuity of natural law theory here is one reason scholars such as Hershovitz don’t think it’s helpful to be identified with natural law.

So, too, for example, Dworkin and other secular scholars. Despite not believing in God, Dworkin nonetheless embraced a form of moral intuitionism that required him to posit a form of moral realism. Pertinent to the Five Views book, this moral realism—the belief that moral principles were objective and could be known and applied—would seem to be consistent with the minimal threshold that the Apostle Paul identifies for non-believers reflecting the requirements of the law “by nature” in his letter to the Romans, a canonical text for natural law (Romans 2:14-15).

Even the likes of nineteenth-century legal positivist scholar John Austin, who expressly warred against the notion of “natural law” in jurisprudence (calling it “stark nonsense”), was nonetheless a moral realist. Austin assented “without hesitation” to the view that “all human laws ought to conform to the Divine laws.” He agreed that “if human commands conflict with the Divine law,” then the human law should be “disobey[ed]” in favor of the Divine law. While insisting that “law” can be identified by positive attributes alone—“the existence of law is one thing; its merit or demerit is another”—Austin never abandoned that moral realism.

Sin and the Problem of Gaps in Apprehension of Natural Law

The Protestant theologians who criticize natural law are moral realists as well. In the main, they object to the notion that natural law is accessible to reason, on the grounds that sin affects humans so dramatically that it can limit natural knowledge of morality in significant ways. This creates a very practical problem for an argument commonly deployed in favor of natural law: that “natural law” is moral knowledge shared generically by humanity across culture and across time. In response, some Protestant theologians have argued that if sin so dramatically affects moral knowledge that there are gaps in the human conscience at particular times and in particular cultures, then natural law cannot or does not provide a universally accessible moral system.


The issue revolves around whether sin affects the moral conscience so significantly that natural law fails to meet the threshold for robust versions of moral responsibility.

While Aquinas is often put forward as a paradigmatic natural lawyer, it seems to me that he goes further down this anti-natural law road than is often conceded. Divine law, which for Aquinas is biblically revealed law, is needed not only for matters beyond what is accessible to reason (Aquinas includes the Gospel in this category), but also for matters that are accessible to reason but to which access has been “impeded” by sin. Aquinas writes:

It was fitting that the Divine law should come to man’s assistance not only in those things for which reason is insufficient, but also in those things in which human reason may happen to be impeded. … Through being habituated to sin, [human reason] became obscured in the point of things to be done in detail. … The reason of many men went astray to the extent of judging to be lawful things that are evil in themselves. Hence there was need for the authority of the Divine law to rescue man from both of these defects (ST I-II. Q. 99, A.2).

The issue here pertains to the robustness of natural law, that is, the problem of gaps in apprehension of natural law.

Note first that, for Aquinas, this is not a minor problem for humanity. Aquinas observes that being “habituated to sin” is the “reason … many men went astray” in approving evil things.

Secondly, in referring to moral sense being “obscured … in detail,” Aquinas means that the natural law becomes obscured on specific moral points, but these can be significant moral points. One example of impeded human reason that Aquinas discusses is German barbarians for whom “theft, although it is expressly contrary to the natural law, was not considered wrong” (ST I-II. Q.94, A.4). So, too,

the natural law can be blotted out from the human heart, either by evil persuasions, just as in speculative matters errors occur in respect of necessary conclusions; or by vicious customs and corrupt habits, as among some men, theft, and even unnatural vices, as the Apostle states (Rom. 1), were not esteemed sinful. (ST I-II. Q.94, A.6).

Let’s take Aquinas’s example of theft and consider how this would create a very practical problem for the deployment of natural law in concrete situations. Let’s say that the moral conscience is working just fine for nine of the Ten Commandments. But the consciences of “many men” in our society have gone astray on the natural law behind one commandment, the commandment against theft. The practical problem is that it is specifically when there’s a failure to follow the law that we would want to appeal to conscience to persuade people to stop stealing. But it’s precisely on the point of theft (in my hypothetical) that the appeal to conscience wouldn’t work because reasoning has been impeded regarding this principle.

The natural lawyers’ habitual appeal to Romans 2:14, where Paul discusses Gentiles who do “by nature the things of the Law,” doesn’t help at this point. This canonical text for Christian natural lawyers contains an opening conditional (a condition that is often elided). Paul writes, “For when Gentiles who do not have the Law do by nature the things of the Law.” Paul’s argument here does not require that Gentiles by nature recognize all the things stipulated in the Law. His argument is only that “when” they do, their conscience bears witness to the Law.

Paul’s argument is consistent with the possibility of gaps in apprehension of natural law. Take the Ten Commandments again, and Aquinas’s example of ancient German barbarians thinking that theft is morally permissible. As long as their conscience “accuses” them regarding one of the other nine commandments, then Paul’s argument is satisfied. Their consciences “accuse them” on one or more of these other grounds, and, therefore, they know they have sinned (which is the bigger point that Paul is making in the passage).

The problem that “gaps” create for natural law systems is that natural law cannot be offered as a theory that accounts for a universal morality accessible to all people … except when it doesn’t.

Whether they are moral realists who reject the natural law label or moral realists who accept the natural law label, it seems that in this day and age, the wedge issue is moral realism versus the rejection of moral realism. As C. S. Lewis suggests, it doesn’t matter all that much what label we apply to the view as long as it’s some form of moral realism. I’m not suggesting that advocates of one view or the other shouldn’t burn any of their free time arguing over whether Coke is better than Pepsi or vice versa. At the same time, we don’t want to get caught up in a form of natural-law sectarianism akin to what Emo Philips lampooned with his telling “die heretic” joke.


Categories
Michael Novakhov - SharedNewsLinks℠

A Republican Excursion


Secretary of State Thomas Jefferson and Congressman James Madison, Republicans of Virginia, took a lengthy trip through northern climes together in the spring of 1791. Contemporaries surmised that the two of them had in mind to invigorate the Republican proto-party of which they were understood to be the leaders. Louis P. Masur’s exquisite little book A Journey North: Jefferson, Madison, & the Forging of a Friendship shows that they did far more than that.

Masur refers to the Virginians’ northern sojourn as “a gambol through upstate New York and parts of New England” and what we would nowadays call an opportunity for them to recharge their batteries. Their ongoing conflict with Alexander Hamilton’s Federalist party had somewhat dampened their spirits, and a bit of sightseeing would perhaps reinvigorate these prominent political contestants—even if it is somewhat difficult to imagine James Madison in so light-hearted a mood as to be gamboling. Masur’s other description of the journey as an “excursion, maybe an adventure” is more apt. They did do some work along the way, the author avers, as their sightseeing at Revolutionary battlefields and meetings with local eminences perforce had political implications.

In his prologue, Masur shows that Jefferson, at least, had in mind traveling with a companion from an early age. For example, he asked John Page—a young friend and future Virginia governor—whether he had in mind to travel: “If you have,” Jefferson told him years before the Revolution, “I shall be glad of your company.” As Page would not join him, Jefferson had to wait until he was posted several years later to represent the Confederation Congress in Europe to take in the sights in much of England and France. Of rural France, he wrote, “I am now in the land of corn, wine, oil, and sunshine. What more can man ask of heaven?” He counseled a younger kinsman that traveling “makes men wiser, but less happy.” As he was likely to learn to value his homeland less if he traveled abroad, Jefferson opined that the younger man should just take in American sights: “There is no place where your pursuit of knowledge will be so little obstructed by foreign objects as in your own country, nor any wherein the virtues of the heart will be less exposed to be weakened.”


James Madison was loath to undertake significant travel. He first rejected an invitation from Jefferson to spend much of 1784 in France and then the following year turned down James Monroe’s invitation to accompany him to the Ohio territory. Monroe’s suggestions of Montreal and Quebec sojourns drew no more positive a response. In 1784, however, Madison did accompany the Marquis de Lafayette to New York and up the Hudson River (which he had visited before). He told Jefferson in the wake of this journey that he would like to see “the eastern states,” i.e., New England, on the first convenient occasion.

Masur provides a kind of précis of Jefferson’s life prior to the trip with Madison, capturing all of the main points in a slight space and giving a good impression of the older man’s personality along the way. A substantial deepening of the two men’s friendship followed the death of Jefferson’s wife, particularly as the two shared time together in Philadelphia between that lamentable event and Jefferson’s departure for diplomatic duty in France. In 1784, not for the last time, Jefferson tried to persuade Madison to establish an abode close to Monticello.

Their political relationship is also succinctly presented. Like most Federalists of the 1780s, Madison was aghast at Shays’ Rebellion; Jefferson, away in France, found something very un-French to admire in the tax resistance of Massachusetts men in the Berkshires. A similar impulse left Jefferson quite skeptical of the proposed United States Constitution, which his friend had played the lead role in writing. To placate Orange County Baptists, Jefferson, Governor Edmund Randolph, George Mason, and George Nicholas, Madison took up Jefferson’s dear cause of constitutional amendments. Masur’s account of these matters, familiar to students of the men and the period, is brief and clear. So too his account of the Jefferson/Madison-led Republican Party’s opposition to Treasury Secretary Alexander Hamilton’s program.

Some of the material Masur includes is of interest to students of Jefferson, Madison, and the United States in this period, though not likely to people solely interested in the northern sojourn of 1791. For example, the story of the slave James Hemings’ service to Jefferson in Virginia and France, along with his eventual emancipation in America and ultimate suicide, has little to do with the events of 1791. So too the imbroglio over publication of Jefferson’s comment, on a copy of Tom Paine’s “Rights of Man,” that it would counteract certain “political heresies which have sprung up among us.” The “heresies” he had in mind had been endorsed by Vice President John Adams, whom Jefferson certainly did not intend to contradict in so direct a manner before the public—yet here it was. Masur seems to include these marginally relevant tales just because he finds them interesting. (The Paine tale ends with an observation that Timothy Pickering was due to have a distinguished career, which is one way to describe it.)

We are told after this that Madison and Jefferson fell into “the very party system that they dreaded,” and by the end of the same paragraph, Madison says that party disputation “could not be prevented”—which anyone familiar with his famous Federalist #10 would find totally unsurprising. Jefferson stooped to secret partisan machinations and lied to President Washington about being involved, as the President surely must have known. That the two Republican chieftains dined with prominent administration critics before their departure to the north cannot have allayed anyone’s suspicions.

“No question,” Masur says, “politics was on everyone’s mind” as our heroes began their journey. “Jefferson and Madison’s tour through Federalist New England undoubtedly reinforced for them the necessity of taking a firm public stand against what they saw as the heresies of the day. Yet, in the end, politics was not their main purpose.” “Health, recreation, and curiosity,” said Madison, prompted their trip. What else might we expect him to have said?

From then on, each chapter of the book about the journey itself is titled to refer to one of the main matters of interest to Jefferson and Madison as they made their way. Jefferson famously was a man of encyclopedic interests, and Madison, too, could be prompted to take up matters of fascination. While we are prone to think of them now as among the premier politicians in the country’s history, both of them were first substantial farmers, of course, and one of the purposes of their journey was to investigate the problems posed to American agriculture by the Hessian fly.

Masur provides information about the new pest’s appearance in Europe, about Jefferson’s role in spurring the American Philosophical Society to investigate the Hessian fly, and about the questions regarding the fly—when it first appeared, whether it grew from egg or worm, the type(s) of wheat it attacked, how it had been successfully fought—that the travelers put to people along their route. “Jefferson’s most extensive writing” on the trip, we learn, “was his notes on the Hessian fly.” “They are never in the grain or chaff,” he jotted, likely irked by the British government’s measures to exclude American wheat imports. Though an amateur scientist, Jefferson was a notable one. In his leisure time, he did significant mental work, and Madison was right along with him. Much of their recreation on the trip had practical application.

Masur also describes Jefferson’s relationships with one of his daughters and a slave, as shown on this trip. Rather than a single narrative account, the book presents a timeline with several points of interest along it, at which the author delves into related, sometimes distantly related, matters. The “forging of a friendship” in the book’s title does not exactly capture the book’s content. For example, there are sections on Jefferson’s relationship with his younger daughter and on the slave man he took with him to France, neither of which is related to the older man’s relationship with James Madison.

A substantial section on Madison and slavery, though interesting, is not much about the men’s friendship either. Like Jefferson, he thought seriously about slavery, and Masur considers his record in this regard. Like many other Upper South liberals of his day, Madison believed that the sole practicable solution to the slavery problem was to find someplace to which American slaves could be sent. While Masur’s account of this matter will hold the interested reader’s attention, it is not obviously related to the book’s supposed theme.

In sum, A Journey North ably tells the story, with substantial digressions, of the northern trip James Madison and Thomas Jefferson took in 1791. Perhaps the most memorable aspect of this well-written little book is the story of Jefferson’s leaving his walking stick to Madison “as a token of the cordial and affectionate friendship which for nearly now an [sic] half century, has united us in the same principles and pursuits of what we have deemed for the greatest good of our country.” Masur illustrates it with a photo of the stick, which Madison returned to Thomas Jefferson Randolph, Jefferson’s favorite grandchild, thus accounting for its presence at Monticello today. One suspects that spending time together was what the two rising statesmen enjoyed most about their voyage.


Categories
Michael Novakhov - SharedNewsLinks℠

Cutting the Gordian Knot of Birthright Citizenship


Next year, the Supreme Court is expected to clarify the scope of birthright citizenship. In other words, the Court will determine who may, and who may not, claim to be American citizens by virtue of the Citizenship Clause of the Fourteenth Amendment.

The Citizenship Clause reads, “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.” Thus, to qualify as a birthright citizen, a person must have been both (1) born or naturalized in the United States and (2) born “subject to the jurisdiction of” the United States. Disputes about the scope of the clause center on the meaning of “subject to the jurisdiction of.”

Problems with the Fourteenth Amendment

Congress proposed the Fourteenth Amendment in 1866, and state legislative ratification was declared complete on July 9, 1868. The amendment was designed primarily to protect newly freed slaves from hostile state governments. It has also been the basis for some of the Supreme Court’s most memorable, fair, and popular decisions. Some even hail it, along with the Thirteenth and Fifteenth Amendments, as the basis for a “Second Founding” (a characterization I consider overdrawn).

The value of the Fourteenth Amendment has made writers reluctant to criticize the measure’s text or its drafters. Candor compels, however, the conclusion that the Fourteenth Amendment is very poorly written.

Much evidence of poor drafting is in the results: Section 2, dealing with congressional apportionment, has proved unworkable. Section 3, the Disqualification Clause, is filled with uncertainties that fueled extensive litigation during the months leading to the 2024 presidential election. Scholars are still debating the Privileges or Immunities Clause—not merely its specific applications but even its basic purpose. Scholars and jurists continue to debate the amendment’s Due Process Clause as well.

Thus, it is unsurprising that the scope of birthright citizenship also remains unsettled.

One reason for the difficulties in construing the Fourteenth Amendment is that, while the 1866 Civil Rights Act is often cited as an interpretive source for the amendment, the language of the amendment differs appreciably from that of its predecessor statute. One cannot dismiss the legal effect of those changes, as some have, simply because a senator or two thought (inaccurately) that they meant the same thing.

Another source of difficulty is that, unlike the framers of the original Constitution, the amendment’s drafters rarely relied on words and phrases with recoverable historical and legal meanings. Instead, they coined their own phrases (such as “equal protection of the laws”) or—as in the case of the amendment’s Privileges or Immunities Clause—referred to idiosyncratic definitions rather than established ones.

The most important source for the original meaning of a constitutional provision is usually the ratification record. And yet the Fourteenth Amendment’s state ratification records, to the extent that they are published at all, often are unhelpful—although the late James E. Bond has used them to show that ratification evidence contradicts the incorporation doctrine.


If you try to interpret the phrase “subject to the jurisdiction,” you encounter all these obstacles. This language differs from the corresponding phrase in the Civil Rights Act (“not subject to any foreign power, excluding Indians not taxed”). The traditional legal meaning of “subject to the jurisdiction” (that is, “within the territory governed by”) would render that phrase redundant, and the Senate debates confirm that a new, unprecedented definition was intended. But those debates are very unclear on what that new definition was.

The Senate Debates

Commentators on several sides of the birthright citizenship issue quote from the Senate debates to support their positions. They can do this because the debates support several sides. Sometimes even the same senator is found supporting several sides.

To illustrate the point, let’s consider some comments not from opponents—who would be expected to issue conflicting interpretations—but exclusively from the amendment’s supporters:

  • Jacob Howard (R.-Mich.), the principal sponsor, stated in his introductory speech that “subject to the jurisdiction” excluded the children of foreigners born in the United States.
  • But John Conness (R.-Cal.)—another supporter—expected the amendment to grant citizenship to the children of resident Chinese nationals. Timothy Howe (R.-Wis.) claimed the Fourteenth Amendment would admit to citizenship “all men … who are born and reared upon American soil”—thereby adding a requirement of being reared and deleting any exclusion of foreigners.
  • Lyman Trumbull (R.-Ill.) contended that “subject to the jurisdiction thereof” meant subject to the complete jurisdiction thereof: “not subject to some foreign Power”… owing “allegiance solely to the United States.” Thus, he agreed with Senator Howard that the amendment would exclude the children of all foreigners. But on another occasion, he said it meant, “birth within the territory of the United States, born of parents who at the time were subject to the authority of the United States.” The latter statement seems to include the children of foreigners subject to US authority.
  • In his initial speech, Senator Howard made no suggestion that tribal Indians in the territories were excluded by the phrase “subject to the jurisdiction”—even though they had been excluded by the Civil Rights Act. When challenged on the point, however, Howard claimed they were excluded. This reader gets the impression that he manufactured the exclusion for the moment.
  • Senator Howard also averred that the language—which, he said, excluded children of all foreigners—was merely “declaratory of … the law of the land already.” But, as explained below, it was not.

These incidents only begin to describe the confusion that characterizes the debates over the Citizenship Clause.

Deducing Principles

Unfortunately for Supreme Court justices, the jumbled state of the historical records does not excuse them from interpreting the Constitution as best they can. In this case, specific drafter expectations differed. But we may be able to deduce some common principles, and if so, those principles will have to trump divergent expectations. And the drafting history does disclose principles accepted by most, if not all, of the participants.

First: Both the presumption against redundancy and the Senate debates tell us that “subject to the jurisdiction” imposes a requirement additional to being born within the country. A 2011 Time Magazine cover story opined, “The 14th Amendment … holds that if you’re physically born in the US or a US territory, you’re a citizen. Full stop.” We can be confident this assessment is wrong.

Second: Several senators, including the principal sponsor, acknowledged that “subject to the jurisdiction” excluded the children of all or some foreigners.

Third: Several senators said, without contradiction, that the amendment restored the law as it had existed prior to the Dred Scott decision.

Fourth: Several suggested, without specific contradiction, that “subject to the jurisdiction” was tied to the Anglo-American concept of allegiance. For example, Edgar Cowan (R.-Pa.) said, “It is perfectly clear that the mere fact that a man is born in the country has not heretofore entitled him to the right to exercise political power.” He affirmed the prerogative of states to evict people “who acknowledge no allegiance, either to the State or the General Government.” Similarly, Senator Trumbull declared that tribal Indians “are not subject to our jurisdiction in the sense of owing allegiance solely to the United States.”

Supreme Court Precedent

Supreme Court precedent is broadly consistent with these principles. The Slaughterhouse Cases (1873) included dicta stating that “the phrase ‘subject to its jurisdiction’ was intended to exclude from its operation children of ministers, consuls, and citizens or subjects of foreign States born within the United States.” Elk v. Wilkins (1884) adopted the allegiance rationale to endorse Senator Howard’s view that tribal Indians were not “subject to the jurisdiction”:

The main object of the opening sentence of the fourteenth amendment was to … put it beyond doubt that all persons … owing no allegiance to any alien power, should be citizens of the United States and of the state in which they reside. … The evident meaning of these last words is … not merely subject in some respect or degree to the jurisdiction of the United States, but completely subject to their political jurisdiction, and owing them direct and immediate allegiance.

Although that language seems to exclude the children of all foreigners, United States v. Wong Kim Ark (1898) corrected course by ruling that legal foreign domiciliaries as well as citizens can pass citizenship to their children. In addition, the court imported wholesale the traditional principles of allegiance:

[The Constitution] must be interpreted in the light of the common law, the principles and history of which were familiarly known to the framers of the constitution.

The fundamental principle of the common law with regard to English nationality was birth within the allegiance. … The principle embraced all persons born within the king’s allegiance, and subject to his protection. Such allegiance and protection were mutual … and were not restricted to natural-born subjects and naturalized subjects, or to those who had taken an oath of allegiance; but were predicable of aliens in amity, so long as they were within the kingdom. Children, born in England, of such aliens, were therefore natural-born subjects. But the children, born within the realm, of foreign ambassadors, or the children of alien enemies, born during and within their hostile occupation of part of the king’s dominions, were not natural-born subjects, because not born within the allegiance, the obedience, or the power, or, as would be said at this day, within the jurisdiction, of the king.

The Law of Allegiance

In an earlier essay for Law & Liberty and, along with co-author Andrew Hyman, in an article for the British Journal of American Legal Studies, I outlined the traditional rules defining when a person was in or out of allegiance. The rules were as follows:

Citizens were in allegiance. A foreign diplomat was in allegiance only to his own nation and not to the host country. Otherwise, a foreigner from a friendly nation (an “alien friend”) was in “local allegiance” to the host country, in addition to the superseding allegiance he owed his sovereign. A foreigner from a hostile nation could be in local allegiance to a host country if the host country granted a special dispensation. One who seriously broke the obligations of allegiance was guilty of treason.

A person without a dispensation and from a hostile nation was an “alien enemy” and not in allegiance to the host country. The host country might prosecute an alien enemy for other crimes, but not for treason. Any person who entered the host country illegally or otherwise rejected allegiance was likewise an alien enemy. Despite the court’s suggestion in Wong Kim Ark, a foreigner need not be in enemy-occupied territory to qualify as an alien enemy.

Observe that nothing in the law of allegiance limited it to those foreigners who were permanent residents. Foreign merchants temporarily in England were routinely considered in local allegiance to the Crown.

Observe further that allegiance was a concept applicable to free people. It did not apply to slaves, who, like other “property,” were always “subject to the jurisdiction” of the prevailing government. In Somerset v. Stewart (1772)—the case in which Lord Mansfield ruled that there was no slavery in England—the former slave James Somerset was able to establish allegiance because under English law, he was free.

Because of the poor drafting of the Fourteenth Amendment, the conflicting statements among those who proposed it, and the lack of useful ratification history, there can be no perfect interpretation of the Citizenship Clause. But there is a best one: A child is born “subject to the jurisdiction” of the United States when his or her parents are in allegiance to the United States. That means they are either US citizens or non-diplomat foreigners from friendly countries—temporarily or permanently, but legally—in the United States.


Categories
Michael Novakhov - SharedNewsLinks℠

The Long Descent to Unilateralism


In the eighteenth and nineteenth centuries, the war over war powers demonstrated a healthy, albeit messy, constitutional system. In both branches, there were battles about when and where the US should use its military and how large that military should be. These questions would shape deliberations between the executive and legislative branches for decades, with men in both branches asserting their preferences and claiming that those who disagreed lacked a proper understanding of the founding principles. Arguably, those who favored a larger military capable of helping the US become a great power tended to win out, even in the early years. Only a few decades after independence, the United States doubled its size with the Louisiana Purchase in 1803, followed by the acquisition of the Floridas in 1819, the annexation of Texas in 1845, the acquisition of the Oregon territory in 1846, and finally the seizure of half of Mexico’s territory by the end of the war in 1848. Those who hoped for a larger military could point to the expansive territory and the two oceans as justification for increasing its size.

The desire to move across North America did not come exclusively from the executive branch. Voices in both branches wanted continental dominance, and they won out over those whose preference was for a small republic. Even in the nineteenth century, when the United States was considered isolationist, these acquisitions and the Mexican-American War showed a clear victory for those with grand ambitions for the United States and its place in the world.

During this time, we do not see presidential unilateralism. If a president wished to initiate a military operation, he would ask Congress for permission to purchase territory or start a war—in Jefferson’s case, he asked for forgiveness after a fait accompli. Congress, in turn, would engage in a meaningful debate about the merits of the action and grant or deny permission. If a war effort subsequently proved questionable or problematic, Congress would again debate the merits of its prosecution and hold the executive accountable. During an operation, spending would increase and the president would have more power. At the conclusion of the operation, the military would draw down, and Congress would return to its coequal status.

This changed with the Spanish-American War. President William McKinley assumed a great deal of power at the expense of Congress, and military spending started ratcheting up. The healthy push and pull between the branches over the eighteenth and nineteenth centuries began eroding, and the erosion continued through WWI. The healthy balance never returned after WWII, and the Cold War and the War on Terror exacerbated an already problematic relationship. Congress lacked the incentives to reassert its coequal status. This is certainly problematic for the separation of powers and the health of the constitutional system, but there is a bigger problem: Congress facilitated unilateral presidential decision-making on military matters, with little oversight from the people, the courts, or Congress itself. This lack of deliberation and accountability has led to the operationalization of bad policy, policy that creates new security threats rather than diminishing them, and we have seen decades without a coherent grand strategy. Despite mountains of evidence proving the need for a more assertive Congress, presidents continue to make the same kinds of mistakes in military engagements, large and small.

World War I

World War I was a flash point in the balance of power between the branches, with Congress standing firm against a president encroaching on legislative powers. In the early days of the war, Congress and President Woodrow Wilson agreed that the US should stay neutral. The sinking of the passenger ship Lusitania in 1915 changed their thinking. At this point, Wilson felt compelled to bring the United States into the fight due to the immorality of the Germans. He followed the steps outlined in the Constitution and solidified by nineteenth-century norms: he produced a war message for Congress explaining that he had exhausted every diplomatic avenue available, emphasized the inhumanity of the Germans, and requested a declaration of war, which Congress then provided. The declaration stated that the Imperial German Government (not the German people) was at war with the American government and that the German government was the aggressor. As a consequence, “the President … is hereby, authorized and directed to employ the entire naval and military forces of the United States and the resources of the Government to carry on war against the Imperial German Government; and to bring the conflict to a successful termination.”


Congress told the president what he was authorized to do, what resources he could use to accomplish it, and what would mark the conflict’s conclusion. Until then, Congress accepted its responsibility to use its power of the purse to ensure a successful outcome. In Wilson’s application to Congress and its declaration, we see coequal branches with different but interlocking responsibilities to each other and to the people of the country.

As the American constitutional system allowed, Wilson enjoyed more discretion and a greater ability to command during wartime. At the end of the war, a healthy rebalancing occurred. Wilson came to the Senate with the League of Nations Treaty and commanded that it be passed without revisions. Quite legitimately, the senators claimed that ratifying the treaty would cost them some of their Article I powers. Wilson attempted to go over their heads to the people, but this did not sway the Senate, which rejected the treaty a second time.

World War II

The American response to WWII echoed the response to WWI: Americans wanted to remain out of another European war. Reflecting their will, Congress passed several neutrality acts. They did not see what President Franklin Roosevelt saw: the battle for a metanarrative. For him, the fascist governments would continue to expand unless liberal democracies fought back. He attempted to work around the will of the people as expressed through these laws.

As I noted in my book on the theory and history of US war powers, “it is shocking to see how far FDR and his lawyers pushed the concept of executive power in what should have been a balanced system, with the political branches working in tandem.” One such example was the “destroyers-for-bases” agreement, in which FDR provided warships to Britain in exchange for basing rights, all by executive agreement. At the time, the eminent legal scholar Edward Corwin said that Attorney General Robert Jackson’s opinion justifying the action threatened all congressional authority and constituted a step toward “totalitarian” rule. Despite such warnings, members of Congress did not check FDR or take steps to either support or undo the destroyers-for-bases agreement.

The following year, on December 7, 1941, the Japanese attacked Hawaii and the Philippines. This was the proverbial breaking point for Congress. Roosevelt asked for a declaration and, like Wilson, explained that they had no choice but to resort to war. In turn, Congress produced a declaration using the same language used in WWI, which authorized and commanded the president to act. It told him what resources he had at his disposal, whom he was fighting (the governments of the Axis powers), and what was expected at the conflict’s conclusion.

Cold War

After WWII, the United States entered a new era with the largest economy, one of the largest militaries, and a victor’s sense of righteousness. Unlike its allies in Europe, its homeland and industries remained intact, making it the sole liberal democracy capable of defending that form of government. It accepted the role of international police force. Perhaps curiously, Congress would increasingly recede into the background as more and more power accumulated in the hands of presidents. It is almost breathtaking to see how members of the Senate reacted to the United Nations treaty and the NATO treaty. In both instances, they did not stand up for their control over the declaration of war and other Article I powers, as senators did after WWI. The emerging imbalance in wartime decision-making came very quickly. In 1950, the UN Security Council (UNSC) passed a resolution calling for a police action in Korea. Truman circumvented Congress, using the UNSC resolution as his legal justification for sending 6.8 million American men and women to fight. Congress did not stop him.

What explains this dramatic change? There are several major factors. First, there was a broad consensus on both sides of the Atlantic that the Europeans needed to demilitarize; it was a feature of NATO. As the first NATO Secretary General, Lord Hastings Lionel Ismay, quipped, the alliance would “keep the Soviet Union out, the Americans in, and the Germans down.” Second, the Americans had created nuclear weapons, and the Soviets were not far behind, successfully testing their own nuclear weapon in 1949. The destructive power of this new weapon led to questions about who would control it; there was a broad consensus that the president alone should wield it to ensure flexibility and nimbleness (there are still no restraints on presidential control). The third reason is closely related to the second. Once the Soviets also had nuclear weapons, Americans worried they would use them. The Soviets wanted communism to win out over liberal democracy, and they were ruthlessly spreading it. As the Soviets increased their weaponry, the Americans responded in kind. Both engaged in an arms race and maintained large militaries.

While all of these factors contributed to the increase in presidential unilateralism, the large standing military caused the most damage to the powers of the legislative branch. In essence, Congress had already given the president tacit permission to exercise his discretion by passing a large military budget year after year. With a standing military, the president did not have to explain his decisions to Congress. He could send the military anywhere in the world, for any reason, without any significant check on his discretion.

One of the worst examples of this power is the war in Vietnam. US involvement dates back to President Dwight Eisenhower, but the escalation of the war occurred under President Lyndon Johnson. During his administration, Johnson attempted to keep the war away from the public, quietly escalating month after month. Members of his administration convinced him that if they just sent more troops, they could overcome the threat from the Viet Cong. Instead, the United States was pulled into a war it could never win and used tactics that caused a great deal of suffering. Congress only authorized force long after US involvement began, and when it did, it gave the president broad authority to escalate the conflict and to use his own discretion in determining what would constitute a successful conclusion. When the legislative branch became more aware of the circumstances and realized the impossibility of achieving victory, it used the power of the purse to draw down troops. In 1973, during the Vietnam War, Congress passed the War Powers Act to try to constrain presidential unilateralism. While there has been some controversy about the act and about how presidents address it, examples of presidents ignoring it outright or failing to abide by it are rare.

The War on Terror

The tragedy of 9/11 facilitated a great deal more presidential unilateralism and congressional abdication. Due to the shocking nature of the attack and the immense pain it caused, Americans and many around the world were psychologically primed to bring the fight to this stateless enemy. Problematically, unlike with the Germans or the Japanese, a stateless enemy offers no return address. Terrorists existed in a variety of countries around the world.

In contrast to the WWI and WWII declarations, the 2001 Authorization for the Use of Military Force (AUMF) is remarkably vague. It says the president is:

authorized to use all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided the terrorist attacks that occurred on September 11, 2001, or harbored such organizations or persons.

The broad grant of power arguably shows congressional support. One could also argue, however, that this is Congress ceding its coequal status and allowing the president carte blanche to make decisions—putting even more power into presidential hands. Bush had complete control over how and when to use military force to carry out an ambiguous mission without a sunset clause or an objective that would indicate when the war concluded. Even the Gulf of Tonkin Resolution—which was hardly a good example of Congress ensuring accountability—had a sunset clause. When compared with earlier declarations of war, it is clear that Congress has turned towards ambiguity and away from clarity.

Armed with such a sweeping authorization, Bush announced in his 2002 State of the Union address that he could and would search for terrorists in any nation. If you were not with the US, you were against it, he claimed, leading many allies to feel bullied. He labeled North Korea, Iran, and Iraq the “Axis of Evil,” giving many the impression that the US would intervene militarily in at least one, if not all three.

By the summer of 2002, the Bush administration started beating the drum for war in Iraq. By the fall, Bush asked Congress for an authorization. Once again, we see that Congress failed to perform its duty. After listing the problems caused by Iraqi leader Saddam Hussein and the attempts by the UNSC to stop him, Congress authorized the president to:

Use the Armed Forces of the United States as he determines to be necessary and appropriate in order to 1) defend the national security of the United States against the continuing threat posed by Iraq; and 2) enforce all relevant United Nations Security Council resolutions regarding Iraq.

The ambiguity and the grant of power are staggering. Congress allowed the president to decide what was necessary and appropriate when it came to the use of force. Furthermore, the president was to defend “national security.” What falls within the scope of national security? With another carte blanche, Bush and members of his administration convinced themselves that the American military could overwhelm Hussein’s forces quickly—as it arguably had in 1991—and that US troops would be “greeted as liberators.” He launched the invasion in March 2003.

The rest of the decade would see a civil war in Iraq, followed by a surge authorized, this time, by an opposition Congress, the Democrats having gained the majority in 2006 partly on their campaign promise to end the unpopular and disastrous war. These decisions created an appetite among the American people for dramatic change. Tapping into this unique moment, Senator Barack Obama entered the race promising hope and change. Domestically, he had many accomplishments that pleased Democrats. In the realm of foreign policy, however, he essentially continued Bush-era policy, albeit on a smaller scale and with more circumspect rhetoric.


In his campaign, Obama labeled the war in Afghanistan the “good war” and Iraq the “dumb war.” Under pressure from the military and the public to show strength, he agreed to a troop surge in Afghanistan, claiming the military would achieve success if it had these additional troops. Success, however, was ambiguous: did it involve merely degrading al Qaeda, or degrading the Taliban as well? He suggested both in the 2009 speech at West Point announcing the surge. Congress did not scrutinize any of these issues. It simply provided him with the funding, without reviewing whether 30,000 more troops would produce the desired result (let alone what the desired result was). The war would quietly continue until 2021, when President Biden decided on a complete withdrawal: the US had accomplished what it came to do, degrade al Qaeda, and it was time to leave. American forces would be out by the end of August 2021. Due to the weakness and corruption of the Afghan government, the Taliban swept across the country and took control of Kabul on August 15.

The Arab Spring

In the spring of 2011, after decades of corrupt and brutal leaders, citizens of the Middle East and North Africa rose up and demanded change. There were crackdowns in many countries, but the two most salient for US foreign policy were in Libya and Syria. In Libya, Muammar Qaddafi openly expressed his violent intentions, leading the Arab League and the African Union to abandon him and implore NATO to take action. Within a month of the violence, Obama ordered the American military to carry out air strikes in Libya, as he explained in a letter addressed to Congress. Without even suggesting that he had to obtain permission from Congress, he described the “regional and international threat” posed by Qaddafi’s actions: the US had to avoid “wider instability in the Middle East,” and the civil war in Libya was a threat to “the national security interests of the United States.” This level of unilateralism would have shocked the Founders, and presidents well into the twentieth century.

Simultaneously, the Syrian leader, Bashar al-Assad, cracked down on his people. Unlike Qaddafi, however, he had close allies in Iran and Russia. NATO decided not to act, allowing a civil war to drag on for years and a refugee crisis to destabilize countries near and far. Beyond these horrible consequences, the anarchy allowed for the rise of the Islamic State. Starting in 2013, this group inflicted brutal violence. In the summer of 2014, it took the dam outside Mosul after the Iraqi military (which the US had trained) fled. Without asking permission from Congress, Obama immediately deployed US military forces to Iraq to address the very real threat.

Once again, Congress shirked its responsibility, and the president acted unilaterally to address an issue. In an attempt to find legal justification for his action, Obama reached back to the 2001 AUMF, claiming that his actions against ISIS were legally sanctioned by that document and that he needed no new legislation. Congress made some attempts to revise the 2001 AUMF or add a sunset clause, but they failed. While these operations succeeded in driving ISIS off the land it claimed, the region remains unstable.

The Contemporary Landscape

Unlike his two predecessors, President Donald Trump did not start any new military operations in his first term that required a congressional response. He continued the war in Afghanistan on a low simmer, but he generally left military affairs to the generals, and they did not find any “monsters to destroy.” That doesn’t necessarily mark a turn away from presidential unilateralism, however.

On January 3, 2020, without even alerting any members of Congress, Trump ordered a lethal drone strike against the commander of the Quds Force, Qassem Soleimani. This decision sent shock waves around the world and could easily have caused a war between the US and Iran. Yet in a precedent-setting move, unlike previous presidents of both parties, he asked executive branch lawyers to produce a justification only well after the fact. The strike clearly violated international and domestic law, to say nothing of norms. Yet Congress did not rise to the occasion to restrain presidential unilateralism.

Today, in the second Trump term, the war in Ukraine persists and shows little sign of ending. There is concern that the Chinese may invade Taiwan, especially if Russia succeeds in Ukraine. War still rages between Hamas and Israel, and there is a growing humanitarian crisis in Gaza. The Iranian nuclear program likely continues in some capacity despite the US strike in June, undertaken without congressional approval or input. Relations between the United States and its closest allies remain tense. In other eras, there were serious deliberations between and within the branches about the direction of US foreign policy. There were still mistakes and lapses in judgment, to be sure, but not everything was determined by one individual. Over the last 20 years, Congress has made only limited and unsuccessful attempts to create laws that would restrict presidential unilateralism or restrain the use of military force. The Islamic State and Obama’s use of the 2001 and 2002 AUMFs to justify his actions caused many to worry that Congress had sanctioned a forever war. In this very complicated world, the decisions about when and where the US military will engage in operations large and small rest in the hands of one man.

Looking at the Constitution and the debates of the Constitutional Convention, it is clear that the power of the legislative branch concerned the Founders. By comparison, the executive seemed weak and needed ways to defend itself against legislative encroachment. Over time, however, the executive branch has drawn power to itself at the expense of the legislature. We do not see “ambition … made to counteract ambition.” Instead, we see a constantly encroaching executive and a supine Congress. As a consequence, the president essentially has the authority to initiate any military operation, anywhere in the world, for any reason. The lack of oversight leads to questionable decisions, from the war in Iraq to the operation in Libya to the lethal drone strike against Qassem Soleimani. Without scrutiny and accountability, presidents will follow their own impulses and interests when making decisions that have consequences for the United States and the world. Without a healthy constitutional system in which another branch checks the worst impulses of the executive, the US will continue to see questionable decision-making from an unchecked executive branch.


Categories
Michael Novakhov - SharedNewsLinks℠

Freedom for Worship


What is the United States’ greatest achievement? Winning World War II? Landing a man on the moon? Hollywood’s global reach? For Dartmouth College historian and Episcopal priest Randall Balmer, all these accomplishments pale in comparison to a less celebrated but more enduring breakthrough: the separation of church and state. Few ideas, he argues, have done more to preserve both religious vitality and civic peace. In America’s Best Idea, Balmer offers a spirited defense of this foundational principle, contending that the First Amendment’s twin guarantees—no establishment of religion and free exercise thereof—have made the United States a uniquely fertile ground for religious pluralism and, in turn, a more virtuous and democratic citizenry. 

He is alarmed, however, by what he views as a growing desire in some quarters to return to an older model in which church and state walked much closer together. His book is at once a historical account of how this achievement was won and a warning (at times a touch hyperbolic) about the threats now arrayed against it.

For Balmer, this arrangement is not only good for the country, but good for the faith itself. It protects religion from state corruption and safeguards government from sectarian dominance. From Roger Williams’s exile in Rhode Island to William Penn’s “holy experiment” in Pennsylvania, Balmer traces how a long line of dissenters, reformers, and visionaries helped craft a constitutional order rooted in freedom of conscience rather than religious coercion.

And in our own unsettled moment, when some Americans fear the rise of Christian nationalism, others lament Christianity’s retreat from the public square, and religious liberty lawsuits surround everything from Ten Commandments displays to Satanic Temple nativity scenes, Balmer contends that the American model remains both radical and essential. In a nation where Hindu, Muslim, Jewish, Catholic, Mormon, and secular candidates now routinely seek office, and where the religiously unaffiliated continue to grow as a cultural force, his argument feels all the more urgent. 

A mixture of history, polemic, and pastoral plea, America’s Best Idea is Balmer’s attempt to remind Americans why the First Amendment was worth creating and why it is still worth defending. His sense of urgency stems from what he sees as a growing and deeply troubling threat: the rise of Christian nationalism. For Balmer, recent efforts to conflate Christian identity with American citizenship (whether through Ten Commandments mandates in schools, public funding for religious education, or political campaigns wrapped in religious rhetoric) represent a betrayal of the founders’ vision and a danger to both church and republic. But given the flood of recent books attacking Christian nationalism, Balmer’s critique adds little that hasn’t already been said. His concerns and arguments closely mirror those found in works like Katherine Stewart’s The Power Worshippers and Andrew Seidel’s The Founding Myth, both of which portray Christian nationalism as little more than a cynical power grab built on a willfully distorted reading of America’s founding. Like them, Balmer treats Christian nationalism as a manifestly bad-faith movement—historically dubious, theologically misguided, and politically corrosive. But while those critiques may carry some merit, Balmer’s tone often lacks nuance. He shows little interest in understanding the appeal or growth of Christian nationalism and is often more interested in denunciation than diagnosis.


Nevertheless, what distinguishes America’s Best Idea from many other recent critiques of Christian nationalism is that Balmer is not merely issuing cultural warnings—he is casting a historically grounded, theologically informed vision for the American experiment in religious liberty. As both a historian and an Episcopal priest, Balmer defends the separation of church and state not as a secularist imposition, but as a theological and civic gift that has allowed religion in America not only to survive, but to flourish. He situates the First Amendment as a radical break from the European model of established churches, tracing its lineage to figures like Roger Williams and the Baptists, whose commitment to religious voluntarism was rooted in the gospel’s refusal to coerce. Balmer sees this system not as a safeguard against religion, but as a safeguard for religion, protecting it from factional capture and state corruption. 

His account celebrates this pluralistic religious economy as central to both the vibrancy of American faith and the health of its democracy. 

Along the way, Balmer reminds readers that evangelicals were once at the forefront of social reform movements, from abolition to temperance to women’s education. In that spirit, he calls today’s believers to recover that legacy of public witness, not by grasping for political power, but by preaching from the margins. Rather than lamenting the decline of cultural privilege, America’s Best Idea urges both religious and secular Americans to preserve the delicate architecture of the First Amendment, a system that, in Balmer’s view, has conserved both faith and freedom better than any official religion ever could.

By the end of America’s Best Idea, readers will likely come away with a renewed appreciation for the remarkable achievements of the First Amendment. Balmer’s historical sweep makes clear just how dangerous (and often deadly) state-established religion has been. Beginning with the sectarian conflicts that plagued Europe for centuries, Balmer shows that religious establishment has more often led to coercion and violence than to piety or peace. Against this grim backdrop, the American model of religious disestablishment appears not just prudent but inspired. Balmer underscores that it is precisely under this framework of constitutional neutrality that once outlawed or marginalized faiths have flourished. Baptists were once jailed in Virginia, Mormons driven west by mob violence, Catholics viewed with suspicion, and Jews barred from elite institutions. But all of these, along with newer movements like Pentecostalism, Islam, and Hinduism, have found space to grow, organize, and even shape public life in America. While the experience of religious liberty in the United States has certainly not been a straight line, when set against the alternatives found in both past and present, Balmer’s case for the First Amendment’s enduring genius is inspiring. 

While one can appreciate Balmer’s passion for the First Amendment, aspects of his framing are historically problematic. He rightly celebrates early champions of religious liberty such as Roger Williams and William Penn, yet he often portrays the American experiment in religious freedom as if it arose chiefly in opposition to traditional Christianity, rather than emerging from within it. The very Baptists he praises (figures like Isaac Backus and John Leland) were not theological progressives or pure Lockean liberals; their arguments for liberty of conscience were rooted explicitly in biblical exegesis and evangelical convictions. Furthermore, Balmer’s repeated appeals to a “wall of separation” between church and state rely on a modern and legally contested interpretation of the First Amendment, one shaped more by mid-twentieth-century jurisprudence than by the text, context, or original intent of the founding generation. Jefferson’s metaphor, which was lifted not from a legal text or constitutional debate, but from a private letter to a Baptist association, has come to bear far more constitutional weight than the framers ever intended or could have imagined.

In emphasizing rigid separation, Balmer overlooks the fact that early American states routinely supported religion without establishing it, tied public morality to religious belief, and defended the right of religious citizens to contribute meaningfully to public life. Massachusetts, for instance, maintained religious tests for public office well into the 1830s. Connecticut’s 1818 constitution explicitly affirmed “the duty of all men to worship the Supreme Being,” and several states, including Maryland and North Carolina, required public officials to profess belief in God or in divine judgment. Far from being anomalies, such measures reflected a broad consensus that religion, particularly Christianity, was essential to civic virtue and republican self-government, even if no single denomination should be elevated above others. Much of what Balmer presents as a timeless constitutional principle is, in fact, a projection of modern jurisprudence and liberal Protestant values onto a founding generation that held a far more complex and variegated view of church and state. 


Balmer’s narrative tends to flatten this complexity into a simplistic binary, either establishment or total separation, when the historical record reveals a spectrum of arrangements across the states, many of which retained close church-state ties well after 1791. By reading back a post-Jeffersonian, mid-twentieth-century model of “separation” as the founders’ original intent, Balmer risks turning a rich and pluralistic founding landscape into a legal abstraction better suited to modern polemic than historical accuracy.

Then there are the words of warning against Christian nationalism. Like many books in this genre, America’s Best Idea offers warnings that feel hyperbolic and out of proportion to the actual threat. Balmer largely overlooks Christian nationalism’s limited real-world influence, its lack of theological or organizational coherence, and its marginal growth beyond chronically online circles. As Mark David Hall and Miles Smith IV have persuasively argued, the notion that the United States is on the brink of becoming a theocratic nation-state owes more to Twitter threads and fringe podcasts than to any measurable political reality. In fact, and somewhat ironically given Balmer’s earlier work on the rise of the Religious Right, the more significant transformation in recent years has been the emergence of a non-religious right. In contrast to Christian nationalism, this is a tangible and measurable shift: according to the Public Religion Research Institute, the share of religiously unaffiliated Republicans has tripled, from about 4 percent in 2006 to roughly 12 percent in 2022, and Gallup reports that nearly one-quarter of nonreligious Americans now lean Republican. Ironically, this growing secular bloc on the right (which is probably far more aligned with Balmer’s pluralist ideals) gets far less attention than the overhyped specter of Christian nationalism, despite representing a deeper and more lasting shift in American life.

Which brings me to a perplexing tension in Balmer’s account. He lauds evangelical involvement in nineteenth-century reform movements (particularly abolition, temperance, and women’s education) as exemplars of Christian public witness. These efforts, in his view, demonstrated faith speaking truth to power and working for the common good. Balmer also praises historical figures like William Jennings Bryan for his economic populism and Martin Luther King Jr. for his prophetic civil rights leadership, holding up such examples of progressive, justice-oriented engagement as faithful expressions of Christianity in the public square. More broadly, he voices admiration for faith-based activism that advances values like social justice, equality, and inclusion.

Conversely, Balmer is consistently critical of recent evangelical political engagement, especially when it aligns with the Republican Party or centers on issues such as abortion, gay rights, or religious symbolism in public life. He often portrays such activism not as prophetic witness but as a bid to reclaim lost cultural privilege or enforce sectarian morality through legislation. One is left to wonder why Christian moral witness is celebrated in one era but viewed as suspect in another. Of course, Balmer is entitled to his political and theological commitments, but the criteria by which he distinguishes faithful from inappropriate activism often seem ad hoc and selectively applied. The result is a framework in which Christian political engagement is endorsed when it advances progressive goals but dismissed when it reflects more traditional convictions.

In short, Balmer seems comfortable rendering unto Caesar when Caesar shares his views, yet eager to proclaim “Jesus is Lord” when Caesar does not.

Despite its limitations, America’s Best Idea stands as a compelling progressive tribute to the Madisonian tradition and its vision of religious liberty. Balmer’s greatest strength lies in his passionate and historically informed defense of the First Amendment as a civic and theological breakthrough, one that has allowed an astonishing diversity of religious communities not merely to survive, but to flourish. In an era when “religious pluralism” can often sound like a platitude, Balmer roots the phrase in real historical struggle, making clear just how hard-won (and how uniquely American) this achievement truly is. His narrative reminds readers that the separation of church and state was not designed to diminish faith, but to preserve its integrity and safeguard public life from religious domination. While reasonable people may disagree over how this principle has been interpreted or applied over time, Balmer makes a compelling case that our church-state separation truly is one of America’s best ideas.


Categories
Michael Novakhov - SharedNewsLinks℠

Prospects for Congress


Congress is down, but how close is it to being out? What is the ultimate source of its vitality, and how might it return to that wellspring in our deeply cynical political moment?

The three excellent responses to my initial essay, “Choosing Congressional Irrelevance,” helpfully probe these questions and bring to light some useful disagreements. Yuval Levin, Joseph Postell, Shep Melnick, and I look at Congress from different enough angles that we each perceive different possibilities for further marginalization or, perhaps, revival. In this response to their perspectives, I start by probing what I take to be the fundamental question: what animates Congress? I then consider just how gloomy we ought to be about Congress’s prospects and briefly take up a few solutions.

Do We Believe in Representation?

As is so frustratingly often the case, Yuval Levin lays out many of my central ideas with greater clarity and force than I mustered. Although Congress is our lawmaking body, Levin insists we remember that “Congress’s most fundamental purpose is not to advance major legislation.” Rather, “It is to facilitate bargaining across factional and party lines.” To the extent we think of Congress as a tool for efficient action, we will naturally come to think that “members are the problem and leaders are the solution.” If we want a congressional renaissance, we will need members to take their own role in producing a legitimate political order more seriously.

Postell takes nearly the opposite tack. He says that giving members more opportunities for influence is likely to be a recipe for institutional stagnation. In his reading of the historical record, “decentralized structures and procedures such as open amendment processes, leadership shorn of committee assignment and agenda control powers, and powerful committees, have tended to fragment Congress and render its collective action more difficult.” Were we to move away from the centralized, omnibus-heavy procedures behind most of the contemporary Congress’s enactments, our legislature would quickly find itself even more stymied by internal dissent and even more irrelevant than it is today.

Postell is surely correct to say that, at present, congressional policymaking depends on this path—but, with Levin, I take very different lessons from the historical record. Members have sought efficiency, but their energies have dissipated. As Levin puts it, the ironic result of prioritizing programmatic, ideological coordination has been to devalue their own representative function.

What does that mean, exactly?

What makes representation potent is the sense that there is something real in each congressional district that needs to be made present in national deliberations. This is something different from the political beliefs held by the majority of a district’s voters. I’m happy to go with Postell in identifying the relevant distinction as being between (national partisan) ideology and (locally rooted) interest. I share Edmund Burke’s belief in the solidity of interests, separate from opinion, as a sturdy basis for politics. We want to grapple with realities, not fantasies, even when they are somewhat grubby. Henry “Scoop” Jackson of Washington was, for decades, known as the Senator from Boeing. This was meant as an insult, but it seems healthy for a corporation at the heart of America’s military-industrial complex, which employed many tens of thousands of Washingtonians, to have had its say. (Jackson, in turn, forcefully brought the public’s concerns into the corporation.)

Narrow-minded “parochialism” is generally contrasted with high-minded universalism, but I hold with Willmoore Kendall in believing that the two values need to be in constant conversation with each other, and that Congress is the appropriate venue for the rooted interests to contend with each other and temper the grand schemes that often emanate from the White House.


For that vision to make sense, we must believe in the connection between the organic community and its representative, who has a distinctive way of knowing about his or her community and its needs. There are three components of that: 1) believing that the organic community itself is real and distinctive; 2) believing that the elected representative has a special relationship to it; and 3) believing that, in carrying out the activity of representation, the representative will hold faith with the community, rather than betraying its interests. If all those hold, then, as Levin says, Congress takes on the emergent “capacity [of facilitating] broadly acceptable negotiated legislative bargains,” which is of immense value to our constitutional republic. (This is what I argued makes Congress “indispensable.”)

Each of these three necessary beliefs is strained today. Our belief in the integrity of geographic communities has waned as people forge more of their connections in life through the Internet, and more people work for firms far away from their homes. We are justifiably more skeptical of the idea that our representatives orient themselves toward their districts, given how much more nationalized our politics has become. If the “D” or “R” appearing next to a candidate’s name vastly outweighs everything else about them, how special of a relationship can that person really have with the district? And, finally, as Melnick points out, we live in a time when we are generally dubious of fidelity in all forms. This certainly holds regarding the public’s views of their legislators. Recent research indicates that Fenno’s paradox, in which citizens hold their own member of Congress in high esteem even as they mistrust the institution, has lost steam in recent years. Many voters clearly feel that their members of Congress care more about their place in the news cycle than about them. With representativeness itself under strain, Congress’s institutional self-confidence sags.

Melnick calls our attention to an even deeper concern: Counterintuitively, the juggernaut of democracy itself may be working against representation in a development that spans centuries rather than decades. Citing Tocqueville’s apprehensions of the individualistic, leveling tendencies of the democratic spirit, he notes that the purest little-d democrats may be naturally “allergic to forms and formalities. They want their favorite policies, and they want them now.” Citizens who think in these terms are likely to be skeptical of the complicated give-and-take of congressional bargaining and attracted to the presidency’s promises of instant gratification, even if they are dimly aware that the president is offering sugar highs rather than real sustenance. I, too, worry that the democratic logic triumphant in our time promotes distrust of intermediaries of all kinds. Why should representatives have any greater voice than you or I? This impulse flares up constantly in the public’s relationship with Congress.

How Bleak Is It, Really?

Then again, that point surely rang true at much earlier moments in our nation’s history, and Congress has time and again shown its resiliency. We have to be careful about taking any sort of historical logic to its endpoint, or presuming that we already live at that endpoint.

Attending to our own specific moment, we should consider: is anything so bad about Congress in the present moment? Postell reminds us of the ongoing importance of the “Secret” (low-salience) Congress, which can achieve a good deal with little fanfare. And he (with me) notes that members of Congress did play a large role in shaping the reconciliation law that is the centerpiece of Trump’s busy 2025. He also asks whether overall productivity might be holding up just fine, notwithstanding consistently negative media coverage of Congress. Maybe legislators have changed how they work without losing influence.

I hope these suggestions (which, to be clear, Postell offers as helpful provocations) turn out to be right, and that Congress is poised to unleash a gusher of productive legislation. But I doubt it. I tried to make clear in my original piece that Congress still does a great deal, and that it would be a mistake to simply write it off. But my sense is that the institution is genuinely on a downward trajectory. Based on previous research, I can say with some confidence that the 118th Congress (2023–24) was historically unproductive. It is too early to judge the 119th, but I’m willing to bet on low output (coupled with continued historically high reliance on omnibuses). We have lost a great deal, without reaching a nadir. We can lose much more.

Supposing that is correct, how difficult would it be to turn things around? In a different vein of his response, Postell brings out an inevitability argument: “Reducing partisan loyalty and incentivizing cross-cutting policies may simply be out of touch with the mood of the people, and perhaps no amount of institutional reform within Congress can change that.” Our Congress is what it is because we are what we are, and no amount of reformist messing around can change that. Melnick also strikes a pessimistic note, saying Americans’ dislike of open conflict will make it difficult for Congress to ever regain people’s trust.

I (try to) maintain more hope for Congress because I feel that the American people really are more complex (and interesting) than our current Manichaean style of politics, which repulses enough people to make burnout and reinvention a live possibility. Especially because of the rise of artificial intelligence, we are heading into a time of massive social upheaval, and we need a functioning politics to help us find our collective way through. Trust generated by shared experience of place may be harder to come by, but it is still a real force, which makes geography-rooted representative government the best solution. That’s especially clear given how obvious it’s become that the public fora of social media can never function as an acceptable “universal town square.” The deficiencies of mass plebiscitary democracy, unmediated (or poorly mediated) by a powerful representative legislature, are clearer every day.

How to Make It Better

Of course, articulating the good that a more self-assured Congress could bring is no recipe for actually delivering one. So let me conclude with a brief run-through of some of the suggestions laid out by my interlocutors. Postell recommends:

  • Expanding the House such that, instead of representing some 750,000 constituents, each member would represent only 250,000, thereby strengthening the connection between citizens and their representatives. The principle is good, but I worry that a House of 1,300 members would be too large to support any genuine deliberation. Madison warned in Federalist #55 that an assembly’s number must be low enough “to avoid the confusion and intemperance of a multitude.” That concern makes me more receptive to the recommendation to expand to 585 members made by the American Academy of Arts and Sciences report on the subject, which Yuval Levin coauthored.
  • Cancelling direct congressional primaries. Yes, but how could this possibly gain political momentum? Likewise with the cause of devolving policymaking powers back to state and local governments.
  • Reforming campaign finance so that a district’s constituents are privileged. I’ve been persuaded by Michael Malbin’s work on this subject, though devising a workable scheme that doesn’t run afoul of the First Amendment is difficult.
  • Limiting the presidential veto and reviving the legislative veto. I’m sold on both, but trying to practice constitutional politics outside of our current partisan divide seems very difficult, and so all Article V amendments seem like longshots. We should build bipartisan support for constitutionally valid mechanisms that approximate the legislative veto.

Rather than seek a reformist groundswell, my inclination (shared by Levin) is to urge members of Congress to reorient their chambers toward committee work, especially in the House. That this sounds dull as bricks to outsiders is an advantage; it is a program that can be pursued underneath the din of national politics. Members who care about policy and plan to spend years in Congress need to see how institutional reconfiguration can serve their own ambitions. Hard work needs to be rewarded with agenda control. Back benchers have nothing to lose but their leashes.