Skeptic.com feed

Popular Science. Nonpartisan. Reality-Based.

Is Christianity a “Load-Bearing Wall” for American Democracy?

Tue, 05/27/2025 - 6:48am

In his new book Cross Purposes: Christianity’s Broken Bargain with Democracy, Jonathan Rauch argues that Christianity is a “load-bearing wall” in American democracy. As Christianity has been increasingly co-opted by politics, Rauch believes it is straying from its core tenets and failing to serve its traditional role as a spiritual and civic ballast. He blames this shift for the decline of religiosity in the United States, as well as collapsing faith in democratic institutions.

The Rise of the Nones and Its Effects 

Rauch writes that his book is “penitence for the dumbest thing I ever wrote,” a 2003 essay for The Atlantic about the rise of what he called “apatheism”—a “disinclination to care all that much about one’s own religion, and an even stronger disinclination to care about other people’s.” The essay argued that the growing number of people who aren’t especially concerned about religion is a “major civilizational advance” and a “product of a determined cultural effort to discipline the religious mindset.” Rauch cites John Locke’s case for religious tolerance and pluralism to argue that the emergence of apatheism represented the hard-fought taming of “divisive and volatile” religious forces. 

In Cross Purposes, Rauch explains why he now repudiates this view. First, he argues that the decline of religion has led Americans to import “religious zeal into secular politics.” Second, he believes Christianity is losing its traditional role in shaping culture—the faith now reflects American society and culture instead of the other way around—and argues that this has been corrosive to the civic health of the country. Third, Rauch claims that “there is no secular substitute for the meaning and moral grounding which religious life provides.” 

All of these arguments rely on shaky assumptions about modern religiosity and the influence of secularism in America. In 2003, Rauch rightly questioned the idea that “everyone brims with religious passions.” While he acknowledged that human beings appear “wired” to believe, he also recognized that secularization, in the aggregate, is a real phenomenon. He now rejects this observation in favor of the increasingly fashionable view that religiosity never really declines but can only be repurposed: “We see this in the soaring demand for pseudo-religions in American life,” he writes. These pseudo-religions, he observes, include everything from “wellness culture” to wokeness and political extremism. 

But Americans have held quasi-religious, supernatural beliefs throughout history—including during periods of much greater religiosity than today. The popularity of practices like astrology and tarot reading isn’t a recent development, and pagan religions like Wicca originated and spread in the God-fearing middle of the twentieth century. Belief in UFOs and extraterrestrial encounters surged in the 1940s and 1950s, an era when over 90 percent of Americans were Christians. In the early 1990s, 90 percent of Americans still identified as Christians compared to 63 percent today. But a 1991 Gallup poll of Americans found a wide array of paranormal and other supernatural beliefs—nearly half believed in extrasensory perception (ESP), 36 percent believed in telepathy, 29 percent believed houses could be haunted, 26 percent believed in clairvoyance, and 25 percent believed in astrology. Religious belief wasn’t much of a bulwark against these other beliefs. Even in cases when those beliefs contradicted traditional Christian teachings—such as reincarnation—significant proportions of Christians believed them. 

The secularism of Western liberal democracies is a historical aberration. For most of history, the separation of church and state didn’t exist.

Rauch argues that “it has become pretty evident that secularism has not been able to fill what has been called the ‘God-shaped hole’ in American life.” He continues: “In today’s America, we see evidence everywhere of the inadequacy of secular liberalism to provide meaning, exaltation, spirituality, transcendence, and morality anchored in more than the self.” But the evidence Rauch is referring to—aside from the latest spiritual fads, many of which have been adopted by religious and irreligious Americans alike—is thin. He cites a 2023 survey conducted by the Wall Street Journal and NORC, which found that the percentage of Americans who say religion is “very important” to them fell from 62 percent in 1998 to 39 percent in 2023. The survey also found that the proportion of Americans who regard patriotism, community involvement, and having children as “very important” declined over the same period. Meanwhile, a growing proportion of Americans said money is very important. 

While it’s possible that secularization has played a role in making Americans greedier and less community- or family-oriented, it isn’t enough to merely assert that rising secularism is to blame for the decline of these values in the United States. Even if it’s true that secularism has some social costs, those costs would need to be weighed against its benefits. “As a homosexual American,” Rauch writes, “I owe my marriage—and the astonishing liberation I have enjoyed during my lifetime—to the advance of enlightened secular values.” Rauch argues that the Founders believed the governance system they set up would only work if it remained on a firm foundation of Christian morality. He cites John Adams, who declared that the Constitution was “made only for a moral and religious people.” But he could also have cited Thomas Jefferson’s trenchant criticisms of Christianity or Thomas Paine’s argument in The Age of Reason that many Christian doctrines are, in fact, deeply immoral, superstitious, and corrosive to human freedom.

While Rauch doesn’t appear to regard his own secularism as an impediment to patriotism or any other civic virtue—and thus he doesn’t need religion—he appears to believe that other Americans do. He invokes an argument made by Friedrich Nietzsche nearly 150 years ago: “When religious ideas are destroyed one is troubled by an uncomfortable emptiness and deprivation. Christianity, it seems to me, is still needed by most people in old Europe even today.” A central theme of Cross Purposes is a paternalistic view that, while it’s possible for some people to be good citizens and live lives of meaning without religion, it’s not possible for many others. 

Without religion, Rauch argues, most people will be adrift with no grounding for their moral values. He claims that “moral propositions … must have some external validity.” He observes that “scientific or naturalistic” foundations for morality fail because they “anchor morality in ourselves and our societies, not in something transcendent.” He asks: “If there is no transcendent moral order anchored in a purposive universe—something like God-given laws—why must we not be nihilistic and despairing sociopaths?” However, he qualifies his argument… 

Now, speaking as an atheist and a scientific materialist, I do not believe religions actually answer that question. Instead, they rely on a cheat, which they call God. They assume their conclusion by simply asserting the existence of a transcendent spiritual and moral order. They invent God and then claim he solves the problem. … The Christians who believe the Bible is the last word on morality—and, not coincidentally, that they are the last word on interpreting the Bible—are every bit as relativistic as I am; it’s just that I admit it and they don’t. 

After presenting this powerful rejoinder to the religious pretension to have a monopoly on objective morality, Rauch writes: 

That is neither here nor there. I am not important. What is important is that the religious framing of morality and mortality is plausible and acceptable to humans in a way nihilism and relativism are not and never will be. 

But this is a false dichotomy—the choice isn’t between religious morality and nihilistic relativism. The choice is between religious morality and an attempt to develop an ethical system that is far more epistemically honest and humble. Instead of relying on the God “cheat”—a philosophical sleight of hand Rauch feels he is equipped to identify, but one he evidently assumes most people are incapable of understanding—we can attempt to develop and ground ethical arguments in ways that don’t require the invention of a supernatural, supervising entity. As he writes: 

For most people, the idea that the universe is intended and ordered by God demonstrably provides transcendent meaning and moral grounding which scientific materialism demonstrably does not. … God may be (as I believe) a philosophical shortcut, but he gets you there—and I don’t. 

But Rauch just admitted that religion only “gets you there” in an illusory way. It may be comforting for believers to convince themselves that there’s a divine superintendent who ensures that the universe is morally intelligible, but the religious are no closer to apprehending fundamental moral truth than nonbelievers. 

Rauch also argues that “purely secular thinking about death will never satisfy the large majority of people.” While he personally doesn’t struggle with the idea of mortality, he once again assumes that a critical mass of people “rely on some version of faith to rescue them from the bleak nihilism of mortality.” While Rauch presents this view in a self-deprecating way—“I am weird!” he informs the reader—it’s difficult to shake the impression that he believes himself capable of accepting hard realities that others aren’t equipped to handle. 

While Rauch believes his scientific materialism and secular morality are some kind of exotic oddity, these views were at the heart of the Enlightenment and have informed centuries of Western philosophy. A fundamental aspect of Enlightenment thought was that religious authorities don’t have a monopoly on truth or morality. Secularists like David Hume resisted religious dogma and undermined the notion that morality must be grounded in God. Secularism was rare and dangerous hundreds of years ago, but it has gone mainstream. Pew reports that the share of Americans identifying as Christian fell from around 90 percent in 1990 to 63 percent in 2024. Gallup has found that other measures of religiosity, such as church attendance and membership, have declined as well. Pew has also recorded substantial and sustained declines in religious belief across Europe.

The idea that there’s a latent level of religiosity in human societies that remains static over the centuries is dubious.

Rauch was right in 2003—plenty of people are capable of leading ethical and meaningful lives without religious faith. There are more of these people today than there used to be, and this doesn’t mean they have all been taken in by some God-shaped superstition or cult. The idea that there’s a latent level of religiosity in human societies that remains static over the centuries is dubious—in pre-Enlightenment Europe, religious belief was ubiquitous and mandated by law. Heretics were publicly executed. So were witches. Scientific discoveries were suppressed and punished if they were seen as conflicting with religious teachings. Regular people had extremely limited access to information that wasn’t audited by religious authorities. Science often blended seamlessly with pseudoscience (even Newton was fascinated by alchemy and other aspects of the occult, along with his commitment to biblical interpretation). Incessant religious conflict culminated in the Thirty Years’ War, which caused millions of deaths—with some estimates ranging as high as around a third of central Europe’s population. 

The last execution for blasphemy in Britain was the hanging of Thomas Aikenhead in Edinburgh in 1697; his crimes included criticizing scripture and questioning the divinity of Jesus Christ. Aikenhead was a student at the University of Edinburgh, which Hume would attend just a couple of decades later. It wouldn’t be long before several of the most prominent philosophers in Europe were publicly making arguments that would once have sent them to the gallows. Drawing upon the work of these philosophers, the United States was founded on the principle of religious liberty less than a century after Aikenhead’s execution. The world has secularized, and this is exactly what Rauch once believed it to be: a major civilizational advance.

When the Load-Bearing Wall Buckles 

Rauch believes the decline of religion is to blame for many of the most destructive political pathologies in the United States today. He argues that the “collapse of the ecumenical churches has displaced religious zeal into politics, which is not designed to provide purpose in life and breaks when it tries.” According to Rauch, when the “load-bearing wall” of Christianity “buckles, all the institutions around it come under stress, and some of them buckle, too.” Much of Cross Purposes is an explanation for why this buckling has occurred. 

Rauch fails to demonstrate why Christianity is a necessary foundation for morality.

Rauch organizes the book around what he describes as Thin, Sharp, and Thick Christianity. Thin Christianity describes a process whereby the faith is “no longer able, or no longer willing, to perform the functions on which our constitutional order depends.” One of these functions is the export of Christian values to the rest of society. “My claim,” he writes, “is not just that secular liberalism and religious faith are instrumentally interdependent but that each is intrinsically reliant on the other to build a morally and epistemically complete and coherent account of the world.” This is the claim we discussed in the first section—Rauch fails to demonstrate why Christianity is a necessary foundation for morality. He explains why people may find it easier to ground their values in God and why religion makes mortality easier to handle, but these are hardly arguments for the necessity of faith in the public square.

Rauch is particularly concerned about what he describes as Sharp Christianity—a version of the faith that is “not only secularized but politicized, partisan, confrontational, and divisive.” Instead of focusing on the teachings of Jesus, Rauch writes, these Christians “bring to church the divisive cultural issues they hear about on Fox News” and believe “Christianity is under attack and we have to do something about it.” Sharp Christianity is best captured by the overwhelming evangelical support for Donald Trump, who received roughly 80 percent of the evangelical vote in 2020 and 2024. An April Pew survey found that Trump’s support among evangelicals remains strong after his first 100 days in office—while 40 percent of Americans approve of his performance, this proportion jumps to 72 percent among evangelicals. 

Rauch challenges the view held by many Sharp Christians that their faith is constantly under assault from Godless liberals. He critiques what he regards as an increasingly powerful “post-liberal” movement on the right, which argues that the liberal emphasis on individualism and autonomy has led to the atomization of society and the rejection of faith, family, and patriotism. Rauch acknowledges that liberalism on its own doesn’t inspire the same level of commitment as religion, and he rightly notes that this is by design: “the whole point of liberalism was to put an end to centuries of bloody coercion and war arising from religious and factional attempts to impose one group’s moral vision on everyone else.” 

While Rauch does an excellent job critiquing the post-liberal right, he grants one of its central claims: that Christianity is the necessary glue that holds liberal society together. As he notes: “liberals understood they could not create and sustain virtue by themselves, and they warned against trying.” It’s true that liberalism is capacious enough to encompass many competing values and ideologies, but there are certain values that are in the marrow of liberal societies—such as individual rights, pluralism, and democracy. Mutual respect for these values can cultivate virtues like openness, tolerance, and forbearance. 

Rauch emphasizes the achievements of liberalism: “constitutional democracy, mass prosperity, the scientific revolution, outlawing slavery, empowering women, and—not least from my point of view—tolerating atheistic homosexual Jews instead of burning us alive.” He should have added that many of these advancements were made in the teeth of furious religious opposition, and that omission brings us to a central problem with Cross Purposes: Rauch would argue that all the Christian bloodletting, intolerance, and authoritarianism throughout history is based on a series of misconceptions about what Christianity really is. His central demand is that American Christians rediscover the true meaning of their faith, which he regards as an anodyne and narrow reading of Jesus Christ’s essential teachings. He reduces millennia of Christian thought and the whole of the Bible to a simple formula (which he first heard from the Catholic theologian and priest James Alison): “Don’t be afraid. Imitate Jesus. Forgive each other.” But Rauch then admits: “I am in no position to judge whether those are the essential elements of Christianity, but they certainly command broad and deep reverence in America’s Christian traditions.”

While this tidy formula does capture some central elements of Jesus’ teachings, it intentionally leaves out other less agreeable (but no less essential) aspects of Christianity. Jesus urged his followers not to be afraid because he would return and they would be granted eternal life in the presence of God. He told his Apostles that their “generation will not pass away” before his return, so they could expect their reward in short order. For those who did not accept his gospel, Jesus had another message: “Depart from me, you cursed, into the eternal fire prepared for the devil and his angels.” Rauch may be correct that “Don’t be afraid” captures one of Jesus’ core messages, but this is a message that only applies to believers—all others should be very afraid. As for the idea of forgiveness, Jesus clearly believed there were some limits—once the “cursed” are consigned to “eternal fire,” redemption appears to be unlikely. 

Even at its best, Christianity is inherently divisive.

Rauch admits that he is in “no position to judge … the essential elements of Christianity” (nor am I), but any summary of the faith that leaves out Jesus’ most fundamental teaching of all—that his followers must accept the truth of Christianity or face eternal destruction—isn’t in touch with reality. It’s also untenable to present an essentialized version of Christianity that leaves out the entire Old Testament, which is crammed with scriptural warrants for slavery, genocide, misogyny, and persecution on a horrifying scale. There’s a reason Christianity has been such a repressive force throughout history—despite the moderating influence of Jesus, the Bible is chockablock with justifications for the punishment of nonbelievers and religious warfare. Even at its best, Christianity is inherently divisive—the “wages of sin is death,” and there’s no greater sin than the rejection of the Christian God. Because Christianity is a universalist, missionary faith, believers have a responsibility to deliver the gospel to their neighbors. If you believe, as evangelicals do, that millions of souls are at stake, the stripped-down, liberal version of Christianity offered by Rauch may seem like a deep abdication of responsibility.

“If we wanted to summarize the direction of change in American Christianity over the past century or so,” Rauch writes, “we might do well to use the term secularization.” While Rauch argues that some secularization has been good for Christianity by helping it integrate with the broader culture, he also argues that the “mainline church cast its lot with center-left progressivism and let itself drift, or at least seem to drift, from its scriptural moorings.” He cites the historian Randall Balmer, who observed in 1996 that many Protestants “stand for nothing at all, aside from some vague (albeit noble) pieties like peace, justice, and inclusiveness.” But this is just what Rauch is calling for—the elevation of vague pieties about forgiveness and courage to a central role in how Christianity interacts with the wider culture. 

Rauch argues that American evangelicals have become “secularized.” The thrust of this argument is that evangelicals thought they would reshape the GOP in their image when they became more political in the 1980s, but the opposite occurred. For decades, white evangelicals have been one of the largest and most loyal Republican voting blocs, and Rauch observes that this has been a self-reinforcing process: “Republicans self-selected into evangelical religious identities and those identities in turn reinforced the church’s partisanship.” Rauch points out that church attendance and other indicators of religiosity have declined among evangelicals in recent decades. He even argues that evangelical Christianity has become “primarily a political rather than religious identity.” 

While there are some signs that evangelicals aren’t quite as committed to their religious practices as they were at the turn of the century, the idea that politics has displaced their faith is a bold overstatement. According to the latest data from Pew, evangelicals remain disproportionately fervent in their beliefs and religious behaviors: 97 percent believe in a soul or spirit beyond the physical body; 72 percent say they pray daily; 82 percent consider the Bible very or extremely important; 84 percent believe in heaven; and 82 percent believe in hell. American history demonstrates that piety and politics don’t cancel each other out. Rauch explains why Christians are tempted to enter the political arena by summarizing several of the arguments political evangelicals often make: 

…some might expect conservative Christians to meekly accept the industrial-scale murder of unborn children, the aggressive promotion of LGBT ideology, the left’s intolerance of traditional social mores, and the relentless advance of wokeness in universities, corporations, and the media; but enough is enough. It is both natural and biblical for Christians to stand up for their values. 

Rauch challenges these claims and argues that the “war on Christianity” frequently invoked by evangelicals is imaginary. The current U.S. Supreme Court is extremely pro-religious freedom, American evangelicals are protected by the First Amendment, most members of Congress are Christians, and surveys show that the vast majority of Americans approve of Christianity. But evangelicals’ perception is what matters—they have felt like their faith is under attack for decades, which has pushed them toward political action. Rauch cites a 1979 conversation between Ronald Reagan and the evangelical Jim Bakker in which the GOP presidential candidate asked: “Do you ever get the feeling sometimes that if we don’t do it now, if we let this be another Sodom and Gomorrah, that maybe we might be the generation that sees Armageddon?”

It’s an inconvenient fact for Rauch’s argument that Christianity can coexist so comfortably with hyper-partisanship and authoritarianism.

While it’s fine to call for a gentler and more civically responsible Christianity, Rauch appears to believe that any version of the faith that inflames partisan hatreds or focuses on the culture war is, by definition, un-Christian. But this isn’t the case. When Reagan worried about the United States becoming Sodom and Gomorrah and ushering in Armageddon, he wasn’t “secularizing” Christianity by blending it with worldly politics. He was allowing his religious beliefs to inform his political views, which many Christians regard as morally and spiritually obligatory. 

The secularism of Western liberal democracies is a historical aberration. For most of history, the separation of church and state didn’t exist—everyone in society was forced to submit to the same religious strictures, and the punishment for failing to do so was often torture and death. One reason for this history of state-sanctioned dogma and repression is that eschatology is central to Christianity. The idea that certain actions on earth will lead to either eternal reward or punishment is a powerful force multiplier in human affairs, which is one of the reasons the European wars of religion were so bloody and why the role of religion in many other conflicts around the world has been to increase the level of tribal hatred on both sides. Modern religion-infused politics is just a return to the historical norm.

Trump: God’s Wrecking Ball

Then there is President Donald Trump. “Absolutely nothing about secular liberalism,” Rauch writes, “required white evangelicals to embrace the likes of Donald Trump.” If there’s one argument in favor of the idea that evangelicals have allowed politics to distort their faith, it’s the overwhelming support President Trump still commands within their ranks. Rauch cites a survey conducted by the Public Religion Research Institute, which reported that evangelicals were suddenly much less concerned about the personal character of elected officials after they threw their weight behind Trump. In 2011, just 30 percent of evangelicals said an “elected official can behave ethically even if they have committed transgressions in their personal life”—a proportion that jumped to 72 percent in October 2016. 

There are many reasons evangelicals cite for supporting Trump, from his nomination of pro-life Supreme Court justices who overturned Roe v. Wade to the conviction that he’s an enthusiastic culture warrior who will crush wokeness. Because evangelicals are consumed by the paranoid belief that they’re an embattled group clinging to the margins of the dominant culture, they decided that they could dispense with concerns over character if it meant mobilizing a larger flock and gaining political and cultural influence. Over three-quarters of evangelicals believe the United States is losing its identity and culture, so the idea of making America great again appeals to them. Rauch cites Os Guinness, who described Trump as “God’s wrecking ball stopping America in its tracks [from] the direction it’s going and giving the country a chance to rethink.” But Rauch is right that arguments like this don’t explain the depth of evangelical support for the 45th and 47th president or the fact that “they did not merely support Trump, they adored him.”

“Whatever the predicates,” Rauch writes, “embracing Trump and MAGA was fundamentally a choice and a change.” It’s true that it would have once been difficult to imagine evangelicals supporting a president like Donald Trump. It’s also true, as Rauch contends, that evangelicals now appear to follow “two incommensurable moralities, an absolute one in the personal realm and an instrumental one in the political realm.” But Cross Purposes isn’t just about the hypocrisy and moral bankruptcy of American evangelicals or the post-liberal justifications for Trumpism. Rauch is calling for a revival of public Christianity in America, and the evangelical capitulation to Trump raises questions about the viability of that project. 

It’s an inconvenient fact for Rauch’s argument that Christianity can coexist so comfortably with hyper-partisanship and authoritarianism. Rauch insists that evangelical Christianity is the product of a warping process of secularization—the “Church of Fear is more pagan than Christian,” he insists. But as Pew reports, evangelicals are disproportionately likely to attend church, pray daily, believe in the importance of the Bible, and so on. Rauch is in no position to adjudicate who is a true believer and who isn’t (nor is anyone else, me included), and if it’s true that the only real Christianity is the reassuring liberal version he endorses, the vast majority of Christians throughout history were just as “secularized” as today’s evangelicals. 

“Mr. Jefferson, Build Up that Wall” 

Because Rauch has such an innocuous view of “essential” Christian theology, he believes Christianity doesn’t need to “be anything other than itself” to ensure that Christians keep their commitments to “God and liberal democracy.” If only it were so easy. Despite the steady decline of Christianity in the United States, 63 percent of the adult population still self-reports as Christian—a proportion that has actually stabilized since 2019. In any religious population so large, there will always be significant variation in what people believe and how they express those beliefs in the public square. Christianity doesn’t necessarily lead to certain political positions—the faith has been invoked to support slavery and to oppose it; to justify imperialism and to condemn it; to damn nonbelievers as heretics bound for hell or to embrace everyone as part of a universalist message of redemption. Of course, it would be nice if all Christians adopted Jonathan Rauch’s version of civic theology, but there will always be scriptural warrants for other forms of theology that Rauch believes are corrosive to our civic culture. 

Americans who believe that Christianity is untrue and unnecessary for morality should continue to make their case in the public square.

According to Pew, Trump’s net favorability rating among American agnostics is just 17 percent, and it falls to 12 percent among atheists. On average, nearly half of American Protestants view Trump favorably—a proportion that falls to 25 percent among the “religiously unaffiliated,” which includes atheists, agnostics, and those who define their religious beliefs as “nothing in particular.” Rauch presents the rise of post-liberal Christianity and the politicization of American evangelicals as examples of secular intrusions of one kind or another. He doesn’t entertain the possibility that his conception of Christianity as conveniently aligned with liberal democracy is a modern, secularized vision that isn’t consistent with how Christianity has historically functioned politically—or with the Bible itself.

It’s a shame that Rauch regards his 2003 essay about the value of secularization as the “dumbest thing I ever wrote.” While there’s nothing wrong with emphasizing the aspects of Christian theology that support liberal democracy, there’s a more effective way to resist post-liberal Christianity, MAGA evangelicalism, and all the other intersections between faith and politics today. Americans who believe that Christianity is untrue and unnecessary for morality should continue to make their case in the public square. Rauch is wrong to argue that Christianity is a load-bearing wall in American democracy. The real load-bearing wall in the United States is the one constructed by Jefferson at the nation’s founding, and which has sustained our liberal democratic culture ever since: the wall of separation between church and state.

Categories: Critical Thinking, Skeptic

Standardized Admission Tests Are Not Biased. In Fact, They’re Fairer Than Other Measures

Thu, 05/22/2025 - 3:08pm
“It ain’t what you know that gets you into trouble. It’s what you know for sure that just ain’t so.” —Mark Twain

When it comes to opinions concerning standardized tests, it seems that most people know for sure that tests are simply terrible. In fact, a recent article published by the National Education Association (NEA) began by saying, “Most of us know that standardized tests are inaccurate, inequitable, and often ineffective at gauging what students actually know.”1 But do they really know that standardized tests are all these bad things? What does the hard evidence suggest? In the same article, the author quoted a first-grade teacher who advocated teaching to each student’s particular learning style—another ill-conceived educational fad2 that, unfortunately, draws as much praise as standardized tests draw damnation.

Indeed, a typical post in even the most prestigious of news outlets34 will make several negative claims about standardized admission tests. In this article, we describe each of those claims and then review what mainstream scientific research has to say about them.

Claim 1: Admission tests are biased against historically disadvantaged racial/ethnic groups.

Response: There are racial/ethnic average group differences in admission test scores, but those differences do not qualify as evidence that the tests are biased.

The claim that admission tests are biased against certain groups is an unwarranted inference based on differences in average test performance among groups.

The differences themselves are not in question. They have persisted for decades despite substantial efforts to ameliorate them.5 As documented extensively and reviewed more comprehensively elsewhere,67 average group differences appear on just about any test of cognitive performance—even those administered before kindergarten. Gaps in admission test performance among racial groups mirror other achievement gaps (e.g., high school GPA) that also manifest well before high school graduation. (Note: these group differences are differences between the averages—technically, the means—for the respective groups. The full range of scores is found within all the groups, and there is significant overlap between groups.)

Group differences in admission test scores do not mean that the tests are biased. An observed difference does not provide an explanation of the difference, and to presume that a group difference is due to a biased test is to presume an explanation of the difference. As noted recently by scientists Jerry Coyne and Luana Maroja, the existence of group differences on standardized tests is well known; what is not well understood is what causes the disparities: “genetic differences, societal issues such as poverty, past and present racism, cultural differences, poor access to educational opportunities, the interaction between genes and social environments, or a combination of the above.”8 Test bias, then, is just one of many potential factors that could be responsible for group disparities in performance on admission tests. As we will see in addressing Claim 2, psychometricians have a clear empirical method for confirming or disconfirming the existence of test bias and they have failed to find any evidence for its existence. (Psychometrics is that division of psychology concerned with the theory and technique of measurement of cognitive abilities and personality traits.)

Claim 2: Standardized tests do not predict academic outcomes.

Response: Standardized tests do predict academic outcomes, including academic performance and degree completion, and they predict with similar accuracy for all racial/ethnic groups.

The purpose of standardized admission tests is simple: to predict applicants’ future academic performance. Any metric that fails to predict is rendered useless for making admission decisions. The Scholastic Assessment Test (now, simply called the SAT) has predictive validity if it predicts outcomes such as college grade point average (GPA), whether the student returns for the second year (retention), and degree completion. Likewise, the Graduate Record Examination (GRE) has predictive validity if it predicts outcomes such as graduate school GPA, degree completion, and the important real world measure of publications. In practice, predictive validity, for example between SAT scores and college GPA, implies that if you pull two SAT-takers at random off the street, the one who earned a higher score on the SAT is more likely to earn a higher GPA in college (and is less likely to drop out). The predictive utility of standardized tests is solid and well established. In the same way that blood pressure is an important but not perfect predictor of stroke, cognitive test scores are an important but not perfect predictor of academic outcomes. For example, the correlation between SAT scores and college GPA is around .5,91011 the correlations between GRE scores and various measures of graduate school performance range between .3 and .4,12 and the correlation between Medical College Admission Test (MCAT) scores and licensing exam scores during medical school is greater than .6.13 Using aggregate rather than individual test scores yields even higher correlations that predict a college’s graduation rate given the ACT/SAT score of its incoming students. Based on 2019 data, the correlations between six-year graduation rate and a college’s 25th percentile ACT or SAT score are between .87 and .90.14
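
To make the “pull two SAT-takers at random off the street” interpretation concrete, here is a minimal simulation (illustrative only, not from the article; it assumes bivariate-normal scores and uses the roughly .5 SAT–GPA correlation cited above):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.5          # approximate SAT-GPA correlation cited above (assumed)
n = 100_000      # simulated test-takers

# Draw standardized SAT scores and college GPAs with correlation r.
sat, gpa = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n).T

# Pair people at random: does the higher SAT scorer also earn the higher GPA?
first, second = np.arange(0, n, 2), np.arange(1, n, 2)
agree = (sat[first] > sat[second]) == (gpa[first] > gpa[second])
print(agree.mean())  # about 0.67 when r = 0.5
```

With a correlation of about .5, the higher scorer ends up with the higher GPA roughly two times out of three—which is what “important but not perfect predictor” means in practice.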

Standardized tests do predict academic outcomes, including academic performance and degree completion, and they predict with similar accuracy for all racial/ethnic groups.

Research confirming the predictive validity of standardized tests is robust and provides a stark contrast to popular claims to the contrary.151718 The latter are not based on the results of meta-analyses1920 nor on studies conducted by psychometricians.2122232425 Rather, those claims are based on cherry-picked studies that rely on select samples of students who have already been admitted to highly selective programs—partially because of their high test scores—and who therefore have a severely restricted range of test scores. For example, one often-mentioned study26 investigated whether admitted students’ GRE scores predicted PhD completion in STEM programs and found that students with higher scores were not more likely to complete their degree. In another study of students in biomedical graduate programs at Vanderbilt,27 links between GRE scores and academic outcomes were trivial. However, because the samples of students in both studies had a restricted range of GRE scores—all scored well above average28—the results are essentially uninterpretable. This situation is analogous to predicting U.S. men’s likelihood of playing college basketball based on their height, but only including in the sample men who are well above average. If we want to establish the link between men’s height and playing college ball, it is more appropriate to begin with a sample of men who range from 5'1" (well below the mean) to 6'7" (well above the mean) than to begin with a restricted sample of men who are all at least 6'4" (two standard deviations above the mean). In the latter context, what best differentiates those who play college ball versus not is unlikely to be their height—not when they are all quite tall to begin with.
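
The restriction-of-range problem can also be made quantitative with a short simulation (a sketch with invented numbers, not data from the studies cited): build a population in which scores and outcomes correlate at .5, then compute the same correlation only among the top scorers a selective program would admit.

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.5
n = 200_000

# Scores and outcomes correlate at r = .5 in the full applicant population.
score, outcome = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n).T
full_r = np.corrcoef(score, outcome)[0, 1]

# Now look only at the "admitted" top 5 percent of scorers.
admitted = score > np.quantile(score, 0.95)
restricted_r = np.corrcoef(score[admitted], outcome[admitted])[0, 1]

print(round(full_r, 2), round(restricted_r, 2))  # roughly 0.5 vs. 0.2
```

The relationship has not disappeared; it has simply been rendered nearly invisible by studying only people who were already selected on the very variable under investigation.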

Students of higher socioeconomic status (SES) do tend to score higher on the SAT and fare somewhat better in college. However, this link is not nearly as strong as many people … tend to assume.

Given these demonstrated facts about predictive validity, let’s return to the first claim, that admission tests are biased against certain groups. This claim can be evaluated by comparing the predictive validities for each racial or ethnic group. As noted previously, the purpose of standardized admission tests is to predict applicants’ future academic performance. If the tests serve that purpose similarly for all groups, then, by definition, they are not biased. And this is exactly what scientific studies find, time and time again. For example, the SAT is a strong predictor of first year college performance and retention to the second year, and to the same degree (that is, they predict with essentially equal accuracy) for students of varying racial and ethnic groups.2930 Thus, regardless of whether individuals are Black, Hispanic, White, or Asian, if they score higher on the SAT, they have a higher probability of doing well in college. Likewise, individuals who score higher on the GRE tend to have higher graduate school GPAs and a higher likelihood of eventual degree attainment; and these correlations manifest similarly across racial/ethnic groups, males and females, academic departments and disciplines, and master’s as well as doctoral programs.313233, 34 When differential prediction does occur, it is usually in the direction of slightly overpredicting Black students’ performance (such that Black students perform at a somewhat lower level in college than would be expected based on their test scores).
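
The check for predictive bias described here boils down to a regression comparison: fit the same score-to-outcome regression within each group and see whether the lines differ. A hedged sketch with simulated data and hypothetical group labels (not the published analyses):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_group(n, r=0.5):
    """Standardized scores and outcomes that correlate at r."""
    cov = [[1.0, r], [r, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Two hypothetical groups generated from the SAME score-outcome relationship.
groups = {"group_a": simulate_group(50_000), "group_b": simulate_group(50_000)}

for name, (score, outcome) in groups.items():
    slope, intercept = np.polyfit(score, outcome, 1)
    print(f"{name}: slope = {slope:.2f}, intercept = {intercept:.2f}")

# An unbiased test yields essentially the same line in every group; bias would
# show up as one group's actual outcomes running systematically above or below
# the line fitted to another group (differential prediction).
```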

Claim 3: Standardized tests are just indicators of wealth or access to test preparation courses.

Response: Standardized tests were designed to detect (sometimes untapped) academic potential, which is very useful; and controlling for wealth and privilege does not detract from their utility.

Some who are critical of standardized tests say that their very existence is racist. That argument is not borne out by the history and expansion of the SAT. One of the long-standing purposes of the SAT has been to lessen the use of legacy admissions (set-asides for the progeny of wealthy donors to the college or university) and thereby to draw college students from more walks of life than elite high schools of the East Coast.35 Standardized tests have a long history of spotting “diamonds in the rough”—underprivileged youths of any race or ethnic group whose potential has gone unnoticed or who have under-performed in high school (for any number of potential reasons, including intellectual boredom). Notably, comparisons of Black and White students with similar 12th grade test scores show that Black students are more likely than White students to complete college.36 And although most of us think of the SAT and comparable American College Test (ACT) as tests taken by high school juniors and seniors, these tests have a very successful history of identifying intellectual potential among middle-schoolers37 and predicting their subsequent educational and career accomplishments.38

Students of higher socioeconomic status (SES) do tend to score higher on the SAT and fare somewhat better in college.39 However, this link is not nearly as strong as many people, especially critics of standardized tests, tend to assume—17 percent of the top 10 percent of ACT and SAT scores come from students whose family incomes fall in the bottom 25 percent of the distribution.40 Further, if admission tests were mere “wealth” tests, the association between students’ standardized test scores and performance in college would be negligible once students’ SES is accounted for statistically. Instead, the association between SAT scores and college grades (estimated at .47) is essentially unchanged (moving only to .44) after statistically controlling for SES.4142
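
“Statistically controlling for SES” here means, in effect, a partial correlation: strip out the part of both test scores and grades that SES can account for, then correlate what remains. A minimal sketch with invented data (the .47 and .44 figures above are the published estimates; the numbers below are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Invented data-generating process: SES nudges both scores and grades a little,
# but scores also carry plenty of SES-independent information about grades.
ses = rng.normal(size=n)
score = 0.3 * ses + rng.normal(size=n)
gpa = 0.5 * score + 0.1 * ses + rng.normal(size=n)

def partial_corr(x, y, control):
    """Correlate x and y after regressing the control variable out of both."""
    rx = x - np.polyval(np.polyfit(control, x, 1), control)
    ry = y - np.polyval(np.polyfit(control, y, 1), control)
    return np.corrcoef(rx, ry)[0, 1]

print(round(np.corrcoef(score, gpa)[0, 1], 2))   # raw score-GPA correlation, ~0.48
print(round(partial_corr(score, gpa, ses), 2))   # after controlling for SES, ~0.45
```

If the tests were mostly measuring family wealth, removing SES would collapse the correlation toward zero; instead, as in the published estimates, it barely moves.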

Standardized tests have a long history of spotting “diamonds in the rough”—underprivileged youths of any race or ethnic group whose potential has gone unnoticed.

A related common criticism of standardized tests is that higher SES students have better access to special test preparation programs and specific coaching services that advertise their potential to raise students’ test scores. The findings from systematic research, however, are clear: the effects of test preparation programs, including semester-long, weekly, in-person structured sessions with homework assignments,43 demonstrate limited gains, and this is the case for the ACT, SAT, GRE, and LSAT.44454647 Average gains are small—approximately one-tenth to one-fifth of a standard deviation. Moreover, free test preparation materials are readily available at libraries and online; and for tests such as the SAT and ACT, many high schools now provide, and often require, free in-class test preparation sessions during the year leading up to the test.

Claim 4: Admission decisions are fairer without standardized tests.

Response: The admissions process will be less useful, and more unfair, if standardized tests are not used.

According to the fairtest.org website, in 2019, before the pandemic, just over 1,000 colleges were test-optional. Today, there are over 1,800. In 2022–2023, only 43 percent of applicants submitted ACT/SAT scores, compared to 75 percent in 2019–2020.48 Currently, there are over 80 colleges that do not consider ACT/SAT scores in the admissions process even if an applicant submits them. These colleges are using a test-free or test-blind admissions policy. The same trend is occurring for the use of the GRE among graduate programs.49

The movement away from admission tests began before the COVID-19 pandemic but was accelerated by it, and there are multiple reasons why so many colleges and universities are remaining test-optional or test-free. First, very small colleges (and programs) have taken enrollment hits and suffered financially. By eliminating the tests, they hope to attract more applicants and enroll more students. Once a few schools go test-optional or test-free, other schools feel they have to as well in order to remain competitive in attracting applicants. Second, larger, less-selective schools (and programs) can similarly benefit from relaxed admission standards by enrolling more students, which, in turn, benefits their bottom line. Both types of schools also increase their percentages of minority student enrollment. It looks good to their constituents that they are enrolling young people from historically underrepresented groups and giving them a chance at success in later life. Highly selective schools also want a diverse student body but, like the previously mentioned schools, will not see much of a change in minority graduation rates simply by lowering admission standards if they also maintain their classroom academic standards. They will get more applicants, but they are still limited by the number of students they can serve. Rejection rates increase (due to more applicants) and other metrics become more important in identifying which students can succeed in a highly competitive academic environment.

The admissions process will be less useful, and more unfair, if standardized tests are not used.

There are multiple concerns with not including admission tests as a metric to identify students’ potential for succeeding in college and advanced degree programs, particularly those programs that are highly competitive. First, the admissions process will be less useful. Other metrics, with the exception of high school GPA as a solid predictor of first-year grades in college, have lower predictive validity than tests such as the SAT. For example, letters of recommendation are generally considered nearly as important as test scores and prior grades, yet letters of recommendation are infamously unreliable—there is more agreement between two letters about two different applicants from the same letter-writer than there is between two letters about the same applicant from two different letter-writers.50 (Tip to applicants—make sure you ask the right person to write your recommendation). Moreover, letters of recommendation are weak predictors of subsequent performance. The validity of letters of recommendation as a predictor of college GPA hovers around .3; and although letters of recommendation are ubiquitous in applications for entry to advanced degree programs, their predictive validity in that context is even weaker.51 More importantly, White and Asian students typically get more positive letters of recommendation than students from underrepresented groups.52 For colleges that want a more diverse student body, placing more emphasis on such admission metrics that also reveal race differences will not help.

Without the capacity to rely on a standard, objective metric such as an admission test score, some admissions committee members may rely on subjective factors, which will only exacerbate … disparate representation.

This brings us to our second concern. Because race differences exist in most metrics that admission officers would consider, getting rid of admission test scores will not solve any problems. For example, race differences in performance on Advanced Placement (AP) course exams, now used as an indicator of college readiness, are substantial. In 2017, just 30 percent of Black students’ AP exams earned a qualifying score compared to more than 60 percent of Asian and White students’ exams.53 Similar disparities exist for high school GPA; in 2009, Black students averaged 2.69, whereas White students averaged 3.09,54 even with grade inflation across U.S. high schools.5556 Finally, as mentioned previously, race differences even exist in the very subjective letters of recommendation submitted for college admission.57

Removing tests from the process is not going to address existing inequities; if anything, it promises to exacerbate them.

Without the capacity to rely on a standard, objective metric such as an admission test score, some admissions committee members may rely on subjective factors, which will only exacerbate any disparate representation of students who come from lower-income families or historically underrepresented racial and ethnic groups. For example, in the absence of standardized test scores, admissions committee members may give more attention to the name and reputation of students’ high school, or, in the case of graduate admissions, the name recognition of their undergraduate research mentor and university. Admissions committees for advanced degree programs may be forced to pay greater attention to students’ research experience and personal statements, which are unfortunately susceptible to a variety of issues, not the least being that students of high socioeconomic backgrounds may have more time to invest in gaining research experience, as well as the resources to pay for “assistance” in preparing a well-written and edited personal statement.58

So why continue to shoot the messenger?

If scientists were to find that a medical condition is more common in one group than in another, they would not automatically presume the diagnostic test is invalid or biased. As one example, during the pandemic, COVID-19 infection rates were higher among Black and Hispanic Americans compared to White and Asian Americans. Scientists did not shoot the messenger or engage in ad hominem attacks by claiming that the very existence of COVID tests or support for their continued use is racist.

Sadly, however, that is not the case with standardized tests of college or graduate readiness, which have been attacked for decades,59 arguably because they reflect an inconvenient, uncomfortable, and persistent truth in our society: There are group differences in test performance, and because the tests predict important life outcomes, the group differences in test scores forecast group differences in those life outcomes.

The attack on testing is likely rooted in a well-intentioned concern that the social consequences of test use are inconsistent with our social values of equality.60 That is, there is a repeated and illogical rejection of what “is” in favor of what educators feel “ought” to be.61 However, as we have seen in addressing misconceptions about admission tests, removing tests from the process is not going to address existing inequities; if anything, it promises to exacerbate them by denying the existence of actual performance gaps. If we are going to move forward on a path that promises to address current inequities, we can best do so by assessing as accurately as possible each individual to provide opportunities and interventions that coincide with that individual’s unique constellation of abilities, skills, and preferences.6263

Categories: Critical Thinking, Skeptic

Can Evolutionary Psychology Explain Fashion?

Tue, 05/20/2025 - 6:15pm

When people think of fashion, they often picture runway shows, luxury brands, pricey handbags, or the latest trends among teens and young adults. Fashion can be elite and expensive or cheap and fleeting—a statement made through clothing, hairstyles, or even body modifications. Regardless of gender, fashion is frequently viewed as a way to signal income, social status, group affiliation, personal taste, or even to attract a partner. But why does fashion serve these purposes, and where do these associations come from? An evolutionary perspective offers surprising insights into the role of fashion in signaling status and sexual attraction.

The adaptive nature of ornamentation is something that has been long admired and studied in a wealth of nonhuman species. Most examples are ornaments the animals grow themselves.1 Consider the peacock’s tail, a sexually selected trait present only in males.2 Peahens are attracted to males with the largest and most symmetrical tails.

The ability of males to grow a large and symmetric tail is related to their overall fitness (the ability to pass their genes into the next generation), so that females that mate with them will have better-quality offspring. Studies have shown that altering the length and symmetry of peacock tails influences mating success—shorter tails lead to fewer mating opportunities for the males. Antlers are primarily found on male members of the Cervidae family, which includes elk, deer, moose, and caribou (the one species in which the females also grow antlers).3 Antlers, unlike horns, are shed and regrown every year. They are used as weapons, as symbols of sexual prowess or status, and as tools to dig in the snow for food. Antlers increase in size until males reach maturity, and grow larger with better nutrition, higher testosterone levels, and better health or absence of disease during growth. The size of a male’s antlers is also influenced by genetics, and females prefer to mate with males with larger antlers over those with smaller ones (much as in peacocks).45

In many species, exaggerated male structures like tails, antlers, bright coloration, and sheer size can serve as a weapon in intrasexual competition and as an ornament to signal genetic quality and thereby promote female choice. As a result, much attention has been focused on male ornamentation in nonhuman animals and what it indicates.6 Moreover, males of various species add outside materials to their bodies, nests, and environments specifically to attract mates. Consider the caddisfly, the bower bird, and even the decorator crab; all use decoration to attract females.7 Interestingly, in what are often referred to as sex role-reversed species, such as the pipefish,8 it is the females who are more competitive for mates and are more highly ornamented. But what about humans? Has ornamentation or fashion in humans also been shaped by sexual selection?

Humans do not have “natural” ornaments like tails or antlers to display their quality.

Humans have a fascination with fashion, as best summed up by the psychologist George Sproles:9 “Psychologists speak of fashion as the seeking of individuality; sociologists see class competition and social conformity to norms of dress; economists see a pursuit of the scarce; aestheticians view the artistic components and ideals of beauty; historians offer evolutionary explanations for changes in design. Literally hundreds of viewpoints unfold, from a literature more immense than for any phenomenon of consumer behavior.” To be fair, humans do not have “natural” ornaments like tails or antlers to display their quality. They also do not have much in the way of fur, armor, or feathers to protect their bodies or to regulate temperature, so “adornment” in the form of clothing was necessary for survival. However, humans have spent millennia fashioning and refashioning what they wear, not just according to climate or condition, but for status, sex, and aesthetics.

If fashion has been such a large part of human history with deep evolutionary roots, why do so many trends, preferences, and standards fluctuate across cultures and time? This is because fashion is a display of status as well as mating appeal. Many human preferences are influenced by context. For example, male preferences for women’s body size and weight shift with resource availability; in populations with significant history of food shortages, larger or obese women are prized. Larger women are displaying that they have, or can acquire, resources that others cannot and have sufficient bodily resources for reproduction.10 When resources are historically abundant, men prefer thinner women; in this context, these women display that they can acquire higher-quality nutrition and have time or resources to keep a fit, youthful figure. When tan bodies indicated working outside, and therefore lower standing, pale skin was preferred. When some societies shifted to tan bodies reflecting a life of resources and leisure, they gave tanning prestige, and it became “fashionable.”11

The shifts in what is fashionable can be attributed to these environmental changes, but one principle remains constant: if it displays status (social, financial, or sexual), it is preferred.12 A good example is jewelry, which shifts with fashion trends—whether gold or silver is in this season, or whether rose gold is passé. If the appeal of jewelry were purely aesthetic—to be shiny or pretty—people would not care whether the jewels were real and expensive or cheap “costume” jewelry. But they do care, because expense indicates greater wealth and status. So much so that people often comment on the authenticity or the size (and therefore cost) of jewels, such as the diamonds in engagement rings.13

Fashion for Sexual Display

It would be surprising if fashion and how humans choose to ornament themselves were not influenced by sexual selection. Humans show a number of traits that are associated with sexual selection in other species, including dimorphism in physical size and aggression, delayed sexual maturity in males, and greater male variation in reproductive success (defined as the number of offspring).14 Men typically choose clothing that emphasizes the breadth of their shoulders and sometimes adds to their height through shoes with lifts or heels. In many modern Western populations, men also spend significant time crafting their body shape by lifting weights to attain a triangle-shaped upper body without the benefit of shoulder pads or other deceptive tailoring. These are all traits that females have been shown to value when choosing a mate.15

Illustration by Marco Lawrence for SKEPTIC

Examining artistic depictions of bodies provides particular insight into human preferences, as these figures are not limited by biology and can be as exaggerated as the artist wants. We can also see how the population reacts to these figures in terms of popularity and artistic trends. The triangular masculine body shape has been historically exaggerated in art and among fictional heroes, and it continues to be today, as comic book and graphic artists create extreme triangular torsos and film superhero costumes feature tight waists and padded shoulders and arms. These costumes are not new and do not vary a great deal. They mimic the costume of warriors, soldiers, and other figures of authority or dominance. As cultural scholar Friedrich Weltzien writes, “The superhero costume is an imitation of the historical models of the warrior, the classic domain of heroic manhood.”16

If it displays status (social, financial, or sexual), it is preferred.

Indeed, military personnel and heroes share behaviors and purposes (detecting threats, fighting adversaries, protecting communities, and achieving status in hierarchies). These costumes act as physical markers and are used to display dominance in size, muscularity, and markers of testosterone. Research has found that comic book men have shoulder-to-waist ratios (the triangular torso) and upper body muscularity almost twice those of real-life young men, and that Marvel comic book heroes in particular are more triangular and muscular than championship bodybuilders. What is remarkable is that even with imaginary bodies, male comic book hero “suits” have several features that, not coincidentally, exaggerate markers of testosterone and signal dominance and strength. Even more triangular torsos are created by padded shoulders and accents (capes, epaulets), flat stomachs (tight costumes with belts and abdominal accents), chest pieces bearing triangular shapes or insignia, large legs and footwear (boots, holsters), and helmets and other face protection that create angular jawlines.17

Men’s choice of clothing and jewelry … convey information about status and resources that are valued by the opposite sex for what they may contribute to offspring success.

The appearance of a tall, strong, healthy masculine body shape is often weighted strongly by women in their judgments of men. There is also an interaction between sex appeal and status. Women choose these men in part because the men’s appearance affects how other men treat them. Men who appear more masculine and dominant elevate their status among men, which makes them more attractive to women.18 Men’s choice of clothing and jewelry or other methods of adornment can not only emphasize physical traits but also convey information about status and resources that are valued by the opposite sex for what they may contribute to offspring success. Some clothing brands (or jewelry) are more expensive and are associated with more wealth, and so are likely to attract the attention of the opposite sex; think of brand logos, expensive watches, or even the car logo on a keychain as indicators of wealth.19

Female fashion also shows indications of being influenced by its ability to signal mate value or enhance it, sometimes deceptively. In many mammals, female red ornamentation is a sexual signal designed to attract mates.20 Experimental studies of human females suggest that they are more likely to choose red clothing when interacting with an attractive man than an attractive woman;21 the suggestion being that red coloration can serve a sexual signaling function in humans as well as other primates. Red dyes in clothing and cosmetics have been extremely popular over centuries, notably cochineal, madder, and rubia. In fact, the earliest documented dyed thread was red.22

One of the primary attributes that women have accentuated throughout time is their waist-to-hip ratio, a result of estrogen directing fat deposition23—a signal of reproductive viability. The specific male preferences regarding waist to hip ratio have been documented for decades.24 But is this signal, and its amplification, really a global phenomenon? It is easy to give western examples of waist minimization and hip amplification—corsets, hoop skirts, bustles, and especially panniers,25 or fake hips that can make a woman as wide as a couch. Even before these, there was the “bum roll”—rolled up fabric attached to a belt to create a larger bulge over the buttocks.

Outside of Western cultures, one can find a variety of “wrappers” (le pagne in Francophone African cultures), yards of fabric wrapped around the hips and other parts of the body to accentuate and amplify the hips.26 Not surprisingly, these are also a show of status as the quality of the fabric is prioritized and displayed.

Just as with men, this specific attribute is wildly exaggerated in fictional depictions of women, from ancient statues to contemporary comic, film, and video game characters. One study concluded that “when limitations imposed by biology are removed, preferred waist sizes become impossibly small.”27 Comic book heroines are drawn with skintight costumes and exaggerated waist-to-hip ratios. They have far smaller waists and wider hips than typical humans; the average waist-to-hip ratio of a comic book woman was smaller than the minimum waist-to-hip ratio of real women in the U.S. Heroine costumes further accentuate this already extreme curve through the use of small belts or sashes, lines, and color changes. Costumes are either skintight or show skin (or both), with cutouts on the arms, thighs, midriff, and, in particular, on the chest to show cleavage. The irony of battle uniforms that serve no protective purpose has been pointed out many times in cultural studies.28

Another feminine feature that plays a role in fashion is leg length. Artistic depictions of the human body throughout history show that while the ideal leg length in women has increased over time, the preference for male leg length has not shifted. The increase appears to have emerged during the Renaissance, which may be due to improvements in food security and health during that time. As with many physical preferences in humans, leg length can be an indicator of health, particularly in cases of malnutrition or illness during development. This is another important reminder that preferences are shaped by resources and consistently shift toward features that display status. What is the ideal leg length? One study found that for a woman 170 cm (5 feet 7 inches) tall, the majority favored a leg length that was 6 cm (2.36 inches) longer, a difference that corresponds to the average height of high-heeled shoes.29 You can probably see where this is going: sexual attractiveness ratings of legs correlate with perceived leg length, and legs are perceived as longer with high-heeled shoes. It should come as no surprise that women may accentuate or elongate their legs with high heels.

Photo by Ham Kris / Unsplash

High-heeled shoes were not originally the domain of women; they are thought to have originated in Western Asia prior to the 16th century in tandem with male military dress and equestrianism. The trend spread to Europe, with both sexes wearing heightened heels by the mid-17th century.30 Heels have remained present in men’s fashion in the form of shoes for rock stars and entertainers (e.g., Elton John) and boots worn by cowboys and motorcyclists. However, these heels are either short or hidden as lifts to make the men appear taller. By the 18th century, high heels were worn primarily by women, particularly as societies redefined fashion as frivolous and feminine.

As one might expect, high heels do more than elongate legs and increase height. They change the shape of the body and how it moves. Women wearing heels increase their lumbar curvature and exaggerate their hip rotation, breasts, and buttocks, making their bodies curvier. As supermodel Veronica Webb put it, “Heels put your ass on a pedestal.” When women walk in heels, they must take smaller steps, use greater pelvic rotation, and have greater pelvic tilt. All of these changes result in higher attractiveness ratings. Wearing high heels also denotes status—high-heeled shoes are typically more expensive than flat shoes, and women who wear them sustain serious damage if their occupations require a lot of physical labor. Therefore, women who wear heels appear to be in positions where they do less labor and have more resources. Researchers have tested this directly, and both men and women view women in high heels as being of higher status than women wearing flat shoes.31

Fashion can also signify membership in powerful groups, such as the government, the military, or nobility.

At this point, it’s hardly surprising to learn that, compared to actual humans, comic book women are depicted with longer legs that align with peak preferences for leg length in several cultures, while men are shown with legs of average length. Women are also far more often drawn in heels or on tiptoe, regardless of context: even when barefoot, in costume stocking feet, or wearing other types of shoes or boots. This further elongates their already longer legs.32

Fashion as Status Signaling

Social status, as previously mentioned in terms of traits valued by the opposite sex, is also often displayed through fashion in ways relevant to within-sex status signaling, particularly when it comes to accessories. Men’s fashion choices that indicate masculinity and dominance include preferences for expensive cars and watches—aspects of luxury consumption.33 Women not only emphasize their own beauty but also carry brand-conscious bags, for example, that convey information about their wealth and perhaps their preferences for specific causes, as in the popularity of animal-welfare-friendly high-end brands such as Stella McCartney.

Unlike high-end cars, however, which signal status to possible mates as well as to status competitors, such accessories send signals from women to other women that men are largely unaware of. Women are highly attuned to the brands and costs of women’s handbags, while most men do not seem to recognize their signaling value.34 While luxury products can boost self-esteem, express identity, and signal status, men tend to use conspicuous luxury products to attract mates, while women may use such products to deter female rivals. Some studies have shown that activating mate-guarding motives prompts women to seek and display lavish possessions, such as clothes, handbags, and jewelry, and that women use pricey possessions to signal that their romantic partner is especially devoted to them.35

Fashion can also signify membership in powerful groups, such as the government, the military, or nobility. It can also signify the person’s role in society in other ways, for example, whether someone is married, engaged, or betrothed (by their own volition or by family). There are several changes in fashion that are specific to the various events surrounding a wedding, each with its own cultural differences and symbolism, and far too many to review here.36 Several researchers have explored the prominence and the symbolic value of a bride’s traditional dress in different societies.37 However, these signifiers are not just specific to the wedding rituals; what these women wear as wives (and widows) is culturally dictated for the rest of their lives.

These types of salient markers of female marital status are present in a number of societies. For example, once married, Latvian brides are no longer allowed to wear a crown, and they may be given an apron and other items (such as housekeeping tools) that indicate they are now wives. In other cultures, girls wear veils from puberty to their wedding day, and the removal of the veil is an obvious display of the change in status. Some cultures symbolically throw away the bride’s old clothes, as she is no longer that person; she is now the wife of her husband. In Turkey, married Pomak women cut locks of hair on either side of their head, and their clothing is much simpler in style than the highly decorated daily clothing of unmarried Pomak women. However, wives do wear more expensive necklaces—gold or pearls rather than beads.38 Notice that this is not only a signal of marital status but also a signal of the groom’s wealth.

An evolutionary perspective suggests … people who choose to tattoo and pierce their bodies are doing so … because it serves as an advertisement or signal of their genetic quality.

Meanwhile, the vast majority of cultures possess only one marker for married men—a wedding ring—which is also expected of women. Why are there more visible markers of marital status for women than for men? This is likely a product of the elevated sexual jealousy and resulting proprietariness employed by men to prevent cuckoldry—what evolutionary psychologists call mate guarding. Salient markers of marital status show other men that a woman is attached to, or the property of, her husband. If the term “property” seems like an exaggeration, consider that cultures have been documented to have rituals specifically for the purpose of transferring ownership of the bride from her parents to her husband, with accompanying changes in appearance to declare that transfer to the public.39

Tattoos as Signals of Mate Quality, Social Status, and Group Membership

Body modifications, such as tattoos and piercings, have become increasingly prevalent in recent years in Western culture, with rates in the United States approaching 25 percent.40 Historically, tattooing and piercing were frequently used as indicators of social status41 or group membership, for example, among criminals, gang members, sailors, and soldiers. While this corresponds with the other types of adornment we have reviewed, researchers have suggested that these explanations don’t fully illuminate why individuals should engage in such costly and painful behavior when other methods of affiliation, such as team colors, clothing, or jewelry, pose less of a health risk. Tattoos and piercings are not only painful but entail health risks, including infections and disease transmission, such as hepatitis and HIV.42 One could suggest that the permanence of body modifications is a marker of commitment or significance, but an evolutionary perspective suggests an additional level of explanation: people who choose to tattoo and pierce their bodies are doing so not only to show their bravery and toughness but also because it serves as an advertisement or signal of their genetic quality. Good genetic quality and immunocompetence may be signaled by the presence and appearance of tattoos and piercings in much the same way that the peacock’s tail (in its size and symmetry) serves as a signal of male health and genetic quality.43

Photo by benjamin lehman / Unsplash

Even with tattoos, the same areas of the body are accentuated as we see in clothing.44 Researchers have reported sex differences in the placement of tattoos such that each sex’s secondary sexual characteristics are highlighted: males concentrate tattoos on their upper bodies, drawing attention to the shoulder-to-hip ratio, while females have more abdominal and backside tattoos, drawing attention to the waist-to-hip ratio. The emphasis seems to be on areas highlighting fertility in females and physical strength in males, essential features of physical attractiveness.45 In fact, female body modification in the abdominal region was most common in geographic regions with higher pathogen load, again suggesting that such practices may serve to signal physical and reproductive health.46 Recent work has also indicated that social norms influence how tattoos affect perceptions of beauty, such that younger people and those who are themselves tattooed see them as enhancing attractiveness.47

Tattoos and piercings are not only painful but entail health risks, including infections and disease transmission, such as hepatitis and HIV.

Studies on humans and nonhuman animals have indicated that low fluctuating asymmetry (that is, greater overall symmetry in body parts) is related to developmental stability and is a likely indicator of genetic quality.48 Fluctuating asymmetry (FA), defined as deviation from perfect bilateral symmetry, is thought to reflect an organism’s relative inability to maintain stable morphological development in the face of environmental and genetic stressors. One study found49 FA to be lower (that is, symmetry was greater) in those with tattoos or piercings. This effect was much stronger in males than in females, suggesting that those with greater developmental stability were better able to tolerate the costs of tattoos or piercings, and that these serve as an honest signal of biological quality, at least in the men in this study.50 Researchers have also tested the “human canvas hypothesis,” which suggests that tattooing and piercing are hard-to-fake advertisements of fitness or social affiliations, and the “upping the ante hypothesis,” which suggests that tattooing is a costly honest signal of good genes in that injury to the body demonstrates how well it heals. In short, tattoos and piercings not only display a group affiliation but also signal that the owner possesses higher genetic quality and health, and they are placed on areas that accentuate “sexy” body parts. Thus, we have come full circle: just like other species such as peacocks, humans show off ornamentation to display their quality as mates and their access to resources. Even taking into account cultural differences and generational shifts, the primary message remains the same.
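
To make the measurement concrete, here is a minimal sketch in Python of one common way fluctuating asymmetry is quantified: the absolute left-right difference of a paired trait, scaled by trait size and averaged across traits. The trait names and numbers are hypothetical, and published studies use more careful indices and corrections than this toy version.

```python
# Minimal sketch of a size-corrected fluctuating asymmetry (FA) index:
# the mean of |L - R| / ((L + R) / 2) across several bilateral traits.
# Trait names and measurements below are invented for illustration only.

def fa_index(paired_traits):
    """Average relative left-right asymmetry across bilateral traits."""
    scores = []
    for left, right in paired_traits.values():
        midpoint = (left + right) / 2.0
        scores.append(abs(left - right) / midpoint)
    return sum(scores) / len(scores)

measurements = {
    "ear length (mm)": (61.0, 62.1),
    "wrist width (mm)": (55.2, 54.8),
    "second digit length (mm)": (71.5, 70.9),
}

print(f"FA index: {fa_index(measurements):.4f}")  # lower values = more symmetric
```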

Social Factors in Human Ornamentation

In addition to all of the evidence we have presented here, ornamentation is not just about mating or even signaling social status. Humans also signal group membership or allegiance through fashion. Modern sports fans show their allegiance to their teams with shirts, hats, and other types of clothing—think of the “cheesehead” hats worn by Green Bay Packers fans at the team’s NFL home games. Fans of musical performers, from Kid Rock to Taylor Swift, display their loyalty with concert shirts and other apparel. Typically, they also feel an automatic sense of connection when they encounter others sporting similar items. As discussed, tattoos can be seen as signals of genetic quality or health, and over the last twenty or so years tattoos have also increasingly come to be seen as statements of individuality. And yet many serious sports fans have similar tattoos representing their favorite teams, Marvel fans sport Iron Man and Captain America illustrations on their skin, and fans of the television show Supernatural have the anti-possession symbol from the show tattooed on their torsos. It may be that in many populations with weak social and family connections, individuals are seeking connection, and adornment is one way of indicating participation in a community or group. You can also see this in terms of political allegiance and the proliferation of Harris-Walz and MAGA-MAHA merchandise during the 2024 election cycle in the United States.

While it is clear that an adaptationist approach to ornamentation can explain many aspects of fashion related to signaling social status (whether honest or not), group membership, or mate quality, much research remains to be done, including more work on which aspects are cross-culturally consistent and which are constrained more by unique cultural factors or the local ecology. Not everything is the product of an adaptation; some aspects of fashion that seem less predictable, or may be less enduring, are unlikely to be explained by ornamentation and signaling theory because they are not rooted in mating or social motives. That said, many fashion choices, including our own (for better or worse), make a lot of sense in the light of evolutionary processes. For all the small shifts from generation to generation and across cultures, the main themes remain the same. As Rachel Zoe noted: “Style is a way to say who you are without having to speak.”

What do your fashion choices have to say?

Categories: Critical Thinking, Skeptic

A Look Into the Mind’s Eye

Fri, 05/16/2025 - 1:04pm

When navigating the modern world, with its varied conveniences and modes of leisure, it seems that we humans are completely detached from the harsh environments in which our species evolved thousands of years ago. Under stress, or in moments of crisis, however, the tools our minds have evolved to deal with danger or imminent threat become quite apparent. During times like the recent global COVID-19 pandemic, when resources become unpredictably unavailable, we can turn to rather selfishly acquiring large quantities of particular products. From toilet paper rolls to baking flour, perceived essentials are coveted and cached away, hidden from other individuals, reserved for personal use in the future.

During such periods of uncertainty and upheaval, we also aim to construct meaning and a story line from the world rapidly changing around us—one by-product of which is the development of conspiracy theories. While such actions may be frowned upon in today’s society, and can be explained by hardwired behavioral reactions, they also point to the sophisticated cognitive tools that were likely critical to our evolutionary survival, indeed success, namely: recall of specific past events, future planning, the attribution of mental states to other individuals (theory of mind), a strong belief in some source of causation, and an underlying curiosity about the world we live in.

Thankfully, perhaps, we are not the only species with a tendency to cache goods when resources become scarce or when environments are risky—this is a trait we share with over 200 other vertebrates.1 Food-caching behavior is particularly impressive among birds such as the Clark’s nutcracker. This species lives in harsh seasonal environments and can cache tens of thousands of pine seeds within a season. Remarkably, these birds are able to remember and retrieve the seeds with great accuracy over nine months after storing them.2 The scrub jay, on the other hand, caches a smaller number of more varied items, some of which perish relatively quickly (insects and olives, for example), and must therefore also keep track of the decay rates of different food items, and the passage of time, in order to successfully retrieve edible snacks.3 Are these remarkable behavioral feats potentially underpinned by sophisticated cognitive tools like our own, or can they be explained in terms of simpler, hardwired behavioral predispositions?

The Clark’s nutcracker can cache tens of thousands of pine seeds within a season and remember and retrieve the seeds with great accuracy over nine months after storing them.

Ethologists and comparative psychologists who study some of the cleverest organisms on the planet have grappled with such questions concerning the nature and origin of intelligence for decades, across a wide variety of contexts and animal taxa. The comparative study of animal cognition has raised a number of critical questions over the years, including: Are other animals conscious?4 Can they “mentally travel in time” by storing specific memories and imagining the future?5 Are non-human animals able to attribute mental states to other individuals,6 and does curiosity motivate their interaction with and exploration of these abstract phenomena?7 Ultimately, what is it about human cognition that sets us apart from other animals, and why? Trying to answer these types of questions is more important than ever. Not only does it give insight into the nature and origins of our own thinking and behavior, but tackling these questions can also help us better understand, build, and predict artificial forms of intelligence, which are becoming increasingly embedded in the fabric of society and our daily lives.8

Though comparative cognition is a vast field, researchers are unified by a central challenge: unlocking the secrets of animal minds, which are like black boxes whose contents are neither directly visible nor accessible. Unlike work in human psychology, which can partly rely on participants to report their own subjective experiences, research in animal cognition must employ creative behavioral tasks and interventionist approaches in order to test causal hypotheses about the mechanisms that underlie behavior. This is the only way to tease apart hardwired responses or simpler forms of associative learning from the more complex forms of cognition that could potentially explain the behavior in question.9

Take, for example, the remarkable (and often frustrating) ability of ant colonies to identify and efficiently transport food from sparsely scattered patches in the environment to their nests. Research employing mazes has shown that Argentine ants are capable of solving fiendishly difficult transport optimization problems, flexibly finding the shortest path to food sources even when known routes become blocked.10 When watching individuals zealously journey out of the nest and back again, in close coordination with one another, it would be reasonable to assume that each ant had an understanding of the transport problem being solved, or that a central organizing force was shaping the behavior of the colony. Yet this feat is an example of self-organizing collective intelligence: a phenomenon that does not require a global controller, or even that individuals be aware of the nature of the challenge they are solving together. By adhering to simple, fixed rules of pheromone following and production, individual ants, through local interactions alone, can produce complex collective behavior that does not rely upon any sophisticated cognition at all. This example highlights the need for carefully crafted experiments to correctly elucidate the true nature of behavioral processes.
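
To see how little individual machinery this requires, here is a minimal Python sketch in the spirit of classic “binary bridge” ant-trail models (a toy illustration, not the Argentine-ant maze study cited above; the path lengths, deposit amounts, and evaporation rate are all arbitrary assumptions):

```python
# Two paths lead to food. Each simulated ant picks a path with probability
# weighted by the square of its pheromone level, deposits pheromone when it
# completes a round trip, and pheromone slowly evaporates. No ant represents
# the problem, yet the trail usually concentrates on the shorter path.
import random

SHORT, LONG = 0, 1
trip_time = {SHORT: 1, LONG: 3}    # assumed round-trip durations, in time steps
pheromone = [1.0, 1.0]             # both paths start out equally marked
walking = []                       # [path, steps_remaining] per ant in transit

random.seed(0)
for step in range(1000):
    # One ant leaves the nest per step and chooses a path stochastically.
    w_short, w_long = pheromone[SHORT] ** 2, pheromone[LONG] ** 2
    path = SHORT if random.random() < w_short / (w_short + w_long) else LONG
    walking.append([path, trip_time[path]])

    # Ants that complete a round trip reinforce the path they used.
    still_walking = []
    for ant in walking:
        ant[1] -= 1
        if ant[1] == 0:
            pheromone[ant[0]] += 1.0
        else:
            still_walking.append(ant)
    walking = still_walking

    # Evaporation keeps early, chance advantages from locking in forever.
    pheromone = [0.99 * level for level in pheromone]

print(f"pheromone share on the short path: {pheromone[SHORT] / sum(pheromone):.2f}")
```

Each simulated ant follows only the local pheromone rule, yet the colony as a whole usually ends up concentrating its trail on the shorter path, solving the problem without any individual understanding it.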

Ants efficiently solve complex transport problems, working together through simple rules of pheromone following, showing self-organizing collective intelligence, without needing a leader or central control. (Photo by Ivan Radic, Flickr, CC BY 2.0)

Initially, comparative studies of complex cognition focused primarily on other primate species.11 Their close evolutionary relationship to humans means they provide something of a window into the ancestral origins of our sophisticated cognition and, by comparison, into the novel idiosyncrasies that characterize human intelligence (although they too have evolved both their bodies and their behavior in response to the selective pressures encountered since our lineages split from a common ancestor). Nonetheless, it is anthropocentric to assume that complex cognition is exclusive to primates. Indeed, research on primate cognition has generated two influential hypotheses for the evolution of advanced intelligence that are applicable to a wide range of taxa. The Ecological Intelligence Hypothesis suggests that challenges associated with efficiently finding and processing food promote sophisticated cognition,12 while the Social Intelligence Hypothesis argues that activities involved in group living, including the need to cooperate with and potentially deceive others, drive the evolution of sophisticated cognition.13

Understanding the nature of intelligence is a tricky business but comparative psychology provides us with experimental tools that offer a window into the mind’s eye of other animals.

Over the last three decades increasing evidence has accumulated to show that a similar combination of selective pressures has driven the evolution of comparably complex cognition in other animal groups, notably the corvids.14 This group of birds, which includes crows, jays, ravens, and jackdaws, is capable of remarkable behavioral feats. These include the manufacture and use of tools for specific tasks,1516 and even the ability to “count out loud” by producing precise numbers of vocalizations in response to numerical values.17 The discovery of such behaviors points to complex underlying cognition, and given that primates and corvids diverged some 300 million years ago, it also suggests that advanced intelligence evolved independently at least twice within animals as the result of convergent evolutionary pressures.

To elucidate the nature of intelligence in animals, it is instructive first to identify natural behaviors that may reflect complex cognitive processes, especially ones that can also be studied under controlled laboratory conditions. The food-caching behavior of birds has proven to be a powerful model through which to investigate the nature of animal intelligence across a range of domains, including recall of past events, future planning, and the ability to attribute mental states to other individuals (“Machiavellian intelligence”). In particular, laboratory studies on scrub jays have leveraged that species’ propensity to cache a variety of perishable foods but not eat items that have degraded. How do individual birds efficiently recover the hundreds of spatially distinct caches they make daily, given that different food items decay at different rates?

Western Scrub-Jay, Aphelocoma californica (Photo by Martyne Reesman, Oregon Department of Fish and Wildlife, via Wikimedia)

In a notable study published in Nature,18 researchers hypothesized that jays use a flexible form of memory previously thought exclusive to humans—episodic memory. Episodic memory allows us to replay specific events in our mind’s eye; we experience these memories as our own, with a sense that they represent events that occurred in the past. In the absence of a method to ascertain whether jays subjectively experience memories as we do, the researchers proposed behavioral criteria that would indicate “episodic-like” memory: an ability to retrieve information about “where” a unique event or “episode” took place, “what” occurred during the event, and “when” it happened. To test this, they conducted a series of experiments in which jays were presented with perishable worms that could be cached in trays at one site and non-perishable nuts that could be cached at another. The results showed that when given the option to recover caches after a short time, the birds preferred to search for the more desirable, tasty worms, but switched to searching for the less attractive nuts after longer delays, when the worms had decayed. These experiments demonstrated for the first time that a non-human animal can recall the “what-where-when” of specific past events using abilities akin to episodic memory in humans.

While birds might rely on recall of specific events to successfully retrieve cached items, the initial act of caching itself is prospective, functioning to provide resources for a future time when they might otherwise be scarce. This raises the possibility that non-human animals are capable of future planning, mentally traveling forward in time to anticipate future needs that differ from present ones. However, caching may also simply be a hardwired behavioral urge, rather than a flexible response that relies on learning. To explore this, researchers tested scrub jays using a “planning for breakfast” paradigm.19 Over a period of six days the jays were exposed daily to either a “hungry room,” where breakfast was never provided, or a “breakfast room,” where food was available in the morning. Otherwise, the jays were provided with powdered (uncacheable) food in a middle room that linked the other two. Then the birds were offered nuts in the middle room and the opportunity to cache them in either the hungry or the breakfast room. The results showed that the birds spontaneously and strongly preferred to cache the nuts in the hungry room, indicating for the first time that a non-human animal can plan for the future, guiding its behavior based on anticipated future needs independent of its present motivational state.

The examples above demonstrate the ability of birds to “mentally travel in time” and form representations of their own past and future. To recover their caches successfully, however, each bird must also pay attention to the other birds who might attempt to steal those caches. To lessen the risk of that happening, individual birds employ a range of strategies to protect their stored food, including caching food behind barriers, out of the sight of other birds, and producing decoy caches that do not contain any edible items. To explore the cognitive processes involved in cache-protection behavior, researchers allowed scrub jays to cache food when alone or while being watched by another bird. The caching birds were then given the opportunity to recover their caches in private, allowing them to re-cache hidden food items that might be vulnerable to pilfering. Interestingly, not all birds re-cached the items most at risk of being stolen (those cached in front of the conspecific). Only those scrub jays who were experienced pilferers themselves re-cached items whose hiding had been watched by another individual.20 The implication is that birds who have been thieves in the past project their experience of stealing onto others, thereby anticipating future stealing of their own caches. In other words, it takes a thief to know one! This experiment therefore raises the possibility that the jays simulate the perspectives of other individuals, suggesting that, like humans, they may be able to attribute mental states to others, and therefore have a knowledge of other minds as well as other times.

The approach employed in these studies highlights the utility of exploring behavioral criteria indicative of complex cognitive processes using carefully controlled experimental procedures. One advantage of this approach is that it is widely applicable, since it relies on externally observable behavior rather than obscure internal states, and can therefore be used to investigate a diverse range of intelligences. Recently, comparative psychologists have started to apply these techniques to systematically investigate the intelligence of soft-bodied cephalopods—the invertebrate group comprising octopus, cuttlefish, and squid.21 These remarkable animals have captured the imagination of naturalists for hundreds of years, and reports suggest they are capable of highly flexible and sophisticated behaviors. For example, veined octopuses transport coconut shells in which they hide when faced with a threatening predator, raising the possibility that they may be able to plan for the future. Further, the male giant Australian cuttlefish avoids fights with other males by deceptively changing its appearance to resemble that of a female—perhaps these animals are capable of attributing mental states to other members of their species.

A coconut octopus (Amphioctopus marginatus) hides from threatening predators between a coconut shell and a clam shell. Using its tentacles, it carries the shells while pulling itself along. Sensing a threat, the octopus clamps itself shut between the shells. (Photo by Nick Hobgood, Wikimedia)

Recently, laboratory experiments with the common cuttlefish have shown that like some birds, apes, and rodents, they are able to recollect “what-where-when” information about past events through episodic-like memory.22 Unlike other species however, episodic memory in cuttlefish does not decline with age, offering exciting opportunities to study resistance to age-related decline in cognition.23 As with food caching among corvids, behavioral experiments with cuttlefish have also revealed prospective, future-oriented behavior: after learning temporal patterns of food availability, cuttlefish learn to forgo immediately available prey items in order to consume more preferred food that only becomes available later.2425 Presently, however, it is not clear whether this reflects genuine future planning, which requires individuals to act independently of current needs—and so presents an exciting avenue for future research.

Given the broad applicability of the experimental approach developed in comparative psychology, it is worth considering the utility of experimental paradigms to investigate the behavior of non-organic forms of intelligence. Artificial Neural Networks (ANNs) are becoming increasingly embedded in the way we work, solve problems, and learn, perhaps best exemplified by the advent of Large Language Models (LLMs) such as ChatGPT, now ubiquitous in content creation and even serving as a source of knowledge.26 It is more important than ever that we develop an understanding of the behavior of these forms of intelligence. Fortunately, decades of research aimed at understanding the minds of animals has provided us with the conceptual tools needed to elucidate the processes underlying artificial behavior, and the means to build forms of artificial intelligence that are more flexible and less biased. Though reports abound of ANNs besting humans in traditionally complex, strategic games such as poker,27 some have argued that these wins are often restricted to very specific domains, and that ANNs are far from displaying the general intelligence of animals, let alone humans.28

Interdisciplinary efforts, however, are helping to close this gap. Inspired by research in cognitive psychology, computer scientists have incorporated an analogue of episodic memory into the architecture of ANNs. Endowed with the ability to compare present environmental variables with those encountered during specific points in the past, ANNs are able to behave much more flexibly.29 Recently, influenced by classic tasks in comparative psychology, psychologists and computer scientists have collaborated to produce a competition testing the relative cognitive abilities of ANNs.30 Dubbed the “Animal-AI Olympics,”31 this competition should help to promote the development of artificial forms of intelligence capable of mirroring the general intelligence displayed by animals, and perhaps one day, humans.
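
As a rough illustration of what such an analogue can look like, here is a minimal Python sketch of an episodic-memory-style lookup for an artificial agent: it stores specific past situations together with their outcomes and, when facing a new situation, retrieves the most similar stored episodes to guide its choice. This is a simplified toy loosely in the spirit of episodic-control approaches, not the specific architecture described in the work cited above; all names and numbers are invented for the example.

```python
# Toy "episodic memory" for an artificial agent: store concrete past episodes
# (situation, action, outcome) and act by recalling the most similar ones.
import math

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # each entry: (situation_vector, action, outcome)

    def store(self, situation, action, outcome):
        self.episodes.append((situation, action, outcome))

    def best_action(self, situation, k=1):
        """Pick the action whose k most similar past episodes had the best average outcome."""
        scores = {}
        for action in {a for _, a, _ in self.episodes}:
            matches = sorted(
                (ep for ep in self.episodes if ep[1] == action),
                key=lambda ep: math.dist(ep[0], situation),
            )[:k]
            scores[action] = sum(outcome for _, _, outcome in matches) / len(matches)
        return max(scores, key=scores.get)

# Hypothetical usage: remembered episodes of caching vs. eating in a "hungry"
# context (first coordinate high) or a "fed" context (second coordinate high).
memory = EpisodicMemory()
memory.store((1.0, 0.0), "cache", 1.0)   # hungry context, caching paid off
memory.store((1.0, 0.1), "eat", 0.2)
memory.store((0.0, 1.0), "eat", 1.0)     # fed context, eating paid off
memory.store((0.1, 1.0), "cache", 0.3)

print(memory.best_action((0.9, 0.05)))   # a hungry-like situation: prints "cache"
```

The point of the sketch is only that remembering particular episodes, rather than just statistical averages, lets an agent respond sensibly to situations it has encountered only once.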

Understanding the nature of intelligence is a tricky business, but comparative psychology provides us with experimental tools that offer a window into the mind’s eye of other animals. In the future, these approaches may prove invaluable in providing insights into the behavior of artificial forms of intelligence, and one day, perhaps, into the behavior of organic life that looks very different from that on Earth.

Categories: Critical Thinking, Skeptic

Eyewitness Testimony: How to Engage With People and Accounts of Extraordinary Claims Without Evoking Anger

Thu, 05/15/2025 - 1:15pm

Skeptics are well aware that there are issues with eyewitness testimony as evidence. These issues are popular topics of discussion at skeptical conferences and are the impetus for numerous skeptical articles. Human perception and memory are notoriously inaccurate, indeed malleable. Preconceptions and cognitive biases shape both our immediate perceptions of events and how we later recall, interpret, and relate them.

The testimony issue goes beyond simple eyewitness accounts, i.e., the descriptions people give of things they saw. Testimony can include any description or characterization that a person draws from the memory of their perceptions: something they heard, felt, smelled, read, viewed indirectly, or sensed in any way.

While skeptics find it logically correct to point out these problems, it’s not going to do anyone any good if all that happens is you make people angry.

In discussing contentious topics, the interpretation of testimony can become highly emotional and swiftly evolve into an overly polarized argument that misses the nuance of the situation. I routinely encounter this type of reaction to my examination of testimony, in particular with UFO witnesses. At first I found this rather surprising. After all, I was just trying to be logical, follow the facts, and cover all the bases—one of which is the possibility of false witness testimony. But I was often met with an unexpectedly angry response.

This is something we need to avoid. Anger, of course, is rarely helpful in scientific communication. While skeptics find it logically correct to point out these problems, it’s not going to do anyone any good if all that happens is you make people angry. In fact, if you are perceived (as I often have been) as attacking, disrespecting, or denigrating a witness, this can affect your credibility and destroy communication opportunities in other areas too.

Over the last couple of decades of encountering this problem, I’ve come across a few important concepts that have been helpful to keep in mind. Essentially, they are blind spots on the part of the supporters of the testimony, but if we don’t take them into account, they become our blind spots too.

Truth & Lies

When I explain that I don’t believe an individual’s testimony is true, their supporters will assume I’m accusing the witness of lying. This then drags the conversation either down the irrelevant path of “why would they lie” or the more perilous road of “how dare you suggest this wonderful person is lying!”

This is a false dichotomy. It’s not a simple matter of “truth” vs. “lies.” There are other options. Yet even great minds fall into the trap. Here is Thomas Paine on miracles in his 1794 classic The Age of Reason:

If we are to suppose a miracle to be something so entirely out of the course of what is called Nature that she must go out of that course to accomplish it, and we see an account given of such miracle by the person who said he saw it, it raises a question in the mind very easily decided, which is: Is it more probable that Nature should go out of her course, or that a man should tell a lie? We have never seen, in our time, Nature go out of her course, but we have good reason to believe that millions of lies have been told in the same time; it is, therefore, at least millions to one that the reporter of a miracle tells a lie.

That paragraph gives me deeply mixed feelings each time I read it. Paine was examining the possibility of miracles from a rationalist perspective. He asked the reader to consider that people verifiably lie all the time, while miracles are both rare and lacking in scientific evidence. So which is more likely? Within this dichotomy, a lying witness seems by far the more probable explanation.

Yet this classic skeptical quote is fatally flawed, enough so to make it useless, because the opposite of truth is not lies. The opposite of truth is falseness. Truth means a statement is correct, in agreement with fact or reality. The opposite concept, falseness, means a statement is incorrect and is contradicted by fact or reality, whether or not a person is lying. Paine’s contemporary, David Hume, in his analysis of miracles in his 1758 An Enquiry Concerning Human Understanding, acknowledged that in addition to deceiving (lying), people can also be deceived:

The plain consequence is (and it is a general maxim worthy of our attention), “That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous than the fact which it endeavors to establish.” When anyone tells me that he saw a dead man restored to life, I immediately consider with myself whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle. If the falsehood of his testimony would be more miraculous than the event which he relates; then, and not till then, can he pretend to command my belief or opinion.
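
Hume’s maxim can be compressed into a single comparison (a paraphrase of his rule, not his own notation): accept a report of a miracle only when

$$P(\text{the testimony is false}) \;<\; P(\text{the reported event actually occurred}),$$

that is, only when the falsehood of the testimony would itself be the greater miracle.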

There are many more ways for people to be deceived than to deceive, yet it’s oh-so-easy to fall for the false dichotomy of truth vs. lies. As evidenced by the rather clumsy and unfamiliar set of antonyms we have for “truth” (“falseness,” “falsity,” “untruth”), the leap from “this person spoke falsely” to “this person is lying” is familiar, understandable, and almost inevitable, so we must take great pains to explicitly avoid that misperception and give the other options their appropriate weight.

There are many more ways for people to be deceived than to deceive, yet it’s oh-so-easy to fall for the false dichotomy of true vs lie.

If someone is not telling the truth, they might be lying, but they might also simply be wrong—perhaps they are misinterpreting something, they made a mistake, or they succumbed to some perfectly ordinary illusion. In any case, the fact that they are saying something false does not mean they are lying. Giving people the benefit of the doubt, skeptics should focus on the possibilities other than lying. Before accusing people of lying, one might instead ask: Could they have made a mistake? Might they have misremembered? Could it have been an optical illusion?

Of course, people do lie, and we shouldn’t rule that out entirely, but in my experience with believers in UFOs, conspiracy theories, and strange phenomena, the majority of witnesses are quite honest in their descriptions. Unless you are dealing with an obvious charlatan, it’s best to avoid even mentioning the lie hypothesis, because it will immediately become the focus of outrage and resistance. Focus instead on the possibilities of mistakes, misperceptions, faulty memory, illusions, and hallucinations, and assume lies will be revealed in the process of deeper investigation.

Illustration by James Bennett for SKEPTIC

Trusting the Victim

When a witness to an event or situation is also a victim (i.e., they have been hurt, assaulted, become ill, or suffered other harm), things become even more fraught with highly charged emotional obstacles to investigation and communication. The witness testimony of victims is simultaneously revered as sacrosanct, yet it is also known to be unreliable.

Nevertheless, as a general principle, the accounts of victims should not be automatically disbelieved. I think everyone deserves a fair hearing with the assumption that they are acting in good faith. Examining the accounts of people who have been hurt, especially emotionally, is a tricky path to tread, and it very easily leads to the perception that the skeptic is on the attack. In response, a defensive wall goes up that blocks further discussion.

In recent years I’ve focused on the UFO community, and while skeptics don’t usually think of UFOlogists as victims, many people who believe they have had some kind of extraterrestrial encounter suffer from an associated emotional trauma. Sometimes this stems from what they feel happened to them (which can be quite extreme, with perceived physical effects, even abductions and physical examinations), but it can also be the result of years of being disbelieved.

The witness testimony of victims is simultaneously revered as sacrosanct, yet it is also known to be unreliable.

It is even more of an issue when the harm a victim is experiencing is the main evidence, or the actual contended phenomenon. Here any examination of the validity of their testimony can readily be perceived or reframed as a personal attack on the individual, and that’s the end of the discussion.

This deference to victims crops up in many areas of interest to skeptics. In the curious case of Havana syndrome, discussed in depth in Vol. 26, No. 4 of Skeptic, several people became very ill and were convinced that their symptoms were related to a loud noise they heard, or a sensation they felt, which they now attribute to some kind of directed-energy weapon attack. Since they are obviously suffering, it is difficult to critique their testimony without seeming callous.

My own experience with this issue dates back to 2006, when a condition known as “Morgellons disease” was getting some media attention. According to sufferers of the malady, their symptoms of itching and general malaise, consistent with aging, coincided with what they described as “fibers” that wormed their way out of their skin.

Morgellons disease is a form of delusional parasitosis in which individuals report fibers or filaments emerging from the skin, often accompanied by itching, pain, and persistent sores. While sufferers attribute symptoms to an infectious or environmental cause, most scientific studies have found no underlying pathogen—linking the condition instead to psychiatric disorders.

From their descriptions, their testimony, and the occasional images and video provided, it seemed quite apparent that they were simply finding normal hairs and clothing fibers. I blogged about this, describing how I could find similar fibers on my own skin (they are literally everywhere), and how the accounts of fibers emerging from skin probably reflected a mistake stemming from not appreciating the sheer prevalence of microscopic fibers (a base rate error in Bayesian reasoning).
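
To make the base rate point concrete, here is a toy Bayesian calculation in Python; all of the numbers are invented for illustration and are not estimates from any study.

```python
# A toy illustration of the base rate point: if stray clothing fibers can be
# found on nearly everyone's skin, then finding a fiber is very weak evidence
# for an unusual fiber-producing condition, however striking it feels.
def posterior(prior, p_fiber_if_condition, p_fiber_if_not):
    """Bayes' theorem: P(condition | fiber found)."""
    joint_condition = prior * p_fiber_if_condition
    joint_no_condition = (1 - prior) * p_fiber_if_not
    return joint_condition / (joint_condition + joint_no_condition)

prior = 0.001               # assumed prior probability of a real fiber-producing condition
p_fiber_if_condition = 1.0  # such a condition would certainly produce visible fibers
p_fiber_if_not = 0.95       # but stray fibers turn up on almost everyone anyway

result = posterior(prior, p_fiber_if_condition, p_fiber_if_not)
print(f"P(condition | fiber found) = {result:.4f}")  # barely above the prior
```

Because the “evidence” is nearly universal, finding a fiber moves the probability of an unusual fiber-producing condition only slightly above its prior.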

In response to my explanation, I was attacked and portrayed as someone who was accusing the victims of malingering or making up their symptoms, which I certainly was not doing. But because my initial skeptical approach was to point out what they had gotten wrong, it came across as contradicting their entire testimony. While the fibers were almost certainly unrelated to their experiences, the sufferers were genuinely experiencing a variety of physical symptoms and conditions.

The Morgellons experience taught me that we need to first treat the testifying victim with respect. Their suffering is real, regardless of the cause. Acknowledge that, and avoid describing their testimony in absolutes. Instead, as with the “truth vs. lies” issue, raise other possibilities as considerations for them, not assertions from you. Rather than leading an assessment of a traumatic alien abduction story with “That’s nonsense, obviously they dreamt the whole thing!” ask, “Is it possible that sleep paralysis played a part here?”

Highly Trained Observers

On a near-daily basis I am accused of dismissing the eyewitness testimony of highly trained observers. For example, Commander David Fravor, a decorated U.S. Navy pilot, has testified that he saw a 40-foot, Tic-Tac-shaped UFO engage his plane in a short dogfight and then shoot off at incredible speed with no visible means of propulsion.

I don’t know what he saw, but from his description of how the object seemed to perfectly mirror his movements, I suspected he had misjudged the size of the object and hence fallen for a parallax illusion that made it seem to move much faster than it actually did (if it was moving at all). So I proposed this idea and was met with a range of responses, mostly derisive and angry that I would have the temerity to insult the testimony of a highly trained observer like a U.S. Navy pilot.

The notion of a “trained observer” is something of a myth.

These responses included the perception that I was accusing Fravor of lying, or of being incompetent, stupid, or insane. But I was doing none of those things; rather, I was simply pointing out that he might have made an understandable mistake.

U.S. Navy Commander David Fravor was flying an F/A-18 Hornet when he reported seeing a UFO, later nicknamed the “Tic Tac.” The object hovered over the ocean, appeared to respond to the jets, and perplexed those who watched it. Fravor described the encounter in a report for the Navy and has since been a proponent of the theory that he encountered alien life.

The notion of a “trained observer” is something of a myth. Of course, military personnel are trained to observe things, but they are trained to observe specific known things, not things that are highly unexpected (like a giant flying Tic-Tac) or outside the realm of human experience (like craft exhibiting non-Newtonian physics).

Fast-moving UFOs are not something that pilots are trained to observe.

Military pilots’ training in observation of airborne objects comes largely in the form of recognizing other known planes. Since the 1940s pilots have been issued Visual Aircraft Recognition study cards, which show a variety of known friendly and enemy aircraft, usually in silhouette from various angles. More sophisticated recognition training takes place in simulators. But fast-moving UFOs are not something that pilots are trained to observe.

In fact, this intensive training might make matters worse. Being highly trained to identify a particular set of things can mean you will shoehorn outliers into that set. When Fravor saw the Tic-Tac he had no way of judging how large it was, but he settled on 40 feet because he felt it was about the same size as an F/A-18, the most common plane he saw in the air. Would he have picked the same size if he had been a commercial pilot of larger jets?

No matter how valid my hypothesis, and however real the potential for error on Fravor’s part, the “how dare you!” reaction prevents wider consideration of it. Though it can feel tedious, I find it works better if I set the scene by explicitly explaining that I don’t think he’s lying, incompetent, stupid, or crazy. I have to establish that I do think he’s a highly skilled pilot, with years of experience, who is trained in observing other aircraft. Once that is established, I can tentatively explore how an understandable mistake might have been made by such a highly trained observer.

This attention to the emotional reactions provoked by criticism of witness testimony, and to the techniques for avoiding those reactions, can feel tiresome and even unnecessary, as if we are pandering to bad thinking. But the goal here is effective communication, and getting people to consider an alternative hypothesis is best done by understanding them, in the hope that they, in turn, will understand you.

Categories: Critical Thinking, Skeptic

It’s Time for Papal Transparency on the Holocaust

Fri, 05/09/2025 - 11:58am

The new Pope Leo XIV can make history by at long last releasing the World War II archives of the Vatican Bank and exposing one of the church’s darkest chapters.

The Catholic Church has a new leader, Pope Leo XIV. Born in 1955 in Chicago, Robert Francis Prevost is the first American to head the church and serve as sovereign of the Vatican City State. Many Vatican watchers will be looking for early signs that Pope Leo XIV intends to continue Pope Francis’s legacy of reforming Vatican finances and making the church a more transparent institution.

There is one immediate decision he could make that would set the tone for his papacy. Pope Leo could order the release of the World War II archives of the Vatican Bank, the repository whose files would answer lingering questions about how much the Catholic Church might have profited from wartime investments in Third Reich and Italian Fascist companies, and whether it acted as a postwar haven for looted Nazi funds. By solving one of the last great mysteries of the Holocaust, Pope Leo would embrace a long overdue historical transparency that proved too much even for his reform-minded predecessor.

The Vatican is not only the world’s largest representative body of Christians, but also unique among religions since it is a sovereign state.

What is sealed inside the Vatican Bank archives is more than a curiosity for historians. The Vatican is not only the world’s largest representative body of Christians, but also unique among religions since it is a sovereign state. It declared itself neutral during World War II and after the war claimed it had never invested in Axis powers nor stored Nazi plunder.

In my 2015 history of the finances of the Vatican (God’s Bankers: A History of Money and Power at the Vatican), I relied on company archives from the German and Italian insurers Allianz and Generali to show that the Vatican Bank had invested in both firms during the war. The Vatican earned outsized profits when those insurers expropriated the cash values of the life insurance policies of Jews sent to the death camps. After the war, when relatives of those murdered in the Holocaust tried to collect on those life insurance policies, they were turned away because they could not produce death certificates.

When relatives of those murdered in the Holocaust tried collecting on life insurance policies, they were turned away.

How much profit did the Vatican earn from the cancelled life insurance policies of Jews killed at Nazi death camps? The answer is inside the Vatican Bank archives.

Also in the Vatican Bank wartime files is the answer to whether the bank hid more than $200 million in gold stolen from the national bank of Nazi-allied Croatia. According to a 1946 memo from a U.S. Treasury agent, the Vatican had either smuggled the stolen gold to Spain or Argentina through its “pipeline” or used that story as a “smokescreen to cover the fact that the treasure remains in its original repository [the Vatican].”


The Vatican has long resisted international pressure to open those wartime bank files. World Jewish Congress President Edgar Bronfman Sr. had convinced President Bill Clinton in 1996 that it was time for a campaign to recover Nazi-looted Jewish assets. Clinton ordered 11 U.S. agencies to review and release all Holocaust-era files and urged other countries and private organizations with relevant documents to do the same.

The Vatican refused to join 25 nations in collecting documents across Europe to create a comprehensive guide for historians.

The Vatican refused to join 25 nations in collecting documents across Europe to create a comprehensive guide for historians. At a 1997 London conference on looted Nazi gold, the Vatican was the only one of 42 countries that rejected requests for archival access. At a restitution conference in Washington the following year, it ignored Secretary of State Madeleine Albright’s emotional plea, and it opted out of an ambitious plan by 44 countries to return Nazi-looted art and property, settle unpaid life insurance claims and reassert the call for public access to Holocaust-era archives.

Subsequent requests by President Clinton and Jewish organizations to open the files went unanswered. Historians, meanwhile, were inundated with millions of declassified wartime documents from more than a dozen countries, and only a handful of Jewish advocacy groups pressed the issue during the last years of John Paul II’s papacy and the eight years of Benedict XVI’s.

Pope Francis opened millions of the Church’s documents.

To his credit, in March 2020, Pope Francis opened millions of the church’s documents about its controversial wartime pope, Pius XII. That fulfilled in part a promise Pope Francis had made when he was the cardinal of Buenos Aires: “What you said about opening the archives relating to the Shoah [Holocaust] seems perfect to me. They should open them [the Holocaust files] and clarify everything. The objective has to be the truth.”


And while Pope Francis was responsible for reforming a bank that had often served as an offshore haven for tax evaders and money launderers, and that had frustrated six of his predecessors, he nevertheless kept the Vatican Bank’s wartime files sealed.

Pope Leo XIV is the Vatican Bank’s sole shareholder. It has only a single branch located in a former Vatican dungeon.

Pope Leo XIV is the Vatican Bank’s sole shareholder. It has only a single branch, located in a former Vatican dungeon in the Torrione di Nicoló V (Tower of Nicholas V). The new Pope can order the release of the wartime Vatican Bank archives with the speed and ease with which a U.S. president issues an executive order. It would be a bold move in an institution with a well-deserved reputation for keeping files hidden, sometimes for centuries. It took more than 400 years for the Church to release some of its Inquisition files (and at long last exonerate Galileo Galilei), and more than 700 years before it cleared the Knights Templar of a heresy charge and opened the trial records.

Opening the Vatican Bank’s wartime archives would send the unequivocal message that transparency is not merely a talking point but a high priority that the new Pope plans to apply to the finances of the church, in its history as well as going forward. Such a historic decree would mark his papacy as having shed light on one of the church’s darkest chapters. In so doing, Pope Leo would pay tribute to the families of the victims of World War II who have long been demanding transparency and some semblance of justice.

Categories: Critical Thinking, Skeptic

Aleksandr Dugin, Vladimir Putin, and Donald Trump

Thu, 05/08/2025 - 3:18pm

A review of The Trump Revolution: A New Order of Great Powers by Aleksandr Dugin, Arktos Media, 2025, 136 pages.

Aleksandr Dugin has been described as the Kremlin’s chief ideologue for his substantial influence on Russian politics, promoting nationalist and traditionalist themes and publishing extensively on Russia’s central role in world civilization. He is also a long-time supporter of Donald Trump, and in his new book, The Trump Revolution: A New Order of Great Powers, Dugin celebrates the election of America’s 47th president as the culmination of his life’s work.

In an earlier article in Skeptic, I called Dugin “a mystical high priest of Russian fascism who wants to bring about the end of the world,” but also noted that he is a philosopher who specializes in the study and use of ideologies. In the 1990s, Dugin set himself the task of synthesizing a new ideology to replace the defunct communist movement as the foundation for the Kremlin’s international fifth column. For a while he played with the idea of uniting all antiliberal ideologies, including socialism, fascism, and ecologism, into a single allegedly profound “fourth political theory.”

Ultimately, however, Dugin was drawn toward the Third Reich’s National Socialism, which he found to be admirable. Dugin came to realize that the essence of Nazism was not its historical particulars, but Hitler’s key political insight, namely that there is no contradiction between nationalism and socialism. On the contrary, it is only by invoking the tribal instinct that a leader can arouse the passion needed to realize the full collectivist program, whose top priority is not the collectivization of property, but the eradication of individual reason and conscience through the collectivization of minds.

As Dugin saw it, every country could have its own tribal-collectivist movement. He thus proceeded on this principle to organize an “Alt-Right” Comintern, with parties in nearly every Western nation all based on the same template of militant national chauvinism, most often mobilized around anti-immigrant sentiment. Participating units in this franchise include the French National Rally led by Marine Le Pen, the German Alternative für Deutschland (AfD), the followers of Nigel Farage in the UK, Viktor Orbán’s party in Hungary, and similar parties in the Netherlands, Austria, Italy, Slovakia, and many other European countries. In The Trump Revolution, Dugin welcomes what he sees as the triumph of its American branch.

The sunwheel-like swastika used by the Thule Society and the German Workers’ Party. (Source: Wikipedia, by NsMn, CC BY-SA 3.0)

Arktos Media, the publisher of Dugin and a long list of other ultra-nationalist writers, derives its name from the Thule Society, which was devoted to the historical and anthropological search for the origin of the superior Germanic race, and which provided much of the mystical antecedents for the Nazi movement (the society’s logo adopted the Sanskrit symbol for “good fortune”—a hooked cross called the svastika). Accordingly, Arktos Editor in Chief Constantin von Hoffmeister provides the book’s preface by trumpeting its message in suitably Wagnerian terms—“the world ended.”

It ended in a neon blizzard, an electric storm of shattered paradigms. … We have Trumpism 2.0, and it is a revolution beyond revolutions–a final reckoning that promises to devour the remains of the corrupted system and rebuild something ancient, something powerful, something terrifying in its purity. … The Globalist Cathedral is in ruins. The Swamp has been burned, and from its ashes rises something ancient, something terrible, something divine. This is the Trumpian Ragnarök. This is history breaking apart and reassembling itself in a new form. Welcome to the Renewed World Order. It is not for the weak.

Dugin takes up the mantle from there:

Trumpism has emerged as a unique phenomenon, combining the long-marginalized national-populist agenda of paleoconservatives with an unexpected shift in Silicon Valley, where influential high-tech tycoons have begun aligning with conservative politics. … Thus, Trump’s second term has become the final chord in the geopolitics of a multipolar world, marking the overturning of the entire liberal-globalist ideology.

Dugin’s war is against the West, considered both as a creed whose Enlightenment liberal humanism threatens the principle of the need for unlimited tyranny (in Russian the “Silnaya Ruka,” or strong arm) underpinning Kremlin rule, and as a concrete geostrategic military power that must be defeated to expand Russian global dominion. He celebrates the election of Trump as serving both of those purposes:

This “illiberal nationalism” has become the ideological axis of the MAGA (Make America Great Again) movement. The United States is no longer presented as the global distributor of liberal democracy and its guarantor on a planetary scale. Instead, it is redefined as a Great Power–focused on its own greatness, sovereignty, and prosperity.

Thus, under Trump, there is no reason for the U.S. to maintain its alliances with other liberal democracies or to keep underwriting the free world’s system of collective security as its guarantor superpower—what Dugin decries as the “unipolar world.” Instead, the U.S. can become, alongside Russia and China, one of several predatory Great Powers dominating continental spheres of influence within a new “multipolar world.”

Call it political transactionalism or moral nihilism if you prefer. I think evil hits the nail right on the head.

There is on offer here something akin to the Molotov–Ribbentrop Pact, under which the United States gets North America, China gets East and South Asia, and Russia gets Eurasia, “from Lisbon to Vladivostok.” It takes little imagination to see why such a division of spoils might appeal to the political leaders of Russia. But the fruition of such a great-power realignment very much depends upon the Americans abandoning their role as crusaders for the cause of world freedom. Fortunately, says Dugin, Trump is on board with this new alignment:

Trump and his ideology categorically reject any notion of internationalism, and rhetoric about so-called “universal human values,” “world democracy,” or “human rights.” Instead Trump appears to envision a final rupture with both the Yalta system and the unipolar globalist movement. He has therefore set out to dismantle all international institutions that symbolize the past eighty years–the UN, globalist structures like the WHO and USAID, and even NATO. Trump sees the United States as a new empire, and himself as a modern Augustus, who formally ended the decaying Republic. His ambitions extend beyond America itself–hence his interest in acquiring Greenland, Canada, the Panama Canal, and even Mexico. [italics in original]

Dugin’s ideas may appear to be quite mad, but they are actually instructive in revealing possible motivations for the actions of the Trump administration during its opening months. For example, abolishing USAID was justified by the Trump administration as a fiscal responsibility measure, but Dugin contends that entirely other motivations were operational:

The liquidation of the United States Agency for International Development (USAID) is an event whose significance can hardly be overstated. When the Soviet Union abolished the Comintern … structures that advocated the ideological interests of the USSR on a global scale, it marked the beginning of the end for the international Soviet system. … Something similar is happening in America, as USAID was the main operational structure for the implementation of globalist projects. Essentially, it was the primary transmission belt for globalism as an ideology aimed at the worldwide imposition of liberal democracy, market economics, and human rights. The banning of USAID is a critical, fundamental move, the importance of which, as I said, cannot be overstated. This is especially true because countries like Ukraine largely depend on the agency, receiving significant funding through it. All Ukrainian media, NGOs, and ideological structures were financed by USAID. The same applied to almost the entire liberal opposition in the post-Soviet space, as well as liberal regimes in various countries, including Maia Sandu’s Moldovan administration and many European political regimes, which were also on USAID’s payroll.

Dugin presents Ukraine as central to Russia’s strategic objectives within the envisioned multipolar world order, framing it as a pivotal asset in the redistribution of global influence:

Zelensky undoubtedly realizes that his time is running out, and with it the history of Ukraine comes to an end. … The current U.S. leadership does not intend to continue this policy of support, and therefore Zelensky finds himself in a dead-end situation. His desperate attempts to intervene in the situation resemble a frog trying to climb out of a bucket of milk. He seeks to draw attention to himself despite the fact that no one is asking him, and no negotiations are being conducted with him. The main discussions will be centered on the strategic dialogue between Putin and Trump, concerning not only Ukraine but also the global order. … This is the essence of building a multipolar world, in which Ukraine has no place.

A U.S. disengagement from Ukraine would precipitate a broader decline of European stability, triggering a wider unraveling of the continent’s geopolitical coherence and power:

Handing over a half-decayed, toxic corpse, exuding radiation and stench, is hardly a worthy gift for one’s allies and friends. In this context, Ukraine appears to be just such a toxic waste. Trump is seemingly eager to rid himself of this burden. If Europe is left to face Russia alone, the collapse of the globalist liberal elite will accelerate. Thus, the Trojan gift–the assignment of responsibility to Europe for waging war against Russia in Ukraine–is presumably Trump’s strategy to quickly weaken, and possibly even dismantle his trade competitors while undermining his ideological opponents in Europe. The European elite openly opposes Trumpism.

For example, when Trump suggested the annexation of Greenland, Denmark responded by declaring its willingness to fight. Simply astonishing! Add this to Ukraine, with its decaying society and a frenzied population driven to a state of subhuman aggression–filled with rage towards everyone, clamoring for more money and weapons–this terrorist entity reveals the clear reality that the responsibility for this conflict will fall squarely on the shoulders of the European Union.

If the EU assumes responsibility for Ukraine, it will become the most effective way for Trump to rid himself of a toxic asset that shackles him and places him at a disadvantage in the ongoing redrawing of the geopolitical map. While this will not lead to Europe’s greatness, it will instead hasten its disintegration.

In this vision, Europe is probably gone, although Dugin admits the outside possibility that, during the dissolution of NATO and the EU, the German revanchist AfD might step up and Make Europe Great Again. One can readily conceive of how such an eventuality might not work out so well for Russia, but Dugin seems unconcerned.

Dugin—not unlike Trump—does not seem to believe that there is any reality to the concepts of right and wrong. Rather, they only believe in advantage and disadvantage.

There is one fly in the ointment of Dugin’s brave new multipolar world of predatory great powers feasting on the weak: Israel. For some irrational reason, says Dugin, Trump “takes a staunchly pro-Israel stance.” What does this mean for Dugin’s multipolar world order?

I believe Trump is making his first major geopolitical miscalculation in shaping the new world order in the Middle East. He is alienating the Islamic world–a powerful force that he fails to recognize as an independent geopolitical pole. This is especially true regarding his antagonism towards Iran and the Shiite factions that maintain staunchly anti-Zionist and anti-Israel positions. … I hope that, despite his radical rhetoric and actions, once he fully assumes a role as a key global political architect, he will begin to take reality into account. Otherwise he risks ending up like the liberals he ousted.

Dugin freely mixes current Kremlin propaganda lines into his analysis. For example, in line with Putin’s effort to portray Russia’s “special military operation” in Ukraine as a replay of the Soviets’ Great Patriotic War of resistance against Hitler (known as the Second World War to the rest of us), Dugin denounces the Ukrainians as Nazis. The charge is not only untrue but hypocritical, because in the past Dugin has repeatedly stated his affinity not only for the Waffen-SS but for the work of foundational Nazi intellectuals, including Hitler’s geopolitical mentor Professor General Karl Haushofer, philosopher Martin Heidegger, and legal theorist Carl Schmitt. Indeed, it is Schmitt’s argument that the idea of fundamental human rights is an intolerable restraint on the Will of the People as expressed through its Leader that is at the core of Dugin’s case against liberalism.

Perhaps to make himself more appealing to some elements of Trump’s political base, Dugin goes out of his way in his book to represent himself as a Christian, and to describe his cause as the defense of “White Christian civilization.” This is quite remarkable, not only because of the identification of Christianity with the interests of a particular race, but because in the past Dugin had openly celebrated Nazi paganism, going so far as to sponsor an artistic cult devoted to its promotion. Indeed, the anti-Christian nature of Dugin’s mystical theology is so rabid that in 2014 Lutheran bishop James Heiser wrote an entire book diagnosing it as systemically evil.

I am not a theist, so some of Heiser’s arguments pass me by. Yet I think that in a fundamental sense he is onto something. Dugin—not unlike Trump—does not seem to believe that there is any reality to the concepts of right and wrong. Rather, they only believe in advantage and disadvantage. On the basis of this “transactionalist” belief structure, Dugin believes the Trump administration is leaning toward abandoning America’s role as “the watchmen on the walls of world freedom,” an outcome that Dugin has devoted his life to achieving. (“We are the watchmen on the walls of world freedom” is a famous line from the speech that President John F. Kennedy intended to give at the Dallas Trade Mart on November 22, 1963, but he was gunned down before he could deliver it by Lee Harvey Oswald, an ex-Marine who had defected to the Soviet Union.)

Yet it is precisely the collective security arrangements and international system of free trade underlying what Dugin decries as the “unipolar world” that have prevented a general war or a depression since 1945. This unprecedented 80-year period of peace, prosperity, and progress has showered enormous blessings not only on America and Europe but most emphatically on Russia as well. Over the course of the final four decades of the “multipolar world” that preceded the establishment of the Pax Americana, Russia was defeated or devastated by war no fewer than five times. Between the Russo-Japanese War, World War I, the Russian Civil War, the Russo-Polish War, and World War II, well over fifty million Russians were violently sent to their graves. Horrors on that scale ended in 1945. Yet that is the world Dugin ardently seeks to recreate.

Call it political transactionalism or moral nihilism if you prefer. I think evil hits the nail right on the head.

Categories: Critical Thinking, Skeptic

From the Ordinary to the Extraordinary

Tue, 05/06/2025 - 2:16pm

Rising temperatures, biodiversity loss, drought, mass migration, the spread of misinformation, inflation, infertility—these are just some of the major challenges facing societies around the globe. Generating innovative solutions to such challenges requires expanding our understanding of what’s currently possible. But how do we cultivate the necessary imagination? By debunking counterproductive myths about imagination, Occidental College cognitive developmental scientist Andrew Shtulman might just provide us with a starting point. “Unstructured imagination succumbs to expectation,” he writes, “but imagination structured by knowledge and reflection allows for innovation.” (p. 12)

Imagination Myths

In Learning to Imagine: The Science of Discovering New Possibilities, Shtulman challenges the highly intuitive, yet obstructive notion that great imagination stems from a place of ignorance. Through the sharing of everyday examples and detailed experimental studies, Shtulman effectively tackles the pervasive deficit view of imagination—that it’s something we engage in a great deal during childhood and sadly lose as we get older.

Contrary to the conventional wisdom, Shtulman demonstrates that children’s imagination, relative to adults’, is constrained by what they think is physically plausible, statistically probable, and socially and morally acceptable. While some early philosophers and social scientists considered children to be lacking intelligence and their minds to be blank slates, a contemporary swing of the pendulum has led to a confusing romanticization of children as “little scientists” capable of unacknowledged insight. A more informed view recognizes that though children’s minds are not blank slates, they do often conflate what they’ve personally experienced with what “could be.” To a child’s mind, what they can’t imagine can’t exist.

“Unstructured imagination succumbs to expectation, but imagination structured by knowledge and reflection allows for innovation.” —Andrew Shtulman

Shtulman notes how young children often deny the existence of uncommon but entirely possible events, such as finding an alligator under a bed, catching a fly with chopsticks, or a man growing a beard down to his toes. Children find these situations as impossible as eating lightning for dinner. While children often engage in pretend play, it typically mimics mundane aspects of real life, such as cooking or construction. Children also often believe in magic and fantastical beings such as Santa Claus, but such myths were not spontaneously created by children; they were first endorsed by adults they trust. Children rarely generate novel solutions to problems, as they tend to fixate on the rules and norms familiar to them, often correcting others who have deviated from what is expected and sometimes becoming offended by “rule” violations.

Learning to Imagine is not only about children’s cognition; it is fundamentally a book about human reasoning and contains insights applicable to all of us. Shtulman sheds light on the many important ways in which adults continue to constrain their own imagination through self-interest, habit, fear, and a fixation on conforming to one’s social group. For example, we may resist adopting new technologies like artificial intelligence because they reduce the need for our skillset or simply engender fear of the unknown. We may resist something that requires us to change our habits (e.g., carrying reusable bags), something that forces us to take risks (e.g., trusting a quickly developed vaccine), or something that requires us to deviate from our in-group (e.g., advocating for a new theory or openly sharing an unpopular opinion).

What is Imagination, Anyhow?

So, just what is imagination? Shtulman argues that it is the ability to abstract from the here and now, to contemplate what could be or what could have been. Imagination is an evolved cognitive skill used for everyday planning, predicting, and problem-solving. We imagine what we would buy at the store, how a meeting at work might go, how if we had only said something a different way we could have avoided that fight with our spouse, and so on. Simply put, imagination is the ability to ask, “What’s possible?” Imagination can be engaged for our subjective experiences (e.g., imagining how life would have been different if path B had been chosen instead of path A), and for what might be more relevant to others (e.g., works of art, new policies, technologies).

Simply put, imagination is the ability to ask, “What’s possible?”

Even though Shtulman’s case for imagination is grounded in this “what’s possible” definition, how it is intertwined with closely related constructs such as “creativity” and “innovation” is somewhat less clear. What can be discerned from his book is that while imagination can be collaborative in the sense that we draw on human knowledge to ask, “what if,” it is largely a personal endeavor. Creativity, on the other hand, is the product of imagination that can be shared with others. Building upon imagination and creativity, innovation is the product of extraordinary imagination and can be developed and refined.

Mechanisms for Expanding What’s Possible

What are the proposed means by which we expand our knowledge, thereby improving the likelihood that we shift our imagination from the ordinary to the extraordinary? Shtulman outlines three ways of learning, specifically through: (1) examples, (2) principles, and (3) models.

The first mechanism, learning through examples, involves learning about new possibilities via other people’s testimony, demonstrations, empirical discoveries, and technological creations. Through education, others’ knowledge becomes our knowledge. However, learning through examples is the easiest but also the most limited means of expanding our imagination. On one hand, new possibilities are added to our database of what could be; on the other, we risk fixating on the suboptimal (yet adequate) solution we have learned; as Shtulman notes, we “privilege [our] expectation over observation.” For example, we tend to copy both the necessary and the unnecessary actions of others when trying to achieve the same goal—we fixate on the solution we are familiar with rather than expend the little effort required to abstract a more efficient one. Children are even more susceptible to this. For example, imagine you see a toy with a handle stuck at the bottom of a long tube, and you are provided with a straight pipe cleaner. How might you reach and retrieve the toy? You likely imagine bending the pipe cleaner, yet most preschoolers tasked with reaching the toy in this scenario are unable to imagine how the pipe cleaner could be used as a sufficient tool.

The second mechanism, learning through principles, refers to generating a new collection of possibilities by learning about abstract theories about “how” and “why” things operate. These include learning about scientific/cause-and-effect, mathematical, and ethical principles. As a means for expanding imagination, principles are more valuable than examples because they can help us extrapolate possibilities from one situation and apply it across different domains. One illustrative example is the physicist Ernest Rutherford, who won a Nobel prize in chemistry. Rutherford hypothesized (correctly) that electrons, like planets orbiting a sun, may orbit a nucleus. By using the principle of gravity and applying it in a different context, Rutherford generalized an insight from physics to innovate in the field of chemistry. Engaging with principles allows us to practice applying our knowledge and better understand novel relationships. While most of us are not scientists striving to win a Nobel prize, we can still learn new principles that expand our imagination. However, principles can be overgeneralized, and Shtulman argues that new applications should still be tested and replicated to confirm the connection.

An artist recreates childrens’ drawings as if they were real creatures. (Source: Things I Have Drawn)

The third mechanism, learning through models, might be the most exciting as it concerns expanding our ideas about what is possible by immersing ourselves in simulated versions of reality that can be manipulated with little to no consequences. These simulations allow for personal reflection through the process of mental time travel. This includes expanding our imagination through pretense (i.e., pretend play), fiction, and religion. Pretense allows us to expand our symbolic imagination by toying with alternative possibilities somewhat rooted in reality because the real-world elements of pretend play help to make it meaningful. For example, when children and many adults are asked to draw an animal that doesn’t exist, the product is usually an amalgamation of existing animal parts rather than a completely unique creature. Such mental play supports the development of logical reasoning. Through different mediums such as books and film, fiction expands our imagination by allowing us to experience the social world through the eyes and thoughts of others. We see how others react to situations we haven’t experienced and contemplate how we might respond if we were in their shoes.

You have to represent reality before you can tinker with it, to know the facts before you can entertain counterfactuals.

Religion is rooted less in the here and now, but it may enable us to expand metaphysical ideas and explore moral reasoning by directing thoughts and behavior according to the core values of a specific faith. Ultimately, models allow us to experience the lessons of working out various problems without the risks associated with acting on them in real life. On the other hand, models may sometimes communicate false information that we mark as true. Though models may sometimes lead us astray, Shtulman argues that they “provide the raw materials. … You have to represent reality before you can tinker with it, to know the facts before you can entertain counterfactuals” (p. 12).

The numerous examples that Shtulman provides for how examples, principles, and models expand imagination make a convincing case for the central thesis of his book—that, contrary to the current conventional wisdom, children’s lack of knowledge, experience, and reflection makes them less imaginative than adults. However, the attempts to distinguish many overlapping concepts within the book (e.g., religious models vs. fictional models; social imagination vs. moral imagination) are sometimes disorienting.

Should You Read This Book?

Will this book provide you with a specific list of ways to quickly develop a more “imaginative mindset” for yourself and others? No, it is not a self-help book. Instead, you’ll spend hours on an engaging (and, dare we say, nourishing) tour of the limitations and achievements of human imagination. By the end, you’ll know a lot more about how the human mind develops and reasons, and about the cognitive mechanisms that impede and enhance innovation across eras, societies, and an individual lifetime. Through your newfound knowledge, you may begin to imagine solutions to both personal and global challenges that you hadn’t considered before.

Categories: Critical Thinking, Skeptic

The Asteroid That Made a Mouse Into a Man

Mon, 05/05/2025 - 6:06pm

Kids love dinosaurs. I know I certainly did. So much so that when I was in kindergarten and still hadn’t learned to read, I kept pestering my brother (5 years my elder) to keep bringing home books about them from the school library just so I could look at the pictures. He finally asked our mother to intervene and explain to me that they simply wouldn’t let him renew the same books every week.

And once kids—or adults, in the case of frequenters of the Creation Museum—get past taking The Flintstones as gospel, they want to know, “What killed the dinosaurs?” and “Why did those big dangerous dinos all die when so many other creatures survived?”

When I was growing up in the 1950s, the usual answer was that while they sure were big, as the late, great Muhammad Ali would say of his opponents, they “didn’t have a chaaaance” because they were just “too slow, too dumb,” and yes, “too ugly!” Smaller but much smarter, nice furry mammals showed up on the evolutionary time scale and gobbled up the dinosaurs’ eggs. After all, those newly arrived mammals were obviously superior in all but size—they had insulating hair rather than conductive scales, carried their young rather than laying eggs exposed to predation, and, being warm-blooded, they were active and fast, while those dinos were cold-blooded, slow, and so, opposite to Count Dracula, only active while the sun shined. And dino brains were kinda small—especially compared to their huge body size. But those mammals had much bigger brains relative to their much smaller body size! A Darwinian drama where the smart little guys win, ready-made for an animated Disney drama (cue Stravinsky’s Rite of Spring) or a classroom film—Triumph of the Nerdy Mammalians.

There have been other explanations for Dino-geddon, before and since. We now know that the dinosaurs did not go extinct because:

  • They just got too big. Why not? Because the biggest dinosaurs lived in the Jurassic Period (201.4 to 145 million years ago), which was millions of years before the geologically sudden mass extinction at the end of the Cretaceous, approximately 66 million years ago.
  • The arrival of those smart little egg-eating mammals caused their extinction. Why not? Because mammals had actually coexisted with dinosaurs for 150 million years before Dino-geddon.
  • They were thick-headed, but thin-shelled. Why not? Some of their eggs had thin shells, some thick. And living birds (which we now know are their direct lineal descendants) and reptiles (collateral cousins) have survived quite well laying their own thin-shelled eggs.
  • Climate change made the earth get hotter or colder—so all the dinosaurs hatched as males. (Sex is determined by outside temperature among living gators and crocs; too hot or too cold yields females, while males need the “just right” temperatures). Why not? It’s somewhat hard to test whether dinosaurs even had temperature-dependent sex determination (TSD), but why did the direct ancestors of today’s TSD gators and crocs survive?
  • The dinosaurs really were just too dumb! Why not? As best we can guess, the intelligence of various dinosaur species varied. But they all went extinct, not just the dumber ones. (Well, kind of. Their bird-brained relatives not only survived but thrived). And whatever their intelligence, they were certainly brighter than many other animal groups that did survive. And it wasn’t just dinosaurs—around half of the species living at that time went extinct.

As for the aforementioned, foreordained, straight-ahead Darwinian drama often shown in the schoolbooks of my time that started with a mindless amoeba at the bottom of the Scala Naturae and progressed up ascending rungs of increasing intelligence and consciousness until modern man—usually White and kinda Nordic—emerged at the top? Alas, as with most such stories, it just ain’t so.

We now know what actually did happen. In 1980, geologist Walter Alvarez and his father, Nobel Prize winning physicist Luis, proposed that an asteroid collision wiped out the dinosaurs. Since then, evidence for their hypothesis has accumulated, as has evidence against the aforementioned alternatives.

So it wasn’t bad brains that got the big, dumb, ugly dinos. It was bad luck!

But what if that asteroid had hit some other piece of space junk and altered course just enough to miss hitting the Earth? Skeptic icon Stephen Jay Gould famously opined in Wonderful Life (1989), “Replay the tape [of life] a million times … and I doubt that anything like Homo sapiens would ever evolve again.”

Paleontologist Dale Russell with his Dinosauroid—a hypothetical, human-shaped theropod, invented during the early 1980s and sculpted by Ron Séguin (Photo credit: Canadian Museum of Nature)

Right? Well, maybe half right—but then, maybe not. Seven years before Gould declared that evolution (indeed, history) operated more on the basis of contingency than on necessity, paleontologist Dale Russell, Curator of Fossil Vertebrates at the Canadian Museum of Nature, had proposed, as a thought experiment, the possibility that some relatively brainier dinosaur lineage might eventually have evolved into a big-brained dinosauroid with forward-looking eyes, an erect stance, and grasping hands, had that asteroid only missed Planet Earth. Why? Because of convergent evolution, that is, when two organisms look and/or behave in a very similar way even though they’re only distantly related. And that means they’ve evolved those similarities independently rather than inheriting them from a common ancestor. Convergence in evolution happens regularly—as does non-convergence.

Here are a few examples:

  • Sharks, dolphins, and the extinct ichthyosaurs each developed a streamlined body to swim far faster than any Olympian.
  • Camera-type eyes evolved separately in mammals, and earlier in cephalopods (octopuses and squids).
  • Some sort of opposable “digit” evolved separately in primates, opossums, koalas, giant pandas, and chameleons that allows for grasping—though the primates may have first evolved theirs to move through the trees. The giant panda hijacked a wrist bone so it could be used for grasping leaves. And only millennia later did humans hit upon using their opposable thumbs to type text messages.
  • Flight—a major accomplishment in transformation of locomotion—evolved separately among insects, in the extinct pterosaurs, and in mammals (bats) and birds, only to be lost later when there was an advantage in doing so to assist in running (ostriches) or swimming (penguins).
  • Echolocation is a pretty demanding feat of bioengineering. Yet it evolved separately among cetaceans (whales and dolphins) and bats.
  • A form of bio-antifreeze evolved separately in Arctic fish and in Antarctic fish, which are only very distantly related.
  • There are many more examples, including really unpleasant ones like stinging and blood sucking.

My personal favorite example of convergent evolution is Ankylosaurus (left) and Glyptodon (right). I had an Ankylosaurus in my childhood set of dinosaur models and was surprised to find out those two tank-like herbivores weren’t even distantly related when I first encountered Glyptodon in a textbook years later.

Ankylosaurus was indeed a dinosaur that lived from about 160 to 65 million years ago; Glyptodonts, however, were mammals that lived only 38 million to 10 thousand years ago, of which Glyptodon is the best-known species. Each had an armor-like carapace (like a Galápagos tortoise), large body size, stiff back, and club-like tail that the fossil record shows evolved to become stiff before the tip of the tail expanded into a dangerous weapon. Each evolved similar traits to take advantage of a particular ecological niche because, despite the difference in time, there can only be a few good solutions to similar selective pressures.

So, play the tape only a few times, and some of those times something very similar happens (though not always). The real questions, then, are not if convergent evolution can happen, but when, why, and how much it happens.

Not Necessarily the Same, but Similar

So if the asteroid had veered off just a little along the way, maybe some dinosaurs could have evolved higher intelligence and, starting with Troodon, evolved into something like Russell’s hypothetical forward-looking, erect (which Troodon already was), big-brained dinosauroid with grasping hands that could have inherited the Earth. Even before Russell, another Skeptic icon, Carl Sagan, musing on extraterrestrial intelligence in The Dragons of Eden, speculated that, had they not gone extinct, one group of dinosaurs might have achieved brain size and intelligence sufficient even to develop an octal numbering system, given their number of digits.

The point here is that Russell’s argument was a thought experiment to prompt us to consider what evolution is and how it works.

The Meaningful Measurement of Minds

Defining intelligence for humans is hard enough, and trying to measure it has long been open to debate and criticism. But since neither extinct species nor ETs can take IQ tests, how can anyone even speculate about their intelligence?

Well, in the case of extinct species, but not ETs until and unless some UFOlogist produces the real remains of one, we often have their fossilized skeletons. Particularly instructive is the brain case—the cranial bones that once enclosed their brain. And from the brain case we can get an endocast—an internal cast of the brain once so enclosed. Sometimes we have to make an internal cast by inserting some rubbery material. But sometimes we get lucky and nature has already made an endocast through fossilization. The advantage here is that we not only have the overall size and shape of the brain, we also can determine the proportions occupied by the different brain areas (lobes) and in some cases even something about the extent of folding. And folding is important because it allows more brain matter to fit within a skull of given size.

So much for brain size; what about intelligence? Over the long haul of evolutionary time and the wide range of animal species, we can observe a strong relation between neural size and neural complexity on the one hand and general behavioral complexity and adaptability on the other. And this is more so within a particular evolutionary lineage than when comparing across lineages. Of course, there will be exceptions for specialized abilities, and the strength of the relation becomes harder to tease out if we try to compare individuals or groups within a given species.

Brain size, then, gives us one metric by which to estimate the intelligence of both living and extinct animals. But we know from observing living animals that brain size varies a lot just based on sheer body size. Given that neurons have a relatively uniform size across species, a more meaningful measure of neural complexity is brain size relative to body size. And that’s why for so long dinos have been considered so dumb. Despite their massive body sizes (as estimated from their fossilized skeletons) they had really minuscule brains (as approximated by endocasts). Well, at least most did.

Neuroscientist Harry Jerison developed an even more sophisticated and accurate measure termed the Encephalization Quotient (EQ). And from EQs it’s then possible to estimate the number of Extra Neurons. The Encephalization Quotient for a species is the ratio between its observed brain size (whether measured directly at autopsy/necropsy or from an endocast) and the brain mass predicted for that species given its body size and taxon. More technically, the predicted value comes from a nonlinear regression across a range of related reference species. By comparing a species’ observed brain mass to the brain mass predicted for an animal of its taxonomic group and average body size, we can get a measure of its Extra Neurons. And Extra Neurons are like extra RAM in your computer or smartphone—the more of them you have, the more information you can process and the faster you can do so.
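To make the arithmetic behind EQ concrete, here is a minimal Python sketch under stated assumptions: it uses the commonly cited Jerison mammalian reference fit (expected brain mass ≈ 0.12 × body mass^(2/3), with masses in grams), and the species figures below are rough illustrative numbers, not values from the article or its table.

```python
# Minimal sketch (not from the article) of a Jerison-style Encephalization
# Quotient calculation. ASSUMPTIONS: the allometric reference fit
# expected_brain = k * body_mass**b with the commonly cited mammalian values
# k = 0.12 and b = 2/3 (masses in grams); other taxa need their own fit.

def expected_brain_mass_g(body_mass_g: float, k: float = 0.12, b: float = 2 / 3) -> float:
    """Brain mass (g) predicted for a typical member of the reference taxon."""
    return k * body_mass_g ** b


def encephalization_quotient(brain_mass_g: float, body_mass_g: float) -> float:
    """EQ = observed brain mass / brain mass predicted from body mass alone."""
    return brain_mass_g / expected_brain_mass_g(body_mass_g)


if __name__ == "__main__":
    rough_examples = {            # (brain g, body g) -- illustrative figures only
        "human": (1_350, 65_000),
        "chimpanzee": (400, 45_000),
        "domestic cat": (30, 4_000),
    }
    for species, (brain_g, body_g) in rough_examples.items():
        print(f"{species}: EQ ≈ {encephalization_quotient(brain_g, body_g):.1f}")
```

The point of dividing by the allometric expectation is that it removes the effect of sheer body size, so very differently sized animals can be compared on roughly the same scale; an EQ near 1 means a brain about as large as expected for the animal’s size, while values well above 1 indicate the kind of surplus neural capacity discussed above.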

Ratios of Brain Size to Body Size & Encephalization Quotients for Various Species

As the figures in the table show, across a range of living species, the Ratios of Brain Size to Body Size and especially the Encephalization Quotients correspond pretty well with both our armchair estimations of the intelligence of living animals and with their behavioral complexity and learning capacity as determined by controlled experiments.

What about those dumb dinos? Well, the four-legged herbivores such as the well-known Brontosaurus, Stegosaurus, and Triceratops all fall below living gators and crocs (at about 1.0). But bipedal carnivores, a group that includes T. rex, have higher EQs. Some scientists have even claimed these carnivorous dinosaurs achieved EQs comparable to those of a baboon! More recent estimates have scaled those back to the crocodilian range. Troodon’s EQ ends up being possibly five times higher than that of your average dino. So, if the asteroid had missed, could dinosauroids have inherited the Earth?

The big problem for Russell’s thought experiment is that dinosaurs likely had brains similar to those of birds. Assuming that their brains had the same avian nuclear type of pallial organization, rather than the mammalian-type cortical organization, no dinosaur could have achieved the brain complexity required for higher, mammalian-type behavioral complexity. Or so it was assumed.

Mammalian cortical brains have a laminar architecture—they are arranged in layers, one atop the other (think plywood, or a chocolate seven-layer cake). Avian brains have a pallial or nuclear architecture—clusters of similar neurons separated by masses of cells of a different type (think knots in a sheet of pine, or meatballs in a large tray of spaghetti). The cortical organization of mammalian brains allows more layers to be packed in, one on top of the other (like stuffing as many clothes into your suitcase as possible). With the brain cells close to one another, transmission is short, simple, and fast. And the layers can be folded, like squishing up a towel, so that you can fit even more into a given space. Avian-type brains, on the other hand, simply could not achieve the volume or the neural transmission speed required to support higher intelligence. At least that was the theory. However, as Albert Einstein is said to have said, “In theory there is no difference between theory and practice—in practice there is.”

If the asteroid had missed, could dinosauroids have inherited the Earth?

When inferring behavior from brain size or even brain structure—let alone from the endocasts of extinct species—we see only through a glass, darkly. Looking more closely at the actual behavior of living birds has now demonstrated that while most birds can fly, their minds are anything but flighty. Pigeons can distinguish cubist-style paintings from impressionist ones, crows not only make useful tools but pass on those skills to others, and parrots can learn words and use them to communicate with us. Pigeons can even be trained to communicate their differing internal experience upon receiving uppers, downers, or placebos—to another pigeon! Bird brains have now racked up so many cognitive accomplishments that some neuroanatomists have argued that the cortex-like cognitive functions of the avian pallium demand a new neuroanatomical terminology, one that better reflects not the differences but the homologies (similarities in function) between avian and mammalian brains.

So bird brains—and by implication smart dinosaur brains—are sufficiently high on the evolutionary scale, and sufficiently complex, to allow for complex cognitive behaviors. And just how much neural complexity is required for complex cognition? Well, we now know that honeybees, insects with a vastly simpler nervous system, are able to discriminate Monet paintings from Picasso’s after extracting and learning the characteristic visual information inherent in each painting style.

Still, only humans can solve the mirror self-recognition test. OK, only humans and the great apes. Wait, only humans and primates—no, humans, primates—and elephants—and dolphins and killer whales. Actually, given the proper training, magpies can too. And so can those pigeons! And that’s for a test that uses visual stimuli. As with public opinion polls, the answer you get in an experiment may all lie in how you ask the question. Vision is a dominant sense for humans—and for those birds as well. Yet, as dog owners know, for canines—unlike humans or birds—olfaction is the dominant sense over vision. Before we can truly evaluate the intelligence of other species, we need to at least make an attempt at understanding the world as they experience it. And that’s just what one German biologist did.

Umwelts and the Enneadic Brain of the Octopus

Baron Professor Jakob Johann von Uexküll (1864–1944) was a German biologist whose research ranged from physiology to animal behavior. His most important contribution was introducing the concept of Umwelt and developing its importance for biology, specifically for the understanding of animal behavior. Literally translated as the “around world,” von Uexküll defined the Umwelt as the surrounding environment as perceived by a particular animal species given its specific sensory system. That concept has since influenced fields ranging from sensory and cognitive biology to environmental design engineering, cybernetics, semiotics, and even existential philosophy.

To better understand just what von Uexküll meant by the term Umwelt, consider some very relevant differences between dogs and people. First, dogs have only two-way (dichromatic) color vision, not three-way (trichromatic) vision like humans. Our eyes have three types of color receptors, termed cones, that allow us to recognize and identify a palette consisting of reds, blues, greens, and their combinations. Dogs, on the other hand, possess only two types of cones—blue and yellow. A dog’s Umwelt does not include the range of colors from red to green (along with blue) that we see but rather just shades of yellow, blue, and grey. Grass, for example, only appears yellowish or brown to dogs.

But dogs can smell so much more than we can. It has been estimated that dogs can smell anywhere from 1,000 to 10,000 times better than humans. They have 40 times more smell-sensitive receptors, and their nasal cavities are far more complex thus amplifying their advantage in mere number of receptor cells. Some breeds, such as bloodhounds, that have been specifically bred for scent tracking are more sensitive sniffers than others. Those such as whippets and greyhounds, bred for visual tracking of fast-running prey, have sacrificed some olfactory for enhanced visual acuity, though even their sense of smell far exceeds ours.

And whereas vision is, well, line-of-sight, and transitory (out of sight, out of mind), smell is multidirectional (though affected by wind) and lasting. Even among us olfactorily-challenged humans, sensations of smell evoke some of our strongest and most enduring emotional reactions, from the sensual scent of a loved one’s perfume or cologne lingering on the pillow case to the stench of rotting garbage left uncollected on the big city street below. A mind that has evolved to handle predominantly olfactory input will construct a far different Umwelt than one built around overwhelmingly visual stimuli. And then there are the cetaceans, especially porpoises and dolphins, who construct their mental world based on auditory stimuli and whose motor responses have evolved to function in a different medium (water). While the canine olfactory Umwelt and the cetacean auditory Umwelt “map” to our human visual Umwelt, and vice versa, each “misses” a lot of the other two’s moment-by-moment experience.

Now suppose that it is not only the sensory and motor systems of different species that differ: what if the unit that processes those inputs and generates the outputs creating their respective Umwelts is vastly different as well? Consider, then, the octopus, smartest of the invertebrates. Octopuses are quite good at learning to get through mazes, and they employ tools in constructing their well-known “gardens.”

The octopus has the highest brain size to body size ratio among those without a backbone and about as many total neurons as a dog. Instead of one brain, it has nine! Well, one generalized, central hub in the head and a smaller specialized processor in each of its eight arms, each with its own set of neurons. And if an arm is removed, the octopus can regenerate it! What might be their mental map of the world?

If the octopus could evolve human-level intelligence, would it think in terms of dichotomies of good versus bad or political liberal versus conservative the way we do? Or would the octopus have a much more nuanced enneadic view of things given its nine brains? Could distributed brains ever evolve the level of intelligence achieved by centralized brains?

Have we, as a byproduct of our great success as a species, dumbesticated ourselves?

Truly accurate and meaningful measurement and comparison of the intelligence of various species, living or extinct, would therefore have to take into account their entire Umwelt: their sensory inputs, motor responses, and the structure of the intellect that processes them. Until then, we’re left to deal with approximations. The more closely related the subjects we’re comparing, the more accurate the comparison.

Intelligence Costs, So Stupidity Sometimes Pays

If there is such an advantage to increases in intelligence over the course of evolutionary history, why are there any “dumb” animals left? The most basic and perhaps only law of economics is “there’s no such thing as a free lunch.” Since you can’t get something for nothing, getting one thing means giving up something else. And animal intelligence, like artificial intelligence, is very expensive: the brain is the most energy-expensive organ in the body, with brain tissue using 20 times as much metabolic energy as an equivalent mass of muscle. Therefore, increases in intelligence must bring a significant selective advantage or they won’t take place—and they would not have taken place in so many lineages over the course of evolutionary history unless they did. But sometimes they don’t pay off.

There indeed can be evolutionary advantages in being stupid. If an organism can get by with its existing intelligence, increases may actually decrease survivability. Perhaps that’s why about half of the domesticated species decreased their brain size compared to their wild ancestors. So domestication, as often as not, results in “dumbestication.” And what of the seemingly “smarter”—to our minds—domesticated ones? Are dogs such as Border Collies that humans can train to herd sheep and obey over 500 commands, using both words and hand signals, really smarter than wolves that successfully navigate much harsher environments, often outsmarting both people and dogs?

Consider that the average human brain size has decreased over the last 10,000 years, with our transition from hunter-gatherers to agriculturalists. Being a hunter-gatherer calls for brains as well as brawn, and you’re regularly facing many life-determining novel situations compared to all those regular, repetitive days on the farm. Have we, as a byproduct of our great success as a species, dumbesticated ourselves, and, if so, is that still going on today? Does the relaxed selection pressure resulting from the benefits of modern technological society foster similar changes as our transition to agriculture? Clearly, evolving increased intelligence is not the sure and only path to survival and success, nor is such success guaranteed.

A Tale Told by an Intellect, Filled With Chance and Necessity

Nonetheless, over the course of evolution, there has been an increasing, self-perpetuating, competitive advantage to be derived from increases in neural complexity (which can be approximately, variously, though somewhat fuzzily estimated by brain size) and behavioral complexity (gauged in like manner through various tests of intelligence, reaction time, and cognition). Our best tests of intelligence involve the process of decontextualization—removing a stimulus from its immediate sensory meaning.

A prime example is when a set of arbitrary marks is used to represent sounds, and then some of those marks are used in a completely different context to represent concepts unrelated to those sounds. The equation E = mc² is about as decontextualized as you can get. There's nothing in those sounds or marks that directly represents energy, mass, or the speed of light, or the process of multiplying a quantity by itself, which can be represented geometrically by a shape called the square. Yet that equation has certainly increased the power in humanity's hand—enough to power or annihilate whole cities. Across human history, we see how our increased intelligence has been put to work to harness increasing amounts of that energy and turn it into increased levels of societal complexity.

And with that increase in intelligence, one hopes, there has been a similar competitive advantage in ever-increasing awareness of the environment, especially of the minds of other individuals in our own group and our own species, and, with that, in developing consciousness and even conscience.

Illustration by Chris Wisnia for SKEPTIC

Alas, the race of life is not to get ahead—it's to get ahead of somebody else: another species, whether predator or competitor; another group; an individual in the same group competing for food, range, or mates; rival relatives or siblings; even, in the case of twins and other multiple births, a womb-mate competing for resources. Differential outcomes depend on group and individual differences. And without individual differences, the very concept of intelligence itself would be meaningless—group differences being merely the aggregated individual differences.

Things didn't have to happen exactly as they did, but given enough chances, whether in various evolutionary lineages over the course of geological time or in other places in the cosmos over the course of astronomical time, the odds favor something very much, though by no means exactly, like that eventually happening somewhere, sometime. Will Homo sapiens' particular instantiation of "higher minds"—those with not only intelligence, emotion, and consciousness, but possibly even conscience—ever encounter lesser, equal, or superior others before our evolutionary run ends, whether that end comes by chance external events or is necessitated by a self-indulgent failure to use our intelligence?

The answer to that question lies less with scientists than with philosophers, to wit—

"Everything existing in the Universe is the fruit of chance and necessity." —DEMOCRITUS

"… or vice versa!" —YOGI BERRA
Categories: Critical Thinking, Skeptic

Covid Conspiracies and the Next Pandemic

Thu, 05/01/2025 - 2:24pm

While investigating Unidentified Anomalous Phenomena (UAP) for the UK Ministry of Defence, I was exposed to conspiracy theories that allege that the government is covering up proof of an alien presence. I’ve since become an occasional media commentator on conspiracy theories and have even been the subject of one myself, with some people claiming that I’m still secretly working for the government on the UAP issue. Most conspiracy theories are binary: we either did or didn’t go to the moon; Lee Harvey Oswald either did or didn’t act alone; 9/11 either was or wasn’t an inside job—and if it was an inside job, the choice is binary again: The Government Made It Happen or The Government Let It Happen.

The Covid pandemic generated multiple conspiracy theories, but the fact that most have been proven to be false shouldn’t lead people to conclude that what might be termed “the official narrative” about Covid is necessarily true in all aspects. It wasn’t.

A flawed “everyone’s at risk” narrative was promoted.

Covid wasn't a "plandemic" orchestrated by nefarious Deep State players. Neither did the vaccines contain nanobots activated by 5G phone signals. But not everything we were told about Covid was correct: lockdowns and cloth masks didn't have anywhere near the impact on slowing community spread or lowering mortality rates that was originally hoped for and subsequently claimed. Some studies now suggest the benefits were statistically insignificant. The vaccines didn't stop transmission. And in one staggering admission—written by a New York Times journalist, no less!—a child was statistically more likely to die in a car accident on the way to school than of Covid caught at school: "Severe versions of Covid, including long Covid, are extremely rare in children. For them, the virus resembles a typical flu. Children face more risk from car rides than Covid."

A flawed “everyone’s at risk” narrative was promoted, in a situation where elderly people and others with comorbidities were vastly more likely to have serious health outcomes. The benefits of natural immunity were downplayed, and obesity as a risk factor was hardly discussed, perhaps because of politically correct sensitivities about fat-shaming. Partly, all this was because Covid was new, with key pieces of the puzzle unknown—especially in the early days of the pandemic. Later, it reflected the difficulty of interpreting statistics and analyzing data, especially where there were different ways of doing so, in different countries, or at different times. The debate over whether someone died of Covid (i.e., the virus killed them) or died with Covid (they died of some other cause and happened to be infected with the virus) is one example of this.

Obesity as a risk factor was hardly discussed, perhaps because of politically correct sensitivities about fat-shaming.

Nothing exemplifies the more nuanced nature of Covid conspiracy theories than the lab leak debate. Was Covid a case of zoonotic emergence, centered on a wet market in Wuhan, or an accident involving the Wuhan Institute of Virology? According to previous assessments by the Office of the Director of National Intelligence, some parts of the U.S. Intelligence community favored one theory, some favored the other, while some were undecided. Then, on April 18, 2025,  www.covid.gov and www.covidtests.gov were both redirected to a new White House website titled “Lab Leak: The True Origins of Covid-19.”

Screenshot of the White House webpage: “Lab Leak: The True Origins of Covid-19.”

Particularly in the early days of the pandemic, the lab leak hypothesis was portrayed as a crazy conspiracy theory and was seen by many as being a rightwing dog whistle, along with any mention of Sweden’s more laissez-faire policies, the Danish Mask Study, and much more besides. This was part of the wider politicization of the virus, or rather, the official response to the virus. Broadly speaking, in the first weeks of the pandemic the American Left downplayed it, while the Right rang alarm bells, a trend that soon reversed entirely—ultimately the Left believed the pandemic was more serious than did the Right, and the Left supported the various mandates to a greater extent than the Right.

Should we err on the side of caution, especially in the beginning when we just don’t know, but do know the history of earlier pandemics?

Defining a conspiracy theory is tricky, and we shouldn't conflate an elaborately constructed false narrative with a disputed fact. But when the line can be blurred, and when "conspiracy theorist" is itself sometimes used as a pejorative, the polarized debate over Covid can be hard to navigate. "Covid vaccines didn't work" is false, but "Covid vaccines didn't stop transmission, so mandating them, especially for those at little risk, was unnecessary" is true. Then again, if there's any doubt at the time, why not err on the side of caution? Vaccination has proven to be among the most successful methods of modern medicine and much, much cheaper and less disruptive than shutdowns. "Masks didn't work" is false, but "cloth masks generally had only a statistically insignificant health benefit when deployed at scale" is true. Then again, when in doubt, should we err on the side of caution, especially in the beginning when we just don't know, but do know the history of earlier pandemics?

Why does any of this matter, especially as the pandemic fades into the rearview mirror? First, the truth is important, and we owe it to ourselves and to posterity to tell as full and accurate a story as possible, especially about such a major, impactful event. Second, we need to have a conversation about the failed response to Covid, because not only were the various mandates on lockdowns, masks, vaccines, and school closures much less effective than claimed, but many of those who questioned governmental and institutional narratives were demonized.

Authorities bet the farm on measures that were both divisive—mandates are almost always going to fall into this category—and ineffective.

On social media, dissenting voices were deplatformed or shadow-banned (a user's content is made less visible or even hidden from others without the user being explicitly banned, notified, or even aware that it has happened). So we never had an open and honest debate about possible alternative strategies, such as the Great Barrington Declaration authored by the Stanford physician-scientist and current NIH director Jay Bhattacharya. The authorities bet the farm on measures that were both divisive—mandates are almost always going to fall into this category—and ineffective. Dying on the hill of dragging traumatized 2-year-old children off airplanes because they couldn't keep a mask on was bizarre and even perverse, as was closing playgrounds, hiking trails, and beaches, and the ridiculous arrest of a lone paddleboarder off the coast of Malibu. Across the board, civil liberties were set back for years, while the consequences of school closures—both in terms of education and social development—have yet to be properly assessed (although preliminary studies indicate that students may be at least one year behind where they should be). And what about the level of preparedness of hospitals and medical equipment manufacturers? We need to talk about all this.

The next pandemic may have an attack rate and a case fatality ratio that would make Covid look like, well, the flu.

But most of all, this matters because of the next pandemic. It may be bird flu, the Nipah virus, or mpox. Or it may be a Disease X that comes suddenly and unexpectedly from left field. But it's inevitable, and the next pandemic may have an attack rate and a case fatality ratio that would make Covid look like, well, the flu. Such a pandemic would need a "we're all in this together" response, just when half the country would regard such a soundbite as an Orwellian reminder of what many refer to as "Covid tyranny." Trust in the public health system, and many other institutions, is at an all-time low. We need to depoliticize healthcare and ensure that never again do people misappropriate science by appealing to it but not following it ("masks and lockdowns, except for mass BLM protests"). We need a data-led approach and not a dogma-led one.

Having a full, robust and open national conversation about Covid—with accountability and apologies where necessary—is vital. That’s because identifying the mistakes and learning the lessons of the failed response to the last pandemic is essential in preparing to combat the next one.

Nick Pope’s new documentary film on which this essay is based is Apocalypse Covid. Watch the trailer here and the full film here.

Categories: Critical Thinking, Skeptic

The New Zealand Māori Astrology Craze: A Case Study

Wed, 04/30/2025 - 4:57am
“It is fundamental in science that no knowledge is protected from challenge. … Knowledge that requires protection is belief, not science.” —Peter Winsley

There is growing international concern over the erosion of objectivity in both education and research. When political and social agendas enter the scientific domain, there is a danger that they may override evidence-based inquiry and compromise the core principles of science. A key component of the scientific process is an inherent skeptical willingness to challenge assumptions. When that foundation is replaced by a fear of causing offense or by conformity to popular trends, what was science becomes mere pseudoscientific propaganda employed to reinforce ideology.

When Europeans formally colonized New Zealand in 1840 with the signing of the Treaty of Waitangi, the culture of the indigenous Māori people was widely disparaged and they were viewed as an inferior race. One year earlier, historian John Ward had described Māori as having "the intellect of children" and as living in an immature society that called out for the guiding hand of British civilization.1 The recognition of Māori as fully human, with rights, dignity, and a rich culture worthy of respect, represents a seismic shift from the 19th-century attitudes that permeated New Zealand and much of the Western world, and that were used to justify the European subjugation of indigenous peoples.

Since the 1970s, Māori society has experienced a cultural renaissance, with a renewed appreciation of the language, art, and literature of the first people to settle Aotearoa—"the land of the long white cloud." While speaking Māori was once banned in public schools, the language is now thriving and is an official language of the country. Learning about Māori culture is an integral part of the education system, which emphasizes that it is a treasure (taonga) to be treated with reverence. Māori knowledge often holds great spiritual significance and should be respected. Like all indigenous knowledge, it contains valuable wisdom accumulated over millennia, and while some of its ideas can be tested and replicated, it is not the same as science.

When political and social agendas enter the scientific domain there is a danger that they may override evidence-based inquiry

For example, Māori knowledge encompasses traditional methods for rendering poisonous karaka berries safe for consumption. Science, on the other hand, focuses on how and why things happen, like why karaka berries are poisonous and how the poison can be removed.2 The job of science is to describe the workings of the natural world in ways that are testable and repeatable, so that claims can be checked against empirical evidence—data gathered from experiments or observations. That does not mean we should discount the significance of indigenous knowledge—but these two systems of looking at the world operate in different domains. As much as indigenous knowledge deserves our respect, we should not become so enamoured with it that we give it the same weight as scientific knowledge. 

The Māori Knowledge Debate 

In recent years the government of New Zealand has given special treatment to indigenous knowledge. The issue came to a head in 2021, when a group of prominent academics published a letter expressing concern that giving indigenous knowledge parity with science could undermine the integrity of the country's science education. The seven professors who signed the letter were subjected to a national inquisition: there were public attacks by their own colleagues and an investigation by the New Zealand Royal Society into whether members who had signed the letter should be expelled.3

Ironically, part of the reason for the Society's existence is to promote science. At the core of the controversy is the issue of whether "Māori ancient wisdom" should be given equal status with science in the curriculum, which is the official government position.4 This situation has created tension in the halls of academia, where many believe that the pendulum has now swung to another extreme. Frustration and unease permeate university campuses as professors and students alike walk on eggshells, afraid to broach the subject for fear of being branded racist and anti-Māori, or subjected to personal attacks or harassment campaigns.

The Lunar Calendar 

Infatuation with indigenous knowledge and the fear of criticizing claims surrounding it have infiltrated many of the country's key institutions, from the health and education systems to the mainstream media. The result has been a proliferation of pseudoscience. There is no better example of just how extreme the situation has become than the craze over the Māori Lunar Calendar. Its rise is a direct result of what can happen when political activism enters the scientific arena and affects policymaking. Interest in the Calendar began to gain traction in late 2017.

An example of the Maramataka Māori lunar calendar (Source: Museum of New Zealand)

Since then, many Kiwis have been led to believe that it can impact everything from horticulture to health to human behavior. The problem is that the science is lacking, but because of the ugly history of the mistreatment of the Māori people, public institutions are afraid to criticize, or even take issue with, anything to do with Māori culture. Consider, for example, media coverage. Between 2020 and 2024, there were no fewer than 853 articles that mention "maramataka"—the Māori word for the Calendar, which translates to "the turning of the moon." After reading through each text, I was unable to identify a single skeptical article.5 Many openly gushed about the wonders of the Calendar and gave no hint that it has little scientific backing.

Based on the Dow Jones Factiva Database

The Calendar once played an important role in Māori life, tracking the seasons. Its main purpose was to inform fishing, hunting, and horticultural activities. There is some truth in the use of specific phases or cycles to time harvesting practices. For instance, some fish are more active or abundant during certain fluctuations of the tides, which in turn are influenced by the moon's gravitational pull. Two studies have shown a slight increase in fish catch when using the Calendar.6 However, there is no support for the belief that lunar phases influence human health and behavior, plant growth, or the weather. Despite this, government ministries began providing online materials that feature an array of claims about the moon's impact on human affairs. Because officials feared causing offense by publicly criticizing Māori knowledge, the scientific position was usually nowhere to be found.

Soon primary and secondary schools began holding workshops to familiarize staff with the Calendar and how to teach it. These materials were confusing for students and teachers alike because most were breathtakingly uncritical and there was an implication that it was all backed by science. Before long, teachers began consulting the maramataka to determine which days were best to conduct assessments, which days were optimal for sporting activities, and which days were aligned with “calmer activities at times of lower energy phases.” Others used it to predict days when problem students were more likely to misbehave.7

Because officials feared causing offense by publicly criticizing Māori knowledge, the scientific position was usually nowhere to be found.

As one primary teacher observed: “If it’s a low energy day, I might not test that week. We’ll do meditation, mirimiri (massage). I slowly build their learning up, and by the time of high energy days we know the kids will be energetic. You’re not fighting with the children, it’s a win-win, for both the children and myself. Your outcomes are better.”8 The link between the Calendar and human behavior was even promoted by one of the country’s largest education unions.9 Some teachers and government officials began scheduling meetings on days deemed less likely to trigger conflict,10 while some media outlets began publishing what were essentially horoscopes under the guise of ‘ancient Māori knowledge.’11

The Calendar also gained widespread popularity among the public as many Kiwis began using online apps and visiting the homepages of maramataka enthusiasts to guide their daily activities. In 2022, a Māori psychiatrist published a popular book on how to navigate the fluctuating energy levels of Hina—the moon goddess. In Wawata Moon Dreaming, Dr. Hinemoa Elder advises that during the Tamatea Kai-ariki phase people should: “Be wary of destructive energies,”12 while the Māwharu phase is said to be a time of “female sexual energy … and great sex.”13 Elder is one of many “maramataka whisperers” who have popped up across the country. 

By early 2025, the Facebook page "Maramataka Māori" had 58,000 followers,14 while another page on Māori astronomy, "Living by the Stars," had 103,000 admirers.15 Another popular book, Living by the Moon, also asserts that lunar phases can affect a person's energy levels and behavior. We are told that the Whiro phase (new moon) is associated with troublemaking. It even won awards for best educational book and best Māori language resource.16 In 2023, Māori politician Hana Maipi-Clarke, who has written her own book on the Calendar, stood up in Parliament and declared that the maramataka could foretell the weather.17

A Public Health Menace 

Several public health clinics have encouraged their staff to use the Calendar to navigate "high energy" and "low energy" days and to help clients apply it to their lives. As a result of the positive portrayal of the Calendar in the Kiwi media and on government websites, there are cases of people discontinuing their medication for bipolar disorder and managing contraception with the Calendar.18 In February 2025, the government-funded Māori health organization Te Rau Ora released an app that allows people to enhance their physical and mental health by following the maramataka to track their mauri (vital life force).

While Te Rau Ora claims that it uses "evidence-based resources," there is no evidence that mauri exists, or that following the phases of the moon directly affects health and well-being. Mauri is the Māori concept of a life force—or vital energy—believed to exist in all living beings and inanimate objects. The existence of a "life force" was once the subject of debate in the scientific community, where it was known as "vitalism," but the idea no longer has any scientific standing.19 Despite this, one of the app's developers, clinical psychologist Dr. Andre McLachlan, has called for widespread use of the app.20 Some people are adamant that following the Calendar has transformed their lives, and this is certainly possible given the belief in its spiritual significance. However, the impact would come not from the influence of the Moon, but from the power of expectation and the placebo effect.

No Science Allowed 

While researching my book, The Science of the Māori Lunar Calendar, I was repeatedly told by Māori scholars that it was inappropriate to write on this topic without first obtaining permission from the Māori community. They also raised the issue of “Māori data sovereignty”—the right of Māori to have control over their own data, including who has access to it and what it can be used for. They expressed disgust that I was using “Western colonial science” to validate (or invalidate) the Calendar. 

It is a dangerous world where subjective truths are given equal standing with science under the guise of relativism.

This is a reminder of just how extreme attempts to protect indigenous knowledge have become in New Zealand. It is a dangerous world where subjective truths are given equal standing with science under the guise of relativism, blurring the line between fact and fiction. It is a world where group identity and indigenous rights are often given priority over empirical evidence. The assertion that forms of "ancient knowledge" such as the Calendar cannot be subjected to scientific scrutiny because they enjoy protected cultural status undermines the very foundations of scientific inquiry. The expectation that indigenous representatives must serve as gatekeepers, giving their consent before someone can engage in research on certain topics, is troubling. The notion that only indigenous people can decide which topics are acceptable to research undermines intellectual freedom and stifles academic inquiry.

While indigenous knowledge deserves our respect, its uncritical introduction into New Zealand schools and health institutions is worrisome and should serve as a warning to other countries. When cultural beliefs are given parity with science, public trust in scientific institutions is jeopardized and misinformation can flourish, especially in areas such as public health, where the stakes are highest.

Categories: Critical Thinking, Skeptic

The Measure of the Wealth of Nations: Why Economic Statistics Matter

Tue, 04/29/2025 - 2:08pm
Are things getting better?
For whom? What does “better” mean?

The economic and social phenomena so clear in everyday experience are invisible in the standard national accounts and GDP (Gross Domestic Product) statistics. The current concept of value added used to construct GDP numbers does not correspond to the views many people hold about societal value. This disconnect has given momentum to the Beyond GDP movement and to those similarly challenging the metrics of shareholder value that determine how businesses act. The digitalization of the economy, in shifting the ways economic value can be created, amplifies the case for revisiting existing economic statistics.

Without good statistics, states cannot function. In my work focusing on both the digital economy and the natural economy, I have worked closely with official statisticians in the ONS (Office for National Statistics), BEA (Bureau of Economic Analysis), OECD (Organization for Economic Cooperation and Development), INSEE (National Institute of Statistics and Economic Studies), and elsewhere for many years. Without question there has been a widespread loss of belief in conventional statistics even among knowledgeable commentators, as the vigorous Beyond GDP agenda testifies.

Why Not Well-Being?

An alternative metric of social welfare that many people find appealing is the direct measurement of well-being. Economists who focus on well-being have differing views on exactly how to measure it, but the balance of opinion has tilted toward life satisfaction measured on a fixed scale. One such measurement is the Cantril Ladder, which asks respondents to imagine a ladder on which the best possible life for them is a 10 and the worst possible life is a 0, and then to rate their own current lives on that 0-to-10 scale.

Although people's well-being is the ultimate aim of collective action, using it as a measurement is problematic in several ways. One is the set of measurement issues highlighted in research by Mark Fabian. These include scale norming: when people state their life satisfaction as, say, a 7 on a scale of 1 to 10 at different time periods, they are doing so by reference to the scale rather than to events in their lives.12 One of the more firmly established behavioral facts is the idea of an individual set point, whereby individuals generally revert to an initial level of well-being after experiencing events that send it up or down, but this is hardly a reason for concluding that nothing can improve in their lives.

Although people’s well-being is the ultimate aim of collective action, using it as a measurement is problematic.

Another issue is that the empirical literature is atheoretical, providing a weak basis for policy intervention in people’s lives. The conclusion from my research project on well-being is that while national policy could certainly be informed by top-down life satisfaction survey statistics, at smaller scales people’s well-being will depend on the context and on who is affected; the definition and measurement of well-being should be tailored appropriately, and it is not a very useful metric for policy at an aggregate level.

Why Not an Alternative Index?

GDP is calculated by summing up the total value of all final goods and services produced within a country's borders during a specific period, typically a year. Over the years, several single indices have been proposed as alternatives to GDP. However, such indices internalize the trade-offs among their components to present a single number that advocates hope will dethrone conventional measures. Some of these are explicit about the social welfare framework they involve.

Another alternative is provided by Jones and Klenow (2016),3 who include consumption, leisure, inequality, and mortality in social welfare. They convert other indicators into “consumption-equivalent welfare,” which has a long tradition in economics.4 In their paper, they observe that France has much lower consumption per capita than the United States—it is only at 60 percent of the U.S. level—but less inequality, greater life expectancy at birth, and longer leisure hours. Their adjustment puts France at 92 percent of the consumption-equivalent level of the United States.
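To make the idea concrete, here is a deliberately simplified, illustrative Python sketch of how a consumption-equivalent comparison can work. It is not the Jones and Klenow specification: it assumes log utility of consumption, a log-normal consumption distribution (so inequality enters through its variance), a simple leisure bonus, and lifetime welfare that scales with life expectancy, and every parameter value is invented for illustration.

import math

# Illustrative only: a toy "consumption-equivalent welfare" comparison.
# Assumes log utility of consumption, a leisure bonus, an inequality penalty
# (log-normal consumption with variance sigma^2), and lifetime welfare
# proportional to life expectancy. All numbers are made up.

def flow_utility(c, leisure_value, sigma, u_bar=5.0):
    # Expected flow utility when consumption is log-normal:
    # E[log c] = log(mean consumption) - sigma^2 / 2
    return u_bar + math.log(c) - sigma**2 / 2 + leisure_value

def lifetime_welfare(c, leisure_value, sigma, life_expectancy):
    return life_expectancy * flow_utility(c, leisure_value, sigma)

def consumption_equivalent(country, benchmark):
    # The factor by which benchmark consumption could be scaled so that the
    # benchmark country is exactly as well off as `country`.
    w_country = lifetime_welfare(**country)
    w_benchmark = lifetime_welfare(**benchmark)
    return math.exp((w_country - w_benchmark) / benchmark["life_expectancy"])

usa = dict(c=1.00, leisure_value=0.00, sigma=0.6, life_expectancy=79)
france = dict(c=0.60, leisure_value=0.05, sigma=0.5, life_expectancy=82)

print(consumption_equivalent(france, usa))  # about 0.79 with these made-up numbers

Even in this crude version, longer life expectancy, more leisure, and lower inequality push the consumption-equivalent figure above the raw 60 percent consumption ratio, which is the qualitative point of the exercise; the exact number depends entirely on the assumed utility function and parameters.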

A well-established alternative to GDP is the Human Development Index (HDI), inspired by Nobel Prize-winning economist Amartya Sen's capabilities approach—improving access to the tools people use to live a fulfilling life. The index demonstrates the dangers of combining a number of indicators, each one measuring something relevant, without a conceptual structure for the trade-offs and for how the components should be weighted together. The late Martin Ravallion of the World Bank advocated for a multidimensional set of indicators, with the aggregation needed to construct them informed by talking to poor people about their priorities:

The role played by prices lies at the heart of the matter. It is widely agreed that prices can be missing for some goods and deceptive for others. There are continuing challenges facing applied economists in addressing these problems. However, it is one thing to recognize that markets and prices are missing or imperfect, and quite another to ignore them in welfare and poverty measurement. There is a peculiar inconsistency in the literature on multidimensional indices of poverty, whereby prices are regarded as an unreliable guide to the tradeoffs, and are largely ignored, while the actual weights being assumed in lieu of prices are not made explicit in the same space as prices. We have no basis for believing that the weights being used are any better than market price.5

Why Not a Dashboard?

One frequent proposal, which certainly has intuitive appeal, is replacing the political and policy focus on GDP growth and related macroeconomic statistics with a broader dashboard. But there are three big challenges related to what to display on the dashboard. First, which indicators? A proliferation of alternatives has focused on what their advocates think is important rather than being shaped by either theory or broad consensus, so potential users face an array of possibilities and can select whatever interests them. Second, there are trade-offs and dependencies between indicators, and although dashboards could be designed to display these clearly, often they do not. Third, how should the various component indicators be weighted or displayed for decision purposes?
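The weighting problem is easy to see with a toy calculation. In the hypothetical sketch below (invented indicators, scores, and weights), two equally defensible-looking weighting schemes reverse the ranking of two countries, which is exactly why a composite index or dashboard summary without a theoretical basis for its weights is hard to interpret.

# Toy illustration of the weighting problem: the same indicator scores yield
# opposite rankings under two different, equally arbitrary weighting schemes.
# All names and numbers are invented.

indicators = ["income", "health", "environment"]

country_a = {"income": 0.9, "health": 0.6, "environment": 0.4}
country_b = {"income": 0.5, "health": 0.7, "environment": 0.9}

def composite(scores, weights):
    return sum(weights[i] * scores[i] for i in indicators)

weights_1 = {"income": 0.6, "health": 0.3, "environment": 0.1}  # income-heavy
weights_2 = {"income": 0.2, "health": 0.3, "environment": 0.5}  # environment-heavy

print(composite(country_a, weights_1), composite(country_b, weights_1))  # 0.76 vs 0.60: A ranks higher
print(composite(country_a, weights_2), composite(country_b, weights_2))  # 0.56 vs 0.76: B ranks higher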

Table 1 lists the headline categories for four frequently cited dashboards, showing how little they overlap. The selection of indicators to represent an underlying concept is evidently arbitrary, in the sense that the lists do not have a clear theoretical basis, and the selection of indicators is generally determined by what data are available or even by political negotiation. For instance, I was told by someone closely involved in the process that the debate within the UN about the SDGs (Sustainable Development Goals) included a discussion about the definition of a tree; depending on the height specified in the definition, coffee bushes might or might not be included, which for some countries would affect their measure of deforestation. Practicality and arbitrary decisions certainly affect mainstream economic statistics too, but these result from decades of debate and practice among the community of relevant experts informed by a theoretical basis. We are not there yet with dashboards.

Still, there are many things people care about in life, even if one confines the question to their economic well-being. Indeed, one of my criticisms of using growth of real GDP as a guide was the flawed assumption that utility can be collapsed to a single dimension.

Comprehensive Wealth

If not well-being directly measured, nor (yet) a dashboard, nor a single index number alternative to GDP, what are the options? Consider comprehensive wealth. First, it embeds sustainability because of its focus on assets. Adding in effect a balance sheet recording stocks—or equivalently a full account of the flow of services provided by the assets—immediately highlights the key trade-off between present and future consumption. One measurement challenge is to identify the economically relevant assets and collect the underlying data. Focusing on assets revives an old debate in economics during the 1950s and early 1960s between the “two Cambridges”—Cambridge, Massachusetts, home to MIT and Harvard (where I did my PhD), and Cambridge, England (where I now work). That debate was about whether it made any sense to think of (physical) capital as a single aggregate when this would inevitably be a mash-up of many different types of physical buildings and equipment.

The American Cambridge (led by Paul Samuelson and Robert Solow) said yes, and the concept has become the “K” of production functions and growth accounting. The British Cambridge (particularly Piero Sraffa and Joan Robinson) disputed this, arguing for example that different vintages of capital would embed different generations of technology, so even a straightforward machine tool to stamp out components could not be aggregated with a twenty-year-old equivalent. Even the review articles discussing the debate (Cohen and Harcourt 2003,6 Stiglitz 19747) take sides, but the mainstream profession has given total victory to the U.S. single-aggregate version.

A balance-sheet approach also helps integrate the role of debt into consideration of progress.

A second point in favor of a comprehensive wealth approach is that investment for future consumption always involves different types of assets in combination. This means it will be important to consider not just the stocks of different assets—whether machines, patents, or urban trees (which cool the ambient temperature)—but also the extent to which the services they provide are substitutes or complements for each other: What is the correlation matrix? A patent for a new gadget will require investment in specific machines to put it into production and may benefit from tree planting if the production process heats the factory; the trees may substitute for an air-conditioning plant and also for concrete flood defenses downstream if their roots absorb enough rain. A recent paper8 highlights the importance of understanding the complementarities: “So long as a particular irreversible capital good remains with its project, in many cases until it is scrapped, its contribution comes not solely on its own account but as a result of complementarity with other capital goods. The project’s income is not composed of distinct contributions from individual assets.”

A balance-sheet approach also helps integrate the role of debt into consideration of progress. Debt is how consumption occurs now at the expense of consumption in the future. In addition to financial debt, whether issued by governments or businesses or owed by individuals, there is a large and unmeasured burden of debt to nature. Across a range of natural capital assets, including a stable climate, past and current consumption is reducing future opportunities.

In summary, to track sustainable economic welfare, a comprehensive wealth approach is desirable, identifying separately the types of assets that contribute capital services to economic actors. Some of them have no natural volume units. (You can count the number of isotope ratio mass spectrometers, but how do you count the accumulated know-how of a top law firm?) Many will not have a market price at all, and if they do, it is likely not to be the shadow price relevant to social welfare, so the monetary valuation needed to aggregate individual assets (by putting them into a common unit of account) is problematic.9 And the complementarities and substitutability across categories need to be better understood, including non-market assets such as organizational capabilities. (The development economics literature talks about this in terms of institutions or social capital; Singapore had few physical assets and little manufacturing industry to speak of in 1946, so it clearly relied on other assets to become one of the world’s highest per capita income countries.)

This is a challenging measurement agenda to say the least, but it is an obvious path for statistical development. Some readers will find the sustainability argument the most persuasive. There are two other supporting rationales, though. One is that a significant body of economic theory (appealing to both neoclassical and heterodox economists) supports it:1011 An increase in comprehensive wealth, at appropriately measured shadow prices, corresponds to an increase in social well-being. The other is that the statistical community has already started heading down this path with the agreement of UN statistical standards for measuring (some) natural capital and the services it provides.
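As a rough illustration of the accounting identity behind that theoretical claim, the sketch below (with invented asset categories, stock levels, and shadow prices) approximates the change in social well-being as the shadow-price-weighted sum of changes in asset stocks; in practice the hard part is measuring the stocks and, above all, the shadow prices.

# Toy comprehensive-wealth calculation with invented numbers. The change in
# comprehensive wealth is the sum of changes in asset stocks valued at shadow
# prices, i.e., at their assumed marginal contribution to social well-being.

shadow_prices = {                 # hypothetical social value per unit of stock
    "produced_capital": 1.0,
    "human_capital": 1.2,
    "natural_capital": 3.5,       # markets typically under-price this
}

stocks_2020 = {"produced_capital": 500, "human_capital": 900, "natural_capital": 300}
stocks_2025 = {"produced_capital": 560, "human_capital": 930, "natural_capital": 270}

def change_in_comprehensive_wealth(before, after, prices):
    return sum(prices[a] * (after[a] - before[a]) for a in prices)

print(change_in_comprehensive_wealth(stocks_2020, stocks_2025, shadow_prices))
# Negative here (-9.0): the loss of natural capital outweighs the gains in
# produced and human capital, flagging unsustainability despite apparent growth.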

The 2025 System of National Accounts (SNA) revision will include a little more detail about how official statisticians should be implementing this. It is a giant step forward, conceptually and practically—although it does not go far enough in that it insists on the use of valuations as close as possible to market prices, when the main issue in accounting for the environment is that markets grotesquely misprice resource use. (SNA is an internationally agreed-upon framework for compiling economic data, providing a standardized approach to measuring economic activity, including GDP and other key economic variables, facilitating analysis and policy-making.)

Conclusion

Today’s official framework for measuring the economy dates from an era when physical capital was scarce and natural resources were seemingly unconstrained. Manufacturing was the leading sector of the economy, and digital technology was in its infancy. The original national accounts were created using a mechanical calculating machine, not on a computer. Digital technologies have transformed the structure of production and consumption, and at a time of such significant structural change the supply side of the economy needs to be taken seriously. Policy decisions taken now will affect people’s lives for decades to come because the structure of so many industries is changing significantly. It is no wonder industrial policy is back in fashion among policymakers.

Unfortunately, there are yawning gaps in our basic statistics. Official statisticians do important work even as many governments have been cutting their budgets. However, the focus of the statistical agencies is on incremental improvement to the existing System of National Accounts, which will change for the better, but not by much, when the new standards are confirmed in 2025. There are huge data-collection and analytical gaps in what is needed now (comprehensive wealth and time use), and a huge intellectual agenda awaits once those statistics become available. Just as the production of the first GDP figures gave birth to theories of economic growth, so sustainable balance sheet and time-use metrics will be generative for economists thinking about how societies progress.

The critiques of the earlier Beyond GDP movement have given way to a more constructive period of statistical innovation.

There is no doubt this area of economic statistics will continue to expand—because it is all too obvious that something new is needed. The critiques of the earlier Beyond GDP movement have given way to a more constructive period of statistical innovation—and I have given some examples of fruitful new methods and types of data.

However, I think some conclusions are clear. Measures that account for sustainability, natural and societal, are clearly imperative; the comprehensive wealth framework does this, and can potentially provide a broad scaffolding that others can use to tailor dashboards that serve specific purposes. A second conclusion is that while ideas have always driven innovation and progress, their role in adding value is even more central as the share of intangible value in the economy increases.

Finally, economic value added cannot be defined and measured without an underlying conception of value. This normative conception varies greatly between societies and over time, not least because of profound changes in technology and structure. It is a question of public philosophy as much as economics. Welfare economics has hardly moved on from the heyday of social choice theory in the 1970s, with social welfare defined as the sum of individual utilities; the philosophically rich capabilities approach has made little headway in everyday economics, except perhaps for development economics.

It is not yet clear whether the OECD economies will break away from the public philosophy of individualism and markets that has dominated policy for the past half century, despite all the critiques of neoliberalism; but the fact of popular discontent and its political consequences suggest they might. No wonder commentators so often reach for Gramsci’s famous Prison Notebooks comment, “The old order is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.”

Economic value added cannot be defined and measured without an underlying conception of value.

If a new shared understanding of economic value emerges from the changes underway now, it will look quite different. It will acknowledge the importance of context and variety, moving beyond averages and “representative consumers.” It will incorporate collective outcomes alongside individual ones, while recognizing the differences between them due to pervasive externalities, spillovers, and scale effects. And, it will embed the economy in nature, appreciating the resource constraints that limit future growth.

Excerpted and adapted by the author from The Measure of Progress: Counting What Really Matters © 2025 Diane Coyle. Reprinted with permission of Princeton University Press.

Categories: Critical Thinking, Skeptic

Don’t Ban the Book: Kids Can Benefit From Challenging Stories

Mon, 04/28/2025 - 10:03am

During her sojourns among the Inuit throughout the 1960s and 70s, pioneering anthropologist Jean Briggs observed some peculiar parenting practices. In a chapter she contributed to The Anthropology of Peace and Nonviolence, a collection of essays from 1994, Briggs describes various methods the Inuit used to reduce the risk of physical conflict among community members. Foremost among them was the deliberate cultivation of modesty and equanimity, along with a penchant for reframing disputes or annoyances as jokes. “An immodest person or one who liked attention,” Briggs writes, “was thought silly or childish.” Meanwhile, a critical distinction held sway between seriousness and playfulness. “To be ‘serious’ had connotations of tension, anxiety, hostility, brooding,” she explains. “On the other hand, it was highest praise to say of someone: ‘He never takes anything seriously’.”1 The ideal then was to be happy, jocular, and even-tempered.

This distaste for displays of anger applied in the realm of parenting as well. No matter how unruly the children's behavior, adults would refrain from yelling at them. So it came as a surprise to Briggs that Inuit adults would often purposely instigate conflicts among the children in their charge. One exchange Briggs witnessed involved an aunt taking her three-year-old niece's hand and putting it in another child's hair while telling her to pull it. When the girl refused, the aunt gave it a tug herself. The other child, naturally enough, turned around and hit the one she thought had pulled her hair. A fight ensued, eliciting laughter and cheers from the other adults, who intervened before anyone was hurt. None of the adults who witnessed this incident seemed to think the aunt had done anything wrong.

“Why Don’t You Kill Your Baby Brother?” The provocations didn’t always involve rough treatment or incitements to conflict but often took the form of outrageous lines of questioning.

On another occasion, Briggs witnessed a mother picking up a friend’s baby and saying to her own nursling, “Shall I nurse him instead of you?” The other mother played along, offering her breast to the first woman’s baby, saying, “Do you want to nurse from me? Shall I be your mother?”2 The nursling shrieked in protest, and both mothers burst into laughter. Briggs witnessed countless more of what she calls “playful dramas” over the course of her research. Westerners might characterize what the adults were doing in these cases as immature, often cruel pranks, even criminal acts of child abuse. What Briggs came to understand, however, was that the dramas served an important function in the context of Inuit culture. Tellingly, the provocations didn’t always involve rough treatment or incitements to conflict but often took the form of outrageous or disturbing lines of questioning. This approach is reflected in the title of Briggs’s chapter, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps.” However, even these gentler sessions were more interrogation than thought experiment, the clear goal being to arouse intense emotions in the children. 

The parents were training the children, using simulated and age-calibrated dilemmas, to develop exactly the kind of equanimity and joking attitude they would need to mature into successful adults.

From interviews with adults in the communities hosting her, Briggs gleaned that the purpose of these dramas was to force children to learn how to handle difficult social situations. The term they used is isumaqsayuq, meaning “to cause thought,” which Briggs notes is a “central idea of Inuit socialization.” “More than that,” she goes on, “and as an integral part of thought, the dramas stimulate emotion.” The capacity for clear thinking in tense situations—and for not taking the tension too seriously—would help the children avoid potentially dangerous confrontations. Briggs writes: 

The games were, themselves, models of conflict management through play. And when children learned to recognize the playful in particular dramas, people stopped playing those games with them. They stopped tormenting them. The children had learned to keep their own relationships smoother—to keep out of trouble, so to speak—and in doing so, they had learned to do their part in smoothing the relationships of others.3

The parents, in other words, were training the children, using simulated and age-calibrated dilemmas, to develop exactly the kind of equanimity and joking attitude they would need to mature into successful adults capable of maintaining a mostly peaceful society. They were prodding at the kids’ known sensitivities to teach them not to take themselves too seriously, because taking yourself too seriously makes you apt to take offense, and offense can often lead to violence. 

Are censors justified in their efforts at protecting children from the wrong types of lessons? 

The Inuit's aversion to being at the center of any drama and their penchant for playfulness in potentially tense encounters are far removed from our own culture. Yet their approach to socialization relies on an insight that applies universally, one that's frequently paid lip service in the West but even more frequently lost sight of. Anthropologist Margaret Mead captures the idea in her 1928 ethnography Coming of Age in Samoa, writing, "The children must be taught how to think, not what to think."4 People fond of spouting this truism today usually intend to communicate something diametrically opposite to its actual meaning, with the suggestion being that anyone who accepts rival conclusions must have been duped by unscrupulous teachers. However, the crux of the insight is that education should not focus on conclusions at all. Thinking is not about memorizing and being able to recite facts and propositions. Thinking is a process. It relies on knowledge to be sure, but knowledge alone isn't sufficient. It also requires skills.

Inuit Children. Photo by UC Berkeley, Department of Geography.

Cognitive psychologists label knowing that and knowing how as declarative and procedural knowledge, respectively.5 Declarative knowledge can be imparted by the more knowledgeable to the less knowledgeable—the earth orbits the sun—but to develop procedural knowledge or skills you need practice. No matter how precisely you explain to someone what goes into riding a bike, for instance, that person has no chance of developing the requisite skills without at some point climbing on and pedaling. Skills require training, which to be effective must incorporate repetition and feedback. 

It’s good to be honest, but should you lie to protect a friend?

What the Inuit understood, perhaps better than most other cultures, is that morality plays out far less in the realm of knowing what than in the realm of knowing how. The adults could simply lecture the children about the evils of getting embroiled in drama, but those children would still need to learn how to manage their own aggressive and retributive impulses. And explaining that the most effective method consists of reframing slights as jokes is fine, but no child can be expected to master the trick on first attempt. So it is with any moral proposition. We tell young children it’s good to share, for instance, but how easy is it for them to overcome their greedy impulses? And what happens when one moral precept runs up against another? It’s good to share a toy sword, but should you hand it over to someone you suspect may use it to hurt another child? Adults face moral dilemmas like this all the time. It’s wrong to cheat on your spouse, but what if your spouse is controlling and threatens to take your children if you file for divorce? It’s good to be honest, but should you lie to protect a friend? There’s no simple formula that applies to the entire panoply of moral dilemmas, and even if there were, it would demand herculean discipline to implement. 

Conservatives are working to impose bans on books they deem inappropriate for school children. Left-leaning citizens are being treated to PC bowdlerizations of a growing list of classic books.

Unfortunately, Western children have a limited range of activities that provide them opportunities to develop their moral skillsets. Perhaps it's a testament to the strength of our identification with our own moral principles that few of us can abide approaches to moral education that are in any regard open-ended. Consider children's literature. As I write, political conservatives in the U.S. are working to impose bans on books6 they deem inappropriate for school children. Meanwhile, more left-leaning citizens are being treated to PC bowdlerizations7 of a disconcertingly growing8 list of classic books. One side is worried about kids being indoctrinated with life-deranging notions about race and gender. The other is worried about wounding kids' and older readers' fragile psyches with words and phrases connoting the inferiority of some individual or group. What neither side appreciates is that stories can't be reduced to a set of moral propositions, and that what children are taught is of far less consequence than what they practice.

Do children’s books really have anything in common with the playful dramas Briggs observed among the Inuit? What about the fictional stories adults in our culture enjoy? One obvious point of similarity is that stories tend to focus on conflict and feature high-stakes moral dilemmas. The main difference is that reading or watching a story entails passively witnessing the actions of others, as opposed to actively participating in the plots. Nonetheless, the principle of isumaqsayuq comes into play as we immerse ourselves in a good novel or movie. Stories, if they’re at all engaging, cause us to think. They also arouse intense emotions. But what could children and adults possibly be practicing when they read or watch stories? If audiences were simply trying to figure out how to work through the dilemmas faced by the protagonists, wouldn’t the outcome contrived by the author represent some kind of verdict, some kind of lesson? In that case, wouldn’t censors be justified in their efforts at protecting children from the wrong types of lessons? 

What could children possibly be practicing when they read stories? Wouldn’t the outcome contrived by the author represent some kind of verdict or lesson?

To answer these questions, we must consider why humans are so readily held rapt by fictional narratives in the first place. If the events we’re witnessing aren’t real, why do we care enough to devote time and mental resources to them? The most popular stories, at least in Western societies, feature characters we favor engaging in some sort of struggle against characters we dislike—good guys versus bad guys. In his book Just Babies: The Origins of Good and Evil, psychologist Paul Bloom describes a series of experiments9 he conducted with his colleague Karen Wynn, along with their then graduate student Kiley Hamlin. They used what he calls “morality plays” to explore the moral development of infants. In one experiment, the researchers had the babies watch a simple puppet show in which a tiger rolls a ball to one rabbit and then to another. The first rabbit rolls the ball back to the tiger and a game ensues. But the second rabbit steals away with the ball at first opportunity. When later presented with both puppets and encouraged to reach for one to play with, the babies who had witnessed the exchanges showed a strong preference for the one who had played along. What this and several related studies show is that by as early as three months of age, infants start to prefer characters who are helpful and cooperative over those who are selfish and exploitative.

Photo by Natasha Jenny / Unsplash

That such a preference would develop so early and so reliably in humans makes a good deal of sense in light of how deeply dependent each individual is on other members of society. Throughout evolutionary history, humans have had to cooperate to survive, but any proclivity toward cooperation left them vulnerable to exploitation. This gets us closer to the question of what we’re practicing when we enjoy fiction. In On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd points out that animals’ play tends to focus on activities that help them develop the skills they’ll need to survive, typically involving behaviors like chasing, fleeing, and fighting. When it comes to what skills are most important for humans to acquire, Boyd explains: 

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch.10

Fiction, then, can be viewed as a type of imaginative play that activates many of the same evolved cognitive mechanisms as gossip, but without any real-world stakes. This means that when we're consuming fiction, we're not necessarily practicing to develop equanimity in stressful circumstances, as the Inuit do; rather, we're honing our skills at assessing people's proclivities and weighing their potential contributions to our group. Stories, in other words, activate our instinct for monitoring people for signals of selfish or altruistic tendencies, while helping us develop the underlying skillset. The result of this type of play would be an increased capacity for cooperation, including an improved ability to recognize and sanction individuals who take advantage of cooperative norms without contributing their fair share.

Ethnographic research into this theory of storytelling is still in its infancy, but the anthropologist Daniel Smith and his colleagues have conducted an intensive study11 of the role of stories among the Agta, a hunter-gatherer population in the Philippines. They found that 70 percent of the Agta stories they collected feature characters who face some type of social dilemma or moral decision, a theme that appears roughly twice as often as interactions with nature, the next most common topic. It turned out, though, that separate groups of Agta invested varying levels of time and energy in storytelling. The researchers treated this variation as an opportunity to examine what the impact of a greater commitment to stories might be. In line with the evolutionary account laid out by Boyd and others, the groups that valued storytelling more outperformed the other groups in economic games that demand cooperation among the players. This suggests that storytelling improves group cohesion and coordination, which would likely provide a major advantage in any competition with rival groups. A third important finding from this study is that the people in these groups knew who the best storytellers were, and they preferred to work with these talented individuals on cooperative endeavors, including marriage and childrearing. This has obvious evolutionary implications.

Remarkably, the same dynamics at play in so many Agta tales are also prominent in classic Western literature. When literary scholar Joseph Carroll and his team surveyed thousands of readers’ responses to characters in 200 novels from authors like Jane Austen and Charles Dickens, they found that people see in them the basic dichotomy between altruists and selfish actors. They write: 

Antagonists virtually personify Social Dominance—the self-interested pursuit of wealth, prestige, and power. In these novels, those ambitions are sharply segregated from prosocial and culturally acquisitive dispositions. Antagonists are not only selfish and unfriendly but also undisciplined, emotionally unstable, and intellectually dull. Protagonists, in contrast, display motive dispositions and personality traits that exemplify strong personal development and healthy social adjustment. They are agreeable, conscientious, emotionally stable, and open to experience.12

Interestingly, openness to experience may be only loosely connected to cooperativeness and altruism, just as humor is only tangentially related to peacefulness among the Inuit. However, being curious and open-minded ought to open the door to the appreciation of myriad forms of art, including different types of literature, leading to a virtuous cycle. So, the evolutionary theory, while focusing on cooperation, leaves ample room for other themes, depending on the cultural values of the storytellers.

Photo by João Rafael / Unsplash

In a narrow sense, then, cooperation is what many, perhaps most, stories are about, and our interest in them depends to some degree on our attraction to more cooperative, less selfish individuals. We obsessively track the behavior of our fellow humans because our choices of who to trust and who to team up with are some of the most consequential in our lives. This monitoring compulsion is so powerful that it can be triggered by opportunities to observe key elements of people’s behavior—what they do when they don’t know they’re being watched—even when those people don’t exist in the real world. But what keeps us reading or watching once we’ve made our choices of which characters to root for? And, if one of the functions of stories is to help us improve our social abilities, what mechanism provides the feedback necessary for such training to be effective? 

Fiction can be viewed as a type of imaginative play that activates many of the same evolved cognitive mechanisms as gossip, but without any real-world stakes.

In Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction, literary scholar William Flesch theorizes that our moment-by-moment absorption in fictional plots can be attributed to our desire to see cooperators rewarded and exploiters punished. Citing experiments that showed participants were willing to punish people they had observed cheating other participants—even when the punishment came at a cost13 to the punishers—Flesch argues that stories offer us opportunities to demonstrate our own impulse to enforce norms of fair play. Within groups, individual members will naturally return tit for tat when they’ve been mistreated. For a norm of mutual trust to take hold, however, uninvolved third parties must also be willing to step in to sanction violators. Flesch calls these third-party players “strong reciprocators” because they respond to actions that aren’t directed at them personally. He explains that 

the strong reciprocator punishes or rewards others for their behavior toward any member of the social group, and not just or primarily for their individual interactions with the reciprocator.14

His insight here is that we don’t merely attend to people’s behavior in search of clues to their disposition. We also watch to make sure good and bad alike get their just deserts. And the fact that we can’t interfere in the unfolding of a fictional plot doesn’t prevent us from feeling that we should. Sitting on the edge of your seat, according to this theory, is evidence of your readiness to step in.
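The experiments Flesch cites typically give an uninvolved bystander the option of paying a fee to fine someone who has cheated a third person. The sketch below shows only that payoff structure; the specific stakes are illustrative assumptions, not figures from any particular experiment.

```python
# Minimal third-party punishment setup, in the spirit of the costly
# punishment experiments Flesch cites: a bystander can pay a fee to fine
# a player who cheated someone else. The stakes below are illustrative
# assumptions, not figures from any particular study.

def third_party_punishment(cheater_gain=10, punisher_endowment=10,
                           punish_cost=2, fine=6, punish=True):
    """Return (cheater payoff, bystander payoff) with or without punishment."""
    if punish:
        return cheater_gain - fine, punisher_endowment - punish_cost
    return cheater_gain, punisher_endowment

print(third_party_punishment(punish=False))  # cheating pays: (10, 10)
print(third_party_punishment(punish=True))   # bystander pays 2 to dock 6: (4, 8)
```

Punishing leaves the bystander worse off than doing nothing, which is why strong reciprocity needs explaining at all, and why Flesch reads our eagerness to see fictional villains get their comeuppance as the same impulse on display.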

It doesn’t matter that a story is fictional if a central reason for liking it is to signal to others that we’re the type of person who likes the type of person portrayed in that story.

Another key insight emerging from Flesch’s work is that humans don’t merely monitor each other’s behavior. Rather, since they know others are constantly monitoring them, they also make a point of signaling that they possess desired traits, including a disposition toward enforcing cooperative norms. Here we have another clue to why we care about fictional characters and their fates. It doesn’t matter that a story is fictional if a central reason for liking it is to signal to others that we’re the type of person who likes the type of person portrayed in that story. Reading tends to be a solitary endeavor, but the meaning of a given story paradoxically depends in large part on the social context in which it’s discussed. We can develop one-on-one relationships with fictional characters for sure, but part of the enjoyment we get from these relationships comes from sharing our enthusiasm and admiration with nonfictional others. 

Children who read Harry Potter discuss which House the Sorting Hat would place them in, but you don’t hear many of them enthusiastically talking about Voldemort murdering Muggles.

This brings us back to the question of where feedback comes into the social training we get from fiction. One feedback mechanism relies on the comprehensibility and plausibility of the plot. If a character’s behavior strikes us as arbitrary or counter to their personality as we’ve assessed it, then we’re forced to think back and reassess our initial impressions—or else dismiss the story as poorly conceived. A character’s personality offers us a chance to make predictions, and the plot either confirms or disproves them. However, Flesch’s work points to another type of feedback that’s just as important. The children at the center of Inuit playful dramas receive feedback from the adults in the form of laughter and mockery. They learn that if they take the dramas too seriously and thus get agitated, then they can expect to be ridiculed. Likewise, when we read or watch fiction, we gauge other audience members’ reactions, including their reactions to our own reactions, to see if those responses correspond with the image of ourselves we want to project. In other words, we can try on traits and aspects of an identity by expressing our passion for fictional characters who embody them. The outcome of such experimentation isn’t determined solely by how well the identity suits the individual fan, but also by how well that identity fits within the wider social group. 

We obsessively track the behavior of our fellow humans because our choices of who to trust and who to team up with are some of the most consequential in our lives.

Parents worried that their children’s minds are being hijacked by ideologues will hardly be comforted by the suggestion that teachers and peers mitigate the impact of any book they read. Nor will those worried that their children are being inculcated with more or less subtle forms of bigotry find much reassurance in the idea that we’re given to modeling15 our own behavior on that of the fictional characters we admire. Consider, however, the feedback children receive from parents who respond to the mere presence of a book in a school library with outrage. What do children learn from parents’ concern that single words may harm or corrupt them? 

Kids are graduating high school with historically unprecedented rates of depression and anxiety.

Today, against a backdrop of increasing vigilance and protectiveness among parents, kids are graduating high school and moving on to college or the workforce with historically unprecedented rates of depression16 and anxiety,17 having had far fewer risky but rewarding experiences18 such as dating, drinking alcohol, getting a driver’s license, and working for pay. It’s almost as though the parents who should be helping kids learn to work through difficult situations by adopting a playful attitude have themselves become so paranoid and humorless that the only lesson they manage to impart is that the world is a dangerous place, one young adults with their fragile psyches can’t be trusted to navigate on their own.

Even pre-verbal infants are able to pick out the good guys from the bad.

Parents should, however, take some comfort from the discovery that even pre-verbal infants are able to pick out the good guys from the bad. As much as young Harry Potter fans discuss which Hogwarts House the Sorting Hat would place them in, you don’t hear19 many of them talking enthusiastically about how cool it was when Voldemort killed all those filthy Muggles. The other thing to keep in mind is that while some students may embrace the themes of a book just because the teacher assigned it, others will reject them for the same reason. It depends on the temperament of the child and the social group they hope to achieve status in.

Should parents let their kids read just anything? We must acknowledge that books, like playful dramas, need to be calibrated to the maturity levels of the readers. However, banning books deemed dangerous deprives children not only of a new perspective. It deprives them of an opportunity to train themselves for the difficulties they’ll face in the upcoming stages of their lives. If you’re worried your child might take the wrong message from a story, you can make sure you’re around to provide some of your own feedback on their responses. Maybe you could even introduce other books to them with themes you find more congenial. Should we censor words or images—or cease publication of entire books—that denigrate individuals or groups? Only if we believe children will grow up in a world without denigration. Do you want your children’s first encounter with life’s ugliness to occur in the wild, as it were, or as they sit next to you with a book spread over your laps? 

What should we do with great works by authors guilty of terrible acts? What about mostly good characters who sometimes behave badly? What happens when the bad guy starts to seem a little too cool? These are all great prompts for provoking thought and stirring emotion. Why would we want to take these training opportunities away from our kids? It’s undeniable that books and teachers and fellow students and, yes, even parents themselves really do influence children to some degree. That influence, however, may not always be in the intended direction. Parents who devote more time and attention to their children’s socialization can probably improve their chances of achieving desirable ends. However, it’s also true that the most predictable result of any effort at exerting complete control over children’s moral education is that their social development will be stunted.

Categories: Critical Thinking, Skeptic

The Lazarus Sign: When Faith and Medicine Diverge

Wed, 04/23/2025 - 5:50pm

My life changed in February of 1993. It began with an early morning phone call from a fellow student at our private Evangelical Christian college. I was informed that our mutual friend Tim had fallen asleep while driving home from a ski trip. He’d been critically injured in a terrible accident and was now lying unconscious in an Ohio hospital. Though the hospital was over 100 miles away and we had class that morning, we left immediately.

Arriving, we were advised to prepare ourselves before seeing him. We tried, but how can one do so? We walked in and recoiled at what was left of our friend. Others came. We took turns praying over our dying friend after being assured by our spiritual leader that, if we prayed hard enough and believed, Tim would be healed.

Hovering over his body, we began our prayer. We held hands as we closed our eyes, me taking Tim’s left hand as we pleaded for a miracle. As we did so, Tim lifted my hand about six inches into the air! I opened my eyes in wonderment and considered interrupting the prayer, but chose to wait so the others could see for themselves. As soon as our leader said “Amen,” and everyone opened their eyes, Tim’s strength left and my hand fell with his.

If he was brain dead, how could he lift my hand?

Unsure what had happened, I told the others about Tim lifting my hand. It was unanimously agreed that God was communicating with me through Tim. It was such a fantastic coincidence that it could only be attributed to divine intervention. We asked ourselves, “If he was brain dead, how could he lift my hand?” And why, if not to send a message from God, did he do so at the precise moments our prayer began and ended?

A doctor examined Tim and told his parents their son’s pupils were not responding to light, he was brain dead, and his body was shutting down. He respectfully advised them that they needed to prepare themselves for his death. The most devout among us corrected the good doctor, assuring him (and me, specifically) that Tim would rise again. The doctor kindly responded, “No. He has one foot in the grave.” Our leader contradicted him, reminding us that “Jesus had two feet in the grave.” I believed our leader.

Tim passed away three days later, as the doctor predicted he would. Our leader rationalized Tim’s death (and the false assurances that he would be healed) as having been God’s will. We convinced ourselves that Tim, as a fellow believer, was now rejoicing in heaven, where we would meet him when our time came. I incorporated the story of Tim raising my hand into my testimony as I turned my life around.

My dying friend’s disinhibited spinal cord told him to raise his hand.

Over the years I’ve come to accept that my life-changing miracle of a hand-raising while brain dead was, in actuality, explainable. The kind doctor who tried to prepare Tim’s parents probably knew exactly why Tim lifted my hand, and he knew it wasn’t from divine intervention. My dying friend’s disinhibited spinal cord told him to raise his hand. My hand was lifted by a “reflex arc”—a residual signal passing through a neural pathway in Tim’s spinal column and not, crucially, through his (no longer registering) brain.12 Neither Tim nor the Holy Spirit was responsible.

Photo by Aarón Blanco Tejedor / Unsplash

Raising one’s limbs, in reality, is common for those experiencing brain death.3 First reported in 1974, “brain death-associated reflexes and automatisms” are frequent enough to have gained a moniker, “the Lazarus Sign.”4 People experiencing brain death have been recorded doing much more than raising another’s hand too, including hugging motions for up to 30 seconds, rapidly jerking all four limbs for up to eight inches, and symmetric movement of both arms.5

Raising one’s limbs, in reality, is common for those experiencing brain death.

There is another seemingly inexplicable facet to the story, though: If raising my hand can be explained naturally, what then of the incredible coincidence that my hand was raised and lowered at the same moment when the group prayer began and ended?

Swiss psychologist Carl Jung might describe my experience as an example of “synchronicity,” i.e., an acausal connecting principle.6 According to Jung and his adherents, science cannot offer a reasonable causal connection to explain why a brain-dead man lifted my hand at the exact moment a prayer began and dropped it at the exact moment the prayer ended.7 Jung adherents claim the odds are so improbable that the connection must be cosmic.8

Interpreting Tim’s act of lifting my hand as a ‘miracle’ was the result of my creative license, probability, and desire to find meaning.

But science can explain the coincidence. My profound coincidence was causal. Interpreting Tim’s act of lifting my hand at a certain moment as a “miracle” was the result of my creative license, probability, and desire to find a pattern and meaning through trauma. In fact, research over the years has revealed much about the phenomenon of coincidence. This can be illustrated through a skeptical examination of a far more widely known set of coincidences: the eerie comparisons drawn between the assassinations of Abraham Lincoln and John F. Kennedy. The first such list appeared in a GOP newsletter the year after Kennedy’s assassination, and versions typically include the following:9

  • “Lincoln” and “Kennedy” each have seven letters.
  • Both presidents were elected to Congress in ’46 and later to the presidency in ’60.
  • Both assassins, John Wilkes Booth and Lee Harvey Oswald, were born in ’39 and were known by their three names, which were composed of fifteen letters.
  • Both presidents were succeeded by southerners named Johnson.
  • Booth ran from a theater and was caught in a warehouse; Oswald ran from a warehouse and was caught in a theater.
  • Oswald and Booth were killed before they could be put on trial.

And so on…10

How Coincidences Work

1. Creative License and the Role of Context

First, what counts as a coincidence is flexible: the more loosely one defines it, the more likely one is to notice one.11 Given enough creative license and disregarding context,12 one can find coincidences in any two events. Let us look, for example, at the two other presidential assassinations, those of James A. Garfield and William McKinley. Both “Garfield” and “McKinley” have eight letters, both were Ohioans, both served as officers on the same side in the Civil War, both were shot twice in the torso, and both of their successors were from New York state.

Creative license is also used to justify such coincidences: Booth ran from a theater and was caught in a warehouse; Oswald ran from a warehouse and was caught in a theater. Booth did run from Ford’s Theater, and Oswald was indeed apprehended in a movie house called “The Texas Theater.”13 John Wilkes Booth did not, however, get caught in a warehouse. A federal soldier named Boston Corbett shot him from outside a burning tobacco barn in Bowling Green, VA, on April 26, 1865. Booth was dragged out still alive and died later that day.14

Our brains are wired to create order from chaos.

Creative license is also at work in the claim that both presidents were elected to Congress in ’46 and later to the presidency in ’60. The apostrophe preceding each year obscures the glaring inconsistency that Lincoln and Kennedy were elected to these offices 100 years apart. In context, the “coincidence” doesn’t seem so incredible.

2. Probability

Coincidences are counterintuitive. Consider the probabilities behind three of the Lincoln and Kennedy coincidences:

Both presidents were elected to Congress in ’46 and later to the presidency in ’60. Representatives are elected to Congress only every two years and presidents every four, so all odd-numbered years are excluded from the start.

Both presidents were succeeded by southerners named Johnson. “Johnson” is second only to “Smith” as the most common surname in the U.S.15 Both northern presidents (Lincoln was from Illinois, Kennedy from Massachusetts) needed a southerner to balance the ticket. In the years following the American Civil War, it wasn’t until 1992 that a ticket with two southerners (Clinton and Gore) won the presidency.16

Oswald and Booth were killed before they could be put on trial.17 Booth and Oswald were the subjects of nationwide manhunts and unprecedented vitriol. It is little wonder they were murdered before their trials.

Being elected in years that end in the same two digits, having a successor with a common surname, and having an assassin who was killed before being brought to trial are not at all impossible; indeed, they are relatively probable. 
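The arithmetic behind this point can be made explicit with a rough simulation. The number of attributes compared and the per-attribute chance of a match below are illustrative assumptions, not measurements of anything about Lincoln or Kennedy.

```python
import random

# If you compare many attributes of two people (name lengths, election
# years, successors' surnames, birthplaces, and so on), some matches are
# expected purely by chance. Both parameters below are illustrative
# assumptions, not measurements.

def chance_of_eerie_parallels(n_attributes=50, p_match=0.05, trials=100_000):
    """Estimate the probability of finding 3+ matching attributes by chance."""
    hits = 0
    for _ in range(trials):
        matches = sum(random.random() < p_match for _ in range(n_attributes))
        if matches >= 3:
            hits += 1
    return hits / trials

print(f"P(3 or more matches) ≈ {chance_of_eerie_parallels():.2f}")
```

Under those assumptions, three or more “eerie” parallels turn up nearly half the time, which is why a determined list-maker can link almost any two historical figures.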

3. Looking for Meaning

Science has shown us that people who describe themselves as religious or spiritual (that is, those seeking meaning and those searching for signs) are more likely to experience coincidences.18 Our brains are wired to create order from chaos,19 and the days following each presidential assassination were overwhelmingly chaotic. The country was shocked when Presidents Lincoln and Kennedy were assassinated, and it seemed just too simple that such inspiring leaders could be shot down by two relative nobodies who would otherwise be forgotten by history.

Photo by Alexei Scutari / Unsplash

Was my experience with my dying friend a divine sign? Was it acausal? Probably not.

But both tragic events were really that simple. John Wilkes Booth shot Lincoln while the president was watching Our American Cousin in Ford’s Theater in Washington, DC.20 Lee Harvey Oswald shot John F. Kennedy from the sixth-floor window of the Texas School Book Depository in Dallas.21

Applying Creative License, Probability, and Looking for Meaning to My Profound Coincidence

Let us now return to my profound coincidence. I used creative license in accepting Tim raising my hand as miraculous. I was desperately looking for any sign that he could communicate with me and took it as such. My friend dying young in a car accident doesn’t defy probability at all. The National Safety Council reports that 6,400 Americans die annually from falling asleep while driving.22 Tim raising one of our hands is probable, too. Movement of the body from residual spinal activity has been found in up to a third of those suffering from brain death.23 During my time with Tim at the hospital, I was surrounded by Evangelicals who assured me of my friend’s resurrection. Being confronted with the unexpected loss of a loved one heightened my emotions. I was more susceptible to believing in miracles than in my normal, rational state.
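A back-of-the-envelope calculation shows how often such “miraculous” timing should be expected somewhere. Every number below is a hypothetical round figure chosen only to illustrate the shape of the argument; none is epidemiological data.

```python
# Rough base-rate sketch: if roughly a third of brain-dead patients show
# reflex movements, and families often pray or keep vigil at the bedside,
# some movements will coincide with emotionally charged moments by chance.
# All figures below are hypothetical round numbers for illustration only.

brain_deaths_per_year = 10_000     # assumed count for one country
reflex_movement_rate = 1 / 3       # "up to a third" show movements
bedside_prayer_fraction = 0.5      # assumed share with prayers at the bedside
salient_timing_chance = 0.1        # assumed chance a movement lands on a
                                   # memorable moment (e.g., prayer start or end)

expected_cases = (brain_deaths_per_year
                  * reflex_movement_rate
                  * bedside_prayer_fraction
                  * salient_timing_chance)

print(f"Expected 'miraculous' timings per year: {expected_cases:.0f}")
# With these assumptions, well over a hundred families a year would witness
# exactly this kind of coincidence, each convinced it was one of a kind.
```

Change the assumptions and the count shifts, but the lesson holds: given enough deathbed vigils, some reflex movements will coincide with emotionally charged moments by chance alone.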

Was my experience with my dying friend a divine sign? Was it acausal? Probably not. Science has shown me the spiritual “meaning” I once attributed to Tim raising my hand was, in reality, meaningless. As years have gone by, I still stay in touch with a few of my friends who surrounded Tim. We are middle-aged now, with children of our own. Sometimes we remember Tim together. And that’s enough.

Categories: Critical Thinking, Skeptic

The Vatican: City, City-State, Nation, or … Bank?

Mon, 04/21/2025 - 9:43am

Many think of Vatican City only as the seat of governance for the world’s 1.3 billion Roman Catholics. Atheist critics view it as a capitalist holding company with special privileges. However, that postage-stamp parcel of land in the center of Rome is also a sovereign nation. It has diplomatic embassies—so-called apostolic nunciatures—in over 180 countries, and has permanent observer status at the United Nations.

Only by knowing the history of the Vatican’s sovereign status is it possible to understand how radically different it is compared to other countries. For over 2,000 years the Vatican has been a nonhereditary monarchy. Whoever is Pope is its supreme leader, vested with sole decision-making authority over all religious and temporal matters. There is no legislature, judiciary, or any system of checks and balances. Even the worst of Popes—and there have been some truly terrible ones—are sacrosanct. There has never been a coup, a forced resignation, or a verifiable murder of a Pope. In 2013, Pope Benedict became the first pope to resign in 600 years. Problems of cognitive decline get swept under the rug. In vesting unchecked power in a single man, the Vatican is closest in its governance style to a handful of absolute monarchies such as Saudi Arabia, Brunei, Oman, Qatar, and the UAE. 

During the Renaissance, Popes were feared rivals to Europe’s most powerful monarchies.

From the 8th century until 1870 the Vatican was a semifeudal secular empire called the Papal States that controlled most of central Italy. During the Renaissance, Popes were feared rivals to Europe’s most powerful monarchies. Popes believed God had put them on earth to reign over all other worldly rulers. The Popes of the Middle Ages had an entourage of nearly a thousand servants and hundreds of clerics and lay deputies. That so-called Curia—referring to the court of a Roman emperor—became a Ladon-like network of intrigue and deceit composed largely of (supposedly) celibate single men who lived and worked together even as they competed for influence with the Pope. 

The cost of running the Papal States, while maintaining one of Europe’s grandest courts, kept the Vatican under constant financial strain. Although it collected taxes and fees, had sales of produce from its agriculturally rich northern region, and rents from its properties throughout Europe, it was still always strapped for cash. The church turned to selling so-called indulgences, a sixth-century invention whereby the faithful paid for a piece of paper that promised that God would forgo any earthly punishment for the buyer’s sins. The early church’s penances were often severe, including flogging, imprisonment, or even death. Although some indulgences were free, the best ones—promising the most redemption for the gravest sins—were expensive. The Vatican set prices according to the severity of the sin.

The Church had to twice borrow from the Rothschilds.

All the while, the concept of a budget or financial planning was anathema to a succession of Popes. The humiliating low point came when the Church had to twice borrow from the Rothschilds, Europe’s preeminent Jewish banking dynasty. James de Rothschild, head of the family’s Paris-based branch, became the official Papal banker. By the time the family bailed out the Vatican, it had only been thirty-five years since the destabilizing aftershocks from the French Revolution had led to the easing of harsh, discriminatory laws against Jews in Western Europe. It was then that Mayer Amschel, the Rothschild family patriarch, had walked out of the Frankfurt ghetto with his five sons and established a fledgling bank. Little wonder the Rothschilds sparked such envy. By the time Pope Gregory asked for the first loan they had created the world’s biggest bank, ten times larger than their closest rival. 

The Vatican’s institutional resistance to capitalism was a leftover of Middle Age ideologies, a belief that the church alone was empowered by God to fight Mammon, a satanic deity of greed. Its ban on usury—earning interest on money loaned or invested—was based on a literal biblical interpretation. The Vatican distrusted capitalism since it thought secular activists used it as a wedge to separate the church from an integrated role with the state. In some countries, the “capitalist bourgeoisie”—as the Vatican dubbed it—had even confiscated church land for public use. Also fueling the resistance to modern finances was the view that capitalism was mostly the province of Jews. Church leaders may not have liked the Rothschilds, but they did like their cash. 

The Church’s sixteen thousand square miles was reduced to a tiny parcel of land.

In 1870, the Vatican lost its earthly empire overnight when Rome fell to the nationalists who were fighting to unify Italy under a single government. The Church’s sixteen thousand square miles was reduced to a tiny parcel of land. The loss of its Papal States income meant the church was teetering on the verge of bankruptcy. 

St. Peter's Basilica, Vatican City, Rome (Photograph by Bernd Marx)

From then on, the Vatican survived on something called Peter’s Pence, a fundraising practice that had been popular a thousand years earlier with the Saxons in England (and later banned by Henry VIII when he broke with Rome and declared himself head of the Church of England). The Vatican pleaded with Catholics worldwide to contribute money to support the Pope, who had declared himself a prisoner inside the Vatican and refused to recognize the new Italian government’s sovereignty over the Church. 

During the nearly 60-year stalemate that followed, the Vatican’s insular and mostly incompetent financial management kept it under tremendous pressure. The Vatican would have gone bankrupt if Mussolini had not saved it. Il Duce, Italy’s fascist leader, was no fan of the Church, but he was enough of a political realist to know that 98 percent of Italians were Catholics. In 1929, the Vatican and the Fascist government executed the Lateran Pacts, which gave the Church the most power it had held since the height of its temporal kingdom. The pacts set aside 108.7 acres as Vatican City and fifty-two scattered “heritage” properties as an autonomous neutral state. They reinstated Papal sovereignty and ended the Pope’s boycott of the Italian state. 

The settlement—worth about $1.6 billion in 2025 dollars—was approximately a third of Italy’s entire annual budget.

The Lateran Pacts declared the Pope was “sacred and inviolable,” the equivalent of a secular monarch, and acknowledged he was invested with divine rights. A new Code of Canon Law made Catholic religious education obligatory in state schools. Cardinals were invested with the same rights as princes by blood. All church holidays became state holidays and priests were exempted from military and jury duty. A three-article financial convention granted “ecclesiastical corporations” full tax exemptions. It also compensated the Vatican for the confiscation of the Papal States with 750 million lire in cash and a billion lire in government bonds that paid 5 percent interest. The settlement—worth about $1.6 billion in 2025 dollars—was approximately a third of Italy’s entire annual budget and a desperately needed lifeline for the cash-starved church. 

Satirical depiction of Pope Pius XI and Benito Mussolini during the Lateran Treaty negotiations. (Illustration by Erich Schilling, for the cover of Simplicissimus magazine, March 1929.)

Pius XI, the Pope who struck the deal with Mussolini, was savvy enough to know that he and his fellow cardinals needed help managing the enormous windfall. He therefore brought in a lay outside advisor, Bernardino Nogara, a devout Catholic with a reputation as a financial wizard. 

Nogara wasted little time in upending hundreds of years of tradition. He ordered, for instance, that every Vatican department produce annual budgets and issue monthly income and expense statements. The Curia bristled when he persuaded Pius to cut employee salaries by 15 percent. And after the 1929 stock market crash, Nogara made investments in blue-chip American companies whose stock prices had plummeted. He also bought prime London real estate at fire-sale prices. As tensions mounted in the 1930s, Nogara further diversified the Vatican’s holdings into international banks, U.S. government bonds, manufacturing companies, and electric utilities. 

Only seven months before the start of World War II, the church got a new Pope, Pius XII, one who had a special affection for Germany (he had been the Papal Nuncio—ambassador—to Germany). Nogara warned that the outbreak of war would greatly test the financial empire he had so carefully crafted over a decade. When the hot war began in September 1939, Nogara realized he had to do more than shuffle the Vatican’s hard assets to safe havens. He knew that beyond the military battlefield, governments fought wars by waging a broad economic battle to defeat the enemy. The Axis powers and the Allies imposed a series of draconian decrees restricting many international business deals, banning trading with the enemy, prohibiting the sale of critical natural resources, and freezing the bank accounts and assets of enemy nationals. 

The United States was the most aggressive, searching for countries, companies, and foreign nationals who did any business with enemy nations. Under President Franklin Roosevelt’s direction, the Treasury Department created a so-called blacklist. By June 1941 (six months before Pearl Harbor and America’s official entry into the war), the blacklist included not only the obvious belligerents such as Germany and Italy, but also neutral nations such as Switzerland, and the tiny principalities of Monaco, San Marino, Liechtenstein, and Andorra. Only the Vatican and Turkey were spared. Of the European countries that proclaimed neutrality, the Vatican was the only one not placed on the blacklist. 

There was a furious debate inside the Treasury Department about whether Nogara’s shuffling and masking of holding companies in multiple European and South American banking jurisdictions was sufficient to blacklist the Vatican. It was only a matter of time, concluded Nogara, until the Vatican was sanctioned. 

The Vatican Bank could operate anywhere worldwide, did not pay taxes … disclose balance sheets, or account to any shareholders.

Every financial transaction left a paper trail through the central banks of the Allies. Nogara needed to conduct Vatican business in secret. The June 27, 1942, formation of the Istituto per le Opere di Religione (IOR)—the Vatican Bank—was heaven sent. Nogara drafted a chirograph (a handwritten declaration), a six-point charter for the bank, and Pius signed it. Since its only branch was inside Vatican City—which, again, was not on any blacklist—the IOR was free of any wartime regulations. The IOR was a mix between a traditional bank like J. P. Morgan and a central bank such as the Federal Reserve. The Vatican Bank could operate anywhere worldwide, did not pay taxes, did not have to show a profit, produce annual reports, disclose balance sheets, or account to any shareholders. Located in a former dungeon in the Torrione di Nicoló V (Tower of Nicholas V), it certainly did not look like any other bank. 

The Vatican Bank was created as an autonomous institution with no corporate or ecclesiastical ties to any other church division or lay agency. Its only shareholder was the Pope. Nogara ran it subject only to Pius’s veto. Its charter allowed it “to take charge of, and to administer, capital assets destined for religious agencies.” Nogara interpreted that liberally to mean that the IOR could accept deposits of cash, real estate, or stock shares (that expanded later during the war to include patent royalty and reinsurance policy payments). 

Many nervous Europeans were desperate for a wartime haven for their money. Rich Italians, in particular, were anxious to get cash out of the country. Mussolini had decreed the death penalty for anyone exporting lire from Italian banks. Of the six countries that bordered Italy, the Vatican was the only sovereignty not subject to Italy’s border checks. The formation of the Vatican Bank meant Italians needed only a willing cleric to deposit their suitcases of cash without leaving any paper trail. And unlike other sovereign banks, the IOR was free of any independent audits. It was required—supposedly to streamline recordkeeping—to destroy all its files every decade (a practice it followed until 2000). The IOR left virtually nothing by which postwar investigators could determine whether it had served as a conduit for wartime plunder or held accounts and money that should have been repatriated to victims. 

The Vatican immediately dropped off the radar of U.S. and British financial investigators.

The IOR’s creation meant the Vatican immediately dropped off the radar of U.S. and British financial investigators. It allowed Nogara to invest in both the Allies and the Axis powers. As I discovered in research for my 2015 book about church finances, God’s Bankers: A History of Money and Power at the Vatican, Nogara’s most successful wartime investment was in German and Italian insurance companies. The Vatican earned outsized profits when those companies escheated the life insurance policies of Jews sent to the death camps and converted the cash value of the policies. 

After the war, the Vatican claimed it had never invested in or made money from Nazi Germany or Fascist Italy. All its wartime investments and money movements were hidden by Nogara’s impenetrable Byzantine offshore network. The only proof of what happened was in the Vatican Bank archives, sealed to this day. (I have written opinion pieces in The New York Times, The Washington Post, and the Los Angeles Times, calling on the church to open its wartime Vatican Bank files for inspection. The Church has ignored those entreaties.) 

Its ironclad secrecy made it a popular postwar offshore tax haven for wealthy Italians wanting to avoid income taxes.

While the Vatican Bank was indispensable to the church’s enormous wartime profits, the very features that made it useful—no transparency or oversight, no checks and balances, no adherence to international banking best practices—became its weakness going forward. Its ironclad secrecy made it a popular postwar offshore tax haven for wealthy Italians wanting to avoid income taxes. Mafia dons cultivated friendships with senior clergy and used them to open IOR accounts under fake names. Nogara retired in the 1950s. The laymen who had been his aides were not nearly as clever or imaginative as he was, and their shortcomings opened the Vatican Bank to the influence of outside lay bankers. One, Michele Sindona, was dubbed by the press as “God’s Banker” in the mid-1960s for the tremendous influence and deal making he had with the Vatican Bank. Sindona was a flamboyant banker whose investment schemes always pushed against the letter of the law. (Years later he would be convicted of massive financial fraud and of ordering the murder of the lawyer appointed to liquidate his failed bank, and he would die of cyanide poisoning in an Italian prison.) 

Compounding the damage of Sindona’s hand in directing church investments, the Pope’s pick to run the Vatican Bank in the 1970s was a loyal monsignor, Chicago-born Paul Marcinkus. The problem was that Marcinkus knew almost nothing about finance or running a bank. He later told a reporter that when he got the news that he would oversee the Vatican Bank, he visited several banks in New York and Chicago and picked up tips. “That was it. What kind of training you need?” He also bought some books about international banking and business. One senior Vatican Bank official worried that Marcinkus “couldn’t even read a balance sheet.” 

Marcinkus allowed the Vatican Bank to become more enmeshed with Sindona, and later with another fast-talking banker, Roberto Calvi. Like Sindona, Calvi would later be on the run from a host of financial crimes and frauds, though he was never convicted of them. He was instead found hanging beneath London’s Blackfriars Bridge in 1982. 

“You can’t run the church on Hail Marys.” —Vatican Bank head Paul Marcinkus, defending the Bank’s secretive practices in the 1980s.

By the 1980s the Vatican Bank had become a partner in questionable ventures in offshore havens from Panama and the Bahamas to Liechtenstein, Luxembourg, and Switzerland. When one cleric asked Marcinkus why there was so much mystery about the Vatican Bank, Marcinkus dismissed him saying, “You can’t run the church on Hail Marys.” 

All the secret deals came apart in the early 1980s when Italy and the U.S. opened criminal investigations on Marcinkus. Italy indicted him but the Vatican refused to extradite him, allowing Marcinkus instead to remain in Vatican City. The standoff ended when all the criminal charges were dismissed and the church paid a stunning $244 million as a “voluntary contribution” to acknowledge its “moral involvement” with the enormous bank fraud in Italy. (Marcinkus returned a few years later to America where he lived out his final years at a small parish in Sun City, Arizona.) 

Throughout the 1990s and into the early 2000s, the Vatican Bank remained an offshore bank in the heart of Rome.

It would be reasonable to expect that, after having allowed itself to be used by a host of fraudsters and criminals, the Vatican Bank cleaned up its act. It did not. Although the Pope talked a lot about reform, the bank kept up the same secret operations, even expanding into massive offshore deposits disguised as fake charities. The combination of lots of money, much of it in cash, and no oversight again proved a volatile mixture. Throughout the 1990s and into the early 2000s, the Vatican Bank remained an offshore bank in the heart of Rome. It was increasingly used by Italy’s top politicians, including prime ministers, as a slush fund for everything from buying gifts for mistresses to paying off political foes. 

Italy’s tabloids, and a book in 2009 by a top investigative journalist Gianluigi Nuzzi, exposed much of the latest round of Vatican Bank mischief. It was not, however, the public shaming of “Vatileaks” that led to any substantive reforms in the way the Church ran its finances. Many top clerics knew that as a 2,000-year-old institution, if they waited patiently for the public outrage to subside, the Vatican Bank could soon resume its shady dealings. 

In 2000, the Church signed a monetary convention with the European Union by which it could issue its own euro coins.

What changed everything in the way the Church runs its finances came unexpectedly in a decision about a common currency—the euro—that at the time seemed unrelated to the Vatican Bank. Italy stopped using the lira as its currency and adopted the euro in 1999. That initially created a quandary for the Vatican, which had always used the lira as its currency. The Vatican debated whether to issue its own currency or to adopt the euro. In December 2000, the church signed a monetary convention with the European Union by which it could issue its own euro coins (distinctively stamped with Città del Vaticano) as well as commemorative coins that it marked up substantially to sell to collectors. Significantly, that agreement did not bind the Vatican, or two other non-EU nations that had accepted the euro—Monaco and Andorra—to abide by strict European statutes regarding money laundering, antiterrorism financing, fraud, and counterfeiting. 

A Vatican 50 Euro Cent Coin, issued in 2016

What the Vatican did not expect was that the Organization for Economic Cooperation and Development (OECD), a 34-nation economics and trade group that tracks openness in the sharing of tax information between countries, had at the same time begun investigating tax havens. Those nations that shared financial data and had in place adequate safeguards against money laundering were put on a so-called white list. Those that had not acted but promised to do so were slotted onto the OECD’s gray list, and those resistant to reforming their banking secrecy laws were relegated to its blacklist. The OECD could not force the Vatican, which was not a member, to cooperate. However, placement on the blacklist would cripple the Church’s ability to do business with all other banking jurisdictions. 

The biggest stumbling block to real reform is that all power is still vested in a single man.

In December 2009, the Vatican reluctantly signed a new Monetary Convention with the EU and promised to work toward compliance with Europe’s money laundering and antiterrorism laws. It took a year before the Pope issued a first-ever decree outlawing money laundering. The most historic change took place in 2012 when the church allowed European regulators from Brussels to examine the Vatican Bank’s books. There were just over 33,000 accounts and some $8.3 billion in assets. The Vatican Bank was not compliant on half of the EU’s forty-five recommendations. It had done enough, however, to avoid being placed on the blacklist. 

In its 2017 evaluation of the Vatican Bank, the EU regulators noted the Vatican had made significant progress in fighting money laundering and the financing of terrorism. Still, changing the DNA of the Vatican’s finances has proven incredibly difficult. When a reformer, Argentina’s Cardinal Jorge Bergoglio, became Pope Francis in 2013, he endorsed a wide-ranging financial reorganization that would make the church more transparent and bring it in line with internationally accepted financial standards and practices. Most notable was that Francis created a powerful financial oversight division and put Australian Cardinal George Pell in charge. Then Pell had to resign and return to Australia, where he was convicted of child sex offenses in 2018 (a conviction the High Court of Australia overturned in 2020). In 2021, the Vatican began the largest financial corruption trial in its history, one that even included the indictment of a cardinal for the first time. The case floundered, however, and ultimately revealed that the Vatican’s longstanding self-dealing and financial favoritism had continued almost unabated under Francis’s reign. 

Photo by Ashwin Vaswani / Unsplash

It seems that for every step forward, the Vatican somehow manages to move backward when it comes to money and good governance. For those of us who study it, the Vatican is a more compliant and normal member of the international community today than at any time in its past, yet the biggest stumbling block to real reform remains that all power is vested in a single man whom the Church considers the Vicar of Christ on earth. 

The Catholic Church considers the reigning pope to be infallible when speaking ex cathedra (literally “from the chair,” that is, issuing an official declaration) on matters of faith and morals. However, not even the most faithful Catholics believe that every Pope gets it right when it comes to running the Church’s sovereign government. No reform appears on the horizon that would democratize the Vatican. Short of that, it is likely there will be future financial and power scandals, as the Vatican struggles to become a compliant member of the international community.

Categories: Critical Thinking, Skeptic

Kawaii and the Cult of Cute

Mon, 04/14/2025 - 10:00am
Japan’s culture of childlike innocence, vulnerability, and playfulness has a downside.

Dogs dressed up in bonnets. Diamond-studded iPhone cases shaped like unicorns. Donut-shaped purses. Hello Kitty shoes, credit cards, engine oil, and staplers. My Little Pony capsule hotel rooms. Pikachu parades. Hedgehog cafes. Pink construction trucks plastered with cartoon eyes. Miniature everything. Emojis everywhere. What is going on here?

Top left to right: Astro Boy, Hello Kitty credit card, Hello Kitty backpack, SoftBank’s Pepper robot, Pikachu Parade, Hello Kitty hat, film still from Ponyo by Studio Ghibli

Such merch, and more, is a manifestation of Japan’s kawaii culture of innocence, youthfulness, vulnerability, playfulness, and other childlike qualities. Placed in certain contexts, however, it can also underscore a darker reality—a particular denial of adulthood through a willful indulgence in naïveté, commercialization, and escapism. Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of real life.

The roots of kawaii can be traced back to Japan’s Heian (“peace” or “tranquility”) period (794–1185 CE), a time when aristocrats appreciated delicate and endearing aesthetics in literature, art, and fashion.1 During the Edo period (1603–1868 CE), art and culture began to emphasize aesthetics, beauty, and playfulness.2 Woodblock prints (ukiyo-e) often depicted cute and whimsical characters.3 The modern iteration of kawaii began to take shape during the student protests of the late 1960s,4 particularly against the backdrop of the rigid culture of post-World War II Japan. In acts of defiance against academic authority, university students boycotted lectures and turned to children’s manga—a type of comic or graphic novel—as a critique of traditional educational norms.5

Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of real life.

After World War II, Japan experienced significant social and economic changes. The emerging youth culture of the 1960s and 1970s began to embrace Western influences, leading to a blend of traditional Japanese aesthetics with Western pop culture.6 During the economic boom of the 1970s and 1980s, consumer subcultures flourished, and the aesthetic of cuteness found expression in playful handwriting, speech patterns, fashion, products, and themed spaces like cafes and shops. The release of Astro Boy (Tetsuwan Atomu) in 1952, created by Osamu Tezuka, is regarded by scholars as a key moment in the development of kawaii culture.7 The character’s large eyes, innocent look, and adventurous spirit resonated with both children and adults, setting the stage for the rise of other kawaii characters in popular culture. Simultaneously, as Japanese women gained more prominence in the workforce, the “burikko” archetype8—an innocent, childlike woman—became popular. This persona, exuding charm and nonthreatening femininity, was seen as enhancing her desirability in a marriage-centric society.9

Left to right: burikko handwriting, bento box, Kumamon mascot

Another catalyst for kawaii culture was the 1970s emergence of burikko handwriting among teenage girls.10 This playful, childlike, rounded style of writing incorporated hearts, stars, and cartoonish doodles. To the chagrin of educators, it became a symbol of youthful rebellion and a break from rigid societal expectations.

Japanese culture is deeply rooted in tradition, with strict social norms governing behavior and appearance. If you drop something, it’s common to see people rush to retrieve it for you. Even at an empty intersection with no car in sight, a red light will rarely be ignored. Business cards are exchanged with a sense of deference, and social hierarchies are meticulously observed. Conformity is highly valued, while femininity is often dismissed as frivolous. Against this backdrop, the emergence of kawaii can be seen as an act of quiet resistance.

The rise of shōjo (girls’) manga in the 1970s introduced cute characters with large eyes and soft rounded faces with childlike features, popularizing the kawaii aesthetic among young girls.11 Then, in 1974, along came Sanrio’s Hello Kitty,12 commercializing and popularizing kawaii culture beyond Japan’s borders. While it started as a product range for children, it soon became popular with teens and adults alike.

Kawaii characters like Hello Kitty are often depicted in a simplistic style, with oversized eyes and minimal facial expressions. This design invites people to project their own feelings and emotions onto the characters. As a playful touch, Hello Kitty has no mouth—ensuring she’ll never reveal your secrets!

By the 1980s and 1990s, kawaii had permeated stationery, toys, fashion, digital communications, games, and beyond. Franchises like Pokémon, anime series such as Sailor Moon, and the whimsical works of Studio Ghibli exported a sense of childlike wonder and playfulness to audiences across the globe. Even banks and airlines embraced cuteness as a strategy to attract customers, as did major brands like Nissan, Mitsubishi, Sony, and Nintendo. What may have begun as an organic expression of individuality was quickly commodified by industry.

Construction sites, for example, frequently feature barricades shaped like cartoon animals or flowers, softening the visual impact of urban development.13 They also display signs with bowing figures apologizing for any inconvenience. These elements are designed to create a sense of comfort for those passing by. Similarly, government campaigns use mascots like Kumamon,14 a cuddly bear, to promote tourism or public health initiatives. Japanese companies and government agencies use cute mascots, referred to as Yuru-chara, to create a friendly image and foster a sense of connection. You’ll even find them in otherwise harsh settings: high-security prisons, the Tokyo Metropolitan Police, and, well, even the Japanese Sewage Association.15

Kawaii aesthetics have also appeared in high-tech domains. Robots designed for elder care, such as SoftBank’s Pepper,16 often adopt kawaii traits to appear less intimidating and foster emotional connections. In the culinary world, bento boxes featuring elaborately arranged food in cute and delightful shapes have become a creative art form, combining practicality with aesthetic pleasure—and turning ordinary lunches into whimsical and joyful experiences.

Sanrio Puroland (website)

Kawaii hasn’t stayed confined to Japan’s borders. It has become popular in other countries like South Korea, and had a large influence in the West as well. It has become a global representation of Japan, so much so that it helps draw in tourism, particularly to the Harajuku district in Tokyo and theme parks like Sanrio Puroland. In 2008, Hello Kitty was even named as Japan’s official tourism ambassador.17

The influence of kawaii extends beyond tourism. Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty-themed designs, menus, and crew uniforms on its Paris-Taipei route.18 Even the Vatican couldn’t resist the power of cute: In its appeal to younger generations, it introduced Luce, a cheerful young girl with big eyes, blue hair, and a yellow raincoat, as the mascot for the 2025 Jubilee Year and the Vatican’s pavilion at Expo 2025.19

Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty- themed designs, menus, and crew uniforms on its Paris–Taipei route.

Could anime and kawaii culture become vehicles for Catholicism? Writing for UnHerd, Katherine Dee suggests that Luce represents a global strategy to transcend cultural barriers in ways that traditional symbols, like the rosary, cannot. She points out that while Europe’s Catholic population has been shrinking, the global Catholic community continues to grow by millions.20 But while Luce may bring more attention to the Vatican, can she truly inspire deeper connections to God or spirituality?

All that said, the bigger question remains: Why does anyone find any of this appealing or cute?

One answer comes from the cultural theorist Sianne Ngai, who said that there’s a “surprisingly wide spectrum of feelings, ranging from tenderness to aggression, that we harbor toward ostensibly subordinate and unthreatening commodities.”21 That’s a fancy way of saying that humans find babies cute, an insight formalized by the Austrian zoologist and ethologist Konrad Lorenz, co-recipient of the 1973 Nobel Prize in Physiology or Medicine, whose research on the “baby schema”22 (or Kindchenschema) explains how and why certain infantile facial and physical traits are seen as cute. These features include an overly large head, rounded forehead, large eyes, and protruding cheeks.23 Lorenz argued that this is so because such features trigger a biological response within us—a desire to nurture and protect because we view them as proxies for vulnerability. The more such features, the more we are wired to care for those who embody them.24 Simply put, when these traits are projected onto characters or art or products, it promotes the same kind of response in us as seeing a baby.

Modern research validates Lorenz’s theory. A 2008 brain imaging study showed that viewing infant faces, but not adult ones, triggered a response in the orbitofrontal cortex linked to reward processing.25 Another brain imaging study conducted at Washington University School of Medicine26 investigated how different levels of “baby schema” in infant faces—characteristics like big eyes and round cheeks—affect brain activity. Researchers discovered that viewing baby-like features activates the nucleus accumbens, a key part of the brain’s reward system responsible for processing pleasure and motivation. This effect was observed in women who had never had children. The researchers concluded that this activation of the brain’s reward system is the neurophysiological mechanism that triggers caregiving behavior.

A very different type of study,27 conducted in 2019, further confirmed that seeing baby-like features triggers a strong emotional reaction. In this case, the reaction is known as “kama muta,” a Sanskrit term that describes the feeling of being deeply moved or touched by love. This sensation is often accompanied by warmth, nostalgia, or even patriotism. The researchers found that videos featuring cute subjects evoked significantly more kama muta than those without such characteristics. Moreover, when the cute subjects were shown “interacting affectionately,” the feeling of kama muta was even stronger compared to when the subjects were not engaging in affectionate behavior.

In 2012, Osaka University professor Hiroshi Nittono led a research study that found that “cuteness” has an impact on observers, increasing their focus and attention.28 It also speaks to our instinct to nurture and protect that which appears vulnerable—which cute things, with their more infantilized traits, do. After all, who doesn’t love Baby Yoda? Perhaps that’s why some of us are so drawn to purchase stuffed dolls of Eeyore—it makes us feel as if we are rescuing him. When we see something particularly cute, many of us feel compelled to buy it. Likewise, it’s possible, at least subconsciously, that those who engage in cosplay around kawaii do so out of a deeper need to feel protected themselves. Research shows that viewing cute images improves moods and is associated with relaxation.29

Kawaii may well be useful in our fast-paced and stressful lives. For starters, when we find objects cute or adorable, we tend to treat them better and give them greater care. There’s also a contagious happiness effect. Indeed, could introducing more kawaii into our environments make people happier? Might it encourage us to care more for each other and our communities? The kawaii aesthetic could even be used in traditionally serious spaces—like a doctor’s waiting room or emergency room—to help reduce anxiety. Instead of staring at a blank ceiling in the dentist’s chair, imagine looking up at a whimsical kawaii mural instead.

Consider also the Tamagotchi digital pet trend of the 1990s. Children were obsessed with taking care of this virtual pet, tending to its needs ranging from food to entertainment. Millions of these “pets” were sold and were highly sought after. There’s something inherently appealing to children about mimicking adult roles, especially when it comes to caregiving. It turns out that children don’t just want to be cared for by their parents—they also seem to have an innate desire to nurture others. This act of caregiving can make them feel capable, empowered, and useful, tapping into a deep sense of responsibility and connection.

At Chuo University in Tokyo, there’s an entire new field of “cute studies” founded by Dr. Joshua Dale, whose book Irresistible: How Cuteness Wired Our Brains and Changed the World summarizes his research.30 According to Dale, four traditional aesthetic values of Japanese culture contributed to the rise of kawaii: (1) valuing the diminutive, (2) treasuring the transient, (3) preferring simplicity, and (4) appreciating the playful.31 His work emphasizes that kawaii is not just about cuteness; it expresses a deeply rooted cultural philosophy that reflects Japanese views on beauty, life, and emotional expression.

The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions.

In other words, there’s something about kawaii that goes beyond a style or a trend. It is a reflection of deeper societal values and emotional needs. In a society that has such rigid hierarchies, social structures, decorum, and an intense work culture, kawaii provides a form of escapism—offering a respite from the harsh realities of adulthood and a return to childlike innocence. It is a safe form of vulnerability. Yet, does it also hint at an inability to confront the realities of life?

The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions. By surrounding themselves with cuteness and positivity, they may be trying to shield themselves from darker feelings and worries. In some cases, people even adapt their own personal aesthetics to appear cuter, as this can make them seem more innocent and in need of help—effectively turning cuteness into a protective layer.

Kawaii also perpetuates infantilization, particularly among women who feel pressured to conform to kawaii aesthetics, which often places them in a submissive role. This is especially evident in subgenres like Lolita fashion—a highly detailed, feminine, and elegant style inspired by Victorian and Rococo fashion, but with a modern and whimsical twist. While this style is adopted by many women with the female gaze in mind, the male gaze remains inescapable.

Japanese Lolita fashion

As a result, certain elements of kawaii can sometimes veer into the sexual, both intentionally and as an unintended distortion of innocence. Maid cafes, for example, though not designed to be sexually explicit, often carry sexual undertones that undermine their seemingly innocent and cute appeal. In these cafes, maids wear form-fitting uniforms and play into fantasies of servitude and submission—particularly when customers are addressed as “masters” and flirtatious interactions are encouraged.

It’s important to remember that things that look sweet and cute can also be sinister. The concept of “cute” often evokes feelings of trust, affection, and vulnerability, which can paradoxically make it a powerful tool for manipulation, subversion, and even control. Can kawaii be a Trojan horse?

When used in marketing to sell products, it may seem harmless, but how much of the rational consumer decision-making process does it override? And what evil lurks behind all the sparkle? In America, cuteness manifests itself even more boldly and aggressively. One designer, Lisa Frank, built an entire empire in the 1980s and 1990s on vibrant, neon colors and whimsical artwork featuring rainbow-colored animals, dolphins, glitter, and images of unicorns on stickers, adorning backpacks and other merchandise. Her work is closely associated with a sense of nostalgia for millennials who grew up in that era. Yet, as recounted in the recent Amazon documentary “Glitter and Greed: The Lisa Frank Story,” avarice ultimately led to a toxic work environment, poor working conditions, and alleged abuse.

Worse, can kawaii be used to mask authoritarian intentions or erase the memory of serious crimes against humanity?

As Japan gained prominence in global culture, its World War II and earlier atrocities have been largely overshadowed, causing many to overlook these grave historical events.32 When we think of Japan today, we often think of cultural exports like anime, manga, Sanrio, geishas, and Nintendo. Even though Japan was once an imperial power, today it exercises “soft power” in the sociopolitical sphere. This concept, introduced by American political scientist Joseph Nye,33 refers to influencing others by promoting a nation’s culture and values to make foreign audiences more receptive to its perspectives.

Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort.

Japan began leveraging this strategy in the 1980s to rehabilitate its tarnished postwar reputation, especially in the face of widespread anti-Japanese sentiment in neighboring Asian nations. Over time, these attitudes shifted as Japan used “kawaii culture” and other forms of pop-culture diplomacy to reshape its image and move beyond its violent, imperialist past.

Kawaii also serves as a way to neutralize our fears by transforming things we might typically find unsettling into endearing and approachable forms—think Casper the Friendly Ghost or Monsters, Inc. This principle extends to emerging technologies, such as robots. Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort. Embedding frightening concepts with qualities that evoke happiness or safety allows us to navigate the interplay between darkness and light, innocence and danger, in a more approachable way. In essence, it’s a coping mechanism for our primal fears.

An interesting aspect of this is what psychologists call the uncanny valley—a feeling of discomfort that arises when something is almost humanlike, but not quite. Horror filmmakers have exploited this phenomenon by weaponizing cuteness against their audiences with characters like the Gremlins and the doll Chucky. The dissonance between a sweet appearance and sinister intent creates a chilling effect that heightens the horror.

When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?

Ultimately, all this speaks to the multitude of layers to kawaii. It is more than an aesthetic; it’s a cultural phenomenon with layers of meaning, and it reflects both societal values and emotional needs. Its ability to evoke warmth and innocence can also be a means of emotional manipulation. It can serve as an unassuming guise for darker intentions or meanings. It can be a medium for individual expression, and yet simultaneously it has been commodified and overtaken by consumerism. It can be an authentic expression, yet mass production has also made it a symbol of artifice. It’s a way to embrace the innocent and joyful, yet it can also be used to avoid facing the harsher realities of adulthood. When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?

It’s worth asking whether the prevalence of kawaii in public and private spaces reflects a universal desire for escapism or if it serves as a tool to maintain conformity and compliance. Perhaps, at its core, kawaii holds up a mirror to society’s collective vulnerabilities—highlighting not just what we nurture, but also what we are willing to overlook for the sake of cuteness.

Categories: Critical Thinking, Skeptic

What Did Einstein Believe About God?

Tue, 04/08/2025 - 2:24pm

This article was originally published in Skeptic in 1997.

Presented here for the first time are the complete texts of two letters that Einstein wrote regarding his lack of belief in a personal god.

Just over a century ago, near the beginning of his intellectual life, the young Albert Einstein became a skeptic. He states so on the first page of his Autobiographical Notes (1949, pp. 3–5):

Thus I came—despite the fact I was the son of entirely irreligious (Jewish) parents—to a deep religiosity, which, however, found an abrupt ending at the age of 12. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic [orgy of] freethinking coupled with the impression that youth is intentionally being deceived… Suspicion against every kind of authority grew out of this experience, a skeptical attitude … which has never left me….

We all know Albert Einstein as the most famous scientist of the 20th century, and many know him as a great humanist. Some have also viewed him as religious. Indeed, in Einstein’s writings there is well-known reference to God and discussion of religion (1949, 1954). Although Einstein stated he was religious and that he believed in God, it was in his own specialized sense that he used these terms. Many are aware that Einstein was not religious in the conventional sense, but it will come as a surprise to some to learn that Einstein clearly identified himself as an atheist and as an agnostic. If one understands how Einstein used the terms religion, God, atheism, and agnosticism, it is clear that he was consistent in his beliefs.

Part of the popular picture of Einstein’s God and religion comes from his well-known statements, such as:

  • “God is cunning but He is not malicious.” (Also: “God is subtle but he is not bloody-minded.” Or: “God is slick, but he ain’t mean.”) (1946)
  • “God does not play dice.” (On many occasions.)
  • “I want to know how God created the world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts, the rest are details.” (Unknown date.)

It is easy to see how some got the idea that Einstein was expressing a close relationship with a personal god, but it is more accurate to say he was simply expressing his ideas and beliefs about the universe.

Figure 1

Einstein’s “belief” in Spinoza’s God is one of his most widely quoted statements. But quoted out of context, like so many of these statements, it is misleading at best. It all started when Boston’s Cardinal O’Connell attacked Einstein and the General Theory of Relativity and warned the youth that the theory “cloaked the ghastly apparition of atheism” and “befogged speculation, producing universal doubt about God and His creation” (Clark, 1971, 413–414). Einstein had already experienced heavier-duty attacks against his theory in the form of anti-Semitic mass meetings in Germany, and he initially ignored the Cardinal’s attack. Shortly thereafter though, on April 24, 1929, Rabbi Herbert Goldstein of New York cabled Einstein to ask: “Do you believe in God?” (Sommerfeld, 1949, 103). Einstein’s return message is the famous statement:

“I believe in Spinoza’s God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings” (103). The Rabbi, who was intent on defending Einstein against the Cardinal, interpreted Einstein’s statement in his own way when writing:

Spinoza, who is called the God-intoxicated man, and who saw God manifest in all nature, certainly could not be called an atheist. Furthermore, Einstein points to a unity. Einstein’s theory if carried out to its logical conclusion would bring to mankind a scientific formula for monotheism. He does away with all thought of dualism or pluralism. There can be no room for any aspect of polytheism. This latter thought may have caused the Cardinal to speak out. Let us call a spade a spade (Clark, 1971, 414).

Both the Rabbi and the Cardinal would have done well to note Einstein’s remark, of 1921, to Archbishop Davidson in a similar context about science: “It makes no difference. It is purely abstract science” (413).

The American physicist Steven Weinberg (1992), in critiquing Einstein’s “Spinoza’s God” statement, noted: “But what possible difference does it make to anyone if we use the word ‘God’ in place of ‘order’ or ‘harmony,’ except perhaps to avoid the accusation of having no God?” Weinberg certainly has a valid point, but we should also forgive Einstein for being a product of his times, for his poetic sense, and for his cosmic religious view regarding such things as the order and harmony of the universe.

But what, at bottom, was Einstein’s belief? The long answer exists in Einstein’s essays on religion and science as given in his Ideas and Opinions (1954), his Autobiographical Notes (1949), and other works. What about a short answer?

In the summer of 1945, just before the bombs of Hiroshima and Nagasaki, Einstein wrote a short letter stating his position as an atheist (Figure 1, above). Ensign Guy H. Raner had written Einstein from the mid-Pacific requesting a clarification of the beliefs of the world-famous scientist (Figure 2, below). Four years later Raner again wrote Einstein for further clarification and asked: “Some people might interpret (your letter) to mean that to a Jesuit priest, anyone not a Roman Catholic is an atheist, and that you are in fact an orthodox Jew, or a Deist, or something else. Did you mean to leave room for such an interpretation, or are you from the viewpoint of the dictionary an atheist; i.e., ‘one who disbelieves in the existence of a God, or a Supreme Being’?” Einstein’s response is shown in Figure 3.

Figure 2

Combining key elements from Einstein’s first and second responses, there is little doubt as to his position:

From the viewpoint of a Jesuit priest I am, of course, and have always been an atheist…. I have repeatedly said that in my opinion the idea of a personal God is a childlike one. You may call me an agnostic, but I do not share the crusading spirit of the professional atheist whose fervor is mostly due to a painful act of liberation from the fetters of religious indoctrination received in youth. I prefer an attitude of humility corresponding to the weakness of our intellectual understanding of nature and of our being.

I was fortunate to meet Guy Raner, by chance, at a humanist dinner in late 1994, at which time he told me of the Einstein letters. Raner lives in Chatsworth, California and has retired after a long teaching career. The Einstein letters, a treasured possession for most of his life, were sold in December, 1994, to a firm that deals in historical documents (Profiles in History, Beverly Hills, CA). Five years ago a very brief letter (Raner & Lerner, 1992) describing the correspondence was published in Nature. But the two Einstein letters have remained largely unknown.

“I have repeatedly said that in my opinion the idea of a personal God is a childlike one.” —Einstein

Curiously enough, the wonderful and well-known biography Albert Einstein, Creator and Rebel, by Banesh Hoffmann (1972) does quote from Einstein’s 1945 letter to Raner. But maddeningly, although Hoffmann quotes most of the letter (194–195), he leaves out Einstein’s statement: “From the viewpoint of a Jesuit Priest I am, of course, and have always been an atheist.”!

Hoffmann’s biography was written with the collaboration of Einstein’s secretary, Helen Dukas. Could she have played a part in eliminating this important sentence, or was it Hoffmann’s wish? I do not know. However, Freeman Dyson (1996) notes “…that Helen wanted the world to see, the Einstein of legend, the friend of school children and impoverished students, the gently ironic philosopher, the Einstein without violent feelings and tragic mistakes.” Dyson also notes that he thought Dukas “…profoundly wrong in trying to hide the true Einstein from the world.” Perhaps her well-intentioned protectionism included the elimination of Einstein as atheist.

Figure 3

Although not a favorite of physicists, Einstein: The Life and Times, by the professional biographer Ronald W. Clark (1971), contains one of the best summaries of Einstein’s God: “However, Einstein’s God was not the God of most men. When he wrote of religion, as he often did in middle and later life, he tended to … clothe with different names what to many ordinary mortals—and to most Jews—looked like a variant of simple agnosticism…. This was belief enough. It grew early and rooted deep. Only later was it dignified by the title of cosmic religion, a phrase which gave plausible respectability to the views of a man who did not believe in a life after death and who felt that if virtue paid off in the earthly one, then this was the result of cause and effect rather than celestial reward. Einstein’s God thus stood for an orderly system obeying rules which could be discovered by those who had the courage, the imagination, and the persistence to go on searching for them” (19).

Einstein continued to search, even to the last days of his 76 years, but his search was not for the God of Abraham or Moses. His search was for the order and harmony of the world.

Bibliography
  • Dyson, F. 1996. Foreword. In The Quotable Einstein (Calaprice, A., Ed.). Princeton, NJ: Princeton University Press. (Note: The section “On Religion, God, and Philosophy” is perhaps the best brief source to present the range and depth of Einstein’s views.)
  • Einstein, A. 1929. Quoted in Sommerfeld, 1949 (see below). Also as telegram to a Jewish newspaper, 1929; Einstein Archive Number 33–272.
  • ___. 1946 and of unknown date. In Einstein, A Centenary Volume (A. P. French, Ed.). Cambridge: Harvard University Press, 1979. 32, 73, & 67.
  • ___. 1959 (1949). “Autobiographical Notes.” In Albert Einstein, Philosopher–Scientist (Paul Arthur Schilpp, Ed.). New York: Harper & Bros.
  • ___. 1950. Letter to M. Berkowitz, October 25, 1950; Einstein Archive Number 59–215.
  • ___. 1954. Ideas and Opinions. New York: Crown Publishers.
  • ___. On many occasions. In Albert Einstein, Creator and Rebel (B. Hoffmann, with the collaboration of Helen Dukas). New York: The Viking Press.
  • Hoffmann, B. (with the collaboration of Helen Dukas). 1972. Albert Einstein, Creator and Rebel. New York: The Viking Press.
  • Raner, G.H., & Lerner, L.S. 1992. “Einstein’s Beliefs.” Nature, 358:102.
  • Sommerfeld, A. 1949. “To Albert Einstein’s 70th Birthday.” In Albert Einstein, Philosopher–Scientist (Paul Arthur Schilpp, Ed.). New York: Harper & Bros., 1959. 99–105.
  • Weinberg, S. 1992. Dreams of a Final Theory. New York: Pantheon Books. 245.
Categories: Critical Thinking, Skeptic

Lessons from 200 Years of Tariff History

Mon, 04/07/2025 - 7:58am

Tariff policy has been a contentious issue since the founding of the United States. Hamilton clashed with Jefferson and Madison over tariff policy in the 1790s, South Carolina threatened to secede from the union over tariff policy in 1832, and the Hawley-Smoot tariff generated outrage in 1930. Currently, Trump is sparking heated debates about his tariff policies. 

To understand the ongoing tariff debate, it is essential to grasp the basics: Tariffs are taxes levied by governments on imported goods. They have been the central focus of U.S. trade policy since the federal government was established in 1789. Historically, tariffs have been used to raise government revenue, protect domestic industries, and influence the trade policies of other nations. The history of U.S. tariffs can be understood in three periods corresponding with these three uses. 

From 1790 until the Civil War in 1861, tariffs primarily served as a source of federal revenue, accounting for about 90 percent of government income (since 2000, however, tariffs have generated less than 2 percent of the federal government’s income).1 Both the Union and the Confederacy enacted income taxes to help finance the Civil War. After the war, public resistance to income taxes grew, and Congress repealed the federal income tax in 1872. Later, when Congress attempted to reinstate an income tax in 1894, the Supreme Court struck it down in Pollock v. Farmers’ Loan & Trust Co. (1895), ruling it unconstitutional. To resolve this issue, the Sixteenth Amendment was ratified in 1913, granting Congress the authority to levy income taxes. Since then, federal income taxes have provided a much larger source of revenue than tariffs, allowing for greater federal government expenditures. The shift away from tariffs as the primary revenue source began during the Civil War and was further accelerated by World War I, which required large increases in federal spending.

The 16th Amendment was ratified in 1913, granting Congress the authority to levy income taxes.

Before the Civil War, the North and South had conflicting views on tariffs. The North, with its large manufacturing base, wanted higher tariffs to protect domestic industries from foreign competition. This protection would decrease the amount of competition Northern manufacturers faced, allowing them to charge higher prices and encounter less risk of being pushed out of business by more efficient foreign producers. By contrast, the South, with an economy rooted in agricultural exports (especially cotton) favored low tariffs, as they benefited from cheaper imported manufactured goods. These imports were largely financed by selling Southern cotton, produced by enslaved labor, to foreign markets, particularly Great Britain. The North-South tariff divide eventually led to the era of protective tariffs (1860-1934) after the Civil War, when the victorious North gained political power, and protectionist policies dominated U.S. trade.

For more than half a century after the Civil War, U.S. trade policy was dominated by high protectionist tariffs. Republican William McKinley, a strong advocate of high tariffs, won the presidency in 1896 with support from industrial interests. Between 1861 and the early 1930s, average tariff rates on dutiable imports rose to around 50 percent and stayed elevated for decades. As a point of comparison, average tariffs had declined to about 5 percent by the early 21st century. 

Republicans passed the Hawley-Smoot Tariff in 1930, which coincided with the Great Depression. While it is generally agreed among economists that the Hawley-Smoot Tariff did not cause the Great Depression, it further hurt the world economy during the economic downturn (though many observers at the time thought that it was responsible for the global economic collapse). The widely disliked Hawley-Smoot Tariff, along with the catastrophic effects of the Great Depression, allowed the Democrats to gain political control of both Congress and the Presidency in 1932. They passed the Reciprocal Trade Agreements Act (RTAA) in 1934, which gave the president the power to negotiate reciprocal trade agreements. 

The RTAA shifted some of the power over trade policy (i.e., tariffs) away from Congress and toward the President. Whereas the constituencies of specific members of Congress are in certain regions of the U.S., the entire country can vote in Presidential elections. For that reason, regional producers generally have less political power over the President than they do over their specific members of Congress, and therefore the President tends to be less responsive to their interests and more responsive to the interests of consumers and exporters located across the nation. Since consumers and exporters generally benefit from lower tariffs, the President has an incentive to decrease them. Thus, the RTAA contributed to the U.S. lowering tariff barriers around the world. This marked the beginning of the era of reciprocity in U.S. tariff policy (1934-2025), in which the U.S. has generally sought to reduce tariffs worldwide.

World War II and its consequences also pushed the U.S. into the era of reciprocity. The European countries, which had been some of the United States’ strongest economic competitors, were decimated after two World Wars in 30 years. Exports from Europe declined and the U.S. shifted even more toward exporting after the Second World War. As more U.S. firms became larger exporters, their political power was aimed at lowering tariffs rather than raising them. (Domestic companies that compete with imports have an interest in lobbying for higher tariffs, but exporting companies have the opposite interest.)

The World Trade Organization (WTO) was founded in 1995. Photo © WTO.

The end of WWII left the U.S. concerned that yet another World War could erupt if economic conditions were unfavorable around the world. America also sought increased trade to stave off the spread of Communism during the Cold War. These geopolitical motivations led the U.S. to seek increased trade with non-Communist nations, which was partially accomplished by decreasing tariffs. This trend culminated in the creation of the General Agreement on Tariffs and Trade (GATT) in 1947, which was then superseded by the World Trade Organization (WTO) in 1995. These successive organizations helped reduce tariffs and other international trade barriers.

Although there is a strong consensus among economists that tariffs do more harm than good,2,3,4 there are some potential benefits of specific tariff policies.

Pros
  1. National Security: In his 1776 classic The Wealth of Nations, Adam Smith acknowledged that trade restrictions could be justified when used to protect industries essential to national defense. In times of war, a nation's wealth is secondary to its security, and tariffs can protect essential industries. However, it is often challenging to determine which industries are truly vital for defense, and some firms have exploited this argument to gain protection, even when their goods are not crucial to national security. 
  2. Negotiating Tool: Tariffs can also provide leverage in negotiations. For example, in early 2025, President Trump threatened Mexico and Canada with tariffs, but then (temporarily as it turns out) removed the threat once they agreed to address the flow of fentanyl into the U.S. However, this tactic is a dangerous game. For the threat of tariffs to work, the nations you are negotiating with must believe you are willing to impose the tariffs you are threatening. This can lead to a difficult decision wherein you either have to back down from your threat and lose reputation, or follow through and impose the tariffs even though you do not want to.
  3. Protection of Infant Industries: It is possible that imposing tariffs on specific goods can foster the growth of developing industries that would not have been able to grow in the environment of foreign competition that existed without tariff protection. It is also possible that these protected industries could create more wealth for the nation once they are grown than would have been generated in the no-tariff scenario. Most economists argue that there is not much historical evidence of this occurring. Although the infant industry argument is possible, it relies on the assumption that the government can effectively identify which firms are going to prosper and bring greater economic benefits than would be accrued due to free trade. The government not only has to identify which infant industries to protect, but it also must have the appropriate incentives and mechanisms to carry out the protection. In reality, no one can consistently identify the appropriate infant industries to support.
  4. Revenue: Tariffs can raise revenue for the government, but they are not capable of funding the current levels of government spending for many developed nations.
The “Chicken War” of the 1960s was a trade dispute between the United States and the European Economic Community (EEC), triggered by the EEC’s introduction of tariffs on imported chicken to protect its domestic poultry industry, leading to U.S. retaliatory tariffs on trucks and other goods. Graphic courtesy of the Library of Congress.

Cons
  1. Economic Inefficiency: Let us imagine a scenario where person A and person B are trading with each other. The government then imposes a tax on the purchase of person B’s goods. Both person A and B are affected because person A must pay more for person B’s goods and person B cannot sell as many goods to person A. The same logic applies to tariffs. This is the process by which tariffs distort markets and lead to deadweight economic loss. Often, people assume that tariffs help the imposing country and hurt the country forced to pay them. This is incorrect. Tariffs hurt both countries, as both persons A and B are harmed due to the loss of economic efficiency. In addition, protectionist tariffs shelter domestic industries from competition, thereby allowing them to be less efficient. Lessened efficiency leads to poorer quality products and higher prices. Lastly, tariffs can disrupt supply chains that span multiple countries. For example, US tariffs on Chinese goods increased costs for American manufacturers that rely on parts imported from China.5 Disrupting supply chains also leads to economic inefficiency. (A minimal numeric sketch of this two-person example appears after this list.)
  2. Higher Prices for Consumers: In our thought experiment above, person A must pay more for person B’s goods after the tariff is imposed. This is how tariffs raise prices for domestic consumers.
  3. Trade Wars: Tariffs can spark trade wars that end up greatly decreasing the amount of trade across nations. When country A imposes tariffs on country B, country B may react by imposing counter-tariffs on country A. This causes an expanding cycle that leads to further decreases in trade and economic efficiency. A well-known example is the “Chicken War” in the 1960s, in which the U.S. imposed a 25% tariff on light trucks from Western Europe, which is still in place today.
  4. Corruption: Once tariffs are enacted, they are politically difficult to remove. The costs of tariffs are thinly spread over millions of Americans, whereas the benefits are concentrated in a comparatively small number of people involved in specific industries. This makes the beneficiaries more politically motivated to maintain the tariffs than the general populace is to resist them, leading to long-lasting tariffs that aid a powerful few while harming the public.
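
To make the two-person thought experiment in item 1 concrete, here is a minimal numeric sketch in Python. Everything in it is a hypothetical assumption rather than data from this article: a linear demand curve for the imported good, a linear foreign supply curve, and a flat per-unit tariff of 10. It simply illustrates the pattern described above: the buyer’s price rises, the seller’s net price falls, fewer units trade, and part of the lost surplus (the deadweight loss) is destroyed rather than transferred to anyone.

```python
# Minimal sketch of a tariff in a one-good market with hypothetical linear curves.
# Demand: buyers purchase Qd = 100 - p at consumer price p.
# Supply: foreign sellers offer Qs = (p - tariff) - 20, since they keep p minus the tariff.

def equilibrium(tariff):
    # Market clears where Qd = Qs: 100 - p = (p - tariff) - 20
    p_consumer = (100 + 20 + tariff) / 2   # price paid by the buyer (person A)
    p_producer = p_consumer - tariff       # price kept by the seller (person B)
    quantity = 100 - p_consumer            # units actually traded
    return p_consumer, p_producer, quantity

free_p, free_pp, free_q = equilibrium(tariff=0)
tar_p, tar_pp, tar_q = equilibrium(tariff=10)

revenue = 10 * tar_q                            # tariff revenue collected by the government
deadweight_loss = 0.5 * 10 * (free_q - tar_q)   # surplus lost on trades that no longer happen

print(f"Free trade:  buyer pays {free_p:.0f}, seller keeps {free_pp:.0f}, {free_q:.0f} units trade")
print(f"With tariff: buyer pays {tar_p:.0f}, seller keeps {tar_pp:.0f}, {tar_q:.0f} units trade")
print(f"Tariff revenue: {revenue:.0f}; deadweight loss (value destroyed outright): {deadweight_loss:.0f}")
```

Under these made-up numbers, the buyer’s price rises from 60 to 65 and the seller’s net price falls from 60 to 55, so both sides bear part of the tax, and five units of mutually beneficial trade simply never happen.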

Although tariffs have some theoretical benefits in specific situations, the competence and incentives of the U.S. political system often do not allow these benefits to come to fruition. Tariffs almost always come with the cost of economic inefficiency, which is why economists generally agree that tariffs do more harm than good. Does the increase in U.S. tariffs, particularly on China, since 2016 mark the end of the era of reciprocity or is it just a blip? The answer will affect the economic well-being of Americans and people around the world. 

The history of tariffs described in this article is largely based on Clashing Over Commerce by Douglas Irwin (2017).

The author would like to thank Professor John L. Turner at the University of Georgia for his invaluable input.

Categories: Critical Thinking, Skeptic

Why Tariffs Decrease the Wealth of Nations

Mon, 04/07/2025 - 5:14am

Throughout the early modern period—from the rise of the nation state through the nineteenth century—the predominant economic ideology of the Western world was mercantilism, or the belief that nations compete for a fixed amount of wealth in a zero-sum game: the +X gain of one nation means the –X loss of another nation, with the +X and –X summing to zero. The belief at the time was that in order for a nation to become wealthy, its government must run the economy from the top down through strict regulation of foreign and domestic trade, enforced monopolies, regulated trade guilds, subsidized colonies, accumulation of bullion and other precious metals, and countless other forms of economic intervention, all to the end of producing a “favorable balance of trade.” Favorable, that is, for one nation over another nation. As President Donald Trump often repeats, “they’re ripping us off!” That is classic mercantilism and economic nationalism speaking.

Adam Smith famously debunked mercantilism in his 1776 treatise An Inquiry into the Nature and Causes of the Wealth of Nations. Smith’s case against mercantilism is both moral and practical. It is moral, he argued, because: “To prohibit a great people…from making all that they can of every part of their own produce, or from employing their stock and industry in the way that they judge most advantageous to themselves, is a manifest violation of the most sacred rights of mankind.”1 It is practical, he showed, because: “Whenever the law has attempted to regulate the wages of workmen, it has always been rather to lower them than to raise them.”2

Producers and Consumers

Adam Smith’s The Wealth of Nations was one long argument against the mercantilist system of protectionism and special privilege that in the short run may benefit producers but which in the long run harms consumers and thereby decreases the wealth of a nation. All such mercantilist practices benefit the producers, monopolists, and their government agents, while the people of the nation—the true source of a nation’s wealth—remain impoverished: “The wealth of a country consists, not of its gold and silver only, but in its lands, houses, and consumable goods of all different kinds.” Yet, “in the mercantile system, the interest of the consumer is almost constantly sacrificed to that of the producer.”3

Adam Smith statue in Edinburgh, Scotland. Photo by K. Mitch Hodge / Unsplash

The solution? Hands off. Laissez Faire. Lift trade barriers and other restrictions on people’s economic freedoms and allow them to exchange as they see fit for themselves, both morally and practically. In other words, an economy should be consumer driven, not producer driven. For example, under the mercantilist zero-sum philosophy, cheaper foreign goods benefit consumers but they hurt domestic producers, so the government should impose protective trade tariffs to maintain the favorable balance of trade.  

But who is being protected by a protective tariff? Smith showed that, in principle, the mercantilist system only benefits a handful of producers while the great majority of consumers are further impoverished because they have to pay a higher price for foreign goods. The growing of grapes in France, Smith noted, is much cheaper and more efficient than in the colder climes of his homeland, for example, where “by means of glasses, hotbeds, and hotwalls, very good grapes can be raised in Scotland” but at a price thirty times greater than in France. “Would it be a reasonable law to prohibit the importation of all foreign wines, merely to encourage the making of claret and burgundy in Scotland?” Smith answered the question by invoking a deeper principle: 

What is prudence in the conduct of every private family, can scarce be folly in that of a great kingdom. If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them.4

This is the central core of Smith’s economic theory: “Consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to, only so far as it may be necessary for promoting that of the consumer.” The problem is that the system of mercantilism “seems to consider production, and not consumption, as the ultimate end and object of all industry and commerce.”5 So what?

When production is the object, and not consumption, producers will appeal to top-down regulators instead of bottom-up consumers. Instead of consumers telling producers what they want to consume, government agents and politicians tell consumers what, how much, and at what price the products and services will be that they consume. This is done through a number of different forms of interventions into the marketplace. Domestically, we find examples in tax favors for businesses, tax subsidies for corporations, regulations (to control prices, imports, exports, production, distribution, and sales), and licensing (to control wages, protect jobs).6 Internationally, the interventions come primarily through taxes under varying names, including “duties,” “imposts,” “excises,” “tariffs,” “protective tariffs,” “import quotas,” “export quotas,” “most-favored nation agreements,” “bilateral agreements,” “multilateral agreements,” and the like. 

Such agreements are never between the consumers of two nations; they are between the politicians and the producers of the nations. Consumers have no say in the matter, with the exception of indirectly voting for the politicians who vote for or against such taxes and tariffs. And they all sum to the same effect: the replacement of free trade with “fair trade” (fair for producers, not consumers), which is another version of the mercantilist “favorable balance of trade” (favorable for producers, not consumers). Mercantilism is a zero-sum game in which producers win by the reduction or elimination of competition from foreign producers, while consumers lose by having fewer products from which to choose, along with higher prices and often lower quality products. The net result is a decrease in the wealth of a nation.

The principle is as true today as it was in Smith’s time, and we still hear the same objections Smith did: “Shouldn’t we protect our domestic producers from foreign competition?” And the answer is the same today as it was two centuries ago: no, because “consumption is the sole end and purpose of all production.”  

Nonzero Economics

The founders of the United States and the framers of the Constitution were heavily influenced by the Enlightenment thinkers of England and the continent, including and especially Adam Smith. Nevertheless, it was not long after the founding of the country before our politicians began to shift the focus of the economy from consumption to production. The United States Constitution, drafted in 1787 and ratified the following year, includes Article 1, Section 8: “The Congress shall have the power to lay and collect taxes, duties, imposts, and excises to cover the debts of the United States.” As an amusing exercise in bureaucratic wordplay, consider the common usages of these terms in the Oxford English Dictionary:

Tax: “a compulsory contribution to the support of government”
Duty: “a payment to the public revenue levied upon the import, export, manufacture, or sale of certain commodities”
Impost: “a tax, duty, imposition levied on merchandise”
Excise: “any toll or tax.” 

(Note the oxymoronic phrase “compulsory contribution” in the first definition.)

A revised Article 1, Section 8 reads: “The Congress shall have the power to lay and collect taxes, taxes, taxes, and taxes to cover the debts of the United States.”

A revised Article 1, Section 8 of the Constitution reads: “The Congress shall have the power to lay and collect taxes, taxes, taxes, and taxes to cover the debts of the United States.” Photo by Anthony Garand / Unsplash

In the U.K. and on the continent, mercantilists dug in while political economists, armed with the intellectual weapons provided by Adam Smith, fought back, wielding the pen instead of the sword. The nineteenth-century French economist Frédéric Bastiat, for example, was one of the first political economists after Smith to show what happens when the market depends too heavily on top-down tinkering from the government. In his wickedly raffish The Petition of the Candlemakers, Bastiat satirizes special interest groups—in this case candlemakers—who petition the government for special favors: 

We are suffering from the ruinous competition of a foreign rival who apparently works under conditions so far superior to our own for the production of light, that he is flooding the domestic market with it at an incredibly low price.... This rival... is none other than the sun.... We ask you to be so good as to pass a law requiring the closing of all windows, dormers, skylights, inside and outside shutters, curtains, casements, bull’s-eyes, deadlights and blinds; in short, all openings, holes, chinks, and fissures.7

Zero-sum mercantilist models hung on through the nineteenth and twentieth centuries, even in America. Since the income tax was not passed until 1913 through the Sixteenth Amendment, for most of the country’s first century the practitioners of trade and commerce were compelled to contribute to the government through various other taxes. Since foreign trade was not able to meet the growing debts of the United States, and in response to the growing size and power of the railroads and political pressure from farmers who felt powerless against them, in 1887 the government introduced the Interstate Commerce Commission. The ICC was charged with regulating the services of specified carriers engaged in transportation between states, beginning with railroads, but then expanded the category to include trucking companies, bus lines, freight carriers, water carriers, oil pipelines, transportation brokers, and other carriers of commerce.8 Regardless of its intentions, the ICC’s primary effect was interference with the freedom of people to buy and sell between the states of America.

The ICC was followed in 1890 with the Sherman Anti-Trust Act, which declared: “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony,” resulting in a massive fine, jail, or both. 

When stripped of its obfuscatory language, the Sherman Anti-Trust Act, together with the precedent-setting cases decided in the courts in the century since it was passed, allows the government to indict an individual or a company for one or more of four crimes:

  1.  Price gouging (charging more than the competition) 
  2. Cutthroat competition (charging less than the competition)
  3. Price collusion (charging the same as the competition), and 
  4. Monopoly (having no competition).9, 10 

This was Katy-bar-the-door for anti-business legislators and their zero-sum mercantilist bureaucrats to restrict the freedom of consumers and producers to buy and sell, and they did with reckless abandon.  

Completing Smith’s Revolution

Tariffs are premised on a win-lose, zero-sum, producer-driven economy, which ineluctably leads to consumer loss. By contrast, a win-win, nonzero, consumer-driven economy leads to consumer gain. Ultimately, Smith held, a consumer-driven economy will produce greater overall wealth in a nation than will a producer-driven economy. Smith’s theory was revolutionary because it is counterintuitive. Our folk economic intuitions tell us that a complex system like an economy must have been designed from the top down, and thus it can only succeed with continual tinkering and control from the top. Smith amassed copious evidence to counter this myth—evidence that continues to accumulate two and a half centuries later—to show that, in the modern language of complexity theory, the economy is a bottom-up self-organized emergent property of complex adaptive systems. 

Adam Smith launched a revolution that has yet to be fully realized. A week does not go by without a politician, economist, or social commentator bemoaning the loss of American jobs, American manufacturing, and American products to foreign jobs, foreign manufacturing, and foreign products. Even conservatives—purportedly in favor of free markets, open competition, and less government intervention in the economy—have few qualms about employing protectionism when it comes to domestic producers, even at the cost of harming domestic consumers.

Citing the need to protect the national economic interest—and Harley-Davidson—Ronald Reagan raised tariffs on Japanese motorcycles from 4.4 percent to 49.4 percent. Photo by Library of Congress / Unsplash

Even the icon of free market capitalism, President Ronald Reagan, compromised his principles in 1983 to protect the Harley-Davidson Motor Company when it was struggling to compete against Japanese motorcycle manufacturers that were producing higher quality bikes at lower prices. Honda, Kawasaki, Yamaha, and Suzuki were routinely undercutting Harley-Davidson by $1500 to $2000 a bike in comparable models.

On January 19, 1983, the International Trade Commission ruled, on a petition from Harley-Davidson, that foreign motorcycle imports were a threat to domestic motorcycle manufacturers, issuing a 2-to-1 finding of injury; the company had complained that it could not compete with foreign motorcycle producers.10 On April 1, Reagan approved the ITC recommendation, explaining to Congress, “I have determined that import relief in this case is consistent with our national economic interest,” thereby raising the tariff from 4.4 percent to 49.4 percent for a year, a ten-fold tax increase on foreign motorcycles that was absorbed by American consumers. The protective tariff worked to help Harley-Davidson recover financially, but it was American motorcycle consumers who paid the price, not Japanese producers. As ITC Chairman Alfred E. Eckes explained about his decision: “In the short run, price increases may have some adverse impact on consumers, but the domestic industry’s adjustment will have a positive long-term effect. The proposed relief will save domestic jobs and lead to increased domestic production of competitive motorcycles.”11
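
To see roughly why that increase fell on American buyers, here is a back-of-the-envelope sketch. The $4,000 import price is a hypothetical assumption for illustration only; the article reports the $1,500 to $2,000 price advantage, not actual list prices, and the sketch assumes the duty is passed through to the buyer.

```python
# Hypothetical arithmetic for the 1983 motorcycle tariff: the $4,000 import price
# is an assumed illustration, not a figure from the article.
import_price = 4_000                  # assumed pre-tariff price of a Japanese heavyweight bike, in dollars
old_rate, new_rate = 0.044, 0.494     # tariff rates before and after Reagan's April 1983 action

old_duty = import_price * old_rate    # duty owed at 4.4%
new_duty = import_price * new_rate    # duty owed at 49.4%

print(f"Duty at 4.4%:  ${old_duty:,.0f}")
print(f"Duty at 49.4%: ${new_duty:,.0f}")
print(f"Extra cost per bike, if passed through to the American buyer: ${new_duty - old_duty:,.0f}")
```

On those assumed numbers the added duty is about $1,800 per bike, roughly the size of the price advantage the Japanese makers had been offering, which is the sense in which the tariff was absorbed by American consumers rather than paid by Japanese producers.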

Photo by Lisanto 李奕良 / Unsplash

Whenever free trade agreements are proposed that would allow domestic manufacturers to produce their goods cheaper overseas and thereby sell them domestically at a much lower price than they could have with domestic labor, politicians and economists, often under pressure from trade unions and political constituents, routinely respond disapprovingly, arguing that we must protect our domestic workers. Recall Presidential candidate Ross Perot’s oft-quoted 1992 comment in response to the North American Free Trade Agreement (NAFTA) about the “giant sucking sound” of jobs being sent to Mexico from the United States.

In early 2007, the Nobel laureate economist Edward C. Prescott lamented that economists invest copious time and resources countering the myth that it is “the government’s economic responsibility to protect U.S. industry, employment and wealth against the forces of foreign competition.” That is not the government’s responsibility, says Prescott, echoing Smith, which is simply “to provide the opportunity for people to seek their livelihood on their own terms, in open international markets, with as little interference from government as possible.” Prescott shows that “those countries that open their borders to international competition are those countries with the highest per capita income” and that open economic borders “is the key to bringing developing nations up to the standard of living enjoyed by citizens of wealthier countries.”12

“Protectionism is seductive,” Prescott admits, “but countries that succumb to its allure will soon have their economic hearts broken. Conversely, countries that commit to competitive borders will ensure a brighter economic future for their citizens.” But why exactly do open economic borders, free trade, and international competition lead to greater wealth for a nation? Writing over two centuries after Adam Smith, Prescott reverberates the moral philosopher’s original insight:

It is openness that gives people the opportunity to use their entrepreneurial talents to create social surplus, rather than using those talents to protect what they already have. Social surplus begets growth, which begets social surplus, and so on. People in all countries are motivated to improve their condition, and all countries have their share of talented risk-takers, but without the promise that a competitive system brings, that motivation and those talents will only lie dormant.13

The Evolutionary Origins of Tariffs and Zero-Sum Economics

Why is mercantilist zero-sum protectionism so pervasive and persistent? Bottom-up invisible hand explanations for complex systems are counterintuitive because of our folk economic propensity to perceive designed systems to be the product of a top-down designer. But there is a deeper reason grounded in our evolved social psychology of group loyalty. The ultimate reason that Smith’s revolution has not been fulfilled is that we evolved a propensity for in-group amity and between-group enmity, and thus it is perfectly natural to circle the wagons and protect one’s own, whoever or whatever may be the proxy for that group. Make America Great Again! 

For the first 90,000 years of our existence as a species we lived in small bands of tens to hundreds of people. In the last 10,000 years some bands evolved into tribes of thousands, some tribes developed into chiefdoms of tens of thousands, some chiefdoms coalesced into states of hundreds of thousands, and a handful of states conjoined together into empires of millions. The attendant leap in food-production and population that accompanied the shift to chiefdoms and states allowed for a division of labor to develop in both economic and social spheres. Full-time artisans, craftsmen, and scribes worked within a social structure organized and run by full-time politicians, bureaucrats, and, to pay for it all, tax collectors. The modern state economy was born.

In this historical trajectory our group psychology evolved and along with it a propensity for xenophobia—in-group good, out-group bad. In the Paleolithic social environment in which our moral commitments evolved, one’s fellow in-group members consisted of family, extended family, friends, and community members who were well known to each other. To help others was to help oneself. Those groups who practiced in-group harmony and between-group antagonism would have had a survival advantage over those groups who experienced within-group social divide and decoherence, or haphazardly embraced strangers from other groups without first establishing trust. Because our deep social commitments evolved as part of our behavioral repertoire of responses for survival in a complex social environment, we carry the seeds of such in-group inclusiveness today. The resulting within-group cohesiveness and harmony carries with it a concomitant tendency for between-group xenophobia and tribalism that, in the context of a modern economic system, leads to protectionism and mercantilism.

And tariffs. We must resist the tribal temptation.

Categories: Critical Thinking, Skeptic

Jonestown: Cult Dynamics and Survivorship

Fri, 04/04/2025 - 4:00pm

Annie Dawid’s most recent novel revisits the Jonestown Massacre from the perspective of the people who were there, taking the spotlight off cult leader Jim Jones and rehumanizing the “mindless zombies” who followed one man from their homes in the U.S. to their deaths in Guyana. But even as our notion of victimhood improves, we are forced to confront an ugly truth: in the almost fifty years since Jonestown, large-scale cult-related death has not gone away.

On the 18th of November, 2024, fiction author Annie Dawid’s sixth book, Paradise Undone: A Novel of Jonestown, celebrated its first birthday on the same day as the forty-sixth anniversary of its subject matter, an incident that saw the largest instance of intentional U.S. citizen death in the 20th Century and introduced the world to the horrors and dangers of cultism—The Jonestown Massacre. 

A great deal has been written on Jonestown after 1978, although mostly non-fiction, and the books Raven: The Untold Story of the Rev. Jim Jones and His People (1982) and The Road to Jonestown: Jim Jones and Peoples Temple (2017) are considered some of the most thorough investigations into what happened in the years in the lead up to the massacre. Many historical and sociological studies of Jonestown focus heavily on the psychology and background of the man who ordered 917 men, women, and children to die with him in the Guyanese jungle—The Reverend Jim Jones. 

For cult survivors beginning the difficult process of unpacking and rebuilding after their cult involvement—or for those who lose family members or friends to cult tragedy—the shame of cult involvement and the public’s misconception that cult recruitment stems from a psychological or emotional fault are challenges to overcome. 

And because discussions of cult-related incidents tend to give disproportionate attention to cult leaders, who are often classified as pathological narcissists or as having Cluster-B personality disorders, there is a risk that every new article or book on Jonestown simply feeds the beast, often at the expense of recognizing the victims.

An aerial view of the dead in Jonestown.

Annie Dawid, however, uses fiction to avoid the trap of revisiting Jonestown through the lens of Jones, essentially removing him and his hold over the Jonestown story.

“He’s a man that already gets too much air time,” she says, “The humanity of 917 people gets denied by omission. That’s to say their stories don’t get told, only Jones’ story gets told over and over again.”

“I read so many books about him. I was like enough,” she says, “Enough of him.” 

Jones of Jonestown 

By all accounts, Jones, in his heyday, was a handsome man. 

An Internet image search for Jones pulls up an almost iconic, counter-culture cool black-and-white photo of a cocksure man in aviator sunglasses and a dog collar, his lips parted as if the photographer has caught him in the middle of delivering some kind of profundity. 

Jones’s signature aviator sunglasses may once have been a fashion statement for a hip priest amongst the Bay Area kids, but in later photographs he never seems to be without them, as an increasing amphetamine and tranquilizer dependency permanently shaded the areas under his eyes.

Jim Jones in 1977. By Nancy Wong

“Jim Jones is not just a guy with an ideology; he was a preacher with fantastic charisma,” says cult expert Mike Garde, the director of Dialogue Ireland, an independent Irish charity that educates the public on cultism and assists its victims. “And this charisma would have been unable to bring people to Guyana if he had not been successful at doing it in San Francisco,” he adds.

Between January 1977 and August 1978, almost 900 members of the Peoples Temple gave up their jobs and life savings and left family members behind in the U.S. to relocate to Guyana and move into their new home: the Peoples Temple Agricultural Mission, an agricultural commune inspired by Soviet socialist values.

On November 19th, 1978, U.S. Channel 7 interrupted its normal broadcast with a special news report in which presenter Tom Van Amburg advised viewer discretion and described the horror of hardened newsmen upon seeing the scenes at Jonestown, which had “shades of Auschwitz.”

As a story, the details of Jonestown feel like a work of violent fiction, like a prototype Cormac McCarthy novel: a Heart of Darkness-esque cautionary tale of Wild West pioneering gone wrong in a third-world country, with Jones cast in the lead role.

“I feel like there’s a huge admiration for bad boys, and if they’re good-looking, that helps too,” Dawid says, “This sort of admiration of the bad boy makes it that we want to know, we’re excited by the monster—we want to know all about the monster.” 

Dawid understands Jones’ allure, his hold over the Jonestown narrative as well as the public’s attention, but “didn’t want to indulge that part of me either,” she says. 

“But I wasn’t tempted to, because I learned about so many interesting people that were in the story but [had] never been the subjects of the story,” she adds. “So I wanted to make them the subjects.” 

Screenshot of the website for the award-winning film Jonestown: The Life and Death of Peoples Temple by Stanley Nelson, Marcia Smith, and Noland Walker

The People of the Peoples Temple 

For somebody who was there from the Peoples Temple’s modest Pentecostal beginnings in 1954 until the end in Guyana, Marceline Jones received very little attention in the years after Jonestown. 

“She was there—start to finish. For me, she made it all happen, and nobody wrote anything about her,” Dawid says, “The woman behind the man doesn’t exist.” 

Even for Garde, Marceline was another anonymous victim, of no significance beyond the surname connecting her to her husband: “My initial read of Marceline was that she was just a cipher, she wasn’t a real person,” he says. “She didn’t even register on my dial.” 

Dawid gives Marceline an existence, and in her book, she’s a “superwoman” juggling her duties as a full-time nurse and the Peoples Temple—a caring, selfless individual who lives in the service of others, mainly the children and the elderly of the Peoples Temple. 

“In the sort of awful way, she’s this smart, interesting, energetic woman, but she can’t escape the power of her husband,” Dawid says, “It’s just very like domestic violence where the woman can’t get away from the abuser [and] I have had so much feedback from older women who felt that they totally related to her.” 

The woman behind the man doesn’t exist.

Selfless altruism was a shared characteristic of the Peoples Temple, whose members spent most of their time on some kind of charity work, from handing out food to the homeless to organizing clothes drives. 

“You know, I did grow to understand the whole sort of social justice beginnings of Peoples Temple,” Dawid says. “I came to admire the Peoples Temple as an organization.” 

“Social justice, racism, and caring for old people, that was a big part of the Peoples Temple. And so it made sense why an altruistic, smart, young person would say, ‘I want to be part of this,’” she adds. 

Guyana 

For Dawid, where it all went down is just as important—and arguably just as overlooked in the years after 1978—as the people who went there. 

Acknowledging the incredible logistical feat of moving almost 1,000 people, many of them passport-less, to a foreign country, Dawid sees the small South American nation as another casualty of Jonestown: “I had to have a Guyanese voice in my book because Guyana was another victim of Jones,” she says. 

English-speaking Guyana—recently free of British colonial rule and leaning toward socialism under Prime Minister Forbes Burnham—offered Jones a haven from the increasing scrutiny back in the U.S. amidst accusations of fraud and sexual abuse. It was “a place to escape the regulation of the U.S. and enjoy the weak scrutiny of the Guyanese state,” according to Garde. 

“He was not successful at covering up the fact he had a dual model: he was sexually abusing women, taking money, and accruing power to himself, and he had to do it in Guyana,” Garde adds, “He wanted a place where he could not be observed.” 

There may be a temptation to overstate the degree to which 1978 left an indelible, defining mark on the reputation of a country in its burgeoning years as an independent nation, but in many American newspapers in the years afterward, one could not be discussed without the other: “So it used to be that if you read an article that mentioned Guyana, it always mentioned Jonestown,” Dawid says. 

In the few reports interested in the Guyanese perspective after Jonestown, locals express a range of feelings, from wanting to forget the tragedy ever happened to turning the site into a destination for dark tourism. 

However, the country’s 2015 discovery of offshore oil means that—in the pages of some outlets and the minds of some readers—Jonestown is no longer the only thing synonymous with Guyana: “I read an article in the New York Times about Guyana’s oil,” Dawid says, “and it didn’t mention Jonestown.” 

From victimhood to survivorship: out of the darkness and into the light…

Victimhood to Survivorship 

According to Garde, the public’s perception of cult victims as mentally defective, obsequious followers, or—at worst—somehow deserving of their fate is not unique to victims of religious or spiritual cults. 

“Whenever we use the words ‘cult’, ‘cultism’ or ‘cultist’ we are referring solely to the phenomenon where troubling levels of undue psychological influence may exist. This phenomenon can occur in almost any group or organization,” reads Dialogue Ireland’s mission statement. 

“Victim blaming is something that is now so embedded that we take it for granted. It’s not unique to cultism contexts—it exists in all realms where there’s a victim-perpetrator dynamic,” Garde says, “People don’t want to take responsibility or face what has happened, so it can be easier to ignore or blame the victim, which adds to their trauma.” 

While blaming and shaming prevent victims from reporting crimes and seeking help, there do seem to be recent improvements in how they are treated, regardless of the type of abuse: 

“We do seem to be improving our concept of victims, and we are beginning to recognize the fact that the victims of child sexual abuse need to be recognized, the #MeToo movement recognizes what happened to women,” says Garde, “They are now being seen and heard. There’s an awareness of victimhood and at the same time, there’s also a movement from victimhood to survivorship.” 

Paradise Undone: A Novel of Jonestown focuses on how the survivors process and cope with the fallout of their traumatic involvement with or connection to Jonestown, making the very poignant observation that cult involvement does not end when you escape or leave—the residual effects persist for many years afterward. 

“It’s an extremely vulnerable period of time,” Garde points out, “If you don’t get out of that state, in that sense of being a victim, that’s a very serious situation. We get stuck in the past or frozen in the present and can’t move from being a victim to having a future as a survivor.” 

Support networks and resources are flourishing online to offer advice and comfort to survivors: “I think the whole cult education movement has definitely humanized victims of cults,” Dawid points out, “And there are all these cult survivors who have their own podcasts and cult survivors who are now counseling other cult survivors.” 

At the very least, these can help reduce the stigma around abuse or kickstart the recovery process; however, Garde sees a potential issue in the cult survivors counseling cult survivors dynamic: “There can be a danger of those operating such sites thinking that, as former cult members, they have unique insight and don’t recognize the expertise of those who are not former members,” he says, “We have significant cases where ex-cultists themselves become subject to sectarian attitudes and revert back to cult behavior.” 

Whenever we use the words ‘cult’, ‘cultism’ or ‘cultist’ we are referring solely to the phenomenon where troubling levels of undue psychological influence may exist.

And while society’s treatment and understanding of cult victims may be changing, Garde is frustrated with the overall lack of support the field of cult education receives; its warnings seem to fall on deaf ears, as they did in the lead-up to Jonestown. 

The public’s understanding seems to be changing, but the field of cult studies still doesn’t get the support or understanding it needs from the government or the media. I can’t get through to journalists and government people, or they don’t reply. It’s so just unbelievably frustrating in terms of things not going anywhere. 

One fundamental issue remains, and some might say it has gotten worse in the years since Jonestown: “The attitude there is absolutely like pro-survivor, pro-victim, so that has changed,” Dawid says. “You know, it does seem like there are more cults than ever, however.” 

A History of Violence 

The International Cultic Studies Association’s (ICSA) Steve Eichel estimates there are around 10,000 cults operating in the U.S. alone. Regardless of the number, in the decades since Jonestown, there has been no shortage of cult-related tragedies resulting in a massive loss of life in the U.S. and abroad. 

The trial of Paul Mackenzie, the Kenyan pastor behind the 2023 Shakahola Forest Massacre (also known as the Kenyan starvation cult), is currently underway. Mackenzie has pleaded not guilty to charges of murder, child torture, and terrorism in connection with the deaths of 448 people, while Kenyan pathologists continue working to identify all of the exhumed bodies. 

“It’s frustrating and tragic to see events like this still happening internationally, so it might seem like we haven’t progressed in terms of where we’re at,” Garde laments. 

Jonestown may be seen as the progenitor of the modern cult tragedy, the incident against which other cult incidents are compared, but for Dawid, the 1999 Columbine shooting in Colorado, which left 13 people dead and 24 injured, shocked American society in much the same way and left behind a similar legacy. 

“I see a kind of similarity in the impact it had,” Dawid says, “Even though there had been other school shootings before Columbine….I think it did a certain kind of explosive number on American consciousness in the same way that Jones did, not just on American consciousness, but world consciousness about the danger of cults.” 

Victim blaming is something that is now so embedded that we take it for granted.

Just as everyone understands that Jonestown refers to the 917 dead U.S. citizens in the Guyanese jungle, the word “Columbine” is now a byword for school shootings. However, if you want to use their official, unabbreviated titles, you’ll find both events share the same surname—massacre. 

“All cult stories will mention Jonestown, and all school shootings will [mention] Columbine,” Dawid points out. 

In Memoriam 

The official death toll on November 18, 1978, is 918, but that figure includes the man who couldn’t bring himself to follow his own orders. 

According to the evidence, Jim Jones and the nurse Annie Moore were the only two to die of gunshot wounds at Jonestown. The entry wound on Jones’ left temple makes it unlikely that the fatal shot was fired by the right-handed Jones himself. It is believed that Jones ordered Moore to shoot him first, confirming, for Garde, Jones’ cowardice: “We saw his pathetic inability to die as he set off a murder-suicide. He could order others to kill themselves, but he could not take the same poison. He did not even have the guts to shoot himself.” 

On the anniversary of Jonestown (also International Cult Awareness Day), people gather at the Jonestown Memorial at Evergreen Cemetery in Oakland, California, but the 2011 unveiling of the memorial revealed something problematic. Nestled among all the engraved names of the victims is the name of the man responsible for it all: James Warren Jones. 

The inclusion of Jones’ name has outraged many in attendance, and there are online petitions calling for it to be removed. Garde agrees, and just as Dawid retired Jones from his lead role in the Jonestown narrative, he believes Jones’ name should be physically removed from the memorial. 

“He should be definitely excluded and there should be a sign saying very clearly he was removed because of the fact that it was totally inappropriate for him to be connected to this,” he says. “It’s like the equivalent of a murderer being added as if he’s a casualty.” 

In the years since she first started researching the book, Dawid feels that the focus on Jones has begun to shift: “There’s been a lot written since then, and I feel like some of the material that’s been published since then has tried to branch out from that viewpoint,” she says. 

It’s frustrating and tragic to see events like this still happening internationally.

Modern re-examinations challenge the longstanding framing of Jonestown as a mass suicide, with “murder-suicide” providing a better description of what unfolded, and the 2018 documentary Jonestown: The Women Behind the Massacre explores the actions of the female members of Jones’ inner circle. 

While it may be difficult to look at Jonestown and see anything positive, with every new examination of the tragedy that avoids making him the central focus, Jones’ power over the Peoples Temple, and the story of Jonestown, seems to wane. 

And looking beyond Jones reveals acts of heroism that otherwise go unnoticed: “The woman who escaped and told everybody in the government that this was going to happen. She’s a hero, and nobody listened to her,” Dawid says. 

That person is Jonestown defector Deborah Layton, author of the memoir Seductive Poison, whose 1978 affidavit warned the U.S. government of Jones’ plans for a mass suicide. 

And in the throes of the chaos of November 18, a single person courageously stood up and denounced the actions that would define the day. 

For Christine, who refused to submit.

Dawid’s book is dedicated to the memory of sixty-year-old Christine Miller, the only person known to have spoken out that day against Jones and his final orders. Her protests can be heard on the 44-minute “Death Tape”—an audio recording of the final moments of Jonestown. 

The dedication on the opening page of Paradise Undone: A Novel of Jonestown reads: “For Christine, who refused to submit.” 

Perceptions of Jonestown may be changing, so I ask Dawid how the survivors and family members of the victims feel about how Jonestown is represented after all these years. 

“It’s a really ugly piece of American history, and it had been presented for so long as the mass suicide of gullible, zombie-like druggies,” Dawid says, “We’re almost at the 50th anniversary, and the derision of all the people who died at Jonestown as well as the focus on Jones as if he were the only important person, [but] I think they’re encouraged by how many people still want to learn about Jonestown.” 

“They’re very strong people,” Dawid tells me.
