In his new book Cross Purposes: Christianity’s Broken Bargain with Democracy, Jonathan Rauch argues that Christianity is a “load-bearing wall” in American democracy. As Christianity has been increasingly co-opted by politics, Rauch believes it is straying from its core tenets and failing to serve its traditional role as a spiritual and civic ballast. He blames this shift for the decline of religiosity in the United States, as well as collapsing faith in democratic institutions.
The Rise of the Nones and Its Effects

Rauch writes that his book is “penitence for the dumbest thing I ever wrote,” a 2003 essay for The Atlantic about the rise of what he called “apatheism”—a “disinclination to care all that much about one’s own religion, and an even stronger disinclination to care about other people’s.” The essay argued that the growing number of people who aren’t especially concerned about religion is a “major civilizational advance” and a “product of a determined cultural effort to discipline the religious mindset.” Rauch cites John Locke’s case for religious tolerance and pluralism to argue that the emergence of apatheism represented the hard-fought taming of “divisive and volatile” religious forces.
In Cross Purposes, Rauch explains why he now repudiates this view. First, he argues that the decline of religion has led Americans to import “religious zeal into secular politics.” Second, he believes Christianity is losing its traditional role in shaping culture—the faith now reflects American society and culture instead of the other way around—and argues that this has been corrosive to the civic health of the country. Third, Rauch claims that “there is no secular substitute for the meaning and moral grounding which religious life provides.”
All of these arguments rely on shaky assumptions about modern religiosity and the influence of secularism in America. In 2003, Rauch rightly questioned the idea that “everyone brims with religious passions.” While he acknowledged that human beings appear “wired” to believe, he also recognized that secularization, in the aggregate, is a real phenomenon. He now rejects this observation in favor of the increasingly fashionable view that religiosity never really declines but can only be repurposed: “We see this in the soaring demand for pseudo-religions in American life,” he writes. These pseudo-religions, he observes, include everything from “wellness culture” to wokeness and political extremism.
But Americans have held quasi-religious, supernatural beliefs throughout history—including during periods of much greater religiosity than today. The popularity of practices like astrology and tarot reading isn’t a recent development, and pagan religions like Wicca originated and spread in the God-fearing middle of the twentieth century. Belief in UFOs and extraterrestrial encounters surged in the 1940s and 1950s, an era when over 90 percent of Americans were Christians. In the early 1990s, 90 percent of Americans still identified as Christians compared to 63 percent today. But a 1991 Gallup poll of Americans found a wide array of paranormal and other supernatural beliefs—nearly half believed in extrasensory perception (ESP), 36 percent believed in telepathy, 29 percent believed houses could be haunted, 26 percent believed in clairvoyance, and 25 percent believed in astrology. Religious belief wasn’t much of a bulwark against these other beliefs. Even in cases when those beliefs contradicted traditional Christian teachings—such as reincarnation—significant proportions of Christians believed them.
Rauch argues that “it has become pretty evident that secularism has not been able to fill what has been called the ‘God-shaped hole’ in American life.” He continues: “In today’s America, we see evidence everywhere of the inadequacy of secular liberalism to provide meaning, exaltation, spirituality, transcendence, and morality anchored in more than the self.” But the evidence Rauch is referring to—aside from the latest spiritual fads, many of which have been adopted by religious and irreligious Americans alike—is thin. He cites a 2023 survey conducted by the Wall Street Journal and NORC, which found that the percentage of Americans who say religion is “very important” to them fell from 62 percent in 1998 to 39 percent in 2023. The survey also found that the proportion of Americans who regard patriotism, community involvement, and having children as “very important” declined over the same period. Meanwhile, a growing proportion of Americans said money is very important.
While it’s possible that secularization has played a role in making Americans more greedy and less community or family-oriented, it isn’t enough to merely assert that rising secularism is to blame for the decline of these values in the United States. Even if it’s true that secularism has some social costs, those costs would need to be weighed against its benefits. “As a homosexual American,” Rauch writes, “I owe my marriage—and the astonishing liberation I have enjoyed during my lifetime—to the advance of enlightened secular values.” Rauch argues that the Founders believed the governance system they set up would only work if it remained on a firm foundation of Christian morality. He cites John Adams, who declared that the Constitution was “made only for a moral and religious people.” But he also could have cited Thomas Jefferson’s trenchant criticisms of Christianity or Thomas Paine’s argument in The Age of Reason that many Christian doctrines are, in fact, deeply immoral, superstitious, and corrosive to human freedom.
While Rauch doesn’t appear to regard his own secularism as an impediment to patriotism or any other civic virtue—and thus he doesn’t need religion—he appears to believe that other Americans do. He invokes an argument made by Friedrich Nietzsche nearly 150 years ago: “When religious ideas are destroyed one is troubled by an uncomfortable emptiness and deprivation. Christianity, it seems to me, is still needed by most people in old Europe even today.” A central theme of Cross Purposes is a paternalistic view that, while it’s possible for some people to be good citizens and live lives of meaning without religion, it’s not possible for many others.
Without religion, Rauch argues, most people will be adrift with no grounding for their moral values. He claims that “moral propositions … must have some external validity.” He observes that “scientific or naturalistic” foundations for morality fail because they “anchor morality in ourselves and our societies, not in something transcendent.” He asks: “If there is no transcendent moral order anchored in a purposive universe—something like God-given laws—why must we not be nihilistic and despairing sociopaths?” However, he qualifies his argument…
Now, speaking as an atheist and a scientific materialist, I do not believe religions actually answer that question. Instead, they rely on a cheat, which they call God. They assume their conclusion by simply asserting the existence of a transcendent spiritual and moral order. They invent God and then claim he solves the problem. … The Christians who believe the Bible is the last word on morality—and, not coincidentally, that they are the last word on interpreting the Bible—are every bit as relativistic as I am; it’s just that I admit it and they don’t.

After presenting this powerful rejoinder to the religious pretension to have a monopoly on objective morality, Rauch writes:
That is neither here nor there. I am not important. What is important is that the religious framing of morality and mortality is plausible and acceptable to humans in a way nihilism and relativism are not and never will be.

But this is a false dichotomy—the choice isn’t between religious morality and nihilistic relativism. The choice is between religious morality and an attempt to develop an ethical system that is far more epistemically honest and humble. Instead of relying on the God “cheat”—a philosophical sleight of hand Rauch feels he is equipped to identify, but one he evidently assumes most people are incapable of understanding—we can attempt to develop and ground ethical arguments in ways that don’t require the invention of a supernatural, supervising entity. As he writes:
For most people, the idea that the universe is intended and ordered by God demonstrably provides transcendent meaning and moral grounding which scientific materialism demonstrably does not. … God may be (as I believe) a philosophical shortcut, but he gets you there—and I don’t.

But Rauch just admitted that religion only “gets you there” in an illusory way. It may be comforting for believers to convince themselves that there’s a divine superintendent who ensures that the universe is morally intelligible, but the religious are no closer to apprehending fundamental moral truth than nonbelievers.
Rauch also argues that “purely secular thinking about death will never satisfy the large majority of people.” While he personally doesn’t struggle with the idea of mortality, he once again assumes that a critical mass of people “rely on some version of faith to rescue them from the bleak nihilism of mortality.” While Rauch presents this view in a self-deprecating way—“I am weird!” he informs the reader—it’s difficult to shake the impression that he believes himself capable of accepting hard realities that others aren’t equipped to handle.
While Rauch believes his scientific materialism and secular morality is some kind of exotic oddity, these views were at the heart of the Enlightenment and they have informed centuries of Western philosophy. A fundamental aspect of Enlightenment thought was that religious authorities don’t have a monopoly on truth or morality. Secularists like David Hume resisted religious dogma and undermined the notion that morality must be grounded in God. Secularism was rare and dangerous hundreds of years ago, but it has gone mainstream. Pew reports that the share of Americans identifying as Christian fell from around 90 percent in 1990 to 63 percent in 2024. Gallup found that other measures of religiosity have declined as well, such as church attendance and membership. Pew has also recorded substantial and sustained declines in religious belief across Europe.
Rauch was right in 2003—plenty of people are capable of leading ethical and meaningful lives without religious faith. There are more of these people today than there used to be, and this doesn’t mean they have all been taken in by some God-shaped superstition or cult. The idea that there’s a latent level of religiosity in human societies that remains static over the centuries is dubious—in pre-Enlightenment Europe, religious belief was ubiquitous and mandated by law. Heretics were publicly executed. So were witches. Scientific discoveries were suppressed and punished if they were seen as conflicting with religious teachings. Regular people had extremely limited access to information that wasn’t audited by religious authorities. Science often blended seamlessly with pseudoscience (even Newton was fascinated by alchemy and other aspects of the occult, along with his commitment to biblical interpretation). Incessant religious conflict culminated in the Thirty Years’ War, which caused millions of deaths—with some estimates ranging as high as around a third of central Europe’s population.
The last execution for blasphemy in Britain was the hanging of Thomas Aikenhead, whose crimes included criticizing scripture and questioning the divinity of Jesus Christ, in Edinburgh in 1697. Aikenhead was a student at the University of Edinburgh, where Hume would attend just a couple of decades later. It wouldn’t be long before several of the most prominent philosophers in Europe were publicly making arguments that would have once sent them to the gallows. Drawing upon the work of these philosophers, less than a century after Aikenhead’s execution the United States would be founded on the principle of religious liberty. The world has secularized, and this is exactly what Rauch once believed it to be: a major civilizational advance.
When the Load-Bearing Wall Buckles

Rauch believes the decline of religion is to blame for many of the most destructive political pathologies in the United States today. He argues that the “collapse of the ecumenical churches has displaced religious zeal into politics, which is not designed to provide purpose in life and breaks when it tries.” According to Rauch, when the “load-bearing wall” of Christianity “buckles, all the institutions around it come under stress, and some of them buckle, too.” Much of Cross Purposes is an explanation for why this buckling has occurred.
Rauch organizes the book around what he describes as Thin, Sharp, and Thick Christianity. Thin Christianity describes a process whereby the faith is “no longer able, or no longer willing, to perform the functions on which our constitutional order depends.” One of these functions is the export of Christian values to the rest of society. “My claim,” he writes, “is not just that secular liberalism and religious faith are instrumentally interdependent but that each is intrinsically reliant on the other to build a morally and epistemically complete and coherent account of the world.” This is the claim we discussed in the first section—Rauch fails to demonstrate why Christianity is a necessary foundation for morality. He explains that people may find it easier to ground their values in God and why religion makes mortality easier to handle, but these are hardly arguments for the necessity of faith in the public square.
Rauch is particularly concerned about what he describes as Sharp Christianity—a version of the faith that is “not only secularized but politicized, partisan, confrontational, and divisive.” Instead of focusing on the teachings of Jesus, Rauch writes, these Christians “bring to church the divisive cultural issues they hear about on Fox News” and believe “Christianity is under attack and we have to do something about it.” Sharp Christianity is best captured by the overwhelming evangelical support for Donald Trump, who received roughly 80 percent of the evangelical vote in 2020 and 2024. An April Pew survey found that Trump’s support among evangelicals remains strong after his first 100 days in office—while 40 percent of Americans approve of his performance, this proportion jumps to 72 percent among evangelicals.
Rauch challenges the view held by many Sharp Christians that their faith is constantly under assault from Godless liberals. He critiques what he regards as an increasingly powerful “post-liberal” movement on the right, which argues that the liberal emphasis on individualism and autonomy has led to the atomization of society and the rejection of faith, family, and patriotism. Rauch acknowledges that liberalism on its own doesn’t inspire the same level of commitment as religion, and he rightly notes that this is by design: “the whole point of liberalism was to put an end to centuries of bloody coercion and war arising from religious and factional attempts to impose one group’s moral vision on everyone else.”
While Rauch does an excellent job critiquing the post-liberal right, he grants one of its central claims: that Christianity is the necessary glue that holds liberal society together. As he notes: “liberals understood they could not create and sustain virtue by themselves, and they warned against trying.” It’s true that liberalism is capacious enough to encompass many competing values and ideologies, but there are certain values that are in the marrow of liberal societies—such as individual rights, pluralism, and democracy. Mutual respect for these values can cultivate virtues like openness, tolerance, and forbearance.
Rauch emphasizes the achievements of liberalism: “constitutional democracy, mass prosperity, the scientific revolution, outlawing slavery, empowering women, and—not least from my point of view—tolerating atheistic homosexual Jews instead of burning us alive.” He should have added that many of these advancements were made in the teeth of furious religious opposition. That omission points to a central problem with Cross Purposes: Rauch would argue that all the Christian bloodletting, intolerance, and authoritarianism throughout history is based on a series of misconceptions about what Christianity really is. His central demand is that American Christians rediscover the true meaning of their faith, which he regards as an anodyne and narrow reading of Jesus Christ’s essential teachings. He reduces millennia of Christian thought and the whole of the Bible to a simple formula (which he first heard from the Catholic theologian and priest James Alison): “Don’t be afraid. Imitate Jesus. Forgive each other.” But Rauch then admits: “I am in no position to judge whether those are the essential elements of Christianity, but they certainly command broad and deep reverence in America’s Christian traditions.”
While this tidy formula does capture some central elements of Jesus’ teachings, it intentionally leaves out other less agreeable (but no less essential) aspects of Christianity. Jesus urged his followers not to be afraid because he would return and they would be granted eternal life in the presence of God. He told his Apostles that their “generation will not pass away” before his return, so they could expect their reward in short order. For those who did not accept his gospel, Jesus had another message: “Depart from me, you cursed, into the eternal fire prepared for the devil and his angels.” Rauch may be correct that “Don’t be afraid” captures one of Jesus’ core messages, but this is a message that only applies to believers—all others should be very afraid. As for the idea of forgiveness, Jesus clearly believed there were some limits—once the “cursed” are consigned to “eternal fire,” redemption appears to be unlikely.
While Rauch admits that he is in “no position to judge … the essential elements of Christianity” (nor am I), any summary of the faith that leaves out Jesus’ most fundamental teaching of all—that his followers must accept the truth of Christianity or face eternal destruction—isn’t in touch with reality. It’s also untenable to present an essentialized version of Christianity that leaves out the entire Old Testament, which is crammed with scriptural warrants for slavery, genocide, misogyny, and persecution on a horrifying scale. There’s a reason Christianity has been such a repressive force throughout history—despite the moderating influence of Jesus, the Bible is chockablock with justifications for the punishment of nonbelievers and religious warfare. Even at its best, Christianity is inherently divisive—the “wages of sin is death,” and there’s no greater sin than the rejection of the Christian God. Because Christianity is a universalist, missionary faith, believers have a responsibility to deliver the gospel to their neighbors. If you believe, as evangelicals do, that millions of souls are at stake, the stripped-down, liberal version of Christianity offered by Rauch may seem like a deep abrogation of responsibility.
“If we wanted to summarize the direction of change in American Christianity over the past century or so,” Rauch writes, “we might do well to use the term secularization.” While Rauch argues that some secularization has been good for Christianity by helping it integrate with the broader culture, he also argues that the “mainline church cast its lot with center-left progressivism and let itself drift, or at least seem to drift, from its scriptural moorings.” He cites the historian Randall Balmer, who observed in 1996 that many Protestants “stand for nothing at all, aside from some vague (albeit noble) pieties like peace, justice, and inclusiveness.” But this is just what Rauch is calling for—the elevation of vague pieties about forgiveness and courage to a central role in how Christianity interacts with the wider culture.
Rauch argues that American evangelicals have become “secularized.” The thrust of this argument is that evangelicals thought they would reshape the GOP in their image when they became more political in the 1980s, but the opposite occurred. For decades, white evangelicals have been one of the largest and most loyal Republican voting blocs, and Rauch observes that this has been a self-reinforcing process: “Republicans self-selected into evangelical religious identities and those identities in turn reinforced the church’s partisanship.” Rauch points out that church attendance and other indicators of religiosity have declined among evangelicals in recent decades. He even argues that evangelical Christianity has become “primarily a political rather than religious identity.”
While there are some signs that evangelicals aren’t quite as committed to their religious practices as they were at the turn of the century, the idea that politics has displaced their faith is a bold overstatement. According to the latest data from Pew, evangelicals remain disproportionately fervent in their beliefs and religious behaviors: 97 percent believe in a soul or spirit beyond the physical body; 72 percent say they pray daily; 82 percent consider the Bible very or extremely important; 84 percent believe in heaven; and 82 percent believe in hell. American history demonstrates that piety and politics don’t cancel each other out. Rauch explains why Christians are tempted to enter the political arena by summarizing several of the arguments political evangelicals often make:
…some might expect conservative Christians to meekly accept the industrial-scale murder of unborn children, the aggressive promotion of LGBT ideology, the left’s intolerance of traditional social mores, and the relentless advance of wokeness in universities, corporations, and the media; but enough is enough. It is both natural and biblical for Christians to stand up for their values.

Rauch challenges these claims and argues the “war on Christianity” frequently invoked by evangelicals is imaginary. The current U.S. Supreme Court is extremely pro-religious freedom, American evangelicals are protected by the First Amendment, most members of Congress are Christians, and surveys show that the vast majority of Americans approve of Christianity. But evangelicals’ perception is what matters—they have felt like their faith is under attack for decades, which has pushed them toward political action. Rauch cites a 1979 conversation between Ronald Reagan and the evangelical Jim Bakker in which the GOP presidential candidate asked: “Do you ever get the feeling sometimes that if we don’t do it now, if we let this be another Sodom and Gomorrah, that maybe we might be the generation that sees Armageddon?”
While it’s fine to call for a gentler and more civically responsible Christianity, Rauch appears to believe that any version of the faith that inflames partisan hatreds or focuses on the culture war is, by definition, un-Christian. But this isn’t the case. When Reagan worried about the United States becoming Sodom and Gomorrah and ushering in Armageddon, he wasn’t “secularizing” Christianity by blending it with worldly politics. He was allowing his religious beliefs to inform his political views, which many Christians regard as morally and spiritually obligatory.
The secularism of Western liberal democracies is a historical aberration. For most of history, the separation of church and state didn’t exist—everyone in society was forced to submit to the same religious strictures, and the punishment for failing to do so was often torture and death. One reason for this history of state-sanctioned dogma and repression is that eschatology is central to Christianity. The idea that certain actions on earth will lead to either eternal reward or punishment is a powerful force multiplier in human affairs, which is one of the reasons the European wars of religion were so bloody and why the role of religion in many other conflicts around the world has been to increase the level of tribal hatred on both sides. Modern religion-infused politics is just a return to the historical norm.
Trump: God’s Wrecking Ball

Then there is President Donald Trump. “Absolutely nothing about secular liberalism,” Rauch writes, “required white evangelicals to embrace the likes of Donald Trump.” If there’s one argument in favor of the idea that evangelicals have allowed politics to distort their faith, it’s the overwhelming support President Trump still commands within their ranks. Rauch cites a survey conducted by the Public Religion Research Institute, which reported that evangelicals were suddenly much less concerned about the personal character of elected officials after they threw their weight behind Trump. In 2011, just 30 percent of evangelicals said an “elected official can behave ethically even if they have committed transgressions in their personal life”—a proportion that jumped to 72 percent in October 2016.
There are many reasons evangelicals cite for supporting Trump, from his nomination of pro-life Supreme Court justices who overturned Roe v. Wade to the conviction that he’s an enthusiastic culture warrior who will crush wokeness. Because evangelicals are consumed by the paranoid belief that they’re an embattled group clinging to the margins of the dominant culture, they decided that they could dispense with concerns over character if it meant mobilizing a larger flock and gaining political and cultural influence. Over three-quarters of evangelicals believe the United States is losing its identity and culture, so the idea of making America great again appeals to them. Rauch cites Os Guinness, who described Trump as “God’s wrecking ball stopping America in its tracks [from] the direction it’s going and giving the country a chance to rethink.” But Rauch is right that arguments like this don’t explain the depth of evangelical support for the 45th and 47th president or the fact that “they did not merely support Trump, they adored him.”
“Whatever the predicates,” Rauch writes, “embracing Trump and MAGA was fundamentally a choice and a change.” It’s true that it would have once been difficult to imagine evangelicals supporting a president like Donald Trump. It’s also true, as Rauch contends, that evangelicals now appear to follow “two incommensurable moralities, an absolute one in the personal realm and an instrumental one in the political realm.” But Cross Purposes isn’t just about the hypocrisy and moral bankruptcy of American evangelicals or the post-liberal justifications for Trumpism. Rauch is calling for a revival of public Christianity in America, and the evangelical capitulation to Trump raises questions about the viability of that project.
It’s an inconvenient fact for Rauch’s argument that Christianity can coexist so comfortably with hyper-partisanship and authoritarianism. Rauch insists that evangelical Christianity is the product of a warping process of secularization—the “Church of Fear is more pagan than Christian,” he writes. But as Pew reports, evangelicals are disproportionately likely to attend church, pray daily, believe in the importance of the Bible, and so on. Rauch is in no position to adjudicate who is a true believer and who isn’t (nor is anyone else, me included), and if it’s true that the only real Christianity is the reassuring liberal version he endorses, the vast majority of Christians throughout history were just as “secularized” as today’s evangelicals.
“Mr. Jefferson, Build Up that Wall”

Because Rauch has such an innocuous view of “essential” Christian theology, he believes Christianity doesn’t need to “be anything other than itself” to ensure that Christians keep their commitments to “God and liberal democracy.” If only it were so easy. Despite the steady decline of Christianity in the United States, 63 percent of the adult population still self-reports as Christian—a proportion that has actually stabilized since 2019. In any religious population so large, there will always be significant variation in what people believe and how they express those beliefs in the public square. Christianity doesn’t necessarily lead to certain political positions—the faith has been invoked to support slavery and to oppose it; to justify imperialism and to condemn it; to damn nonbelievers as heretics bound for hell or to embrace everyone as part of a universalist message of redemption. Of course, it would be nice if all Christians adopted Jonathan Rauch’s version of civic theology, but there will always be scriptural warrants for other forms of theology that Rauch believes are corrosive to our civic culture.
According to Pew, Trump’s net favorability rating among American agnostics is just 17 percent, and it falls to 12 percent among atheists. On average, nearly half of American Protestants view Trump favorably—a proportion that falls to 25 percent among the “religiously unaffiliated,” which includes atheists, agnostics, and those who define their religious beliefs as “nothing in particular.” Rauch presents the rise of post-liberal Christianity and the politicization of American evangelicals as examples of secular intrusions of one kind or another. He doesn’t entertain the possibility that his conception of Christianity as conveniently aligned with liberal democracy is a modern, secularized vision that isn’t consistent with how Christianity has historically functioned politically—or with the Bible itself.
It’s a shame that Rauch regards his 2003 essay about the value of secularization as the “dumbest thing I ever wrote.” While there’s nothing wrong with emphasizing the aspects of Christian theology that support liberal democracy, there’s a more effective way to resist post-liberal Christianity, MAGA evangelicalism, and all the other intersections between faith and politics today. Americans who believe that Christianity is untrue and unnecessary for morality should continue to make their case in the public square. Rauch is wrong to argue that Christianity is a load-bearing wall in American democracy. The real load-bearing wall in the United States is the one constructed by Jefferson at the nation’s founding, and which has sustained our liberal democratic culture ever since: the wall of separation between church and state.
When it comes to opinions concerning standardized tests, it seems that most people know for sure that tests are simply terrible. In fact, a recent article published by the National Education Association (NEA) began by saying, “Most of us know that standardized tests are inaccurate, inequitable, and often ineffective at gauging what students actually know.”1 But do they really know that standardized tests are all these bad things? What does the hard evidence suggest? In the same article, the author quoted a first-grade teacher who advocated teaching to each student’s particular learning style—another ill-conceived educational fad2 that, unfortunately, draws as much praise as standardized tests draw damnation.
Indeed, a typical post in even the most prestigious of news outlets3, 4 will make several negative claims about standardized admission tests. In this article, we describe each of those claims and then review what mainstream scientific research has to say about them.
Claim 1: Admission tests are biased against historically disadvantaged racial/ethnic groups.

Response: There are racial/ethnic average group differences in admission test scores, but those differences do not qualify as evidence that the tests are biased.
The claim that admission tests are biased against certain groups is an unwarranted inference based on differences in average test performance among groups.
The differences themselves are not in question. They have persisted for decades despite substantial efforts to ameliorate them.5 As shown in the table above and reviewed more comprehensively elsewhere,6, 7 average group differences appear on just about any test of cognitive performance—even those administered before kindergarten. Gaps in admission test performance among racial groups mirror other achievement gaps (e.g., high school GPA) that also manifest well before high school graduation. (Note: these group differences are differences between the averages—technically, the means—for the respective groups. The full range of scores is found within all the groups, and there is significant overlap between groups.)
Group differences in admission test scores do not mean that the tests are biased. An observed difference does not provide an explanation of the difference, and to presume that a group difference is due to a biased test is to presume an explanation of the difference. As noted recently by scientists Jerry Coyne and Luana Maroja, the existence of group differences on standardized tests is well known; what is not well understood is what causes the disparities: “genetic differences, societal issues such as poverty, past and present racism, cultural differences, poor access to educational opportunities, the interaction between genes and social environments, or a combination of the above.”8 Test bias, then, is just one of many potential factors that could be responsible for group disparities in performance on admission tests. As we will see in addressing Claim 2, psychometricians have a clear empirical method for confirming or disconfirming the existence of test bias and they have failed to find any evidence for its existence. (Psychometrics is that division of psychology concerned with the theory and technique of measurement of cognitive abilities and personality traits.)
Claim 2: Standardized tests do not predict academic outcomes.

Response: Standardized tests do predict academic outcomes, including academic performance and degree completion, and they predict with similar accuracy for all racial/ethnic groups.
The purpose of standardized admission tests is simple: to predict applicants’ future academic performance. Any metric that fails to predict is rendered useless for making admission decisions. The Scholastic Assessment Test (now, simply called the SAT) has predictive validity if it predicts outcomes such as college grade point average (GPA), whether the student returns for the second year (retention), and degree completion. Likewise, the Graduate Record Examination (GRE) has predictive validity if it predicts outcomes such as graduate school GPA, degree completion, and the important real world measure of publications. In practice, predictive validity, for example between SAT scores and college GPA, implies that if you pull two SAT-takers at random off the street, the one who earned a higher score on the SAT is more likely to earn a higher GPA in college (and is less likely to drop out). The predictive utility of standardized tests is solid and well established. In the same way that blood pressure is an important but not perfect predictor of stroke, cognitive test scores are an important but not perfect predictor of academic outcomes. For example, the correlation between SAT scores and college GPA is around .5,9, 10, 11 the correlations between GRE scores and various measures of graduate school performance range between .3 and .4,12 and the correlation between Medical College Admission Test (MCAT) scores and licensing exam scores during medical school is greater than .6.13 Using aggregate rather than individual test scores yields even higher correlations that predict a college’s graduation rate given the ACT/SAT score of its incoming students. Based on 2019 data, the correlations between six-year graduation rate and a college’s 25th percentile ACT or SAT score are between .87 and .90.14
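To make the “two random test-takers” claim concrete, here is a minimal simulation (our illustration, not the researchers’ analysis) of what a correlation of about .5 implies, assuming scores and grades are roughly bivariate normal:

```python
# Sketch: how often does the higher SAT scorer in a random pair also
# earn the higher college GPA, if the score-GPA correlation is ~.5?
import numpy as np

rng = np.random.default_rng(0)
r = 0.5  # assumed SAT-GPA correlation, per the estimates cited above
cov = [[1, r], [r, 1]]
score, gpa = rng.multivariate_normal([0, 0], cov, size=1_000_000).T

# Pair people off at random and check concordance.
s1, s2 = score[0::2], score[1::2]
g1, g2 = gpa[0::2], gpa[1::2]
concordant = np.mean((s1 > s2) == (g1 > g2))
print(f"P(higher scorer also has higher GPA) ~ {concordant:.3f}")  # ~0.667
```

For a bivariate normal with correlation r, this probability works out to 1/2 + arcsin(r)/π: about two-thirds when r = .5, versus a coin flip when r = 0.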
Research confirming the predictive validity of standardized tests is robust and provides a stark contrast to popular claims to the contrary.15, 17, 18 The latter are not based on the results of meta-analyses19, 20 nor on studies conducted by psychometricians.21, 22, 23, 24, 25 Rather, those claims are based on cherry-picked studies that rely on select samples of students who have already been admitted to highly selective programs—partially because of their high test scores—and who therefore have a severely restricted range of test scores. For example, one often-mentioned study26 investigated whether admitted students’ GRE scores predicted PhD completion in STEM programs and found that students with higher scores were not more likely to complete their degree. In another study of students in biomedical graduate programs at Vanderbilt,27 links between GRE scores and academic outcomes were trivial. However, because the samples of students in both studies had a restricted range of GRE scores—all scored well above average28—the results are essentially uninterpretable. This situation is analogous to predicting U.S. men’s likelihood of playing college basketball based on their height, but only including in the sample men who are well above average. If we want to establish the link between men’s height and playing college ball, it is more appropriate to begin with a sample of men who range from 5'1" (well below the mean) to 6'7" (well above the mean) than to begin with a restricted sample of men who are all at least 6'4" (two standard deviations above the mean). In the latter context, what best differentiates those who play college ball versus not is unlikely to be their height—not when they are all quite tall to begin with.
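The range-restriction effect is easy to reproduce in a toy simulation (a sketch under assumed numbers, not the studies’ data): “admit” only applicants who score well above average, and the score-outcome correlation in the admitted sample collapses even though the test predicts strongly in the full pool:

```python
# Sketch: range restriction attenuates correlations. Assume a true
# score-outcome correlation of .5 in the applicant population, then
# keep only those scoring ~1 SD above the mean.
import numpy as np

rng = np.random.default_rng(1)
r = 0.5  # assumed population-level correlation
cov = [[1, r], [r, 1]]
score, outcome = rng.multivariate_normal([0, 0], cov, size=1_000_000).T

full_r = np.corrcoef(score, outcome)[0, 1]
admitted = score > 1.0  # selective admission, as in the cited studies
restricted_r = np.corrcoef(score[admitted], outcome[admitted])[0, 1]

print(f"correlation in full applicant pool: {full_r:.2f}")        # ~0.50
print(f"correlation among admitted only:    {restricted_r:.2f}")  # ~0.25
```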
Given these demonstrated facts about predictive validity, let’s return to the first claim, that admission tests are biased against certain groups. This claim can be evaluated by comparing the predictive validities for each racial or ethnic group. As noted previously, the purpose of standardized admission tests is to predict applicants’ future academic performance. If the tests serve that purpose similarly for all groups, then, by definition, they are not biased. And this is exactly what scientific studies find, time and time again. For example, the SAT is a strong predictor of first year college performance and retention to the second year, and to the same degree (that is, they predict with essentially equal accuracy) for students of varying racial and ethnic groups.29, 30 Thus, regardless of whether individuals are Black, Hispanic, White, or Asian, if they score higher on the SAT, they have a higher probability of doing well in college. Likewise, individuals who score higher on the GRE tend to have higher graduate school GPAs and a higher likelihood of eventual degree attainment; and these correlations manifest similarly across racial/ethnic groups, males and females, academic departments and disciplines, and master’s as well as doctoral programs.31, 32, 33, 34 When differential prediction does occur, it is usually in the direction of slightly overpredicting Black students’ performance (such that Black students perform at a somewhat lower level in college than would be expected based on their test scores).
Claim 3: Standardized tests are just indicators of wealth or access to test preparation courses.

Response: Standardized tests were designed to detect (sometimes untapped) academic potential, which is very useful; and controlling for wealth and privilege does not detract from their utility.
Some who are critical of standardized tests say that their very existence is racist. That argument is not borne out by the history and expansion of the SAT. One of the long-standing purposes of the SAT has been to lessen the use of legacy admissions (set-asides for the progeny of wealthy donors to the college or university) and thereby to draw college students from more walks of life than elite high schools of the East Coast.35 Standardized tests have a long history of spotting “diamonds in the rough”—underprivileged youths of any race or ethnic group whose potential has gone unnoticed or who have under-performed in high school (for any number of potential reasons, including intellectual boredom). Notably, comparisons of Black and White students with similar 12th grade test scores show that Black students are more likely than White students to complete college.36 And although most of us think of the SAT and comparable American College Test (ACT) as tests taken by high school juniors and seniors, these tests have a very successful history of identifying intellectual potential among middle-schoolers37 and predicting their subsequent educational and career accomplishments.38
Students of higher socioeconomic status (SES) do tend to score higher on the SAT and fare somewhat better in college.39 However, this link is not nearly as strong as many people, especially critics of standardized tests, tend to assume—17 percent of the top 10 percent of ACT and SAT scores come from students whose family incomes fall in the bottom 25 percent of the distribution.40 Further, if admission tests were mere “wealth” tests, the association between students’ standardized test scores and performance in college would be negligible once students’ SES is accounted for statistically. Instead, the association between SAT scores and college grades (estimated at .47) is essentially unchanged (moving only to .44) after statistically controlling for SES.41, 42
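The “wealth test” claim can also be checked with the standard partial-correlation formula. In the sketch below, only the .47 figure comes from the research cited above; the two SES correlations are illustrative values we assume for the worked example:

```python
# Sketch: if the SAT were a mere wealth proxy, statistically removing
# SES should gut the SAT-GPA correlation. With plausible inputs, it
# barely moves.
from math import sqrt

r_sat_gpa = 0.47  # cited estimate
r_sat_ses = 0.40  # assumed for illustration
r_gpa_ses = 0.20  # assumed for illustration

# Standard first-order partial correlation of SAT and GPA given SES.
partial = (r_sat_gpa - r_sat_ses * r_gpa_ses) / sqrt(
    (1 - r_sat_ses**2) * (1 - r_gpa_ses**2)
)
print(f"SAT-GPA correlation controlling for SES: {partial:.2f}")  # ~0.43
```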
A related common criticism of standardized tests is that higher SES students have better access to special test preparation programs and specific coaching services that advertise their potential to raise students’ test scores. The findings from systematic research, however, are clear: test preparation programs, including semester-long, weekly, in-person structured sessions with homework assignments,43 yield limited gains, and this is the case for the ACT, SAT, GRE, and LSAT.44, 45, 46, 47 Average gains are small—approximately one-tenth to one-fifth of a standard deviation. Moreover, free test preparation materials are readily available at libraries and online; and for tests such as the SAT and ACT, many high schools now provide, and often require, free in-class test preparation sessions during the year leading up to the test.
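To put those effect sizes in familiar units, here is a back-of-envelope conversion; the standard deviations are rough approximations we supply for illustration, not figures from the cited studies:

```python
# Sketch: convert the reported 0.1-0.2 SD coaching gains into points,
# using approximate score SDs (assumed values, for illustration only).
score_sds = {"SAT total": 200, "ACT composite": 6}
for test, sd in score_sds.items():
    low, high = 0.1 * sd, 0.2 * sd
    print(f"{test} (SD ~ {sd}): coaching gain ~ {low:g}-{high:g} points")
```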
Claim 4: Admission decisions are fairer without standardized tests.

Response: The admissions process will be less useful, and more unfair, if standardized tests are not used.
According to the fairtest.org website, in 2019, before the pandemic, just over 1,000 colleges were test-optional. Today, there are over 1,800. In 2022–2023, only 43 percent of applicants submitted ACT/SAT scores, compared to 75 percent in 2019–2020.48 Currently, there are over 80 colleges that do not consider ACT/SAT scores in the admissions process even if an applicant submits them. These colleges are using a test-free or test-blind admissions policy. The same trend is occurring for the use of the GRE among graduate programs.49
The movement away from admission tests began before the COVID-19 pandemic but was accelerated by it, and there are multiple reasons why so many colleges and universities are remaining test-optional or test-free. First, very small colleges (and programs) have taken enrollment hits and suffered financially. By eliminating the tests, they hope to attract more applicants and thereby enroll more students. Once a few schools go test-optional or test-free, other schools feel they have to as well in order to be competitive in attracting applicants. Second, larger, less-selective schools (and programs) can similarly benefit from relaxed admission standards by enrolling more students, which, in turn, benefits their bottom line. Both types of schools also increase their percentages of minority student enrollment. It looks good to their constituents that they are enrolling young people from historically underrepresented groups and giving them a chance at success in later life. Highly selective schools also want a diverse student body but, similar to the previously mentioned schools, will not see much of a change in minority graduation rates simply by lowering admission standards if they also maintain their classroom academic standards. They will get more applicants, but they are still limited by the number of students they can serve. Rejection rates increase (due to more applicants) and other metrics become more important in identifying which students can succeed in a highly competitive academic environment.
There are multiple concerns with not including admission tests as a metric to identify students’ potential for succeeding in college and advanced degree programs, particularly those programs that are highly competitive. First, the admissions process will be less useful. Other metrics, with the exception of high school GPA as a solid predictor of first-year grades in college, have lower predictive validity than tests such as the SAT. For example, letters of recommendation are generally considered nearly as important as test scores and prior grades, yet letters of recommendation are infamously unreliable—there is more agreement between two letters about two different applicants from the same letter-writer than there is between two letters about the same applicant from two different letter-writers.50 (Tip to applicants—make sure you ask the right person to write your recommendation). Moreover, letters of recommendation are weak predictors of subsequent performance. The validity of letters of recommendation as a predictor of college GPA hovers around .3; and although letters of recommendation are ubiquitous in applications for entry to advanced degree programs, their predictive validity in that context is even weaker.51 More importantly, White and Asian students typically get more positive letters of recommendation than students from underrepresented groups.52 For colleges that want a more diverse student body, placing more emphasis on such admission metrics that also reveal race differences will not help.
This brings us to our second concern. Because race differences exist in most metrics that admission officers would consider, getting rid of admission test scores will not solve any problems. For example, race differences in performance on Advanced Placement (AP) course exams, now used as an indicator of college readiness, are substantial. In 2017, just 30 percent of Black students’ AP exams earned a qualifying score compared to more than 60 percent of Asian and White students’ exams.53 Similar disparities exist for high school GPA; in 2009, Black students averaged 2.69, whereas White students averaged 3.09,54 even with grade inflation across U.S. high schools.55, 56 Finally, as mentioned previously, race differences even exist in the very subjective letters of recommendation submitted for college admission.57
Without the capacity to rely on a standard, objective metric such as an admission test score, some admissions committee members may rely on subjective factors, which will only exacerbate any disparate representation of students who come from lower-income families or historically underrepresented racial and ethnic groups. For example, in the absence of standardized test scores, admissions committee members may give more attention to the name and reputation of students’ high school, or, in the case of graduate admissions, the name recognition of their undergraduate research mentor and university. Admissions committees for advanced degree programs may be forced to pay greater attention to students’ research experience and personal statements, which are unfortunately susceptible to a variety of issues, not the least being that students of high socioeconomic backgrounds may have more time to invest in gaining research experience, as well as the resources to pay for “assistance” in preparing a well-written and edited personal statement.58
So why continue to shoot the messenger?

If scientists were to find that a medical condition is more common in one group than in another, they would not automatically presume the diagnostic test is invalid or biased. As one example, during the pandemic, COVID-19 infection rates were higher among Black and Hispanic Americans compared to White and Asian Americans. Scientists did not shoot the messenger or engage in ad hominem attacks by claiming that the very existence of COVID tests or support for their continued use is racist.
Sadly, however, that is not the case with standardized tests of college or graduate readiness, which have been attacked for decades,59 arguably because they reflect an inconvenient, uncomfortable, and persistent truth in our society: There are group differences in test performance, and because the tests predict important life outcomes, the group differences in test scores forecast group differences in those life outcomes.
The attack on testing is likely rooted in a well-intentioned concern that the social consequences of test use are inconsistent with our social values of equality.60 That is, there is a repeated and illogical rejection of what “is” in favor of what educators feel “ought” to be.61 However, as we have seen in addressing misconceptions about admission tests, removing tests from the process is not going to address existing inequities; if anything, it promises to exacerbate them by denying the existence of actual performance gaps. If we are going to move forward on a path that promises to address current inequities, we can best do so by assessing as accurately as possible each individual to provide opportunities and interventions that coincide with that individual’s unique constellation of abilities, skills, and preferences.62, 63
About 30-40% of the food we grow ends up wasted. This is a massive inefficiency in the food system. It occurs at every level, from the farm to the end user, and for a variety of reasons. This translates to enough food worldwide to feed 1.6 billion people. We also have to consider the energy that goes into growing, transporting, and disposing of this wasted food. Not all uneaten food winds up in landfills. About 30% of the food fed to animals is food waste. Some food waste ends up in compost which is used as fertilizer. This is still inefficient, but at least it is recycled.
There is a huge opportunity for increased efficiency here, one that can save money, reduce energy demand, reduce the carbon footprint of our food infrastructure, and reduce the land necessary to meet our nutritional needs. Increased efficiency will be critical as our population grows (it is estimated to likely peak at about 10 billion people). But there is no one cause of food waste, and therefore there is no one solution. It will take a concerted effort in many areas to minimize food waste, and make the best use of the food that does not get eaten by people.
One method is to slow food spoilage. The longer food lasts after it has been harvested, the less likely it is to be wasted due to spoilage. Delaying spoilage also makes it easier to get food from the farm to the consumer, because there is more time for transport. And delayed spoilage, if sufficient, may reduce dependence on the cold chain – an expensive and energy-intensive process by which food must be maintained in refrigerated conditions for its entire life from the farm until used by the consumer.
A recent study explores one method for delaying spoilage – injecting small amounts of melatonin into plants through silk microneedles. The melatonin regulates the plant’s stress response and slows spoilage. In this study the researchers looked at pak choy. The treatment extended shelf-life (the time during which produce can be sold) from 4 days to 8 without refrigeration, and from 15 days to 25 with refrigeration. This was a lab proof-of-concept, and so the process would need to be industrialized and made cost-effective enough to be viable. It also would not necessarily be needed in every situation, but could be used in areas where a cold chain is very difficult or expensive, or where transportation is slow. This could therefore not only reduce waste, but improve food availability in challenging areas.
Perhaps the most effective way to extend shelf life is through irradiation, a proven and cost-effective method. This exposes food to either gamma rays (from cobalt-60 sources), electron beams, or x-rays, killing most microorganisms and delaying ripening or sprouting. This is completely safe – the resulting food is not radioactive. The radiation just passes through it. There is no significant difference in nutritional value and only subtle changes to taste (comparable to the effect of pasteurization on milk). The effectiveness depends on the food item being irradiated – fresh produce may last for an additional week, meat for an additional month, and dried goods for months or even years. This process not only reduces food waste and reliance on the cold chain, it reduces foodborne illness as well.
The main limitation of irradiation is public acceptance. Studies show that between 40% and 50% of people would accept irradiated food, but this number increases to 80-90% with education. In the US irradiated food cannot be certified organic – yet another perfectly safe technology opposed by the counterproductive organic lobby. Part of the problem is mandated labeling that mostly scares rather than informs consumers.
These same problems, of course, exist for another way to extend shelf-life – genetic engineering. There is already approval for GMO apples, bananas, strawberries, tomatoes, and potatoes with extended shelf life. GMO produce is perfectly safe, something I have written about extensively. All of the tropes spread by the anti-GMO / organic lobby are false or grossly misleading. Meanwhile this technology can dramatically increase the efficiency of our food infrastructure, which is the best way to limit the environmental footprint of our food system. It is ironic that a group that claims to be interested in helping the environment – organic farmers and consumers – is directly harming it, and represents one of the greatest threats to the environment. By limiting the use of GMOs they are effectively increasing land use for agriculture (which is the biggest negative effect agriculture has on the environment) and blocking the most effective methods to limit food waste.
They argue that the point of opposing GMOs is to limit pesticide use, but this is false on two main levels. First, GMO technology is not just about making pesticide-tolerant cultivars – that is one application. It makes no sense to oppose a technology because you object to one specific application. But there is also no evidence that pesticide-tolerant GMOs increase overall pesticide use. They increase the use of the specific pesticide to which the plants are tolerant, but decrease the use of usually more toxic pesticides. Also, some GMOs have decreased pesticide use by creating plants that are inherently pest resistant. Further, organic farmers do use pesticides – just “natural” ones that we cannot assume are safe, and that are generally less effective, and therefore have to be used more frequently and in larger amounts. This is what happens when you substitute logic and evidence with ideology (such as the appeal-to-nature fallacy).
Reducing food waste may not be sexy, but this is an important area that deserves our attention. It is a huge opportunity to increase efficiency, reduce disease, improve nutrition, and decrease the environmental footprint of agriculture.
The post Preserving Food first appeared on NeuroLogica Blog.
When people think of fashion, they often picture runway shows, luxury brands, pricey handbags, or the latest trends among teens and young adults. Fashion can be elite and expensive or cheap and fleeting—a statement made through clothing, hairstyles, or even body modifications. Regardless of gender, fashion is frequently viewed as a way to signal income, social status, group affiliation, personal taste, or even to attract a partner. But why does fashion serve these purposes, and where do these associations come from? An evolutionary perspective offers surprising insights into the role of fashion in signaling status and sexual attraction.
The adaptive nature of ornamentation has long been admired and studied in a wealth of nonhuman species. Most examples are ornaments the animals grow themselves.1 Consider the peacock’s tail, a sexually selected trait present only in males.2 Peahens are attracted to males with the largest and most symmetrical tails.
The ability of males to grow a large and symmetric tail is related to their overall fitness (the ability to pass their genes into the next generation), so females that mate with them will have better quality offspring. Studies have shown that altering the length and symmetry of peacock tails influences mating success—shorter tails lead to fewer mating opportunities for the males. Antlers are primarily found on male members of the Cervidae family, which includes elk, deer, moose, and caribou (the one species in which the females also grow antlers).3 Antlers, unlike horns, are shed and regrown every year. They are used as weapons, as symbols of sexual prowess or status, and as tools to dig in the snow for food. Antlers increase in size until males reach maturity, and grow larger with better nutrition, higher testosterone levels, and better health or absence of disease during growth. The size of a male’s antlers is also influenced by genetics, and females prefer to mate with males with larger antlers over smaller ones (much as in peacocks).4, 5
In many species, exaggerated male structures like tails, antlers, bright coloration, and sheer size can serve as a weapon in intrasexual competition and as an ornament to signal genetic quality and thereby promote female choice. As a result, much attention has been focused on male ornamentation in nonhuman animals and what it indicates.6 Moreover, males of various species add outside materials to their bodies, nests, and environments specifically to attract mates. Consider the caddisfly, the bower bird, and even the decorator crab; all use decoration to attract females.7 Interestingly, in what are often referred to as sex role-reversed species, such as the pipefish,8 it is the females who are more competitive for mates and are more highly ornamented. But what about humans? Has ornamentation or fashion in humans also been shaped by sexual selection?
Humans have a fascination with fashion, as best summed up by the psychologist George Sproles:9 “Psychologists speak of fashion as the seeking of individuality; sociologists see class competition and social conformity to norms of dress; economists see a pursuit of the scarce; aestheticians view the artistic components and ideals of beauty; historians offer evolutionary explanations for changes in design. Literally hundreds of viewpoints unfold, from a literature more immense than for any phenomenon of consumer behavior.” To be fair, humans do not have “natural” ornaments like tails or antlers to display their quality. They also do not have much in the way of fur, armor, or feathers to protect their bodies or to regulate temperature, so “adornment” in the form of clothing was necessary for survival. However, humans have spent millennia fashioning and refashioning what they wear, not just according to climate or condition, but for status, sex, and aesthetics.
If fashion has been such a large part of human history with deep evolutionary roots, why do so many trends, preferences, and standards fluctuate across cultures and time? This is because fashion is a display of status as well as mating appeal. Many human preferences are influenced by context. For example, male preferences for women’s body size and weight shift with resource availability; in populations with a significant history of food shortages, larger or obese women are prized. Larger women are displaying that they have, or can acquire, resources that others cannot and have sufficient bodily resources for reproduction.10 When resources are historically abundant, men prefer thinner women; in this context, these women display that they can acquire higher-quality nutrition and have time or resources to keep a fit, youthful figure. When tan bodies indicated working outside, and therefore lower standing, pale skin was preferred. When some societies shifted to tan bodies reflecting a life of resources and leisure, they gave tanning prestige, and it became “fashionable.”11
The shifts in what is fashionable can be attributed to these environmental changes, but one principle remains constant: if it displays status (social, financial, or sexual), it is preferred.12 A good example of this would be jewelry, which shifts with fashion trends—whether gold or silver is in this season, or whether rose gold is passé. However, if the appeal of jewelry were just aesthetic—to be shiny or pretty—people would not care whether the jewels were real and expensive or cheap “costume” jewelry. They do care, because expense indicates greater wealth and status. So much so that people often comment on the authenticity or the size (and therefore cost) of jewels, such as the size of diamonds in engagement rings.13
Fashion for Sexual Display
It would be surprising if fashion and how humans choose to ornament themselves were not influenced by sexual selection. Humans show a number of traits that, across other species, are associated with sexual selection, including dimorphism in physical size and aggression, delayed sexual maturity in males, and greater male variation in reproductive success (defined as the number of offspring).14 Men typically choose clothing that emphasizes the breadth of their shoulders and sometimes adds to their height through shoes with lifts or heels. In many modern Western populations, men also spend significant time crafting their body shape by weightlifting to attain that triangle-shaped upper body without the benefit of shoulder pads or other deceptive tailoring signals. These are all traits that females have been shown to value in choosing a mate.15
Examining artistic depictions of bodies provides particular insights into human preferences, as these figures are not limited by biology and can be as exaggerated as the artist wants. We can also see how the population reacts to these figures in terms of popularity and artistic trends. The triangular masculine body shape has been historically exaggerated in art and among fictional heroes, and this feature continues today as comic books and graphic artists create extreme triangular torsos and film superhero costumes with tight waists and padded shoulders and arms. These costumes are not new and do not vary a great deal. They mimic the costume of warriors, soldiers, and other figures of authority or dominance. As cultural scholar Friedrich Weltzien writes, “The superhero costume is an imitation of the historical models of the warrior, the classic domain of heroic manhood.”16
Indeed, military personnel and heroes share behaviors and purposes (detecting threats, fighting adversaries, protecting communities, and achieving status in hierarchies). These costumes act as physical markers and are used to display dominance in size, muscularity, and markers of testosterone. Research has found that comic book men have shoulder-to-waist ratios (the triangular torso) and upper body muscularity almost twice that of real-life young men, and that Marvel comic book heroes in particular are more triangular and muscular than championship body builders. What is remarkable is that even with imaginary bodies, male comic book hero “suits” have several features that, not coincidentally, exaggerate markers of testosterone and signal dominance and strength. Even more triangular torsos are created by padded shoulders and accents (capes, epaulets) and flat stomachs (tight costumes with belts, abdominal accents) with chest pieces that have triangular shapes or insignia, large legs and footwear (boots, holsters), and helmets and other face protection that create angular jawlines.17
The appearance of a tall, strong, healthy masculine body shape is often weighted strongly by women in their judgments of men. There is also an interaction between sex appeal and status. Women choose these men in part because the men’s appearance affects how other men treat them. Men who appear more masculine and dominant elevate their status among men, which makes them more attractive to women.18 Men’s choice of clothing and jewelry or other methods of adornment can not only emphasize physical traits but also convey information about status and resources that are valued by the opposite sex for what they may contribute to offspring success. Some clothing brands (or jewelry) are more expensive and are associated with more wealth, and so are likely to attract the attention of the opposite sex; think of brand logos, expensive watches, or even the car logo on a keychain as indicators of wealth.19
Female fashion also shows indications of being influenced by its ability to signal mate value or enhance it, sometimes deceptively. In many mammals, female red ornamentation is a sexual signal designed to attract mates.20 Experimental studies of human females suggest that they are more likely to choose red clothing when interacting with an attractive man than an attractive woman;21 the suggestion being that red coloration can serve a sexual signaling function in humans as well as other primates. Red dyes in clothing and cosmetics have been extremely popular over centuries, notably cochineal, madder, and rubia. In fact, the earliest documented dyed thread was red.22
One of the primary attributes that women have accentuated throughout time is their waist-to-hip ratio, a result of estrogen directing fat deposition23—a signal of reproductive viability. The specific male preferences regarding waist-to-hip ratio have been documented for decades.24 But is this signal, and its amplification, really a global phenomenon? It is easy to give Western examples of waist minimization and hip amplification—corsets, hoop skirts, bustles, and especially panniers,25 or fake hips that can make a woman as wide as a couch. Even before these, there was the “bum roll”—rolled up fabric attached to a belt to create a larger bulge over the buttocks.
Outside of Western cultures, one can find a variety of “wrappers” (le pagne in Francophone African cultures), yards of fabric wrapped around the hips and other parts of the body to accentuate and amplify the hips.26 Not surprisingly, these are also a show of status as the quality of the fabric is prioritized and displayed.
Just as with men, this specific attribute is wildly exaggerated in fictional depictions of women, from ancient statues to contemporary comic, film, and video game characters. One study concluded that “when limitations imposed by biology are removed, preferred waist sizes become impossibly small.”27 Comic book heroines are drawn with skintight costumes and exaggerated waist-to-hip ratios. They have smaller waists and wider hips than typical humans by far; the average waist-to-hip ratio of a comic book woman was smaller than the minimum waist-to-hip ratio of real women in the U.S. Heroine costumes further accentuate this already extreme curve by use of small belts or sashes, lines, and color changes. Costumes are either skintight or show skin (or both), with cutouts on the arms, thighs, midriff, and in particular, on the chest to show cleavage. The irony of battle uniforms that serve no protective purpose has been pointed out many times in cultural studies.28
Another feminine feature that plays a role in fashion is leg length. Various artistic depictions of the human body throughout history show that while the ideal leg length in women has increased over time, the preference for male leg length has not shifted. This increase appears to have emerged during the Renaissance, which may be due to increases in food security and health during that time. As with many physical preferences in humans, leg length can be an indicator of health, particularly in cases of malnutrition or illness during development. This is another important reminder that preferences are shaped by resources, and consistently shift toward features that display status. What is the ideal leg length? One study found that for a woman 170 cm (5 feet 7 inches) tall, the majority favored a leg length 6 cm (2.36 inches) longer than average, a difference that corresponds to the average height of high-heeled shoes.29 You can probably see where this is going: sexual attractiveness ratings of legs correlate with perceived leg length, and legs are perceived as longer with high-heeled shoes. It should come as no surprise that women may accentuate or elongate their legs with high heels.
High-heeled shoes were not originally the domain of women, as they are thought to have originated in Western Asia prior to the 16th century in tandem with male military dress and equestrianism. The trend spread to Europe, with both sexes wearing heightened heels by the mid-17th century.30 They have remained present in men’s fashion in the form of shoes for rockstars and entertainers (e.g., Elton John), and boots worn by cowboys and motorcyclists. However, these heels are either short or hidden as lifts to make the men appear taller. By the 18th century, high heels became worn primarily by women, particularly as societies redefined fashion as frivolous and feminine.
As one might expect, high heels do more than elongate legs and increase height. High heels change the shape of the body and how it moves. Women wearing heels increase their lumbar curvature and exaggerate their hip rotation, breasts, and buttocks, making their body curvier. As supermodel Veronica Webb put it, “Heels put your ass on a pedestal.” When women walk in heels, they must take smaller steps, utilize greater pelvic rotation, and have greater pelvic tilt. All of these changes result in greater attractiveness ratings. Wearing high heels also denotes status—high-heeled shoes are typically more expensive than flat shoes, and women who wear them sustain serious damage if their occupations require a lot of labor. Therefore, women who wear heels appear to be in positions where they do less labor and have more resources. Research has asked this question directly, and both men and women view women in high heels as being of higher status than women wearing flat shoes.31
At this point, it’s hardly surprising to learn that, compared to actual humans, comic book women are depicted with longer legs that align with peak preferences for leg length in several cultures, while men are shown with legs of average length. Women are also far more often drawn in heels or on tiptoe, regardless of context. Women are even drawn on tiptoe when barefoot, in costume stocking feet, and even when wearing other types of shoes or boots. This further elongates their already longer legs.32
Fashion as Status Signaling
Social status, as previously mentioned in terms of traits valued by the opposite sex, is also often displayed through fashion in ways relevant to within-sex status signaling, particularly when it comes to accessories. Men's fashion choices that indicate masculinity and dominance include preferences for expensive cars and watches—aspects of luxury consumption.33 Women not only emphasize their own beauty but also carry bags, for example, that are brand conscious, conveying information about their wealth and perhaps their preferences for specific causes, as in the popularity of animal-welfare-friendly high-end brands such as Stella McCartney.
However, unlike high-end cars, which signal status to possible mates as well as to status competitors, such accessories send signals from women to other women of which men are largely unaware. Women are highly attuned to the brands and costs of women’s handbags, while most men do not seem to recognize their signaling value.34 While luxury products can boost self-esteem, express identity, and signal status, men tend to use conspicuous luxury products to attract mates, while women may use such products to deter female rivals. Some studies have shown that activating mate guarding motives prompts women to seek and display lavish possessions, such as clothes, handbags, and jewelry, and that women use pricey possessions to signal that their romantic partner is especially devoted to them.35
Fashion can also signify membership in powerful groups, such as the government, the military, or nobility. It can also signify the person’s role in society in other ways, for example, whether someone is married, engaged, or betrothed (by their own volition or by family). There are several changes in fashion that are specific to the various events surrounding a wedding, each with its own cultural differences and symbolism, and far too many to review here.36 Several researchers have explored the prominence and the symbolic value of a bride’s traditional dress in different societies.37 However, these signifiers are not just specific to the wedding rituals; what these women wear as wives (and widows) is culturally dictated for the rest of their lives.
These types of salient markers of female marital status are present in a number of societies. For example, not only are Latvian brides no longer allowed to wear a crown, but they may be given an apron and other displays (such as housekeeping tools) that indicate that they are now wives. In other cultures, girls will wear veils from puberty to their wedding day, and the removal of the veil is an obvious display of the change in status. Some cultures symbolically throw away the bride’s old clothes, as she is no longer that person; she is now the wife of her husband. In Turkey, married Pomak women cut locks of hair on either side of their head, and their clothing is much simpler in style than the highly decorated daily clothing of unmarried Pomak women. However, wives do wear more expensive necklaces—gold or pearls rather than beads.38 Notice that this is not only a signal of marital status, but also a signal of the groom’s wealth.
Meanwhile, for men, the vast majority of cultures possess only one marker for married men—a wedding ring—which is also expected of women. Why are there more visible markers of marital status for women than for men? This seems likely to be a product of the elevated sexual jealousy and resulting proprietariness employed by men to prevent cuckoldry—what evolutionary psychologists call mate guarding. Salient markers of marital status for women show other men that she is attached to, or the property of, her husband. If the term “property” seems like an exaggeration, cultures have been documented to have rituals specifically for the purpose of transferring ownership of the bride from her parents to her husband, with the accompanying changes in appearance to declare that transfer to the public.39
Tattoos as Signals of Mate Quality, Social Status, and Group Membership
Body modifications, such as tattoos and piercings, have become increasingly prevalent in recent years in Western culture, with rates in the United States approaching 25 percent.40 Historically, tattooing and piercing were frequently used as an indicator of social status41 or group membership, for example, among criminals, gang members, sailors, and soldiers. While this corresponds with all of the other types of adornment we have reviewed, other researchers have suggested that these explanations don’t fully illuminate why individuals should engage in such costly and painful behavior when other methods of affiliation, such as team colors, clothing, or jewelry are less of a health risk. Tattoos and piercings are not only painful but entail health risks, including infections and disease transmission, such as hepatitis and HIV.42 One could suggest that the permanence of body modifications is a marker of commitment or significance, but an evolutionary perspective suggests an additional level of explanation: that people who choose to tattoo and pierce their bodies are doing so not only to show their bravery and toughness, but also because it serves as an advertisement or signal of their genetic quality. Good genetic quality and immunocompetence may be signaled by the presence and appearance of tattoos and piercings in much the same way as ornamentation, much as the peacock’s tail (in its size and symmetry), serves as a signal of male health and genetic quality.43
Even with tattoos, the same areas of the body are accentuated as we see in clothing.44 Researchers have reported sex differences in the placement of tattoos such that their respective secondary sexual characteristics were highlighted, with males concentrating on their upper bodies drawing attention to the shoulder-to-hip ratio. Females had more abdominal and backside tattoos, drawing attention to the waist-to-hip ratio. The emphasis seems to be on areas highlighting fertility in females and physical strength in males, essential features of physical attractiveness.45 In fact, female body modification in the abdominal region was most common in geographic regions with higher pathogen load, again suggesting that such practices may serve to signal physical and reproductive health.46 Recent work has also indicated social norms influence how tattoos affect perceptions of beauty such that younger people and ones who themselves are tattooed see them as enhancing attractiveness.47
Studies on humans and nonhuman animals have indicated that low fluctuating asymmetry (that is, greater overall symmetry in body parts) is related to developmental stability and is a likely indicator of genetic quality.48 Fluctuating asymmetry (FA), which is defined as deviation from perfect bilateral symmetry, is thought to reflect an organism’s relative inability to maintain stable morphological development in the face of environmental and genetic stressors. One study found49 FA to be lower (that is, the symmetry was greater) in those with tattoos or piercings. This effect was much stronger in males than in females, suggesting that those with greater developmental stability were able to tolerate the costs of tattoos or piercings, and that these serve as an honest signal of biological quality, at least in the men in this study.50 Researchers have also tested the “human canvas hypothesis,” which suggests that tattooing and piercing are hard to fake advertisements of fitness or social affiliations and the “upping the ante hypothesis,” which suggests tattooing is a costly honest signal of good genes in that injury to the body can demonstrate how well it heals. In short, tattoos and piercings not only display a group affiliation, but also that the owner possesses higher genetic quality and health, and these tattoos are placed on areas that accentuate “sexy” body parts. Thus, we have come full circle with humans: Just as other species like peacocks, people show off ornamentation to display their quality as mates and access to resources. Even taking into account cultural differences and generational shifts, the primary message remains.
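For readers curious how fluctuating asymmetry is actually quantified in studies like these, a common composite index simply averages the relative left-right differences across several paired traits. The sketch below is my own illustration of that formula; the traits and measurements are made up:

```python
# A common composite index: FA = mean over traits of |L - R| / ((L + R) / 2),
# where L and R are left and right measurements. Lower values = more symmetric.
# The traits and numbers below are invented for illustration.

def fluctuating_asymmetry(paired_traits: list[tuple[float, float]]) -> float:
    """Composite relative fluctuating asymmetry across (left, right) pairs."""
    return sum(abs(left - right) / ((left + right) / 2)
               for left, right in paired_traits) / len(paired_traits)

# e.g., ear height, wrist width, ankle width in millimeters
measurements = [(61.0, 60.2), (55.1, 55.9), (70.3, 70.0)]
print(f"FA index: {fluctuating_asymmetry(measurements):.4f}")  # ~0.0106
```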
Social Factors in Human Ornamentation
In addition to all of the evidence we have presented here, ornamentation is not just about mating or even signaling social status. Humans also signal group membership or allegiance through fashion. Modern sports fans show their allegiance to their sports teams by various shirts, hats, and other types of clothing—think the “cheese head” hats worn by Green Bay Packers fans at the team’s NFL home games. Fans of various musical performers, from Kid Rock to Taylor Swift, display their loyalty with concert shirts and other apparel. Typically, they also feel an automatic sense of connection when they encounter others sporting similar items. As discussed, tattoos can be seen as signals of genetic quality or health, and over the last twenty or so years tattoos have also increasingly become seen as statements of individuality. And yet, many serious sports fans, for example, have similar tattoos representing their favorite teams. Marvel fans sport Iron Man and Captain America illustrations on their skin, while fans of the television show Supernatural have the anti-possession symbol from the show tattooed on their torso. It may be that in many populations with weak social and family connections, individuals are seeking connection, and adornment is one way of indicating participation in a community or group. You can also see this in terms of political allegiance and the proliferation of Harris-Walz and MAGA-MAHA merchandise during the 2024 election cycle in the United States.
While it is clear that an adaptationist approach to ornamentation can explain many aspects of fashion related to signaling social status (whether honest or not), group membership, or mate quality, much research remains to be done, including more work on which aspects are cross-culturally consistent and which are constrained more by unique cultural factors or the local ecology. Not everything is the product of an adaptation; some aspects of fashion that seem less predictable or may be less enduring are unlikely to be explained by ornamentation and signaling theory because they are not rooted in mating or social motives. That being said, many fashion choices, including our own (for better or worse), make a lot of sense in the light of evolutionary processes. For all the small shifts from generation to generation and across cultures, the main themes remain the same. As Rachel Zoe noted: “Style is a way to say who you are without having to speak.”
What do your fashion choices have to say?
Let’s talk about climate change and life on Earth. Not anthropogenic climate change – but long-term natural changes in the Earth’s environment due to stellar evolution. Eventually, as our sun burns through its fuel, it will go through changes. It will begin to grow, becoming a red giant that will engulf and incinerate the Earth. But long before Earth is a cinder, it will become uninhabitable, a dry hot wasteland. When and how will this happen, and is there anything we or future occupants of Earth can do about it?
Our sun is a main sequence yellow star. The “main sequence” refers to the Hertzsprung-Russell diagram (HR diagram), which maps all stars based on mass, luminosity, temperature, and color. Most stars fall within a band called the main sequence, which is where stars sit while they are fusing hydrogen into helium as their source of energy. More massive stars are brighter and have a color more towards the blue end of the spectrum. They also have a shorter lifespan, because they burn through their fuel faster than lighter stars. Blue stars can burn through their fuel in mere millions of years. Yellow stars, like our own, can last 10 billion years, while red dwarfs can last for hundreds of billions of years or longer.
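To make these numbers concrete, here is a minimal back-of-envelope sketch (my own illustration, not from the original post), using the standard textbook approximations that luminosity scales roughly as L ∝ M^3.5 for sun-like masses and main sequence lifetime as t ∝ M/L, anchored to a roughly 10-billion-year solar lifetime:

```python
# Rough scaling only: t ~ M / L and L ~ M**3.5 give t ~ 10 Gyr * M**-2.5
# (M in solar masses). Both the exponent and the anchor are approximations.

def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Approximate main sequence lifetime in billions of years."""
    return 10.0 * mass_solar ** -2.5

for label, mass in [("blue star (10 solar masses)", 10.0),
                    ("yellow star (1 solar mass)", 1.0),
                    ("red dwarf (0.2 solar masses)", 0.2)]:
    print(f"{label}: roughly {main_sequence_lifetime_gyr(mass):,.2f} billion years")
# -> ~0.03, 10, and ~560 billion years: tens of millions of years for a
#    massive blue star, ~10 Gyr for the sun, hundreds of Gyr for red dwarfs.
```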
Which stars are the best for life? We categorize main sequence stars as blue, white, yellow, orange, and red (this is a continuum, but that is how we humans categorize the colors we see). Interestingly, there are no green stars, which has more to do with human color perception than anything else. Stars at an otherwise “green” temperature have enough blue and red mixed in to appear white to our color perception. The hotter the star, the farther away a planet would have to be to sit in its habitable zone, and that zone can be quite wide. But hotter stars are short-lived. Cooler stars last for a long time but have a small and close-in habitable zone, so close that planets may be tidally locked to their star. Red dwarfs are also relatively unstable and put out a lot of stellar wind, which is unfriendly to atmospheres.
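A similarly rough sketch shows why the habitable zone scales with stellar brightness: if habitability requires approximately Earth-like stellar flux, and flux falls off as L/(4πd²), then the habitable-zone distance grows with the square root of luminosity. This is again my own illustration, with round example luminosities rather than measured values:

```python
import math

# Flux at distance d is L / (4 * pi * d**2); requiring Earth-like flux
# gives d = sqrt(L / L_sun) in astronomical units.

def earthlike_flux_distance_au(luminosity_solar: float) -> float:
    """Distance (in AU) at which a planet receives Earth-like stellar flux."""
    return math.sqrt(luminosity_solar)

for label, lum in [("hot blue star", 1000.0),
                   ("sun-like star", 1.0),
                   ("red dwarf", 0.01)]:
    print(f"{label}: habitable-zone distance ~{earthlike_flux_distance_au(lum):.2f} AU")
# -> ~31.6 AU, 1.0 AU, and ~0.10 AU: dim red dwarfs force planets in so
#    close that tidal locking becomes likely.
```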
So the ideal color for a star, if you want to evolve some life, is probably in the middle – yellow, right where we are. However, some astronomers argue that the optimal temperature may be orange: orange stars can last 15–45 billion years or more, but with a comfortably distant habitable zone. If we are looking for life in our galaxy, then orange stars are probably the best place to look.
What about our humble yellow sun? Our sun is about 4.6 billion years old, with a total lifespan of about 10 billion years. So it might seem as if we have another 5 billion years to go, which is a comfortable chunk of time. While main sequence stars are relatively stable, they do subtly change, and can significantly change toward the end of their life. So the question is – when will our sun change enough to threaten the habitability of the Earth? The 5-billion-year figure is how much longer our sun can burn hydrogen. After that it will start burning helium at its core, and that is when it will start expanding into a red giant. However, we will run into problems long before then. As the sun burns hydrogen and collects helium at its core, it heats up by about 10% every billion years. When will this slow heating spell doom for life on Earth?
There are two other variables to consider. The environment of the Earth depends on three main things – the sun, the orbit of the Earth (and anything else in the solar system that might affect Earth), and conditions on Earth itself (the atmosphere, the biosphere, geology, our magnetic field). When you think about it, having a stable environment for billions of years is pretty amazing.
A recent paper considers the interaction between the slowly warming sun and the biosphere. Using a supercomputer to model what may happen, they conclude:
Our results suggest that the planetary carbonate–silicate cycle will tend to lead to terminally CO2-limited biospheres and rapid atmospheric deoxygenation, emphasizing the need for robust atmospheric biosignatures applicable to weakly oxygenated and anoxic exoplanet atmospheres and highlighting the potential importance of atmospheric organic haze during the terminal stages of planetary habitability.
In other words, the increasing heat will lead to chemical reactions that reduce atmospheric CO2, which in turn will limit oxygen production through photosynthesis. Oxygen levels will crash, making the Earth uninhabitable to anything dependent on CO2 or oxygen. This will happen in about 1 billion years – 4 billion years sooner than our red giant phase. The Earth will continue to heat regardless, eventually burning away all our water and leaving a dry lifeless desert.
Is there anything we can or should do about this? I will leave a deep discussion of “should” to philosophers, and only say keeping Earth habitable to life for as long as possible seems like a good idea to me. Assuming we want this, what can we do? First let me say that I think the question is irrelevant from a practical perspective. Even in a million years, humanity will have changed significantly, definitely technologically, but also probably biologically. In 20 million years or 100 million years, still long before the Earth becomes uninhabitable, other technological species may evolve on Earth. Many things can happen. It’s massively premature to worry about things on that timescale.
I also think it's very likely that long before this becomes an issue humanity will either be extinct, or (hopefully) we will be a multi-planet species. We will likely settle many parts of our own solar system, and eventually travel to the nearest stars. Even still, the future technological inhabitants of Earth may want to preserve its ecosystem for as long as possible.
Assuming we cannot change the sun (barring some ridiculously advanced stellar engineering) we could try to manipulate the other variables. We could, for example, put objects into orbit that will reflect away part of the sun’s light and heat to compensate for its increased output. Another option seems more radical but may be easier, and even necessary – we could slowly move the Earth further from the sun to precisely compensate for the sun’s increased temperature. We could use spacecraft flybys to take some angular momentum from Jupiter and give it to the Earth, pushing it a tiny bit further from the sun. By one calculation, such a flyby would only need to occur once every 6,000 years in order to compensate for the warming of the sun (hat tip to Warwick for sending me this link).
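As a sanity check on that figure, here is a rough estimate (my own arithmetic, not the linked calculation): keeping the sunlight Earth receives constant means its orbital radius must grow as the square root of the sun's luminosity, and with roughly 10% brightening per billion years, the required nudge per 6,000-year interval turns out to be tiny:

```python
# All numbers are rough assumptions for illustration: ~10% luminosity
# increase per billion years, flybys every 6,000 years, and Earth at 1 AU.
# Constant received flux requires a(t) = a0 * sqrt(L(t) / L0), so near the
# present epoch the orbit must drift outward at about a0/2 * (dL/L)/dt.

AU_KM = 1.496e8              # kilometers per astronomical unit
BRIGHTENING_PER_GYR = 0.10   # fractional luminosity increase per Gyr
FLYBY_INTERVAL_YR = 6_000    # assumed spacing between compensating flybys

drift_au_per_gyr = 0.5 * BRIGHTENING_PER_GYR  # for an orbit at 1 AU
drift_per_flyby_km = drift_au_per_gyr * (FLYBY_INTERVAL_YR / 1e9) * AU_KM

print(f"required outward nudge: ~{drift_per_flyby_km:.0f} km per flyby")
# -> ~45 km every 6,000 years: a small enough adjustment that such an
#    infrequent flyby schedule could plausibly keep pace with the sun.
```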
But it seems likely that if we maintain a robust space presence within our solar system over the next billion years, there will be countless Earth flybys by spacecraft. What we will need to do is track all the flybys, and/or their effects, and then calculate a compensatory flyby schedule, which can include moving the Earth slowly further from the sun.
It’s interesting, and daunting, to think about such long time scales. It reminds me of a science-fiction story (I forget which one) in which a tourist planet started to run into the problem of tourists carrying away net mass. Over hundreds and thousands of years, the planet was losing mass. So they had to pass and strictly enforce rules that no visitor could leave with more mass than they came with. If you wanted souvenirs (or even gain a little weight, which people on vacation often do) you had to pack your suitcase with some rocks to leave behind.
It seems like it will not be overly difficult for future Earth inhabitants (whether humans or something else) to keep Earth habitable for the full 5 billion years left in our sun’s main sequence life. So we have that going for us. But seriously, while all this is a fun thought experiment informed by our current scientific knowledge, it is also a reminder of how fragile our ecosystem is, especially when you think long term. We should respect our current stability, and we shouldn’t mess with it casually.
The post End of Life on Earth first appeared on NeuroLogica Blog.
When navigating the modern world with its varied conveniences and modes of leisure, it seems that we humans are completely detached from the harsh environments that our species evolved out of thousands of years ago. Under stress, or in moments of crisis, however, the tools that our minds have evolved to deal with danger or imminent threat become quite apparent. During times like the recent global COVID-19 pandemic, when resources become unpredictably unavailable, we can turn to rather selfishly acquiring large quantities of particular products. From toilet paper rolls to baking flour, perceived essentials are coveted and cached away, hidden from other individuals, reserved for personal use in the future.
During such periods of uncertainty and upheaval, we aim also to construct meaning and a story line from the world rapidly changing around us—one by-product of which is the development of conspiracy theories. While such actions may be frowned upon in today’s society, and can be explained by hardwired behavioral reactions, they also point out the sophisticated cognitive tools that were likely critical to our evolutionary survival, indeed success, namely: recall of specific past events, future planning, the attribution of mental states to other individuals (theory of mind), a strong belief in some source of causation, and an underlying curiosity about the world we live in.
Thankfully, perhaps, we are not the only species with a tendency to cache goods when resources become scarce or when environments are risky—this is a trait we share with over 200 other vertebrates.1 Food-caching behavior is particularly impressive among birds such as the Clark’s nutcracker. This species lives in harsh seasonal environments and can cache tens of thousands of pine seeds within a season. Remarkably, they are able to remember and retrieve the seeds with great accuracy over nine months after storing them.2 The scrub jay, on the other hand, caches a smaller number of more varied items, some of which perish relatively quickly (insects and olives, for example), and must therefore also keep track of the decay rates of different food items, and the passage of time, in order to successfully retrieve edible snacks.3 Are these remarkable behavioral feats potentially underpinned by sophisticated cognitive tools like our own, or can they be explained in terms of simpler, hard-wired behavioral predispositions?
Ethologists and comparative psychologists who study some of the cleverest organisms on the planet have grappled with such questions concerning the nature and origin of intelligence for decades, across a wide variety of different contexts and animal taxa. The comparative study of animal cognition has raised a number of critical questions over the years, including: Are other animals conscious?4 Can they “mentally travel in time” by storing specific memories and imagining the future?5 Are non-human animals able to attribute mental states to other individuals,6 and does curiosity motivate their interaction and exploration of these abstract phenomena?7 Ultimately, what is it about human cognition that sets us apart from other animals, and why? Trying to answer these types of questions is more important than ever. Not only does it give insight into the nature and origins of our own thinking and behavior, tackling these questions can also help us better understand, build, and predict artificial forms of intelligence, which are becoming increasingly embedded in the fabric of society and our daily lives.8
Though comparative cognition is a vast field, researchers are unified by a central challenge: unlocking the secrets of animal minds, which are like black boxes whose contents are neither directly visible nor accessible. Unlike work in human psychology that can partly rely on participants to report their own subjective experiences, research in animal cognition must employ creative behavioral tasks and interventionist approaches in order to test causal hypotheses about mechanisms that underlie behavior. This is the only way to tease apart hardwired responses or simpler forms of associative learning from more complex forms of cognition that could potentially explain behavior in question.9
Take, for example, the remarkable (and often frustrating) ability of ant colonies to identify and efficiently transport food from sparsely scattered patches in the environment to their nests. Research employing mazes has shown that Argentine ants are capable of solving fiendishly difficult transport optimization problems, flexibly finding the shortest path to food sources, even when known routes become blocked off.10 When watching individuals zealously journey out of the nest and back again, in close coordination with one another, it would be reasonable to assume that each ant had an understanding of the transport problem being solved, or that a central organizing force was shaping the behavior of the colony. Yet this feat is an example of self-organizing collective intelligence; a phenomenon that does not require a global controller, or even that the individuals be aware of the nature of the challenge that they are solving together. By adhering to simple, fixed rules of pheromone following and production, individual ants by means of only local interactions can produce complex collective behavior that does not rely upon any sophisticated cognition at all. This example highlights the need to employ carefully crafted experiments to elucidate correctly the true nature of behavioral processes.
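A toy simulation makes this concrete. The sketch below (my own illustration, not a model from the cited research) has ants choose between a short and a long route purely in proportion to pheromone strength; because shorter trips deposit pheromone at a higher rate and evaporation erases stale trails, the colony converges on the shorter route without any individual understanding the problem:

```python
import random

# Minimal self-organization demo: no ant knows which route is shorter.
# Shorter trips deposit more pheromone per unit time, so the short route's
# trail strengthens faster, biasing future choices toward it.

random.seed(42)
pheromone = {"short": 1.0, "long": 1.0}
length = {"short": 1.0, "long": 2.0}  # long route is twice as long

for step in range(200):
    total = pheromone["short"] + pheromone["long"]
    for _ in range(10):  # ten ants leave the nest each step
        route = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[route] += 1.0 / length[route]  # more deposit per trip on short route
    for route in pheromone:
        pheromone[route] *= 0.95  # evaporation forgets stale trails

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"share of pheromone on short route: {share:.2f}")  # approaches 1
```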
Ants efficiently solve complex transport problems, working together through simple rules of pheromone following, showing self-organizing collective intelligence, without needing a leader or central control. (Photo by Ivan Radic, Flickr, CC BY 2.0)
Initially, comparative studies of complex cognition focused primarily on other primate species.11 Their close evolutionary relation to humans means they provide something of a window into the ancestral origins of our sophisticated cognition, and by comparison, the novel idiosyncrasies that characterize human intelligence (although they too have evolved both their bodies and their behavior in response to the selective pressures they have encountered since their split from us and our common ancestor). Nonetheless, it is anthropocentric to assume that complex cognition is exclusive to primates. Indeed, research on primate cognition has generated two influential hypotheses for the evolution of advanced intelligence that are applicable to a wide range of taxa. The Ecological Intelligence Hypothesis suggests that challenges associated with efficiently finding and processing food promote sophisticated cognition,12 while the Social Intelligence Hypothesis argues that activities involved in group living, including the need to cooperate with and potentially deceive others, drive the evolution of sophisticated cognition.13
Over the last three decades increasing evidence has accumulated to show that a similar combination of selective pressures has driven the evolution of comparably complex cognition in other animal groups, notably the corvids.14 This group of birds, which includes crows, jays, ravens, and jackdaws, is capable of remarkable behavioral feats. These include the manufacture and use of tools for specific tasks,15, 16 and even the ability to “count out loud” by producing precise numbers of vocalizations in response to numerical values.17 The discovery of such behaviors points to complex underlying cognition, and given that primates and corvids diverged some 300 million years ago, it also suggests that advanced intelligence evolved independently at least twice within animals as the result of convergent evolutionary pressures.
In order to closely elucidate the nature of intelligence in animals, it is instructive to first identify natural behavior that may reflect complex cognitive processes, especially ones that can also be studied in controlled laboratory conditions. The food caching behavior of birds has proven to be a powerful model through which to investigate the nature of animal intelligence across a range of domains, including recall of past events, future planning, and the ability to attribute mental states to other individuals (“Machiavellian intelligence”). In particular, laboratory studies on scrub jays have leveraged that species’ propensity to cache a variety of perishable foods, but not eat items that have degraded. How do individual birds efficiently recover the hundreds of spatially distinct caches they make daily, given that different food items decay at different rates?
Western Scrub-Jay, Aphelocoma californica (Photo by Martyne Reesman, Oregon Department of Fish and Wildlife, via Wikimedia)
In a notable study published in Nature,18 researchers hypothesized that jays use a flexible form of memory that previously had been thought exclusive to humans—episodic memory. Episodic memory allows us to recall specific events that have occurred in our mind’s eye, and we experience these memories as our own, with a sense that they represent events that have occurred in the past. In the absence of a method to ascertain whether jays subjectively experience memories as we do, the researchers proposed behavioral criteria that would indicate “episodic-like” memory: an ability to retrieve information about “where” a unique event or “episode” took place, “what” occurred during the event, and “when” it happened. To test this, they conducted a series of experiments in which jays were presented with perishable worms that could be cached in trays at one site and non-perishable nuts that could be cached at another. The results of the experiments showed that when given the option to recover caches after a short time, the birds preferred to search for the more desirable, tasty worms, but switched to searching for the less attractive nuts after longer delays, when the worms had decayed. These experiments demonstrated for the first time that a non-human animal can recall the “what-where-when” of specific events in the past using abilities akin to episodic memory in humans.
While birds might rely on recall of specific events to successfully retrieve cached items, the initial act of caching itself is prospective, functioning to provide resources for the future when they might otherwise be scarce. This raises the possibility that non-human animals are capable of future planning, mentally traveling forwards in time to anticipate future needs that differ from present ones. However, caching may also simply be a hardwired behavioral urge, rather than a flexible response that is reliant on learning. To explore this, researchers tested scrub jays using a “planning for breakfast” paradigm.19 Over a period of six days the jays were exposed daily to either a “hungry room” where breakfast was never provided, or a “breakfast room” where food was available in the morning. Otherwise, the jays were provided with powdered (uncacheable) food in a middle room that linked the other two. Then, the birds were offered nuts in the middle room, and the opportunity to cache them in either the hungry or breakfast room. The results showed that the birds spontaneously and strongly preferred to cache the nuts in the hungry room, indicating for the first time that a non-human animal can plan for the future, guiding its behavior based on anticipated future needs independent of its present motivational state.
The examples above demonstrate the ability of birds to “mentally travel in time” and form representations of their own past and future. To recover their caches successfully, however, each individual bird must also pay attention to the other birds who might attempt to steal their caches. To lessen the risk of that happening, individual birds employ a range of strategies to protect their stored food, including caching food behind barriers, out of the sight of other birds, and producing decoy caches that do not contain any edible items. To explore the cognitive processes involved in cache protection behavior researchers allowed scrub jays to cache food when alone, or while being watched by another bird. The caching birds were then provided the opportunity to recover their caches while in private, giving them a chance to re-cache the hidden food items that might be vulnerable to pilfering. Interestingly, not all birds re-cached the items most at risk of being stolen (those cached in front of the conspecific). Only those scrub jays who were experienced pilferers themselves decided to re-cache items that had been watched by another individual.20 The implication is that birds who have been thieves in the past project their experience of stealing onto others, thereby anticipating future stealing of their own caches. In other words, it takes a thief to know one! This experiment therefore raises the possibility that the jays simulate the perspectives of other individuals, suggesting that like humans, they may be able to attribute mental states to others, and therefore have a knowledge of other minds as well as other times.
The approach employed in these studies highlights the utility of exploring behavioral criteria indicative of complex cognitive processes by using carefully controlled experimental procedures. One advantage of this approach is that it is widely applicable, since it relies on externally observable behavior, rather than obscure internal states, and can therefore be used to investigate a diverse range of intelligences. Recently, comparative psychologists have started to apply these techniques to systematically investigate the intelligence of soft-bodied cephalopods—the invertebrate group comprising octopus, cuttlefish, and squid.21 These remarkable animals have captured the imagination of naturalists for hundreds of years, and reports suggest they are capable of highly flexible and sophisticated behaviors. For example, veined octopuses transport coconut shells in which they hide themselves when faced with a threatening predator, raising the possibility that they may be able to plan for the future. Further, males of the giant Australian cuttlefish avoid fights with other males by deceptively changing their appearance to resemble that of females—perhaps they are capable of attributing mental states to other members of their species.
A coconut octopus (Amphioctopus marginatus) hides from threatening predators between a coconut shell and a clam shell. Using its tentacles, it carries the shells, while pulling itself along. Sensing a threat, the octopus clamps itself shut between the shells. (Photo by Nick Hobgood, Wikimedia)
Recently, laboratory experiments with the common cuttlefish have shown that like some birds, apes, and rodents, they are able to recollect “what-where-when” information about past events through episodic-like memory.22 Unlike other species however, episodic memory in cuttlefish does not decline with age, offering exciting opportunities to study resistance to age-related decline in cognition.23 As with food caching among corvids, behavioral experiments with cuttlefish have also revealed prospective, future-oriented behavior: after learning temporal patterns of food availability, cuttlefish learn to forgo immediately available prey items in order to consume more preferred food that only becomes available later.24, 25 Presently, however, it is not clear whether this reflects genuine future planning, which requires individuals to act independently of current needs—and so presents an exciting avenue for future research.
Given the broad applicability of the experimental approach developed in comparative psychology, it is worth considering the utility of experimental paradigms to investigate the behavior of non-organic forms of intelligence. Artificial Neural Networks (ANNs) are becoming increasingly embedded in the way that we work, solve problems, and learn, perhaps best exemplified by the advent of Large Language Models (LLMs), such as ChatGPT, now ubiquitous in content creation and even serving as a source of knowledge.26 It is more important than ever that we develop an understanding of the behavior of these forms of intelligence. Fortunately, decades of research aimed at understanding the minds of animals have provided us with the conceptual tools needed to elucidate the processes underlying artificial behavior, and the means to build forms of artificial intelligence that are more flexible and less biased. Though reports abound of ANNs besting humans in traditionally complex, strategic games such as poker,27 some have argued that these wins are often restricted to very specific domains, and that ANNs are far from displaying the general intelligence of animals, let alone humans.28
Interdisciplinary efforts, however, are helping to close this gap. Inspired by research in cognitive psychology, computer scientists have incorporated an analogue of episodic memory into the architecture of ANNs. Endowed with the ability to compare present environmental variables with those encountered during specific points in the past, ANNs are able to behave much more flexibly.29 Recently, influenced by classic tasks in comparative psychology, psychologists and computer scientists have collaborated to produce a competition testing the relative cognitive abilities of ANNs.30 Dubbed the “Animal-AI Olympics,”31 this competition should help to promote the development of artificial forms of intelligence capable of mirroring the general intelligence displayed by animals, and perhaps one day, humans.
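To illustrate the basic idea (this is my own toy sketch, not the architecture from the cited work), an episodic-memory module can be as simple as storing past state vectors alongside their outcomes and, when acting, retrieving the action from the most similar rewarded episode:

```python
import math

# Toy "episodic memory": store (state, action, reward) episodes, then act
# by recalling the action from the most similar past state that paid off.
# The states, actions, and rewards here are invented for the example.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # list of (state_vector, action, reward) tuples

    def store(self, state, action, reward):
        self.episodes.append((state, action, reward))

    def recall(self, state):
        """Return the action taken in the most similar rewarded past state."""
        rewarded = [ep for ep in self.episodes if ep[2] > 0]
        if not rewarded:
            return None  # no useful episode to draw on
        closest = min(rewarded, key=lambda ep: math.dist(ep[0], state))
        return closest[1]

memory = EpisodicMemory()
memory.store([0.1, 0.9], "turn_left", reward=1.0)
memory.store([0.8, 0.2], "turn_right", reward=0.0)
print(memory.recall([0.15, 0.85]))  # -> turn_left
```

Deep reinforcement-learning agents with episodic memory elaborate considerably on this pattern, but the core move of comparing the present state against specific stored episodes, rather than only against slowly learned weights, is the same.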
Understanding the nature of intelligence is a tricky business, but comparative psychology provides us with experimental tools that offer a window into the mind’s eye of other animals. In the future, these approaches may prove invaluable in providing insights into the behavior of artificial forms of intelligence, and one day, perhaps, into the behavior of organic life that looks very different from that on Earth.
Skeptics are well aware that there are issues with eyewitness testimony as evidence. These issues are popular topics of discussion at skeptical conferences and are the impetus for numerous skeptical articles. Human perception and memory are notoriously inaccurate, indeed malleable. Preconceptions and cognitive biases shape both our immediate perceptions of events and how we later recall, interpret, and relate them.
The testimony issue goes beyond simple eyewitness accounts, i.e., the descriptions people give of things they saw. Testimony can include any description or characterization of something that a person draws from the memory of their perceptions: something they heard, felt, smelled, read, viewed indirectly, or sensed in any way.
In discussing contentious topics, the interpretation of testimony can become highly emotional and swiftly devolve into an overly polarized argument that misses the nuance of the situation. I routinely encounter this type of reaction to my examination of testimony, in particular with UFO witnesses. At first I found this rather surprising. After all, I was just trying to be logical, follow the facts, and cover all the bases—one of which is the possibility of false witness testimony. But I was often met with an unexpectedly angry response.
This is something we need to avoid. Anger, of course, is rarely helpful in scientific communication. While skeptics find it logically correct to point out these problems, it’s not going to do anyone any good if all that happens is you make people angry. In fact, if you are perceived (as I often have been) of attacking, disrespecting, or denigrating a witness, then this can affect your credibility and destroy communication opportunities in other areas too.
Over the last couple of decades of encountering this problem, I’ve come across a few important concepts that have been helpful to keep in mind. Essentially, they are blind spots on the part of the supporters of the testimony, but if we don’t take them into account, they become our blind spots too.
Truth & Lies
When I explain that I don’t believe an individual’s testimony, their supporters will assume I’m accusing the witness of lying. This then drags the conversation either down the irrelevant path of “why would they lie” or the more perilous road of “how dare you suggest this wonderful person is lying!”
This is a false dichotomy. It’s not a simple matter of “truth” vs. “lies”. There are other options. Yet, even great minds fall into the trap. Here is Thomas Paine on miracles in his 1794 classic The Age of Reason:
If we are to suppose a miracle to be something so entirely out of the course of what is called Nature that she must go out of that course to accomplish it, and we see an account given of such miracle by the person who said he saw it, it raises a question in the mind very easily decided, which is: Is it more probable that Nature should go out of her course, or that a man should tell a lie? We have never seen, in our time, Nature go out of her course, but we have good reason to believe that millions of lies have been told in the same time; it is, therefore, at least millions to one that the reporter of a miracle tells a lie.

That paragraph gives me deeply mixed feelings each time I read it. Paine was examining the possibility of miracles from a rationalist perspective. He asked the reader to consider that people verifiably lie all the time, but miracles are both rare and lacking in scientific evidence. So which is more likely? In this dichotomy, the witness lying seems by far the most probable.
So this classic skeptical quote is fatally flawed, to the point of uselessness, because the opposite of truth is not lies. The opposite of truth is falseness. Truth means a statement is correct, in agreement with fact or reality. The opposite concept, falseness, means a statement is incorrect and contradicted by fact or reality, whether or not a person is lying. Paine’s contemporary, David Hume, in his analysis of miracles in his 1758 An Enquiry Concerning Human Understanding, acknowledged that in addition to deceiving (lying), people can also be deceived:
The plain consequence is (and it is a general maxim worthy of our attention), “That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous than the fact which it endeavors to establish.” When anyone tells me that he saw a dead man restored to life, I immediately consider with myself whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle. If the falsehood of his testimony would be more miraculous than the event which he relates; then, and not till then, can he pretend to command my belief or opinion.

There are many more ways for people to be deceived than to deceive, yet it’s oh-so-easy to fall for the false dichotomy of true vs. lie. As evidenced by the rather clumsy and unfamiliar set of antonyms we have for “truth” (“falseness,” “falsity,” “untruth”), the leap from “someone speaks falsely” to “they are lying” is familiar, understandable, and almost inevitable, so we must take great pains to explicitly avoid that misperception and give the other options their appropriate weight.
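Hume’s maxim translates naturally into modern probabilistic terms. The following is our gloss in Bayesian notation (not anything Paine or Hume wrote). Let $M$ be “the miracle occurred” and $T$ be “the witness testified that it did”:

$$ \frac{P(M \mid T)}{P(\neg M \mid T)} \;=\; \frac{P(T \mid M)}{P(T \mid \neg M)} \cdot \frac{P(M)}{P(\neg M)} $$

The testimony commands belief only if these posterior odds exceed one, which requires the likelihood ratio to outweigh the minuscule prior odds $P(M)/P(\neg M)$. Crucially, $P(T \mid \neg M)$, the probability of the testimony arising without a miracle, sums over every route to false testimony: lies, yes, but also misperception, misidentification, and false memory. Paine’s “millions to one” counted only the lies; Hume’s “deceive or be deceived” gets the denominator right.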
If someone is not telling the truth, then they might be lying, but they might also simply be wrong—perhaps they are misinterpreting something, or they made a mistake, or they succumbed to some perfectly ordinary illusion. Either way, the fact that they are saying something false does not mean they are lying. Giving people the benefit of the doubt, skeptics should focus on the possibilities other than lying. Rather than accusing people of lying, ask: “Could they have made a mistake?” “What if they misremembered?” “Could it have been an optical illusion?”
Of course, people lie, and we shouldn’t rule that out entirely, but in my experience with believers in UFOs, conspiracy theories, and strange phenomena, the majority of witnesses are quite honest in their descriptions. Unless you are dealing with an obvious charlatan, it’s best to avoid even mentioning the lie hypothesis, because it will immediately become the focus of outrage and resistance. Focus instead on the possibilities of mistakes, misperceptions, faulty memory, illusions, and hallucinations, and assume lies will be revealed in the process of deeper investigation.
Trusting the Victim

When a witness to an event or situation is also a victim (i.e., they have been hurt, assaulted, become ill, or suffered other harm), things become even more fraught with highly charged emotional obstacles to investigation and communication. The witness testimony of victims is simultaneously revered as sacrosanct, yet it is also known to be unreliable.
Nevertheless, as a general principle, the accounts of victims should not be automatically disbelieved. I think everyone deserves a fair hearing with the assumption that they are acting in good faith. Examining the accounts of people who were hurt, especially emotionally, is a tricky path to tread, and it very easily leads to the perception that the skeptic is on the attack. In response, a defensive wall goes up that blocks further discussion.
In recent years I’ve focused on the UFO community, and while skeptics don’t usually think of UFOlogists as victims, many people who believe they had some kind of extraterrestrial encounter suffer from an associated emotional trauma. Sometimes this stems from what they feel happened to them (which can be quite extreme, with perceived physical effects, even abductions and physical examinations), but it can also be the result of years of being disbelieved.
It is even more of an issue when the harm a victim is experiencing is the main evidence, or is itself the contended phenomenon. Here any examination of the validity of their testimony can readily be perceived or reframed as a personal attack on the individual, and that’s the end of the discussion.
This deference to victims crops up in many areas of interest to skeptics. In the curious case of Havana syndrome, discussed in depth in Vol. 26 No. 4 of Skeptic, several people became very ill and were convinced that their symptoms were related to a loud noise they heard, or a sensation they felt, which they now attribute to some kind of directed energy weapon attack. Since they are obviously suffering, it is difficult to critique their testimony without seeming callous.
My own experience with this issue dates back to 2006, when a condition known as “Morgellons disease” was getting some media attention. According to sufferers of the malady, their symptoms of itching and a general malaise consistent with aging coincided with what they described as “fibers” that wormed their way out of their skin.
Morgellons disease is a form of delusional parasitosis in which individuals report fibers or filaments emerging from the skin, often accompanied by itching, pain, and persistent sores. While sufferers attribute their symptoms to an infectious or environmental cause, most scientific studies have found no underlying pathogen, linking the condition instead to psychiatric disorders.

From their descriptions, their testimony, and the occasional images and video provided, it seemed quite apparent that they were simply finding normal hairs and clothing fibers. I blogged about this, describing how I could find similar fibers on my own skin (they are literally everywhere), and how the accounts of fibers emerging from skin were probably a mistake born of not appreciating the prevalence of microscopic fibers (a base rate error in Bayesian reasoning).
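To make the base rate point concrete (the setup here is our illustration, not from any study): let $D$ be “has a novel fiber-producing disease” and $F$ be “fibers found on the skin.” If ambient clothing fibers are nearly ubiquitous, then $P(F \mid \neg D) \approx 1$, so the likelihood ratio is

$$ \frac{P(F \mid D)}{P(F \mid \neg D)} \approx \frac{1}{1} = 1, $$

and finding fibers barely moves the posterior $P(D \mid F)$ off the prior at all. Evidence that is equally expected whether or not the hypothesis is true is no evidence; the fibers felt diagnostic only to those unaware of how high the base rate of finding fibers is.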
In response to my explanation, I was attacked and portrayed as someone who was accusing the victims of malingering or making up their symptoms, which I certainly was not. But because my initial skeptical approach was to point out what they had gotten wrong, it came across as contradicting their entire testimony. While the fibers were almost certainly unrelated to their experiences, they were actually suffering from a variety of physical symptoms and conditions.
The Morgellons experience taught me that we need to first treat the victim testifier with respect. Their suffering is real, regardless of the cause. Acknowledge that, and avoid describing their testimony in absolutes. Instead, as with the “truth vs. lies” issue, raise other possibilities as considerations for them, not assertions from you. Rather than leading an assessment of a traumatic alien abduction story with “that’s nonsense, obviously they dreamt the whole thing!”, ask “is it possible that sleep paralysis might have played a part here?”
Highly Trained Observers

On a near daily basis I am accused of dismissing the eyewitness testimony of highly trained observers. For example, Commander David Fravor, a decorated U.S. Navy pilot, has testified that he saw a 40-foot Tic-Tac-shaped UFO engage his plane in a short dogfight, and then shoot off at incredible speed with no visible means of propulsion.
I don’t know what he saw, but from his description of how the object seemed to perfectly mirror him, I suspected he had mistaken the size of the object and hence fallen for a parallax illusion that made it seem to move much faster than it actually was (if it was moving at all). So I proposed this idea and was met with a range of responses, mostly derisive and angry that I would have the temerity to insult the testimony of a highly trained observer like a U.S. Navy pilot.
These responses included the perception that I was accusing Fravor of lying, or of being incompetent, stupid, or insane. But I was doing none of those things; rather, I was simply pointing out that he might have made an understandable mistake.
U.S. Navy Commander David Fravor was flying an F/A-18 Hornet when he reported seeing a UFO, later nicknamed the “Tic Tac.” The object hovered over the ocean, appeared to respond to the jets, and perplexed those who watched it. Fravor described the encounter in a report for the Navy and has since been a proponent of the theory that he encountered alien life.

The notion of a “trained observer” is something of a myth. Of course military personnel are trained to observe things, but they are trained to observe specific known things, not things that are highly unexpected (like a giant flying Tic Tac) or out of the realm of human experience (like craft exhibiting non-Newtonian physics).
Military pilots’ training in observation of airborne objects comes largely in the form of recognizing other known planes. Since the 1940s pilots have been issued Visual Aircraft Recognition study cards, which show a variety of known friendly and enemy aircraft, usually in silhouette from various angles. More sophisticated recognition training takes place in simulators. But fast-moving UFOs are not something that pilots are trained to observe.
In fact, this intensive training might make matters worse. Being highly trained to identify a particular set of things can mean you will shoehorn outliers into that set. When Fravor saw the Tic-Tac he had no way of judging how large it was, but he settled on 40 feet because he felt it was about the same size as an F/A-18, the most common plane he saw in the air. Would he have picked the same size if he had been a commercial pilot of larger jets?
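The geometry explains why a size error translates directly into a speed error. A minimal sketch (our illustration, not Fravor’s or any official analysis): an object of true size $s$ at distance $d$ subtends an angle $\theta \approx s/d$, and an observed angular rate $\omega$ implies a transverse speed

$$ v \;=\; \omega \, d \;\approx\; \omega \, \frac{s}{\theta}. $$

The observer directly measures only $\theta$ and $\omega$; the speed estimate inherits whatever size is assumed. If a 4-foot object is taken to be 40 feet, the inferred distance, and hence the inferred speed, are both overestimated tenfold: a slow, nearby object becomes a distant one moving at impossible velocity.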
No matter how valid my hypothesis, or how real the potential for error on Fravor’s part, the “how dare you!” reaction prevents its wider consideration. Even though it seems annoying, I find it works better if I set the scene by explicitly explaining that I don’t think he’s lying, incompetent, stupid, or crazy. I have to establish that I do think he’s a highly skilled pilot, with years of experience, trained in observing other aircraft. Only once this is established can I tentatively explore how an understandable mistake might have been made by such a highly trained observer.
This awareness of the emotional reactions to criticism of witness testimony, and the techniques for avoiding those reactions, can feel annoying and even unnecessary, as if we are pandering to bad thinking. But the goal here is effective communication, and getting people to consider an alternative hypothesis is best done by understanding them, in the hope that they, in turn, will understand you.
What the true impact of artificial intelligence (AI) is and soon will be remains a point of contention. Even among scientifically literate skeptics, people tend to fall into decidedly different narratives. When being interviewed, I can now almost guarantee that I will be asked what I think about the impact of AI – will it help, will it hurt, is it real, is it a sham? The reason I think there is so much disagreement is that all of these things are true at the same time. Different attitudes toward AI are partly due to confirmation bias: once you have an AI narrative, you can easily find support for it. But part of the reason, I think, is that what you see depends on where you look.
The “AI is mostly hype” narrative derives partly from the fact that current AI applications are not necessarily fundamentally different from the AI applications of the last few decades. The big difference, of course, is the large language models, which are built on transformer technology. This allows for training on massive sets of unstructured data (like the internet) and for simulating human speech in a very realistic manner. But they are still narrow AI, without any true understanding of concepts. This is why they “hallucinate” and lie – they are generating probable patterns, not actually thinking about the world.
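A toy sketch makes the “probable patterns” point concrete. This is not how a transformer works internally (real models condition on long contexts with learned weights), but the generation step is the same in spirit: sample the next token from a probability distribution, with nothing checking whether the output is true.

```python
import random

# A toy next-token sampler (illustrative only; not an actual LLM).
# Each word is drawn from a distribution over likely continuations,
# with no built-in check on whether the resulting sentence is true.
bigram_probs = {
    "<start>": {"the": 1.0},
    "the": {"cat": 0.5, "moon": 0.5},
    "cat": {"walked": 0.7, "orbited": 0.3},
    "moon": {"orbited": 0.6, "walked": 0.4},
    "walked": {"<end>": 1.0},
    "orbited": {"<end>": 1.0},
}

def generate():
    token, output = "<start>", []
    while token != "<end>":
        nxt = bigram_probs[token]
        token = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if token != "<end>":
            output.append(token)
    return " ".join(output)

print(generate())  # e.g., "the cat orbited": fluent-sounding, possibly false
```

A fluent but false output like “the cat orbited” is, in miniature, a hallucination: the pattern is probable, the claim is not.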
So you can make the argument that recent AI is nothing fundamentally new – the output is highly flawed, still brittle in many ways, and mostly just flashy toys and ways to steal the creative output of the people who are generating the actual content. Or you can look at the same data and conclude that AI has made incredible strides and we are just now seeing its true potential. Applications like this one, which transforms old stills into brief movies, give us a glimpse of a “black mirror” near future in which amazing digital creations become our everyday experience.
But also, I think the “AI is hype” narrative is looking at only part of the elephant. Forget the fancy videos and pictures – AI is transforming scientific research in many areas. I read dozens of science news press releases every week, and there is now a steady stream of items about how AI allowed researchers to perform months of work in hours, or to accomplish tasks previously unattainable. The ability to find patterns in vast amounts of data is a perfect fit for genetics research, proteomics, materials science, neuroscience, astronomy, and other areas. AI is also poised to transform medical research and practice. The biggest problem for a modern clinician is the vast amount of data they need to deal with. It’s literally impossible to keep up in anything but a very narrow area, which is why so many clinicians specialize. But this causes a lack of generalists, who play a critical role in patient care.
AI has already proven equal or superior to human clinicians at reading medical scans, making diagnoses, and finding potential interactions, for example. This is mostly using generic ChatGPT-type programs, but medicine-specific ones are coming out. AI is also a perfect match for certain types of technology, such as robotics and brain-machine interfaces. For example, AI greatly improves users’ control of robotic prosthetic limbs and accelerates their training: AI apps can predict what the user wants to do, finding patterns in nerve or muscle activity that correspond to the desired movement.
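As a hedged illustration of that last point (a minimal sketch with synthetic data, not a real prosthetics pipeline or any particular product), the pattern-recognition step might look something like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for windowed muscle-activity features
# (e.g., per-channel RMS amplitude). Labels are hypothetical
# movement intents: 0=rest, 1=grip, 2=release.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)
X[y == 1] += 1.0   # give each intent a crudely separable signature
X[y == 2] -= 1.0

# Train on the first 200 windows, evaluate on the remaining 100.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```

Real systems use richer signal features and far more careful validation, but the core idea is the same: map patterns in nerve or muscle activity to the intended movement.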
These are concrete and undeniable applications that pretty much destroy the “AI is all hype” narrative. But that does not mean that other proposed AI applications are not mostly hype. Most new technologies are accompanied by snake oil peddlers hoping to cash in on the resulting hype and on the general unfamiliarity of the public with the new technology. AI is also very much a tool looking for applications, and it will take time to sort out what it does best, where it works and where it doesn’t. We have to keep in mind how fast this is all moving.
I am reminded of the early days of the web. One of my colleagues observed that the internet was going to go the way of CB radio – a fad without any real application that would soon fade. Many people shared a similar opinion – what was this all for, anyway? Meanwhile there was an internet-driven tech bubble that was mostly hype, and that soon burst. At the same time there were those who saw the potential of the internet and the web and landed on the applications for which it was best suited (and became billionaires). We cannot deny now that the web has transformed our society: the way we shop, the way we consume news and communicate, the way we consume media and spend a lot of our time (what are you doing right now?). The web was hype, and real, and caused harm, and is a great tool.
AI is the same, just at an earlier part of the curve. It is hype, but also a powerful tool. We are still sorting out what it works best for and where its true potential lies. It is transforming and will continue to transform our world, for both good and ill. So don’t believe all the hype, but ignore it at your peril. Whether it will be a net positive or negative for society depends on us – how we use it, how we support it, and how we regulate it. We basically failed to regulate social media and are now paying the price while scrambling to correct our mistakes. Probably the same thing will happen with AI, but there is an outside chance we may learn from our recent and very similar mistakes and get ahead of the curve. I wouldn’t hold my breath (certainly not in the current political environment), but crazier things have happened.
As with any technology, it can be used for good or bad, and the more powerful it is, the greater the potential benefit or harm. AI is the nuclear weapon of the digital world. I think the biggest legitimate concern is that it will become a powerful tool in the hands of authoritarian governments. AI could become an overwhelming tool of surveillance and oppression. Not thinking about this early in the game may be a mistake from which there is no recovery.
The post The AI Conundrum first appeared on NeuroLogica Blog.
My last post was about floating nuclear power plants. By coincidence I then ran across a news item about floating solar installations. This is also a potentially useful idea, and one that is already being implemented at increasing scale. It is estimated that total installed floating solar capacity reached 13 gigawatts in 2022 (up from only 3 GW in 2020), and the growth rate is projected to be 34% per year.
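A quick back-of-the-envelope sketch of what that projected rate implies (using only the figures above; the projection horizon is arbitrary):

```python
# Compound growth from the 2022 baseline at the projected 34% per year.
capacity_gw = 13.0
for year in range(2023, 2031):
    capacity_gw *= 1.34
    print(year, f"{capacity_gw:.0f} GW")
# Doubling time at 34%/yr: ln(2) / ln(1.34) ≈ 2.4 years,
# so capacity would roughly double every two and a half years.
```

At that rate, installed capacity would pass 100 GW before 2030 – if, and it is a big if, the projected growth rate holds.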
“Floatovoltaics,” as they are apparently called, are grid-scale solar installations on floating platforms. They are typically installed on artificial bodies of water, such as reservoirs and irrigation ponds. Such installations have two main advantages: they reduce evaporation, which helps preserve the reservoirs, and they provide a source of clean energy without using cropland or other land.
Land use can be a major limiting factor for solar power, depending on how it is installed. Here is an interesting comparison of the various energy sources and their land use. The greatest land use per unit of energy produced is hydroelectric (33 m^2/MWh). The best is nuclear, at 0.3 (two orders of magnitude better). Rooftop solar is among the best at 1.2, while solar photovoltaic installed on land is among the worst at 19. This is exactly why I am a big advocate of rooftop solar, even though it is more expensive up front than grid-scale installations. Right now in the US rooftop solar produces about 1.5% of electricity, but the total potential capacity is about 45%. More realistically (excluding the least optimal locations), shooting for 20-30% of energy production from rooftop solar is a reasonable goal. Pairing this with home battery backup makes solar power even better.
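To put those per-MWh figures in perspective, here is a small sketch computing the land footprint for a hypothetical 900,000 MWh of annual demand (our illustrative figure, roughly a mid-sized city; the per-MWh values are the ones quoted above):

```python
# Land needed to supply an assumed 900,000 MWh/year, by source,
# using the m^2-per-MWh figures quoted in the text.
land_use_m2_per_mwh = {
    "hydroelectric": 33.0,
    "ground-mounted solar": 19.0,
    "rooftop solar": 1.2,
    "nuclear": 0.3,
}
annual_demand_mwh = 900_000
for source, m2_per_mwh in land_use_m2_per_mwh.items():
    km2 = m2_per_mwh * annual_demand_mwh / 1e6  # m^2 to km^2
    print(f"{source:>20}: {km2:6.2f} km^2")
```

The spread is striking: roughly 17 km^2 for ground-mounted solar versus about 1 km^2 for rooftop solar, and a fraction of that for nuclear.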
Floating solar installations potentially offer the best of both worlds – less land use than land-based solar, and better economics than rooftop solar. If the installation serves double duty as an evaporation-prevention strategy, even better. This can also dovetail nicely with closed-loop pumped hydro, a promising grid-level energy storage solution that can store massive amounts of energy for long periods of time, enough to shift energy production to demand seasonally. The main source of energy loss with pumped hydro is evaporation, which can be mitigated by anti-evaporation strategies, which could include floating solar. Potentially you could have a large floating solar installation on top of a reservoir used for closed-loop pumped hydro, which stores the energy produced by the solar installation.
But of course no energy source is without its environmental impact. For floating solar one significant concern is the impact on water birds (where there are bodies of water, even artificial ones, there are water birds). This is an issue because water bird populations are already in decline. Unfortunately, right now we have very little data. We need to see how the installations affect water birds, and how those birds would affect the installations. The linked research mainly lays out the questions we need to ask. I doubt this will become a deal-killer for floating solar; mainly it’s good to know how to do this with minimal impact on wildlife.
This is true of energy production in general, and perhaps especially of renewable energy, as we plan to dramatically increase renewable energy installations. There has already been a big conversation around wind turbines and birds. Yes, wind turbines do kill birds. Even offshore wind turbines kill birds. In the US it is estimated that between 150,000 and 700,000 birds are killed annually by wind turbines. However, this is a rounding error next to the 1-3 billion birds killed by domestic cats annually. It is also estimated that over 1 billion birds die annually from flying into windows. We can save far more bird lives by keeping domestic cats indoors, controlling feral cat populations, and using bird-safe windows on big buildings than are lost to renewable energy. But sure, we can also deploy wind turbines in locations designed to minimize the impact on wildlife (birds and bats mostly). We should not put them in corridors used for bird migration or feeding, for example.
The same goes for floating solar – there are likely ways to deploy floating solar to minimize the impact on water birds and their ecosystems. The impact will never be zero, and we have to keep things in perspective, but taking reasonable measures to minimize the negative environmental impact of our energy production is a good idea.
We also have to keep in mind that all of the negative environmental impacts of renewable energy (and nuclear power, for that matter – any low-carbon energy source) are dwarfed by the environmental impact of burning fossil fuel. Fossil fuel plants kill an estimated 14.5 million birds in the US annually – about 500-1,000 times as many as wind and solar combined. And these are direct causes of death, from impact with infrastructure and from pollution. This doesn’t even count global warming. Once we factor that in, any environmental impact comparison is very likely to favor just about anything over fossil fuel.
We will likely have a lot more floating solar installations in our future, and that is probably a good thing.
The post Floating Solar Farms first appeared on NeuroLogica Blog.
The new Pope Leo XIV can make history by at long last releasing the World War II archives of the Vatican Bank and exposing one of the church’s darkest chapters.
The Catholic Church has a new leader, Pope Leo XIV. Born in 1955 in Chicago, Robert Francis Prevost is the first American to head the church and serve as sovereign of the Vatican City State. Many Vatican watchers will be looking for early signs that Pope Leo XIV intends to continue Pope Francis’s legacy of reforming Vatican finances and making the church a more transparent institution.
There is one immediate decision he could make that would set the tone for his papacy. Pope Leo could order the release of the World War II archives of the Vatican Bank, the repository with files that would answer lingering questions about how much the Catholic Church might have profited from wartime investments in Third Reich and Italian Fascist companies, and whether it acted as a postwar haven for looted Nazi funds. By solving one of the last great mysteries about the Holocaust, Pope Leo would embrace a long overdue historical transparency that proved too much for even his reform-minded predecessor.
What is sealed inside the Vatican Bank archives is more than a curiosity for historians. The Vatican is not only the world’s largest representative body of Christians, but also unique among religions since it is a sovereign state. It declared itself neutral during World War II and after the war claimed it had never invested in Axis powers nor stored Nazi plunder.
In my 2015 history of the finances of the Vatican (God’s Bankers: A History of Money and Power at the Vatican), I relied on company archives from the German and Italian insurers Allianz and Generali to show that the Vatican Bank had invested in both firms during the war. The Vatican earned outsized profits when those insurers expropriated the cash values of the life insurance policies of Jews sent to the death camps. After the war, when relatives of those murdered in the Holocaust tried collecting on those life insurance policies, they were turned away since they could not produce death certificates.
How much profit did the Vatican earn from the cancelled life insurance policies of Jews killed at Nazi death camps? The answer is inside the Vatican Bank archives.
Also in the Vatican Bank wartime files is the answer to whether the bank hid more than $200 million in gold stolen from the national bank of Nazi-allied Croatia. According to a 1946 memo from a U.S. Treasury agent, the Vatican had either smuggled the stolen gold to Spain or Argentina through its “pipeline” or used that story as a “smokescreen to cover the fact that the treasure remains in its original repository [the Vatican].”
The Vatican has long resisted international pressure to open those wartime bank files. World Jewish Congress President Edgar Bronfman Sr. had convinced President Bill Clinton in 1996 that it was time for a campaign to recover Nazi-looted Jewish assets. Clinton ordered 11 U.S. agencies to review and release all Holocaust-era files and urged other countries and private organizations with relevant documents to do the same.
The Vatican refused to join 25 nations in collecting documents across Europe to create a comprehensive guide for historians. At a 1997 London conference on looted Nazi gold, the Vatican was the only one of 42 countries that rejected requests for archival access. At a restitution conference in Washington the following year, it ignored Secretary of State Madeleine Albright’s emotional plea, and it opted out of an ambitious plan by 44 countries to return Nazi-looted art and property, settle unpaid life insurance claims, and reassert the call for public access to Holocaust-era archives.
Subsequent requests by President Clinton and Jewish organizations to open the files went unanswered. Historians, meanwhile, were inundated with millions of declassified wartime documents from more than a dozen countries, and only a handful of Jewish advocacy groups pressed the issue during the last years of John Paul II’s papacy and the eight years of Benedict XVI.
To his credit, in March 2020, Pope Francis opened millions of the church’s documents about its controversial wartime pope, Pius XII. That fulfilled in part a promise Pope Francis had made when he was the cardinal of Buenos Aires: “What you said about opening the archives relating to the Shoah [Holocaust] seems perfect to me. They should open them [the Holocaust files] and clarify everything. The objective has to be the truth.”
And while Pope Francis was responsible for reforming a bank that had often served as an offshore haven for tax evaders and money launderers, and that had frustrated six of his predecessors, he nevertheless kept the Vatican Bank files sealed.
Pope Leo XIV is the Vatican Bank’s sole shareholder. It has only a single branch, located in a former Vatican dungeon in the Torrione di Nicoló V (Tower of Nicholas V). The new Pope can order the release of the wartime Vatican Bank archives with the speed and ease with which a U.S. president issues an executive order. It would be a bold move in an institution with a well-deserved reputation for keeping files hidden, sometimes for centuries. It took more than 400 years for the Church to release some of its Inquisition files (and at long last exonerate Galileo Galilei), and more than 700 years before it cleared the Knights Templar of a heresy charge and opened the trial records.
Opening the Vatican Bank’s wartime archives would send the unequivocal message that transparency is not merely a talking point but a high priority that the new Pope intends to apply to the finances of the church, in its history as well as going forward. Such a historic decree would mark his papacy as having shed light on one of the church’s darkest chapters. In so doing, Pope Leo would pay tribute to the families of World War II victims who have long been demanding transparency and some semblance of justice.
A review of The Trump Revolution: A New Order of Great Powers by Aleksandr Dugin, Arktos Media, 2025, 136 pages.
Aleksandr Dugin has been described as the Kremlin’s chief ideologue for his substantial influence on Russian politics, promoting nationalist and traditionalist themes and publishing extensively on Russia’s central role in world civilization. He is also a long-time supporter of Donald Trump, and in his new book, The Trump Revolution: A New Order of Great Powers, Dugin celebrates the election of America’s 47th president as the culmination of his life’s work.
In an earlier article in Skeptic, I called Dugin “a mystical high priest of Russian fascism who wants to bring about the end of the world,” but also noted that he is a philosopher who specializes in the study and use of ideologies. In the 1990s, Dugin set himself the task of synthesizing a new ideology to replace the defunct communist movement as the foundation for the Kremlin’s international fifth column. For a while he played with the idea of uniting all antiliberal ideologies, including socialism, fascism, and ecologism, into a single allegedly profound “fourth political theory.”
Ultimately, however, Dugin was drawn toward the Third Reich’s National Socialism, which he found to be admirable. Dugin came to realize that the essence of Nazism was not its historical particulars, but Hitler’s key political insight, namely that there is no contradiction between nationalism and socialism. On the contrary, it is only by invoking the tribal instinct that a leader can arouse the passion needed to realize the full collectivist program, whose top priority is not the collectivization of property, but the eradication of individual reason and conscience through the collectivization of minds.
As Dugin saw it, every country could have its own tribal-collectivist movement. He thus proceeded on this principle to organize an “Alt-Right” Comintern, with parties in nearly every Western nation all based on the same template of militant national chauvinism, most often mobilized around anti-immigrant sentiment. Participating units in this franchise include the French National Rally led by Marine Le Pen, the German Alternative für Deutschland (AfD), the followers of Nigel Farage in the UK, Viktor Orbán’s party in Hungary, and similar parties in the Netherlands, Austria, Italy, Slovakia, and many other European countries. In The Trump Revolution, Dugin welcomes what he sees as the triumph of its American branch.
The sunwheel-like swastika used by the Thule Society and the German Workers’ Party. (Source: Wikipedia, by NsMn, CC BY-SA 3.0)

Arktos Media, the publisher of Dugin and a long list of other ultra-nationalist writers, derives its name from the Thule Society, which was devoted to the historical and anthropological search for the origin of the superior Germanic race, and which provided much of the mystical antecedents for the Nazi movement (the society’s logo adopted the Sanskrit symbol for “good fortune”—a hooked cross called the svastika). Accordingly, Arktos Editor in Chief Constantin von Hoffmeister provides the book’s preface by trumpeting its message in suitably Wagnerian terms—“the world ended.”
It ended in a neon blizzard, an electric storm of shattered paradigms. … We have Trumpism 2.0, and it is a revolution beyond revolutions–a final reckoning that promises to devour the remains of the corrupted system and rebuild something ancient, something powerful, something terrifying in its purity. … The Globalist Cathedral is in ruins. The Swamp has been burned, and from its ashes rises something ancient, something terrible, something divine. This is the Trumpian Ragnarök. This is history breaking apart and reassembling itself in a new form. Welcome to the Renewed World Order. It is not for the weak.

Dugin takes up the mantle from there:
Trumpism has emerged as a unique phenomenon, combining the long-marginalized national-populist agenda of paleoconservatives with an unexpected shift in Silicon Valley, where influential high-tech tycoons have begun aligning with conservative politics. … Thus, Trump’s second term has become the final chord in the geopolitics of a multipolar world, marking the overturning of the entire liberal-globalist ideology.

Dugin’s war is against the West, considered both as a creed whose Enlightenment liberal humanism threatens the principle of the need for unlimited tyranny (in Russian the “Silnaya Ruka,” or strong arm) underpinning Kremlin rule, and as a concrete geostrategic military power that must be defeated to expand Russian global dominion. He celebrates the election of Trump as serving both of those purposes:
This “illiberal nationalism” has become the ideological axis of the MAGA (Make America Great Again) movement. The United States is no longer presented as the global distributor of liberal democracy and its guarantor on a planetary scale. Instead, it is redefined as a Great Power–focused on its own greatness, sovereignty, and prosperity.

Thus, under Trump, there is no reason for the U.S. to maintain its alliances with other liberal democracies, supporting the collective security system of the free world, or what Dugin decries as the “unipolar world.” Instead, the U.S. can become, alongside Russia and China, one of several predatory Great Powers dominating continental spheres of influence within a new “multipolar world.”
There is on offer here something akin to the Molotov–Ribbentrop Pact, under which the United States gets North America, China gets east Asia and south Asia, and Russia gets Eurasia, “from Lisbon to Vladivostok.” It takes little imagination to see why such a division of spoils might appeal to the political leaders of Russia. But the fruition of such a great-power realignment very much depends upon the Americans abandoning their role as crusaders for the cause of world freedom. Fortunately, says Dugin, Trump is on board with this new alignment:
Trump and his ideology categorically reject any notion of internationalism, and rhetoric about so-called “universal human values,” “world democracy,” or “human rights.” Instead Trump appears to envision a final rupture with both the Yalta system and the unipolar globalist movement. He has therefore set out to dismantle all international institutions that symbolize the past eighty years–the UN, globalist structures like the WHO and USAID, and even NATO. Trump sees the United States as a new empire, and himself as a modern Augustus, who formally ended the decaying Republic. His ambitions extend beyond America itself–hence his interest in acquiring Greenland, Canada, the Panama Canal, and even Mexico. [italics in original]

Dugin’s ideas may appear to be quite mad, but they are actually instructive in revealing possible motivations for the actions of the Trump administration during its opening months. For example, abolishing USAID was justified by the Trump administration as a fiscal responsibility measure, but Dugin contends that entirely other motivations were operational:
The liquidation of the United States Agency for International Development (USAID) is an event whose significance can hardly be overstated. When the Soviet Union abolished the Comintern … structures that advocated the ideological interests of the USSR on a global scale, it marked the beginning of the end for the international Soviet system. … Something similar is happening in America, as USAID was the main operational structure for the implementation of globalist projects. Essentially, it was the primary transmission belt for globalism as an ideology aimed at the worldwide imposition of liberal democracy, market economics, and human rights.

The banning of USAID is a critical, fundamental move, the importance of which, as I said, cannot be overstated. This is especially true because countries like Ukraine largely depend on the agency, receiving significant funding through it. All Ukrainian media, NGOs, and ideological structures were financed by USAID. The same applied to almost the entire liberal opposition in the post-Soviet space, as well as liberal regimes in various countries, including Maia Sandu’s Moldovan administration and many European political regimes, which were also on USAID’s payroll.

Dugin presents Ukraine as central to Russia’s strategic objectives within the envisioned multipolar world order, framing it as a pivotal asset in the redistribution of global influence:
Zelensky undoubtedly realizes that his time is running out, and with it the history of Ukraine comes to an end. … The current U.S. leadership does not intend to continue this policy of support, and therefore Zelensky finds himself in a dead-end situation. His desperate attempts to intervene in the situation resemble a frog trying to climb out of a bucket of milk. He seeks to draw attention to himself despite the fact that no one is asking him, and no negotiations are being conducted with him. The main discussions will be centered on the strategic dialogue between Putin and Trump, concerning not only Ukraine but also the global order. … This is the essence of building a multipolar world, in which Ukraine has no place.

A U.S. disengagement from Ukraine would precipitate a broader decline of European stability, triggering a wider unraveling of the continent’s geopolitical coherence and power:
Handing over a half-decayed, toxic corpse, exuding radiation and stench, is hardly a worthy gift for one’s allies and friends. In this context, Ukraine appears to be just such a toxic waste. Trump is seemingly eager to rid himself of this burden. If Europe is left to face Russia alone, the collapse of the globalist liberal elite will accelerate. Thus, the Trojan gift–the assignment of responsibility to Europe for waging war against Russia in Ukraine–is presumably Trump’s strategy to quickly weaken, and possibly even dismantle his trade competitors while undermining his ideological opponents in Europe. The European elite openly opposes Trumpism.

In this vision, Europe is probably gone, although Dugin does admit the outside possibility that during the process of the dissolution of NATO and the EU the German revanchist AfD might be able to step up and Make Europe Great Again. One can readily conceive of how such an eventuality might not work out so well for Russia, but Dugin seems unconcerned.
There is one fly in the ointment of Dugin’s brave new multipolar world of predatory great powers feasting on the weak. That is Israel. For some irrational reason, says Dugin, Trump “takes a staunchly pro-Israel stance.” What does this mean for Dugin’s multipolar world order?
I believe Trump is making his first major geopolitical miscalculation in shaping the new world order in the Middle East. He is alienating the Islamic world–a powerful force that he fails to recognize as an independent geopolitical pole. This is especially true regarding his antagonism towards Iran and the Shiite factions that maintain staunchly anti-Zionist and anti-Israel positions. … I hope that, despite his radical rhetoric and actions, once he fully assumes a role as a key global political architect, he will begin to take reality into account. Otherwise he risks ending up like the liberals he ousted.

Dugin freely mixes current Kremlin propaganda lines into his analysis. For example, in line with Putin’s effort to portray Russia’s “special military operation” in Ukraine as a replay of the Soviets’ Great Patriotic War of resistance against Hitler (known as the Second World War to the rest of us), Dugin denounces the Ukrainians as Nazis. This is not only untrue but hypocritical, because in the past Dugin has repeatedly stated his affinity not only for the Waffen SS, but for the work of foundational Nazi intellectuals, including Hitler’s geopolitical mentor Professor General Karl Haushofer, the philosopher Martin Heidegger, and the legal theorist Carl Schmitt. Indeed, it is Schmitt’s argument that the idea of fundamental human rights is an intolerable restraint on the Will of the People as expressed through its Leader that is at the core of Dugin’s case against liberalism.
Perhaps to make himself more appealing to some elements of Trump’s political base, Dugin goes out of his way in his book to represent himself as a Christian, and to describe his cause as the defense of “White Christian civilization.” This is quite remarkable, not only because of the identification of Christianity with the interests of a particular race, but because in the past Dugin had openly celebrated Nazi paganism, going so far as to sponsor an artistic cult devoted to its promotion. Indeed, the anti-Christian nature of Dugin’s mystical theology is so rabid that in 2014 Lutheran bishop James Heiser wrote an entire book diagnosing it as systemically evil.
I am not a theist, so some of Heiser’s arguments pass me by. Yet I think that in a fundamental sense he is onto something. Dugin—not unlike Trump—does not seem to believe that there is any reality to the concepts of right and wrong. Rather, they only believe in advantage and disadvantage. On the basis of this “transactionalist” belief structure, Dugin believes the Trump administration is leaning towards abandoning America’s role as “the watchmen on the walls of world freedom,” an outcome that Dugin has devoted his life to obtaining. (“We are the watchmen on the walls of world freedom” is a famous line from the speech that President John F. Kennedy intended to give at the Dallas Trade Mart on November 22, 1963, before he was gunned down by Lee Harvey Oswald, an ex-Marine who had defected to the Soviet Union.)
Yet it is precisely the collective security arrangements and international system of free trade underlying what Dugin decries as the “unipolar world” that have prevented a general war or a depression since 1945. This unprecedented 80-year period of peace, prosperity, and progress has showered enormous blessings not only on America and Europe, but most emphatically on Russia as well. Over the course of the final four decades of the “multipolar world” that preceded the establishment of the Pax Americana, Russia was defeated or devastated by war no fewer than five times. Between the Russo-Japanese War, World War I, the Russian Civil War, the Russo-Polish War, and World War II, well over fifty million Russians were violently sent to their graves. Horrors on that scale ended in 1945. Yet that is the world that Dugin ardently seeks to recreate.
Call it political transactionalism or moral nihilism if you prefer. I think evil hits the nail right on the head.
This is an intriguing idea, one that I can see either becoming critical over the next few decades or never manifesting – developing a fleet of floating nuclear power plants. One company, Core Power, is working on this technology and plans to have commercially deployable plants by 2035. Company press releases touting their own technology and innovation are hardly an objective and reliable source, but that doesn’t mean the idea lacks merit. So let’s explore the pros and cons.
The first nuclear-powered ship, the USS Nautilus, was deployed in 1955. So in that sense we have had ship-based nuclear reactors operating continuously (collectively, not individually) for the last 70 years. Right now there are about 160 nuclear-powered ships in operation, mostly submarines and aircraft carriers. They generally produce several hundred megawatts of electricity, compared to around 1,600 for a typical large nuclear reactor. They are, however, in the range of the small modular reactors that have been proposed as the next generation of land-based nuclear power. The US has operated nuclear-powered ships without incident – a remarkable safety record. There have been a couple of incidents with Soviet ships, but arguably that was a Soviet problem, not an issue with the technology. In any case, that is a very long record of safe and effective operation.
Core Power wants to take this concept and adapt it for commercial energy production. They are designing nuclear power barges – large ships designed only to produce nuclear power, so all of their space can be dedicated to this purpose, and they can produce as much electricity as a standard nuclear power plant. They plan on using a Gen IV salt-cooled reactor design, which is inherently safer than older designs and does not require high pressure for operation and cooling.
The potential advantages of this approach are that these nuclear barges can be produced in a centralized manufacturing location, essentially a shipyard, which allows for economies of scale and mass production. They intend to leverage the existing experience and workforce of shipyards to keep costs down and production high. The barges can then be towed to the desired location. Core Power points out that 65% of economic activity occurs in coastal regions, so the demand for power there is high, and offshore power could supply some of that demand. Nuclear barges could be towed into port, or they could be anchored farther offshore. Maintenance and waste disposal could all be handled centrally. Since there is no site preparation, that is a huge time and cost savings. Further, there is no land use, and these barges could be placed relatively close to dense urban centers.
There are potential downsides. The first that comes to mind is that there isn’t a pre-existing connection to the grid. One of the advantages of land-based nuclear is that you can decommission a coal plant and then build a nuclear power plant on the same site, using the same grid connections. This is not a deal-killer, but it will require new infrastructure. A second issue is safety. While ship-based nuclear has a long and safe history, this would be a new design. Further, a radiation leak in a coastal environment could be disastrous, and this would need to be studied. I do think this concept is only viable because of the salt-cooled design, but it will still require extensive safety regulation.
And this relates to another potential problem – the mid-2030s timeline is likely ambitious. While I think we should “warp speed” new nuclear to fight climate change, this unfortunately is not likely to happen. New projects like this can get bogged down in regulation. Safety regulation is, in itself, reasonable, and it will likely be a tough sell to speed up or streamline it. There is a reasonable compromise between speed and safety, and I can only hope we get close to that optimal compromise, but history tells a different story.
What about the usual complaint about nuclear waste? This is often the reason given by those who are anti-nuclear. I have discussed this before – waste is actually not that big a problem. The highly radioactive waste is short-lived, and the long half-life nuclear waste is very low level (by definition). We just need to put it somewhere. Right now this is purely a political (mostly NIMBY) problem, not a technology problem.
On balance this seems like an idea worth exploring. Given the looming reality of climate change, exploring all options is the best way forward. Also, Core Power plans, as a phase 2, to adapt their technology for a commercial fleet of nuclear-powered ships. Ocean shipping produces about 3% of global CO2 emissions, which is not insignificant. If our cargo carriers were mostly nuclear powered, that could avoid a lot of CO2 release. (They are also not the only company working on this technology.) A nuclear cargo ship would have more space for cargo, since it doesn’t need to carry a lot of fuel for itself. It would also be able to operate for years without refueling. This means it could be commercially viable for shipping companies.
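A rough sense of the scale (the 3% share is from the text; the global total and displaced fraction are our assumptions for illustration):

```python
# Back-of-the-envelope: CO2 avoided if nuclear propulsion displaced most
# fossil-fueled shipping. The ~37 Gt global figure and the 80% displaced
# fraction are assumptions for illustration; the 3% share is from the text.
global_co2_gt = 37.0       # assumed global energy-related CO2, Gt/year
shipping_share = 0.03      # ocean shipping's ~3% share
displaced_fraction = 0.8   # assume nuclear displaces 80% of shipping fuel
avoided_gt = global_co2_gt * shipping_share * displaced_fraction
print(f"~{avoided_gt:.1f} Gt CO2 avoided per year")  # roughly 0.9 Gt
```

Under those assumptions, nuclear shipping would avoid on the order of a gigatonne of CO2 per year: not a silver bullet, but far from trivial.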
Maritime nuclear power may turn out to be an important part of the solution to our greenhouse gas problem. The technology seems viable. The determining factor may simply be how much of a priority we make it. Given the realities of climate change, I don’t see why we shouldn’t make it a high priority.
The post Floating Nuclear Power Plants first appeared on NeuroLogica Blog.
Rising temperatures, biodiversity loss, drought, mass migration, the spread of misinformation, inflation, infertility—these are just some of the major challenges facing societies around the globe. Generating innovative solutions to such challenges requires expanding our understanding of what’s currently possible. But how do we cultivate the necessary imagination? By debunking counterproductive myths about imagination, Occidental College cognitive developmental scientist Andrew Shtulman might just provide us with a starting point. “Unstructured imagination succumbs to expectation,” he writes, “but imagination structured by knowledge and reflection allows for innovation.” (p. 12)
Imagination Myths

In Learning to Imagine: The Science of Discovering New Possibilities, Shtulman challenges the highly intuitive, yet obstructive notion that great imagination stems from a place of ignorance. Through everyday examples and detailed experimental studies, Shtulman effectively tackles the pervasive deficit view of imagination—that it’s something we engage in a great deal during childhood and sadly lose as we get older.
Contrary to the conventional wisdom, Shtulman demonstrates that children’s imagination, relative to adults’, is constrained by what they think is physically plausible, statistically probable, and socially and morally acceptable. While some early philosophers and social scientists considered children to be lacking intelligence and their minds to be blank slates, a contemporary swing of the pendulum has led to a confusing romanticization of children as “little scientists” capable of unacknowledged insight. A more informed view recognizes that though children’s minds are not blank slates, they do often conflate what they’ve personally experienced with what “could be.” To a child’s mind, what they can’t imagine can’t exist.
Shtulman notes how young children often deny the existence of uncommon but entirely possible events, such as finding an alligator under a bed, catching a fly with chopsticks, or a man growing a beard down to his toes. Children find these situations as impossible as eating lightning for dinner. While children often engage in pretend play, it usually mimics mundane aspects of real life, such as cooking or construction. Children also often believe in magic and fantastical beings such as Santa Claus, but such myths were not spontaneously created by children; they were first endorsed by adults they trust. Children rarely generate novel solutions to problems, as they tend to fixate on the rules and norms familiar to them, often correcting others who have deviated from what is expected and sometimes becoming offended by “rule” violations.
Learning to Imagine is not only about children’s cognition; it is fundamentally a book about human reasoning and contains insights applicable to all of us. Shtulman sheds light on the many important ways in which adults continue to constrain their own imagination through self-interest, habit, fear, and a fixation on conforming to one’s social group. For example, we may resist adopting new technologies like artificial intelligence because they reduce the need for our skillset or simply engender fear of the unknown. We may resist something that requires us to change our habits (e.g., carrying reusable bags), something that forces us to take risks (e.g., trusting a quickly developed vaccine), or something that makes us deviate from our in-group (e.g., advocating for a new theory or openly sharing an unpopular opinion).
What is Imagination, Anyhow?

So, just what is imagination? Shtulman argues that it is the ability to abstract from the here and now, to contemplate what could be or what could have been. Imagination is an evolved cognitive skill used for everyday planning, predicting, and problem-solving. We imagine what we would buy at the store, how a meeting at work might go, how if we had only said something a different way, we could have avoided that fight with our spouse, and so on. Simply put, imagination is the ability to ask “What’s possible?” Imagination can be engaged for our subjective experiences (e.g., imagining how life would have been different if path B was chosen instead of path A), and for what might be more relevant to others (e.g., works of art, new policies, technologies).
Even though Shtulman’s case for imagination is grounded in this “what’s possible” definition, how it is intertwined with closely related constructs such as “creativity” and “innovation” is somewhat less clear. What can be discerned from his book is that while imagination can be collaborative, in the sense that we draw on human knowledge to ask “what if,” it is largely a personal endeavor. Creativity, on the other hand, is the product of imagination that can be shared with others. Building upon imagination and creativity, innovation is the product of extraordinary imagination and can be developed and refined.
Mechanisms for Expanding What’s Possible

What are the proposed means by which we expand our knowledge, thereby improving the likelihood that we shift our imagination from the ordinary to the extraordinary? Shtulman outlines three ways of learning, specifically through: (1) examples, (2) principles, and (3) models.
The first mechanism, learning through examples, involves learning about new possibilities via other people’s testimony, demonstrations, empirical discoveries, and technological creations. Through education, others’ knowledge becomes our knowledge. However, learning through examples is the easiest but also the most limited means of expanding our imagination. On one hand, new possibilities are added to our database of what could be; on the other, we risk overly fixating on the suboptimal (yet adequate) solution we have learned. As Shtulman notes, we “privilege [our] expectation over observation.” For example, we tend to copy both the necessary and the unnecessary actions of others when trying to achieve the same goal—we fixate on the solution we are familiar with rather than spend the little effort required to abstract a more efficient one. Children are even more susceptible to this. For example, imagine you see a toy with a handle stuck at the bottom of a long tube, and you are provided with a straight pipe cleaner. How might you reach and retrieve the toy? You likely imagine bending the pipe cleaner, yet most preschoolers tasked with reaching the toy in this scenario are unable to imagine how the pipe cleaner could be used as a sufficient tool.
The second mechanism, learning through principles, refers to generating a new collection of possibilities by learning abstract theories about “how” and “why” things operate. These include scientific (cause-and-effect), mathematical, and ethical principles. As a means of expanding imagination, principles are more valuable than examples because they help us extrapolate possibilities from one situation and apply them across different domains. One illustrative example is the physicist Ernest Rutherford, who won a Nobel Prize in chemistry. Rutherford hypothesized (correctly) that electrons, like planets orbiting a sun, may orbit a nucleus. By taking the principle of gravity and applying it in a different context, Rutherford generalized an insight from physics to innovate in the field of chemistry. Engaging with principles allows us to practice applying our knowledge and better understand novel relationships. While most of us are not scientists striving to win a Nobel Prize, we can still learn new principles that expand our imagination. However, principles can be overgeneralized, and Shtulman argues that new applications should still be tested and replicated to confirm the connection.
An artist recreates children’s drawings as if they were real creatures. (Source: Things I Have Drawn)
The third mechanism, learning through models, might be the most exciting: it concerns expanding our ideas about what is possible by immersing ourselves in simulated versions of reality that can be manipulated with little to no consequence. These simulations allow for personal reflection through mental time travel, and they include pretense (i.e., pretend play), fiction, and religion. Pretense expands our symbolic imagination by letting us toy with alternative possibilities that remain somewhat rooted in reality; the real-world elements of pretend play are what make it meaningful. For example, when children (and many adults) are asked to draw an animal that doesn't exist, the product is usually an amalgamation of existing animal parts rather than a completely novel creature. Such mental play supports the development of logical reasoning. Through mediums such as books and film, fiction expands our imagination by allowing us to experience the social world through the eyes and thoughts of others. We see how people react to situations we haven't experienced and contemplate how we might respond if we were in their shoes.
Religion is less rooted in the here and now, but it may enable us to expand metaphysical ideas and explore moral reasoning by directing thought and behavior according to the core values of a specific faith. Ultimately, models let us work through various problems and absorb their lessons without the risks of acting them out in real life. On the other hand, models sometimes communicate false information that we mistakenly encode as true. Though models may lead us astray, Shtulman argues that they "provide the raw materials. … You have to represent reality before you can tinker with it, to know the facts before you can entertain counterfactuals" (p. 12).
The numerous cases Shtulman marshals for how examples, principles, and models expand imagination make a convincing argument for the central thesis of his book: that, contrary to current conventional wisdom, children's lack of knowledge, experience, and reflection makes them less imaginative than adults. However, the book's attempts to distinguish its many overlapping concepts (e.g., religious models vs. fictional models; social imagination vs. moral imagination) are sometimes disorienting.
Should You Read This Book?

Will this book provide you with a specific list of ways to quickly develop a more "imaginative mindset" for yourself and others? No; it is not a self-help book. Instead, you'll spend hours on an engaging (and, dare we say, nourishing) tour of the limitations and achievements of human imagination. By the end, you'll know a lot more about how the human mind develops and reasons, and about the cognitive mechanisms that impede and enhance innovation across eras, societies, and an individual lifetime. With that newfound knowledge, you may begin to imagine solutions to both personal and global challenges that you hadn't considered before.
The Problem with Self-Diagnosis

The recent discussions about autism have been fascinating, partly because there is a robust neurodiversity community with very deep, personal, and thoughtful opinions about the whole thing. One of the issues that came up after we discussed this on the SGU was self-diagnosis: some people in the community are essentially self-diagnosed as being on the autism spectrum. Cara and I both reflexively said this was not a good thing, and then moved on. But some in the community who are self-diagnosed took exception to our dismissiveness. I didn't even realize this was a point of contention.
Two issues came up: the reasons people feel they need self-diagnosis, and the accuracy of self-diagnosis. The main reason given in support of self-diagnosis was the lack of adequate professional services. It can be difficult to find a qualified practitioner. It can take a long time to get an appointment. Insurance does not cover "mental health" services very well, so getting a professional diagnosis is often simply too expensive for many people. Self-diagnosis is therefore their only practical option.
I get this, and I have been complaining about the lack of mental health services for a long time. The solution is to increase the services available and improve insurance coverage, not to rely on self-diagnosis. But that will not happen overnight, and may not happen anytime soon, so they have a point. It doesn't change the unavoidable reality, however, that diagnoses based on neurological and psychological signs and symptoms are extremely difficult to make, and self-diagnosis in any medical area is fraught with challenges. Let me start by discussing the issues with self-diagnosis generally (not specifically with autism).
I wrote recently about the phenomenon of diagnosis itself. (I do recommend you read that article first, if you haven’t already.) A medical/psychological diagnosis is a complex multifaceted phenomenon. It exists in a specific context and for a specific purpose. Diagnoses can be purely descriptive, based on clinical signs and symptoms, or based on various kinds of biological markers – blood tests, anatomical scans, biopsy findings, functional tests, or genetics. Also, clinical entities are often not discrete, but are fuzzy around the edges, manifest differently in different populations and individuals, and overlap with other diagnoses. Some diagnoses are just placeholders for things we don’t understand. There are also generic categorization issues, like lumping vs splitting (do we use big umbrella diagnoses or split every small difference up into its own diagnosis?).
Ideally, a diagnostic label predicts something. It informs prognosis, or helps us manage the patient or client, for example by determining which treatments they are likely to respond to. Diagnostic labels are also used by researchers to communicate with each other. They serve as regulatory categories (for example, a drug can only have an FDA indication to treat a specific disease). And they are used for public health communication. Sometimes a diagnostic label can serve all of these purposes well at once, but often these uses are at cross-purposes.
Given this complexity, it takes a lot of topic expertise to know how to apply diagnostic criteria. This is especially true in neurology and psychology where signs and symptoms can be difficult to parse, and there are many potential lines of cause and effect. For example, someone can have primary anxiety and their anxiety then causes or exacerbates physical symptoms. Or, someone can have physical symptoms that then cause or exacerbate their anxiety. Or both can be true at the same time, and the conditions are “comorbid”.
One main problem with self-diagnosis is that a complex diagnosis requires objectivity, and by definition it is difficult to be objective about yourself. Fear, anxiety, and neuroticism make it even more difficult. As a clinician, I see the end results of self-diagnosis all the time. They are usually a manifestation of the patient's limited knowledge and their fears and concerns. We see this commonly in medical students, for example; it is a running joke in medical education that students will self-diagnose with many of the conditions they are studying. We discuss this with them, and why it happens.
This is partly the Forer Effect, the tendency to see ourselves in any description, but mostly confirmation bias: we cherry-pick the parts that seem to fit us and unconsciously scan our vast database of life experience for matches to the target symptoms. Yes, I do occasionally cough. My back does hurt at times. Now imagine this process with cognitive symptoms: I do get overwhelmed at times. I can focus on small details and get distracted, etc. With the Forer Effect (the most familiar example is people seeing themselves in any astrological personality profile), the more vague or non-specific the description, the stronger the effect. This makes psychological diagnoses especially susceptible.
To make an accurate diagnosis one also needs to understand the difference between specific and non-specific symptoms. A fever is a symptom of an acute or subacute Lyme infection, but it is an extremely non-specific one, as fevers can result from hundreds of causes. A target (bull's-eye) rash, by contrast, is a specific sign; so specific it is called pathognomonic, meaning that if you have the sign, you have the disease. (BTW: a symptom is something you experience; a sign is something someone else observes.) So having a list of symptoms that are consistent with a diagnosis, but all non-specific, is actually not that predictive. But the natural tendency is to think that it is: "I have all the symptoms of this disease" is a common refrain I hear from the wrongly self-diagnosed.
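A quick back-of-the-envelope calculation shows why. Suppose (these numbers are purely illustrative, not from any study) a disease affects 1 in 1,000 people, and a non-specific symptom occurs in 90 percent of those who have the disease but also in 5 percent of everyone else. Bayes' theorem gives the probability of having the disease given the symptom:

P(disease | symptom) = (0.9 × 0.001) / (0.9 × 0.001 + 0.05 × 0.999) ≈ 0.018

So even with the symptom, the probability of having the disease is under 2 percent. Several non-specific symptoms together do raise the probability, but by less than intuition suggests, because such symptoms tend to co-occur for mundane reasons and so don't count as independent pieces of evidence.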
Also, it is important to determine whether any symptoms could have another cause. If someone is depressed, for example, because a loved one just died, that depression is reactive and healthy, not a symptom of a disorder.
Further, many signs and symptoms are a matter of degree. All muscles twitch, for example, and a certain amount of twitching is considered to be physiological (and normal). At some point twitching becomes pathological. Even then it may be benign or a sign of a serious underlying neurological condition. But if you go on the internet and look up muscle twitching, you are likely to self-diagnose with a horrible condition.
An experienced clinician can put all of this into perspective and make a formal diagnosis that actually has predictive value and can be used to make clinical decisions. Self-diagnosis, however, is hit or miss. Mainly I see false positives: people who think they have a diagnosis based on anxiety or non-specific symptoms. These tend to cluster around diagnoses that are popular or faddish, and the internet is now a major driver of incorrect self-diagnosis. Some people, or their families, do correctly self-diagnose. Some neurological conditions, like Parkinson's disease, tend to have fairly easily detected and specific signs and symptoms that a non-expert can recognize. Even with PD, however, there are subtypes, secondary causes, and comorbidities, so you still need a formal expert diagnosis.
With autism spectrum disorder, I do not doubt that some people can correctly determine that they are on the spectrum. But I would not rely on self-diagnosis or assume it is automatically accurate (on the premise that people know themselves). The diagnosis still benefits from formal testing, using formal criteria and cutoffs, ruling out other conditions and comorbidities, and putting it all into perspective. I am also concerned that self-diagnosis can lead to self-treatment, which has a separate list of concerns worthy of its own article. Further, the internet makes it easy to create communities of people who are self-diagnosed and seeking self-treatment, or who get hooked up with dubious practitioners more than willing to sell them snake oil. I am not specifically talking about autism here, although this does exist (largely attached to the anti-vaccine and alternative-medicine cultures).
There is now, for example, a chronic Lyme community whose members help each other self-diagnose and get treated by "Lyme literate" practitioners. This community and its diagnosis are now separate from scientific reality, existing in their own bubble, one which foments distrust of institutions and seeks out "mavericks" brave enough to go against the system. It's all very toxic and counterproductive. This is what concerns me most about an internet-fueled community of the self-diagnosed: that it will drift off into its own world and become the target of charlatans and snake-oil peddlers. The institutions we have, and the people who fill them, are not perfect, but they exist for a reason and they do have standards. I would not casually toss them aside.