As I write this the week of April 20, 2026, both mainstream media and social media are chockablock with coverage of the disappearance or death of eleven (and counting) U.S. scientists who worked on UFOs, nuclear weapons, military defense, propulsion systems, or other related fields (a list that keeps growing as new deaths or disappearances are identified outside the original categories).
House Oversight Chair James Comer, for example, told Fox News “Congress is very concerned about this. Our committee is making this one of our priorities now because we view this as a national security threat,” adding “there’s a high possibility that something sinister is taking place here.”
Congressman Eric Burlison (R) told Fox News “This has all the hallmarks of a foreign operation,” and suggested to Elizabeth Vargas at NewsNation that it could be China, Russia, or Iran behind the cabal. Famed physicist Michio Kaku opined “If 10 scientists suddenly die or vanish who all have access to sensitive research, this is cause for national concern.” Even President Trump admitted that this is “pretty serious stuff…some of them were very important people,” but added “I hope it’s random.”
It’s random, Mr. President. Connecting a small cohort of individuals from a wide range of fields to deaths or disappearances is an example of what I call patternicity, or the tendency to find meaningful patterns in random noise. It is also a case study in what cognitive psychologists call base rate neglect, or the tendency to focus on specific, vivid, or anecdotal evidence and ignore statistical generalizations that better explain the phenomenon.
One of the eleven scientists, for example, Amy Eskridge, who was president of the Institute for Exotic Science (an organization she co-founded) and worked on anti-gravity propulsion and electrostatic propulsion systems, died by suicide from a self-inflicted gunshot wound to the head. How unusual is that? According to the Johns Hopkins University Center for Gun Violence Solutions, 27,300 people die each year by gun-inflicted suicide in the U.S. That’s the base rate, and Eskridge’s own non-conspiratorial family accepts that Amy was another lamentable casualty of gun violence and suicidality, not the victim of a vicious UFO cabal. “Scientists die also, just like other people,” explained her father Richard.
Most of the other scientists have similarly prosaic (albeit heartbreaking) explanations. Monica Reza, for example, who worked on orbital communication systems, disappeared while hiking in the Angeles National Forest near Mount Waterman in California, a remote forested area near where I live in which people go missing every year. Although she was accompanied by two other experienced hikers who reported that she just dropped off the side of the trail, I have done a fair amount of hiking and mountain biking in those mountains and know well that there are countless precipitous cliffs off which one could easily fall and disappear into thick brush below (which is how I broke my collarbone on a mountain bike ride in 1991).
A similar disappearance is that of retired Major General William Neil McCasland, a former Director of the Air Force Research Laboratory who worked on hypersonics, directed-energy systems, and advanced propulsion technology, and who went missing during a wilderness hike in New Mexico on February 27, 2026, apparently taking with him his wallet and a .38 caliber revolver in a leather holster (and leaving behind his phone and prescription glasses). According to his wife, McCasland had been experiencing short-term memory loss, medical issues, anxiety, and a lack of sleep; she suspected he “planned not to be found” and, in any case, “He retired from the [Air Force] almost 13 years ago and has had only very commonly held clearances since. It seems quite unlikely that he was taken to extract very dated secrets from him.”
Before we jump to conspiratorial speculations on these particular vanishings, consider the fact that somewhere between 1,200 and 1,600 people disappear in America’s National Parks annually, a stunning number that nonetheless shrinks by comparison to the over 500,000 people who go missing in the U.S. each year according to the FBI. That’s a base rate one should never neglect, and it likely explains the disappearance of 48-year-old government contractor Steven Garcia in August of 2025, also in New Mexico, who worked on nuclear and aerospace research and who was carrying a handgun and left behind his phone, keys, wallet, and car. Anecdotally weird? Sure. Statistically out of the ordinary for missing persons? No.
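To make the base rate concrete, here is a minimal back-of-the-envelope sketch in Python. The cohort size is a hypothetical assumption of mine (suppose something like 200,000 Americans work in defense, aerospace, nuclear, or adjacent research fields); the national figures are the ones cited above, and the point is orders of magnitude, not precision.

```python
# Back-of-the-envelope base-rate check. The cohort size is a
# hypothetical assumption for illustration; the national figures
# are the ones cited in the text.
US_POPULATION = 335_000_000      # approximate U.S. population
MISSING_PER_YEAR = 500_000       # FBI missing-persons figure cited above
GUN_SUICIDES_PER_YEAR = 27_300   # Johns Hopkins figure cited above

COHORT = 200_000  # assumed number of U.S. defense/aerospace/nuclear researchers

def expected_cases(national_count: int, cohort: int) -> float:
    """Expected annual cases in the cohort if the national rate applies uniformly."""
    return national_count / US_POPULATION * cohort

print(f"Expected missing persons per year: {expected_cases(MISSING_PER_YEAR, COHORT):.0f}")
print(f"Expected gun suicides per year:    {expected_cases(GUN_SUICIDES_PER_YEAR, COHORT):.0f}")
# Prints roughly 299 and 16: even under much more conservative cohort
# assumptions, eleven deaths or disappearances is what chance predicts.
```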
The rest of the outcomes are equally unsurprising and not out of the ordinary: Michael Hicks’ “undisclosed cause of death” was in reality, according to the LA County Coroner, arteriosclerotic cardiovascular disease; the CDC and the American Heart Association document that over 900,000 Americans die each year from this and related heart diseases.
Plasma physicist Nuno Loureiro was murdered by a revenge-seeking ex-classmate from the 1990s, who confessed that he’d been planning it for years and that he was envious and resentful of Loureiro’s success. Disturbing, but not mysterious.
Astronomer Carl Grillmair, a 67-year-old Caltech professor who worked on exoplanets, stellar streams, and near-earth objects, was shot to death in February 2026 on the front porch of his rural home in Antelope Valley, CA (about a hundred miles from Caltech, out in the desert beyond Los Angeles). The shooter was 29-year-old Freddy Snyder, a known criminal with a long rap sheet that included carjacking and burglary, including on Grillmair’s property months before, to which the astronomer had responded by calling the police on him (as one would rationally do). Again, troubling and tragic, but not inexplicable or evidence of a grand conspiracy.
And so on.
The Internet, especially X, is rapidly filling up with additional confusions over these alleged cabals. One Dr. John Brandenburg, a self-identified “plasma physicist” who works on “fusion energy and advanced space propulsion,” with “Phd” in his X username, told his 22.2k followers (see screenshot below) that an “antigravity researcher” named Dr. Ning Li, who was struck by a vehicle and sustained brain damage that would take her life many years later, was actually the victim of a murderous conspiracy:
Dear Friends, Like Dr. Ning Li, antigravity researcher, professor John Mack of Harvard, Pulitzer Prize winner, and a Psychiatrist researching UFO abductees, was also run over by a car. This happened in London in 2004. This must end, and whoever is responsible brought to justice.

In fact, Dr. Li died of Alzheimer’s disease in 2021 at the age of 78, following a long health decline after a 2014 automobile accident in which she was struck by a vehicle while crossing a street at the University of Alabama in Huntsville and sustained permanent brain damage. As I explained to Dr. Brandenburg in my response to his post on X:
In the US ~7,500 pedestrians are killed in traffic crashes annually. Globally, WHO reports ~1.19 million deaths/year. Before you concoct wild conspiracy theories about UFO people being run over, stop neglecting the base rate.

The tireless UFO disclosure activist and one-time government insider Lue Elizondo went on Chris Cuomo’s popular podcast to explain that UFO disclosure activists and former (and present) government insiders are being murdered, which, as I also pointed out on X (see screenshot below), is just what one would do if one didn’t actually believe one could be murdered oneself.
And in this mode, I also pointed out on X all the proponents of UFO and UAP disclosure who have not been murdered or disappeared, counterexamples that would seem to negate the premise of this so-called mystery, namely that such people are being murdered by some nefarious “they” purportedly operating in the name of some government agency or private corporation.
More generally, this phenomenon is also emblematic of what I call the fallacy of excluded exceptions, which can be illustrated by a 2x2 matrix of four cells (see figure below). Cell 1 represents our mystery, namely UFO and nuclear/military scientists who go missing or are found dead before old age. What about all the UFO and nuclear/military scientists who do not go missing or are not found dead before old age (Cell 2)? Or the non-UFO and non-nuclear/military scientists who go missing or are found dead before old age (Cell 3)? Or the non-UFO and non-nuclear/military scientists who do not go missing or are not found dead before old age (Cell 4)? Suddenly our mystery disappears. There is nothing unusual to explain in the broader context of everything else that could happen but is ignored in our focus on just the combination we’re interested in exploring.
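For readers who want to see the arithmetic, here is a minimal sketch of the same point as a chi-square test of independence on the full 2x2 table. Every count below is invented purely for illustration (none of these are real data); they are chosen so that the rate of death or disappearance is roughly the same in both rows, about 5.5 per 100,000.

```python
# Fallacy of excluded exceptions: fill in all four cells of the matrix.
# ALL COUNTS BELOW ARE INVENTED FOR ILLUSTRATION ONLY.
from scipy.stats import chi2_contingency

table = [
    [11, 199_989],          # Cell 1 and Cell 2: UFO/defense scientists
    [18_400, 334_781_600],  # Cell 3 and Cell 4: everyone else
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")
# Both rows have a rate of ~5.5 per 100,000, so p is large: with the full
# table in view, being a UFO/defense scientist is statistically independent
# of dying or disappearing. The "mystery" exists only if you stare at Cell 1.
```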
Keep this matrix of possibilities in mind as we hear about additional Cell 1 examples in the coming days and weeks, such as the one posted by Representative Anna Paulina Luna (R) on April 21, 2026 (see screenshot below), about “the tragic passing of David Wilcock,” citing the biblical passage of John 8:32, which reads “Then you will know the truth, and the truth will set you free.”
What truth is that? David Wilcock was an American paranormal writer and YouTube influencer (over 500,000 followers) deeply involved in the UFO “disclosure movement”, who suggested that he might be the reincarnation of the famed early 20th century psychic Edgar Cayce, that he is in telepathic contact with space aliens, and that reptilian aliens inhabit parts of Antarctica where they are preparing for an invasion to take over the world’s governments and banks.
Sadly, Wilcock died by suicide the morning of April 20, 2026. Although Luna suggests otherwise, according to the Boulder County Sheriff’s Office, “The emergency communications specialist who took the call suspected the caller was experiencing a mental health crisis.” Additional details noted that “officers reportedly reached around 11:02am and tried to make contact with the male who was outside his residence holding a weapon.”
Again, regretfully but necessarily, we must consider the base rate: according to the CDC nearly 50,000 Americans die by suicide every year, around half of whom were struggling with mental health issues. As such, and woefully but realistically, I think most of us can agree that if you believe you are telepathically communicating with alien beings and that they may be trying to take over the world, you may not be fully sound of mind.
No doubt more deaths and disappearances will be announced in the coming weeks as believers go digging around for more examples of Cell 1, but keep the other cells in mind, along with these other principles of critical thinking, before jumping to unwarranted conspiratorial conclusions.
Zoologist by profession, TV celebrity in the 1960s, renowned surrealist painter, and bestselling author of more than 70 books, Desmond Morris left a legacy that enlightened our species, answered taboo questions, and made audiences around the world look at behavior with renewed eyes. This is a tribute to one of the greatest observers of human behavior.
He never shied away from controversy. His first popular book, published in 1967, proclaimed on its cover what at the time was seen as offensive: that we humans are “naked apes.” The logic was compelling: if one were to place close to 400 primate species side by side, a quick visual inspection would reveal that the most conspicuous difference is the general lack of body hair in humans. Not intelligence, not language, not technology. That was the beginning of his effort to spoon-feed society a lesson in evolutionary humility: there is nothing insulting in seeing humans as animals; every species is extraordinary in its own way.
Going back to that book, in his 1979 autobiography Animal Days, Morris recounts the 30 days he took to write the whole manuscript of The Naked Ape on a typewriter, without editing—an astonishing feat by any measure. The book spread fast not only because of its provocativeness, but because the world got to experience what descriptive, entertaining, and compelling writing can do when science merges with audience-centered prose. With over 20 million copies sold, it still stands among the 100 bestselling books in history.
Desmond’s curiosity was unstoppable, and it can be traced back to his unusual rise in academic science through the study of animal behavior. His Ph.D. began with small fish, sticklebacks. While his mentor Niko Tinbergen—the man who showed him that ethology offered a path for studying animals without putting them in cages—was adamant about the importance of specializing in a single species, Desmond rebelled against that idea. That was his character. He dedicated three full years to the ten-spined stickleback while exploring variation in other species, fulfilling his tendency to be a “spreader”—to broaden his interests widely. In his postdoctoral studies he then expanded to birds, particularly small finches. By this time his basement at the university had become overcrowded with multiple species, and there was even an aviary on the department’s roof. No fewer than 84 species passed through his lab during this period at Oxford.
After leaving academia, Morris became curator of the largest collection of mammals at the renowned London Zoo, sharpening his observations across more than 300 species. His insatiable curiosity pushed him to want to know everything there was to know about every mammal. He later focused on our closest relatives, non-human primates, such as Congo—the chimpanzee he taught to paint and whose works ended up in the hands of world-class painters like Picasso and Miró. Again, non-human primates were only a pit stop before the next stage, an obvious one to him: humans.
Once The Naked Ape skyrocketed, Morris moved to Malta, where he enjoyed the pleasure of spending his earnings and living a comfortable life. There he realized something that we may better understand from the flip side: “The city is not a concrete jungle, it is a human zoo.” Under that premise, he published what could be seen as a follow-up to The Naked Ape, called The Human Zoo (1969), in which he revisits controversial topics of status, sex, and power. From this work, his commandments of dominance are priceless. He lists the behaviors that, in primate species, are associated with gaining and defending power and status, like “make changes even if no change is needed to demonstrate that you are in control” or “a leader should display his position in their demeanour.” All his work cultivated a unique view of the human animal through the lens of ethology, or through Desmond’s eyes.
Then, motivated by his book editor, Morris began the odyssey that he never finished. It started with a simple premise: a full description of the repertoire of human behavior. After a few months of work, his editor asked about his progress, and he said he was covering the eyebrows. To the editor’s surprise, he had started not from the feet but from the top of the head. That was a sign that his dedication to cataloging gestures was going to take him a lifetime, much like the compilation of the Oxford English Dictionary (OED).
Not coincidentally, Morris moved to North Oxford, into the former house of James Murray, the principal editor of the OED, as if foreshadowing his own intentions. His book originally titled Manwatching (1977), later adapted to the zeitgeist of our times as Peoplewatching (2003), is still, to this day, the most exhaustive and profound description of human behavior. I believe it offers the highest rate of insight per sentence among all the books I’ve read, and I have called it the bible of human behavior. Eight years later Morris produced another version of that project, Bodywatching (1985), this time focused on areas of the body, covering each one through biology, anatomy, culture, and behavior. For the serious human observer, these two are indispensable guides.
But Morris knew that the journey was longer than a book. The human repertoire of behaviors cannot be compressed into a trade book. He kept collecting behaviors, labeling them one by one. He had to coin names for many of them, because code-to-elbow or nose-to-forehead behaviors are not commonly described in ordinary language. His approach aimed to solve the natural ambiguity of behavior, so he used descriptive labels to avoid subjective interpretations. His encyclopedia of human actions, titled The Human Ethogram, reached at least two thousand entries by the time he decided to let it go. Now those archives sit at the University of Porto, at the Museu de História Natural e da Ciência, where at some point they may be compiled into one of those posthumous manuscripts worthy of Desmond’s legacy.
Morris’s success transcended writing, probably inspired by the admiration he held for Julian Huxley, a trailblazing biologist who broke scientific etiquette by appearing in mass media. Desmond became a celebrity-like figure with his weekly TV show Zootime. Each week he introduced audiences to different species from the London Zoo, where he worked. The anecdotes are hilarious, and his descriptions of behavior glued audiences to topics they otherwise might have ignored. He developed a charismatic presence that evolved further in his documentaries.
Over his life Morris ended up writing three autobiographies, each time adding new elements, culminating in his more than 600-page 2006 memoir, Watching. This book is as funny as a comedy, and it has the depth and texture of stories that let you enjoy and learn in equal parts. In it, Desmond shares an observational palette so rich that he successfully predicts winners of sumo fights, accidentally receives a papal blessing from Paul VI, and is mistaken for British intelligence in Moscow.
Since 2017, I have had the great good fortune to be in regular contact with Desmond Morris. We exchanged ideas, discussed a few gesture interpretations, like the elbow clapping, and he revealed that his favorite animal was the chequered elephant shrew. He kindly wrote a letter of recommendation for my Ph.D., gave me a few signed books, and invited me to dinner with his family in Ireland. I conducted one of the last interviews with him.
Desmond Morris with the author, Alan Crawley.

Over these years I asked Morris many questions. Among them was: “If you have to give a single recommendation to those interested in studying nonverbal behavior, what would it be?” Here is Desmond Morris’s insightful response (personal communication, 03/03/2021):
With body language studies, it is my impression that there is often too much abstract theorizing and semantic debate, when we should be getting out in the street conducting field studies. The question I would ask any student of human behavior is “How many hours of field observation have you done?”, not “How many theoretical papers have you written?” How many riots, bar-fights, pop concerts, boxing matches, art auctions, festivals, law courts, beach parties, military parades, religious gatherings and sporting events, have you attended as an objective, body language observer?

Desmond had in mind Tinbergen’s warning about his tendency to spread too thin across multiple problems and numerous species, a signature of his identity. That tension lived in the two sides of his personality: scientific researcher and popularizer. Those identities wrestled within him, and both appear relentlessly in his work and demeanor. For example, in Oxford Morris bought the neighboring house to accommodate his collection of more than 20,000 books. Intrigued by how many of them he had actually read, I asked. His answer was revealing:
I can’t remember the last time I read a book cover to cover.

That line reveals the tradeoff between scope and depth. Morris consumed texts across domains, ages, and styles, allowing him to create unique compilations of facts organized under a single ethological framework, something that could only have been achieved by an insatiably curious mind that pursued one question and then moved on to the next. Such an approach may increase the likelihood of stating inaccurate claims, and some people use Desmond’s mistakes as a convenient excuse to discard the rest of his ideas. That is a dishonest and unfair approach. He was a prolific well of novel ideas: where others saw laughter, he saw an evolved mechanism of tension; where Freud saw sexual fixation, Morris described behavioral relics that increase in frequency under discomfort.
Awards and prizes were not his motivation. He was never interested in being knighted, though someone of his accomplishments would have been a strong candidate for such recognition. I once asked him about this, to which he replied in his unique humorous manner:
I have made enough rude comments about the authorities and about politicians to ensure that my name is safe from that nonsense. And The Naked Ape won’t have helped.

Morris was well aware of the consequences of his depiction of the human animal. Those depictions may have reached their widest audience through his TV documentaries, like The Human Animal, a fantastic visual portrayal of human behavior across more than 40 cultures.
Desmond enjoyed his competing interests—writing and painting—which occupied his mind deeply throughout the day. In his words:
There are two Desmond Morrises, and they are quite different people. I can easily pass from one to the other, but I cannot be both at the same time. When I'm Desmond Morris the painter, I am quite different.... There is rarely any clash between the two aspects. The one helps the other. I obey the two sides of my brain alternately.

Morris’s legacy is gigantic. Beyond more than 12 books on human behavior, he produced books on the behavior of dogs, cats, horses, primates, bison, leopards, and owls. Yet his impact on surrealism was far more than a hobby. Not only were books like The Lives of Surrealists (2018) influential, but, more importantly, in 1950 his paintings were exhibited in galleries alongside Joan Miró. He was an accomplished surrealist painter and filmmaker. If you have read Dawkins’ most famous book, The Selfish Gene, you may have encountered one of his paintings, since Richard himself chose one for the cover.
Until his last days he kept painting and writing. Seen in perspective, he was an outlier who reached the highest level in two remarkably different professions through sheer excellence. And that excellence was cultivated over time, until the end.
For the past five years, he shared in his emails that he woke up each day with the desire to write and paint: a man in his late 90s who continued relentlessly to enjoy his daily work, someone who, at the age of 95, published three books in a single year. This year he was also preparing two gallery exhibitions of his paintings. That was Desmond: an unstoppable force of passion and curiosity.
Thanks, Desmond. We will continue watching for you.
In 1989, Bob Lazar told Las Vegas reporter George Knapp that he had worked at a secret facility called S4 near Area 51, where his job was to help reverse-engineer the propulsion system of a craft “not made by human hands.”
More than three decades later, despite other whistleblowers alleging the existence of such programs, Lazar remains a rare figure in claiming direct technical work on a purportedly non-human vehicle. And he is now back in the spotlight: a new documentary, S4: The Bob Lazar Story, directed by Luigi Vendittelli, was released on Amazon Prime in early April 2026, and Lazar followed it with a burst of media appearances, including with Joe Rogan, Area52, and Jesse Michels.
Lazar is a contested figure. He has claimed to have earned two master’s degrees, one in physics from MIT and the other in engineering from Caltech. Skeptics, including ufologist Stanton Friedman, reported finding no record of him at either institution and have pointed to the absence of identifiable professors or classmates who could corroborate his attendance. Friedman also cited evidence that Lazar attended Pierce Junior College in Los Angeles, which he argued was difficult to reconcile with the timeline Lazar later described. Lazar has maintained that records connected to his work were altered or removed. He also pleaded guilty in 1990 to a felony pandering charge in Nevada. Taken together, these elements have remained central to skeptical assessments of his credibility.
But beyond these biographical facts lies a deeper disagreement about how his case should be evaluated at all. Part of the friction in the Lazar debate is about what kinds of evidence people are willing—or able—to perceive. When you listen to Lazar at length, you start attending to how his claims are generated. Over time, this produces a strong impression that the account is being recalled rather than constructed. Notably, individuals who have spent extended time with Lazar without prior exposure to his story have described a similar shift: from initial skepticism to the sense that they were dealing with a person recounting, rather than constructing, an experience. For some observers, that distinction becomes difficult to ignore.
Many skeptics, however, operate with a different evidentiary filter. When claims are extraordinary, they tend to discount behavioral authenticity signals almost entirely, treating them as unreliable or irrelevant. Testimony, in this view, is flattened: people lie and misremember, and beyond that there is little to be extracted from the manner of delivery. This has the advantage of protecting against being misled by charismatic or deceptive individuals. But it also comes at a cost. It removes from consideration a set of cues that, while imperfect, are often central to how humans actually evaluate one another in real-world contexts.
So we are left with a perceptual mismatch. Where one person sees constraint, specificity, and resistance to fabrication, another sees only an unverified claim. One may register the difference between a narrative that is expanding versus bounded, while another treats both as functionally equivalent. On top of this, many skeptics place heavy weight on abstract priors—chief among them the assumption that non-human technology is so unlikely that no amount of testimonial evidence can meaningfully shift the balance. Once that prior is fixed, the rest of the evaluation becomes largely procedural.
This produces a kind of epistemic stalemate with asymmetrical risks. If behavioral signals are granted no weight, then no amount of constraint, consistency, or non-performative delivery can ever move the needle. Testimony collapses into a binary of verified or dismissed, and cases like Lazar’s are effectively decided in advance by prior assumptions. But if those signals are taken seriously, even provisionally, then the burden shifts: one can no longer dismiss the account wholesale without offering a comparably structured alternative explanation. The alternative explanations largely fall into two categories: 1) Bob Lazar fabricated the story, or 2) Bob Lazar is sincerely recounting a real experience that he fundamentally misinterpreted.
Before turning to those explanations, it is worth acknowledging that Lazar’s disputed credentials and legal history are real and relevant, and any serious assessment has to take them into account. They establish that he is not an unimpeachable witness and that elements of his biography invite skepticism. Whether they are sufficient, on their own, to resolve the case is far less obvious.
Bob Lazar is a Fabulist

Lazar’s central claim has not been proved, but several elements once dismissed as fantasy have since entered the documentary record. After Lazar gave his account to George Knapp, Area 51 was eventually acknowledged by the CIA, and federal litigation in the 1990s showed that the government was willing to invoke state-secrets doctrine and repeated presidential exemptions to shield information about the Groom Lake site. That does not prove Lazar worked on non-human craft, but it does mean one major plank of the old dismissive posture—that he had built an outlandish story around an imaginary place—has aged badly.
The same is true of the surrounding logistics and of Lazar himself. Beyond a secret base in the desert, his story concerned a tightly compartmented installation serviced through unusual access patterns, including shuttle flights out of Las Vegas. The CIA’s own history describes daily air shuttles moving personnel and cargo to the facility, and reporting from Las Vegas has since made the JANET system (or Janet Airlines, a highly classified airline operated for the United States Air Force) and its secure terminal common knowledge. Again, this proves far less than believers want. But it also proves more than skeptics used to allow. A fabulist could have been lucky once. He is harder to dismiss as a mere fabulist when elements of the practical architecture around his story keep turning out to be real.
It is also worth recalling the context in which these claims were first made. In 1989, even within UFO circles, the idea of intact craft in government possession—let alone reverse-engineering programs—sat at the fringe of an already fringe field. The involvement of the U.S. Navy in such matters was not part of the discourse at all. Whatever one ultimately makes of Lazar’s account, it did not emerge as a straightforward amplification of existing narratives.
Then there is Lazar himself. Whatever one makes of his grander claims, it is no longer serious to imply that he was simply invented out of whole cloth as a nobody pretending to have moved in scientific circles. A 1982 Los Alamos Monitor article identified him as a physicist at the Los Alamos Meson Physics Facility, years before the UFO story made him notorious. Even the skeptical archival work that has tried hardest to reduce that credential concedes the key point: Lazar was in the Los Alamos world, and the facility in question was a major user laboratory hosting large numbers of outside researchers and contractors. That does not settle what his precise status was, but it does narrow the space for the old picture of Lazar as a basement fantasist who conjured a scientific persona after the fact.
Taken together, these later confirmations vindicate enough of the external scaffolding of his story to make the pure-fabulist thesis look increasingly strained. Even the once-mocked reference to element 115 no longer belongs to the category of obvious fantasy, though its later recognition by IUPAC does not validate Lazar’s specific claims about a stable isotope or gravity propulsion. But the record increasingly undermines the idea that he spun his tale out of pure nonsense.
The most common objection to Lazar’s credibility concerns his lack of verifiable academic records, particularly his claim of having attended MIT. This is often treated as dispositive. But it only is if one assumes a normal career trajectory. Lazar has consistently maintained—publicly in broad terms, and in more detail in private conversations—that his presence in that environment was tied to recruitment into classified work. If that is even partially true, the absence of a standard paper trail is a predictable outcome. That explanation may be challenged, but it is not incoherent, and it is not obviously less plausible than the idea that an individual capable of navigating Los Alamos environments simply fabricated an MIT background without anticipating the most obvious line of scrutiny.
That is why the fabulist position now looks less like skepticism than inertia. That model asks us to believe that Lazar wrapped an elaborate falsehood around a secret aerospace world he happened, by chance or intuition, to sketch in several increasingly accurate ways before much of that world entered the public record. That is possible, but it is no longer the modest position. Too much of the story’s external scaffolding has since been independently corroborated to go on speaking as if we are dealing with a man who simply spun a science-fiction yarn out of thin air.
Bob Lazar is Sincere but Mistaken

Lazar may not be lying, this argument goes, but that does not mean he is reporting reality accurately. He may be recounting a real experience, interpreted incorrectly.
At first glance, this sounds like a reasonable position. It avoids the embarrassment of outright credulity while refusing the cheap certainty that he is simply a fraud. It lets one acknowledge the obvious fact that Lazar does not present like a conventional fabricator without having to follow that concession where it may lead.
The trouble is that this middle position is often treated as though it were self-supporting. It is not. “He believes what he is saying” has no explanatory power. It tells us something about Lazar, but almost nothing about the world. To get from there to a real account of events, one has to specify how a sincere man ended up with this particular story: a decades-long account of a highly unusual engineering environment, populated by sharply bounded details that do not behave like decorative embellishments.
A more concrete version of the “sincere but mistaken” hypothesis is sometimes proposed: that Lazar did have some level of access to classified environments, but in a limited or peripheral role—variously described as a technician, contractor, or even something as mundane as scanning badges—after which he constructed a far more elaborate narrative around fragmentary exposure. In this version, the expansion is not assumed to be deceptive, but the result of inference that gradually hardened into belief. This is, in many ways, the strongest non-fabulist alternative. It preserves sincerity, explains his familiarity with certain logistical details, and avoids the need to posit a decades-long fabrication.
But this refinement simply relocates the core difficulty. It still has to explain how limited, peripheral access could generate a highly specific, mechanically structured account of a system he would not have meaningfully interacted with. It must also explain why that account exhibits the same constraint, stability, and resistance to embellishment as a bounded recollection, rather than the looser, more adaptive structure one would expect from extrapolation. In other words, it replaces one explanatory burden with another, without clearly reducing the overall cost.
One striking thing is that Lazar describes initially drawing the ordinary conclusion. When he first saw the craft, he says, the American flag on it made him think it belonged to the U.S., a top-secret breakthrough that would explain the UFO reports he had previously dismissed. He says he did not believe in flying saucers and thought those who did were nuts. Only later did he conclude that it was not human-made. In his account, the non-human inference was something he was pulled into by the structure of the work itself.
That is already a problem for the standard middle position. It means the “misinterpretation” in question cannot be a simple matter of a UFO-minded witness projecting his prior beliefs onto an ambiguous event. Lazar’s own account begins with the conservative interpretation and moves away from it only when the setting itself stops making sense under that frame. The skeptic who grants that Lazar is sincere now has to say more than “people can be mistaken.” Of course they can. The question is: mistaken about what, exactly?
That question becomes sharper once one notices the kind of details around which his account is built. The memorable parts are not the ones a hoaxer would obviously choose. Instead of dwelling on awe, he repeatedly says the dominant feeling when coming into contact with the craft was ominous, even creepy. The emotional tone is constraining.
The same is true of the physical details. Lazar describes the inside of the craft not in grandiose terms but in awkward, almost inconvenient ones: no seams, no stylized features, the same sheen and radius of curvature everywhere, light behaving strangely inside, halogen lamps illuminating where they were aimed but failing to brighten the surrounding interior the way one would expect. Luigi Vendittelli, director of the S4 documentary that recreated the facility in a VR environment, says that when they built the set, they ran into exactly this problem: the interior remained unexpectedly dark. He presents this as one of the moments that made him feel Lazar had not simply invented a cool image but was describing a physicality that does not lend itself easily to intuitive fabrication. One need not treat that as decisive. But it is exactly the sort of thing that makes the middle position harder. The details are bounded in ways that feel discovered rather than chosen.
That distinction is central. A constructed story tends to optimize for effect and answers too many questions. Lazar’s account contains stubborn little irregularities. He says the craft turned into sky when he walked beneath it because the light bent around it, and that the weight was simply gone rather than transferred to the ground. He describes people working around a purportedly non-human craft in a surprisingly nonchalant, dusty hangar rather than in the kind of sterilized environment one might imagine from science fiction. These details raise the cost of the fallback explanation that he is sincere and simply mistaken.
We are also not in the presence of a private mythology floating free of the world. Lazar told Gene Huff first, then John Lear, and brought them out to see a Wednesday-night test flight because he had the schedule. He also describes intimidation tactics after going public: locked car doors and trunks found open, houses entered, George Knapp himself being followed. One can reject some or all of that. But once again, the middle position cannot simply wave it away with the generic proposition that sincere people can misread events. It has to say what kind of reality generates this pattern.
“He believes it” allows a skeptic to concede the very thing that gives the case its force while refusing to pay the price of that concession. But once sincerity is granted, the path to error is no longer cheap. It has to explain why Lazar’s account exhibits the structure of a constrained recollection of a specific environment, rather than that of an interpretation layered over an ambiguous experience.
In short, Lazar’s central claim—the custody and reverse-engineering of non-human craft—remains unproven, but the standard counterclaims do not carry the weight often assigned to them. Treating Lazar as a fabulist requires a level of sustained fabrication that sits uneasily with the structure of his account and its partial alignment with a once-hidden environment. Treating him as sincere but mistaken requires a chain of error that struggles to generate the specific, constrained features of the story. Neither path collapses under scrutiny, but neither settles the matter.
What remains is a less comfortable position: the case resists easy resolution, and the confidence with which it is often dismissed exceeds the explanatory work that has been done.
A feature-length documentary film, released in December 2025, has revived an oft-touted claim of strong evidence of the supernatural. The Case For Miracles1 is based on the 2018 book of the same title by Christian evangelist Lee Strobel.2 Since the film has been criticized for being long on drama and short on evidence, I decided to look for documentation in the book. Unfortunately, when it comes to presenting specific cases of miraculous cures, this is limited to a single chapter, titled “A Tide of Miracles.”3
Among the dramatic cases cited by Strobel is that of a woman identified only as “Barbara,” who was suffering from multiple sclerosis to the point that she had been confined to bed for seven years. She heard a voice telling her to rise and walk, which she did. She was sure this was the voice of Jesus. The documentation of this miracle, however, like that of the other claims in the chapter, is less than impressive, consisting entirely of testimonials; nor do the endnotes to that chapter provide any medical documentation.4
Among the problems with testimonials as accurate histories are the imprecision of human memory, the tendency of shared narratives to arise among groups witnessing the same event, and bias on the part of witnesses. For example, consider the testimony of Tim Ley and members of his family regarding the appearance of the Phoenix Lights (thought by some to be UFOs) in 1997. He, along with his wife Bobbi, his son Hal, and his grandson Damien Turnidge, initially saw them as five lights in an arc shape. They soon realized the lights were moving toward them. As they did so, over the next ten minutes the lights resolved into a V shape similar to a carpenter’s square, or like two sides of an equilateral triangle. They, like other witnesses, reported a huge object, discernible not only by five lights on its leading edge, but also because it blotted out stars in the night sky as it passed silently over the city. Soon, the object appeared to be coming right down the street where they lived, only about 100 to 150 feet above them, traveling so slowly it appeared to hover.
Fortunately, in addition to the testimony of many witnesses, we have videos taken of the 1997 incident,5 which show a series of lights appearing in the sky, one by one, then winking out one at a time. In one of the videos, the man filming it exclaims, “Another one just showed up!” In that video the first three lights form a line, then a fourth appears in such a position as to make a shallow angle. In another video, this one without sound, one light appears, then another, then more, up to five, then six lights. These are first in shallow “V” shape, then in a more or less straight line. Then the lights wink out, one by one. None of the videos shows a solid V-shaped object blotting out the stars as it moves overhead. In fact, in most of them the lights simply hover, rather than moving in any discernible direction.6 It would appear that much of what witnesses saw resulted from the perceptual centers of their brains automatically filling in the spaces between the lights to create a whole object.
The images of the lights in these videos support the claim by the Air Force that the “Phoenix Lights” were not alien spaceships but military flares dropped by an Air Force reserve unit on a training mission. These flares are used in combat to illuminate a battlefield at night. As such, they were suspended from parachutes, which allowed them to hover for some time. They were dropped west of the Estrella Mountains, which lie west of Phoenix. They seemed to suddenly wink out as they slowly drifted downward and their images were blocked by the darkened, hence invisible, mountains.
More to the point of miracle cures, consider the claim that the Indian mystic and holy man, Sathya Sai Baba, raised a devotee of his, Walter Cowan, from the dead on Christmas 1971. The narrative of this miraculous healing begins with Walter Cowan and his wife Elsie, followers of Sathya Sai Baba, arriving in Madras, India, on December 23, 1971. Walter, an elderly man, suffered a massive heart attack on Christmas Eve and was taken to a hospital, where he died. Then, on Christmas Day, Sai Baba entered the hospital room where Mr. Cowan’s body lay. After a time, he left. Then, friends of Cowan’s arrived and found him alive. This miracle was attested to by a medical doctor, Dr. John Hislop, whose wife reported:
When we reached the hospital with the vibhuti, Mrs. Cowan said, “Walter took a very bad turn just a little while ago. I thought he was dead, and I was terrified. I at once called Baba in a loud voice. Now, Walter seems a little improved. When I called Baba I felt his presence at once.”7

The validity of this dramatic testimony is somewhat undone by Elsie’s statement that she thought her husband was dead and that he was then “a little improved.” In any case, she, along with Dr. Hislop and his wife, were devotees of Sai Baba, rendering the objectivity of their testimonies suspect.
Since I wasn’t able to find more rigorous evidence than testimonies in Strobel’s book, I decided to look online for medical reports of miraculous healing, specifically healing attributed to the effect of intercessory prayer. In the medical journal Heliyon I found a 2023 article titled “The remote intercessory prayer, during the clinical evolution of patients with COVID-19, randomized double-blind clinical trial.”8 The article states the objective of the study as follows:
The objective of this study was to evaluate the effect of intercessory prayer performed by a group of spiritual leaders on the health outcomes of hospitalized patients with Novel Coronavirus (COVID-19) infection, specifically focusing on mortality and hospitalization rates. Design: This was a double-blinded, controlled, and randomized trial conducted at a private hospital in São Paulo, Brazil.

Here are the results of the study:
A total of 199 participants were randomly assigned to the groups. The primary outcome, in-hospital mortality, occurred in 8 out of 100 (8.0 percent) patients in the intercessory prayer group and 8 out of 99 (8.1 percent) patients in the control group […] The study found no evidence of an effect of intercessory prayer on the primary outcome of mortality or on the secondary outcomes of hospitalization time, ICU time, and mechanical ventilation time.

In another study, doctors measured the healing effects of intercessory prayer on patients recovering from cardiac bypass surgery:
Patients at 6 U.S. hospitals were randomly assigned to 1 of 3 groups: 604 received intercessory prayer after being informed that they may or may not receive prayer; 597 did not receive intercessory prayer also after being informed that they may or may not receive prayer; and 601 received intercessory prayer after being informed they would receive prayer. Intercessory prayer was provided for 14 days.9

The study yielded the following results and conclusions:
In the 2 groups uncertain about receiving intercessory prayer, complications occurred in 52 percent (315/604) of patients who received intercessory prayer versus 51 percent (304/597) of those who did not […] Complications occurred in 59 percent (352/601) of patients certain of receiving intercessory prayer compared with the 52 percent (315/604) of those uncertain of receiving intercessory prayer […] Major events and 30-day mortality were similar across the 3 groups.

Conclusions:
Intercessory prayer itself had no effect on complication-free recovery […] but certainty of receiving intercessory prayer was associated with a higher incidence of complications.

Another clinical double-blind study gave more positive results,10 in which intercessory prayers were made by a group that did not know the patient for whom they were praying, nor did any of the patients know whether or not they were the subjects of intercessory prayers. The researchers concluded that remote, intercessory prayer was associated with lower CCU scores (a metric used to evaluate severity of cardiac illness), suggesting that prayer may be an effective adjunct to standard medical care. While this study suggested that intercessory prayer aided recovery, the benefits gained were far from dramatic:
Using the unweighted MAHI-CCU score, which simply counted elements in the original scoring system without assigning point values, the prayer group had 10 percent fewer elements […] than the usual care group. There were no statistically significant differences between groups for any individual component of the MAHI-CCU score.

While a ten percent improvement sounds good, it hardly equals Strobel’s claimed miracle case of the woman with multiple sclerosis, bedridden for seven years, suddenly walking.
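It is also worth checking the reported proportions directly. Here is a minimal sketch of a standard two-proportion z-test applied to the bypass-surgery figures quoted above; it uses only Python’s standard library, so the p-values are normal approximations and may differ slightly from the published analysis.

```python
# Two-proportion z-test on the bypass-surgery complication rates quoted above.
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p) for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p

# Prayer vs. no prayer (both groups uncertain): 315/604 vs. 304/597.
z, p = two_proportion_z(315, 604, 304, 597)
print(f"prayer effect:    z = {z:.2f}, p = {p:.2f}")   # ~0.42, ~0.67: no effect

# Certain of prayer vs. uncertain with prayer: 352/601 vs. 315/604.
z, p = two_proportion_z(352, 601, 315, 604)
print(f"certainty effect: z = {z:.2f}, p = {p:.3f}")   # ~2.24, ~0.025
```

On these numbers, the only difference that clears conventional significance runs the wrong way for the miracle hypothesis: patients who knew they were being prayed for did worse.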
Far more dramatic and positive results occurred in a notable Dutch study on the efficacy of intercessory prayer as an instrument of healing: “A Dutch Study of Remarkable Recoveries After Prayer: How to Deal with Uncertainties of Explanation.”11 The study encompasses in-depth interviews of 14 people selected from a group of 27 cases, which were evaluated by a medical assessment team at the Amsterdam University Medical Center. Each of the participants had experienced a remarkable recovery immediately after, or even during, intercessory prayer sessions. So, is this evidence of miraculous, supernatural healing? Not necessarily.
The article begins with a description of one of these healings, experienced by a woman named Julia who was diagnosed in 1990 with post-traumatic dystrophy, also known as Complex Regional Pain Syndrome (CRPS). She was wheelchair-bound due to intense pain. In 2007, after 17 years of suffering, she and her husband took part in a prayer healing session led by a well-known Dutch evangelist. After the session, Julia stood up and started walking without a trace of pain. She was still free of pain 15 years later, when the study was conducted.
Julia’s CRPS began as acute pain caused by an injury, pain that persisted long after the injury had healed. Among the causes of this syndrome are psychological factors and a neurologically triggered autoimmune response.12 In autoimmune disorders, the immune system goes from attacking foreign invaders, such as viruses and bacteria, to attacking the person’s own body. Other patients in the study also suffered from autoimmune disorders, among them muscular dystrophy, psoriatic arthritis, ulcerative colitis, and Crohn’s Disease. Some of the patients also suffered from purely psychological problems, such as anorexia nervosa and alcoholism.
All of these diseases can be induced by malfunctioning of the nervous system. This is not to say these disorders are all in the patients’ heads. However, the effects of emotions or psychological states on the brain (such as taking part in a prayer session and states of belief) can result in the transmission of healing by way of the nervous system acting on the body through the endocrine system.
Three other patients suffered from brain injuries or malfunction. One patient had Parkinson’s Disease, which is caused by the failure of certain brain cells to produce dopamine. Another had suffered a stroke, and a third suffered from deafness. While the healing of these problems cannot be so simply assigned to the effect of a psychological state on the nervous system and the transmission of these effects to the body by way of the endocrine system, they all do involve central nervous system functioning, which could be affected by an induced emotional state.
Only four of the patients suffered from complaints seemingly separate from the nervous system. One suffered from iatrogenic aortic dissection—an injury or scarring suffered during a surgical procedure, such as the insertion of a stent. This is usually treated with beta blockers. These medications block adrenaline, thus relaxing the heart and easing stress on the aorta. So, a changed psychological state could likewise ease this stress.
Another patient suffered from pelvic instability, which often results from pregnancy and is caused by a weakening of the ligaments at the pelvic floor. This is a basin-shaped structure, consisting of the sacrum, pubis, and hip bones, all held together by ligaments. When these ligaments are overstretched or injured, the bones of the pelvic floor move excessively during physical activities, resulting in pain in the groin, hip, or back. This makes even simple activities difficult and painful. This condition is usually treated by various stretching exercises.
Another patient suffered from drug-induced hepatitis, an inflammation of the liver caused by various medications and treated by simply stopping their use. Finally, one patient suffered from rotator cuff rupture. While this is caused by traumatic injury, its protracted pain results from inflammation. Thus, just as in Julia’s case, all four of these disorders involve chronic inflammation.
There are three problems with imputing these dramatic healings to divine intervention. One is that they all seem to stem, one way or another, from either chronic pain or nervous system dysfunction. We do not see in them people being healed of drastic infectious diseases, such as COVID-19. Nor do any of them involve permanent remission of metastasizing cancers.
Another problem is that of patient involvement. In both the study involving patients with COVID-19 and the one dealing with patients recovering from cardiac bypass surgery, the intercessory prayers were remote for the purposes of performing objective double-blind studies. Particularly in the case of Julia’s healing, the patients in the Dutch study were actively involved in the prayer sessions, thus clouding any clear evidence of cause and effect. Finally, it is too far a leap to extrapolate divine intervention from a few healings we can’t explain.
One last problem with seemingly miraculous cures as evidence of the Judeo-Christian God is that such a deity would seem to be acting in a rather haphazard manner, healing some people here and there while not bothering to intervene in horrific atrocities such as the Holocaust or the Armenian genocide. In the latter event, the Armenians were targeted specifically because they were Christians.13 Between one and two million of them perished at the hands of the Turks and others of their Muslim neighbors.
Thus, these occasional, possibly miraculous, healings hardly constitute proof of the God of the Bible.
I am a firm believer in miracles—a confession that will be immediately off-putting to readers of Skeptic. Below I will offer a definition of miracles and attempt to justify belief in them, but for the moment I will focus on a fundamental distinction between two modes of causality. I call these the because-of causal mode and the so-that causal mode. We can think of these as two ways of explaining an event.
Because-of causal mode example: a man walks into a bank and we ask for an explanation. One explanation tells us about the neurons firing in the motor cortex of the brain that excited a cascade of additional neuron firings, and then muscle flexing. And, of course, there was the mass of the body, the friction of shoes against the sidewalk, the heft and leverage of the doorway, and so on. This mechanical explanation makes the event intelligible; it tells us how the event took place. It took place because of all these enabling factors.
So-that causal mode example: There’s another way of making the event intelligible, and that is to explain the purpose of the man’s actions—he went into the bank so-that he could deposit some money. This is a teleological explanation.
The scientific because-of explanation is concerned with immediate past events—facts about what things happened and theories about how they happened. Meanwhile, teleological explanations focus on future outcomes involving values. A teleological explanation tells us that an agent is acting for the sake of bringing about an intended state of affairs—causality guided by purpose. All living systems act with purpose; they seek beneficial outcomes; their behaviors are goal-directed, functional. They are about something.
Here we have two modes of causal explanation—both claiming to render events intelligible, but in different ways. There has been a long tradition of attempts to conciliate these two modes of causality, a tradition that I will now grossly oversimplify. Some people say that the so-that mode of causality is a mere illusion, or at best, a convenient pretense. They believe there is only one kind of causality, and that all genuine explanations can be reduced to the logic of because-of causality.
Others believe that teleological explanations are real, insisting that the universe has some sort of inherent or endowed purpose—it has a point, it is about something, for something. The entire universe behaves in the ways it does so-that an ultimate purpose in creation might be achieved. In one approach because-of causality is ultimately real and so-that causality is a fantasy. In the other approach so-that causality is ultimately real and the because-of causality of science is merely an instrument for working out an ultimate cosmic purpose.
Here’s the big question prompted by our encounter with contemporary science: is the grand epic of cosmic evolution in some way driven or guided so-that some destiny might be achieved, or is the cosmos, despite its awesome splendors, ultimately void of genuine meaning or purpose? As Steven Weinberg famously said, “the more we know about the universe the more it appears to be pointless.” There are difficulties with each of these views. If you claim there is genuine meaning somehow inherent in the cosmos, then you must tell us what it is and why we should accept it. But if the claim is that teleological dynamics are not genuinely real, then you are left with the problem of convincing us that meanings (e.g., values, expectations, the force of will) fail to have genuinely real consequences.
I wish to offer a third option, one that avoids both problems. This view says that all the elements of so-that causality (goal-directed behavior) are genuinely real phenomena, but they are recent and unintended emergents of because-of dynamics.
We might frame this emergence view in terms of two different perspectives on the nature of matter: the grunge theory and the glitz theory of matter. The grunge theory says that matter isn’t much—it’s just some sort of vague or chaotic and uninteresting stuff that becomes interesting only when the laws of nature or the will of God whip it into shape. So the grunge theory appears to assign matter to one domain, while relegating both natural law and divine purpose to another.
I want to reject the dualism of this view in favor of what I’m calling the glitz theory of matter, which holds that there are no independently real laws of nature. What we have are simply the properties of matter. A law of nature is just something we formulate as we observe regularities in the properties of matter. If we take this view, then we can see that matter is not boring grunge but wonderfully interesting and creative stuff. What makes it interesting is that when certain properties of matter interact with other properties of matter, we find increasing probabilities that novel and unanticipated properties will emerge spontaneously.
Here’s a simple illustration: Oxygen and hydrogen atoms have distinctive properties, and when they interact they can produce water molecules, which present new properties not found in either oxygen or hydrogen. And then the interaction of water properties with other properties of matter will increase the probability of even more novel properties. And, as proposed above, the emergence of new properties of matter may result in the formulation of completely new laws of nature. All of this follows the straightforward logic of because-of causality. As interactions continue the probability of getting large molecules will increase, and when you have interactions between large molecules, then the probability of emergent living systems will increase dramatically. And as living creatures arrive on the scene, so too does the visionary logic of so-that causality. In a fundamental sense, the story of creation is a story about shifting probabilities and how these result in the various entities, events, properties and relations that make up the natural world.
I want to suggest that the goal-directed causal dynamics of teleology amounts to an emergent property of living systems. Before the appearance of living systems causality was limited to because-of dynamics, but with life comes purpose and value. Now agency enters the picture and things begin to matter. Living systems behave in certain ways so-that they will survive and reproduce. Molecules don’t do this. Molecules are created and constrained entirely by the care-less dynamics of because-of causality. But when molecules get really complex and interactive then it becomes more and more probable that they will gang up and behave according to a completely new mode of causality. This does not mean that because-of causality becomes overruled or deactivated. It means only that the because-of dynamics have called into play additional sets of anticipatory, goal-directed algorithms.
Purposeful behavior and meaningfulness are real phenomena, not illusory; but they are also recent (~4 billion years ago) and localized (on Earth, at least). This suggests that the cosmos itself is essentially absurd—it has no meaning; it is not guided or coaxed by any agent or purpose. It is not about anything. However, without question, there are pockets of genuine meaning and purpose within the cosmos, as we are here to attest. The cosmic bus isn’t going anywhere that matters. It has no driver and no destination. But there are living beings on the bus, and they hustle here and there with all kinds of determination. My life, your life, all our lives, can be rich and full of meaning without having to claim they have cosmic significance. Life can be worth living even if we are not the point of some cosmic drama. The thing that impresses me most about the cosmic drama is that a meaningless universe has inadvertently, accidentally and aimlessly created the conditions for meaningfulness. This mysterious and wonderfully ironic accident—dare I say, “miracle”?—takes my breath away.
By “miracle” I do not mean an impossible event occurring at the behest of an all-powerful supernatural agent. I mean only this: any event, the occurrence of which is considered to be so radically improbable as to be virtually impossible. (I am excluding logically impossible events from discussion because they have a probability of zero—even gods cannot square circles). A miracle is an event having a probability value so close to zero that you cannot imagine any conditions under which it might occur. Given these terms, it might be said with good reason that many miracles have occurred in our universe—it’s just that they never occur before their time.
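One way to make this definition concrete—a hedged formalization in my own notation, not the author’s—is to index the probability to the knowledge and conditions available at a given time, which also captures why miracles “never occur before their time”:

```latex
% A sketch of the essay's definition, under my assumptions:
% E is an event, K_t is the background knowledge and enabling conditions
% at time t, and \varepsilon is a threshold so small that no imaginable
% conditions under K_t would produce E. Logically impossible events
% (P = 0) are excluded, as the author stipulates.
\mathrm{Miracle}_t(E) \iff 0 < P(E \mid K_t) < \varepsilon
% "Miracles never occur before their time": the same E can later be
% ordinary, since P(E \mid K_{t'}) may be large for t' > t once the
% enabling conditions (stars, molecules, cells) have emerged.
```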
A thought experiment might help to clarify this. Suppose we place ourselves backward in time to some point immediately after the primordial Big Bang, when the universe was nothing but a raging inferno (no quarks, no atoms, just pure radiation) and consider the prospect of a supernova. Nothing that might have been known of the natural world at the time could possibly predict or explain the formation of stars, not to mention their fusion and expulsion of atoms. The very idea of such events would be considered so improbable as to be preposterous, impossible, and contrary to nature.
Or, let us go back a mere four billion years. Again, at that point we would be completely incredulous if faced with the notion that billions of tiny objects would soon be moving about on our young planet and behaving in complex patterns that defy all that could possibly be known at the time about the natural order of things. And yet, lo and behold, living beings emerged, not because of some magic wand, and not because of necessity, but rather because a countless series of unpredictable probability-enhancing events brought forth the enabling conditions.
We have the meaning-bearing lives we do because they were made incrementally less improbable by the epic events of cosmic evolution, whereby matter was distilled out of radiant energy, segregated into galaxies, collapsed into stars, fused into atoms, swirled into planets, spliced into molecules, captured into cells, mutated into species, compromised into ecosystems, provoked into thought, and cajoled into cultures. Surely, there is nothing intellectually shameful about embracing the staggering beauty and the humbling fortuity of these events as … miraculous.
Telegony is a long-discredited concept of sexual heredity that has been making a surprising comeback in recent years—particularly within digital filter bubbles, right-wing esoteric milieus, and so-called energy coaching scenes. But what does this tongue-twisting term actually mean?
Classical philologists will recognize Telegony as the title of a lost Greek epic recounting the story of Telegonus, the son of Odysseus and the sorceress Circe.1 This rare literary reference, however, has little to do with the way the term is used today.
In scientific-historical terms, telegony refers to the former belief that a woman’s previous sexual partner—often assumed to be the first—could permanently influence her body and thereby affect the traits of children conceived later with different partners. One dictionary definition calls it “a former belief that a sire can influence the characteristics of the progeny of the female parent by subsequent mates.”2
Derived from the Greek tēle (distant) and goneia (procreation), telegony literally means “remote reproduction.” According to this notion, an earlier partner leaves a lasting biological imprint that shapes a woman’s health and the genetic makeup of future offspring—even when those children are fathered by someone else.
This assumption has been decisively refuted for more than a century. Since the formulation of Mendel’s laws of inheritance, modern genetics has established beyond doubt that only the biological parents contribute to a child’s genetic constitution.3 Telegony has therefore long been classified as a pseudoscientific myth.
Curiously, contemporary dictionaries still cite prominent media outlets—Time, Newsweek, and The Guardian—as sources that allegedly support or discuss telegony. A closer examination, however, reveals persistent misinterpretations.
Both Time and Newsweek claim that Aristotle defended telegony.4 Not so. While Aristotle wrote extensively on biology and reproduction, his treatise, De generatione animalium, does not propose that former sexual partners influence future offspring. Instead, he advanced a speculative model in which male semen supplies form while the female body provides matter.5 This reflects a metaphysical conception of gender—associating masculinity with form and intellect, femininity with substance and passivity—rather than an empirical theory of heredity.
The remaining references stem from The Guardian and are often cited in sensational headlines.6 These articles report on field studies by Australian researchers suggesting that previous mates might influence offspring size.7 Crucially, however, the observed effect concerned houseflies only. What headlines obscure—but the articles themselves clarify—is that these findings have no relevance for mammals, let alone humans.
From Discredited Biology to Political Myth
Although Mendel’s laws relegated telegony to scientific error by the early twentieth century, ideas of genetic “imprinting” did not disappear entirely. They resurfaced in ideological form within National Socialist racial doctrine—though not under the explicit label of telegony.
The Nuremberg Laws did not claim that a woman’s first sexual partner permanently affected her later offspring. Yet the underlying logic of “Aryan bloodlines” and the notion of racial defilement through sexual contact relied on structurally similar assumptions: that sexual encounters could transmit lasting biological or moral contamination.8 Political theorists have long noted that myths become politicized when they resonate with prevailing cultural anxieties—whether about heredity, purity, or social order.
This recursive history did not end with the twentieth century. The contemporary revival of telegony occurs in milieus that generally reject any association with historical racism. Nevertheless, similar narrative patterns reappear—now reframed in spiritual, esoteric, or pseudotherapeutic language.
In October 2025, these developments reached a broader public audience. At a Skeptic Awards ceremony in Vienna, a European provider of so-called “telegony erasure” services placed third in a public vote for the most unscientific claim of the year.9 The Berlin-based proponent advertised the ability to remove alleged energetic imprints of former sexual partners from a person’s DNA through nonmedical “energetic healing,” and claimed to have trained a network of practitioners across Germany, Austria, and Switzerland.
Publicly available material reveals striking similarities across these offerings. Multiple providers use nearly identical language, concepts, and website structures when promoting telegony deletion services, suggesting not isolated belief but a loosely organized commercial ecosystem.
The ideological references invoked by these providers are revealing. Alongside esoteric concepts, they cite the so-called Rita Laws and Slavic-Aryan Vedas as foundational sources.10 These texts are largely dismissed within Slavic studies as modern fabrications, likely originating in the twentieth century. Today, they are frequently employed within strands of Slavic neopaganism (Rodnoverie) to mythologize ethnonationalist ideas such as hereditary purity and ancestral obligation—claims devoid of medical or historical foundation.11
In this context, the Anastasia movement also appears. Based on novels by Russian author Vladimir Megre, the movement centers on a fictional Siberian healer and promotes a social utopia grounded in “natural” living, ancestral land, and hereditary harmony.12 Telegony-like ideas—particularly notions of female purity, bodily contamination, and transgenerational burden—play a central role.13 Sect-monitoring bodies in several European countries have classified parts of the movement as sectarian and, in some cases, as promoting antisemitic and ethnonationalist motifs.
These environments often overlap with right-wing esotericism, purity cultures, and manosphere-related discourses. Blogs and forums within these spheres repeatedly—and incorrectly—reject Mendelian genetics, misattribute claims to Aristotle, and revive essentialist gender models in which women are framed as permanently passive and subordinate to male agency. What emerges is not a revival of science, but a repackaging of myth—adapted to digital platforms and marketed as personal transformation.
The Demand Behind the Myth
When a long-disproved concept resurfaces despite overwhelming refutation, a psychological question arises: Why do people adopt the myth rather than the evidence? The revival of telegony is driven by several overlapping dynamics.
Within Anastasia-related narratives, telegony is embedded in a closed worldview that promotes rigid gender hierarchies.14 Men are portrayed as active lineage bearers, women as passive vessels and spiritual caretakers. Within this framework, the idea that a woman is permanently “imprinted” by her first sexual partner functions as a mechanism of control, naturalizing female subordination.
Comparable patterns appear in manosphere-related online environments, where telegony is framed polemically as pseudobiological justification for moral judgments about women’s sexuality. In these filter bubbles, reductive gender stereotypes dominate.15
By contrast, telegony’s resonance in alternative medicine and energy-healing scenes follows a different logic. Here, the appeal lies less in authoritarian gender ideology than in the promise of liberation from perceived constraints of conventional medicine. Audiences range from curious experimentalists to resolute opponents of scientific institutions.16
Across these contexts, however, a more general motive may be discerned. The wish to “remove” traces of former sexual partners may reflect dissatisfaction with experiences of medicine and intimacy. Many people long for healthcare that feels meaningful rather than bureaucratic, and for sexuality that carries symbolic weight beyond the purely physical.17
Against this backdrop, telegony can appear to offer something else: the promise that sexual encounters matter, that they leave traces, that intimacy has depth and consequence. This emotional appeal helps explain why myths such as telegony persist despite scientific refutation.
Telegony’s modern revival is not a scientific rediscovery but a cultural repetition—a myth repackaged to meet contemporary anxieties about sexuality, identity, and control. Recognizing this pattern is essential to distinguishing legitimate meaning-making from the misuse of discredited science.
Ostensibly, the reasons Donald Trump and his administration (particularly Secretary of War Pete Hegseth) went to war with Iran were to respond to the Iranian leadership’s brutal suppression of Iranian protesters, to put a stop to the activities of Iran’s network of proxy groups throughout the Middle East, and to destroy Iran’s ability to create a nuclear arsenal.1 President Trump specifically stated (emphasis in the original):
(…) if we didn’t do what we’re doing right now, you would have had a nuclear war, and they would have taken out many countries.2
He continued:
The regime already had missiles capable of hitting Europe and our bases, both local and overseas, and would soon have had missiles capable of reaching our beautiful America.3
Since then, the Trump administration has added “enriched uranium” as another reason to invade.
Iran’s religiously based autocratic regime has indeed brutally suppressed peaceful protest and does support a considerable number of violent proxies in the Middle East. However, there appears to be little or no support for the president’s assertion that Iran has a viable nuclear weapons program. Indeed, Trump himself previously stated that U.S. strikes on Iran’s nuclear facilities in June 2025 had “obliterated” that nation’s nuclear weapons program.4
So, if Iran’s military capabilities aren’t the rationale for the Trump administration’s war on Iran, did the administration prosecute this war to help pro-democracy groups in Iran bring down that country’s dictatorial regime? Apparently not. War Secretary Pete Hegseth said at a March 2 Pentagon press briefing, “This is not a so-called regime-change war, but the regime sure did change, and the world is better off for it.”5
That’s not quite correct. Iran’s new leader, Mojtaba Khamenei—son of Ayatollah Ali Khamenei, Iran’s previous religious and political leader, who was recently killed in a U.S. air strike—isn’t likely to turn Iran into a secular democracy. U.S. air strikes have, if anything, hardened the anti-western, anti-democracy stance of the Iranian leadership.
So, if the United States isn’t intent on democratizing Iran, and Iran’s military capabilities aren’t an issue, what is our government’s motivation for attacking Iran, even bringing it to its knees in what President Trump characterized as “unconditional surrender”? While Trump’s motives may be a bit murky and unfocused, those of Secretary Hegseth are not.
Hegseth sports on his chest, among his many other tattoos, a Jerusalem cross—a favored emblem of the medieval crusaders. The author of the 2020 book American Crusade, he told CBS reporter Major Garrett: “I mean, obviously, we’re fighting religious fanatics who seek a nuclear capability in order for some religious Armageddon.”6 Troops, he later added, “need a connection with their almighty God in these moments.” A couple of days later, not long after returning from a dignified transfer of soldiers killed in action, Hegseth quoted Psalm 144 at a Pentagon press conference: “Blessed be the Lord, my rock, who trains my hands for war and my fingers for battle.”
This view—that we are involved in a holy war against Islam—is not Hegseth’s alone. The Military Religious Freedom Foundation (MRFF) has received over 110 complaints from enlisted personnel that their officers, referencing the Book of Revelation, have been essentially preaching to them, telling them this war was part of a divine plan. In one such complaint, a noncommissioned officer (NCO) explained that his commander even said President Trump was divinely anointed to carry out this plan: “This morning our commander opened up the combat readiness status briefing by urging us to not be ‘afraid’ as to what is happening with our combat operations in Iran right now,” the NCO wrote. “He said that ‘President Trump has been anointed by Jesus to light the signal fire in Iran to cause Armageddon and mark his return to Earth,’” the NCO continued. “He had a big grin on his face when he said all of this which made his message seem even more crazy.”7
This message reflects Hegseth’s own rhetoric, as expressed at a recent Pentagon Prayer Service (emphasis added):
Give them wisdom in every decision, endurance for the trial ahead, unbreakable unity, and overwhelming violence of action against those who deserve no mercy.8
One major source of evangelical Christian bias among officers in the military is the Air Force Academy. Evangelical Christian proselytizing and pressure to adhere to fundamentalist end-times rhetoric have long been a problem at the Academy. Consider this 2007 news item:
Three faculty members from United States Air Force Academy (USAFA) in Colorado Springs, Colorado—one of whom is also a former cadet—have gone public today with their criticisms of evangelical Christian proselytizing at the USAFA. They are joined by another former cadet now serving in Iraq. One faculty member has been reassigned to the Air Command and Staff College at Maxwell Air Force Base in Alabama.9
This is one of several news items I found reporting on this problem during the 2000s. Since I was unable to find any recent news stories on the present state of affairs at the Academy, I called the Military Religious Freedom Foundation and was privileged to speak with Michael Weinstein, founder and president of MRFF. I asked him whether, given the congressional scrutiny of the Air Force Academy’s religious policies, the Academy had reformed with respect to its religious bias. He told me that, unfortunately, the problem of evangelical Christian proselytizing was now worse than ever.10
Among the many instances of religious coercion posted on MRFF’s Air Force Academy “Wall of Shame” is the 2022 incident in which a training day was scheduled on Yom Kippur, perhaps the most solemn of Jewish religious holidays (emphasis in the original):
In its latest slap in the face to Jewish cadets, the ever-religious-diversity-challenged Air Force Academy this year scheduled its “Commandant’s Challenge” on October 5, perfectly timed to fall right smack on Yom Kippur, the most solemn of all Jewish holy days, forcing Jewish cadets to choose between their religion and joining their much-preferred Christian counterparts in the semester’s most important training day.11
This would seem to be an obvious violation of the separation of church and state. However, when the Air Force Academy invited the highly religious former Housing and Urban Development Secretary Ben Carson to speak, he answered a cadet’s question about the separation of church and state as follows:
[God] is the reason that our nation excelled the way that it does. And those people that like to criticize America—criticize people in America—and always talking about separation of church and state, which is not in the Constitution, by the way—do they realize that our founding document, the Declaration of Independence, talks about certain unalienable rights given to us by our creator, a.k.a. God—do they realize that the Pledge of Allegiance to our flag says we are one nation under God—in many courtrooms, on the wall, it says ‘In God we Trust’—every coin in our pocket, every bill in our wallet says ‘In God we Trust.’ So, if it’s in our founding documents, it’s in our Pledge, it’s on our courts, it’s on our money, but we’re not supposed to talk about it. What in the world is that? In medicine we call it schizophrenia.12
While “In God We Trust” is engraved on our coins, and while “under God” was inserted into the Pledge of Allegiance in the 1950s, this hardly constitutes the imposition of a state religion. In any case, Carson was wrong in saying separation of church and state is not in the Constitution. The First Amendment, possibly the most important portion of the Bill of Rights, opens with a prohibition against government involvement in religion:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Further erosion of the separation of church and state may be found at the Air Force Academy, as evidenced by the recent appointment of Erika Kirk, conservative activist and widow of Charlie Kirk, to the academy’s Board of Visitors. A recent news report noted that this appointment is in keeping with Secretary Hegseth’s framing of the current war in terms of a Christian end-times struggle between good and evil:
Records from the United States Air Force Academy’s oversight board show leaders dismantling diversity programs and reviewing curriculum as the board embraces what critics call a concerning ideological turn toward Christian nationalism and prepares to seat conservative activist Erika Kirk.13
The rhetoric voiced above by a military commander to his troops is ominous since it brings to mind the specter of nuclear war. The coupling of war-making with religious dogma also dredges up memories of religious wars in the past, culminating in the Thirty Years War, and the creation of religious states such as Savonarola’s Florence, Calvin’s Geneva, and Oliver Cromwell’s England. Our more secular society grew out of the Enlightenment of the 18th century, itself engendered in reaction to the excesses of these religious wars and religious states.
Hegseth, in contrast, sees our nation not as one founded on the principles of the Enlightenment, but rather as a specifically Christian nation:
“America was founded as a Christian nation,” he said at a recent National Prayer Breakfast. “It remains a Christian nation in our DNA, if we can keep it,” he added, splicing some religion onto a famous Benjamin Franklin quip about whether the US was a republic or a monarchy.14
So, was America founded as a Christian nation? Not according to the second president, John Adams. The Treaty of Tripoli—negotiated in 1796, while Adams was vice president under George Washington, to secure commercial shipping rights and to protect American ships in the Mediterranean from the Barbary pirates, and signed by Adams as president in 1797—declares:
As the government of the United States of America is not in any sense founded on the Christian Religion,—as it has in itself no character of enmity against the laws, religion or tranquility of Musselmen [Muslims],—and as the said States never have entered into any war or act of hostility against any Mehomitan nation, it is declared by the parties that no pretext arising from religious opinions shall ever produce an interruption of the harmony existing between the two countries.15
Hegseth is heavily influenced by Douglas Wilson, a conservative theologian and Christian Nationalist—one who advocates for Christian dominance over government and society. The sort of Christianity Wilson advocates is something few American Christians today would recognize as what they believe.16 Hegseth’s views would also seem to derive from the (now discredited) end-times scenario proposed by the late Hal Lindsey, which involved the building of the Third Temple, elucidated in a 2015 report from one of his websites:
Unbelieving religious Jews will rebuild the false temple and offer false animal sacrifices during the first part of the Tribulation. (Daniel 11:31). Then the “man of lawlessness”, the Antichrist, will desecrate that false temple of God by taking his seat in the Holy of Holies, displaying himself as being God. (2 Thessalonians 2:3-4 NASB) That event will start the last half of the Tribulation. That will start 3½ years of the greatest horrors yet known to mankind. It will end with the visible Coming of THE ALMIGHTY, the Lord Jesus Christ. He will rule for 1000 years of peace. Then is the last Judgment of all unbelievers of all Ages. He will then establish forever the New Heaven and Earth.17
In a 2018 speech, Hegseth rhapsodized about the possibility of building the Third Temple on the Temple Mount.18 Lindsey’s prophecies, first expressed in The Late, Great Planet Earth (the best-selling book of the 1970s), originally called for the Tribulation, the seven-year period leading up to the Battle of Armageddon, to begin within a generation (in his reckoning a period of 40 years) of the creation of the state of Israel. Since Israel became a state in 1948, that would have meant the Tribulation would begin by 1988. However, as that year approached with no sign of the end, Lindsey recalculated the time in two different ways. First, he said that the beginning of Israel as a state perhaps should not be dated to 1948 but to 1967, when Israel captured the West Bank in the Six Day War; thus, the Tribulation would begin by 2007. Next, he decided a generation might really mean 100 years rather than 40. Thus, the Tribulation might well begin by 2048 (1948 + 100) or even 2067 (1967 + 100).
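To see just how elastic this arithmetic is, here is a toy tabulation (my illustration, not Lindsey’s own presentation) of the four candidate dates produced by crossing his two start years with his two lengths of a biblical “generation”:

```python
# Tabulating the end-times dates implied by Lindsey's shifting assumptions.
# The dates themselves come from the text above; the framing is mine.

start_years = {"statehood": 1948, "Six Day War": 1967}
generation_lengths = [40, 100]  # Lindsey's two reckonings of a "generation"

for label, start in start_years.items():
    for gen in generation_lengths:
        print(f"{label} ({start}) + {gen}-year generation "
              f"-> Tribulation by {start + gen}")

# statehood (1948): 1988 or 2048; Six Day War (1967): 2007 or 2067.
# Any failed date can be repaired by swapping in the other start year or
# generation length -- which is what makes the prophecy unfalsifiable.
```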
The event that will supposedly herald the Tribulation is the Rapture—the belief that, just before the horrific catastrophes of the end-times are about to take place, true believers will be taken up to heaven, thus saved from all the horrors specified in the Book of Revelation. This elaborate doctrine is based on just two verses from the Pauline epistle, 1 Thessalonians 4:16–17:
For the Lord himself will come down from heaven, with a loud command, with the voice of the archangel and with the trumpet call of God, and the dead in Christ will rise first. After that, we who are still alive and are left will be caught up together with them in the clouds to meet the Lord in the air. And so, we will be with the Lord forever.
The “we” Paul was referring to in these verses was quite literal, since the Christians of the first century believed the world would end with their generation. Consider, for example, the following passages from the Gospel of Matthew. First (Mt. 10:23):
When you are persecuted in one place, flee to another. Truly I tell you, you will not finish going through all the towns of Israel before the Son of Man comes.
This view that Christ would return to the earth in the generation of the first believers is made even more explicit in Mt. 16:27–28:
For the Son of man shall come in the glory of his Father with his angels; and then he shall requite every man according to his works. Verily I say unto you: There are some standing here, which shall not taste of death, till they see the Son of man coming in his kingdom.
Since Jesus didn’t return in the lifetimes of those to whom he was speaking, to requite everyone according to their works, i.e., the Last Judgment, and since Paul and the Christians of the first century did not rise to meet God in the air, how is it that end-times prognosticators see the verses above as applying to today, some two thousand years later? Christian apologists go to great lengths to explain these contradictions. One of these rationalizations is that “the Son of man coming in the glory of his father” refers to the Transfiguration, when, according to the Synoptic Gospels (Mark, Matthew, and Luke), Jesus was supernaturally transformed on a mountain in the presence of three of his disciples.19
While this interpretation is rather adroit, it fails to explain the allusion to the Last Judgment in Mt. 16:28. A less adroit rationalization is that Mt. 16:27–28 refers to the miracle of Pentecost (Acts 2:1–12), when the Holy Spirit supposedly descended upon the disciples, allowing them to speak in languages other than their own. Both rationalizations violate Occam’s Razor. The simplest and most direct interpretation of the verses above is that both Paul and the author of Matthew believed in the imminent return of Jesus, and that the verses were never intended to refer to events two thousand years in the future.20
The end-times scenarios that so animate Pete Hegseth and many of the proselytizers at the Air Force Academy aren’t really based that firmly on the Christian scriptures. They are, in fact, extrabiblical elaborations, wild fantasies based on teasing bizarre interpretations out of tenuous biblical passages. As an example, consider the Rapture, a mainstay of modern end-times narratives. As noted above, the entire biblical support for this is just two verses from a single Pauline epistle, 1 Thessalonians 4:16–17. In fact, the modern fundamentalist scenario of the Rapture, which has believers suddenly and mysteriously disappearing en masse as a prelude to the Tribulation, was the invention, in 1830, of a single maverick theologian of dubious credentials, John Nelson Darby (1800–1882).21
Perhaps Secretary of War Pete Hegseth, the proselytizers at the Air Force Academy, and those military officers who see the President as anointed by God to bring about Armageddon, and who reference the Bible to back up their views, should read one more Bible verse, purporting to be the words of Jesus, concerning when the end will come, Matthew 24:36 (KJV): “But of that day and hour knoweth no man, no, not the angels of heaven, but my Father only.”
Humans have to signal just like birds have to sing, beavers have to build, bears have to hibernate, fish have to swim, and wolves have to howl. Such behaviors are how those animals make themselves legible to one another. Social life under uncertainty forces them to externalize what matters like fitness, temperament, and willingness to cooperate. Humans face the same basic problem with more complicated traits like temperament, virtue, skill, and intelligence—traits that aren’t directly observable. So people must signal them to coordinate and to survive. Humans are a highly cooperative species that will cooperate with almost anyone on almost any task if they are trustworthy and reliable enough as a cooperation partner—it is our evolutionary superpower.
The temptation, especially in the age of social media, is to treat signaling as a pathology of people who need attention and lack good taste—a symptom of moral decadence or attention addiction. So much so that, until recently, the term virtue signaling was a favored insult. But even if much of what gets called virtue signaling is shallow or cheap, the underlying practice is a structural feature of social life. If people never signaled their moral commitments, reliability, or competence, strangers would have no basis for trust, coalition, or cooperation. In such a world, hiring and romance, to give a couple of examples, would be harder and more expensive. Signaling is what we get instead of omniscience.
Start with the simplest case—other people—who are, at best, partial strangers to one another (and even to themselves). People do not directly observe the counterfactual behavior of other people—things they would have done under different conditions. People do not directly perceive the strength of their willpower, their long-run loyalty, or their competence once the training wheels are off. What we see are limited slices and outcomes. Under those conditions, reputations are a necessary compression device—a running summary of the signals someone has sent over time. And the more costly and stable those signals are, the more weight observers give them.
This is why temperament, virtue, intelligence, and skill are surrounded by behavioral scaffolding. Calmness under pressure is signaled by how people behave in cramped and stressful situations. Trustworthiness is signaled by patterns of keeping or breaking commitments when defection would have been tempting. Intelligence is signaled by the difficulty of problems one can reliably solve. Skill is signaled through portfolios, track records, and performances that are costly to fake and time-consuming to build. None of this guarantees accuracy, but it does allow for some sorting in a world where full information is off the table.
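The logic of “costly to fake” can be made concrete with a toy model in the spirit of economist Michael Spence’s job-market signaling (my sketch, not the author’s; the payoff numbers are invented for illustration). When a signal is cheaper to produce for those who genuinely have the trait, only they find it worth sending, and the signal becomes informative:

```python
# Toy Spence-style signaling sketch (illustrative payoffs, my assumptions).
# Observers reward senders they read as "high ability"; producing the signal
# (a degree, a track record) is cheaper for genuinely able types.

PAYOFF_IF_TRUSTED = 10.0  # what a sender earns if read as high ability
PAYOFF_IF_NOT = 4.0       # what a sender earns otherwise

signal_cost = {"high_ability": 2.0, "low_ability": 8.0}  # cost of the signal

for ability, cost in signal_cost.items():
    net_if_signaling = PAYOFF_IF_TRUSTED - cost
    sends = net_if_signaling > PAYOFF_IF_NOT
    print(f"{ability}: net {net_if_signaling} vs. {PAYOFF_IF_NOT} without "
          f"-> {'sends signal' if sends else 'stays quiet'}")

# high_ability sends (8.0 > 4.0); low_ability stays quiet (2.0 < 4.0).
# Because only the able type finds the signal worth its cost, observing it
# is informative: costliness is what keeps the signal honest.
```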
Less obvious, but crucial for understanding why signaling is inescapable, is that we are also partial strangers to ourselves. Introspection does not give us the same kind of access to our dispositions that we sometimes imagine. People often misjudge their own resolve, generosity, loyalty, and competence. They discover who they are by seeing what they actually do in situations that impose real costs. In that sense, signaling is a way of generating evidence for ourselves when first-person access is unreliable.
This is self-signaling. When people make public commitments, take on demanding projects, or voluntarily incur costs that close off tempting alternatives, they are creating a record that will constrain their future self. Once they have logged enough signals of a certain kind—being the colleague who always shows up prepared, the partner who follows through, the person who sees difficult tasks through to completion—it becomes psychologically and socially harder to act out of character. The signals help stabilize identity over time in the face of temptation and fatigue. They are, in effect, side bets placed against one’s own future wavering.
A great deal of moral psychology can be reinterpreted through that lens. Consider moral outrage, which at first glance looks like a purely internal reaction: an emotional upsurge in response to perceived wrongdoing. It does not feel strategic from the inside. But when researchers isolate outrage and punishment in controlled experiments, a different pattern appears. In a set of studies, Jillian Jordan and David Rand find that people express more outrage and are more willing to punish selfish behavior when they lack the opportunity to signal their virtue through direct helping. When opportunities to share resources or incur costs for others are blocked, participants “compensate” with condemnation instead.
The key twist is that these experiments are anonymous, one-shot interactions. No one in the subject pool can build a usable, long-term reputation off their choices. And yet people behave as if punishment and moral condemnation will function as signals of trustworthiness and moral commitment even when, in fact, they will not. This is what Jordan and Rand call a “reputation heuristics” account: our minds are calibrated for environments in which reputation usually is at stake, so those heuristics continue to operate even in artificially anonymous contexts. Moral outrage, on this picture, is one of the mechanisms by which we communicate that we can be counted on to side with the cooperative, norm-abiding majority.
The usual complaint is that this makes outrage “fake,” as if any reputational logic behind an emotion automatically discredits it. That assumes one either really cares or is performing for an audience. The data suggest that the impulse to signal one’s moral commitments and the felt experience of moral concern are tightly coupled. People want to be good and be seen as good, and the psychology that bundles those aims together is what actually enforces many norms in practice. That does not mean every expression of outrage is proportionate or wise. But it does mean that trying to strip all signaling out of moral life would be like trying to strip chirping from the life of birds.
The same work also helps explain why some moral signals function like moral junk food. In other writing, I have compared low-cost moral outrage to ultra-processed snacks: engineered to satisfy strong cravings with minimal nutritional value. Outrage, especially in online environments, is often cheap, fast, and highly visible. Donating significant time or money, bearing interpersonal costs to repair harm, or changing one’s own habits in light of a moral insight are expensive, slow, and often invisible. When opportunities for high-cost moral behavior are scarce or blocked, the cheaper substitute predictably fills the gap. People must still demonstrate that they care about fairness, harm, and loyalty. When costlier moral actions are constrained, cheaper signals in the form of moral outrage are often substituted.
Economically speaking, when the cost of supplying a valued good rises, people shift to substitutes. That is the structure behind the experimental results: when participants are denied the chance to help, they lean harder on condemnation. The signaling need remains, and the portfolio of available signals changes. Craving for reputational evidence is built deeply into how cooperation and trust function.
And not just in the moral domain. Employers face self-selection problems: applicants know far more about their own character and competence than hiring committees. In romantic settings, each person knows more about their own long-term intentions and vulnerabilities than the other. Friends, business partners, and political allies all confront versions of the same problem. Under those conditions, signals are one of the main ways both sides try to reduce the risk of pairing with the wrong person.
Degrees, certificates, job titles, grants, and publications are costly to accumulate and relatively hard to fake at scale. They are imperfect, often biased toward certain kinds of talents, but serve an indispensable sorting function in the absence of omniscience. Employers rely on them because the alternative is guessing. The same goes for how people signal temperament and character in everyday life. Someone who consistently reacts to provocation with restraint is signaling about their temperament.
Romantic life adds an extra layer because the signals here often involve foreclosing alternatives. A willingness to invest significant time, to endure periods of difficulty, or to incur costs for a partner’s sake—these are all signals that burn resources that could have gone elsewhere, what economists call opportunity costs. A promise that leaves all options open is cheap. A sacrifice that rules out other paths sends a clearer message about one’s priorities. This is a reminder that, absent signals, no one would know what sort of partner they were dealing with until it was too late, and the incentives against pairing up would be even stronger.
Seen in this light, the analogy with nonhuman animals reappears in a less sentimental form. Birds sing because individuals that failed to advertise themselves effectively left fewer descendants. Beavers that did not build or maintain dams paid the price. Social animals whose signals did not reliably track underlying traits found their cooperative arrangements collapsing. Humans occupy a different ecological and cultural niche, but the basic information problem is the same. Only the content of the signals has changed.
So when people insist that humans should stop virtue signaling and be authentic, it is worth noting how much that demand presupposes a world where others already know what we are like, a world without asymmetric information or risk, a world where employers, partners, and friends do not need to make educated guesses. That is not the world we inhabit. People must signal temperament, virtue, skill, and intelligence because they are partial strangers both to others and to themselves, and social life requires bets about who can be trusted with what. Signaling is the price we pay for cooperation under uncertainty.
I have long argued that free will, as understood by most people, is simply an illusion, and I recently criticized Shermer’s view that it is not. In response, Shermer says I’m mistaken, but concludes that the issue of free will versus determinism is “an insoluble problem because we may be ultimately talking past one another at different levels of causality.”
In fact, the problem is not one of levels of causality, but of semantics: Shermer has made up a new definition of free will that’s very different from the one most people hold, and different as well from definitions offered by other “compatibilists”—people who argue that yes, human decisions and behavior are determined by the laws of physics, but we still have free will anyway. Here, I argue that Shermer’s compatibilist definition of free will is incoherent and incapable of refutation. In contrast, my form of determinism, adhering to purely physical causation of thoughts and behaviors free from any human “will,” is scientifically testable—and, so far, supported by lots of evidence.
But first let’s look at our respective definitions. I adhere to biochemist Anthony Cashmore’s definition of free will:
… I believe that free will is better defined as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature.
In this definition there’s a “will” that doesn’t involve physical processes but can alter decisions. Another way of saying this is the way most people understand free will: “If you could replay the tape of life and return to a moment of decision at which everything—every molecule—was in exactly the same position, you have free will if you could have decided differently—and that decision was up to you.” This in turn can be condensed to the view that “you could have done other than what you did.” This concept is called “libertarian free will” or “contra-causal free will.”
Surveys in different countries show that most people indeed think we live in a world in which behavior is not deterministic, and our actions are controlled by an intangible, nonphysical “will.” The prevailing view is that we could have done other than what we did.
This concept is rejected by physical determinists like Shermer and me. Determinism does, however, allow different outcomes in a moment of decision, but only insofar as the laws of physics are non-deterministic and inherently unpredictable. The only physical laws with such unpredictability are those of quantum mechanics (some physicists suggest that quantum events are deterministic in a way we don’t yet understand). For example, it is possible that you ordered a steak rather than salmon because, somewhere in the neurons of your brain, a quantum event took place when you gave your order. But most physicists and biologists think that quantum effects don’t apply on the macro scale of human behavior, where classical mechanics probably rules. And, at any rate, quantum effects cannot buttress free will, for we cannot will the movement of electrons. Libertarianism says the decision must be up to you, not up to probabilistic movements of particles.
Like most compatibilists, Shermer is a determinist, asserting that, “I agree with Jerry and Dan [Dennett] that we live in a determined universe governed by laws of nature.” But he argues that this determinism still leaves us room for free will.
How can that be? It’s because Shermer defines free will in such a way that even in a physics-determined universe we still have a “freedom to choose.” Although I find his definition somewhat confusing, here’s what he says:
So, while the world is determined, we are active agents in determining our decisions going forward in a self-determined way, in the context of what already happened and what might happen.
Shermer adds that our behavior satisfies the three requirements for volition given by philosopher Christian List: intentional agency, alternative possibilities, and causal control over our actions.
All this is puzzling because if we live in a universe governed by the laws of nature, then of course our bodies and brains are part of that physical nexus. Our brains, of course, are the meat computers that form intentions, weigh possibilities, and emit decisions. But this doesn’t answer the critical question: At any moment, could we have done other than what we did? If so, then there is something spooky going on whereby our brains are somehow exempt from the laws of physics. This seems to reside in Shermer’s claim that we are “active agents in determining our decisions going forward in a self-determined way.” What else can that mean but a form of dualism, or even magic?
This smuggled-in dualism becomes clear when Shermer claims that although the action of individual neurons may be determined, “billions of interacting neurons is exactly where self-determinism (or volition or free will) arises.” But how can one neuron be governed by the laws of physics while a group of interacting neurons is not? If they are, then there is no freedom, no volition, no “willed” control of our behavior, and no ability to have done otherwise. Yet Shermer argues that when a group of neurons cooperates, some kind of “will” arises. This dilemma won’t be resolved until Shermer explains the relevant difference between the behavior of one neuron and of a group of neurons.
This is not a merely semantic distinction, for the definition of free will I gave is testable while Shermer’s is not. There are many experiments and phenomena showing that our sense of agency can be altered by physically manipulating the brain (a big group of neurons), observing human behavior, or performing psychological tricks. For example, neurological experiments show that predictable binary “choices” occur in the brain well before they are consciously made by an individual—up to ten seconds in advance. Such decisions cannot come from conscious “will.” Various lesions in the brain can remove the illusion that we can make real choices (e.g., alien hand syndrome), and doctors, by electrically stimulating parts of the brain, can create intentions to perform specific acts, like licking your lips or moving your arms. Given more electricity, patients report that they had indeed done those acts even when they didn’t.
Alternatively, computer games or Ouija boards show that humans can perform actions they attribute to external forces like spirits even though they’re actually, but unconsciously, moving their muscles. All of this suggests that our conscious intentions are not “free,” but are formed by the brain before we’re aware of them, and can be manipulated to either add or remove feelings of “intention.” “Will,” “volition,” or “agency” may well be post facto phenomena in which deterministic activity in the brain is brought into consciousness a bit later, so that what we think of as choice is really a neuronal newsreel screened after the events have already happened. To repeat, it’s useless to see freedom in groups of neurons if it doesn’t occur in single neurons. As Cashmore noted:
Some will argue that free will could be explained by emergent properties that may be associated with neural networks. This is almost certainly correct in reference to the phenomenon of consciousness. However, as admirably appreciated by Epicurus and Lucretius, in the absence of any hint of a mechanism that affects the activities of atoms in a manner that is not a direct and unavoidable consequence of the forces of GES [genes, environment, and stochastic processes], this line of thinking is not informative in reference to the question of free will.
The science suggests that our feeling that we could have acted differently is, pure and simple, an illusion.
In contrast, Shermer’s definition of free will is untestable, precisely because he’s defined free will tautologically: because people feel and act like they have free will, they do have some form of it. We feel like we control our actions, weigh alternatives, and make “choices” among those alternatives. But if we couldn’t have done other than what we did—if, at bottom, all we think and do reflects physical law—then what exactly is “free” about our decisions and behaviors?
As Shermer notes, 59 percent of surveyed philosophers are compatibilists while the rest are almost equally divided between libertarians, determinists, and those with no opinion. He deems philosophers the “most qualified people” to pronounce on the problem, but are philosophers more qualified than neuroscientists or physicists? As Sam Harris (a neuroscientist and a determinist) said:
[Compatibilism] ignores the very source of our belief in free will: the feeling of conscious agency. People feel that they are the authors of their thoughts and actions, and this is the only reason why there seems to be a problem of free will worth talking about.
Importantly, the “folk” conception of free will—the libertarian version—is what most people think they have. It is that version that permeates society, the legal system, and, of course, religion, and is therefore the most important version to discuss.
Frankly, I’m puzzled by the eagerness of intellectuals to embrace various forms of compatibilism, and I’ve concluded—Dennett said this explicitly—that this comes largely from the view that without some idea that we have free will, society would fall apart, with nobody being “morally responsible” for their actions. I don’t have space to rebut that claim, except to say that it’s an untested assertion. Further, it’s clear that most determinists are not running amok by flouting morality and the law, nor are we nihilists who see no point in getting out of bed. I’ll add that while we are “responsible” for our actions in the sense that we performed them, under determinism the concept of moral responsibility is incoherent, for it assumes we could have made either a moral or an immoral choice.
Finally, Shermer poses what he sees as an unassailable challenge to my determinism:
In fact, billions of interacting neurons is exactly where self-determinism (or volition or free will) arises. This is why I like to ask determinists: Where is inflation [of the monetary sort] in the laws and principles of physics, biology, or neuroscience? It’s not, because inflation is an emergent property arising from millions of individuals in economic exchange, a subject properly described by economists, not physicists, biologists, or neuroscientists.
That is a red herring. Like all phenomena in human society, you won’t find monetary inflation in the laws of physics. Nor will you find academics, music, sports, or any other human endeavor. The question is not whether these phenomena are in the laws of physics, but whether they result from the laws of physics as emergent phenomena wholly compatible with underlying naturalism. And Shermer himself said yes, they do: “we live in a determined universe governed by laws of nature.”
The problem of free will is “insoluble” only insofar as Shermer, trying to retain an idea of self-control, and ignoring the massive body of data showing that volition can be manipulated, has confected a new definition that simply redescribes human behavior. The important question is this: “Is there physical determinism of human behavior or not?” Both Shermer and I agree that there is. In the end, however, Shermer seems to argue that we have free will because we feel like it. One might as well say that there’s a God because we feel like there is one.
On February 27, 2026, Robert F. Kennedy Jr. appeared on Joe Rogan’s podcast and announced that the FDA is preparing to move approximately 14 experimental peptide compounds off its restricted list and back into the hands of compounding pharmacies. He called himself a “big fan” of peptides. He said he expected the announcement “within a couple of weeks.”
One industry executive responded with what he called a prediction: “We’re about to unleash one of the biggest medical experiments in the history of America onto Americans as the test subjects.” He meant it as a good thing.
In response, first let me tell you about a patient of mine. Two weeks before I wrote this, a 40-year-old man came into my clinic in acute distress. He was intelligent, fit, and successful—and he was terrified. His throat was swelling. Hives covered his body. He was struggling to breathe. And … he had been injecting a peptide “stack” he’d ordered online. He’d been at it for exactly two weeks.
That timing is not a coincidence. Two weeks is how long it takes for our immune systems to mount a full IgE-mediated allergic response to a new foreign substance—the same mechanism behind severe penicillin reactions. With a slightly higher dose, or a slightly longer drive to my clinic, he could have gone into full anaphylaxis. He responded quickly to epinephrine and antihistamines. He will be fine. But his immune system now has a permanent record of that peptide as a lethal enemy. Any future exposure risks a faster, more severe reaction.
This is the experiment that is about to be released on the American public.
A New Label on Old Snake Oil

Quackery has long been handy with new names. “Remedies,” “tonics,” “panaceas,” and “snake oil” gave way to “complementary and alternative medicine,” which gave way to “integrative” and “functional” medicine. Today’s label is “biohacking”—and its latest product line is peptides.
To be clear: some self-experimentation is entirely reasonable. Adjusting your diet, sleep schedule, or exercise routine can have rapid results and manageable risks. That is not what I am cautioning about. I am writing about people who order vials of white powder from overseas websites, mix them with water in their kitchens, and inject themselves based on advice from social media influencers and, now, the Secretary of Health and Human Services.
If you spend any time in online “wellness” spaces, you have encountered the pitch. Coaches, longevity clinics, and podcasters hawking discount codes are aggressively marketing injectable grey-market chemicals that promise to “optimize your metabolic pathways,” “boost your immune system,” “detoxify your cellular matrix,” and “address the root cause of aging.” They claim these compounds will dramatically increase muscle mass, melt body fat, skyrocket libido, erase wrinkles, and heal injuries without the inconvenience of waiting for evidence.
As I tell my patients: if a drug could genuinely do any of that, we would all know about it. It would be very hard to hide. You would not be buying it through an internet loophole labeled “not for human consumption.” Nor would there be proclamations about what “they” don’t want you to know about this new remedy.
What Peptides Actually Are

Peptides are real, biologically important, and increasingly valuable. They are short chains of amino acids—smaller versions of proteins—that often function as chemical messengers in the body. Insulin is a peptide. More than 40 peptide hormones are known in humans, governing everything from blood pressure to appetite to milk production. The body’s own peptides act quickly: released, delivered to a specific receptor, then broken down by enzymes within minutes.
Medicine has successfully harnessed this biology. There are now more than 100 FDA-approved peptide drugs on the market. The GLP-1 medications—Ozempic, Wegovy, and their weight-loss relatives—have genuinely revolutionized the treatment of diabetes and obesity. Peptide pharmacology is good, productive science, and anyone who tells you the FDA is categorically hostile to peptides is simply wrong.
The compounds being sold by anti-aging clinics and wellness websites are a different kettle of goo. These are unapproved, experimental, synthetic molecules manufactured in a regulatory and industrial grey zone. They are sold with legally evasive disclaimers—”for research purposes only,” “not for human consumption”—while being marketed with explicit instructions for human injection. Many are synthesized in foreign facilities and imported for sale online. The FDA does not approve them. Independent quality testing is essentially nonexistent.
The Appeal-to-Nature Fallacy, Wearing a Lab Coat

Peptide sellers claim their products are “gentle” and “natural” because the body already produces similar molecules. This argument collapses on inspection.
Because our natural peptides are removed by enzymes within minutes, lab-made versions must be chemically engineered to survive much longer in the bloodstream. This is why an Ozempic injection can last a week. The molecule is altered—designed to evade the very mechanisms that keep natural signaling tight, targeted, and controlled. Calling a chemically tweaked, enzyme-resistant synthetic compound ordered from an overseas supplier a “natural holistic remedy” is a remarkable feat of cognitive dissonance.
The natural precedent proves nothing about safety or efficacy at supraphysiological doses. The dose, the duration, the delivery route, and the molecular structure all matter enormously. This is not ideology. It is pharmacology.
The Wolverine Stack and Tooth Fairy Science

One popular combination—BPC-157 and TB-500—is marketed as the “Wolverine Stack,” named after the X-Men character’s mutant regenerative ability. Sellers claim it heals torn ligaments, repairs damaged tissue, and accelerates recovery from virtually any injury.
BPC-157 is a synthetic analog of a compound found in human stomach juice. In rats and cell cultures, it has shown interesting tissue-regeneration effects. There is no robust human clinical trial evidence that BPC-157 accelerates injury recovery, reduces inflammation, or supports gut health. A Phase I trial conducted in 2015 on 42 volunteers was discontinued and no results were ever published. The only human data in the published literature consist of a retrospective analysis of 12 patients and a pilot study with two participants. Based on this, influencers and longevity clinics sell it as a proven cure-all. At a MAHA (Make America Healthy Again) summit in Washington last November, a panelist told the audience his grandmother was taking it and that “it’s just one example of these products that can change people’s lives.” The audience clapped and whooped.
Then there are the peptides that are alleged to pump up growth hormone—CJC-1295 and Ipamorelin—heavily marketed to men hoping to reclaim muscle and youth without effort. What rat data actually showed for Ipamorelin was increased body weight and increased fat. Its only significant human clinical trial, investigating bowel function after surgery, found it no more effective than placebo. As for CJC-1295: clinical trials investigating it as a treatment for HIV patients were permanently halted after a participant died of a heart attack.
This is the actual evidence base: rodent studies, discontinued trials, and anecdotes from podcast guests with financial stakes in the outcome. The plural of anecdote is not data.
Downplaying Risks

The FDA’s 2023 decision to move many of these compounds to its Category 2 restricted list was not arbitrary bureaucratic overreach. It was grounded in specific, documented biology.
BPC-157 promotes angiogenesis—the formation of new blood vessels. This sounds appealing for tendon repair. It is considerably less appealing when you consider that angiogenesis is also precisely what early-stage, undetected cancers need to grow and spread. (Oncologists have long sought anti-angiogenesis drugs to attenuate the growth of blood vessels to cancerous tumors.) A person injecting unapproved angiogenic compounds has no way of knowing whether they are healing a joint or feeding a tumor. Growth hormone secretagogues carry documented risks of acromegaly—the pathological and irreversible enlargement of bones and organs from excess growth hormone exposure.
Then there is immunogenicity, the actual problem illustrated by my patient. Because synthetic peptides are engineered to persist in the bloodstream far longer than natural ones, the immune system frequently recognizes them as foreign invaders. It builds antibodies. In the best case, those antibodies simply neutralize the drug, rendering it ineffective. In worse cases, they trigger escalating allergic responses. In the worst cases, they cause anaphylaxis.
Then there is contamination. Grey-market peptide vials from unregulated sources often contain chemical residues from synthesis, heavy metals, bacterial contamination, or simply the wrong compound entirely. There is no quality control. There is no chain of custody. The buyer has no reliable way to know what is actually in the vial.
We are already seeing the collateral damage. Bad injections have produced hospitalizations for muscle paralysis, scarring, and sepsis. In Las Vegas, two women were hospitalized with swollen tongues, respiratory distress, and elevated heart rates—classic anaphylaxis—following peptide injections at an anti-aging festival. Medical journals have reported cases of necrotizing pancreatitis directly linked to unregulated peptide use.
The MAHA Paradox

Here is where the story becomes increasingly interesting, and particularly strange.
Kennedy is not entirely wrong about one thing. When the FDA moved these compounds to Category 2 in 2023, it did not eliminate demand. It drove patients toward Chinese suppliers and grey-market “research chemical” vendors with no oversight whatsoever. Kennedy acknowledged this directly, stating that the restrictions “created the gray market.” There is a narrow, genuine point buried here: regulated compounding pharmacy access, with physician oversight and USP-compliant quality controls, is meaningfully safer than a vial of white powder ordered from an overseas website.
But reclassification from Category 2 to Category 1 does not mean FDA approval. It does not mean these compounds are safe or effective. It means licensed compounding pharmacies would be permitted to prepare them under physician prescription for individual patients. The evidence base does not change. The angiogenesis risk does not change. The immunogenicity risk does not change. The absence of human clinical trial data does not change. What changes is the supply chain—and while that matters for contamination risk, it does nothing about the fundamental problem that we do not know what these compounds actually do in human beings at the doses being used.
Meanwhile, the people celebrating the loudest have the most to gain financially. Brigham Buhler, a compounding pharmacy and wellness clinic owner who has Kennedy’s ear and has been loudly predicting regulatory liberation on podcasts, owns the very businesses that would compound and sell these newly accessible peptides. At the MAHA summit last November, he moderated a discussion on compounding pharmacies and declared, “I think the future is bright with peptides.” The audience, again, clapped and cheered. The financial conflicts of interest here are not subtle.
Eric Topol, director of the Scripps Research Translational Institute, identified the deeper contradiction more sharply than I could: “These are the same people that won’t take a vaccine that’s been shown to work in millions of people.”
Read that again. The MAHA movement—which has spent years amplifying vaccine hesitancy, questioning FDA-approved treatments, and casting pharmaceutical medicine as a corrupt conspiracy—is now enthusiastically championing the mass use of unapproved synthetic compounds based on rodent studies and podcast testimonials. They claim that the FDA was corrupt and captured when it approved vaccines backed by Phase III trials enrolling tens of thousands of participants. It is apparently now a liberating force when it opens the door to peptides with two-patient pilot studies.
The standard of evidence, it turns out, is not a principle. It is a preference.
A Multi-Million Dollar Experiment

The market is already staggering. U.S. Customs data show that imports of peptide and hormone compounds reached $328 million in just the first three quarters of 2024—up from $164 million during the same period the year before. That was before a sitting cabinet secretary went on the most popular podcast in America to announce that the regulatory gates are opening.
Wellness clinics function as middlemen, lending a veneer of medical legitimacy while requiring patients to sign waivers acknowledging the substances are experimental—a maneuver that transfers liability to the patient. The proponents of “functional” medicine who accuse conventional physicians of “just pushing pills” are simultaneously instructing patients to inject unapproved synthetic compounds mixed in their own kitchens. This is not a contradiction they appear to notice.
Patients frustrated by the pace of conventional healing, or simply hoping to optimize bodies that are already healthy, are understandable targets for this marketing. But enthusiasm and financial interest are not substitutes for evidence.
Caveat Emptor

Peptide pharmacology is a burgeoning field of research. FDA-approved peptide drugs have produced genuine medical advances. The problem is not peptides. The problem is the systematic exploitation of public enthusiasm for that science to sell unproven, potentially dangerous compounds to people willing to self-inject in pursuit of a shortcut—and now, the prospect of that exploitation being scaled and legitimized by federal policy.
Here is a simple test. If a compound genuinely possessed the ability to burn fat, build muscle, regenerate tissue, and reverse aging without meaningful adverse effects, it would not need to be endorsed on a podcast. It would not need a cabinet secretary to rescue it from regulatory scrutiny. It would survive clinical trials. It would earn FDA approval. It would be prescribed by physicians and covered by insurance.
It would simply be called medicine.
The man whose throat was swelling in my clinic was not a fool. He was a careful, educated person who trusted the wrong sources. He got lucky. As the regulatory gates open and the market expands, not everyone will.
It’s been said that “He who controls the media controls the mind” (a line variously attributed to Jim Morrison of the rock band The Doors and to Noam Chomsky).
Whoever said it, billionaires seem to have taken it to heart. Elon Musk has made 𝕏 his “de facto public town square.” Jeff Bezos has The Washington Post. Rupert Murdoch continues to consolidate conservative media outfits via Fox and News Corp (which owns The Wall Street Journal, the New York Post, and HarperCollins). Mark Zuckerberg’s Meta has expanded from merely friending people on Facebook to Instagram, WhatsApp, and Threads. Brian Roberts’s Comcast oversees NBCUniversal, Sky News, Peacock, and Universal Pictures. And so on. Meanwhile, the Ellison family controls Paramount and CBS.
Recent headlines read like a game of high-stakes Pac-Man. Most notably, David Ellison’s Skydance Media merged with Paramount Global, bringing CBS, Paramount Pictures, MTV, Nickelodeon, and other assets under its new entity, Paramount Skydance Corporation. Then Paramount Skydance proceeded to buy The Free Press for some $150 million—putting its founder, Bari Weiss, at the helm of CBS News as its new Editor-in-Chief (she also retains her role at The Free Press).
Meanwhile, Netflix has entered an $82.7 billion definitive agreement to acquire Warner Bros. Studios (subject to regulatory approvals). But not if Paramount Skydance can help it: the company has filed a lawsuit against the venerable studio alleging that the Netflix deal lacked transparency and that the Warner Bros. board has ignored higher offers from Skydance (the board has repeatedly rejected Skydance’s offers in support of the Netflix deal). The matter is currently in dispute, but if Paramount Skydance manages to win, it would have control over a giant piece of the media apparatus—including both CBS News and CNN.
And then there’s the recent forced sale of TikTok U.S. to an American entity. The deal creates a new U.S. joint venture where a consortium of investors led by Oracle Corporation, Silver Lake Technology Management, and MGX Fund Management Limited will hold a 50 percent stake, while ByteDance retains a 19.9 percent minority interest.
This marks a fundamental restructuring of the media landscape. Is it good for the public?
On the one hand, it’s possible that audiences will be pleased with having access to larger content libraries from a single provider, though Netflix is likely to raise its prices for the privilege of being able to share HBO’s “It’s not TV” content with them. Given its market share and massive content library, Netflix will sit firmly in the driver’s seat when negotiating acquisition costs and more.
It also means that Netflix could control 30–40 percent of all paid streaming in the U.S., according to analysts. This move risks creating a content monoculture in which data-driven algorithms, rather than creative risk, dictate what gets made, especially given Netflix’s streaming-first model rather than a focus on theatrical releases. This new landscape also makes it incredibly difficult for mid-sized companies with less capital to acquire attractive content and compete with existing massive IP libraries, and creates a near monopoly on content: a few giants at the helm, with only smaller, niche creators, podcasters, and independent outlets left on the margins. It also means that filmmakers have fewer options for their projects.
In fact, Netflix already offers a preview of what a fully consolidated media environment looks like in practice. Netflix has become infamous for canceling series after one or two seasons, often despite strong critical reception or dedicated audiences. Shows like Mindhunter, 1899, GLOW, and Archive 81 were all discontinued without narrative resolution. In several cases, creators later stated that the shows met or exceeded traditional benchmarks of success but failed to satisfy Netflix’s internal metrics for rapid audience growth and completion rates. The result is a cultural landscape littered with unfinished stories. Viewers learn, over time, that emotional investment is risky. Storytelling itself becomes provisional and disposable.
This incentive structure also shapes how stories are told. Former Netflix writers and executives have described internal guidelines that prioritize early engagement above all else. As a result, many Netflix originals front-load dramatic events—major chases, twists, or revelations often occur within the first five to ten minutes of an episode. Compare this to earlier television and feature films, where narrative tension was allowed to accumulate gradually, and climactic moments were often reserved for the end.
Dialogue has changed as well. In series such as The Witcher or You, key plot points are frequently repeated verbally, sometimes multiple times within the same scene. This is not accidental. Matt Damon, while promoting his new Netflix film The Rip, mentioned that the filmmakers had discussions with the streamer about ensuring that the plot is restated “three or four times in the dialogue” to address the fact that many viewers are simultaneously on their phones while watching.
A number of writers have also openly noted that scripts are being increasingly optimized for distracted viewing. In other words, they are designed to be intelligible even when audiences are scrolling on their phones or half-paying attention. Subtle visual storytelling gives way to explicit exposition, because ambiguity does not perform well in engagement data. And Netflix is quite data driven indeed.
Over time, this produces a subtle form of cultural monoculture. Genres proliferate, aesthetics vary, but narrative structures converge. The result is a narrowing of how storytelling is constructed. Novelty is cosmetic and experimentation is constrained by metrics designed to optimize retention rather than meaning.
For most of television history, this logic would have been alien. In the broadcast era, shows were often allowed to fail slowly or to grow into themselves. Series such as The Wire, Breaking Bad, and Mad Men all struggled initially to attract large audiences, despite being critically acclaimed. The Wire in particular was never a ratings success during its original run, yet it survived because executives believed in its long-term cultural value and its ability to enhance the network’s reputation. Success was measured over years, not weeks, and shows were allowed to develop complexity that only made sense in retrospect. Creative risk was tolerated because it signaled seriousness, ambition in storytelling, and—significantly—trust in the audience. Initially a modestly performing niche show, Mad Men saw a 63 percent increase in viewership by its second season alone and went on to become a cultural phenomenon.
HBO famously framed itself not as television, but as something adjacent to cinema—summed up in its slogan, “It’s not TV.” The network accepted that certain shows would never be mass hits, but would instead function as prestige anchors, shaping brand identity and attracting subscribers indirectly. A series like The Sopranos justified risks taken elsewhere; Six Feet Under or Deadwood existed because the ecosystem allowed for uneven returns. FX’s The Americans showrunners, Joel Fields and Joe Weisberg, chose to end the show with its sixth season, a decision they announced during the fourth, which allowed them to plan their storytelling and provide a proper ending.
In that environment, creative autonomy was not merely tolerated but protected. Writers could trust that if an audience existed—even a modest one—it would be allowed to find the work. Today’s streaming platforms invert that logic. Instead of prestige underwriting experimentation, experimentation must justify itself instantly in data. What once functioned as cultural capital has been replaced by performance analytics, and patience has been redefined as inefficiency.
Of course, the issue drawing the most attention and concern is how this consolidation will affect who controls the narrative and how it is shared.
In particular, a lot of attention has surrounded the acquisition of CBS and the installation of Bari Weiss as its Editor-in-Chief. Proponents see this as a positive move that will help CBS become a more ideologically moderate—or centrist—outlet, creating a legacy broadcast network that appeals to and serves everyone on the political spectrum, not just those who lean left.
Critics, meanwhile, are concerned that the outlet will reflect the ideological leanings of its new owner, sympathetic to the current U.S. administration. As evidence, they point to the last-minute pulling and postponement of a 60 Minutes segment on the Trump administration’s deportations of Venezuelan migrants to El Salvador’s CECOT prison, with reports of internal tension around the ongoing delay. When the segment did eventually run, some critics noted that it contained no additions that would have justified the delay and argued that it was intentionally aired during an NFL playoff game.
To many of Weiss’s detractors, this seems to serve as a confirmation of what they believed all along—that Weiss is the mouthpiece of the Trump administration, intentionally put in place by Ellison to promote specific narratives. They point to her tenure at The Free Press, where sustained criticism of Trump has been less prominent.
Her proponents disagree, and claim that she was merely ensuring the coverage was balanced and provided an opportunity for the administration to respond to various claims—as per journalistic standards that they feel have been replaced by bias and activism elsewhere. They also note that none of the recent hires brought into CBS under Weiss could reasonably be described as MAGA.
It’s possible that Weiss is genuinely striving to bring a balanced perspective to CBS News, without ulterior motives or loyalties. Yet the network’s legacy audience is likely to remain skeptical, and many may drift away. Weiss’s goal appears to be attracting a more centrist, moderate audience—both left- and right-of-center—but in today’s polarized media landscape, many viewers seek content that aligns with their existing perspectives. In the first week under new editorial leadership, for example, CBS Evening News saw viewership drop 23 percent compared to last year, which signals, at the very least, a steep adjustment period.
Mainstream media has generally leaned left, with exceptions such as The Wall Street Journal and the New York Post. Hollywood, too, has remained largely left-leaning, which makes the recent acquisitions all the more significant when it comes to shaping culture. The right-wing media ecosystem has expanded beyond Fox with a strong presence in the online world.
In a recent article about Bari Weiss in The New Yorker, it was noted that her new role wasn’t necessarily a purely editorial choice. “Don’t think about it as David Ellison paying a hundred and fifty million dollars for The Free Press,” an unnamed industry exec said. “Think about it as a hundred and fifty million dollars on top of the price they paid for Paramount. It was basically the cost to get it to go through.” Whether that’s true will continue to be debated.
But as I mentioned earlier, whatever the ideology, what matters isn’t who owns which outlet, but that ownership itself is converging—across news, entertainment, and social platforms—into a single layer of influence.
When ownership is diverse, multiple perspectives can still compete for public attention. But as more media outlets consolidate into the hands of a few, the number of voices shaping what we see and hear shrinks, from news and opinion reporting to entertainment in the case of Netflix, Paramount, etc.
Our ability to understand the world from multiple perspectives diminishes, and our view of reality becomes narrower. When only a handful of entities control the information available to us about the world around us, how can we make informed decisions about its future?
There’s a kind of storytelling tariff that sci-fi thrillers pay: the alien has to be visually—and physiologically—“other.” The more it resembles us, the less it feels like an invasion, and the less it sells popcorn. So, filmmakers crank the dials. Alien is the perfect example: a creature engineered for maximum dread—extra jaws, parasitic reproduction, and even acid for blood, a brilliant idea because it turns injury into a terrifying weapon. Great cinema. Bad biology.
The alien as a monster

Constraints, Not Monsters

But biology isn’t a special-effects studio. Evolution doesn’t get to pick any chemistry, any anatomy, any habitat, and call it a day. It’s boxed in by constraints: what molecules can build durable, information-rich structures; what solvents allow complex reactions; what temperatures keep chemistry running without shredding it; what gravity and atmosphere allow efficient movement; what energy sources are stable long enough for complexity to accumulate. And here’s the part science fiction usually skips: only a limited range of environments in the universe are likely to be hospitable to the long, fragile process that produces intelligent life at all. If that’s true, then the number of viable “starting conditions” shrinks—and the range of plausible outcomes shrinks with it. In other words, the universe may not be a boundless zoo of monster anatomies. It may be a narrower set of workable habitats repeatedly producing a narrower set of workable body plans—ones that, at a distance, start to look surprisingly familiar.
Carbon is the first and biggest constraint. If you want a system capable of building large, stable molecules that can both store information and do chemistry, carbon is the standout: it forms strong chains and rings, bonds flexibly with common elements (H, O, N, S, P), and supports the kind of combinatorial complexity life seems to require.1 Silicon gets invoked in sci-fi because it sits under carbon on the periodic table, but careful technical reviews conclude that silicon biochemistry faces steep hurdles compared with carbon—especially when you ask for the chemical diversity, solvent compatibility, and long-term stability you’d need for an evolving biosphere rather than a one-off laboratory curiosity.2 Carbon, by contrast, isn’t just “what we have”—it’s the best candidate the periodic table offers for life’s scaffolding.
And carbon chemistry, at least as far as we understand it, almost certainly needs a liquid reaction medium. You can think of a solvent as evolution’s workshop: it transports reactants, buffers temperature swings, enables compartmentalization (membranes), and keeps chemistry running long enough for complexity to accumulate. NASA astrobiology treatments make the key point crisply: water is not merely “wet background”; its physical and chemical properties are unusually helpful for life-like chemistry.3 That doesn’t mean life must use water—serious work examines alternatives—but it does mean that when you ask where complex life is most likely to arise, you’re pulled toward a relatively narrow band of worlds with long-lived liquids, stable energy gradients, and conditions that support molecular complexity rather than constantly tearing it down.4
Once you accept those constraints, the “anything goes” alien starts to look less likely. A restricted set of workable environments tends to funnel evolution toward a restricted set of workable solutions—especially once organisms get big, mobile, and cognitively complex. From there, the argument becomes a cascade: mobility favors efficient body plans; efficient body plans often converge on bilateral symmetry for streamlined, directional movement; and bilateral movers tend to concentrate sensors and processing at the leading end—cephalization—because that’s the part that encounters the world first.5
Finally, any lineage that’s going to build technology needs not just brains, but some way to manipulate the world with precision—one or more appendages capable of fine control. And Earth at least shows that “high intelligence” is not a one-time miracle: complex brains and sophisticated cognition have evolved multiple times in very different lineages, which is exactly what you’d expect if evolution keeps rediscovering similar solutions to similar problems.6
It Takes a Long Time

For most of Earth’s history, life was microbial. There are abundant signs of life by around 3.5 billion years ago, with plausible evidence reaching back toward approximately 3.8 billion years and earlier, meaning single-celled organisms dominated the planet for the overwhelming majority of its existence.7 Complex multicellular life—and especially animals with nervous systems—arrives strikingly late by comparison: the Ediacaran record pushes recognizable multicellular complexity to roughly 600 million years ago, and the Cambrian explosion (around 540 million years ago) is where diverse animal body plans and their organ systems, including nervous systems, become conspicuous in the fossil record.8 Even “brains,” in any familiar sense, are a comparatively recent evolutionary product of animal history.
And yet, despite billions of years of evolutionary “experimentation” across oceans, lakes, microbial mats, reefs, forests, and ice ages, technological intelligence—the kind that builds radios, telescopes, and spacecraft—emerged only once, and only under a narrow set of ecological circumstances. That doesn’t prove intelligence is unique in the universe, but it strongly suggests that it’s constrained: not every habitable world is equally likely to produce it, and not every habitable environment on a given world is equally likely to nurture it. In other words, the universe may contain places where life is possible, but far fewer where the long chain of transitions to technology can reliably occur.
Long before our ancestors spent most of their time on the ground, their life was shaped in trees—an environment that rewards three-dimensional vision, fine depth perception, color discrimination, and exquisitely controlled hands, arms, and digits for climbing, grasping, and precise manipulation. When some of those primates began living in woodland–savanna mosaics, bipedal walking freed the already dexterous hands for carrying and tool use, effectively repurposing “arboreal skills” into a terrestrial, cumulative technology pathway. That transition—tree-built perception and manipulation deployed on open ground—may be a rare ecological combination, and it helps explain why large brains can evolve in many settings, yet only once has intelligence ratcheted up into an industrial civilization.9
If only a limited set of planetary and ecological conditions can support the long chain from chemistry to cognition, then evolution is repeatedly solving the same engineering problems under similar constraints. And once you narrow the environments where intelligence is even plausible, you also narrow the range of bodies that can thrive there. That doesn’t point to identical aliens—but it does make wildly un-Earthlike “monster designs” (think War of the Worlds with Tom Cruise) less likely, and a recognizable family resemblance—convergent, familiar motifs—more likely.
How the Ratchet Turns

As soon as hominins became more committed to life in woodland–savanna mosaics, a new class of problems moved to center stage: social problems. On open ground, survival often depends less on a single clever trick than on navigating alliances, rivalries, status, reciprocity, and betrayal inside a group—and sometimes between groups. That framing goes back to classic arguments that intellect evolved largely to manage social life.10 It’s also the logic behind the “social brain” tradition: as group life becomes more demanding, selection favors minds better at tracking relationships, intentions, and reputations at scale.11
In that world, intelligence isn’t just tool-use; it’s the ability to detect cheaters and liars, anticipate others’ moves, and calibrate cooperation—exactly the kind of psychological machinery psychologists Leda Cosmides and John Tooby argued would be favored in repeated social exchange.12 And once you have minds built for social exchange, you have the psychological preconditions for reciprocal altruism—the willingness to help now in expectation of help later—which is one of the foundations of large-scale human cooperation that builds civilizations.13, 14 And when resources are patchy and competition is real, intergroup conflict can further raise the stakes, selecting for coordination, cohesion, and strategic behavior within coalitions.
Language doesn’t merely label the world; it lets individuals coordinate plans, negotiate alliances, transmit know-how, and build reputations—turning individual cognition into group cognition.15 Most importantly, humans crossed a threshold into cumulative culture: shared intentions, teaching, and high-fidelity social learning allow useful innovations to persist and improve across generations, creating the technological “ratchet” that other smart animals rarely achieve. Humans are distinctive because our know-how doesn’t reset each generation; it accumulates—tools beget better tools in a cultural “ratchet.”16 But brains are expensive tissue, so any species that evolves them must solve an energy-budget problem—through diet quality, provisioning, and other tradeoffs that reliably pay the bill.17, 18
This is where fire and cooking matter: cooking increases the calories you can extract from food and reduces the time and gut investment needed to process it, freeing energy for a larger brain.19 Just as important, controlled fire is a gateway technology—warmth, protection, nighttime sociality, and eventually high-temperature chemistry.20 Intelligence exists in many lineages; an industrial pathway likely requires intelligence plus a controllable, high-energy lever and a dry-work environment where tools can persist, accumulate, and improve.
A skeptic might object that oceans already produce impressive intelligence—dolphins and whales, for example—so why didn’t technology take off there? The point isn’t that marine brains can’t be sophisticated; it’s that an industrial pathway needs more than cognition: it needs persistent tool chains and a controllable high-energy lever.
And that points to a subtle filter. Oceans can produce impressive cognition—on Earth in the form of cetaceans and, perhaps, octopuses—but water is hostile to the industrial ratchet: fire is hard to control, durable toolkits are harder to store and transport, and metallurgy is effectively off the table.21 On land—especially in variable, resource-patchy habitats—portable tools, teaching, and cooperative planning can compound. That’s why the story is less “savanna created intelligence” than “a particular ecological combination made technology cumulative.”
The decisive step wasn’t just smarter brains—it was solving the problem of memory across generations. Most animals, even very intelligent ones, learn largely within a lifetime. When the individual dies, much of that hard-won knowledge dies with it. Humans broke that bottleneck. We became a species whose best ideas can outlive their inventors, because we can store information—in other minds, in shared practices, and eventually in artifacts and symbols—and then transmit it with unusually high fidelity. That’s the ratchet: innovation that doesn’t evaporate.
This requires more than imitation. It requires teaching, joint attention, and shared goals—what some researchers call “shared intentionality”—so that skills can be transferred efficiently and improvements can accumulate rather than drift. Once a lineage crosses that threshold, technology starts to behave less like a set of clever tricks and more like a compounding system.22
Language then acts as a compression algorithm for culture. It turns “watch me do this” into “here’s the rule,” making know-how portable, scalable, and teachable to people who never saw the original problem. It also enables coordination at scale—plans, roles, promises, reputations—so groups can build things no individual could.23, 24
And on land, cultural memory can be externalized. Tools can be cached, improved, standardized, and inherited. Eventually information migrates into marks, symbols, and writing—literal memory outside the brain. At that point, progress accelerates, because each generation starts not from scratch, but from a platform built by those before it.
So, What Might ET Look Like?

What does all of this imply about the appearance of extraterrestrial intelligence? Not that aliens will be “human,” as if evolution everywhere is destined to reproduce our exact anatomy. Evolution is too contingent for that. But it’s not completely random. If intelligence that builds technology is constrained by chemistry, physics, and ecology, and if similar constraints repeatedly force similar solutions, then truly alien intelligence may come with a surprisingly familiar set of design motifs.
Start with the big one: directional movement in a complex world. Once organisms become large, mobile, and behaviorally flexible, the “engineering problem” of getting around efficiently tends to favor bilateral symmetry—a front and a back, a left and a right—because it streamlines movement and organizes the body around a direction of travel.25 Bilateral movers also tend toward cephalization: concentrating senses and information processing at the leading end, because that’s the part that meets the environment first.26 In plain terms, if something is navigating the world and making decisions quickly, it’s likely to be built around a “front end” where sensing and control are concentrated (and, less glamorously, but no less practically, a “waste end” where, well, waste products are dispensed).
Then comes the key requirement for technology: manipulation. A brain can model the world all day, but technology requires a high-bandwidth interface between mind and matter: appendages capable of precise, repeatable control. On Earth, that role is played by hands and digits—originally honed for climbing and grasping in trees—later repurposed for shaping objects, carrying toolkits, and building cumulative tool traditions. This doesn’t mandate five fingers, or even “arms” in the human sense. But it strongly suggests that technological intelligence will be paired with one or more manipulators—structures evolved for fine control, not just locomotion.
Finally, technological intelligence requires culture that compounds. If each generation must rediscover the basics from scratch, there is no sustained trajectory toward industry. The transition to cumulative culture—high-fidelity social learning, teaching, shared intentions, and the ability to preserve and improve innovations—creates the technological ratchet.27, 28, 29 Once a lineage crosses that threshold, intelligence becomes more than cleverness; it becomes a system that accumulates, and that accumulation eventually externalizes into tools, structures, symbols, and records. In other words: even if the bodies vary, a technological species will likely have something analogous to language, teaching, and external memory—because without those, the ratchet stalls.30, 31
Put those pieces together and a rough “family resemblance” emerges: not humans exactly, of course (there’s contingency again), but mobile, bilateral organisms with front-loaded sensing/processing, manipulators, and a cultural transmission system that lets knowledge outlive individuals. That is the opposite of the cinematic monster. It’s less a nightmare creature and more a familiar engineering solution—built under unfamiliar skies.
Caveats and Conclusions

A skeptic’s first objection is an obvious one, namely that Earth is a sample size of one. Any story about extraterrestrial biology risks generalizing from the particular to the universal. That caution is warranted. Our lineage’s specific path—arboreal heritage, bipedalism, the woodland–savanna mosaic—may be historically contingent. Different worlds could produce intelligence by different routes (although it is not clear how), and even on Earth, high cognition appears in multiple lineages.32 So, the claim here should be modest: not “ET must look like us,” but “constraints bias evolution toward a limited menu of workable solutions.”
The Grey is a popular alien figure because it’s a humanoid distilled to a few cues: bilateral symmetry, a head-dominated body plan, and exaggerated eyes. Those broad motifs actually align with what a constraint-based view would predict. But the specific “Grey” is also a cultural icon with a traceable modern history—especially after Whitley Strieber’s Communion (1987) and its widely reproduced cover image. So, it’s better understood as a modern cultural meme than as a biologically derived prediction.
The “Grey” alien.

A second objection is this: what if technology doesn’t require fire and metallurgy? Perhaps some species develop a different high-energy lever or a different materials pathway. That’s possible. But the broader point still holds: industrial-scale technology requires some means of harnessing scalable energy and building durable tool chains. Whatever substitutes exist, they still must operate under the same physical logic: persistent artifacts, repeatable processes, and the ability to store and transmit complex know-how over long spans of time.
For example, we know Earth’s atmosphere didn’t always permit fire because oxygen arrived late—and we can see that transition written in the rocks. For much of the Archean, oceans carried abundant dissolved ferrous iron (Fe²⁺); when oxygen produced by early photosynthesizers (the microbes once called blue-green algae, now known as cyanobacteria) began reaching surface waters, it oxidized Fe²⁺ to insoluble ferric iron (Fe³⁺) that precipitated in vast banded iron formations (BIFs), essentially recording oxygen’s first sustained appearance as it was “soaked up” by iron sinks. Around 2.4 to 2.3 billion years ago—during the Great Oxidation Event—atmospheric O₂ rose from trace levels to much more significant amounts, while BIF deposition eventually waned as the ocean’s iron sink diminished and broader oxygenation progressed.
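For readers who want the underlying chemistry, a simplified net reaction (a common textbook idealization; the actual mineralogy of BIFs is more varied) is:

4Fe²⁺ + O₂ + 10H₂O → 4Fe(OH)₃ + 8H⁺

The insoluble ferric hydroxide settled out of the water column and, over geologic time, dehydrated into the iron oxides (such as hematite, Fe₂O₃) preserved in the banded iron formations.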
That history matters for our argument because recognizable, combustion-driven technology depends not just on brains, but on a planet reaching an oxygen state that reliably supports open-air fire and high-temperature chemistry. This is the “oxygen bottleneck” for technospheres, and it is a useful argument because it highlights that combustion-driven technospheres are not guaranteed by intelligence alone—they depend on planetary conditions that enable certain kinds of energy use.33
So, the claim is not inevitability, but probability. Constrain the environments, and you constrain the solutions. And that means the wildest designs of monster cinema are not the most realistic expectation. They are the least constrained.
Science fiction thrives on the alien as shock: the creature that breaks every rule and looks like nothing that ever walked, swam, or crawled on Earth. Alien is a masterpiece precisely because it is so unconstrained—a physiology engineered for dread. Great theater. But real evolution does not have that freedom. Biology is boxed in by chemistry, by solvents, by energy budgets, by gravity and materials, by the logic of movement and sensing, and by the requirements of cultural accumulation.
That’s why the best prediction for extraterrestrial intelligence is not a monster, but a constrained organism that has solved a familiar set of problems in a workable way: a body built for efficient movement, sensors and processing concentrated forward, appendages capable of precise manipulation, and a culture that can store and transmit information across generations so that technology compounds. The details will be alien. The motifs may not be.
If we ever detect a true technosignature—or one day meet its makers—the surprise may not be how strange they are. The surprise may be how recognizable the underlying design logic feels.
“A brain biased toward seeing meaning rather than randomness is one of our greatest assets. The price we pay is occasionally connecting dots that don’t really belong together.”1 –Rob Brotherton
For nearly a decade, a mysterious ailment known as “Havana Syndrome” has been portrayed as proof that American diplomats and intelligence officers have been attacked by a foreign adversary using a secret energy weapon. Few outlets have promoted this narrative more forcefully than the CBS television news magazine 60 Minutes, which has presented the saga as a chilling geopolitical mystery. Yet after years of investigation, the U.S. intelligence community has concluded that such attacks are “highly unlikely.” So how did one of America’s most respected news programs become so invested in a story that the evidence increasingly contradicts? The answer tells us less about the shadowy world of spycraft and secret weapons, and more about the psychology of belief, the power of social contagion, and the media’s enduring fascination with invisible enemies.
60 Minutes is widely regarded as one of the most prestigious and successful news programs in American television history. For decades it has been the gold standard in investigative reporting and has won every major award in broadcast journalism since its inception in 1968.2 Over the past decade the program has aired four exposés on “Havana Syndrome,” a mysterious clustering of health complaints first noticed by U.S. government officials in Havana, Cuba in 2016 (hence the name).3 However, for the past three years its reputation has been tarnished by two separate intelligence assessments that have challenged and discredited key elements of their investigations.4
The program’s third report, which aired in March 2024 and claimed that an elite Russian military unit was targeting Americans with an energy weapon, immediately prompted calls for a renewed congressional investigation.5 Yet the CIA Director in the Biden Administration, William Burns, responded to the broadcast by issuing a firm assurance that the claims had been thoroughly investigated and were unfounded.6 This conclusion was reaffirmed in an updated intelligence assessment that was issued in 2025.7
On Sunday March 8, 2026, 60 Minutes aired its fourth investigation into “Havana Syndrome” in nine years, once again making dramatic claims that American spies, diplomats, and military personnel have been targeted by a mysterious weapon, first in Havana, and later around the world.8 The three previous segments were critiqued in the pages of Skeptic for relying heavily on speculation with limited physical evidence while largely excluding skeptical perspectives.9 The latest chapter in this saga is no different, repeating old, discredited claims and introducing a striking new allegation that the government purchased a Havana Syndrome-type device on the Russian black market.10
The “Attacks” on Chris and Heidi

In the latest segment, narrator Scott Pelley interviews Chris (last name withheld), who worked on top secret spy satellites near Washington, DC, and claimed to have been attacked several times between August and December 2020. Pelley implies that Chris had been targeted with an energy weapon, describing him as having been “struck by an unseen force.” He said the first incident felt like someone punched him in the throat, his left ear was clogged, and a sharp pain shot down his left arm. During the second incident, in the kitchen of his Virginia home, he suddenly felt like a vice was squeezing his head, and he became disoriented, confused, and dizzy. A third episode occurred in his living room when he was stricken with a cramping of his back muscles “like a charley horse,” accompanied by a hot, sharp pain. In the final episode, he woke up feeling like a vice was gripping his brainstem and he experienced “a full body convulsion.”
While the segment frames Chris’s experience as a targeted strike, his clinical presentation is consistent with common neurological and psychological conditions such as migraines and anxiety disorder. Migraines often cluster over several months and grow progressively worse before resolving. His description of vice-like pressure is commonly reported by migraine sufferers. Symptoms typically involve head pressure and pain, dizziness, confusion, disorientation, muscle spasms, and throat sensations. They often include unilateral symptoms (affecting one side of the body) such as the clogging of his left ear and the shooting pain down his left arm.
That he experienced several distinct episodes with differing symptoms raises further questions about the likelihood of an attack. Why would the same weapon produce such different effects? Chris’s other symptoms, such as throat tightness (globus) and muscle spasms that grew progressively worse, may reflect anxiety in someone who was working in an extreme-stress environment (a classified spy satellite program). The least likely explanation for his symptoms is an attack by a directed energy weapon.
His partner Heidi described waking up with joint pain that was concentrated in her left shoulder. Pelley said that “bones in her shoulder were dissolving,” and she was diagnosed with osteolysis, which required an operation. The implication was that she too had been struck with the same mysterious weapon. But osteolysis of the shoulder is a well-known condition that is increasingly diagnosed in women. It is associated with repetitive strain injuries, weightlifting, trauma, and inflammation, not mysterious external agents.11 Heidi’s shoulder condition is an entirely different pathology from that of Chris. It is far more probable that two people living together simply developed two unrelated conditions.
Pelley then mentions several other victims who supposedly had similar symptoms: an FBI agent who experienced a drilling sensation in her right ear; a Commerce Department official who reported severe head pressure and ear pain; and the wife of an official who felt a piercing pain and pressure in her left ear and a headache. He asserts that a striking aspect of these stories is that “people who never met tell it the same way.” A more plausible explanation is that they were suffering from vestibular disorders: conditions that affect the inner ear and the parts of the brain that regulate balance and spatial awareness. The symptoms described in the 60 Minutes interviews include ear pain and pressure, headaches and head pressure, and unusual sounds and sensations in the ear. The victims’ descriptions (unusual ear sensations, stabbing pains, a perception of drilling, pulsation, or vibration) would be familiar to any vestibular neurologist who treats migraines and inner ear conditions.12 It is estimated that one-third of adults over 40 will experience vestibular dysfunction.13
The Omission of Key Information

The 60 Minutes narrative survives primarily through a strategic omission of key facts. It fails to mention that the foundational studies in the Journal of the American Medical Association (JAMA) that gave rise to the belief that a mysterious weapon had injured American personnel in Cuba were mired in controversy. This included internal ethics complaints, the withdrawal of authors, and accusations of scientific misconduct. In doing so, the program presents a house of cards as a fortress of settled science. The first study appeared in JAMA in February 2018, and caused a sensation with claims that the patients suffered brain damage.14 Prior to its publication, UCLA neurologist Dr. Robert Baloh, who developed some of the tests that were used in the study, was asked by the editors to review the findings. He found the manuscript to be laden with inconsistencies, described the claims as “science fiction,” and recommended against acceptance.15
Three of the study’s original authors removed their names just prior to publication because they were refused access to the data or earlier revisions of the manuscript. One of them, Dr. Carey Balaban, an ear, nose, and throat specialist at the University of Pittsburgh, was so disturbed by this that he filed an ethics complaint over what he described as potential scientific misconduct.16 When the study appeared, there were calls by neurologists for the methods to be clarified or the study retracted.17 A later attempt to clarify the study’s findings was described by University of Edinburgh neurologist Sergio Della Sala as incomprehensible.18 Prior to publication, information had been leaked to the media that several of the patients suffered white matter tract changes in their brains, prompting dramatic headlines about brain damage. However, when the study appeared, the prevalence of white matter changes fell within a normal range.19
A second JAMA study, published in 2019, was equally controversial. It found brain anomalies in a small group of victims, once again prompting sensational headlines about brain damage. The study’s lead author, Dr. Ragini Verma, even described the differences in brain images of “Havana Syndrome” victims and a control group as “jaw-dropping.”20 Yet such findings are common in small cohorts and are consistent with what one would expect to see in a group of people under prolonged stress. The authors even admitted that the anomalies were so minor that they could have been caused by individual variation.21 Another problem was that 12 of the Havana Syndrome patients had pre-existing histories of concussion compared to none in the control group. Despite this, many media outlets had a field day citing a few rogue scientists who proclaimed that it was clear evidence of an attack by a microwave weapon.
Dubious Beginnings

The 60 Minutes segment also failed to mention that social contagion may have played a role in the initial spread of “Havana Syndrome.” CIA analyst Fulton Armstrong would later reveal that the undercover intelligence agent in Havana who first reported the mysterious sounds, and who believed they were responsible for his health issues, had engaged in a vigorous campaign to persuade colleagues that the sounds were significant. “He was lobbying, if not coercing, people to report symptoms and connect the dots,” Armstrong said.22 The man, who has since been dubbed “patient zero,” later attended a gathering of embassy personnel and played the recording of his “attack,” encouraging them to report their symptoms, as he was convinced that they too had been targeted. His recording was analyzed by government scientists and identified as crickets.23 In fact, eight of the first group of victims in Cuba who reported feeling unwell and hearing sounds recorded their “attacks.” The sounds were later identified as the mating call of the Indies short-tailed cricket.24
Soon American and Canadian diplomats stationed in Havana were on the lookout for strange sounds and health complaints. Eventually the U.S. government alerted all of its active military personnel and embassy staff around the world to be vigilant for mysterious sounds and “anomalous health incidents.” In response, there were over 1,500 reports of possible attacks. The problem with these alerts is that “Havana Syndrome” symptoms are common in the general population and include headaches, nausea, dizziness, forgetfulness, difficulty concentrating, tinnitus, fatigue, facial pressure, hearing loss, ear pain, trouble walking, depression, irritability, and even nose bleeds.
One study found that the average person experiences five different symptoms in any given week: 36 percent noted fatigue; 35 percent reported headaches; nearly 30 percent said they had insomnia; 15 percent had difficulty concentrating; 13 percent reported memory problems; and roughly 8 percent noted nausea and dizziness.25 These symptoms overlap with those attributed to “Havana Syndrome.” When one eliminates claims of brain damage and hearing loss (which were never demonstrated), one is left with an array of exceedingly common symptoms.
A Fixation on David Relman

The 60 Minutes segment includes extensive interviews with Stanford University microbiologist David Relman, who headed two panels that both concluded that pulsed microwave radiation was likely involved in some cases. As with the earlier 60 Minutes investigations, the government intelligence assessments on “Havana Syndrome” have rejected his conclusions. One of Relman’s panels said it was not possible to assess the involvement of social contagion because there was no data on the early spread.26 Yet the spread from “patient zero” to fellow spies and diplomats in Havana has been well documented and was widely known over a year before the panel issued its findings in December 2020.27 The same panel interviewed fringe figures such as Dr. Beatrice Golomb, a researcher at the University of California, San Diego, known for her extreme views on mass psychogenic illness, which she believes does not exist.28 Relman’s 2022 panel concluded that social contagion could not have affected spies and diplomats operating in Havana because they were highly educated and trained to deal with stress.29 This is a common fallacy.30 These conclusions may not be surprising given that Relman’s panels failed to interview a single prominent skeptic.
Scott Pelley complains that the panels’ conclusions have been ignored by the intelligence community. Relman told Pelley that it was embarrassing and insulting that the victims have been “dismissed as malingerers or people who are manufacturing things.” Pelley concurred, saying that the American government “has doubted their stories” and that the victims have been labeled “delusional.” These claims are misleading. In 2023, the Office of the Director of National Intelligence stated unequivocally that it was the consensus of the intelligence community that the symptoms exhibited by “Havana Syndrome” sufferers are real, but that it was “highly unlikely” the stimulus was a directed energy weapon wielded by a foreign adversary. Instead, it attributed the complaints to an array of factors including pre-existing conditions, conventional illnesses, environmental causes, and social factors (a clear reference to mass suggestion and social contagion). The intelligence assessment explicitly states that its findings “do not call into question the very real experiences and symptoms that our colleagues and their family members have reported.”31 A second intelligence assessment, issued in 2025, reached a similar conclusion,32 while a recent study by the National Institutes of Health found no evidence of brain damage.33
The Portable Microwave Device

The 60 Minutes segment also reported that in 2024 undercover U.S. government agents obtained a portable microwave weapon from a Russian criminal network and have since tested it on animals. The Pentagon-funded mission to obtain the weapon reportedly cost about $15 million. For the centerpiece of the story, few details are provided. Pelley said, “Our confidential sources tell us the still classified weapon has been tested in a U.S. military lab for more than a year. Tests on rats and sheep show injuries consistent with those seen in humans.” The problem with this claim is that there is no credible evidence that the victims of “Havana Syndrome” were injured by a weapon. Nor did 60 Minutes break this story; that distinction goes to CNN, which reported this year on its own investigation into the same device, and whose perspective stood in sharp contrast to the 60 Minutes claims. CNN’s sources said there was ongoing debate and skepticism over attempts to link the device to “Havana Syndrome.”34
The claims by 60 Minutes are based on anonymous sources rather than technical reports; there are no test results, and the program did not even obtain a picture of the device. Even after the device was acquired, the updated assessment on “Havana Syndrome” published in 2025 continued to maintain that the involvement of an adversarial weapon was highly unlikely. The U.S. and foreign governments have long conducted research on potential new weapons, so the existence of the Russian device should come as no surprise. Yet there is a big difference between testing a device and producing an effective, practical weapon, with a major impediment being the laws of physics. The details surrounding the device and who created it are nebulous. For instance, how could a Russian criminal syndicate obtain such a highly classified device and offer it for sale on the black market without the knowledge of Russian intelligence, or U.S. intelligence for that matter?
A Media Zombie That Won’t Die

This is not the first claim of its kind. In February 2026, the Washington Post reported that a Norwegian government researcher had built a device that was purportedly behind the Havana Syndrome “attacks.”35 Unnamed sources claimed that after exposing himself to pulsed microwave radiation, he developed neurological symptoms consistent with those of the victims. The report stated that after the Norwegian government informed the CIA, officials from both the White House and the Pentagon visited Norway on two occasions to learn more. However, the Norwegian government says it knows nothing about it. An investigation by one of the country’s leading newspapers was unable to identify any such researcher, while a microwave expert at the Norwegian University of Science and Technology, Trym Holter, said any such study would have required ethics approval and would have been carried out in a controlled fashion with test subjects. He said that for someone to have conducted such an experiment on himself would have been “completely crazy,” and he questioned whether any such experiment had ever occurred.36
This pattern of credulous reporting is not limited to CBS News or the Post. Recently, British journalist Nicky Woolf wrote a sensational article in the Sunday Times claiming that the evidence for a directed energy weapon is now overwhelming, while omitting the U.S. intelligence community’s own conclusions to the contrary.37 He stated (falsely) that “many of the early cases didn’t know about each other,” and repeated the debunked claim that during the recent U.S. raid in Venezuela, the American military used a directed energy weapon to incapacitate enemy soldiers.38
Historical Precedents

Unfortunately, 60 Minutes has repeatedly focused on one side of the story instead of presenting competing perspectives. A key problem when evaluating controversial claims is that once investigators become convinced that a hidden adversary exists, the belief itself can shape how evidence is interpreted. History is replete with examples. During the Salem witch-hunts of 1692, an idea spread that witches were attacking members of the community. Before long, over 200 residents were accused of consorting with the devil. During the “Red Scare” of the 1950s, a belief spread that communist sympathizers had infiltrated communities across the United States. In response, scores of innocent people were blacklisted, often on the flimsiest of evidence.
The enduring lesson of “Havana Syndrome” is not secret weapons but the psychology of belief. The producers at 60 Minutes continue to focus on exotic explanations while ignoring mundane ones. The colloquial term for this is “doubling down”: stubbornly clinging to a discredited hypothesis in the face of compelling evidence to the contrary. In the case of CBS News, it may be a subconscious attempt to avoid the embarrassment of having to correct the record. The continued advocacy by David Relman and Scott Pelley for the microwave weapon hypothesis, despite intelligence assessments to the contrary, exemplifies what psychologists call “belief perseverance”: the well-documented tendency to maintain deeply held beliefs in the face of contrary evidence.
Perhaps the most troubling reason for this one-sided reporting is a glaring conflict of interest: the producers behind all four 60 Minutes segments are marketing a book on the subject. The Havana Syndrome: Secret Weapons, a Government Cover-Up, and the Greatest Spy Mystery of Our Time is scheduled to be published this fall, with an introduction by none other than Scott Pelley himself.39 By continuing to air these “exposés,” CBS News is effectively providing a multi-million-dollar infomercial for a product that relies on a spy mystery narrative to drive sales. The authors say their reason for writing the book is “to tell the whole story,” including “the cover-up.” This is ironic given that their reports have consistently left out key parts of the narrative.40
Chasing Shadows

The histories of science and journalism are replete with examples of how institutions can cling to persuasive stories long after the evidence begins to unravel. In the 1840s, Hungarian physician Ignaz Semmelweis produced strong empirical evidence that handwashing by physicians dramatically reduced the deaths of mothers from childbed fever, yet his findings were resisted for decades by the medical establishment.41 More recently, in the lead-up to the Iraq War, many media outlets published erroneous stories that Saddam Hussein had obtained weapons of mass destruction (WMDs) even though United Nations weapons inspectors in the field insisted they had found no clear evidence.42 This led to an apology by The New York Times for publishing claims that were never independently verified, and to the Washington Post acknowledging that skeptical stories were frequently “pushed to the back of the paper” while pro-WMD claims dominated the front pages.43
When investigators become convinced of the existence of a hidden adversary, ambiguous evidence can take on new meaning and be seen as patterns in a grand conspiracy. Anonymous sources become credible witnesses. Coincidences appear to be coordinated acts of aggression, and mundane symptoms are redefined as signs of an attack. As physicist Richard Feynman famously warned: “The first principle is that you must not fool yourself—and you are the easiest person to fool.”44 Throughout history, when a seductive explanation takes root—whether in the form of germs, hidden arsenals, or mysterious attacks—ambiguous signs are reinterpreted as confirmation rather than treated with skepticism.
The promotion of ghostly enemies while omitting key facts is a dangerous game because it expends valuable resources at a time of confirmed threats to our homeland. This pursuit of unicorns over horses is a cautionary tale of how fear, expectation, and sensational storytelling can create a phantom menace where there is no concrete evidence that one exists.
Beliefs Have Consequences

Robert Trivers, who died on March 12, 2026, was arguably the most important evolutionary theorist since Darwin. He had a rare gift for seeing through the messy clutter of life and revealing the underlying logic beneath it. E. O. Wilson called him “one of the most influential and consistently correct theoretical evolutionary biologists of our time.” Steven Pinker described him as “one of the great thinkers in the history of Western thought.”
I was Robert’s graduate student at Rutgers from 2006 to 2014. Long before I knew him personally, however, he had already established himself as one of the most original and insightful scientists of the twentieth century. In an astonishing series of papers in the early 1970s, he changed forever our understanding of evolution and social behavior.
The first, published while he was still a graduate student at Harvard, confronted one of the deepest problems in evolutionary theory: how can natural selection favor cooperation between non-relatives? In The Evolution of Reciprocal Altruism, Trivers proposed that cooperation could evolve when the same individuals interacted repeatedly, making it advantageous to help those who were likely to help in return while avoiding cheaters who took benefits without reciprocating—i.e., “you scratch my back, I’ll scratch yours.” The paper offered an elegant solution to the problem of how natural selection can “police the system” and has had enormous implications for human psychology, including our sense of justice, with parallels in other mammals such as capuchins and dogs.
The next year, in 1972, Trivers published his most cited paper, Parental Investment and Sexual Selection. Here he offered a unified explanation for something that had puzzled biologists since Darwin. Writing perhaps the most famous sentence in all of evolutionary biology—“What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring”—Trivers threw down the gauntlet and revealed a deceptively simple principle that reorganized the field. From that insight flowed one of the most powerful and falsifiable ideas in modern science: the sex that invests more in offspring will tend to be choosier about mates, while the sex that invests less will compete more intensely for access to them.
Two years later, in 1974, Robert once again gave birth to an entirely new field of study with Parent-Offspring Conflict. In it, he built on William Hamilton’s theory of inclusive fitness to show that parents and children have divergent genetic interests. Because a parent is equally related to all of its offspring, while each offspring is related to itself more than to its siblings, conflict is built into the family from the beginning. With that insight, Trivers revealed that some of the most intimate and emotionally charged features of life—begging, weaning, sibling rivalry, tantrums, parental favoritism, even the distribution of love and attention within families—all could be understood as the product of natural selection acting on family members with conflicting evolutionary interests.
In other papers, Trivers made wide-ranging predictions about the conditions under which parents should produce or invest more in sons than daughters, how female mate choice can favor male traits that benefit daughters, why insect colonies are structured by conflicts over sex ratios, reproduction, and control, and how self-deception may have evolved as a way of more effectively deceiving others.
Each of these papers spawned entirely new research fields, and many researchers have dedicated their careers to unpacking and testing the implications of his ideas. As Harvard biologist David Haig put it, “I don’t know of any comparable set of papers. Most of my career has been based on exploring the implications of one of them.” Indeed, it is hardly an exaggeration to say that his ideas gave birth to the field of evolutionary psychology and the whole line of popular Darwinian books from Richard Dawkins and Robert Wright to David Buss and Steven Pinker.
To know Robert personally, however, was to confront a more uneven and less orderly organism—to use one of his favorite words—than the one revealed in his papers. The man who explained the hidden order in life often struggled to impose order on his own. “Genius” is one of the most overused words in the language, with “asshole” not far behind, and I have known few people who truly deserved either label. Robert deserved both. He could be genuinely funny, extraordinarily generous, and breathtakingly perceptive, but also moody, childish, and needlessly cruel.
Bob and other committee members after my dissertation defense (2014) | Bob with undergraduate students (Jamaica, 2010)
Robert taught me that writing meant endless revision and attention to the tiniest of details. He went through seven drafts of Parental Investment and Sexual Selection and frequently quoted Ernst Mayr telling him that papers are never finished, only abandoned. He used to call me “slovenly,” but more than once returned a draft of mine with a piece of his own dried lettuce stuck to it.
He had an uncanny ability to see the obvious. I used to joke that one reason he was so good at explaining behaviors the rest of us took for granted was that he was like an alien visiting our planet trying to make sense of our strange habits—why we invest in our children, why we are nice to our friends, why we lie to ourselves. He told me that conflict with his own father was part of the inspiration for parent-offspring conflict, and that one of the observations that led to his insight into parental investment came from watching male pigeons jockeying for position on a railing outside his apartment window in Cambridge.
Robert also had a respect for evidence and for correcting mistakes that I’ve rarely seen among academics, a group not known for their humility. He cared more about truth than about his reputation and retracted papers, at great cost to himself and his career, when he thought there were errors. He also knew that he was standing on the shoulders of the giants who had come before him. He wrote that “the scales fell from his eyes” upon reading Bateman’s 1948 Heredity paper on fruit flies, which showed that males differ more than females in reproductive success, crediting it for his insights into why males compete more for mates and females tend to be choosier, and he acknowledged that George Williams had already anticipated the importance of sex-role-reversed species in Parental Investment and Sexual Selection. Indeed, he once described most of his insights into social behavior as those of W. D. Hamilton plus fractions.
He was a lifelong learner with a willingness to do hard things. After his astonishing early success, he could have done what many academics do: stay in his lane, guard his territory, and spend the rest of his career commenting on ideas he had already had. Instead, in the early 1990s he saw that genetics mattered and spent the next fifteen years trying to master it. The result was Genes in Conflict, the 2006 book he wrote with Austin Burt, which pushed his interest in conflict down to the level of selfish genetic elements. Few scientists, after making contributions as important as he had, would have had the curiosity, humility, and stamina to begin again in an entirely new area.
Trivers was a great teacher, though not always in the ways he intended. He often asked dumb questions (“What does cytosine bind to again?” in the middle of a genetics seminar) and made obvious observations (“Did you know that running the air-conditioner in the car uses gas?”). But as he liked to say, “I might be ignorant, but I ain’t gonna be for long.”
He could also be volatile and aggressive and there were many times when he threatened to kick my ass. I may have been the only graduate student who ever had to wonder whether he could take his advisor in a fight. Once, over lunch at Rutgers, I asked about a cut on his thumb after he had returned from one of his frequent trips to Jamaica. He matter-of-factly told me that he had just survived a home invasion in which two men armed with machetes held him hostage. He escaped by jumping from a second-story window, rolling downhill, and stabbing both men with the eight-inch knife he carried everywhere he went. He was 67 at the time.
Bob, evolutionary biologist Virpi Lummaa, me (Robert Lynch). Finland, January 2020.

The benefits of being Trivers’s only graduate student were obvious. He was a brilliant man, and nobody else could speak with such clarity about the impact of operational sex ratios on parental investment and male mortality while rolling a joint. The costs were obvious too. He could be erratic and often seemed either indifferent to, or unaware of, the social consequences of what he said. This often left him professionally isolated, and left me with few academic relationships I could count on when it came time to find a job.
One of the last times I spoke with Robert, a fall had left his right arm nearly useless. He described it as “two sausages connected by an elbow.” He was a chaotic and deeply imperfect man, but also one of the few people whose ideas permanently changed how we understand evolution, animal behavior, and ourselves. Steven Pinker wrote that “it would not be too much of an exaggeration to say that [Trivers] provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.” That seems just about right to me.
His ideas are some of the deepest insights we have into human nature, animal behavior, and our place in the web of life. The mark of a great person is someone who never reminds us of anyone else. I have never known anyone like him.
I’ll miss you, Robert. You asshole.
Bob rolling a joint in NYC, 2012.

Robert Ludlow “Bob” Trivers, one of the most consequential evolutionary biologists of the twentieth century, died on March 12, 2026, at the age of 83. In an extraordinary burst of intellectual creativity between 1971 and 1974, he published four papers that permanently altered how evolutionary biologists—and eventually the public—understood cooperation, conflict, selfishness, and deception in the natural world. These papers presented original theories of reciprocal altruism (1971), parental investment and sexual selection (1972), facultative sex ratio adjustment (1973), and parent-offspring conflict (1974). Each paper addressed a deep puzzle in evolutionary theory; together they laid much of the foundation for what would become the field of sociobiology and, later, evolutionary psychology.
His paper on parental investment and sexual selection (1972) proposed that the sex which invests more in offspring becomes the choosier mate. This theory explained with elegant simplicity why males and females so often behave differently across the animal kingdom. The paper arose from watching male and female pigeons out the window of his third-floor apartment in Cambridge, Massachusetts, a reminder that transformative science can begin with simple, careful observation.
Robert Trivers (photo courtesy of Alelia Trivers Doctor) | A younger Robert Trivers
He was also among the first to explain self-deception as an adaptive evolutionary strategy, describing the concept in 1976 and arguing that we deceive ourselves in order to deceive others more convincingly, a counterintuitive idea that has since attracted enormous attention across psychology, philosophy, and the social sciences.
Robert’s books included Social Evolution (1985), widely praised as among the clearest accounts of sociobiological theory; Natural Selection and Social Theory (2002), a collection of the early influential papers outlined above; Genes in Conflict (with Austin Burt, 2006), whose central argument is that genomes are not harmonious but are instead sites of constant struggle; and The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life (2011), which brought his ideas about self-deception to a popular audience. He also chose to be the author of his own story in his memoir, Wild Life (2015).
Robert Trivers was born on February 19, 1943, in Washington, D.C., the son of Howard Trivers, an American diplomat, and Mildred Raynolds Trivers, a renowned poet. Growing up in a diplomatic household, Robert attended schools in Washington, D.C., Copenhagen, and Berlin before enrolling at Phillips Academy and later Harvard, where he initially studied American history before making an important pivot to biology.
He studied evolutionary theory with Ernst Mayr and William Drury at Harvard from 1968 to 1972, earning his PhD in biology. While a graduate student at Harvard, Robert accompanied Ernest Williams on an expedition to study the green lizard in Jamaica's countryside. Robert met his first wife, Lorna Staple, in Jamaica; he fell in love with her and the island at the same time. Robert and Lorna wed in 1974 in Cambridge, Massachusetts, and they had four children together: a son, Jonny, twin girls, Natasha and Natalia, and another daughter, Alelia.
Robert was on the faculty at Harvard University from 1973 to 1978, then moved to the University of California, Santa Cruz, where he remained until 1994, before joining the faculty at Rutgers University. Robert was named one of the greatest scientists and thinkers of the 20th century by TIME magazine in 1999. In 2008–09 he was a Fellow at the Berlin Institute for Advanced Study. He was awarded the 2007 Crafoord Prize in Biosciences by the Royal Swedish Academy of Sciences for his fundamental analysis of social evolution, conflict, and cooperation—widely considered the highest honor in evolutionary biology and a prize often mentioned alongside the Nobel in scientific prestige.
His life outside the laboratory was as unconventional as his science. Robert met Huey P. Newton, Chairman of the Black Panther Party, in 1978, when Newton applied from prison to do a reading course with Robert as part of a graduate degree at UC Santa Cruz. The two became close friends and Robert joined the Black Panther Party in 1979. He and Newton later co-authored an analysis of the role of self-deception in the 1982 crash of Air Florida Flight 90.
After Robert and Lorna divorced in 1988, Robert maintained a close relationship with her and with the whole Staple family in Jamaica. He also built a home in Southfield, St. Elizabeth, and spent several months a year in Jamaica for decades. His favorite pastime at his home in Jamaica was to sit on the front veranda and observe the wildlife around him, often joking that the same group of animals would pull up a chair each evening and join him for a glass of red wine, marveling with him at the beauty of the sunset. He made lifelong friends in Jamaica and conducted research from the island on lizards, symmetry, and honor killings over the years. Robert married his second wife, Debra Dixon, in 1997 and they had one child together, a son—Aubrey. They divorced in 2004 but also remained friends until his passing.
Robert Trivers with his five children | With grandson, Lucas Malcolm Howard | With ex-wife Debra, stepson, Diego, and son Aubrey | With three children and seven grandchildren | With granddaughter, Jonisha, and his great grandson, Masiah
Robert Trivers was, by any measure, a complicated man. He was first diagnosed with schizophrenia at the age of 21, a diagnosis modified to bipolar disorder later in adulthood. He could be generous and brilliant in one breath, reckless and destructive in the next. But he was always a loving father, a dynamic teacher, and a caring friend, often listening to loved ones for hours and providing valuable guidance and needed moments of levity. He loved life with tenacity—both studying it and living it.
Towards the end of his life, Robert found the greatest joy spending time with his children, grandchildren, and his great grandson, Masiah. His eyes would light up the moment he saw him.
Robert’s work remained deeply important to him throughout his life. He wanted to make a significant contribution to scientific thought in his lifetime. The theories he produced reshaped how we understand the deep logic of living things. His brilliant contributions to our collective understanding—and his family—are his legacy and will spur important scientific research for years to come.
He is survived by his siblings, Jonathan Trivers (Karen), Ruth Ann Mekitarian, Milly Palmer (David), Howard Trivers (Cathy), and brother-in-law, Souham Harati. Robert is predeceased by his parents, his brother, Aylmer Trivers, and sister, Kate Harati. He is also survived by five children: Jonathan Trivers (Carline), Natasha Trivers Howard (Jonathan), Natalia Barnes (Jovan), Alelia Trivers Doctor, and Aubrey Trivers; ten grandchildren; and one great grandson.
Neuroscience terms are everywhere. If you log into social media, you’re likely to be bombarded with advice on how to “increase neuroplasticity.” You might be told to “stop chasing the dopamine” or given instructions on how to “regulate your nervous system.” Meditation works because it “rewires your brain.”
Self-help gurus and productivity coaches love these terms. They signal depth. They suggest that beneath the surface of our messy behavior there are precise mechanisms that have been identified that can give us the answer to our problems, whatever those problems may be.
The trouble is, despite their suggestion of a mechanism, most of these terms are used in a way that offers no explanatory value. When a wellness blog tells you going for a walk will “regulate your nervous system,” it’s just saying a walk may reduce stress. Whether it actually does reduce stress doesn’t hinge on whether we can describe it in neural terms. Similarly, when an influencer says meditation “changes the brain,” this doesn’t tell you anything new. Anything from practicing a motor skill to remembering this sentence changes your brain. The question is whether it changes it in a way that’s helpful. For that, the neuroscience doesn’t provide an answer.
Neuroscience terms used in these ways are decorative—a way to jazz up tired old advice and make it seem fresh and new again. By decorative neuroscience, I mean the use of irrelevant or oversimplified brain-based concepts to rhetorically bolster some claim, explanation, or intervention.
Why do we continue to see so much decorative neuroscience? A study published in 2008 found that laypeople rate explanations containing irrelevant neuroscience as better than those that lack it. This has been termed “the seductive allure of neuroscience explanations.” People without neuroscience training interpret the presence of brain-based explanations as meaning we have a much firmer grasp on a concept than we do. When influencers throw in neuroscience terms, it ends up being interpreted as more authoritative.
Many of the uses of decorative neuroscience are innocuous enough. Influencers have discovered a new rhetorical trick to ply their trade, but much of what they’re saying is the same old thing. What’s more worrying is the way decorative neuroscience has started to influence public discourse.
Dopamine talk has become ubiquitous. California psychiatrist Dr. Cameron Sepah recommends “dopamine fasting,” which involves taking a break from things like smartphones and social media. Individuals following his protocol talk about being “addicted to dopamine.” From a neuroscience perspective, these terms make little sense. You can’t take a “fast” from dopamine; it’s a naturally occurring molecule in your brain and critical for movement and motivation. While addictive substances alter dopamine signaling, you can’t be addicted to dopamine itself.
Instead, the term dopamine in “dopamine fasting” is decorative, something Dr. Sepah himself admits: “Dopamine is just a mechanism that explains how addictions can become reinforced, and makes for a catchy title. The title’s not to be taken literally.”
But when the catchy title is taken away, we see the dopamine fast for what it is: advice to take a break from technology to reconnect with ourselves and others. This may be good advice, but it certainly isn’t a new idea, and it has little to do with neuroscience.
More significantly, the term dopamine has become a catch-all for sinful pleasurable activities. The bestselling book Dopamine Nation by Anna Lembke claims anything pleasurable, even reading a book, is potentially addictive because it releases dopamine.
While it’s true that pleasurable activities stimulate dopamine release, superficial similarities don’t mean two things are the same. The reward system of the brain responds to everything from love to video games to chocolate to methamphetamine. The involvement of the same brain regions doesn’t mean these things have the same impact on us. Both addictive drugs and video games stimulate the release of dopamine, but addictive drugs stimulate much more.
But again, the neuroscience is largely irrelevant—we should just look at the behaviors associated with these activities. The majority of methamphetamine users develop a use disorder, resulting in severe health and behavioral problems. Despite how widespread technology use is, technology use disorder is rare; it’s estimated around 3 percent of video game players develop any kind of behavioral problem associated with gaming (like neglecting schoolwork to the point of harming grades), and most of those problems are mild.
Part of the trouble here is pushing our understanding of neural mechanisms beyond their scope and assuming they provide a more solid basis for understanding than simple psychology. But often, the psychological level is much closer to the level of explanation we need than the neuroscientific one. Take the classic misunderstanding of the brain hemispheres: the idea that the left hemisphere is analytical while the right hemisphere is creative. This isn’t just bad neuroscience; it’s bad psychology to boot.
First the neuroscience: it’s true there are hemispheric differences. Some functions occur more in the right or left hemisphere, something neuroscientists refer to as lateralization. Language production is a classic example—for most people, it happens mostly in the left hemisphere. But while you can find some functional differences between the hemispheres, nearly every complex activity involves both sides. Even for analytical tasks like solving math problems, there’s substantial involvement from both hemispheres. The related left-brain/right-brain personality theory claims that some people (the logical type) are “left-brained” and others (the creative type) are “right-brained.” This, too, doesn’t hold—people don’t predominantly “use” one hemisphere over the other.
But again, the neuroscience here is largely irrelevant. We should instead look at the psychology. Is it true that people are either logical or creative? Without looking at the brain, we can determine that no, it isn’t. Far from there being two categories of people (left-brained and right-brained), people fall in different parts of the distribution for each trait. Classic measures of intuitive versus analytical thinking styles have found they’re largely independent. If anything, there may be a positive association between analytical thinking ability and creativity, as scoring higher on an IQ test makes one more likely to score high on a test of creativity. A bad psychological model can’t be bolstered by bad neuroscience. You don’t need a neuroscience mechanism to explain something that doesn’t exist.
If you have a theory of personality types, how to study better, be more productive, or strengthen self-control, that’s great. It should be put to the test to see if it works. What’s important is whether there’s actually an effect. Does reading books often lead to addiction? Are people either analytical or creative? Does going for walks lower stress? These are straightforward questions about behavior. Pointing to possible neural mechanisms doesn’t help—the brain is complex and has many mechanisms. You can come up with all sorts of post hoc possible neural mechanisms to explain theoretical relationships between an activity and an outcome.
It would be nice if we had some specific, clear mechanism like right brain versus left brain to explain the differences between people, but neuroscience can rarely offer something like this. Neuroscience is messy. Looking to neuroscience for wellness or productivity advice is like looking to cell biology for dietary advice. It might provide constraints and guidance for nutrition research, but what you really want is to have people eat stuff and see what happens.
Moving from behavior to neurons might feel like it’s digging down a level, getting rid of the messy complexities of psychology and leaving something more precise and scientific. But our understanding of the brain isn’t clearer or more complete than our understanding of behavior. Neuroscience is full of uncertainty, indirect measures, and interpretive gaps. More importantly, it operates one level down from the level of explanation we generally care about in our everyday lives: observable behavior and experience.
The human brain is a wonderfully complex organ. It’s arguably the most complex thing we’ve discovered in the universe. Neuroscience is a young science with a gargantuan task, made all the harder by the ethics of studying the living brain and the modesty of our tools for probing it. It has enriched our understanding of behavior, perception, and ourselves as biological beings. It’s helped clarify neurological and psychiatric pathologies, and offers hope for a future for treating them. Neuroscience can illuminate constraints and underlying processes, and work alongside psychological research to triangulate how cognition works in different domains. But positing a neural mechanism is no substitute for direct evidence that an intervention actually changes behavior, experience, or well-being.
This article, presented here in abridged form, was originally published in Skeptic magazine Vol. 20 No. 4
For a scientist there is the act of studying life and the process of living it, and I have never wanted the one to overwhelm the other. Yet that is exactly what a life devoted to science will tempt you into—a life of studying and, otherwise, not much living. Yes, you may have a family and a few good friends, but most scientists embrace a sedentary life, often solitary and intensely internal. You concentrate on experiments and theory and perpetual reading. Your small area of study is the focus of your life and it is a focus you share with only a few others.
This kind of life never appealed to me. I was an out-breeder by nature, raised in a diplomat’s home. Foreign countries and languages were part of my upbringing. Since my father served in Europe, I walked through more cathedrals, museums, and art galleries than was healthy for any child. I had no interest whatsoever in European culture, nor in the academic disciplines based on it, but I did know five foreign languages and enjoyed meeting people in their own land, speaking their language, learning about their area of expertise.
When I finally found my intellectual home in evolutionary biology, it offered me exactly the right kind of foreign travel—in the rural, the bush, the exotic, and the wild. Evolutionary biology would take me around the world. And it would show me how to carve knowledge from everything I experienced in these travels with a single, very general logic—what would natural selection favor? How would one best survive and reproduce in these conditions? In short, I signed on to a system of thought that allowed me to study life and live it, sometimes very intensively.
Early Scientific Stirrings

When I was 12 years old I knew I wanted to be a scientist, because it was obvious upon inspection (this was 1955) that none of the other intellectual areas—history, religion, English literature, or the social so-called sciences—provided much hope of actual, sustained intellectual advance. Initially I was attracted to astronomy, with the vastness and beauty of space and the billions of years it had been forming. I got a telescope, read Hoyle’s standard Astronomy text, and came up with the bi-stellar hypothesis for the origin of the solar system.
I liked that astronomy was a science. These people were not fooling around. They measured things and did so carefully. They tested assertions against data, were capable of changing either, and continually attempted to improve the precision of their measurements. When Einstein’s theory that gravity bends light was tested by the apparent change in position of a background star during an eclipse, we had dramatic evidence, measured with great precision, of exactly how great that bending was. But astronomy was not a discipline you could pursue in the 8th grade, so I soon turned to mathematics.
My father happened to have a large number of math books, and out of sheer boredom one day I picked out one entitled Differential Calculus. I was 13 and it took me two months to master the book. It then took me two more to master the book next to it, Integral Calculus. It was a thrill to see that the algebra I knew could generate fields with real predictive and analytic power. That was only part of the beauty of mathematics and its scientific twin: you could learn the whole thing from the bottom up, provided you were willing to put in the necessary concentration and time. The methodology was strictly anti-self-deception. Everything was explicit. Experiments, for example, were described so that others could attempt to replicate them exactly to see if duplicate results were achieved. Mathematical proofs were entirely explicit, every variable and every transformation exactly described.
Harvard and Psychosis

I mastered other corners of mathematics, mainly number theory, infinite series, irrational numbers, limit theory, and so on. I entered Harvard as a sophomore in pure mathematics, but halfway through the year I saw the end of the whole enterprise and it was nowhere I wanted to be: at best, producing work of solid utility but far delayed, perhaps to the year 2250, and of no immediate use. Physics was for me no better because, for one thing, I had no physical intuition at all. When they raised an object off the ground and told us they had thereby given it “negative energy,” I headed for the door. And of chemistry and biology I knew nothing, having never taken a course in either at any level.
So I decided to give up truth for justice and become a lawyer. I would fight the good fights—early 1960s civil rights, poverty law, criminal law where you hoped the criminal was not too guilty, and so on. I asked people what you studied if you wished to pursue law and they said there was no such thing as “pre-law” at Harvard, so I should study the history of the United States. I declared that as my major and spent the next years learning about The Federalist Papers, the Constitution, Supreme Court decisions, and the like.
I developed an almost immediate distaste for the subject because it was obvious from the outset that U.S. history, as it was studied then, was not so much an intellectual discipline as an exercise in self-deception. The major question U.S. historians were tackling at that time was: why are we the greatest society ever created and the greatest people ever to stride the face of the earth? The major competing theories were answers to this question. One such theory cited the benefits of having a society designed by upper-class Englishmen; another, the benefits of an ever-receding frontier—that is, the increasing extermination of Amerindians from East Coast to West. The larger field of history was somewhat more interesting but still consisted of stories from the past, inevitably biased and lacking critical information—and I saw little hope of correcting either defect.
In April of 1964—my junior year at Harvard—I suffered a mental breakdown and was hospitalized for two and a half months. Prior to the breakdown I went through a five-week manic phase, with increasing mental excitation, decreasing sleep, and near-certainty that I was the first person to understand what Ludwig Wittgenstein was actually saying in the Tractatus, even though I was enrolled in my first-ever philosophy course. (Luckily, I was not taking it for credit.) I remember very little else from the manic phase except that I tried self-hypnosis to put myself to sleep. It did not work, and lack of sleep is what brings on a full breakdown. Finally, one night my friends, who had become increasingly concerned, deposited me at the Harvard Infirmary, where I could not answer the elementary question, “Who are you?” “A pregnant woman?” “A new-born baby?” But not, “A thoroughly confused Harvard Junior.”
Then came eleven weeks of self-admitted incarceration at three hospitals for treatment of my psychosis. Incarceration—even when voluntary and in a hospital—is never fun. You are locked in, no longer permitted to move about as you like. But by that time biochemists had come up with compounds that would knock the psychosis right out of you, and then hold it down afterwards to give you time to sleep and recover. After my final release in mid-June I spent the summer reading novels, one a day, and I have always blessed novelists since that summer. As a scientist, I scarcely even read the science I am supposed to, never mind a novel, but that summer novels allowed me to leave my own life and dwell in the lives of others, while my own self relaxed and repaired.
Harvard readmitted me in the fall. I spent most of that semester playing gin rummy all night long—in other words, still resting my brain. But I also decided to take a course in psychology, since my mental breakdown suggested it might be a useful subject to know. It soon became apparent that psychology was not yet a science, but rather a set of competing guesses about what was important in human development—stimulus-response learning, the Freudian system, or social psychology. None were integrated with each other and none could form the basis for an actual science of psychology, so I paid no attention to this subject.
The two law schools I had applied to—alleged to be among the most progressive—turned me down so I graduated with a degree in a field I had little respect for and no intention of pursuing. I returned home to live with my parents, unemployed, and with only vague hope of finding a job.
The Man Who Taught Me How to Think

I did get a job soon enough upon graduating, and in Cambridge, MA, at that. The company itself was a Harvard off-shoot—Education Services Incorporated—set up to attract funding from the National Science Foundation for the purpose of developing new courses for school children. Just as there would be the “new math,” so there would be the “new social sciences.” We would teach five million 5th graders about hunter-gatherers, baboon behavior, the social life of herring gulls, and evolutionary logic, or so we thought.
For the first six weeks my employers had me read in various subjects and attend meetings. One day they called me in and asked me if I knew anything about humans, by which they meant anthropology, sociology, or psychology. I assured them I did not. “Do you know anything about animals?” No indeed. “In that case, you are going to work on animals.” This was because they cared less about the animal material. On such minor, chance events, one’s entire life may turn. I might have discovered biology later in life, but I doubt it, and I doubt I would ever again have been in as good a position to exploit its many benefits.
Trivers (right) with evolutionary biologist William “Bill” Hamilton.

They assigned me a biologist to guide my reading and sign off on my work. His name was William Drury, the research director at the Massachusetts Audubon Society. For two years, my employer paid him to be my private tutor in biology. It was perhaps the greatest stroke of luck in my life. Before Bill Drury, I knew no biology. After working with him for two years, I knew its very core. He introduced me to animal behavior and taught me many facts about the social and psychological lives of other creatures. More to the point, he taught me how to interact with them as equals, as fellow living organisms. But he could have taught me all of that and still I could have left his charge without becoming a biologist. The key to my future, which he alone could supply, was his insight that natural selection referred to individual reproductive success, that it applied to every living thing and trait, and that thinking along the lines of species advantage and group selection—the then-popular vogue—had little or nothing going for it. From then on I was a theoretical biologist. I had wanted to be a scientist since age 13. Now, at age 22, I had discovered my discipline—evolutionary biology.
The thrill I felt when I first learned the whole system of evolutionary logic at the individual level, applied to all of life, was similar to the feeling I’d had when I first fell in love with astronomy as a twelve-year-old. Astronomy gave you inorganic creation and evolution over a 15-billion-year period. Evolutionary logic gave you the comparable story over 4 billion years. Astronomy spoke of the vastness of time and space, while evolutionary biology did the same thing for the vast variety of living creatures. Living creatures have been forming over a 4-billion-year period, with natural selection knitting together adaptive traits all through that time, so living creatures are expected to be organized functionally in exquisite and ever-counterintuitive forms. As I had when I was first discovering astronomy, I felt a sense of religious awe upon encountering this way of viewing the world around me.
This is not to say it was all fun and games. Bill was a hard teacher. When you were wrong, he was sure to point it out—not cruelly, no overkill, just the simple truth. If you argued back, he was up to the challenge. That was how I learned what natural selection was and was not. Bill wasn’t interested in cradling your self-esteem. He was only interested in teaching you the truth. I liked that. I’ve always preferred knowledge over self-esteem. When I brought him population-advantage arguments for the existence of male antlers in caribou, he gently took me through the entire fallacy and then had me read two short pieces on opposite sides of the issue. Three days later I was a complete convert, willing to stop people on the subway and yell, “Do you know what is wrong with group selection thinking? Do you?”
One day I was watching a herring gull through binoculars side by side with Bill. In those days, a herring gull could not scratch itself without one of us asking why natural selection favored that behavior. In any case, I offered as an explanation for the ongoing gull behavior something that was nonfunctional and suggested that the animal was not capable of acting in its own self-interest. Bill replied, “Never assume the animal you are studying is as stupid as the one studying it.” I remember looking sideways at him and saying to myself, “Yes sir! I like this person. I can learn from him.”
Bill taught me to think outside of the mainstream in many areas. You think monotheism is superior to polytheism? Bill would say, what do you know about polytheism, or for that matter monotheism? You assume monotheism is superior because it presumes to have a single order to the world, a single unifying logic and force, but what does this force represent? Bill taught me that polytheistic religions often had a better attitude toward nature than did the monotheistic ones. In Amerindian religions, there were spirits of the forest, of the canopy, of the deep woods, of the gurgling spring, and each captured aspects unique to these ecological zones. For someone like Bill, who had literally lived 15 to 20 years of his life in the woods, these distinctions were so much closer to his own view than that emerging from monotheism, which basically boiled down to a form of species-advantage reasoning.
On another occasion, Bill and I were discussing racial prejudice and the possible biological components thereof, and he said to me, “Bob, once you’ve learned to think of a herring gull as an equal, the rest is easy.” What a welcome approach to the problem, especially from within biology. We are all living organisms—make discriminatory comments about others at your own risk. In Bill’s view, it was always better to try to see the world from the view of the other creature.
The Greatest American Evolutionist I Ever Met

Ernst Mayr was the greatest U.S. evolutionist I ever met, possessing a very broad and deep knowledge of almost all of biology. He also had perhaps the strongest phenotype of any organism I have ever met. He lived to be 100 and published more books after age 90 than most scientists do in a lifetime, and not trivial ones either. He was strong in character, personality, and mode of expression.
I first met Ernst Mayr in the spring of 1966, in his office at Harvard’s Museum of Comparative Zoology. I was brought to him by Bill Drury, himself a former student of Mayr’s. The visit was meant to reinforce my conviction that I could become a biologist and to offer me help along the way. Mayr was a short man, with a clear, piercing gaze and a warm countenance. After an initial discussion, Ernst told me that it was not at all impossible to become a biologist at my age and with my lack of background. “Where would you like to do your graduate work?” Ernst asked. I suggested that it would be nice to work with Konrad Lorenz. “No!” Ernst said. “He’s too Austrian for you, too authoritarian. Who else?” I suggested that it might be a good idea to work with Niko Tinbergen. “No,” Ernst said, less emphatically. “He is only repeating now in the ’60s what he already showed in the ’50s. Where else?” It was clearly time for some fresh input, so I asked him, “What would you suggest?” Ernst then flung his arms in a short arc and said in his German accent, “What about Haaarvard?” Dummkopf, I thought, striking the side of my head with my hand. Harvard indeed!
Robert Trivers on The Michael Shermer Show, discussing evolutionary theory and human nature.
The first class I ever audited in biology couldn’t have been better. It was a graduate course taught in 1966 by Ernst Mayr and George Gaylord Simpson, the famous vertebrate paleontologist, who was quite a spectacle himself. A short man, but much softer-looking than Mayr, he wore thick glasses and his eyes often seemed to shake, along with his hands. Yet when he stood up to speak, he spoke in clean, clear paragraphs, no editing required. At times one felt there should be someone at his side chiseling his words into stone, so well were they chosen.
I remember one memorable discussion involving Mayr, Simpson, and sickle-cell anemia. After various parts of the evolutionary story had been reviewed—the frequency of the sickling gene in natural populations being associated with the spread of malaria—they had occasion to refer to the molecular mechanism by which the sickling gene worked. I believe it was Simpson who referred to a paper that had just come out in a cellular/molecular journal showing that the change to a sickle-shaped blood cell literally crushed the malarial parasite within the cell. However that may be, there was a glorious feeling coming from that class that evolutionary biologists at their best were the true biologists, those who mastered biology at all its levels, right down to the molecular details when these became interesting.
What made the moment so special was the use of molecular biology, for molecular biologists treated evolutionary biology with open contempt. They thought that evolutionary biology had all the intellectual excitement of a cross between stamp collecting and the study of dead languages. At their worst, they were insufferably arrogant and ignorant. While they could cow most evolutionists, they could not do so with Ernst Mayr. His expertise was the entire subject—biology itself—and when needed he took it upon himself to master every section and subsection. It did not hurt that he was physically and verbally dominant as well. Best way to put it, nobody fucked with Ernst Mayr. That gave us evolutionary graduate students support and backing, the value of which we were only dimly aware.
Jane Goodall and the Meaning of Death
As part of a seven-week expedition to East Africa in the summer of 1972, we took a two-hour boat ride across Lake Tanganyika from Kigoma in order to reach the famous Gombe Stream Reserve. The Reserve consisted of a series of base-camp buildings on the shore of the lake and student sleeping quarters dotting the hills, within which roamed chimpanzees, three groups of baboons, and some leopards.
Within minutes of our arrival I was standing next to Jane Goodall and her husband Hugo van Lawick, watching a chimpanzee and her son on the hillside among some trees. This wasn’t just any primate. Flo was the most famous living chimpanzee, having been studied by Jane for more than ten years. She was a matriarch whose clan had formed the backbone of Jane’s writings and films. Flo was far past her prime when I saw her and, in fact, was afflicted with continual diarrhea. As we watched, she took a fruit and tried to smash it against a tree but she missed and struck her own leg. “I have never seen her miss like that,” said Jane. “I don’t give her two weeks to live.” My young postgraduate heart leapt: I had just arrived for a two-week visit and according to Jane I would be witness to history!
Jane knew her chimpanzees. Several days later I was watching a “waterfall display,” in which chimpanzees, especially adult males, work themselves into a frenzy in the presence of a waterfall, swinging back and forth on vines, hooting, hair erected, and so on. One can almost see, but not quite define, a religious sentiment, an elemental force on which later might be built something as huge as the Catholic Church. While our chimpanzees were starting to work themselves up, we were interrupted by the arrival of the shocking news that Flo was dead. I was with two graduate students at the time, and we turned as one and padded back down the paths toward the hillside near the base camp. Turning off the main path we went through undergrowth and reached the bank of the small river that flowed down toward camp. Flo lay half in the water. Next to her knelt Jane. And capturing this moment for posterity was one of the largest cameras I had ever seen, on a tripod with Hugo behind the lens, just across the river. Flo’s son Flint, meanwhile, lay depressed in a tree 20 feet above his mother.
Thus began the human drama of Flo’s death. At the beginning, Jane appeared intent upon seeing a chimpanzee funeral. At the very least she hoped that one or more of Flo’s grown children might happen upon the body and give some interesting reaction. In fact, it never happened. Instead, the first night Flo remained where she’d died but Jane sat up the whole night nearby, with many of us for company, in order to deter scavengers such as bush pigs from carting off Flo’s body (one reason one would not expect to see many chimpanzee funerals). Jane was nostalgic, remembering the early days, nearly alone with the chimpanzees, enjoying the quiet beauty of the forest, coming to know Flo almost as well as her own mother.
In her response to the death of a member of a closely related species, Jane Goodall revealed the curious ambivalence we display toward the dead bodies of members of our own species. It is as if the body too sharply erodes the living creature for us to leave it alone. Yet from the standpoint of parasites alone, we surely should: any living creature carries a number of parasites and may have died from an ongoing parasite attack. The parasites can be expected to flee the dead body in search of living tissue—if any are there, they should swarm out of a corpse. This immediately suggests the value of burial. From the archaeological record we know that humans have practiced this custom for at least 75,000 years. But a sentimental component shows up from the beginning as well, since even in ancient burials the deceased is interred along with various artifacts, such as utensils, weapons, and other items of value.
The lingering attachment of various monkey mothers to their recently dead offspring is notoriously strong; in some species they carry around the body of an infant in a clinging posture for as long as two days after its death. A much stronger attachment occurs in our own species, as when the exact spot of burial is preserved in memory, often with a marker, so that the desecration of such places by others is taken as an attack on the living relatives. Consider the outrage that recent attacks on Jewish cemeteries have evoked. The attackers, who dug up corpses and assaulted some of them, were regarded as more depraved and anti-Semitic than those who do harm to living Jews, as indeed they may be, since if they are that eager to desecrate burial grounds, God knows what else they are eager to do.
Richard Dawkins and the Concorde Fallacy
In 1975 I was in Jamaica on sabbatical when I received a letter from one Richard Dawkins enclosing a paper written by himself and Tamsin Carlisle pointing out that I had committed the Concorde Fallacy in my paper on Parental Investment and Sexual Selection, as indeed I had. The Concorde Fallacy is the notion that because you have wasted $10 billion on a bad idea—the exceedingly expensive supersonic plane Concorde—you owe it to that $10 billion to throw in another $4 billion in hopes of making it work. In poker, the rule is, “Don’t throw good money after bad.” Good money is money you still have; bad money is already in the pot; it is no longer yours. Just because you have $300 in a large poker pot (money gone) does not mean that you owe it to that money to lose another $200, with odds stacked against you. Every decision should be rationally calibrated to future pay-offs only, not past sunk costs.
I had argued in my paper that since females almost always begin with greater investment in offspring than do males, this committed them to further investment—they would be less likely to desert their offspring. Simple Concorde Fallacy; only future payoff is relevant. I consoled myself with the thought that there probably was a sex bias similar to the one I’d proposed, but only because past investment had constrained future opportunities. In any case, I wrote back that I agreed with them right down the line.
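To make the poker arithmetic concrete, here is a minimal sketch in Python (my own illustration; the pot size, call price, and win probability are all invented numbers): the rational rule evaluates expected future payoff only, so money already sunk into the pot never appears in the calculation.

```python
# Minimal sketch of sunk-cost-free decision making. All numbers are
# invented for illustration; the point is only that past contributions
# never enter the expected-value calculation.

def ev_of_calling(win_probability: float, pot_after_call: float, call_price: float) -> float:
    """Expected future payoff of calling: chance of winning the final pot,
    minus the new money required. Sunk money is deliberately absent."""
    return win_probability * pot_after_call - call_price

# Suppose $300 of your money is already in a $600 pot (sunk, so ignored),
# calling costs another $200, and you estimate a 20% chance of winning.
ev_call = ev_of_calling(win_probability=0.20, pot_after_call=800, call_price=200)
ev_fold = 0.0  # folding costs nothing further

print(f"EV of calling: {ev_call:.0f} dollars")  # -40 dollars
print(f"EV of folding: {ev_fold:.0f} dollars")  #   0 dollars
# Folding maximizes future payoff; the $300 already in the pot is irrelevant.
```

Nothing about the $300 already contributed changes the comparison; only the future probabilities and costs do.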
I soon received a second letter from Richard, saying that his actual purpose in writing me was, in part, to find out if I might be willing to write the Foreword for a new book he had written called The Selfish Gene. This was especially appropriate, he told me, because my work, more than anyone else’s, was featured in his book. What the hell, I thought, and he sent the manuscript along. There were indeed chapters based on individual papers of mine—“Battle of the Generations” (parent-offspring conflict), “Battle of the Sexes” (parental investment and sexual selection), “You Scratch My Back, I’ll Ride on Yours” (reciprocal altruism). I never deluded myself that my work was more fundamental than Bill Hamilton’s, nor did Richard, but we both knew that if you wanted to get some of the fun details filled in on a variety of subjects—not ants, fig wasps, or life under bark, but social topics relevant to ourselves—my work was a better bet than Bill’s.
Better than finding my own work given such a high billing, though, was discovering that Richard had a most pleasing combination of absolute mastery of the material with a wonderful way of expressing it—funny, precise, vivid. Let me give one example. He presented Bill Hamilton’s idea that a gene—or a tightly linked cluster of genes—could evolve if it could spot itself in another individual and then transfer a benefit based on the phenotypic similarity. But Richard added a vivid image, calling this “the green beard effect.” The name soon caught on in the scientific literature, so that everyone today refers to “green beard” genes, thereby summing up a complicated idea in a way that actually makes it easier to think through. The phenotypic trait is obvious: you have a green beard. And the genetic bias is obvious: you favor green-bearded individuals. Genes spread apace. Except what about a mutant that leaves your green beard intact but takes away your bias toward green-bearded individuals? Not at all obvious, yet Richard’s vivid way of writing facilitated thinking through the complexities.
So I said to myself, yes I will write you your Foreword, though I don’t know you from Adam. I wrote a good five-paragraph Foreword, but it consumed about a month of my life, partly because I actually like to think before I write, which does slow down writing.
In any case, once I was finished, I looked at the essay and thought, why not slip in the concept of self-deception, whose function by that time I had linked to deceiving others? This I regarded as the solution to a major puzzle that had bedeviled human minds for millennia. And Dawkins, bless his soul, could hardly have set me up more nicely: “…if [as Dawkins argues] deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray—by the subtle signs of self-knowledge—the deception being practiced. Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.” Perfect set-up and not even in a paper of my own but in someone else’s book and an incredible bestseller at that.
Robert Trivers’ lecture for the Skeptics Society: Why does deception play such a prominent role in our everyday lives?
When I learned that Dawkins had taken on religion in the name of science and atheism, I felt he had finally found his true intellectual niche. No way could religion keep up with Richard. On June 13, 2011, I was about to begin delivering the Tinbergen lecture at Oxford, when as usual I misplaced something on the lectern. “Jesus Christ,” I muttered, and the microphone amplified it to the 400 people in attendance. I looked up and said, “I hope Richard Dawkins isn’t here.” Richard raised his hand. Before launching into my lecture I added, “I regard Richard Dawkins as a minor prophet sent from God to torture the credulous and the weak-minded, for which he has a unique talent,” as indeed he does. One nice concept in The God Delusion is that since most people dismiss all religions except one, why not go the final step?
Hanging with Huey and the Panthers
One of the few benefits of moving from Harvard to the University of California at Santa Cruz in 1978 was the chance to meet the legendary founder of the Black Panther Party, Huey Newton. Indeed he was waved in front of me as a reason to come to Santa Cruz. He was a graduate student in “History of Consciousness”—roughly equivalent to Western Civilization—who had the wit to see that social consciousness started long before the Greeks and had appeared, in some form, by the time of the insects. He had gotten his undergraduate degree from Santa Cruz in 1974 and befriended Dr. Burney Le Boeuf, the celebrated student of elephant seals. Burney had been preaching the beauties of evolutionary biology—my own work in particular—to Huey, and so I had the good fortune of meeting him after he had already been well-primed.
Trivers with founder of the Black Panther Party Huey Newton.
The Panthers began by patrolling the police. They would follow police at night or patrol until they came across police-citizen interactions. Huey might then emerge from a car with a law book in his hand and read out in a loud voice that, by law, “excessive” force cannot be used during an arrest. The police would invariably answer, “Our force isn’t excessive.” Huey would read them the legal evidence on that point. They would say, “Get the fuck out of here.” He would answer that a citizen is allowed to remain within a reasonable distance of an arrest. They would say, “Your distance is unreasonable.” He would flip to the relevant page and read the appellate ruling that declared a reasonable distance was ten yards or whatever, and it would go on like this.
Huey was armed. He knew he had the right to be armed and he knew he had the courage. So when he emerged from the car, there was usually a gun beneath the law book so that, should the interaction turn hostile or threatening, he could be ready with a response. All this was legal back then, riding shotgun, in effect, on the police themselves. During the war the Panthers waged between 1967 and 1973, roughly 15 officers died for every 35 Panthers. I believe the Panthers had the largest single effect on integrating police forces in this country. The reasoning being: hey, if Black people are firing at our officers, let’s have some Black officers firing back.
In the fall of 1978 I was informed that Huey, who was then in prison, charged with beating up a tailor in his home for calling him “boy,” wanted to take a reading course from me. I said that was fine but I wanted a paragraph from him on what he wanted to read. Before he could reply he was released from lock-up and traveled to Santa Cruz to meet me. We met.
We decided to do a reading course on deceit and self-deception, a subject I was eager to develop and on which Huey turned out to be a master. He was a master at propagating deception, at seeing through deception in others, and at beating your self-deception out of you. He fell down, as do we all, when it came to his own self-deception. Huey Newton was certainly one of the five or six brightest human beings I have ever met. Each of them has had a different sort of intelligence, and Huey’s forte was aggressive logic. And he moved his logical sentences as if they were chess pieces meant to trap you and render you impotent. “Oh, so if that is the case, then this must be true.” If you moved away from where he was pushing you, he would say, “Well, if that is true, then surely so-and-so must be true.” So he was maneuvering you via logic into an indefensible position. The argument often had a double-or-nothing quality about it where, in effect, he was doubling the stakes for each logical alternative, giving you the unpleasant sensation that you were losing more heavily as the argument wore on, making more and more costly mistakes.
According to Huey, the Black Panther Party started as a simple, old-fashioned robbery, which he was planning with a number of confederates. Problem was, he was reading Frantz Fanon and becoming politically conscious. So he decided to use the robbery to start a new political party, as radical as its start-up funds. The hard part was selling it to his fellow robbers. They didn’t like the idea. “They almost killed me,” Huey told me, but finally he got them to sign off on it, and some of them even became Party members later.
Once, when he and I were driving through West Oakland, near Berkeley, Huey pointed out the site of the Party’s first political act. There was a particularly dangerous street corner at which local African-American children were run over nearly every year while attempting to cross on their way to school. Numerous requests had been submitted for a stop sign and a proper street crossing to protect the children. Nothing had been done. One day the Panthers appeared at the street crossing at the appropriate time, dressed in their leather jackets and berets and each carrying a rifle or shotgun. They proceeded to direct traffic, standing in the highway to permit safe passage for the children. Six weeks later the city put up, not a stop sign, but a stoplight at that very corner. Nothing like armed Black men to stir civic activity.
When the California legislature was meeting to decide whether to pass the “Huey Newton law,” as it was popularly called, which said that you could no longer “ride shotgun” but instead had to keep your loaded gun in your locked trunk, Huey and 35 other Panthers showed up in Sacramento on the day of the vote, most of them carrying rifles. They tried to enter the legislature with their guns, which was allowed by law at the time. Police stopped them from entering, ordered them out of the building, and then shortly thereafter arrested them. Huey told me that many Black people argued against the public display: “Now they’re sure to pass the bill, why don’t you ease up the pressure?” Huey’s response was simple: they were going to pass the bill anyway, and he wanted to show Black people that they had the right to show up in front of the legislature with guns and confront a mass of armed police. That was one of the main points of the Party—to encourage African Americans to use their right to bear arms in self-defense. In 1948, in response to a lynching, President Harry Truman made the first and key decision in favor of equal gun rights for the Black man in the U.S., when he integrated the armed services. Before then, most Black soldiers sliced the carrots and did the dishes.
Many African Americans of more recent times have a strong ambivalence or hostility toward Huey and the Panthers because they believe he helped spawn the culture of Black gun violence among the urban young. There is probably some truth to the charge, but I think harsh drug penalties take a larger part of the blame. With the stakes so high for being caught selling illicit drugs, the chances of internecine war and murder inevitably rise as well.
A final point on Huey’s legacy: though people tend to assume that Huey was anti-police in principle, in fact he saw obvious value to community surveillance and organized protection. That’s why he regarded himself and Party members as on a par with the official police. He used to joke, “I’ve got nothing against the police as long as we are firing in the same direction.”
Looking Back and Looking Forward
I am 72 years old now, having devoted 50 years to the study of evolutionary biology, a combination of social theory based on natural selection wedded to genetics—the very backbone of all of life. I have had the good fortune to help lay the foundation for a variety of flourishing subdisciplines, from reciprocal altruism and parent-offspring conflict, to within-individual genetic conflict, and self-deception. Through this work, I have met many extraordinary individuals, several of whom were my teachers. I have also gotten to know up close and personal many non-human animals. I have “enjoyed” an unusual number of near-death experiences—due in part to my tendency toward intense interpersonal disagreements late at night.
Yet when I look back on this show, there is one thing I regret, and it is the absence of self-reflection. Yes, I would live life and study it, but would I study my own life? Time and time again, the answer comes back “no.” Yet exactly whose life is more important to you: others’ or your own? “You self-deceptionist,” my first wife would sneer. “You talk a lot about parent-offspring conflict, yet you neglect your own son.” Guilty as charged. Too much ambition and too little thought about my family: wife, children, and myself.
Robert Trivers’ lecture for the Skeptics Society, based on a ground-breaking study that examines honor killings, which seem to make no evolutionary sense. Why would a father kill his own daughter and thereby eliminate half of his own genes from propagating into the next generation?
Major decisions, such as where to go when I decided to leave Harvard in 1978, were made without any serious thought at all—how about a named professorship at the University of New Mexico, or a major offer from the University of Rochester with its powerful biology department? These were brushed aside with scarcely a glance. Instead I simply trotted off to the University of California at Santa Cruz because my wife and I had enjoyed a pleasant weekend with Burney Le Boeuf, his wife, and his elephant seals. I even remember mumbling to myself at one point, “Oh, we’ll let autopilot handle this or that problem.” Autopilot? As a means of choosing which of three universities and cities you should live in for the next 15 years? By definition autopilot is the opposite of careful conscious introspection and evaluation—it is what you do when the path forward is obvious and no rational reflection is needed.
What is the way forward? There is one obstacle and there is one hope. The obstacle is self-deception, a powerful force that reasserts itself again and again. The hope is that after becoming more deeply conscious of one’s own self-deceptions and of the possible means of ameliorating them, one can make some real progress against this strong negative force.
A more costly form of self-deception involves my spiteful side. If you say something insulting, I want to strike back. If I fail to do so because I am slow or inhibited, trust me—whenever the event recurs in my mind, I will torture myself, sometimes for years, with the rant I should have delivered, and may do so now at full volume alone in my apartment far away. And yet very often a spiteful response is not the best one. It can easily generate spite in return, and down the staircase the two of you descend. Inside me there are two voices. One cries out, “Bob, you have made this mistake 630 times in the past and regretted every single one. Why not forego it this time?” Then comes a stronger voice, “No, Bob, this time is different,” and there goes 631.
It was an eye-opener to me to discover recently the value of friends in breaking this cycle. I was telling a good friend about a nasty message I had gotten and my intended nasty response. He wanted to know why. Because, I said, she said this, that, and the third thing, and it hurt. That was the key. He was unmoved by this argument. He’d suffered none of my internal hurt and was indifferent to it. Only three things were relevant to him: the message, my possible response, and its likely consequences. The likeliest consequence would be that she would write back an even nastier note and I would be further estranged for no good reason. Why would I want to do that? Why indeed. The Concorde Fallacy all over again—you owe it to your past spite, despite it being a sunk cost, to double down. Better, of course, to do nothing.
I went to a chiropractor in the 1980s for a stiff neck that had not improved after a month. A coworker praised him with the evangelical certainty usually reserved for miracle diets, used car salesmen, and people who have just read one book on nutrition. I was skeptical but adventurous, which is how most regrettable life decisions begin.
The adjustment worked. My neck improved. Worse still, my chronic asthma improved as well.
At the time, I was deeply unhappy in my first professional job after earning a bachelor’s degree in psychology and a master’s degree in applied behavioral science at Wright State University in Dayton, Ohio. I worked for a personnel-testing firm that marketed itself as scientific while relying on psychological instruments invented—without irony—in-house. Their psychometric rigor consisted largely of confidence, clipboards, and an aggressive font choice.
These tests produced false positives and false negatives with impressive symmetry, giving employers either a false sense of security or a convenient scapegoat. Qualified people quietly lost livelihoods. Chiropractic, by contrast, seemed refreshingly concrete. Hands. Spines. Patients who said they felt better. I imagined self-employment, ethical work, relief of pain, and perhaps even improved health. Compared with the pseudoscientific theater I was being paid to defend, chiropractic felt almost wholesome. In retrospect, this should have been a warning sign.
Why Chiropractic Made Sense at First
I had been trained in program evaluation, a discipline shaped by people obsessed with how to infer causality in the messy real world where randomization is often impossible and people insist on behaving like people. This was the era of stress research—Hans Selye, Thomas Holmes, and Richard Rahe—demonstrating that belief, expectation, and circumstance could predict outcomes as dramatic as Navy pilots crashing jets on aircraft carriers.
Chiropractic appeared to offer a humane alternative: a hands-on profession marginalized by a medical establishment overly confident in pharmaceuticals and procedures. Like many, I believed useful treatments had been discarded not because they failed, but because they threatened professional turf. I believed science had limits, and that those limits had been selectively enforced, preferably against someone else.
So I decided to become one myself, and in 1987 I graduated from the San Jose campus of Palmer College of Chiropractic and joined the ranks of doctors of chiropractic—eager, idealistic, and spectacularly unaware of the epistemic ecosystem I had entered.
Inside the Bubble
The dominant narrative was simple: conventional medicine had unfairly dismissed us. Scientific objections were cherry-picked. Our methods worked; medicine simply refused to look properly, or long enough, or with an open heart and an open mind liberated from all that oppressive critical thinking.
On weekends, I studied at Stanford’s Green Medical Library and noticed something curious: the library did not carry chiropractic’s premier scientific journal. I proposed that Palmer purchase a subscription for Stanford. We did. Stanford thanked us politely, in the tone such institutions reserve for unsolicited fruit baskets.
Old-guard chiropractors complained that we risked spilling our secrets to scientific medicine. The truth is, chiropractic education exists in a parallel universe. Its founding figure, D.D. Palmer, died in 1913, but his metaphysical afterlife remains active. Subtle vital forces, innate intelligence, and spinal “subluxations” hover just beneath the surface of even the most modern curricula, like software that never quite finishes installing.
The 1990s brought chiropractic its brief flirtation with legitimacy. The NIH’s Office of Alternative Medicine was established, fueled in part by philanthropic enthusiasm from abroad.
I interviewed for a position at an English health estate owned by Sir Maurice Laing, who had both an interest in alternative medicine and the resources to indulge it. I declined the offer, tethered as I was to America, but not before inserting myself into meetings with leaders of British complementary medicine.
To the British Committee on Complementary Medicine, I proposed a heresy: stop arguing about putative mechanisms; first determine what works, for whom, and under what conditions. Program evaluation before explanation. My suggestion was politely ignored. Before assuming the throne, King Charles quietly stepped away from his advocacy of complementary medicine. One suspects reality intervened, possibly with charts.
The Cracks Appear
After years of practice and research involvement, my discomfort grew. Chiropractic diagnostics increasingly failed a basic test: face validity.
My practice partner believed she could diagnose disease by testing the strength of specific muscles, a method known as applied kinesiology (AK). Patients loved it. The ritual was impressive. They asked why I did not perform AK, as though I were withholding a party trick. I asked her once how often her diagnoses were correct. “About half the time,” she said, without irony.
This is precisely the accuracy one would expect from a fair coin flip, except coins do not bill insurance companies or require continuing education credits. These tests were never compared to gold standards, so strictly speaking they were never correct or incorrect at all. They simply were.
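The coin-flip point is easy to verify. Here is a toy simulation in Python (entirely my own construction, with invented numbers, not data from any chiropractic study): a “diagnostic” that ignores the patient and guesses at random is still “right” about half the time on any yes/no question.

```python
import random

# Toy simulation with invented numbers (not data from any real study):
# a "diagnostic test" that guesses blindly on a yes/no question is still
# "correct" about half the time, purely by chance.

random.seed(42)
n_patients = 100_000

hits = 0
for _ in range(n_patients):
    has_condition = random.random() < 0.3  # assumed 30% prevalence
    blind_guess = random.random() < 0.5    # the "test" flips a fair coin
    if blind_guess == has_condition:
        hits += 1

print(f"Blind-guess accuracy: {hits / n_patients:.1%}")  # ~50.0%
# "About half the time" is the base rate of chance agreement,
# not evidence that the test measures anything at all.
```

Note that the result stays near 50 percent whatever prevalence you assume, so long as the guess is a fair coin, which is exactly why “about half the time” carries no diagnostic information.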
What finally broke me was not only the epistemology—it was the economics. Chiropractic education devotes astonishing energy to practice management. Seminars, workshops, and consultants descend with the same message delivered in different fonts: sell care plans, sell frequency, sell fear. Some consultants, whom you pay for one-to-one counsel, even offer fees for referring other chiropractors to them. My millionaire business coach promised me $1,000 per referral who signed up—but always called a few weeks later with a sad reason not to pay.
The mantra was explicit: ABC—Always Be Closing. The bottom line of all the chiropractic continuing-education and coaching programs was the lie that chiropractic is crucial for overall health, and the bottom-bottom line was that advising chiropractors is much more profitable than being one.
Patients were no longer people with problems to be evaluated; they were “cases” to be converted. Thirty-six-visit plans were praised. Lifetime care was normalized. Preventive adjustments were marketed with the confidence of seatbelts and vaccines—minus the evidence, testing, and regulatory oversight.
Those who questioned this model were told they lacked confidence, commitment, and the proper chiropractic spirit. Skepticism itself became a personal failure. Success was measured not in clinical outcomes, but in collections. The resemblance to the psychometric firm I had fled years earlier was no longer subtle. With a quiet corruption of Avedis Donabedian’s classic framework—structure, process, and outcome—chiropractic leaders instead sold belief, structure, and certainty. And certainty, I learned, is a remarkably precious commodity in the chiropractic world.
Indeed, one of the central problems with chiropractic is its frank comfort with ignoring evidence in favor of belief systems that “just make sense.” Plausibility substitutes for proof. Confidence substitutes for outcomes.
In practice, chiropractic operates at two largely disconnected levels of knowledge. At the top sit researchers, faculty, and administrators—those who define the profession’s identity—yet who typically know very little about the day-to-day realities of practice. At the bottom are practicing chiropractors, submerged in diagnosis codes, billing rules, collections, hiring and firing staff, training front-desk help, negotiating with insurers, and keeping the lights on.
The irony is that the most influential voices shaping chiropractic practice are almost entirely those who do not practice. These are the “paycheck chiropractors,” whose authority is inversely related to their proximity to the trenches. They do not argue with insurers. They do not explain denied claims. They do not rehire front-desk staff every six months. Yet none of this has ever impaired their confidence in advising clinicians how to act, what to treat, and what to expect from every imaginable or unimaginable combination of symptoms.
Practicing chiropractors, for their part, are remarkably comfortable with this arrangement. When things wobble or fail, blame flows inward. The practitioner assumes personal deficiency: insufficient belief, insufficient technique, insufficient commitment. It functions like a built-in self-protection mechanism for the profession—very convenient for avoiding collective accountability.
This arrangement is also useful when graduates eventually notice three inconvenient facts:
1. Chiropractic does not compete well with medicine—or even with itself. When studied carefully, its apparent effectiveness dissolves into non-specific factors: expectation, attention, ritual, and natural history. When chiropractic researchers properly control for placebo and natural recovery, the specific effect of spinal manipulation reliably shrinks or disappears altogether. Paradoxically, better science makes chiropractic look worse.
2. Structurally, the profession is a two-tiered, one-directional system that rarely improves, because the real problems are invisible at the top and permanently personalized at the bottom.
3. Some leaders continue selling early-20th-century dogma, steering chiropractic safely away from medicine by avoiding diagnosis and disease altogether.
At some point, the pattern became impossible to ignore. When a profession cannot hear its own failures, cannot correct its own assumptions, and cannot tolerate honest uncertainty, leaving stops feeling like betrayal and starts feeling like hygiene. That was when I knew I was done.
Many of my former classmates reached the same conclusion, some more quickly than I did. Privately, several admitted that much of what we had been taught was baloney. They were not amused. A $200,000–$400,000 investment over four years had produced clinicians who knew just enough medicine to realize how little they could safely treat. The coping mechanism was predictable: at least we help 50 percent of patients—better than nothing.
Some eventually realized that 50 percent accuracy in a two-outcome probability space is not success at all.
Do any of these statements resonate? Make you angry? Do some not even merit a response?
I can’t tell you exactly how I would respond to someone who defended Hitler, but I know what I would not do: stalk him on social media, contact his employer to try to get him fired, or ask my government representative to help criminalize such talk.
Does this make me a free speech absolutist? Not quite. Like Robert Jensen, a professor emeritus at the University of Texas at Austin and prolific blogger, I suspect that most people who call themselves free speech absolutists don’t actually mean it. They wouldn’t countenance speech like “let’s go kill a few Germans this morning. Here, have a gun.” Instead, Jensen writes, they’re prepared to “impose a high standard in evaluating any restriction on speech. In complex cases where there are conflicts concerning competing values, [they] will default to the most expansive space possible for speech.”
In other words, they’re free speech maximalists. A more contemporary and nuanced variant of absolutism, the maximalist position grants special status to free speech and puts the burden of proof on those who wish to curtail it. While accepting some restrictions in time, place, and manner, free speech maximalism defaults to freedom of content. It aligns with the litmus test developed by U.S. Supreme Court Justices Hugo Black and William O. Douglas, which holds that government should limit its regulation of speech to speech that dovetails with lawless action:
Let’s go kill a few Germans? Not kosher.
The only good German is a dead one? Fair game.
Some pundits view this position as misguided. A 2025 Dispatch article titled “Is Free Speech Too Sacred?” laments America’s descent into an era of “free speech supramaximalism,” in which “not only must speech prevail over other regulation, but nearly everything is sooner or later described and defended as speech.” A New Statesman essay about Elon Musk, written a few months before he acquired Twitter (now X), decries Musk’s “maximalist conception of free speech usually adopted by teenage boys and libertarian men in their early 20s, before they realise its limitations and grow out of it.” The implication: free speech maximalism is an unserious pitstop on the way to more mature thinking. Only testosterone-soaked young men, drunk on their first taste of freedom, would spend more than a minute on such a naïve view.
This 69-year-old woman disagrees. I grew into my passion for free speech during the early months of the COVID-19 pandemic, when the pressure to conform in both word and deed reached an intensity I had never witnessed before. Any concerns about the labyrinthine lockdown rules elicited retorts like “moral degenerate” or “mouth-breathing Trumptard.” (Ask me how I know.)
Unexpectedly jolted into awareness of free speech principles, I began reading John Stuart Mill and Jean-Paul Sartre and writing essays about freedom of expression in the COVID era. One thing led to another, and in 2025 the newly minted Free Speech Union of Canada found a spot for me on its organizing committee. What most of us in the group shared, along with age spots and facial wrinkles, was a maximalist position on free speech. Perhaps we’re all immature. Or maybe we’ve lived long enough to understand exactly what we lose when free speech goes AWOL.
But but … critics sputter … what about hate speech? Free speech maximalism posits that you can’t regulate an inherently subjective concept. As Greg Lukianoff and Rikki Schlott note in their 2023 book The Canceling of the American Mind, “as soon as you start legislating based on a concept as loosely defined and subjective as offense, you open the floodgates to every group and individual claim of offense.” This argument may well explain why Canada’s proposed Bill C-9—the Combatting Hate Act—remains stalled after protracted parliamentary debate.
Is “you cannot change sex” hate speech or merely opinion? Is “you have a big Black butt” an offensive remark? It depends on who says it, how it’s said, and who hears it. One person may react to the big butt comment with reflexive outrage, while another may simply shrug. When said tenderly to a lover, the statement may elicit a full-throated laugh. Offense is in the eye of the beholder.
A case in point: the U.S. Patent and Trademark Office refused to register the name “The Slants” (an Asian-American rock band) because of its derogatory, or hateful, connotations. The bandleader sued, and in 2017 the Supreme Court ultimately agreed that “giving offense is a viewpoint” and that a law restricting expression on the basis of viewpoint violated the First Amendment.
Here’s the thing: when you embrace viewpoint diversity as an ideal, you tend to get less offended about things. You may profoundly disagree with a statement, but it won’t cause you to puff up in outrage. Someone can tell you that the sky is green, or that women can’t think logically, or that Hitler was right about some things, and you allow the words to bounce off your emotional core. It’s a liberating habit of mind.
And if you do get offended? Big whoop. You’ll survive. During a recent bus trip from Whistler to Vancouver my seatmate, a doctor, took it upon himself to share his candid opinions about women with me: they can’t take a raunchy joke, they make poor leaders, they’re responsible for cancel culture, and society would work better if they stayed home. Ugh. Seriously? But I survived. I wasn’t traumatized. Truth be told, I quite enjoyed our conversation. He listened as much as he spoke. I even found a few grains of value in his arguments, and perhaps a couple of my retorts gave him pause. And that’s what it’s all about, isn’t it? Humans of all stripes challenging and learning from each other.
Here I must pause to express disappointment in my own sex. Women, I have found, value free speech less than men do, and studies corroborate my perception. In one survey, 71 percent of men said they gave priority to free speech over social cohesion, while 59 percent of women held the opposite view. An article reporting on the survey affirmed that “across decades, topics, and studies, women are more censorious than men.” Boo.
Even with carte blanche to express ourselves, it’s impossibly difficult for us humans to lay bare our true thoughts. Self-censorship is baked into our DNA. Free speech maximalism serves as a counterweight to this force. It allows us to rise, even if timidly, above the lead blanket of social conformity flung over us by the finger-wagging classes. By exposing little bits of our true selves, we shed light on the glorious contradictions in the human condition—a benefit that serves not just angry young men, but women with age spots and everyone else.
To those concerned about the dangers of loosening our tongues, I offer Greg Lukianoff’s bracing maxim: “You are not safer for knowing less about what people really think.”
Practically everyone has heard of the tick-borne infection known as Lyme disease, even if they don’t live in a high-risk area. Some are aware of long-standing controversies about the consequences of infection or how best to treat it. Our concern here is a newly emerging controversy about Lyme disease—namely, the theory that it originated as part of a bioweapons program. As U.S. Representative Chris Smith of New Jersey said during a Department of Health and Human Services roundtable on Lyme disease: “They were weaponizing Ixodes burgdorferi [sic], as we all know.”1
Part of this theory is that Lyme disease’s origins can be traced to the United States Department of Agriculture’s (USDA) Plum Island Animal Disease Center, where it allegedly was developed as a biological weapon, either as a genetically modified organism or by “weaponizing” native ticks to carry a secret pathogen. Plum Island, in fact, would seem to be a good place to center these hypothetical activities, because it has exclusively been the site of a restricted-access USDA facility since 1954. The facility has long conducted research on foreign animal diseases that would devastate the livestock industry in the United States if they were ever introduced accidentally or purposefully as a biological weapon. This research is essential for developing vaccines and measures to prevent potential outbreaks of animal diseases, such as foot-and-mouth disease, African swine fever, and other diseases of domesticated animals.
Plum Island is located off the eastern end of Long Island and about seven miles across the water from the town of Lyme, Connecticut, where what seemed (at the time) to be a new tick-borne disease was identified in the 1970s. Over the past five decades, Lyme disease—as that illness is now called—has been documented in several other states in the northeastern, mid-Atlantic, and north-central U.S., as well as parts of states in the Far West. It is a tick-borne infectious disease affecting tens of thousands of people each year and at an enormous cost to the public’s health and people’s well-being.
The issue of whether the emergence of Lyme disease is the consequence of natural processes or might have originated from humans—namely, as a designed bioweapon, subsequently inadvertently or intentionally released—has become a hot topic in the news, social media, and podcasts. It has prompted calls for an investigation from members of Congress, where an amendment from Representative Smith is now part of the recently passed and White House-signed defense authorization bill. It would seem more convenient to have somebody or some government institution to blame for an emerging infectious disease, rather than natural events. But in reality, nature poses a greater threat than human design or error as a source of new infectious diseases and epidemics for humans and other animals.
Plum Island is a high-containment facility only reachable by boat from Long Island and Connecticut for the daily transport of authorized personnel. Visitors are not allowed, and any intruders are promptly escorted off the island. Deer and other wildlife that may be susceptible to infections and occasionally swim to the island are immediately culled by sharpshooters from helicopters. Such high security has long led to rumors and suspicion among neighboring communities that something nefarious must be going on at Plum Island. The island undeservedly gained notoriety as “Anthrax Island” in Hannibal Lecter’s telling in The Silence of the Lambs book (1988) and film (1991).
One of us (DF) worked on Plum Island during the 1990s, conducting research on African swine fever under a USDA research contract with Yale University. African swine fever is a tick-borne disease native to Africa, and it is highly infectious among pigs even without ticks. Access to infected animals required two changes of clothing and a shower before passing through each of two air-tight chambers. But no protective gear was needed for personnel, as these animal diseases do not have the capacity to infect humans. If they did, self-contained spacesuits would be required, as are used for Ebola and other dangerous human pathogens in BSL-4 labs. The Plum Island facility had no capacity to work with human pathogens, and there is no evidence that scientists there ever worked on Lyme disease.
The second of us (AGB) participated in the early 1980s in the discovery and then isolation of the bacterium that causes Lyme disease. The team accomplished this with ticks collected at the far end of Long Island, not far from Plum Island. That sounds suspicious for an escape from the Plum Island lab. But Long Island and Lyme, Connecticut, were not the only places where Lyme disease was occurring at the time. The availability of cultured bacteria allowed diagnostic assays to be quickly developed and implemented. Application of these blood tests for laboratory diagnosis in many other places in the United States revealed that the infection was not limited to a small area near Plum Island and had not been so restricted for many years.
Besides New York and Connecticut in the early 1980s, cases were soon identified in other northeastern states, north-central states like Minnesota and Wisconsin, and even across the country in northern California. This is a disease only transmitted by ticks, which crawl and, unlike mosquitoes, do not fly. Even if attached to a deer, mouse, or bird, it would have taken decades for the infection to spread so widely if it had been released from a single place at the continent’s edge.
Evidence that the bacteria were already present long before any theorized release from Plum Island came from museum specimens of preserved ticks and field mice that had been collected in the northeastern U.S. in the 19th and early 20th centuries. In retrospect, cases of Lyme disease in different parts of the country had been described by physicians in medical case reports from the 1960s.
Further justification for rejecting a Plum Island bioweapon release theory was the recognition that Lyme disease, under other names, had clearly been occurring in Europe since at least the early 20th century, decades before it was first named as a new disease in North America. In Sweden, the Lyme disease agent was recovered from chronic skin rashes that had started years before it was found in some New York ticks. Subsequently, the Lyme disease agent was identified in ticks and mammals, as well as in patients, in China, Japan, Korea, and Russia. Why would there be a need for a new bioweapon delivered by ticks if the infection was already occurring in many parts of the world?
The bacterium that was isolated from those ticks from Long Island was the first example of what was soon recognized to be a species meriting its own name. But there was nothing strange about it at the time or since, even after intensive study. There is nothing to indicate that it was a genetically modified organism or was constructed from parts of other bacteria, as has been suggested. Genetic analysis of Lyme disease bacteria shows that they originated on the Eurasian continent and spread to North America thousands of years ago.
That first isolate was representative of but one strain out of several that were occurring then and now in the northeastern U.S. There are other strains in the Midwest and another set in the Far West. Europe has its own strains of the bacteria. This pattern of differences is what would be expected for bacteria that have been widely distributed for millennia and evolved to adapt to their unique local circumstances over time. If the Lyme disease agent were some kind of Frankenstein germ, malignly created and released upon the world, one might as well invoke space aliens that had visited the Earth thousands of years ago.
What’s the more plausible explanation for the increase in numbers and distribution of Lyme disease that began in the last half of the 20th century? It is clear to us that Lyme disease is a product of nature and has been present for millennia throughout the continents of Eurasia and North America. What has changed to cause it to become recently epidemic is the reestablishment of forests and deer, which has led to a proliferation of ticks over the past half-century. Massive deforestation in the Northeast and upper Midwest before 1900 for agriculture and manufacturing resulted in the near extermination of deer, the natural host of the deer tick that is responsible for transmitting Lyme disease in these areas. Long Island is the only known location in the Northeastern U.S. where white-tailed deer and deer ticks have persisted since colonial times.
Another refuge was in northern Wisconsin, where a case of Lyme disease from the 1960s was retrospectively identified. From these two ancient refugia, Lyme disease has slowly spread to neighboring states as forests regenerated, and as deer and ticks returned to their former ranges. This spread has been well documented since the original discovery of the Lyme disease agent more than 40 years ago. The same history of reforestation of areas previously used for agriculture and industry accounts for the increase and spread of the Lyme disease bacteria and the ticks that transmit them in Europe.
Can we call this increase in Lyme disease in various parts of the world the result of “human activities”? Of course. Without human population growth and the concomitant advances in agriculture and industry, Lyme disease would have remained but one of many infections that ticks have transmitted among mammals, birds, and reptiles in woodlands for eons. But the resurgence of Lyme disease is just one aspect of a broader process of demographic, environmental, and social change occurring in the developed countries of North America, Europe, and parts of Asia. We need not attribute it to the intentional or inadvertent actions of some government workers in a high biosafety level laboratory off the coast of Long Island.