Skeptic.com feed

Popular Science. Nonpartisan. Reality-Based.

A Skeptic’s Guide to Ozempic and Other GLP-1 Agonists

Mon, 01/19/2026 - 2:20pm
A new compass for the SKEPDOC column. This column was founded by Harriet Hall, MD (1945–2023), who wrote it from 2006 to 2023. In 2026, we welcome William Meller, MD, to the helm. As an expert in evolutionary medicine, Dr. Meller will be our guide in navigating the deep biological history of our species to find the “True North” of human health.

I have been practicing medicine for more than 40 years. During that time, the management of obesity and Type 2 diabetes (T2DM)—the kind usually caused by being overweight—often felt like Sisyphus pushing a boulder up a hill, only to have it roll back down, often heavier than before. We faced a “diabesity” epidemic where the available tools were blunt instruments at best.

Lifestyle intervention—meaning trying to get someone to change their behavior—was both the most and the least effective method we had. Most, because in the fewer than two percent of patients who succeeded, it worked very well. Least, because, well … 98 percent failed. And they failed because all of our evolutionary history (“See food? Eat it!”) was working against them. This is the mismatch theory: a mismatch between the ancestral environment that shaped our brains to seek foods that were at once rare and nutritious (sweets and fats) and the modern environment, in which such foods are so overabundant that we eat far beyond the saturation point.

The pharmacological options were often disappointing: sulfonylureas and insulin lowered blood sugar but caused weight gain, exacerbating the underlying problem. Bariatric surgery works, but it is invasive and carries surgical as well as lifelong nutritional risks.


Into this therapeutic desert crawled the Gila monster, a venomous lizard native to the American Southwest from whose venom researchers derived GLP-1 receptor agonists (glucagon-like peptide-1 receptor agonists)—medications that mimic the natural GLP-1 hormone. These drugs lower blood sugar, help control appetite, and promote weight loss by telling the pancreas to release more insulin when glucose is high, slowing the rate of stomach emptying, and signaling a sense of fullness to the brain.

As a skeptic, I am allergic to the word “miracle,” but when we look at the data for GLP receptor agonists, along with the innumerable before and after photos of successful weight loss transformations, we are forced to admit that we have moved from a realm of wishful thinking into one of potent pharmacology. But, as always in medicine, there is no free lunch. 

The Incretin Concept: From Gut to Glory 

The story begins with the “incretin effect”—the observation that glucose taken by mouth triggers a much stronger insulin response than glucose injected directly into a vein, because the gut releases hormones that prime the pancreas. The gut knows you are eating and tells the pancreas to get ready to pack away the extra calories as fat. In patients with Type 2 diabetes, this effect is blunted and the sugar floats around in the bloodstream much longer.

Scientists identified two main hormones responsible: Glucose-dependent Insulinotropic Polypeptide (GIP) and Glucagon-like Peptide-1 (GLP-1). The problem is that GIP doesn’t work well in diabetics. GLP-1 works beautifully—stimulating insulin, suppressing glucagon, and slowing gastric emptying—but it has a fatal flaw: It is destroyed by the enzyme DPP-4 within minutes of entering the bloodstream.

This led to two distinct pharmaceutical strategies. The earlier one was DPP-4 inhibitors: drugs like the “gliptins” block DPP-4, making the body’s own GLP-1 last longer. They are well-tolerated, but their ability to lower blood sugar is modest and they generally do not cause weight loss.

The newer strategy was to engineer versions of GLP-1 that resist degradation. This is where the Gila monster strolled in. In the 1990s, while researching hormone-like compounds, Dr. John Eng noted a striking similarity between exendin-4, a peptide found in Gila monster venom, and GLP-1. Better yet, exendin-4 resists breakdown by DPP-4!

The Evidence: Efficacy Beyond the Hype 

The first GLP-1 agonist, exenatide (Byetta, approved in 2005), required twice-daily injections and produced modest weight loss. But the pharmacology evolved rapidly. We moved to once-daily liraglutide, and then to the once-weekly heavyweights: dulaglutide, semaglutide (Ozempic and Wegovy), and the dual GIP and GLP-1 agonist tirzepatide (Mounjaro and Zepbound). 
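
That dosing progression is, at bottom, a half-life story. Here is a minimal sketch assuming simple first-order (exponential) elimination and approximate published half-lives; the numbers are illustrative, not dosing guidance:

```python
# Why engineered GLP-1 agonists can be dosed weekly while the native
# hormone vanishes in minutes: a first-order elimination sketch.
# Half-lives are approximate published values, used here illustratively.

def fraction_remaining(hours: float, half_life_hours: float) -> float:
    """Fraction of a dose still circulating after `hours`, assuming
    simple exponential (first-order) elimination."""
    return 0.5 ** (hours / half_life_hours)

half_lives_hours = {
    "native GLP-1": 2 / 60,  # ~2 minutes; destroyed by DPP-4
    "exenatide": 2.4,        # hours; hence twice-daily injections
    "liraglutide": 13.0,     # hours; hence once-daily dosing
    "semaglutide": 168.0,    # ~1 week; hence once-weekly dosing
}

for drug, t_half in half_lives_hours.items():
    left = fraction_remaining(24, t_half)
    print(f"{drug:>13}: {left:6.2%} of dose left after 24 hours")
```

Roughly 90 percent of a semaglutide dose is still on board a full day after injection, while native GLP-1 is gone within minutes; that single number captures the engineering problem the Gila monster helped solve.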

The clinical trials, called LEAD, SUSTAIN, PIONEER, STEP, and SURPASS (you’ve got to just love the creative acronyms!), have generated data that are hard to dismiss: 

Glycemic Control: These drugs consistently outperform most oral antidiabetics in lowering blood sugar by 10 to 20 percent. 

Weight Loss: This is the game changer. While early drugs produced 2–4 kg of weight loss over six months, the newer agents are producing results previously only seen with surgery. In the STEP-1 trial, semaglutide 2.4 mg resulted in an approximately 15 percent body weight reduction. Tirzepatide pushed this further, achieving up to 22 percent weight loss in the SURMOUNT-1 trial. That is the effect of a 250-pound person losing 55 pounds! Who wouldn’t want some of that?! 

Cardiovascular Outcomes: Perhaps most importantly, these drugs are not like some that just make numbers look better; they are saving lives. Liraglutide and semaglutide have demonstrated significant reductions in major adverse cardiovascular events (MACE), including heart attack and stroke, in high-risk populations. The SELECT trial recently showed semaglutide reduces MACE by 20 percent even in nondiabetic patients with cardiovascular disease. But don’t be fooled: it is not likely that these drugs have specific effects on the heart. It is probable that the fat loss alone is causing these benefits. 

Some Skeptical Scrutiny: The Risks 

If a drug sounds too good to be true, we must look for the catch. GLP-1 agonists have plenty.

The “Puke” Diet? The most common side effects of GLP-1 agonists are gastrointestinal: nausea, vomiting, diarrhea, and bloating. In some trials, up to 45 percent of patients experienced nausea. While this usually subsides, it raises a valid question: Are people losing weight because their metabolism is optimized, or because they feel too sick to eat? The mechanism involves central appetite suppression in the hypothalamus, but the “gastric braking” effect is real and unpleasant for many. 

The Pancreas and Thyroid Scare. Early observational data suggested a link between GLP-1 agonists and pancreatitis and pancreatic cancer. However, extensive reviews have not confirmed a causal link to pancreatic cancer, though a slight increase in pancreatitis persists in some data. This makes sense, as one of the major sites of GLP’s effects is on the pancreas. In the thyroid, these drugs cause C-cell tumors in rodents. Humans have far fewer GLP-1 receptors on their thyroid C-cells than rats, and so far no evidence of increased thyroid cancer has been confirmed in humans. Still, the Black Box warning remains: If you have a family history of endocrine tumors or medullary thyroid cancer, these drugs are not for you. 


Vanishing Muscle. Weight loss via GLP-1 agonists is not just fat loss, so overall body composition must be monitored. In the STEP-1 trial, DEXA scans showed that lean body mass (muscle and bone) accounted for nearly 40 percent of the weight lost. In older adults, this raises the specter of “sarcopenic obesity”—being frail and weak despite having excess fat. Losing muscle mass compromises physical function and metabolic health. If we are simply shrinking patients without preserving their strength, we may be trading one set of problems for another. Now, regular and increased exercise is part of the prescription for all patients taking GLP drugs, but studies on how well this works are still in progress. 

The Perioperative Peril. Because GLP-1 agonists delay gastric emptying, there have been reports of patients aspirating (inhaling) gastric contents during anesthesia, even after standard fasting protocols. This is a new, practical safety concern that surgical societies are rushing to address. 

Mental Health. Reports of suicidal ideation appeared in postmarketing monitoring of GLP-1 agonist users, prompting investigations by European regulators. However, recent large cohort studies have not supported an increased risk of suicidality compared to other diabetes medications. As with all centrally acting drugs, vigilance is required, but the current data are reassuring. 

A Lifetime Prescription? The most significant caveat for GLP-1 agonists is durability. Obesity can be a chronic, relapsing disease. Trials show that when patients stop taking semaglutide, they regain two-thirds of the lost weight within a year, and cardiometabolic improvements revert toward baseline. This implies that these are not “cures” but lifelong therapies, much like blood pressure medication. 

Financial Toxicity. As I write this, these drugs are prohibitively expensive, creating a massive public health gap. We also saw shortages that left diabetic patients unable to fill prescriptions because the supply was diverted to off-label weight loss use. GLP-1 agonists are not expensive to produce, however, and the patent on Ozempic expired in January of 2026 in Canada and China (it lasts until 2030 in the U.S.), so I expect the market to bring the costs down dramatically over the next few years. As of this year, close to 12 percent of Americans have tried one of these drugs at least once. 

Needles Versus Pills 

If there is one thing that holds patients back from the current crop of injectable incretins, it is the needle. Despite the efficacy of weekly injections, people prefer pills. The pharmaceutical industry, never one to leave money on the table, has been racing to develop an oral alternative that doesn’t require the strict fasting rituals of earlier attempts like oral semaglutide. Enter orforglipron, the latest contender in the “nonpeptide small molecule” class, which promises the benefits of GLP-1 agonists without the injection or the fuss. 

Unlike its peptide predecessors, which are digested by stomach acid unless armored with absorption enhancers, orforglipron is a small molecule designed to survive the GI tract and activate the GLP-1 receptor directly. The data from the ATTAIN-1 trial, published in September 2025, look good. Patients on the 36 mg dose achieved an average weight loss of 11.2 percent over 72 weeks, compared to just 2.1 percent for placebo. No needles. And this pill does not require the “empty stomach, no water, wait 30 minutes” song-and-dance required by oral semaglutide; it can be taken with or without food. 


However, let’s look a little past the convenience. While an 11.2 percent average weight loss is clinically significant, it trails behind the 13.7 percent average reduction seen with semaglutide and 20.2 percent with tirzepatide. Furthermore, the biology of GLP-1 agonism remains the same regardless of delivery method: You cannot cheat physiology. In the ATTAIN-1 trial, adverse events led to treatment discontinuation in up to 10.3 percent of patients on the drug, compared to only 2.7 percent on placebo. The side effects are the usual suspects—gastrointestinal distress, nausea, and constipation—confirming that oral delivery does not bypass the “gastric braking” misery. 

We must also remain vigilant regarding safety. The development of a similar small molecule, lotiglipron, was unceremoniously halted due to liver toxicity concerns. While orforglipron has passed its Phase 3 hurdles without these specific signals so far, the history of pharmacology teaches us that rare, serious adverse events often lurk in the postmarketing shadows. 

Additionally, while proponents argue that small molecules are cheaper to manufacture than biologics, whether those savings will be passed on to the patient or simply absorbed into the profit margins remains to be seen, with projected self-pay costs in some cases exceeding $1,000 per month. Orforglipron represents a technological leap, but it is not a magic wand; it is simply a more convenient way to induce the same physiological trade-offs we have seen over the last several years with the shots. 

Conclusion 

Prior to the incretin era, our ability to manage the twin epidemics of diabetes and obesity was dishearteningly limited. GLP-1 receptor agonists represent a hard-earned pharmacological breakthrough, offering potent glucose control and unprecedented weight loss. 

However, skepticism is still warranted regarding their indiscriminate use. They are already being used in numerous off-label ways: shedding a few pounds before a wedding, allegedly decreasing cravings for addictive substances like alcohol and narcotics, and purportedly even treating Alzheimer’s and Parkinson’s disease. There are ongoing studies for these uses, but early data are weak and the risks are unknown. These are serious medications with serious side effects, and they may require lifelong commitment. 

Caveat emptor.

Categories: Critical Thinking, Skeptic

Iranians Are Rejecting Theocracy: The Islamic Republic’s Unintended Legacy

Sat, 01/17/2026 - 4:04pm
The Vote 

On March 30–31, 1979, Iranians went to the polls. The ballot contained a single question: Should Iran become an Islamic Republic? The choices were “Yes” (Green) or “No” (Red). The official result: 98.2% voted Yes.1

Fifty-Eight Days Earlier 

On February 1, 1979, Ayatollah Khomeini returned to Iran after fourteen years in exile. Millions filled the streets of Tehran—the estimates range from two to five million.2 But the man they cheered was a carefully constructed image. During the flight, Khomeini remained secluded in the upper deck of the chartered Boeing 747, praying.3 When the plane landed, he chose to be helped down the stairs by the French pilot rather than his Iranian aides, a calculated move to prevent any subordinate from sharing the spotlight.4

He chose his first destination deliberately: Tehran’s main cemetery, where those who died during the revolution were buried. The crowd was so dense his motorcade could not pass; he took a helicopter instead.5 By speaking among the graves, Khomeini positioned himself as the guardian of those who died in the revolution and as someone who would fulfill what they had sacrificed for. 

In the weeks that followed, Khomeini offered both material goods and spiritual salvation. He promised free electricity, free water, and housing for every family. Then he added the caveat that would define the coming era: “Do not be appeased by just that. We will magnify your spirituality and your spirits.”6

A Coalition of Contradictions 

The crowd that greeted him was not a monolith, but a coalition of contradictions. Marxists marched hoping for a socialist future free of American influence. Nationalists and liberals sought constitutional democracy. The devout sought governance by Sharia—and for them, the revolution was holy war: the Shah represented taghut, the Quranic term for tyrannical powers that lead people from God, and those who died fighting him became shahid, martyrs. 

Khomeini managed these competing visions by keeping his actual plans vague. He spoke of freedom, justice, and independence, terms each faction could interpret as it wished.7 His blueprint for clerical rule, Velayat-e Faqih, remained in the background. Abolhassan Bani-Sadr, who would become the Islamic Republic’s first president, later recalled: “When we were in France, everything we said to him he embraced and then announced it like Quranic verses without any hesitation. We were sure that a religious leader was committing himself.”8 Khomeini himself would later state: “The fact that I have said something does not mean that I should be bound by my word.”9

Ayatollah Mahmoud Taleghani casts his vote in the March 1979 Islamic Republic referendum.

The Empty Phrase 

Now, let’s return to the ballot. 

A republic places sovereignty in the people. Citizens choose their laws. An Islamic state places sovereignty in God, but not “God” in some abstract, philosophical sense. The God of the Islamic Republic is specifically Allah as understood in Shia Islam: a God who communicates through the Quran, whose will was interpreted by the Prophet Muhammad, then by the twelve Imams, and now (in the absence of the hidden Twelfth Imam) by qualified Islamic jurists. This is not a deist clockmaker or a personal spiritual presence. This is a God with specific laws, specific requirements, and specific men authorized to speak on His behalf. 

So, what did God want? The ballot never said. 

The 1979 Iranian Islamic Republic referendum ballot showing the “نه” (No) option in red. Voters chose between a simple yes or no on whether Iran should become an “Islamic Republic”—a phrase containing no constitution, no enumerated rights, and no definition of which Islamic laws would apply or who would interpret them.

“Islamic Republic” contained no details. No constitution, no enumerated rights, no definition of which Islamic laws would apply or who would interpret them. Voters were not choosing a specific system of government. They were choosing a phrase, and trusting that its meaning would be filled in later by men they believed spoke for God. 

For those paying attention, there were clues. Khomeini had written extensively about Velayat-e Faqih (the Guardianship of the Islamic Jurist), a system in which a senior cleric would hold supreme authority as God’s representative on Earth. He had lectured on it in Najaf. He had published a book.10 But in the noise of revolution, in the flood of promises about free electricity and spiritual elevation, these details were background static. The crowds were not voting on constitutional theory. They were voting on hope. 

Ninety-eight percent voted Yes. Forty-seven years later, we can measure the result in Iranian society. 

Religious Faith 

For this case study to be valid, we must establish a baseline. Was Iranian society already irreligious before 1979, or has religiosity declined under the theocracy? 

Available evidence suggests the latter. 

In 1975, a survey of Iranian attitudes found over 80% of respondents observing daily prayers and fasting during Ramadan. The methodology is not fully documented in accessible sources.11 However, the broader historical record supports the baseline: the 1979 revolution mobilized millions under explicitly Islamic banners, clerical figures commanded genuine social authority, and the Iranian government’s own leaked 2023 survey found 85% of respondents saying society has become less religious than it was.12 Forty-seven years later, mosques are empty. 

Official Iranian census data reports 99.5% of the population as Muslim.13 This figure measures legal status, not belief. Under Iranian law, a child born to a Muslim father is automatically registered as Muslim, and leaving Islam carries severe legal consequences. While formal executions for “apostasy” are relatively rare—the regime prefers to charge dissidents with crimes like “Enmity against God” or “Insulting the Prophet”—the threat is sufficient to enforce public silence.

Saadatabad district, Tehran, January 8, 2026: A mosque burns amid protests. (Source: Press Office of Reza Pahlavi)

In June 2020, the Group for Analyzing and Measuring Attitudes in Iran (GAMAAN) surveyed over 50,000 respondents using methods designed to protect anonymity.14

Results: 

  • 32.2% identified as Shia Muslim 
  • 22.2% selected “None” 
  • 8.8% identified as Atheist 
  • 7.7% identified as Zoroastrian 
  • 5.8% identified as Agnostic 
  • 1.5% identified as Christian 

While this online sample skews urban (93.6% vs. Iran’s 79%) and university-educated (85.4% vs. 27.7% nationally), the magnitude of divergence from official statistics—32% Shia vs. 99.5% in census data—is too large to explain through sampling bias alone. Meanwhile, face-to-face surveys suffer the opposite problem: when GAMAAN asked respondents if they’d answer sensitive questions honestly over the phone, 40% said no.15
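
To make the “sampling bias alone” point concrete, here is a deliberately extreme back-of-the-envelope reweighting (my illustrative arithmetic, not GAMAAN’s methodology): assume every non-university-educated Iranian identifies as Shia, cap the educated subgroup at the sample’s overall Shia share, and post-stratify to the national education mix.

```python
# Worst-case post-stratification sketch. The assumptions are labeled and
# deliberately inflate the estimate; this is illustrative arithmetic,
# not GAMAAN's methodology.

shia_in_sample = 0.322  # Shia share observed in the GAMAAN online sample
edu_national = 0.277    # university-educated share of Iran's population

# Extreme assumptions, both favoring the official figure:
shia_among_educated = shia_in_sample  # assume no higher than the sample overall
shia_among_non_educated = 1.0         # assume 100% of the less educated are Shia

upper_bound = (edu_national * shia_among_educated
               + (1 - edu_national) * shia_among_non_educated)

print(f"Inflated upper bound on national Shia share: {upper_bound:.1%}")
# -> 81.2%, still roughly 18 points below the official 99.5% census figure
```

Even an adjustment this generous tops out around 81 percent, far short of 99.5 percent, which is why the divergence cannot be an artifact of the sample’s educational skew alone.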

An interesting outcome of this study is that Iran has only about 25,000 practicing Zoroastrians (out of a total population of around 92.5 million), yet 7.7% selected this identity. Researchers interpret this as “performing alternative identity aspirations”—claiming pre-Islamic Persian heritage to reject imposed Islamic identity.16

The key findings are, however, clear: 44.5% selected a non-Islamic category when asked their current religion and 47% reported transitioning from religious to non-religious during their lifetime. 

The second figure suggests active deconversion rather than inherited secularism. 

In 2024, a classified survey by Iran’s Ministry of Culture and Islamic Guidance (conducted in 2023) was leaked to foreign media.17 This data provides a comparison point from within the regime itself. 

Indicator | 2015 | 2023
Support separating religion from state | 30.7% | 72.9%
Pray “always” or “most of the time” | 78.5% | 54.8%
Never pray | 3.1% | 22.2%
Never fast during Ramadan | 5.1% | 27.4%

The same survey found 85% of respondents said Iranian society had become less religious in the previous five years. Only 25% reported trusting clerics. 

Based on my years of closely following Iranian society, the pace of religious abandonment has accelerated significantly since the 2022 “Woman, Life, Freedom” uprising. The leaked government data confirms this trajectory: the sharpest shifts in prayer and fasting occurred within the 2015–2023 window, with 85% saying society had grown less religious in just the previous five years. 

In February 2023, senior cleric Mohammad Abolghassem Doulabi stated that 50,000 of Iran’s approximately 75,000 mosques had closed due to low attendance, a claim partially corroborated by the leaked government survey finding only 11% always attend congregational prayers.18

Election participation has also declined. Official turnout in the June 2024 presidential election was 39.93%, the lowest in the Islamic Republic’s history.19

The Evidence on the Streets 

The data on paper is corroborated by the specific vocabulary of the street. The protest chants have evolved from requesting reform to rejecting the entire theological framework. 

Art by Hamed Javadzadeh — Woman, Life, Freedom Movement (2022)

Consider the chant: “Neither Gaza nor Lebanon, I sacrifice my life for Iran.” 

This is a direct rejection of the regime’s core ideology. The Islamic Republic prioritizes the Ummah—the transnational community of believers—over the nation-state. By rejecting funding for Hamas and Hezbollah in favor of national interests, protesters are secularizing their priorities: the Nation has replaced the Faith as the object of ultimate concern. 

Even more specific is the chant: “Death to the principle of Velayat-e Faqih.” 

The protestors are not merely calling for the death of the dictator (Khamenei); they are targeting the specific theological doctrine that grants him legitimacy. They are rejecting the very concept of divine guardianship. 

But the most striking evidence of the revolution’s failure is the return of the name it sought to erase. In a historical irony that defies all prediction, crowds now chant “Reza Shah, bless your soul,” and call upon Reza Pahlavi, the son of the deposed Shah, to return. The same population that staged a revolution to overthrow a monarchy in 1979 is now invoking that monarchy as the antidote to theocracy. 

The Mechanism 

A note on terminology: When this article refers to “Allah,” it means the legislative deity of the Islamic Republic—a God with enforceable commands interpreted by authorized clerics. This is distinct from the personal God that 78% of Iranians still believe in. 

As mentioned earlier, Iran’s constitution establishes Velayat-e Faqih—the Guardianship of the Islamic Jurist. Article 5 declares that in the absence of the Twelfth Imam (a messianic figure believed to have been in supernatural hiding since the 9th century), authority belongs to a qualified jurist. The Tony Blair Institute’s analysis states it directly: “the supreme leader’s mandate to rule over the population derives from God.”20 Khamenei’s own representative, Mojtaba Zolnour, declared in 2009: “In the Islamic system, the office and legitimacy of the Supreme Leader comes from God, the Prophet and the Shia Imams, and it is not the people who give legitimacy to the Supreme Leader.”21

This is not metaphor. The system’s legitimacy rests on the claim that its laws are Allah’s laws, its punishments are Allah’s punishments, its wars are Allah’s wars. 

When morality police detained Mahsa Amini, leading to her death, they were enforcing the mandatory religious duty of “Forbidding the Wrong.” When courts execute apostates, they enforce Allah’s law. When the regime sends billions to Hezbollah while Iranians face poverty, it pursues Allah’s mission. When it pursues a nuclear program that invites crushing sanctions, it frames the resulting economic ruin not as policy failure, but as a holy “Resistance” against the enemies of Islam. Every act of misrule carries Allah’s signature.


Khorramabad, Iran, January 8, 2026: Protesters raise the pre-1979 lion-and-sun flag, described as a symbol of secular restoration, atop a statue of the Ayatollah. (Source: Press Office of Reza Pahlavi)

In a secular dictatorship, citizens can hate the dictator while preserving their faith. The North Korean who despises Kim Jong-un can still pray. But in a theocracy, the oppressor and God speak with one voice. To oppose the oppressor is to oppose God. To want freedom is to reject divine authority. 

The regime created conditions where, for many, opposing political authority became entangled with questioning religious authority. 

The Psychology of Religious Rebellion 

Jack Brehm’s reactance theory (1966) demonstrates that when people perceive threats to their freedom, they become motivated to restore it, often by embracing the forbidden alternative.22 Subsequent research has applied this specifically to religion. Roubroeks, Van Berkum, and Jonas (2020) found that restrictive religious regulations can trigger reactance that leads to both heresy (holding beliefs contrary to orthodoxy) and apostasy (renouncing religious affiliation entirely).23

The critical insight: In cases of psychological reactance, the emotional pushback against coercion often precedes the intellectual dismantling of the belief system. 

The sequence is rarely a straight line, but the components are clear: 

  1. Coercion: The lived experience of religious enforcement 
  2. Dissonance: The widening gap between the regime’s claims of divine justice and the reality of corruption and violence 
  3. Access: The internet provides a “vocabulary of dissent” 

This third point is crucial. Iran’s internet users grew from 615,000 in 2000 to over 70 million today.24 Despite billions spent on censorship, officials admit 80–90% of Iranians use VPNs, which let users circumvent restrictions by making their traffic appear to originate in another country.25

For the intellectually curious, the internet offered arguments against Islamic theology that were previously banned. But for the average citizen, it offered something perhaps more powerful: validation. It showed them that their anger was shared. It broke the “pluralistic ignorance,” the state where everyone privately rejects the norm but publicly conforms because they think they are the only ones. 

Whether through deep study or simple emotional exhaustion, the result was the same: the breaking of the psychological bond between the citizen and the faith. 

The Unintended Outcome 

Iran’s religious decline is among the fastest documented in modern history. Stolz et al. (2025) in Nature Communications established that Europe’s secular transition took approximately 250 years. Iran’s comparable shift from over 80% observing daily prayers in 1975 to 47% reporting lifetime deconversion by 2020 occurred in roughly 45 years. Pew’s global data shows Muslim retention rates averaging 99% across surveyed countries.26

However, Europe secularized without internet or satellite television. Iran’s shift occurred alongside a 90-fold increase in internet access. Theocracy may provide the motive for questioning imposed faith; technology provides the accelerant that compresses generational change into decades. Ex-Muslim testimonies, apostasy narratives, ordinary lives lived without faith—these demonstrated that abandoning religion was survivable. The forbidden became imaginable. Others found arguments that validated what they already felt. The reasoning matched the shape of their anger, and that was enough. 

For forty-seven years, the Islamic Republic worked to manufacture belief. Mandatory religious education from childhood. State control of media. Morality police enforcing dress and behavior. Apostasy punishable by death. A constitution grounding all authority in God. They did not leave this to chance. 

The data suggests it did not work.

Categories: Critical Thinking, Skeptic

Two Movies on One Screen: Conflicting Narratives of the Renee Good Shooting in Minnesota

Tue, 01/13/2026 - 8:38am

Anyone following recent events in Minneapolis has likely noticed something strange. People watching the same videos, reading the same headlines, and reacting to the same street-level events often seem to be describing entirely different realities. Conversations quickly break down, not because people disagree about what should be done, but because they cannot even agree on what is happening. It’s as if people are watching two completely different movies on one screen.

The “two-movies-one-screen” concept was first coined by Scott Adams, the creator of Dilbert turned political commentator, to describe radically different interpretations of the same political events. People with access to the same set of facts come away with completely different understandings of what is happening. In some cases, each side seems genuinely unaware that the other interpretation even exists.

This is not merely disagreement, and it goes beyond ordinary bias. It is also not quite what psychologists usually mean by cognitive dissonance. Cognitive dissonance, first described by Leon Festinger in the 1950s, occurs when people experience psychological discomfort from holding conflicting beliefs or encountering information that contradicts their existing views, and then attempt to reduce that discomfort through rationalization or reinterpretation of the facts. In cases like the Renee Good shooting in Minnesota, however, something else seems to be happening. So, what is going on?

From a psychological standpoint, this resembles dissociation more than cognitive dissonance. Dissociation refers to a class of mental processes in which certain thoughts, perceptions, or experiences are kept out of conscious awareness. As clinical psychologists have long noted, dissociation functions as a defensive mechanism, shielding the individual from information that is experienced as overwhelming or intolerable. The mind does not reject the data after evaluating it. It fails to perceive it in the first place.

The following is an attempt to provide a neutral description of the events, followed by two very different interpretations.

On January 7, 2026, in Minneapolis, Minnesota, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent during an operation targeting undocumented immigrants for deportation. Good was a U.S. citizen and a mother of three from previous relationships, and she was present at the scene with her wife, Rebecca (Becca) Good.

Multiple videos from bystanders, body cameras, and agent phones capture the event, showing a chaotic scene lasting about three minutes.


ICE Agent’s Cellphone Video (Credit: Alpha News)

Renee Good was in her SUV, which was blocking or near the path of ICE vehicles during an arrest operation. Agents approached, giving conflicting commands: some ordered her to leave, while others demanded she exit the vehicle. One agent attempted to open her door and banged on the window.

Rebecca Good, Renee’s wife, was outside the vehicle filming and confronting agents.

At one point during the interaction, Renee’s wife urged her to “drive, baby, drive” as the situation escalated. Good maneuvered the vehicle forward and started to accelerate. The vehicle made contact with an ICE agent who was positioned in front; the agent fired through the windshield, striking her in the face and killing her.


Bystander Video (Credit: Nick Sortor)

According to official statements from ICE and the Department of Homeland Security (DHS), the shooting occurred after Good allegedly used her vehicle as a weapon, attempting to run over an agent who then fired in self-defense. Renee and Rebecca Good were part of “ICE Watch” groups monitoring, protesting, and interfering with ICE operations. The ICE agent who fatally shot Good was injured and hospitalized following a prior incident in June 2025, during which an undocumented immigrant with an open warrant for child sexual assault dragged him with his vehicle while attempting to flee arrest.


Bystander Video 2 (Credit: @Dana916 via X.com)

Progressive voices view Good’s killing as an example of ICE overreach, law enforcement brutality, and systemic abuse of power, especially against citizens exercising First Amendment rights. They emphasize Renee was a “legal observer” and had a constitutional right to protest. They further note that Good was an unarmed American citizen on a public road who was fatally shot in the face and head by a masked federal agent. They also interpret the footage as showing Good attempting to navigate away from the scene rather than intentionally trying to harm the agent. They further warn against normalizing state killings, as in statements made by Rep. Alexandria Ocasio-Cortez (D), who responded to Vice President JD Vance’s defense of the ICE agent by calling it a “regime willing to kill its own citizens.” This sentiment is tied to broader concerns about police/ICE militarization against undocumented immigrants, and to the observation that even if Good erred (e.g., by not complying with instructions from federal law enforcement officers), the error wasn’t worth her life, and that society needs a higher bar for lethal force. 

Conservative commentators frame the shooting as justified self-defense against anti-ICE radicals who disrupted lawful operations. They emphasize Renee’s alleged aggression and Rebecca’s role in escalating the situation by shouting “You wanna come at us? Go get yourself lunch, big boy,” portraying the couple as part of a coordinated harassment campaign rather than passive observers or demonstrators. They also argue Good was an active participant and perpetrator obstructing enforcement of long-standing immigration law, and someone attempting to flee from the scene rather than simply a citizen attending a protest. They maintain that the shooting was tragic but that law enforcement (and citizens) can use lethal force if they reasonably believe they face imminent serious harm. Further, they make the following distinction: debating whether the officer should or should not have fired is rational, but refusing to acknowledge that being struck or pushed by a vehicle is a basis for self-defense isn’t. 

These conflicting media narratives matter because most people do not build their understanding of the world through direct experience. Our personal encounters are limited. The rest of our mental model is assembled from stories. Indeed, research in cognitive psychology and media studies consistently shows that humans rely heavily on narrative to organize information and assign meaning. In other words, we are not natural statisticians. As psychologists such as Jerome Bruner and Daniel Kahneman have shown, people reason intuitively through stories, examples, and emotionally salient cases, often treating mediated experience as a stand-in for reality itself. This is why propaganda is most effective when it does not look like propaganda.

Many people assume propaganda is something obvious that you notice and argue with. In reality, the most powerful propaganda works through repetition rather than persuasion. Social psychologists have documented what is known as the “illusory truth effect,” in which repeated statements are more likely to be judged as true, regardless of their accuracy. When a moral narrative is replayed often enough, it stops feeling like a claim and starts feeling like memory.

Consider the recurring portrayal of tech executives in films and television. A wealthy founder speaks in vague abstractions, dismisses ethical concerns, and pursues profit at the expense of ordinary people. The specifics vary, but the moral structure remains the same. Whether any individual depiction reflects the reality of modern technology firms is almost beside the point. After repeated exposure, viewers absorb not just a critique of corporate excess, but an intuitive framework for interpreting innovation, wealth, and motive. Repetition trains audiences to assign intent instantly and to stop questioning it.

This works because fiction bypasses our analytical defenses. Experimental research on narrative persuasion shows that people are less likely to counterargue when they are emotionally absorbed in a story. Psychologists refer to this as “transportation,” a state in which attention and emotion are captured by a narrative, making viewers more receptive to its implicit assumptions. We do not fact-check television dramas. We empathize with them. Their moral premises are absorbed quietly as background knowledge.

For most of us, the names Jeff Bezos, Elon Musk, Mark Zuckerberg, or Peter Thiel evoke an immediate moral impression. But how did that impression form? Have you, for example, ever heard them speak at length, or do you know how they run their companies? Do you understand what motivates them? Do they have a good sense of humor?

There is also a structural problem with storytelling itself. Everyday reality, especially everyday crime, is usually chaotic, senseless, and narratively unsatisfying. Criminologists have long observed that much violent crime lacks coherent motives or moral meaning. Writers, understandably, select stories that feel legible, purposeful, and emotionally engaging. But those selections shape our expectations of reality and thus our perception, and make us see otherwise messy events as morally clearer than they actually are.

The result is a moral universe in which certain kinds of harm are treated as profound moral ruptures, while other kinds are treated as routine or unfortunate facts of life. Violence committed by some characters is framed as a social crisis demanding urgent moral response. Similar violence committed by others is portrayed as tragic but unremarkable, something to be managed rather than interrogated.

A clear example appears in the pilot of The Pitt. A dramatic subway assault is immediately interpreted through a moral lens before basic facts are known. The graphic depiction gives viewers the feeling that they are seeing something raw and unfiltered. At the same time, the narrative structure carefully guides inference and sympathy. In the same episode, a different shooting is treated as mundane and procedural. It carries little moral weight and prompts no larger reflection.

The show is not depicting reality. It is presenting a moral map.

This does not require a conspiracy, and it does not require malicious intent. Many writers openly acknowledge that fiction shapes social norms and expectations. Cultural theorists from Walter Lippmann to contemporary media scholars have noted that narratives function as “pictures in our heads,” guiding perception long before conscious judgment enters the picture. What is new is the growing cultural distance between those producing these narratives and the audiences consuming them, combined with a strong confidence that the moral direction of society is already settled.

When this kind of storytelling dominates, it does more than persuade. It trains perception itself. Viewers learn what to notice, what to ignore, and which conclusions should feel obvious. Over time, alternative interpretations stop feeling like interpretations at all. They begin to look irrational or delusional.

This is how “the other movie” disappears.

♦ ♦ ♦

A functioning society does not require agreement on every issue. It does require a shared reality. When large groups of people cannot even see what others are responding to, debate becomes impossible. You cannot resolve disagreements if one side experiences the other as hallucinating.

The answer is not counter-propaganda, and it is not simply more facts. Research on motivated reasoning shows that facts alone rarely change minds when perceptions themselves are structured by narrative. What is required instead is closer attention to how stories shape perception. What they highlight. What they omit. And how repetition turns fiction into intuition.

Was Renee Good heroically intervening in an unlawful abduction and a victim of reckless police violence? Or was she someone who interfered with a lawful enforcement action and nearly ran over an officer? Each interpretation feels obvious to those who hold it, and nearly invisible to those who do not. If you analyze both long enough, you might start to see the narratives and the chain of events that lead one to interpret this particular incident in a particular way after watching the exact same three minutes of video.

Skepticism, properly understood, is not just about questioning explicit claims. It is about examining why certain narratives feel natural, why others feel unthinkable, and why some movies seem to be playing on the screen while others are never seen at all.

Categories: Critical Thinking, Skeptic

How “Us vs. Them” Takes Hold: Tribalism in Byzantium, Sri Lanka, and Modern America

Fri, 01/09/2026 - 10:36am

Sixth-century Byzantium was a city divided by race hatred so intense that people viciously attacked each other, not only in the streets but also in churches. The inscription on an ancient tablet conveys the raw animus spawned by color differences: “Bind them! … Destroy them! … Kill them!” The historian Procopius, who witnessed this race antagonism firsthand, called it a “disease of the soul” and marveled at its irrational intensity:

They fight against their opponents knowing not for what end they imperil themselves … So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place, neither to the ties of marriage nor of relationship nor of friendship.1

This hostility sparked multiple violent clashes and riots, culminating in the Nika Riot of 532 CE, the biggest race riot of all time: 30,000 people perished, and the greatest city of antiquity was reduced to smoldering ruins.

But the Nika Riot wasn’t the sort of race riot you might imagine. The race in question was the chariot race. The color division wasn’t between black and white but between blue and green—the colors of the two main chariot-racing teams. The teams’ supporters, who were referred to as the Blue and Green “factions,” proudly wore their team colors, not just in the hippodrome but also around town. To help distinguish themselves, many Blues also sported distinctive mullet hairstyles, like those of 1970s rock stars. Both Blues and Greens were fiercely loyal to their factions and their colors. The chariots and drivers were a secondary concern; the historian Pliny asserted that if the drivers were to swap colors in the middle of a race, the factions would immediately switch their allegiances accordingly.


The race faction rivalry had existed for a long time before the Nika Riot, yet Procopius writes that it had only become bitter and violent in “comparatively recent times.” So, what caused this trivial division over horse-racing teams to turn so deadly? In short, it was the Byzantine version of “identity politics.”

Detail of “A Roman Chariot Race,” depicted by Alexander von Wagner, circa 1882. During the Nika Riots that took place against Byzantine Emperor Justinian I in Constantinople over the course of a week in 532 C.E., tens of thousands of people lost their lives and half the city was burned to the ground. It all started over a chariot race. (Image courtesy of Manchester Art Gallery)

Modern sociological research helps explain the phenomenon. Decades of studies have demonstrated the dangerous power of the human tribal instinct. Surprisingly, it doesn’t require “primordial” ethnic or tribal distinctions to engage that impulse. Minor differences are often sufficient to elicit acute ingroup-outgroup discrimination. The psychologist Henri Tajfel demonstrated this in a landmark series of studies to determine how minor those differences can be. In each successive study, Tajfel divided test subjects into groups according to increasingly trivial criteria, such as whether they preferred Klee or Kandinsky paintings or underestimated or overestimated the number of dots on a page. The results were as intriguing as they were disturbing: even the most trivial groupings induced discrimination.23

However, the most significant and unexpected discovery was that simply telling subjects that they belonged to a group induced discrimination, even when the grouping was completely random. Upon learning they officially belonged to a group, the subjects reflexively adopted an us-versus-them, zero-sum game attitude toward members of other groups. Many other researchers have conducted related experiments with similar results: a government or an authority (like a researcher) designating group distinctions is, by itself, sufficient to spur contentious group rivalry. When group rewards are at stake, that rivalry is magnified and readily turns malign.

The Robbers Cave Experiment, conducted in 1954 by social psychologists Muzafer and Carolyn Sherif, investigated intergroup conflict and cooperation. The study involved 22 eleven-year-old boys at a summer camp in Robbers Cave State Park, Oklahoma. (Photo: The University of Akron)

The extent to which authority-defined groups and competition for group benefits can foment nasty factionalism was demonstrated in the famous 1954 Robbers Cave experiment, in which researchers brought boys with identical socioeconomic and ethnic backgrounds to a summer camp, dividing them randomly into two official groups. They initially kept the two groups separate and encouraged them to bond through various group activities. The boys, who had not known each other before, developed strong group cohesion and a sense of shared identity. The researchers then pitted the groups against each other in contests for group rewards to see if inter-group hostility would arise. The group antagonism escalated far beyond their expectations. The two groups eventually burned each other’s flags and clothing, trashed each other’s cabins, and collected rocks to hurl at each other. Camp staff had to intervene repeatedly to break up brutal fights. The mounting hostility and risk of violence induced the researchers to abort that phase of the study.4 Other researchers have replicated this experiment: one follow-up study resulted in knife fights, and a researcher was so traumatized he had to be hospitalized for a week.56

How does this apply to the Blues and Greens? As in the Tajfel experiments, the Byzantine race factions had formed a group division based on a trivial distinction—the preference for a color and a horse-racing team. However, for many years, the rivalry remained relatively benign. This was likely because the emperors had long played down the factional distinction and maintained a tradition of race neutrality: if they favored a faction, they avoided openly showing it. That tradition ended a few years before the Nika Riot, when emperors began openly supporting one faction or the other. More importantly, they extended their support outside the hippodrome with official policies that benefited members of their preferred faction. The emperors Marcian, Anastasius, and Justinian adopted official employment preferences, allocating positions to members of their favored faction and blocking the other faction from coveted jobs. To cast it in modern terms, they began a program of “race-based” affirmative action and identity politics.78

In nearly all the countries where affirmative action programs have been implemented, they have had an invidious effect on the group that benefits, imbuing it with a sense of insecurity and defensiveness over the benefits it receives.

Official recognition of the group distinction enhanced the us-versus-them sense of difference between the factions, and the affirmative action scheme turned this sense of difference into bitter antagonism, which eventually exploded in violence. Procopius, our primary contemporary source, placed the blame for the mounting antagonism and the riots squarely on Justinian’s program of identity politics. It had not only promoted an us-versus-them mindset in the factions, it also incited vicious enmity between them, turning a trivial color preference and sporting rivalry into a deadly “race war.”

Considering how identity politics could elicit violence from randomly assembled groups like the Blues and Greens, it is easy to imagine how disastrous identity politics can be when applied to groups that already have some long-standing, historic sense of difference. Indeed, there have been numerous instances of this in history, most ending tragically. For example, Tutsis and Hutus enjoyed centuries of relatively peaceful coexistence in Rwanda up until Belgian colonialists arrived; when the Belgians issued identity cards distinguishing the two groups and instituted affirmative action, it ossified a formerly porous group distinction and infused it with bitter rivalry, preparing the path to genocide. Likewise, when Yugoslavia instituted its “nationality key” system, with educational and employment quotas for the country’s constituent ethnic groups, it hardened group distinctions, pitting the groups against each other and setting the stage for genocide in the Balkans. And, when the Sri Lankan government opted for identity politics and affirmative action, it spawned violent conflict and genocide that destroyed a once peaceful and prosperous country. This last example—Sri Lanka—is so illustrative of the dangers of identity politics that we’ll examine it in more detail.

Sri Lanka: How Identity Politics Destroyed Paradise

She is a fabulous isle just south of India’s teeming shore, land of paradise … with a proud and democratic people … Her flag is the flag of freedom, her citizens are dedicated to the preservation of that freedom … Her school system is as progressive as it is democratic. —1954 TWA TOURIST VIDEO

Sri Lanka is an island off India’s southeast coast blessed with copious amounts of arable land and natural resources. It has an ethnically diverse population, with the two main groups being Sinhalese (75 percent) and Tamils (15 percent). Before Sri Lanka’s independence in 1948, there was a long history of harmony between these groups. That history goes back at least to the fourteenth century when the Arab traveler Ibn Battuta observed how the different groups “show respect” for each other and “harbor no suspicions.” On the eve of Sri Lanka’s independence, a British governor lauded the “large measure of fellowship and understanding” that prevailed, and a British soldiers’ guide noted that “there are no historic antagonisms to overcome.” With quiescent communal relations, abundant natural resources, and one of the highest literacy rates in the developing world, newly independent Sri Lanka was poised to flourish and prosper. Nobody doubted it would outperform countries like South Korea and Singapore, with the British governor dubbing it “the best bet in Asia.”

It turned out to be a very poor bet. A few years after Sri Lanka’s independence, violent communal conflict erupted, culminating in a protracted civil war and genocide. By the time it ended, over a million people had been displaced or killed. Sri Lanka’s per capita GDP, which was on par with South Korea’s in 1960, was only one-tenth of it by 2009. As in sixth-century Byzantium, identity politics precipitated the calamity.

Turning a Disparity into a Disaster

At the end of British colonial rule in Sri Lanka, there was significant educational and income disparity between Sinhalese and Tamils. This arose by happenstance rather than because of discriminatory policy. The island’s north, where Tamils predominate, is arid and poor in resources. Because of this, the Tamils devoted their productive energy toward developing human capital, focusing on education and cultivating professional skills. This focus was abetted by American missionaries, who set up schools in the north, providing top-notch English-language education, particularly in math and the physical sciences. As a result, Tamils accounted for an outsized proportion of the better-educated people on the island, particularly in higher-paying fields like engineering and medicine.

Because of the Tamils’ superior education, the British colonial administration hired them disproportionately compared to the Sinhalese. In 1948, for example, Tamils accounted for 40 percent of the clerical workers employed by the colonial government, greatly outstripping their 15 percent share of the overall population. This unequal outcome had nothing to do with overt discrimination against the Sinhalese; it merely reflected the different levels and types of education achieved by the different ethnic groups.

When Sri Lanka gained independence, it passed a constitution that prohibited discrimination based on ethnicity. But a few years after that, an opportunist politician, S.W.R.D. Bandaranaike, figured he could advance his career by cynically appealing to identity politics, stoking Sinhalese envy over the Tamils’ over-representation in higher education and government. He launched a divisive campaign to eliminate the disparity, which spurred the majority Sinhalese to elect him. After his election in 1956, Bandaranaike passed a law that changed the official language from English to Sinhala and consigned students to separate Tamil and Sinhalese education “streams” rather than having them all learn English. As one Sinhalese journalist wrote, this divided Sri Lanka, depriving it of its “link language”:

That began a great divide that has widened over the years. Children now go to segregated schools or study in separate streams in the same school. They don’t get to know other people of their own age group unless they meet them outside.

Beyond eliminating Sri Lanka’s common “link language,” this law also functioned as a de facto affirmative action program for Sinhalese. Tamils, who spoke Tamil at home and received their higher education in English, could not gain Sinhala proficiency quickly enough to meet the government’s requirement. So, many of them lost their jobs to Sinhalese. For example, the percentage of Tamils employed in government administrative services dropped dramatically: from 30 percent in 1956 to five percent in 1970; the percentage in the armed forces dropped from 40 percent to one percent.

As has happened in many other countries, Sri Lanka’s identity politics went hand-in-hand with expanded government. Sinhalese politicians made it clear: government would be the tool to redress perceived ethnic disparities. It would allocate more jobs and resources, and that allocation would be based on ethnicity. As one historian writes: “a growing perception of the state as bestowing public goods selectively began to emerge, challenging previous views and breeding mistrust between ethnic communities.” Tamils responded to this by launching a non-violent resistance campaign. With ethnic dividing lines now clearly drawn, mobs of Sinhalese staged anti-Tamil counter-demonstrations and then riots in which hundreds—mostly Tamils—were killed. The us-versus-them mentality was setting in.

Bandaranaike was eventually assassinated by radicals within his own movement. But his widow, Sirimavo, who was subsequently elected prime minister, resolved to maintain his top priorities—expansive government and identity politics. She nationalized numerous industries and launched development projects that were directed by ethnic and political considerations rather than actual need. She also removed the constitutional ban on ethnic discrimination so that she could aggressively expand affirmative action. The existing policies had already cost so many Tamils their jobs that they were now under-represented in government. However, they remained over-represented in higher education, particularly in the sciences, a disparity that Sirimavo and her political allies resolved to eliminate. In a scheme that American universities like Harvard would later emulate, the Sri Lankan universities began to reject high-scoring Tamil applicants in favor of Sinhalese applicants with vastly lower test scores.

Just like Justinian’s “race” preferences, the Sri Lankan affirmative action program exacerbated us-versus-them attitudes, deepening the group divide and spurring enmity between groups. As one Sri Lankan observed:

Identity was never a question for thousands of years. But now, here, for some reason, it is different … Friends that I grew up with, [messed around] with, got drunk with, now see an essential difference between us just for the fact of their ethnic identity. And there are no obvious differences at all, no matter what they say. I point to pictures in the newspapers and ask them to tell me who is Sinhalese and who is Tamil, and they simply can’t tell the difference. This identity is a fiction, I tell you, but a deadly one.9

The lessons of the various affirmative action programs in Sri Lanka were clear to everyone: individuals’ access to education and government employment would be determined by ethnic group membership rather than individual merit, and political power would determine how much each group got. If you wanted your share, you needed to mobilize as a group and acquire and maintain political power at any cost. The divisive effects of these lessons would be catastrophic.

The realization that they would forever be at the mercy of an ethnic spoils system, along with the violent attacks perpetrated against them, induced the Tamils to form resistance organizations—most notably, the Liberation Tigers of Tamil Eelam (LTTE). The LTTE attacked both Sri Lankan government forces and individual Sinhalese, initiating a deadly spiral of attacks and reprisals in which both sides committed the sort of atrocities that are tragically common in ethnic conflicts: burning people alive, torture, mass killings, and so on. Over the following decades, the conflict continued to fester, periodically escalating into outright civil war. Ultimately, over a million people would be killed or displaced.

The timeline of the Sri Lankan conflict establishes how communal violence originated from identity politics rather than the underlying income and occupational disparity between the groups. That disparity reached its apex at the beginning of the twentieth century. Yet, there was no communal violence at that point or during the next half-century. It was only after the introduction of affirmative action programs that ethnic violence erupted. The deadliest attacks on Tamils occurred an entire decade after those programs had enabled Sinhalese to surpass Tamils in both income and education. As Thomas Sowell observed: “It was not the disparities which led to intergroup violence but the politicizing of those disparities and the promotion of group identity politics.”10

Consequences of Identity Politics in Sri Lanka and Beyond

Sri Lanka’s experience highlights some underappreciated consequences of identity politics. Most notably, one would expect that affirmative action programs would have warmed the feelings of the Sinhalese toward the Tamils. After all, they were receiving preferences for jobs and education at the Tamils’ expense. Yet, precisely the opposite happened: as the affirmative action programs were implemented, Sinhalese animus toward the Tamils progressively worsened. This pattern has been repeated in nearly all the countries where affirmative action has been implemented: such programs have an invidious effect on the group that benefits, imbuing its members with a sense of insecurity and defensiveness over the benefits they receive. That group tends to justify the indefinite continuation of these benefits by claiming that the other group continues to enjoy “privilege”—or by demonizing its members as “systemically” advantaged. Thus, the beneficiaries of affirmative action are often the ones to initiate hostilities. In Rwanda, for example, it was Hutu affirmative action beneficiaries who perpetrated the violence, not Tutsis. The situation in Sri Lanka was analogous, with Sinhalese instigating all of the initial riots and pogroms against the Tamils.

One knock-on effect of identity politics in Sri Lanka was that it ultimately benefited some of the wealthiest and most privileged people in the country. The government enacted several affirmative action schemes, each increasingly contrived to benefit well-heeled Sinhalese. The last of these implemented a regional quota system that was devised so that aristocratic Sinhalese living in the Kandy region would compete for spots against poor, undereducated Tamil farm workers. As one Tamil who lost his spot in engineering wrote: “They effectively claimed that the son of a Sinhalese minister in an elite Colombo school was disadvantaged vis-à-vis a Tamil tea plucker’s son.” This follows the pattern of many other affirmative action programs around the world: the greatest beneficiaries are typically the most politically connected (and privileged) individuals within the group receiving affirmative action. They are often wealthier and more privileged than many of the individuals against whom affirmative action is directed. This has been well documented in India, which has extensive data on the subgroups that benefit from its affirmative action programs.

One unexpected consequence of identity politics in Sri Lanka was rampant corruption. When Sri Lanka became independent, its government was widely deemed one of the least corrupt in the developing world. However, as affirmative action programs were implemented and expanded, corruption increased in lockstep. The adoption of affirmative action set a precedent that pervaded the government: whoever held power could steer government resources to whomever they deemed “underserved.” A baleful side effect of ethnicity-based distortion of government policy is that it erodes more general standards of government integrity and transparency, legitimating corruption: if it is acceptable to direct policy for the benefit of an ethnic group, is it not also acceptable to do so for the benefit of a clan or an individual? It is a small step from one to the other, a step that many Sri Lankan leaders and bureaucrats took. Today, Sri Lanka’s government, which once rivaled European governments in transparency, remains highly corrupt. This pattern has been repeated in other countries. For example, after the Federation of Malaysia expelled Singapore, it adopted an extensive affirmative action program, whereas Singapore prohibited ethnic preferences. Malaysia subsequently experienced proliferating corruption, whereas Singapore is one of the least corrupt countries in the world today.

The economic divergence between Singapore and Sri Lanka: GDP per capita, 1960–2023 (Source: Our World in Data)

Perhaps the most profound consequence of identity politics in Sri Lanka was that it ultimately made everybody in the country worse off. After World War II, per capita income in Sri Lanka and Singapore was nearly identical. But after it abandoned its shared “link language” and adopted ethnically divisive policies, Sri Lanka was plagued by violent conflict and economic underperformance; today, one Singaporean earns more than seven Sri Lankans put together. All the group preferences devised to elevate Sinhalese brought down everyone in the country—Tamil, Sinhalese, and all the other groups alike. Lee Kuan Yew, Singapore’s “founding father,” attributed that failure to Sri Lanka’s divisive policies, saying that if Singapore had implemented similar policies, “we would have perished politically and economically.” There are echoes of this in other countries that have implemented identity politics. When I visited Rwanda, I asked Rwandans of various backgrounds whether they thought distinguishing people by race or ethnicity ever helped anyone in their country. There was complete unanimity on this point: after they got over pondering why anyone would ask such a naïve question, they made it very clear that distinguishing people by group made everyone, whether Hutu or Tutsi, distinctly worse off. In the Balkans, I got similar answers from Bosnians, Croatians, Serbians, and Kosovars.

The Perilous Path of Identity Politics

Decades of sociological research and millennia of history have demonstrated that the tribal instinct is both powerful and hardwired into human behavior. As political scientist Harold Isaacs writes:

If anything emerges plainly from our long look at the nature and functioning of basic group identity, it is the fact that the we-they syndrome is built in. It does not merely distinguish, it divides … the normal responses run from … indifference to depreciation, to contempt, to victimization, and, not at all seldom, to slaughter.11

The history of Byzantium and Sri Lanka demonstrates that this tribal instinct is extremely easy to provoke. All it takes is official recognition of group distinctions and some group preferences to balkanize people into bitterly antagonistic groups, and the consequences are potentially dire. Even if a society that is balkanized in this way avoids violent conflict, it is still likely to be plagued by all the concomitants of social fractionalization: higher corruption, lower social trust, and abysmal economic performance.

It is therefore troubling to see the U.S. government, institutions, and society adopt Sri Lankan-style policies that emphasize group distinctions. As the U.S. continues down the perilous path of identity politics, it is unlikely to devolve into another Bosnia or Sri Lanka overnight. But the example of Sri Lanka is a dire warning: a country that was once renowned for its communal harmony quickly descended into violence and economic failure—all because it sought to redress group disparities with identity politics.

Surveys and statistics are now flashing warning signs in the United States. A Gallup poll found that while 70 percent of Black Americans believed that race relations in the United States were either good or very good in 2001, only 33 percent did in 2021.12 Other statistics show that hate crimes have risen over the same period.13 In the last year, we have also seen the spectacle of angry anti-Israel protesters hammering on the doors of a college hall, terrorizing the Jewish students locked inside, and a Stanford professor telling Jewish students to stand in the corner of a classroom. As identity politics has increasingly directed public policy and institutions, relations between social groups have deteriorated rapidly. This, along with a lot of history, suggests it is time for a different approach.

Categories: Critical Thinking, Skeptic

The Future Leaks Out: William S. Burroughs’s Cut-Ups and Cucumbers

Tue, 01/06/2026 - 1:42pm

William S. Burroughs, an American postmodern author and visual artist, was one of the most controversial literary figures of the early 1960s and a key member of the pop-culture-shaping Beat Generation (he was friends with Allen Ginsberg and Jack Kerouac). He also became preoccupied with an unusual experiment: the cut-up, a technique in which a written text is cut apart and rearranged to create a new text. But this was no mere artistic preoccupation. Burroughs, author of the notorious Naked Lunch (the subject of a major literary censorship case when its publisher was sued for violating a Massachusetts obscenity law), claimed to have found a sort of window into the future, a time warp on paper and on tape.

Burroughs got the cut-up idea in 1959 from his close friend Brion Gysin. Burroughs remembered, “It was simply of course applying the montage method, which was really rather old hat in painting at that time, to writing. As Brion said, writing is fifty years behind painting.”1 Burroughs traced the cut-up back to an incident from the Dada movement of the 1920s, when Tristan Tzara announced his intention to create a poem on the spot by pulling words out of a hat.2
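
Mechanically, there is nothing occult about the procedure, and it is simple enough to express as a few lines of code. The following is a minimal sketch in Python; the function name, fragment length, and sample sentences are my own illustrative choices, not anything Burroughs or Gysin specified:

    import random

    def cut_up(text, fragment_len=4, seed=None):
        """Crude cut-up: slice a text into short runs of words,
        shuffle the runs, and splice them back together."""
        rng = random.Random(seed)  # a fixed seed makes the shuffle repeatable
        words = text.split()
        # Cut the word stream into fragments of a few words each,
        # loosely imitating scissors applied to a column of newsprint.
        fragments = [words[i:i + fragment_len]
                     for i in range(0, len(words), fragment_len)]
        rng.shuffle(fragments)
        return " ".join(word for frag in fragments for word in frag)

    # Two unrelated sentences (invented here purely for illustration), spliced:
    source = ("It is a bad precedent to sue in federal court. "
              "His own father built the oil fortune from nothing.")
    print(cut_up(source, fragment_len=3, seed=42))

Shuffle enough fragments from enough newsprint and an occasional eerie juxtaposition, something on the order of “sue your own father,” becomes a statistical likelihood rather than a prophecy, a point worth bearing in mind for what follows.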

For Burroughs, however, the cut-ups were something more than a creative writing technique. He traced this supposed revelation back to a Time magazine article that he attributed to the oil industrialist John Paul Getty. (Burroughs may have been referring to a February 1958 Time cover story on Getty; Getty did not write the article.) Upon cutting up the article, Burroughs created the following phrase: “It’s a bad thing to sue your own father.” When, as Burroughs told it, Getty was later sued by one of his sons, Burroughs came to believe that his cut-up had foretold the future: 

Perhaps events are pre-written and prerecorded and when you cut word lines the future leaks out. I have seen enough examples to convince me that the cut-ups are a basic key to the nature and function of words.3

Years later, in Howard Brookner’s 1983 documentary Burroughs, the fedora-clad, now-aged author explains to his poet friend Allen Ginsberg: 

Every particle of this universe contains the whole of the universe. You yourself have the whole of the universe. If I cut you up in a certain way I cut up the universe … So in my cut-ups I was attempting to tamper with the basic pre-recordings. But I think I have succeeded to some modest extent. 

At this, Ginsberg could only nod and utter a number of noncommittal “um hmms,” adding later: “Burroughs was, in cutting up, creating gaps in space and time, as Cezanne, or as meditation does.” Burroughs also cited a dubious summary of Wittgenstein’s Paradox: “This is Wittgenstein: If you have a prerecorded universe, in which everything is prerecorded, the only thing that is not prerecorded are the prerecordings themselves.”4 Wittgenstein’s actual paradox holds that “no course of action could be determined by a rule, because any course of action can be made out to accord with the rule.” 

Ludwig Wittgenstein was a philosopher and language theorist, but there is no reason to believe that he thought of the universe as a giant tape recording. Rather, Burroughs’s notion of human consciousness was clearly influenced by L. Ron Hubbard’s engram theory, itself reliant on Freudian psychoanalytic theory with its emphasis on trauma and repressed memory. Hubbard described engrams, a notion seemingly derived from the medical theory of the memory trace, as imprints of unpleasant experiences on the protoplasm of living beings. 

Burroughs went so far as to describe the cut-up method as “streamlining [the] Dianetics therapy system.” Proposing that his tape method could be used for therapy, he went on to suggest wiping “traumatic material” off a magnetic tape.5 He even hinted that Hubbard had borrowed the tape recording idea from him! His friend Ian Sommerville sold Hubbard two recorders, and Burroughs seemed to find it significant that Sommerville had become sick soon after, as if Hubbard were wielding insidious black magic.6 Burroughs began to see the Scientology system as a form of brainwashing, even as he grew increasingly convinced of Hubbard’s theories. 

Moving on to the world of cinema, Burroughs made two cut-up films, Towers Open Fire in 1963 and The Cut-Ups in 1966, with the help of the filmmaker Antony Balch. And, in 1965, Burroughs proposed to Balch “a new type of science fiction film,”7 one that would expose “the story of Scientology and their attempt to take over this planet.”8 The film would explain that “vulgar stupid second rate people” had taken over the planet by means of a “virus parasite.”9

Burroughs brazenly went ahead with his cut-up experiment, even though it might have serious ramifications for the universe: “Could you, by cutting up … cut and nullify the pre-recordings of your own future? Could the whole prerecorded future of the universe be prerecorded or altered? I don’t know. Let’s see.” Perhaps he was thinking of the scientists at Los Alamos, who exploded the first atomic bomb without being completely sure of the ramifications.10

Nor was Burroughs’s “sample operation” in influencing the universe an especially ethical exercise. In fall 1972 the author took issue with the Moka, “London’s first espresso bar,” leading to a vengeful campaign with overtones of Maya Deren, the experimental filmmaker who was also a voodoo priestess and flinger of malicious hexes. 

Burroughs’s grudge against the Moka arose over what he described as “unprovoked discourtesy and poisonous cheesecake.” He took a movie camera and began filming. Within two months, the bar was closed. Burroughs recommended using this tactic to “discommode or destroy” any business you did not particularly like. He did not consider that the bar might have shut down for some unrelated reason. Maybe word got out about the bad cheesecake.11 Some of the author’s magical thinking in this period may be a result of his reliance on drugs, but Burroughs had been a believer in curses since childhood.12

It is perhaps not a surprise that some took the author’s new method for a prank. At the 1962 Edinburgh Writers’ Conference, Burroughs spoke about the technique, which he was then calling the fold-in method. Members of the crowd thought they were being put on, prompting an Indian author to ask, “Are you being serious?” Burroughs insisted that he was.13

Burroughs presented a summary of his method to a gathering of students at Colorado’s Naropa Institute in 1976, and part of this lecture can be heard on the record Break Through in Grey Room. When Burroughs describes the revelatory Getty cut-up, laughter can be heard from the audience. Perhaps sensing some skepticism, Burroughs insists on his innocence in constructing the Getty rewording: “I mean, it’s purely extraneous information to me. [A woman can be heard laughing.] I had nothing to gain on either side. We had no explanation for this at the time, it’s just suggesting, perhaps, that when you cut into the present the future leaks out.”14

Burroughs may have been a bit disingenuous in telling the Naropa students he had no relationship to the wealthy Getty family. In the mid-1960s, in fact, through the art dealer Robert Fraser, Burroughs mingled with John Paul Getty Jr.15 Later, Burroughs stayed at a flat owned by the art dealer Bill Willis from March to July 1967, where he often saw the likes of Getty Jr.16

Admittedly, this would have been later than Burroughs’s initial Getty cut-up (apparently made in 1959, when Burroughs first became immersed in the cut-up process). But Burroughs may have been acquainted with members of the Getty circle before he actually met the family. Moreover, we are relying on a version of events that Burroughs publicly recounted in Daniel Odier’s The Job and again in 1976, and relying on Burroughs’s perception is a dubious proposition. In the 1976 Naropa lecture, Burroughs claims the lawsuit occurred a year after his cut-up,17 while in The Job he claims it was a three-year gap. Also, in The Job he seems to garble matters by conflating the magazine title—Time—with the name of Getty’s company—Tidewater.18 I have not found any record of Getty being sued by one of his sons during the period described. 

Burroughs’s literary acquaintances were not impressed to see the author seemingly risking his (still quite tenuous) literary reputation on an obsession like this. Samuel Beckett was appalled at the notion of using the words of other writers and said so to Burroughs directly: “That’s not writing. It’s plumbing.”19 The poet Gregory Corso told Burroughs the cut-up method would quickly become “redundant.”20 Novelist Paul Bowles felt the method would “alienate the reader.”21 Norman Mailer was the most prominent literary figure to champion Burroughs’s work to the American mainstream, and he must have been let down to see Burroughs set aside a major writing career for what Mailer probably considered a trivial sidetrack. To Mailer, the cut-up experiments were a mere “recording,” a distraction from the art of fiction.22 Jennie Skerl and Robin Lydenberg note that “positive assessments of Burroughs’s cut-ups were rare … most saw cut-ups as boring or repellent.”23

Nevertheless, Burroughs produced his “cut-up trilogy”: The Soft Machine (1961), The Ticket That Exploded (1962), and Nova Express (1964), although none sold as well as Naked Lunch. Biographer Ted Morgan calls them “inaccessible to the general reader.”24 The impenetrability of Burroughs’s cut-ups added to his reputation as a “difficult” author. Even Burroughs’s off-and-on friend Timothy Leary asked, rhetorically, “Do you actually know anyone who has finished an entire book by Bill Burroughs?”25

Burroughs was greatly impressed by the 1971 English-language publication of Konstantin Raudive’s Breakthrough: An Amazing Experiment in Electronic Communication with the Dead, which popularized what is known today as EVP (Electronic Voice Phenomenon), a widely discredited practice that purports to find hidden messages in recordings of background noise, in recordings played backwards, in the random static between radio stations, and in other low-information sources. 

Raudive believed these were the voices of the dead. Burroughs offered his own theory in keeping with his cut-up cosmology, namely that the entire universe was a vast playback device, something akin to a tape recording. Inspired by Raudive (and, no doubt, Hubbard), Burroughs boldly rejected the precepts of modern psychology. People suffering from schizophrenia were not experiencing hallucinations; they were “tuning in to an intergalactic network of voices.”26

If we look at Burroughs’s supposed predictive phrases, we see a lot of what can only be called “reaching” or grasping at straws. In 1964 Burroughs came up with the phrase, “And here is a horrid air conditioner.” Ten years later, he “moved into a loft with a broken air conditioner.”27 There is nothing mysterious about having an air conditioner break down. If anything, Burroughs was lucky if he went ten years without a broken air conditioner. 

Then there was this cryptic recorded query of Raudive’s: “Are you without jewels?” To Burroughs, this must refer to lasers, “which are made with jewels.” And another especially absurd quote from Raudive’s recordings: “You belong to the cucumbers?” Burroughs had read, in either Time or Newsweek, that “the pickle factory” was a slang term for the CIA, so the recording seemed to be an obvious CIA reference. For an icon of bohemian literature, Burroughs arguably relied an awful lot on the mainstream media for his prognostications.28 But how were researchers like Raudive and Burroughs tapping into the playback of the universe? Burroughs himself asked this question: 

Now how random is random? We know so much that we don’t consciously know that perhaps the cut-in was not random. The operator at some level knew just where he was cutting in. As you know exactly on some level exactly where you were and what you were doing ten years ago at this particular time.29

Burroughs was admitting that the cutter was influencing the cut-up, but he believed this was because the cutter was unconsciously tuned in to the future. A simpler explanation would be that Burroughs convinced himself that he was doing random work while he was in fact cutting together semiconscious rephrasings. For instance, he may have heard a rumor from one of his monied acquaintances that one of Getty’s sons was considering a legal action well before actually suing. 

If the experimenter (i.e., Burroughs, or Gysin, or Raudive) is unconsciously influencing the experiment, then what we have is a new version of the Ouija board with its seemingly self-guided planchette—a device whose movements and messages are created by users who come to believe they are receiving messages from a spirit or other mysterious entity when, in fact, they are moving the planchette themselves. This is known as the ideomotor response. 

It is worth noting that in this lecture Burroughs also leans on notions that are considered dubious today, such as repressed memories and the reliability of eyewitness accounts of events. For instance, he discusses “freaks,” seemingly referring to individuals with alleged eidetic or “photographic” memory. Perhaps he was thinking of his late friend Jack Kerouac, who was known by some in Lowell, Massachusetts, as “Memory Babe” due to his purportedly freakish recall powers. 

Burroughs’s countercultural reputation continued to grow from the 1970s until his death in 1997. But his cut-ups don’t seem to have received much attention from the parapsychological community, perhaps because he was so preoccupied with now-dated media and technology: newspapers, reel-to-reel recordings, and 8mm film. His metaphysical notion of the universe as a “playback” machine seems dated next to the trendier notion of the universe as a computer matrix. 

William Burroughs was one of the most fascinating (and darkly funny) literary figures of the twentieth century, but that doesn’t make him a scientist. There is no evidence to support the notion that anyone can foretell the future by cutting up newspapers, books, or film footage.

Categories: Critical Thinking, Skeptic
