
Using AI To Create Virtual Environments

neurologicablog Feed - Mon, 04/15/2024 - 4:57am

Generative AI applications seem to be on the steep part of the development curve – not only is the technology getting better, but people are finding more and more uses for it. It’s a powerful new tool with broad applicability, and so there are countless startups and researchers exploring its potential. The last time a new technology saw this kind of explosion was, I think, the smartphone and the rapid introduction of millions of apps.

Generative AI applications have been created to generate text, pictures, video, songs, and imitate specific voices. I have been using most of these apps extensively, and they are continually improving. Now we can add another application to the list – generating virtual environments. This is not a public use app, but was developed by engineers for a specific purpose – to train robots.

The application is called Holodeck, after the Star Trek holodeck. You can use natural language to direct the application to build a specific type of virtual 3D space, such as “build me a three bedroom single floor apartment” or “build me a music studio”. The application then uses generative AI to build the space, with walls, floor, and ceiling, and then pulls from a database of objects to fill the space with appropriate things. It also has a set of rules for where things go, so it doesn’t put a couch on the ceiling.
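
To make that architecture concrete, here is a minimal sketch of how such a prompt-to-scene pipeline could be organized. This is not the actual Holodeck code; every name below is a placeholder, and the calls to a generative model are stand-in functions a caller would supply. It shows only the basic loop: a model proposes a layout and objects, and simple placement rules filter out anything implausible.

```python
from dataclasses import dataclass, field

@dataclass
class Placement:
    asset: str       # e.g., "couch_01", drawn from an existing asset database
    room: str        # e.g., "living_room"
    surface: str     # "floor", "wall", or "table"
    position: tuple  # (x, y) coordinates within the room

@dataclass
class Scene:
    rooms: list = field(default_factory=list)
    placements: list = field(default_factory=list)

# Assumed placement rules: certain object types only ever sit on the floor.
FLOOR_ONLY = {"couch", "bed", "piano"}

def violates_rules(p: Placement) -> bool:
    """Reject placements that break common-sense constraints (no couch on the ceiling)."""
    base = p.asset.split("_")[0]
    return base in FLOOR_ONLY and p.surface != "floor"

def build_scene(prompt: str, propose_layout, propose_objects) -> Scene:
    """propose_layout and propose_objects stand in for calls to a generative model."""
    scene = Scene(rooms=propose_layout(prompt))            # e.g., ["studio", "booth"]
    for placement in propose_objects(prompt, scene.rooms):
        if not violates_rules(placement):                  # filter rather than trust the model
            scene.placements.append(placement)
    return scene
```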

The purpose of the app is to be able to generate lots of realistic and complex environments in which to train robot navigation AI. Such robotic AIs need to be trained on virtual spaces so they can learn how to navigate out there in the real world. Like any AI training, the more data the better. This means the trainers need millions of virtual environments, and they just don’t exist. In an initial test, Holodeck was compared to an earlier application called ProcTHOR and performed significantly better. For example, when asked to find a piano in a music studio, a ProcTHOR-trained robot succeeded 6% of the time while a Holodeck-trained robot succeeded 30% of the time.

That’s great, but let’s get to the fun stuff – how can we use this technology for entertainment? The ability to generate a 3D virtual space is a nice addition to the list above, all of which contribute to a specific application that I have in mind – generative video games. Of course there are companies already working on this. It’s a no-brainer. But let’s talk about what this can mean.

In the short run, generative AI can be used to improve the currently clunky AI behind most video games. For avid gamers, it is a cliche that video game AI is not very good, although some are better than others. Responses from NPCs are canned and often nonsensical, missing a lot of context about the evolution of the plot in the game. The reactions of NPCs and creatures in the world are also ultimately simplistic and predictable. This makes it possible for gamers to quickly learn how to hack the limitations of the game’s AI in order to exploit it.

Now let’s imagine our favorite video games powered by generative AI. We could have a more natural conversation with a major NPC in the game. The world can remember the previous actions of the player and adapt accordingly. AI combat can be more adaptive and therefore unpredictable and challenging.

But there is another layer here – generative AI can be used to generate the video game itself, or at least parts of it. This was referenced in the Black Mirror episode “USS Callister,” in which the world of the game was an infinite generated space. In many ways this is an easier task than real-world applications, at least potentially. Think of a major title, like Fallout. The number of objects in the game, including every item, weapon, monster, and character, is finite. It’s much less than a real-world environment. The same is true for the elements of the environment itself. A generative AI could therefore use the database of objects that already exists for the game and generate new locations. The game could become literally infinite.

Of course, generative AI could be used to create the game in the first place, decreasing the development time, which is years for major titles. Such games famously use a limited set of recorded voices for the characters, which means you hear the same canned phrases over and over again. Now you don’t have to get actors into studios to record scripts (although you still might want to do this for major characters); you can just generate voices as needed.

This means that video game production can focus on creating the objects, the artistic feel, the backbone plot, the rules and physics for the world, and then let generative AI create infinite iterations of it. This can be done as part of game development. Or it can be done on a server that is hosting one instance of the game (which is how massive multiplayer games work), or eventually it can be done for one player’s individual instance of the game, just like using ChatGPT on your personal computer.

This could further mean that each player’s experience of a game can be unique, and will depend greatly on the actions of the player. In fact, players may be able to generate their own gaming environments. What I mean is, for example (sticking with Fallout), you could sign into a Bethesda Fallout website, choose the game you want, enter in the variables you want, and generate some additional content to add to your game. There could be lots of variables – how developed the area is, how densely populated, how dangerous are the people, how dangerous are the monsters, how challenging is the environment itself, what is the resource availability, etc. This already exists for the game Minecraft, which generates new unique environments as you go and allows players to tweak lots of variables, but the game is extremely graphically limited.
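
As a purely hypothetical illustration of what those player-facing variables might look like (none of these parameter names come from any real Bethesda or Fallout tooling), the settings could be as simple as a small configuration passed to the world generator:

```python
# Hypothetical settings for a "generate your own region" feature.
# Every name here is invented for illustration; no real game API is described.
world_gen_settings = {
    "development_level": 0.3,   # 0 = untouched wilderness, 1 = dense urban ruins
    "population_density": 0.5,  # how many settlements and NPCs to place
    "human_threat": 0.7,        # hostility of raiders and rival factions
    "creature_threat": 0.4,     # density and difficulty of monsters
    "environment_hazard": 0.6,  # radiation zones, weather, terrain difficulty
    "resource_scarcity": 0.8,   # how hard it is to find ammo, food, and parts
    "seed": 1337,               # a fixed seed makes the generated region reproducible
}
```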

So far I have just been thinking of using AI to recreate the current style of video games, but faster, better, and with unlimited content. Game developers, however, may think of ways to leverage generative AI to create new genres of video games – doing new things that are not possible without generative AI.

It seems inevitable that this is where we are headed. I am just curious how long it will take. I think the first crop of generative video games will come in the form of new content for existing games. Then we will see entirely new games developed with and for generative AI. This may also give a boost to VR gaming, with the ability to generate 3D virtual spaces.

And of course gaming is only one of many entertainment possibilities for generative AI. How long will it be before we have fully generated video, with music, voices, and a storyline? All the elements are there, now it’s just a matter of putting them all together with sufficient quality.

I am focusing on the entertainment applications, because it’s fun, but there are many practical applications as well, such as the original purpose of Holodeck, to train navigation AI for robots. But often technology is driven by entertainment applications, because that is where the money is. More serious applications then benefit.

Categories: Skeptic

Robert Zubrin — How What We Can Create on the Red Planet Informs Us on How Best to Live on the Blue Planet

Skeptic.com feed - Sat, 04/13/2024 - 10:40am
https://traffic.libsyn.com/secure/sciencesalon/mss422_Robert_Zubrin_2024_04_13.mp3 Download MP3

When Robert Zubrin published his classic book The Case for Mars a quarter century ago, setting foot on the Red Planet seemed a fantasy. Today, manned exploration is certain, and as Zubrin affirms in The New World on Mars, so too is colonization. From the astronautical engineer venerated by NASA and today’s space entrepreneurs, here is what we will achieve on Mars and how.

SpaceX, Blue Origin, and Virgin Galactic are building fleets of space vehicles to make interplanetary travel as affordable as Old-World passages to America. We will settle on Mars, and with our knowledge of the planet, analyzed in depth by Dr. Zubrin, we will utilize the resources and tackle the challenges that await us. What will we build? Populous Martian city-states producing air, water, food, power, and more. Zubrin’s Martian economy will pay for necessary imports and generate income from varied enterprises, such as real estate sales—homes that are airtight and protect against cosmic space radiation, with fish-farm aquariums positioned overhead, letting in sunlight and blocking cosmic rays while providing fascinating views. Zubrin even predicts the Red Planet’s customs, social relations, and government—of the people, by the people, for the people, with inalienable individual rights—that will overcome traditional forms of oppression to draw Earth immigrants. After all, Mars needs talent.

With all of this in place, Zubrin’s Red Planet will become a pressure cooker for invention in bioengineering, synthetic biology, robotics, medicine, nuclear energy, and more, benefiting humans on Earth, Mars, and beyond. We can create this magnificent future, making life better, less fatalistic. The New World on Mars proves that there is no point killing each other over provinces and limited resources when, together, we can create planets.

Robert Zubrin is former president of the aerospace R&D company Pioneer Astronautics, which performs advanced space research for NASA, the US Air Force, the US Department of Energy, and private companies. He is the founder and president of the Mars Society, an international organization dedicated to furthering the exploration and settlement of Mars, leading the Society’s successful effort to build the first simulated human Mars exploration base in the Canadian Arctic and growing the organization to include 7,000 members in 40 countries. A nuclear and astronautical engineer, Zubrin began his career with Martin Marietta (later Lockheed Martin) as a Senior Engineer involved in the design of advanced interplanetary missions. His “Mars Direct” plan for near-term human exploration of Mars was commended by NASA Administrator Dan Goldin and covered in The Economist, Fortune, Air and Space Smithsonian, Newsweek (cover story), Time, The New York Times, The Boston Globe, as well as on BBC, PBS TV, CNN, the Discovery Channel, and National Public Radio. Zubrin is also the author of twelve books, including The Case for Mars: The Plan to Settle the Red Planet and Why We Must, with more than 100,000 copies in print in America alone and now in its 25th Anniversary Edition. He lives with his wife, Hope, a science teacher, in Golden, Colorado. His latest book is The New World on Mars: What We Can Create on the Red Planet. The next big Mars Society conference is in Seattle, August 8–11.

Read Zubrin’s discussion of his paper on panspermia for seeding life on Earth.

Shermer and Zubrin discuss:

  • Why not start with the moon?
  • What’s it like on Mars? Like the top of Mt. Everest?
  • Was Mars ever like Earth? Water, life, etc.?
  • How much will it cost to go to Mars?
  • How to get people to Mars: food, water, radiation, boredom?
  • Where on Mars should people settle?
  • What are “natural resources”?
  • Resources on Mars already there vs. need to be produced
  • Analogies with Europeans colonizing North America
  • Public vs. private enterprise for space exploration
  • Economics on Mars
  • Politics on Mars
  • Lessons from the Red Planet for the Blue Planet
  • Ingersoll’s insight: free speech & thought > science & technology > machines as our slaves > moon landing. “This is something that free people can do.”
  • Liberty in space: won’t the most powerful people on Mars threaten to shut off your air if you don’t obey?
  • Independent City-States on Mars
  • Direct vs. representative democracy
  • America as a model for what we can create on Mars
  • Are new frontiers needed for civilization to continue?
  • The worst idea ever: that the total amount of potential resources is fixed.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

The Skeptics Guide #979 - Apr 13 2024

Skeptics Guide to the Universe Feed - Fri, 04/12/2024 - 5:00am
Live from Dallas with special guest Dustin Bates of Starset; Eclipse Science; News Items: AI Designed Drugs, AI Music, Music Getting Simpler, Aphantasia Spectrum, Nova and Comet Compete with Eclipse; Science or Fiction
Categories: Skeptic

Pain & Profit: Who’s Responsible for the Opioid Crisis?

Skeptic.com feed - Fri, 04/12/2024 - 12:00am

In 2021 the CDC issued a grim statistic: more than one million Americans had died from overdoses since 1999 when it started tracking an opioid epidemic that began with prescription painkillers and is now dominated by fentanyl.1 Since that sobering milestone, another 300,000 have died.2 That is roughly the same number of Americans who died in all wars the United States has entered (1.3 million) combined, including the First and Second World Wars and the Civil War.3 The opioid epidemic is, aside possibly from obesity, the biggest health crisis of our time.

Most know about the frenzy of finger pointing, lawsuits, and bankruptcy filings among pharmaceutical companies, drug distributors, national pharmacy chains, medical associations, and the Food and Drug Administration. There is plenty of blame to go around. What is not often discussed in the extensive media coverage about the epidemic is how we got here.

The story of how the opioid crisis got underway and who is responsible is a tale of greed, poor government regulation, and many missed opportunities. It began with good intentions based on bad data and later became a movement in which profits took precedence over morals. It is a tragedy that was largely preventable and, as such, one of the most infuriating chapters in modern U.S. history.

History of Pain

Chronic pain affects 50 million Americans, more than those with high blood pressure, diabetes, or depression.4 Developing a medication that alleviates pain without too many side effects has been one of the drug industry’s holy grails. The market is enormous, and most people are long-term patients. Opiates were isolated as effective pain killers in the 1800s. At the turn of the 20th century—the drug industry’s Wild West days—they were dispensed over the counter. Over time, opiates earned a notorious and deserved reputation for addiction. German giant Bayer patented and marketed Heroin as, incredible as it now sounds, a cure for morphine addiction.

Congress did not pass a law requiring prescriptions for narcotic-based medications until 1938.5 It took another 33 years before the federal government created the Controlled Substances Act in 1971, listing oxycodone and fentanyl (along with cocaine and methamphetamine) as Schedule II drugs. That meant they had a risk of “severe psychological or physical dependence” but had medical and therapeutic uses. Doctors were supposed to balance the risks of opioids against the needs of patients who required them for short-term use after surgery or an accident, or longer treatment for disabling chronic pain.

Throughout the 1970s and early 1980s, drug companies spent a lot of money searching for a nonaddictive painkiller. Every effort ended in failure. In a Science article, a pharmacologist and a chemist at the National Institutes of Health concluded that it was unlikely such a medication was possible.

This was the same time, however, when a few physicians were about to upend traditional medical views about pain and how to treat it. Until the early 1980s, medical schools taught that pain was only a symptom of some underlying physical condition. Physicians did not treat it as a stand-alone ailment but instead searched for what caused it. The specialty of “pain management” did not exist. An anesthesiologist, John Bonica, whom Time dubbed “pain relief’s founding father,” questioned the conventional wisdom. Bonica suffered chronic shoulder and hip pain from his pre-medical career, first as a professional wrestler, then a carnival strongman, and finally the light heavyweight world wrestling champion.6 Bonica contended that underdiagnosing pain meant millions of patients suffered needlessly. He cofounded the International Association for the Study of Pain (its journal, Pain, is the field’s leading publication) in 1974, and three years later the American Pain Society (APS).7

The incipient movement to prioritize pain was not long underway when a five-sentence “letter to the editor” in the January 10, 1980, New England Journal of Medicine (NEJM) kicked off a parallel revolution in reconsidering established medical views about the risks of opioids. A doctor, Hershel Jick, and a grad student, Jane Porter, had examined 39,946 records of Boston University Hospital patients to determine adverse reactions and potential abuse for widely used medications. Almost a third (11,882) had “received at least one narcotic preparation” but they found only “four cases of reasonably well-documented addiction in patients who had no history of addiction.” Their conclusion was as unorthodox as it was decisive: “Despite widespread use of narcotic drugs in hospitals, the development of addiction is rare.”8

The letter cited two previous studies, both of which involved only hospitalized patients given small doses of opioids in a controlled setting. Very few had had them dispensed for more than five days. None were given painkillers after they were discharged from the hospital.

No one could have predicted the impact that letter had on the reassessment of using opioids to treat pain. During the next two decades it was cited over 1,600 times in textbooks, medical journals, and other publications. More than 80 percent of those who mentioned it left out that it only studied hospitalized patients who took opioids for a few days. Instead, that 99-word letter was widely cited to support far broader conclusions about the safety profile of opioids.9 (In 2017 the NEJM published a rare “Editor’s Note,” adding it to its webpage with the original Jick-Porter letter: “For reasons of public health, readers should be aware that this letter has been ‘heavily and uncritically’ cited as evidence that addiction is rare with opioid therapy.”)

The World Health Organization (WHO) cited the Jick-Porter letter in 1986 as a cornerstone for challenging decades of medical dogma that “the risks of widely prescribing opioids far outweighed any benefits.” Six weeks after the WHO publication, Pain published a startling report, the “Chronic Use of Opioid Analgesics in Non-Malignant Pain.” The lead author was Russell Portenoy, a 31-year-old Memorial Sloan Kettering physician specializing in anesthesiology, neurology, pain control, and pharmacology. His coauthor was Kathleen Foley, a top pain management specialist.

Portenoy and Foley had studied 38 patients who had been administered narcotic analgesics—a third took oxycodone—for up to seven years. Two thirds reported significant or total pain relief. There was “no toxicity,” the two doctors reported, and only two patients had a problem with addiction, both of whom had “a history of prior drug abuse.” They concluded that “opioid maintenance therapy can be a safe, salutary and more humane alternative to the options of surgery or no treatment in those patients with intractable non-malignant pain and no history of drug abuse.”10

Pain as the Fifth Vital Sign

That paper kicked off a contentious and at times rancorous debate over whether opioids had been unfairly branded for decades and underutilized in pain management. The charismatic Portenoy emerged as the unofficial spokesman for the embryonic movement to reassess opioids. He saw himself as a pioneer in reexamining outdated views about opioids. If he could convince doctors not to fear dispensing opioids, it could help millions of patients suffering from chronic pain.

A diverse, informal network of physicians contributed to the emerging reevaluation. Doctors specializing in pain management formed The American Academy of Pain Medicine and the American Society of Addiction Medicine (its slogan is “Addiction is a chronic brain disease”). They in turn encouraged patients suffering from chronic pain to form advocacy groups and petition the FDA to loosen opioid dispensing restrictions.

In 1990, American Pain Society president Dr. Mitchell Max wrote a widely read editorial lamenting how little progress had been made in treating pain. “Unlike ‘vital signs,’ pain isn’t displayed in a prominent place on the chart or at the bedside or nursing station,” he wrote.11 Max’s fix was to have physicians ask patients on every visit whether they were in pain. Doctors had for decades kept watch of four vital signs when examining patients: blood pressure, pulse, temperature, and breathing. The American Pain Society suggested “Pain as the 5th Vital Sign.”

There was no reliable diagnostic test, as there was for blood pressure or cholesterol. Pain was a subjective assessment based on the doctor’s observations and the patient’s descriptions of symptoms. What one patient described as moderate pain that restricted mobility might be excruciating and disabling for someone else. The first rudimentary measurements were developed around this time. One of them, the McGill Pain Index, had 78 words related to pain divided into 20 sections. Patients picked the words that best described their pain. Another, called the Memorial Pain Assessment Card, had eight simplified descriptions and patients selected the one that best matched their pain’s intensity. Yet another was developed by a pediatric nurse and child life specialist in Oklahoma—a chart for children with 10 hand-drawn faces ranging from happy and laughing to angry and crying. Variations of that scale soon became a 1 to 10 rating for adults, 1 being “very mild, barely noticeable,” and 10 signifying “unspeakable pain.”

Those tools meant that differing pain tolerances among patients were no longer important. What mattered was tracking whether a patient’s pain was getting better or worse. The Joint Commission, an independent, not-for-profit organization responsible for accrediting 96 percent of all U.S. hospitals and clinics, became the first major group to endorse pain as the fifth vital sign. After the Veterans Administration embraced it, it was adopted quickly in the private sector.12

Over the next few years, a series of other small trials published in medical journals reinforced Portenoy’s 1986 study. They uniformly concluded that opioids did not deserve their terrible reputation and that they were extremely “effective in treating long-term chronic pain.” Buried in scientific footnotes was that “long-term” usually meant 12 to 16 weeks and “effective in treating” meant “superior to placebo.”13

An anesthesiologist and dentist, J. David Haddox, pushed the limits of the reevaluation movement. Haddox, who later became the American Academy of Pain Medicine president and went to work for Purdue Pharma, reported in Pain about the failure to treat the pain of a 17-year-old leukemia patient. That failure, wrote Haddox, had “led to changes similar to those seen with idiopathic opioid psychologic dependence (addiction).” “Pseudoaddiction” was a syndrome, he theorized, that doctors unintentionally caused when they failed to provide their patients with sufficient opioid painkillers. The “behavioral changes” that many doctors concluded constituted addiction, argued Haddox, were only evidence of how undertreated the patient was in terms of narcotic painkillers.14

America’s three major pain associations embraced pseudoaddiction.15 (It took a quarter century before a comprehensive study revealed that in the 224 scientific articles that cited pseudoaddiction, only 18 provided even the sketchiest anecdotal data to support the theory. The study concluded that pseudoaddiction was itself “fake addiction.”)

The same month that Haddox introduced pseudoaddiction, a dozen prominent doctors published “The Physician’s Responsibility Toward Hopelessly Ill Patients” in the New England Journal of Medicine. Although the study was limited to terminally ill patients, pain management advocates enthusiastically applied its conclusion to all patients: “The proper dose of pain medication is the dose that is sufficient to relieve pain and suffering.… To allow a patient to experience unbearable pain or suffering is unethical medical practice.”16

New Jersey became the first state to adopt an “intractable pain treatment” law that recognized patients had a right to treat their pain. The statute shielded doctors from criminal or civil liability if the narcotics dispensed caused an addiction; 18 other states soon followed.

Enter Big Pharma

Portenoy and colleagues contended that opioids should be the first treatment option for chronic nonmalignant pain if the patient had no history of addiction. Instead of setting a maximum dose, the emerging standard of care was that opioids should be dispensed until the patient’s pain was relieved. The twin themes—that not treating pain was negligent and that opioids were safe for almost everyone—reinforced one another. The Sackler family, owners of a small drug company, Purdue Pharma, would have been hard pressed to plan a better lead-in to their release a decade later of OxyContin, their blockbuster opioid-based painkiller.

When the pain reevaluation movement had begun in the mid-1980s, OxyContin was not even on the drawing board. It was in early development when pain was on its way to becoming the fifth vital sign. In the following decade, Purdue did what every other drug company with an opioid-based product did: spent millions underwriting and subsidizing the doctors, advocacy organizations, and pain societies who were at the vanguard of the reevaluation movement. Many pioneering doctors reaped big fees as company lecturers. Purdue and other drug firms subsidized courses at medical schools, professional conferences and conventions, and continuing education classes. And, similar to what happened with the launch of other major drugs, some government officials (even a few key FDA officials) eventually went to work for Purdue and other firms selling opioids. Purdue and its competitors spent lots of money on the pain advocates precisely because they were promoting ideas about pain treatment that the drug manufacturers enthusiastically embraced.

The opioids reevaluation movement might not have had such an impact had it not been for the development of a time-release opioid painkiller, OxyContin. Purdue’s aggressive marketing of OxyContin came at a time when doctors were more willing to believe that opioids could be safely prescribed.

Three psychiatrist brothers, Arthur, Mortimer, and Raymond Sackler, had bought Purdue in 1952. It was then a tiny New York drug company whose product line consisted mostly of natural laxatives, earwax removers, and tonics that claimed to boost brain function and metabolism. A decade after purchasing Purdue, the Sacklers added a distressed British manufacturer, Napp Pharmaceuticals. The Sacklers had not thought about developing a painkiller until Napp took advantage of an opportunity in the United Kingdom.

Cicely Saunders, a British nurse-turned-physician, had opened the world’s first hospice in London in 1967. Her biggest obstacle in alleviating patients’ terminal discomfort was the need to dose painkillers intravenously every few hours. The patients got little sleep, and it was not possible to send them home to spend their last days surrounded by friends and family.

Morphine, Saunders found, was not as effective in alleviating pain as diamorphine (a brand name for heroin). Heroin’s biggest drawback, she concluded, was that “it may be rather short in action.”17 She experimented by adding sedatives and tranquilizers to extend the time pain was relieved, but she was stymied at every turn by intolerable side effects.

Still, Saunders had a permissive view of opioids and their addictive power. She did not think heroin had a “greater tendency to cause addiction than any other similar drug.… We have several patients in the wards at the moment who have come off completely without any withdrawal symptoms.”18

What she wanted was a revolutionary narcotic painkiller. In a single dose, it had to provide long relief from intense pain without causing sleepiness, motor coordination problems, and memory lapses. Several independent British pharmaceutical companies accepted her challenge. Smith & Nephew developed Narphen, a synthetic opioid it claimed was 10 times more powerful than morphine, quicker acting, and had a milder side effects profile. Although Saunders acknowledged that Narphen was a better end-of-life drug, it was not her holy grail for terminal cancer pain.

Smith & Nephew’s stumble handed the Sacklers an opportunity. Napp launched a significant research effort to find the new painkiller. When the breakthrough came in 1980, it not only promised to revolutionize pain care for the terminally ill, it unwittingly provided the technology that would later fuel America’s opioid crisis. Napp introduced a morphine painkiller with a revolutionary, invisible-to-the-human-eye, sustained-release coating. That chemical layer consisted of a dual-action polymer mix that turned to a gel when exposed to stomach acid. Napp claimed the drug, MST Continus (continuous), released pure morphine at a steady rate over 12 hours. They could adjust the release rate by fine-tuning the density of the coating’s water-based polymer. It was the breakthrough painkiller for which Cicely Saunders had been searching since the late 1960s.

MST Continus carved out a market in the UK, but it was limited to end-of-life cancer and hospice patients. It took the Sacklers seven years (until 1987) to get FDA approval for that drug in the U.S. (which they renamed MS-Contin). The FDA had slowed the approval process since its active ingredient, morphine, was a Schedule II controlled substance. By the time it went on sale in America, Portenoy had published the first of his studies concluding that opioids were not as addictive as previously thought and that they should be prescribed liberally to treat pain.

Purdue, now run by two of the surviving Sackler brothers, Mortimer and Raymond, and some of their children, took note of the burgeoning pain management movement. Raymond’s son, Richard Sackler, also a doctor, led a company effort to find an improved painkiller, or at least one with much broader commercial appeal than MS-Contin. Richard Sackler thought that any new painkiller should not use morphine since it had a notorious reputation as an end-of-life medication. Purdue’s science team picked oxycodone, a chemical cousin of heroin. While there were some oxycodone-based painkillers on the market—Percodan (oxycodone and aspirin) and Percocet (oxycodone and acetaminophen)—they were immediate-release pills. If Purdue could master an extended-release oxycodone pill, it would be the first of its kind.

Their oxycodone-based drug was still an unnamed product. Its first clinical trial was only completed in 1989. It took until 1992 for Purdue to apply for a patent. In 1995, the company finally got FDA approval. And it also won an extraordinary concession from the government regulator. Although Purdue had not conducted clinical trials to determine whether OxyContin was less likely to be addictive or abused than other opioid painkillers, the FDA had approved wording requested by the company: “Delayed absorption as provided by OxyContin tablets, is believed to reduce the abuse liability of a drug.”19 (Curtis Wright, the FDA officer who oversaw the OxyContin label approval, soon left the agency to work at Purdue as its medical officer for risk assessment).

Marketing Pain

Purdue’s sales team highlighted that extraordinary sentence to convince physicians that it was a safer narcotic than its rivals. Purdue prepared an unprecedented marketing launch for OxyContin. The late Arthur Sackler was a marketing genius, widely acknowledged as having introduced aggressive Madison Avenue advertising tactics to selling pharmaceuticals. Arthur had handled the promotion for Hoffman LaRoche’s 1960s blockbuster drugs, Librium and Valium, and had made them the biggest-selling drugs in the world for a record 17 years.

Purdue laid out a sales strategy for OxyContin straight from Arthur’s playbook. Its twin sales pitches were that OxyContin relieved pain longer than any other opioid painkiller, and because it was a time-release product, it was less likely to be addictive.

Purdue sales reps raised “concerns about addiction” before physicians did. It was, they said, understandable that no matter how wonderful a drug, “a small minority” of patients “may not be reliable or trustworthy” for narcotic painkillers. If the doctors were still skeptical at that stage, the reps showed them the FDA-approved label that stated if OxyContin was used as prescribed for treating moderate to serious pain, addiction was “very rare.” What constitutes “very rare”? Less than one percent, according to the sales reps. To tilt the odds in favor of its “low risk of addiction” sales strategy, Purdue underwrote several studies that reported addiction rates from long-term opioid treatment between only 0.2 percent and 3.27 percent. However, those company-sponsored reports were never confirmed by independent studies.

Purdue also got help in promoting the “low risk of addiction” from the American Pain Society and the American Academy of Pain Medicine. Purdue and other opioid drug manufacturers were generous funders of both organizations. The groups issued a consensus statement emphasizing that opioids were effective for treating nonmalignant chronic pain and reiterating that it was “established” that there was a “less than 1 percent” probability of addiction.

Purdue sales reps hammered home that OxyContin released oxycodone into the bloodstream at a steady rate over 12 hours. That, Purdue claimed, made it impossible for addicts to get the rush they chased. Without a high, patients would not want more of the drug as it wore off. The company knew that was not true—its own clinical trials demonstrated that for some patients up to 40 percent of oxycodone was released into the bloodstream in the first hour or two. That was fast enough to cause a high and a resulting crash that required another pill in order to feel better.

Purdue revised its compensation packages for its sales team, especially top performers, in time for the OxyContin launch. Large bonuses could double a sales rep’s salary. In an internal memo to the “Entire Field Force,” Purdue used a Wizard of Oz analogy to promise the reps who sold the most that “A pot of gold awaits you ‘Over the Rainbow.’” Two months after Oxy went on sale, another memo, titled “$$$$$$$$$$$$$ It’s Bonus Time in the Neighborhood!”, urged the sales team to push doctors to prescribe the higher-dose pills.

There was far greater profit for Purdue, and more money for the sales team, by pushing higher doses. There were three strengths when it went on sale: 10, 20, and 40 milligrams. An 80 mg tablet was released a month later (15, 30, 60, and 160 mg pills would arrive in a few years). Purdue’s production costs were virtually the same for each since oxycodone, the active ingredient, was inexpensive to manufacture. However, Purdue charged more for each additional strength. On average, a bottle of 20 mg pills cost twice as much as the 10 mg variety, and 80 mg pills were about seven times more expensive. If a patient took 20 mg pills twice a week, Purdue made less than $40 in profit. The same patient prescribed 80 mg pills twice a week returned $200 to Purdue, a 450 percent increase (that profit exceeded $600 a bottle in another five years).
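
As a quick back-of-the-envelope check on that percentage (the 20 mg profit is given only as “less than $40”; assume roughly $36):

\[
\frac{\$200 - \$36}{\$36} \approx 4.5, \quad \text{i.e., about a 450 percent increase.}
\]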

Dispensing physicians had no idea what Oxy cost, nor did most care. Since they did not pay for the drugs, they let patients and their insurance companies worry about that.

Purdue created “Individualize the Dose,” a campaign designed to push the strongest doses. Sales reps told doctors that the company’s studies showed it was best to start patients on a medium to higher dose. The stronger doses, Purdue assured physicians, could be dispensed even to people who had never used opioids, all without adverse effects. The field reps contended that the higher-dose pills were no more likely to cause addiction. That was not true. Internal documents later revealed that Purdue’s sales team knew that stronger doses carried a significantly higher likelihood of dependence, addiction, and even potentially lethal respiratory suppression. While the company’s press releases claimed “dose was not a risk factor for opioid overdose,” internal communications are replete with references to the dangers of “dose-related overdose.”

OxyContin was instantly the most successful drug Purdue ever released. By 2001, only five years after it had gone on the market, its cumulative sales had passed a billion dollars, a first for Purdue. Although a lucrative hit for the Sacklers, OxyContin was less than ten percent of the opioid market. Johnson & Johnson, Janssen, Cephalon, and Endo Pharmaceuticals had their own narcotic painkillers. Their sales teams pitched them as aggressively as Purdue pushed Oxy, and all the companies subsidized the same nonprofits and patient advocacy groups. Janssen managed to get FDA approval in 1990 for the first fentanyl patch to treat severe pain. Fentanyl was then the most potent synthetic opioid, one hundred times stronger than morphine (oxycodone, by comparison, is only about 1.5 times stronger than morphine). Two years after the FDA had given a green light to OxyContin, it approved Cephalon’s Actiq, a fentanyl “lollipop,” for cancer patients whose intense pain did not respond to other narcotics. Fentanyl patches and Actiq pops were diverted illicitly for big profits and sometimes with lethal side effects. There were widespread industry rumors that Cephalon’s sales team pushed its lollipops off-label as “ER on a stick” for chronic pain.

Still, by 2001, it was OxyContin that was in the crosshairs of some angry patients, the media, and the DEA. Small towns throughout Appalachia seemed overrun by a deluge of OxyContin, locally called “Hillbilly Heroin.” The DEA, meanwhile, was investigating diversion of the drug from the manufacturing plant Purdue used in New Jersey. It was also compiling evidence that Oxy contributed to overdose deaths by examining autopsy reports from across the country. The DEA wanted the FDA to put strict restrictions on the number of refills allowed for the painkiller.

In February 2001, OxyContin appeared for the first time in the New York Times, a front-page story—“Cancer Painkillers Pose New Abuse Threat”—about how it had become an abused drug in at least seven states.20 The Times raised the issue of whether Purdue’s hard-hitting marketing was partially responsible for the growing problems.

Purdue went all out to battle the bad press and its regulatory headaches. It hired big name legal talent. Rudy Giuliani, fresh off being America’s Mayor after his handling of the city in the aftermath of the 9/11 attacks, had just opened a private office and he began lobbying government officials on Purdue’s behalf. The company dispatched its medical officers and top executives to meet with the FDA and DEA. It assured both that it was working to control any abuse and diversion and it contested the findings about Oxy’s role in overdoses by pointing to the cocktail of illicit drugs in most of the autopsy reports. At that stage, the DEA could not find a death in which the victim had only Oxy, without alcohol, benzos, heroin, cocaine, cannabis, or some other drug. In the same month as the Times story, Richard Sackler sent an internal Purdue email that said, “We have to hammer on the abusers in every way possible. They are the culprits and the problem. They are reckless criminals.”

Purdue emerged mostly unscathed from all the extra scrutiny. Although the FDA did require changes to OxyContin’s label, it was far less than what activists wanted. The FDA ordered the addition of a so-called black box warning. The bold-font warning was a reminder to doctors that OxyContin was “a Schedule II controlled substance with an abuse liability similar to morphine.” No drug company liked having a black box warning on its label, but as I learned in my reporting, Purdue was not upset since it considered the language a good compromise. One marketing executive remarked later, “It is black box lite.” It merely reiterated what most physicians knew already about OxyContin.

In 2004, OxyContin officially earned the dubious distinction as the most abused drug in America.21 Parents who had lost children to OxyContin were trying to raise awareness about the drug’s dangers. The biggest concern for Purdue, however, was an ongoing investigation into Oxy’s marketing by John Brownlee, the U.S. Attorney for the Western District of Virginia, who started his probe in 2002. West Virginia was one of the states hardest hit by OxyContin.

In 2006, Brownlee was ready to bring a case. He forwarded a six-page memo to the DOJ’s Criminal Division to get authorization to file felony charges against Purdue and its top executives for money laundering, wire and mail fraud, and conspiracy.22 Brownlee got bad news from headquarters. The Criminal Division vetoed all the serious felony counts and instead gave him permission only to bring less serious charges around misbranding the drug. That was, at least, a clean and straightforward prosecution.

In May 2007, Purdue and three non-Sackler executives accepted a plea agreement. The company and officers pled guilty to a scheme “to defraud or mislead, marketed and promoted OxyContin as less addictive, less subject to abuse and diversion, and less likely to cause tolerance and withdrawal than other pain medications.”23 Purdue’s fine was $634.5 million, and the three executives paid a combined $34.5 million.

Purdue signed both Consent and Corporate Integrity agreements. It agreed not to make “any written or oral claim that is false, misleading, or deceptive” in marketing OxyContin and to report immediately any signs of false or deceptive marketing. The strict terms of those agreements should have been the end of Oxy’s nationwide trail of devastation. Instead, the ink was barely dry before Purdue started flagrantly disregarding the rules. The deadliest years and record abuse with OxyContin came after the 2007 guilty pleas.

And Then It Got Even Worse

Purdue went on a hiring binge that eventually doubled its sales force. It unleashed them to push Oxy with a renewed vigor. The company also paid millions to the “key physician opinion leaders” so they would convince doctors that OxyContin should be their first choice whenever a patient presented with serious pain. The results were impressive. In the year that Purdue pled guilty, sales passed $1 billion annually and profits exceeded $600 million. OxyContin provided 90 percent of Purdue’s profits.

When Purdue faced the possibility of generic competition in 2010, the company devised a “new and improved” coating that it said was more difficult to crush, snort or inject. Although Purdue’s two small studies showed the new version had “no effect” in reducing the addiction and overdose potential, the FDA still approved tamper-resistant OxyContin. (It took ten years before an FDA advisory panel ruled that the tamper-resistant Oxy had failed to reduce opioid overdoses).

With the FDA approval, Purdue spent millions on a splashy ad campaign directed at physicians. In the campaign, titled “Opioids with Abuse Deterrent Properties,” Purdue touted its crush-resistant formulation as the first-ever narcotic pain reliever that reduced the chances for abuse and slashed the addiction rate. The campaign worked. Many doctors believed it and increased their prescribing pace.

In 2011, four years after Purdue’s criminal guilty plea, OxyContin surpassed heroin and cocaine to become the nation’s most deadly drug. Sales also set a new record each year. When there was a slowdown in 2013, the Sacklers brought in McKinsey & Company consultants, who laid out a plan to “supercharge” sales. The results were almost immediate. In 2015, Forbes listed the Sackler family on its “Richest Families” list for the first time. The Sacklers, with an estimated net worth of $14 billion, had jumped ahead of the Rockefellers, Mellons, and Busches, among many others. Forbes titled the family “the OxyContin Clan.”24

The news about the Sacklers’ great fortune was lost under a deluge of news about the national toll from OxyContin. By 2015, for the first time, opioids killed more people than guns and car crashes combined, and lethal overdoses even surpassed the peak year of HIV/AIDS deaths. Statisticians blamed OxyContin for the first decline in two decades in the life expectancy of Americans. And a CDC report confirmed what some doctors suspected: prescription opioid users were 40 times more likely to become heroin addicts, making Oxy the most effective gateway drug into heroin. The CDC urged doctors either to “carefully justify” or “avoid” prescribing more than 60 mg daily. Still, the guidelines were voluntary. Only seven states passed legislation to limit the number of prescriptions.

In 2016, OxyContin and the opioid epidemic became a presidential campaign issue. The Joint Commission, responsible for accrediting hospitals and clinics, reversed its 2001 position that pain should be the fifth vital sign. Even the FDA was slowly recognizing the extent of the problem. Parents who lost children to opioids had submitted a citizen’s petition to the FDA, pleading with the regulators to classify Oxy for severe pain only. After eight years on the back burner, the agency was seriously considering it.

It’s Only Money

Suddenly, the Sacklers and Purdue, and their competitors, were on the defensive. The Trump administration declared the opioid epidemic a public health emergency in 2017. That action freed up extra federal resources for treatment. A few months later, forty-one state attorneys general subpoenaed internal Purdue marketing and promotion documents. Purdue announced plans to slash its sales force by half and that it would no longer market Oxy directly to individual physicians, instead concentrating on hospitals and clinics.

In 2019, a judicial panel decided to streamline the more than 2,500 pending lawsuits under the jurisdiction of a single federal judge in Ohio. The consolidated lawsuit was called the National Prescription Opiate Litigation. The following month, the Massachusetts Attorney General filed an amended complaint that was different from all others. It relied on Purdue’s internal records to conclude that eight of the Sackler-family directors had “created the epidemic and profited from it through a web of illegal deceit.” The New York Attorney General filed a similar action a few weeks later and added that the Sacklers had personally transferred hundreds of millions in assets to offshore tax havens.

To drive home how much the Sacklers had profited from OxyContin, court documents filed by the attorneys general revealed that the family directors had voted payments of $12 to $13 billion in profits since OxyContin went on sale. By the end of 2019, OxyContin had $35 billion in sales from its launch, while America recorded its 200,000th death since the government had begun tracking them.25

In the end, it was lawyers, state prosecutors, and the nation’s top class action litigators, who pried some financial justice from the many parties that shared responsibility for the national tragedy. Purdue filed for bankruptcy protection in late 2019 and the Sacklers sought protection from all the civil litigation so long as they contributed a lot of money to an overall settlement. In 2022, the family agreed to pay $6 billion toward a settlement and a bankruptcy judge signed off on a plan that freed them from civil litigation.26 (I co-wrote two New York Times opinion pieces that argued the judge had exceeded his bankruptcy court authority by discharging all actions pending against the Sacklers, who had not themselves filed bankruptcy. That issue and the complex bankruptcy plan are now pending before the Supreme Court.) Under the bankruptcy plan, Purdue became a public entity that continued to sell OxyContin, with any proceeds going to treatment and public health.

This article appeared in Skeptic magazine 28.4.

In 2022, Johnson & Johnson paid $5 billion to settle the litigation pending against it. J&J also announced it was quitting the opioid painkiller business. The country’s three largest wholesale drug distributors—AmerisourceBergen, Cardinal Health, and McKesson—reached a settlement in the tsunami of litigation pending against them by paying a combined $21 billion.27 Another $13.8 billion came from the big three pharmacy chains, Walmart, Walgreens, and CVS. Rite Aid filed for bankruptcy protection. The litigation has produced about $55 billion in total settlements.28

Still, none of that matters to many families who lost loved ones to the overzealous marketing of prescription painkillers. And the many families I have interviewed note that no one has gone to prison for having made such enormous profits off the deaths of several hundred thousand Americans. Many who helped fuel the epidemic, such as overprescribing doctors, owners of pill mills, and lax regulators at the FDA and in state health agencies, got away without so much as a slap on the wrist.

An unnamed plaintiff’s lawyer told The Guardian in 2018 that the Sacklers were “essentially a crime family… drug dealers in nice suits and dresses.” No prosecutors, however, had the courage to bring a criminal action against the Sacklers and other opioid kingpins.

What a shame.

About the Author

Gerald Posner is an award-winning journalist and author of thirteen books, including New York Times nonfiction bestsellers Why America Slept (about 9/11) and God’s Bankers (about the Vatican), and the Pulitzer Prize finalist Case Closed (about the JFK assassination). His latest, Pharma, is a withering and encyclopedic indictment of a drug industry that often seems to prioritize profits over patients. A graduate of the University of California at Berkeley, he was a litigation associate at a Wall Street law firm. Before turning to journalism, he spent several years providing pro bono legal representation on behalf of survivors of Nazi experiments at Auschwitz.

References
  1. https://rb.gy/6a7hv
  2. https://rb.gy/8pyh2
  3. https://rb.gy/h7pop
  4. https://rb.gy/lviuy; https://rb.gy/wrekp
  5. Cavers, D.F. (1939). The Food, Drug, and Cosmetic Act of 1938: Its Legislative History and Its Substantive Provisions. Law & Contemp. Probs., 6, 2.
  6. “John Bonica, Pain’s Champion and the Multidisciplinary Pain Clinic,” Relief of Pain and Suffering, John C. Liebeskind History of Pain Collection, Box 951798, History & Special Collections, UCLA Louise M. Darling Biomedical Library, Los Angeles, CA.
  7. Brennan, F. (2015). The U.S. Congressional “Decade on Pain Control and Research” 2001–2011: A Review. Journal of Pain & Palliative Care Pharmacotherapy, 29(3), 212–227; https://rb.gy/zmifj
  8. Porter, J., & Jick, H. (1980). Addiction Rare in Patients Treated With Narcotics. New England Journal of Medicine, 302(2), 123.
  9. https://rb.gy/zmg4c; https://rb.gy/leawh. In 2017, six researchers published in the NEJM the results of their review of all subsequent citations to the 1980 letter. “In conclusion, we found that a five-sentence letter published in the Journal in 1980 was heavily and uncritically cited as evidence that addiction was rare with long-term opioid therapy. We believe that this citation pattern contributed to the North American opioid crisis by helping to shape a narrative that allayed prescribers’ concerns about the risk of addiction associated with long-term opioid therapy.” Dr. Jick told the Associated Press in 2017: “I’m essentially mortified that that letter to the editor was used as an excuse to do what these drug companies did.”
  10. Portenoy, R.K., & Foley, K.M. (1986). Chronic Use of Opioid Analgesics in Non-Malignant Pain: Report of 38 Cases. Pain, 25(2), 171–186.
  11. Max quoted in Schottenfeld, J.R., Waldman, S.A., Gluck, A.R., & Tobin, D.G. (2018). Pain and Addiction in Specialty and Primary Care: The Bookends of a Crisis. Journal of Law, Medicine & Ethics, 46(2), 220–237.
  12. Morone, N.E., & Weiner, D.K. (2013). Pain as the Fifth Vital Sign: Exposing the Vital Need for Pain Education. Clinical Therapeutics, 35(11), 1728–1732.
  13. Sullivan, M.D., & Howe, C.Q. (2013). Opioid Therapy for Chronic Pain in the United States: Promises and Perils. Pain, 154, S94–S100.
  14. Weissman, D.E., & Haddox, J.D. (1989). Opioid Pseudoaddiction—an Iatrogenic Syndrome. Pain, 36(3), 363–366.
  15. “Definitions Related to the Use of Opioids for the Treatment of Pain,” Consensus Statement of the American Academy of Pain Medicine, the American Pain Society, and the American Society of Addiction Medicine, approved by the American Academy of Pain Medicine Board of Directors on February 13, 2001, the American Pain Society Board of Directors on February 14, 2001, and the American Society of Addiction Medicine Board of Directors on February 21, 2001 (replacing the original ASAM Statement of April 1997), published 2001.
  16. Wanzer, S.H., Federman, D.D., Adelstein, S.J., Cassel, C.K., Cassem, E.H., Cranford, R.E., … & Van Eys, J. (1989). The Physician’s Responsibility Toward Hopelessly Ill Patients: A Second Look. New England Journal of Medicine.
  17. Saunders, C. (1965). The Last Stages of Life. The American Journal of Nursing, 70–75.
  18. Saunders, C. (1963). The Treatment of Intractable Pain in Terminal Cancer. Proceedings of the Royal Society of Medicine, 56, 195–197.
  19. https://rb.gy/l7kvh
  20. https://rb.gy/tzwla
  21. Cicero, T. J., Inciardi, J. A., & Muñoz, A. (2005). Trends in Abuse of OxyContin and Other Opioid Analgesics in the United States: 2002–2004. The Journal of Pain, 6(10), 662–672.
  22. https://rb.gy/xdv0m
  23. 2007-05-09 Agreed Statement of Facts, Para 20.
  24. https://rb.gy/qi6ph
  25. https://rb.gy/67baw
  26. https://rb.gy/580po
  27. https://rb.gy/hz79m
  28. https://rb.gy/ma2m8
Categories: Critical Thinking, Skeptic

Reconductoring our Electrical Grid

neurologicablog Feed - Thu, 04/11/2024 - 5:26am

Over the weekend when I was in Dallas for the eclipse, I ran into a local businessman who works in the energy sector, mainly involved in new solar projects. This is not surprising, as Texas is second only to California in solar installation. I asked him if he was experiencing a backlog in connections to the grid, and his reaction was immediate – a huge backlog. This aligns with official reports – there is a huge backlog and it’s growing.

In fact, the various electrical grids may be the primary limiting factor in transitioning to greener energy sources. As I wrote recently, energy demand is increasing faster than previously projected. Our grid infrastructure is aging, and mainly uses 100-year-old technology. There are also a number of regulatory hurdles to expanding and upgrading the grid. There is good news in this story, however. We have at our disposal the technology to virtually double the capacity of our existing grid, while reducing the risk of sparking fires and weather-induced power outages. This can be done cheaper and faster than building new power lines.

The process is called reconductoring, which just means replacing existing power lines with more advanced power lines. I have to say, I falsely assumed that all this talk about upgrading the electrical grid included replacing existing power lines and other infrastructure with more advanced technology, but it really doesn’t. It is mainly about building new grid extensions to accommodate new energy sources and demand. Every resource I have read, including this Forbes article, gives the same primary reason why this is the case: utility companies make more money from expensive expansion projects, for which they can charge their customers. Cheaper reconductoring projects make them less money.

Other reasons are given as well. The utility companies may be unfamiliar with the technology, not want to retrain their workers, see this as “new technology” that should be approached as a pilot project, and may have some misconceptions about the safety of the technology. However, the newer power lines have been in use for over two decades, and Europe is way ahead of the US in installing them. These are hurdles that can all be solved with a little money and regulation.

Traditional power lines have a steel core with surrounding aluminum wires. Newer power lines have a carbon composite core with surrounding annealed aluminum. The newer cables are stronger, sag less, and have up to twice the energy-carrying capacity of the older lines. Upgrading to the newer cables is a no-brainer.
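
To make that capacity claim concrete, here is a minimal back-of-the-envelope sketch in Python. The line ratings and the 1.8x uprating factor are illustrative assumptions for the sake of the example, not data from any real utility or conductor specification.

```python
# Back-of-the-envelope sketch: capacity added by reconductoring a few lines.
# All numbers are illustrative assumptions, not real utility or conductor data.

# Assumed thermal ratings (MW) of some existing steel-core (ACSR) lines.
existing_ratings_mw = [400, 650, 900, 1200]

# Advanced carbon-composite-core conductors are said to carry up to roughly
# twice the current of the lines they replace; assume a 1.8x uprating here.
UPRATING_FACTOR = 1.8

def reconductored_capacity(ratings, factor):
    """Return (old total, new total, added capacity) in MW."""
    old_total = sum(ratings)
    new_total = sum(r * factor for r in ratings)
    return old_total, new_total, new_total - old_total

old, new, added = reconductored_capacity(existing_ratings_mw, UPRATING_FACTOR)
print(f"Existing corridor capacity: {old:.0f} MW")
print(f"After reconductoring:       {new:.0f} MW (+{added:.0f} MW)")
```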

The electrical grids are now the primary limiting factor in getting new clean energy online. But adding new power lines is a slow process. There is no single agency that can do it, so new permits have to go through a maze of local jurisdictions. Utility companies also fight with each other over who has to pay for what. And local residents create a NIMBY problem, pushing back against new power lines.

Reconductoring bypasses all of those issues, because it uses existing power lines and infrastructure. There are no new permits – you just do it.

In a way, we can take advantage of our past negligence. We have essentially been building new power lines to add more capacity, rather than updating lines. This means we have left ourselves an easy way to massively expand our grid capacity. There is already some money in the infrastructure bill and the IRA for grid upgrades, but the consensus seems to be that this is not enough. We likely need a new bill, one that provides the regulation and funding necessary for a massive reconductoring project in the US. And again, the best part about this approach is that it can be done fast. We can get ahead of our increasing energy demand, and make the grid more resilient and safer.

This will not solve all problems. Some new additions will still need to be made for the grid, not only to expand overall capacity, but to bring new locations onto the grid, both sources and users of electricity. Those necessary grid expansions, however, can take priority, as we won’t need to build new towers just to add capacity to existing routes.

Yet again it seems we have the technology we need to successfully make the transition to a much greener energy sector. We just need to get our act together. We need to make some strategic investments and changes to regulations and how we do things. There are about 3,000 electric utility companies in the US that are responsible for grid upgrades. There are also many state and local jurisdictions. This is an impossible patchwork of entities that need to work together to improve, update, and expand the grid, and so the result is a slow bureaucratic mess (which should come as a surprise to no one). There are also some perverse incentives, such as the way utility companies are reimbursed for capital expenditures.

Again I am reminded of my experience with telehealth – we had the technology, and the advantages were all there. But we could not seem to make it happen because of bureaucratic hurdles. Then COVID hit, and literally overnight we made it happen. If we see the threat of climate change with the same urgency, we can similarly remove logistical hurdles and make a green transition happen.

The post Reconductoring our Electrical Grid first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #931: Error Erasure Extravaganza

Skeptoid Feed - Tue, 04/09/2024 - 2:00am

It's time once again for Skeptoid to correct another round of errors in previous shows.

Categories: Critical Thinking, Skeptic

Eve Herold — Robots and the People Who Love Them

Skeptic.com feed - Tue, 04/09/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss421_Eve_Herold_2024_04_09.mp3 Download MP3

If there’s one universal trait among humans, it’s our social nature. The craving to connect is universal, compelling, and frequently irresistible. This concept is central to Robots and the People Who Love Them. Socially interactive robots will soon transform friendship, work, home life, love, healthcare, warfare, education, and nearly every nook and cranny of modern life. This book is an exploration of how we, the most gregarious creatures in the food chain, could be changed by social robots. On the other hand, it considers how we will remain the same, and asks how human nature will express itself when confronted by a new class of beings created in our own image.

Drawing upon recent research in the development of social robots, including how people react to them, how in our minds the boundaries between the real and the unreal are routinely blurred when we interact with them, and how their feigned emotions evoke our real ones, science writer Eve Herold takes readers through the gamut of what it will be like to live with social robots and still hold on to our humanity. This is the perfect book for anyone interested in the latest developments in social robots and the intersection of human nature and artificial intelligence and robotics, and what it means for our future.

Eve Herold is an award-winning science writer and consultant in the scientific and medical nonprofit space. A longtime communications and policy executive for scientific organizations, she currently serves as Director of Policy Research and Education for the Healthspan Action Coalition. She has written extensively about issues at the crossroads of science and society, including stem cell research and regenerative medicine, aging and longevity, medical implants, transhumanism, robotics and AI and bioethical issues in leading-edge medicine. Previous books include Stem Cell Wars and Beyond Human, and her work has appeared in the Wall Street Journal, Vice, the Washington Post and the Boston Globe, among others. She’s a frequent contributor to the online science magazine, Leaps, and is the recipient of the 2019 Arlene Eisenberg Award from the American Society of Journalists and Authors.

Shermer and Herold discuss:

  • What happened to our flying cars and jetpacks from The Jetsons?
  • What is a robot, anyway? And what are social robots?
  • Oskar Kokoschka, Alma Mahler, and the female doll
  • Robot nannies, friends, therapists, caregivers, and lovers
  • Sex robots
  • The uncanny valley: roboticist Masahiro Mori in 1970
  • Robots in science fiction
  • Psychological states: anthropomorphism, effectance (the need to interact effectively with one’s environment), theory of mind (onto robots), social connectedness
  • “Personal, social, emotional, home robots”
  • Emotions, animism, mind
  • Emotional intelligence
  • Turing Test
  • Artificial intelligence and natural intelligence
  • What is AI and AGI?
  • The alignment problem
  • Large Language Models
  • ChatGPT, GPT-4, GPT-5 and beyond
  • Robopocalypse
  • Robo soldiers
  • What is “mind”, “thinking”, and “consciousness”, and how do molecules and matter give rise to such nonmaterial processes?
  • Westworld: Robot sentience?
  • The hard problem of consciousness
  • The self and other minds
  • How would we know if an AI system was sentient?
  • Can AI systems be conscious?
  • Does Watson know that it beat the great Ken Jennings in Jeopardy!?
  • Self-driving cars
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Eclipse 2024

neurologicablog Feed - Mon, 04/08/2024 - 5:37am

I am currently in Dallas, Texas, waiting to see, hopefully, the 2024 total solar eclipse. This would be my first total eclipse, and everything I have heard indicates that it is an incredible experience. Unfortunately, the weather calls for some clouds, although forecasts have been getting a little better over the past few days, with the clouds being delayed. Hopefully there will be a break in the clouds during totality.

Actually there is another reason to hope for a good viewing. During totality the temperature will drop rapidly. This can cause changes in pressure that will temporarily disperse some types of clouds.

I am prepared with eclipse glasses, a pair of solar binoculars, and one of my viewing companions has a solar telescope. These are all certified and safe, and I have already used the glasses and binoculars extensively. You can use them to view the sun even when there is not an eclipse. With the binoculars you can see sunspots – it’s pretty amazing.

While in Dallas, we (me and the SGU crew, including George Hrab and our tech guru, Ian) put on three shows over the weekend, including recording two live episodes of the SGU. These were our biggest crowds ever for a live event, and included mostly people not from Texas. People from all over the world are here to see the eclipse.

I have to add, just because there is so much talk about this in the media, a clarification about the danger of viewing solar eclipses. You can view totality without protection and without danger. Also, during most of the partial eclipse, viewing the eclipse is no different than viewing the sun. It is dangerous to look directly at the sun. You should not do it as it can damage your retina.

But – we all live our lives without fearing accidentally staring at the sun, because it hurts and we naturally don’t do it. The only real danger of an eclipse is when most of the sun is covered, so that only a crescent of sun is visible. In this case the remaining amount of sun is not bright enough to trigger pain and cause us to look away. But that sliver of sun is still bright enough to damage your retina. So don’t look directly at a partial eclipse even if it is not painful. This includes locations out of the path of totality that will have a high degree of sun cover, or just before or after totality. That is when you want to use certified eclipse glasses (that are in good condition). During totality you do not need eclipse glasses, and you would see nothing but black anyway.

I will add updates here, and hopefully some pictures, once the eclipse happens.

Update: Well, despite weeks of bad weather reports and angst, we had clear skies in Dallas, and got to see the entire eclipse, including all of totality. Absolutely amazing. It is one of those wondrous natural phenomena that you have to experience in person.

During totality we were able to see multiple prominences, including one big one. Essentially this was a huge arc of red gas extending from the surface of the sun. Beautiful.

I would definitely recommend planning a trip to a future total solar eclipse. It will be worth it.

The post Eclipse 2024 first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #978 - Apr 6 2024

Skeptics Guide to the Universe Feed - Sat, 04/06/2024 - 5:00am
Guest Rogue: Andrea Jones Rooy; Quickie with Bob: Silicon Spikes; News Items: Havana Syndrome, Robo Taxis in New York, Rebellions - Cultural Memory - and Eclipses, Gravitational Waves and Human Life; Your Questions and E-mails: Evolution of Gullibility; Who's That Noisy; Science or Fiction
Categories: Skeptic

Lance Grande — The Formation, Diversification, and Extinction of World Religions

Skeptic.com feed - Sat, 04/06/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss420_Lance_Grande_2024_04_06.mp3 Download MP3

Thousands of religions have adherents today, and countless more have existed throughout history. What accounts for this astonishing diversity?

This extraordinarily ambitious and comprehensive book demonstrates how evolutionary systematics and philosophy can yield new insight into the development of organized religion. Lance Grande―a leading evolutionary systematist―examines the growth and diversification of hundreds of religions over time, highlighting their historical interrelationships. Combining evolutionary theory with a wealth of cultural records, he explores the formation, extinction, and diversification of different world religions, including the many branches of Asian cyclicism, polytheism, and monotheism.

Grande deploys an illuminating graphic system of evolutionary trees to illustrate historical interrelationships among the world’s major religious traditions, rejecting colonialist and hierarchical “ladder of progress” views of evolution. Extensive and informative illustrations clearly and vividly indicate complex historical developments and help readers grasp the breadth of interconnections across eras and cultures.

The Evolution of Religions marshals compelling evidence, starting far back in time, that all major belief systems are related, despite the many conflicts that have taken place among them. By emphasizing these broad historical interconnections, this book promotes the need for greater tolerance and deeper, unbiased understanding of cultural diversity. Such traits may be necessary for the future survival of humanity.

Lance Grande is the Negaunee Distinguished Service Curator, Emeritus, of the Field Museum of Natural and Cultural History in Chicago. He is a specialist in evolutionary systematics, paleontology, and biology who has a deep interest in the interdisciplinary applications of scientific method and philosophy. His many books include Curators: Behind the Scenes of Natural History Museums (2017) and The Lost World of Fossil Lake: Snapshots from Deep Time (2013). His new book is The Evolution of Religions: A History of Related Traditions.

Shermer and Grande discuss:

  • Why is a paleontologist and evolutionary theorist interested in religion?
  • Evolutionary systematics and comparativism in evolutionary biology, linguistics, and the history of religion
  • What is a comparative systematicist?
  • E. O. Wilson’s consilience approach
  • Agnostic approach: not addressing the truth value of any one religion
  • What is religion?
  • Variety: 10,000 different religions: Christianity (33%), Islam (23%), Hinduism/Buddhism (23%), Judaism (0.2%), Other (10%), Agnosticism (10%), Atheism (2%)
  • Evolutionary trees of religion
  • Biological vs. cultural evolution & diversification: Lamarckian vs. Darwinian
  • Historical colonialist progressivism and social Darwinism
  • Franz Boas, Margaret Mead, historical particularism
  • Rather than focusing on differences, focus on similarities
  • Nature/Nurture & The Blank Slate in anthropology & the social sciences
  • Early evolutionary origins of religion: the cognitive revolution, agenticity, patternicity, theory of mind, animism, spiritism, polytheism
  • Gobekli Tepe as the earliest religious ceremonial structure
  • Machu Picchu and Inca religion
  • Human sacrifice and religion
  • Apocalypto
  • Pizarro, Atahualpa, and Spanish/European colonialism & eradication of New World religions
  • Time’s arrow and Time’s cycle: Asian Cyclicism
  • Dharmic religion (India), Taoism, Buddhism, Jainism, Sikhism, Shintoism (Hirohito)
  • Old World Hard Polytheism (vs. Soft?) & New World Hard Polytheism (Mesopotamian, Egyptian, Celtic, Greek, Old Norse, Siberian totemism, Alaskan totemism)
  • Colonialism and missionaries extinguished many polytheistic religions
  • Linear Monotheism: Atenism, Zoroastrianism, El, Yahweh, Jehovah, Monad, Allah (linear time: one birth, one life, one death, one eternal afterlife; dualistic cosmology: good vs. evil, light vs. dark, heaven vs. hell); proselytic: conversion efforts
  • Abrahamic Monotheism 6th century BCE Second Temple Judaism and Samaritanism
  • Included prophets: Noah, Abraham, Moses (60% of all religious people today)
  • Tanakh sacred scripture 6th century BCE: Hebrew Bible, Old Testament, Quran
  • Jesu-venerationism (1st century CE): Ebionism (Jesus as prophet but not divine), Traditional Christianity, Biblical Demiurgism (primal good god Monad, evil creator spirit Demiurge; saw Jesus as the spiritual emanation of the Monad), Islam
  • Reformation: Catholicism split into Protestantism, Anglicanism
  • Islam: revered 25 prophets from Adam to Jesus, ending with Muhammad
  • Expansion of Islam through conquests in the 7th and 8th centuries CE
  • 4 Generalizations:

    • Organized Religions are historically related at one ideological level or another (illustrated by trees);
    • Largest major branches today were historically intertwined with major political powers;
    • Authority of women declined with the rise of male dominated pantheons, empires, clergies, caliphates;
    • Religion played a role in our species’ early ability to adapt to its social and physical environment: tribalism was a competitive advantage for early humans in which communal societies that developed agriculture, commerce, educational facilities, and armies out-competed less communitarian groups.
Show Notes
How We Believe

In my 2000 book How We Believe: Science, Skepticism, and the Search for God, I defined religion as “a social institution that evolved as an integral mechanism of human culture to create and promote myths, to encourage altruism and reciprocal altruism, and to reveal the level of commitment to cooperate and reciprocate among members of the community.” That is, there are two primary purposes of religion:

  1. The creation of stories and myths that address the deepest questions we can ask ourselves: Where did we come from? Why are we here? What does our ultimate future hold?
  2. The production of moral systems to provide social cohesion for the most social of all the social primates. God figures prominently in both these modes as the ultimate subject of mythmaking and the final arbiter of moral dilemmas and enforcer of ethical precepts.
From Shermer’s book Truth

“Jesus was a great spiritual teacher who had a profound effect on many people,” writes Lance Grande in his magisterial The Evolution of Religions, admitting that “he became what is probably the most influential person in history.” But this says nothing about the verisimilitude of the miracle claims made in Jesus’ name. In fact, as Grande notes, neither during his own lifetime (4BC-30 CE), nor in the earliest writings of the New Testament by Paul, were miracle claims made in Jesus’s name. Even Paul’s mention of the resurrection of Christ was described in 1 Corinthians (15:44) as a spiritual event rather than a literal one: “It is sown a natural body; it is raised a spiritual body. There is a natural body, and there is a spiritual body.” In Paul’s writings about Christ, says Grande, “he speaks of him in a mystical sense, as a spiritual entity of human consciousness.” Many contemporary groups, in fact, “saw Christ as a spirit that possessed the man Jesus at his baptism and left him before his death at the crucifixion” (called “separationism”). But since political monarchs in the first century CE were treated as divine, Christian proselytizers began to refer to Jesus as the “King of Kings,” and so came to pass the deification of an otherwise mortal man. Here is how Grande recaps the transformation:

Reports of specific miracles only began to appear several decades after the death of Jesus, in the Gospel of Mark (65–70 CE) and in later gospels (80–100 CE). This suggests that stories of miracles (e.g., controlling the weather, creating loaves and fishes out of nothing, turning water into wine, healing the sick, and raising the physical dead) were layered into the story of Jesus as expressions of an ultimate God experience.

And as is typical of myths in the making, in the retelling across peoples, spaces, and generations, layers of improbability are added as a test of faith:

Once the stories of miracles began to appear in early Christianity, they were retold repeatedly, until they became ingrained beliefs. More stories were added, such as miracles about singing angels, stars announcing earthly happenings, and even a fetus (that of John the Baptist in his mother Elizabeth’s womb) leaping to acknowledge the anticipated power of another fetus (that of Jesus in his mother Mary’s womb). These details, many of which probably began as metaphorical lessons, gradually became accepted by many followers as literal historic truths. It is probable that some of these stories were never intended as documents of historical fact.

From metaphorical lessons to historic truths. Perhaps this is what the author of the Gospel of John meant when he wrote (John 20:31): “But these are written, that ye might believe that Jesus is the Christ, the Son of God; and that believing ye might have life through his name.”

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Maggie Jackson — Uncertain: The Wisdom and Wonder of Being Unsure

Skeptic.com feed - Tue, 04/02/2024 - 1:39pm
https://traffic.libsyn.com/secure/sciencesalon/mss419_Maggie_Jackson_2024_04_02.mp3 Download MP3

In an era of terrifying unpredictability, we race to address complex crises with quick, sure algorithms, bullet points, and tweets. How could we find the clarity and vision so urgently needed today by being unsure? Uncertain is about the triumph of doing just that. A scientific adventure tale set on the front lines of a volatile era, this epiphany of a book by award-winning author Maggie Jackson shows us how to skillfully confront the unexpected and the unknown, and how to harness not-knowing in the service of wisdom, invention, mutual understanding, and resilience.

Long neglected as a topic of study and widely treated as a shameful flaw, uncertainty is revealed to be a crucial gadfly of the mind, jolting us from the routine and the assumed into a space for exploring unseen meaning. Far from luring us into inertia, uncertainty is the mindset most needed in times of flux and a remarkable antidote to the narrow-mindedness of our day. In laboratories, political campaigns, and on the frontiers of artificial intelligence, Jackson meets the pioneers decoding the surprising gifts of being unsure. Each chapter examines a mode of uncertainty-in-action, from creative reverie to the dissent that spurs team success. Step by step, the art and science of uncertainty reveal being unsure as a skill set for incisive thinking and day-to-day flourishing.

Maggie Jackson is an award-winning author and journalist known for her pioneering writings on social trends, particularly technology’s impact on humanity. Winner of the 2020 Dorothy Lee Book Award for excellence in technology criticism, her book Distracted was compared by FastCompany.com to Silent Spring for its prescient critique of technology’s excesses, named a Best Summer Book by the Seattle Post-Intelligencer, and was a prime inspiration for Google’s 2018 global initiative to promote digital well-being. Jackson is also the author of Living with Robots and The State of the American Mind. Her expertise has been featured in The New York Times, Business Week, Vanity Fair, Wired.com, O Magazine, and The Times of London; on MSNBC, NPR’s All Things Considered, Oprah Radio, The Takeaway, and on the Diane Rehm Show and the Brian Lehrer Show; and in multiple TV segments and film documentaries worldwide. Her speaking career includes appearances at Google, Harvard Business School, and the Chautauqua Institute. Jackson lives with her family in New York and Rhode Island.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

AI Designed Drugs

neurologicablog Feed - Tue, 04/02/2024 - 5:04am

On a recent SGU live streaming discussion someone in the chat asked – aren’t frivolous AI applications just toys without any useful output? The question was meant to downplay recent advances in generative AI. I pointed out that the question is a bit circular – aren’t frivolous applications frivolous? But what about the non-frivolous applications?

Recent generative AI applications are a powerful tool. They leverage the power and scale of current data centers with the massive training data provided by the internet, using large language model AI tools that are able to find patterns and generate new (although highly derivative) content. Most people are likely familiar with this tech through applications like ChatGPT, which uses this AI process to generate natural-language responses to open-ended “prompts”. The result is a really good chat bot, but also a useful interface for searching the web for information.

This same technology can generate output other than text. It can generate images, video, and music. The results are technically impressive (if far from perfect), but in my experience not genuinely creative. I think these are the fun applications the questioner was referring to.

But there are many serious applications of this technology in development as well. An app like ChatGPT can make an excellent expert system, searching through tons of data to produce useful information. This can have many practical applications, from generating lists of potential diagnoses for doctors to consider, to writing first-draft legal contracts. There are still kinks to be worked out, but the potential is clearly amazing.

Perhaps most amazing, however, is the potential for AI in general, including these new generative AI applications, to assist in scientific research. This is already happening. As someone who reads dozens of science press releases a week, it is clear that the number of research studies leveraging AI is growing rapidly. The goal is to use AI to essentially complete months of research in mere hours. A recent such study caught my attention as a particularly powerful example.

The researchers used generative AI (an application called SyntheMol) to design potential antibiotics. Again, AI-aided drug development is not new, but this looks like a significant advance. The idea is to use a large language model AI to generate not text but chemical structures. This is feasible because we already have a large library of known drug-like chemicals, their structure, their chemistry, the chemical reactions that make them, and their biological activity. The AI was trained on 130,000 chemical building blocks. This is a type of chemical language, and the AI can be used to generate new iterations with predicted properties.

This is essentially what traditional drug design does, but AI just does it much faster. It is estimated, for example, that there are 10^60 potential drug-like chemical structures that could exist. That is an impossibly large space to explore with conventional methods. The AI used in the current study explored a “chemical space” of 30 billion new compounds. That is still a small slice of all possible drug molecules, but this subset had parameters. They were looking for chemicals that could have potential antibacterial activity against Acinetobacter baumannii, a Gram-negative bacterial pathogen. This also has been done before – looking for antibiotics – but one problem was that many of the resulting chemicals were hard to synthesize. So this time they included another parameter – only make molecules that are easy to synthesize, and include the chemical reaction steps necessary to make them.
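
For a sense of scale, here is a quick calculation of how small a slice that 30-billion-compound screen really is. Both figures are just the rough numbers quoted above.

```python
# Rough sense of scale: how much of the estimated drug-like chemical space
# a 30-billion-compound screen actually covers. Both numbers are the rough
# figures quoted above, not precise values.
screened = 30e9           # ~30 billion compounds explored in this study
drug_like_space = 1e60    # commonly cited estimate of possible drug-like structures

fraction = screened / drug_like_space
print(f"Fraction of chemical space explored: {fraction:.0e}")  # about 3e-50
```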

In just 9 hours SyntheMol generated 25,000 potential new drugs. The researchers then filtered this list looking for the most novel compounds, to avoid current resistance to existing antibiotics. They chose 70 of the most promising chemicals and handed them off, including the recipe of chemical reactions to synthesize them, to a Ukrainian chemical company. They were able to synthesize 58 of them. The researchers then tested them as antibiotics and found that six of them represented structurally unique molecules with antibacterial activity against A. baumannii.

These results would have been impossible in this time frame without the use of generative AI. I would call that a non-frivolous outcome.

Drug candidates resulting from this process still need to be tested clinically, and may fail for a variety of reasons. But chemists who develop drugs know the parameters that make a successful drug. It has to have good bioavailability, a reasonable half-life, and a relative lack of toxicity (among others). These are all features that can be somewhat predicted based upon known chemical structures. These can all become parameters that SyntheMol or a similar application can use when generating potential molecules.
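
As an illustration of how such criteria could be applied digitally, here is a minimal generate-then-filter sketch in Python. The predictor functions, thresholds, and candidate names are hypothetical placeholders for illustration only; they are not SyntheMol's actual interface or the researchers' method.

```python
import random

# Hypothetical placeholder predictors. In a real pipeline these would be
# trained models scoring each candidate structure; here they return random
# scores so the sketch runs end to end.
def predicted_bioavailability(mol):
    return random.random()

def predicted_half_life_hours(mol):
    return random.uniform(0.5, 24)

def predicted_toxicity(mol):
    return random.random()

def predicted_antibacterial_activity(mol):
    return random.random()

def passes_filters(mol):
    """Keep only candidates whose predicted properties clear rough thresholds."""
    return (predicted_bioavailability(mol) > 0.5
            and predicted_half_life_hours(mol) > 4
            and predicted_toxicity(mol) < 0.3
            and predicted_antibacterial_activity(mol) > 0.7)

# Stand-in for a generated candidate list; real candidates would be molecular
# structures plus the reaction steps needed to synthesize them.
candidates = [f"candidate_{i}" for i in range(25_000)]

shortlist = [mol for mol in candidates if passes_filters(mol)]
print(f"{len(shortlist)} of {len(candidates)} candidates pass the in-silico filters")
```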

The goal, of course, is to do as much of the selection and filtering as possible digitally, so that when you get to in-vitro testing, animal testing, and eventually human testing, the probability of a successful drug has already been maximized. The potential for saving money, time, and suffering is massive.

This is only one specific example of how this new generative AI technology can supercharge scientific research. This is a quiet revolution that is already happening. In spaces where this kind of technology can be effectively leveraged, the pace of scientific progress may increase by orders of magnitude. Fans of the Singularity might argue that this is the beginning – a time when the pace of scientific and technology progress becomes so rapid that society cannot keep up, and the horizon of future predictability narrows to insignificance. The Singularity refers more to a time when general AI takes over human civilization, technology and research. But even with these narrow generative AI tools we are starting to see the real potential here. It’s both exciting and frightening.

The post AI Designed Drugs first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #930: Dr. Crow and the Melon Heads

Skeptoid Feed - Tue, 04/02/2024 - 2:00am

Some say creepy children with huge balloon heads stalk the woods at night, waiting to attack you.

Categories: Critical Thinking, Skeptic

It’s The Russians! The Latest 60 Minutes Episode on Havana Syndrome Engages in Tabloid Journalism

Skeptic.com feed - Tue, 04/02/2024 - 12:00am

In a special double segment that is reminiscent of The National Enquirer in its heyday, 60 Minutes has aired another dramatic story on Havana Syndrome. If it had been a sporting event, the score would have been 8-0: eight people interviewed and not a single skeptic.

Billed by CBS News as a “breakthrough” in their five-year-long investigation, the episode that aired Sunday night, March 31, 2024, raises many important questions—not about the existence of Havana Syndrome, but about the present state of journalistic integrity. As someone who has followed this saga from the beginning, I found the new 60 Minutes report to be a case study in fearmongering and selective omission. The program was filled with misleading statements and circumstantial evidence that were used to gin up a story that is on life support after the U.S. intelligence community concluded last year that “Havana Syndrome” is likely a condition that never existed.

In the leadup to the broadcast, CBS News teased the segment with the headlines “Targeting Americans” and “Breakthrough in Havana Syndrome Investigation.” Yet in the report it was described as “a possible breakthrough” and there was no conclusive proof that Americans, or anyone else, have been targeted.1

60 Minutes reporter Scott Pelley featured an interview with Gregory Edgreen, a former American military intelligence officer who oversaw the Pentagon investigation into “Havana Syndrome.” He told Pelley that the present situation is dire for American security as “the intelligence officers and our diplomats working abroad are being removed from their posts with traumatic brain injuries—they’re being neutralized.”2

Edgreen disagreed with last year’s intelligence community consensus, left his position, and has founded Advanced Echelon, a company devoted to caring for “Havana Syndrome survivors.”3 His interview is reminiscent of recent attempts by some media outlets to support the unfounded claim that the U.S. Government is covering up information on the existence of recovered alien bodies and crashed saucers. This is the opinion of one man who was involved in an investigation, yet he does not represent the intelligence community, which has deemed the purported attacks to be “highly unlikely” and the existence of the condition itself as dubious.

Enter David Relman

Predictably, Stanford microbiologist David Relman made an appearance and told Pelley that his panel found “clear evidence of an injury to the auditory and vestibular system of the brain.” This is not supported by the evidence. Relman failed to mention not only that recent studies have found no such damage, but also that many Havana Syndrome patients have been diagnosed with psychosomatic disorders that are commonly triggered by stress.

Pelley also claimed that a senior Department of Defense official was attacked during last year’s NATO Summit in Lithuania. His source: multiple unnamed people. He said that the official involved—also unnamed—“was struck by the symptoms and sought medical treatment.” We are told nothing more. The trouble with this claim is that Havana Syndrome has been associated with a laundry list of common health complaints ranging from fatigue and forgetfulness to nausea, nosebleeds, headache, tinnitus, ear pain and difficulty sleeping. Throughout the broadcast there were also assertions that victims were suffering from brain injury—something that has never been demonstrated. These symptoms are also features of countless other medical conditions.

Also interviewed was an FBI agent identified only as “Carrie,” who said she had been attacked by a directed energy weapon and, while she had been given permission to discuss her condition by her employer, “she wasn’t allowed to discuss the cases she was on when she was hit.” Appearing in disguise to protect her identity, she described how one day in 2021 at her Florida home she felt “pressure and pain” in her head that radiated down her jaw and neck and into her chest before she passed out. Since then, she says she has experienced problems with long and short-term memory and difficulty with spatial awareness: “If I turn too fast, my gyroscope is off… it’s like I’m a step behind where I’m supposed to be, so I’ll turn too fast and I will literally walk right into the wall.”

Pelley then claimed that “other sources”—anonymous of course—told 60 Minutes that one of the cases involved a suspected Russian spy who was caught speeding on a Florida highway in 2020. The man had apparently been interviewed by Carrie on several occasions. He was identified as a former Russian military officer with an electrical engineering background. While the man was serving a sentence for reckless driving and evading police, “Carrie” said she was hit two more times—about a year apart—once in Florida, once in California. The “attacks” left her disoriented and with a feeling that her body was pulsating.

The Russians Are Coming!

Who is behind these attacks? The Russians, of course. Pelley casts suspicion for the “attacks” on a Russian military unit known as 29155. He also claims to have found the smoking gun—a document sourced online showing that one of the unit’s officers had been paid for working on “nonlethal acoustic weapons.” This is not the dramatic find that it is made out to be. Acoustic weapons are in common use by governments around the world. Sound cannons—commonly known as Long-Range Acoustic Devices—have long been employed to control crowds. Beyond this they have shown little practical value, as the waves rapidly disperse.

It was then claimed that unit 29155 may have been in the city of Tbilisi in the former Soviet Republic of Georgia, when several Americans experienced mysterious health incidents there. An unnamed 40-year-old wife of a Justice Department official told Pelley that she was struck by an energy weapon when her husband was working at the U.S. Embassy in Tbilisi in October 2021. She said she was suddenly overcome by a piercing sound in her left ear, felt “a fullness” in her head, developed a headache, and began vomiting. What happened next reads like a spy novel. She looked outside and spotted a car near the front gate and a man nearby. Pelley then showed her a photo of a member of unit 29155 who was thought to have been in the city at the time of the “attack.” When asked if it looked like the man in the photo, she unhesitatingly pronounced, “it absolutely does.” Shortly after, however, she grew hesitant: “I cannot absolutely say for certain that it is this man…” But after a few more seconds elapsed she proclaimed: “I can absolutely say that this looks like the man….” This is not exactly an iron-clad identification.

The woman says she continues to suffer balance problems, headaches, and “brain fog,” the latter being a term commonly used by people experiencing anxiety. She also said that her symptoms typically worsen at night. These are common features of vestibular dysfunction. Pelley dramatically noted that the woman has also been treated for “holes in her inner ear canals.” While this could have been from a mysterious weapon, there is a more mundane explanation: a perilymphatic fistula, which can be caused by barotrauma from changes in air or water pressure, such as from flying or scuba diving. Strenuous physical exercise can also trigger the condition, as can head trauma.

A Story with Nine Lives

Havana Syndrome has become a cottage industry for podcasters, bloggers, and the news media because it’s a dramatic story that reads like a spy novel and is guaranteed to get clicks and views. It has also turned into the ultimate game of whack-a-mole. Like a cat with nine lives, it just won’t die. I cannot help but think that when enough people become aware of the full story, with key facts no longer omitted, Havana Syndrome will finally fade from the headlines. If I had watched this story with only a superficial knowledge of Havana Syndrome, I probably would have finished watching the episode convinced that there really have been Russian attacks on Americans using a secret weapon. But the facts point to a far more mundane explanation.

What happened to journalistic integrity? For years many journalists have reported that American citizens have been hit with a mysterious energy weapon. Scott Pelley has filed no less than three such reports for 60 Minutes.4 At the very least, viewers are entitled to hear from prominent skeptics whose voices were silenced. A news program that interviews eight believers and no skeptics isn’t a news program—it’s propaganda.

About the Author

Robert E. Bartholomew is an Honorary Senior Lecturer in the Department of Psychological Medicine at the University of Auckland in New Zealand. He is a Fellow of the Committee for Skeptical Inquiry and the co-author of Havana Syndrome: Mass Psychogenic Illness and the Real Story Behind the Embassy Mystery and Hysteria (Copernicus, 2020) with neurologist Robert Baloh.

References
  1. Costa, Robert (2024). The CBS Evening News, March 29, 2024, at 10:00 sec. and accessed at: https://www.cbsnews.com/evening-news/; See also https://www.cbsnews.com/video/targeting-americans-sunday-on-60-minutes/
  2. Pelley, Scott (2024). “Foreign adversaries may be involved in Havana Syndrome, sources say.” 60 Minutes (CBS News, NY). March 31.
  3. See the Advanced Echelon homepage at: https://www.advancedechelon.net/about
  4. See also: Pelley, Scott (2022). “Havana Syndrome: High-level national security officials stricken with unexplained illness on White House grounds.” 60 Minutes (CBS News, NY). February 20, accessed at: https://cbsn.ws/3MfZaLR; Pelley, Scott (2019). “Brain damage suffered by U.S. diplomats abroad could be work of hostile foreign government.” 60 Minutes (CBS News, NY). March 17.
Categories: Critical Thinking, Skeptic

What to Make of Havana Syndrome

neurologicablog Feed - Mon, 04/01/2024 - 5:21am

I have not written before about Havana Syndrome, mostly because I have not been able to come to any strong conclusions about it. In 2016 there was a cluster of strange neurological symptoms among people working at the US Embassy in Havana, Cuba. They would suddenly experience headaches, ringing in the ears, vertigo, blurry vision, nausea, and cognitive symptoms. Some reported loud whistles, buzzing or grinding noise, usually at night while they were in bed. Perhaps most significantly, some people who reported these symptoms claim that there was a specific location sensitivity – the symptoms would stop if they left the room they were in and resume if they returned to that room.

These reports led to what is popularly called “Havana Syndrome”, and what the US government calls “anomalous health incidents” (AHIs). Eventually diplomats in other countries also reported similar AHIs. Havana Syndrome, however, remains a mystery. In trying to understand the phenomenon I see two reasonable narratives or hypotheses that can be invoked to make sense of all the data we have. I don’t think we have enough information to definitively reject either narrative, and each has its advocates.

One narrative is that Havana Syndrome is caused by a weapon, thought to be a directed pulsed electromagnetic or acoustic device, used by our adversaries to disrupt American and Canadian diplomats and military personnel. The other is that Havana Syndrome is nothing more than preexisting conditions or subjective symptoms caused by stress or perhaps environmental factors. All it would take is a cluster of diplomats with new onset migraines, for example, to create the belief in Havana Syndrome, which then takes on a life of its own.

Both hypotheses are at least plausible. Neither can be rejected based on basic science as impossible, and I would be cautious about rejecting either based on our preexisting biases or which narrative feels more satisfying. For a skeptic, the notion that this is all some kind of mass delusion is a very compelling explanation, and it may be true. If this turns out to be the case it would definitely be satisfying, and we can add Havana Syndrome to the list of historical mass delusions, and those of us who lecture on skeptical topics can all add a slide to our PowerPoint presentations detailing this incident.

But I am not ready to do that. We need to go through due diligence. It remains possible that our adversaries have developed a device that can beam directed pulsed EM or acoustic energy over a moderate distance (say, 100 meters) and that they have been using such a device to experiment on its results, or to achieve some perceived goal of disrupting our diplomatic efforts. For those with a more conspiratorial mindset, this narrative is the most compelling.

While I view this story as a skeptic, I also view it as a neurologist. While all the symptoms being presented as Havana Syndrome are non-specific, meaning they can be caused by a lot of things, that does not mean they are not real. A lot of the symptoms are explainable as migraines, but that does not mean they are not triggered exogenously. In fact, that could make the claims a bit more plausible – the pulsed beam is triggering a migraine-like phenomenon in the brains of the targeted individuals. Not everyone would respond to such triggers, not all responses would be identical, and the symptoms induced can become chronic. Migraine-like phenomena would also not necessarily leave behind any objective pathological findings. We cannot see migraines on an MRI scan of the brain or in blood work or EEGs. Migraines are defined mostly by the subjective symptoms of those who suffer from them (with some subsets having mild findings on exam, such as autonomic symptoms).

The presence of neurological findings has been investigated. A 2019 study found some differences in the brains of people with reported AHIs. This was a small study, the findings were not necessarily what one would predict, and at most this was an exploratory study that generated some hypotheses to be further investigated. Now two recent studies have tried to replicate these results with larger sample sizes and some more detailed analysis – and they found no brain differences between those with AHIs and controls. While this is a blow to the Havana Syndrome hypothesis, it does not kill it entirely. As an accompanying editorial by Dr. Relman, who was involved in investigating subjects with AHIs, points out, we would not necessarily see consistent brain imaging findings for a variety of reasons. He also criticizes the studies for not limiting their analysis to those with what he considers to be the cardinal feature of true Havana Syndrome – the location-dependent aspect of the symptoms. This could have diluted out any real findings.

There are other ways to resolve the question about the true nature of Havana Syndrome. American intelligence agencies have investigated the question as a national security matter, and they report finding no evidence of any program by a foreign power to develop or use such a device. Another approach is to study directed pulsed EM or acoustic devices to see if we can replicate the symptoms of Havana Syndrome. This has not been done to date.

And here the controversy sits. So far it seems that the objective evidence favors the “mass delusion” hypothesis. This is similar to “sick building syndrome” and other health incidents where a chance cluster of symptoms leads to widespread reporting which is followed by confirmation bias and the background noise of stress and symptoms focusing on the alleged syndrome. This explanation, at least, cannot be ruled out by current evidence.

But I don’t think we can rule out that something physical is going on that so far has eluded detection. Relman focuses much of his argument on the location-dependent symptoms reported by some individuals. That would be a strange and unique feature that favors an external phenomenon. But I don’t personally know how solid these reports are, whether they were contaminated by suggestive history taking, or whether they reflect a coincidence magnified by faulty memory and pattern-seeking behavior.

As we like to say – this question needs more study. I don’t know how open a question there is from an intelligence perspective, or if they have closed the book on it. From a neurological perspective it seems like a follow-up study, addressing the criticisms of the current studies, could lay the question to rest. But that will not resolve the underlying question, because there do not necessarily have to be documentable brain changes for a migraine-like syndrome. Finally, there is the technology question. Is a directed pulsed EM or acoustic device workable, and would it reproduce the symptoms of Havana Syndrome? That might be the most definitive piece of evidence (short of the CIA catching a foreign agent red-handed with such a device).

I do think that if Havana Syndrome is real, we should be able to demonstrate it either through reproducing the technology or uncovering evidence of a foreign program to use it. The longer we go without definitive evidence, the more likely the mass delusion hypothesis becomes. The neurological approach is most useful in the positive – if we identify clear signs of Havana Syndrome in sufferers, that will go a long way to supporting its reality. But if these studies remain negative, that does not have the potential to falsify Havana Syndrome.

The post What to Make of Havana Syndrome first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #977 - Mar 30 2024

Skeptics Guide to the Universe Feed - Sat, 03/30/2024 - 8:00am
AI Created Music; News Items: Sweetened Drinks and Atrial Fibrillation, One Degree, Birth Control Misinformation, Iridology; Who's That Noisy; Your Questions and E-mails: Mel's Mystery Hole, Positive Thinking; Science or Fiction
Categories: Skeptic

Coleman Hughes — The End of Race Politics

Skeptic.com feed - Sat, 03/30/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss418_Coleman_Hughes_2024_03_30.mp3 Download MP3

As one of the few black students in his philosophy program at Columbia University years ago, Coleman Hughes wondered why his peers seemed more pessimistic about the state of American race relations than his own grandparents, who lived through segregation. The End of Race Politics is the culmination of his years-long search for an answer.

Contemplative yet audacious, The End of Race Politics is necessary reading for anyone who questions the race orthodoxies of our time. Hughes argues for a return to the ideals that inspired the American Civil Rights movement, showing how our departure from the colorblind ideal has ushered in a new era of fear, paranoia, and resentment marked by draconian interpersonal etiquette, failed corporate diversity and inclusion efforts, and poisonous race-based policies that hurt the very people they intend to help. Hughes exposes the harmful side effects of Kendi-DiAngelo style antiracism, from programs that distribute emergency aid on the basis of race to revisionist versions of American history that hide the truth from the public.

Through careful argument, Hughes dismantles harmful beliefs about race, proving that reverse racism will not atone for past wrongs and showing why race-based policies will lead only to the illusion of racial equity. By fixating on race, we lose sight of what it really means to be anti-racist. A racially just, colorblind society is possible. Hughes gives us the intellectual tools to make it happen.

Coleman Hughes is a writer, podcaster and opinion columnist who specializes in issues related to race, public policy and applied ethics. Coleman’s writing has been featured in the New York Times, the Wall Street Journal, National Review, Quillette, The City Journal and The Spectator. He appeared on Forbes’ 30 Under 30 list in 2021.

Shermer and Hughes discuss:

  • If he is “half-black, half-Hispanic” why is he considered “black”?
  • What is race biologically and culturally?
  • Race as a social construction
  • Population genetics and race differences: sports, I.Q., crime, etc.
  • Base Rate Neglect, Base Rate Taboos
  • The real state of race relations in America: surveys, call-back studies, search data, etc.
  • George Floyd, BLM, Ibram X Kendi, Robin DiAngelo, Isabel Wilkerson, Ta-Nehisi Coates and the neo-racists
  • Institutionalized neo-racism: the academy and business
  • What it means to be “colorblind”
  • Viewpoint epistemology and race
  • Affirmative action and correcting for past wrongs
  • Lyndon Johnson’s famous quote, June 4, 1965, Howard University: “You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, “you are free to compete with all the others,” and still justly believe that you have been completely fair. Thus it is not enough just to open the gates of opportunity. All our citizens must have the ability to walk through those gates. This is the next and the more profound stage of the battle for civil rights. We seek not just freedom but opportunity. We seek not just legal equity but human ability, not just equality as a right and a theory but equality as a fact and equality as a result.”
  • Why are there still big gaps in income, wealth, home ownership, CEO representation, Congressional representation, etc.?
  • Myth of Black Weakness
  • Myth of No Progress
  • Myth of Undoing the Past
  • The Fall of Minneapolis
  • Reparations
  • The future of colorblindness.

Read Michael H. Bernstein’s review of Coleman Hughes’ book, The End of Race Politics: Arguments for a Colorblind America.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Revisiting Colorblindness

Skeptic.com feed - Sat, 03/30/2024 - 12:00am

Several years ago, I came across an imaginative essay entitled “Explaining Affirmative Action to a Martian.”1 The author, whom I had never heard of, described a fictitious interaction where a human explains the rationale of affirmative action to an alien. Among its gems is the following interaction:

Earthling: Black people were enslaved and subjugated for centuries, so, sometimes they get special dispensations. It’s only fair…

Visitor: So those black kids…were enslaved and subjugated, so they get to score 450 [standardized test] points lower than Asians?

Earthling: Well these particular black students didn’t experience slavery or Jim Crow themselves… But their grandparents might have experienced Jim Crow.

Visitor: Might have?

Earthling: Well, around half of black students at elite colleges are actually the children of black immigrants so they have no ancestral connection to American slavery or Jim Crow…

Visitor: … I’m utterly confused by you creatures.

The author of this essay was Coleman Hughes, a Columbia University undergraduate at the time. In the intervening years, Hughes has been one of the leading voices on race. As a long-time listener and fan of Hughes, I was eager to read his first book, The End of Race Politics: Arguments for a Colorblind America. It did not disappoint.

Hughes has a gift for clearly and dispassionately evaluating one of our most explosive social topics. Oftentimes in today’s world, the political left exaggerates the prevalence of racism while the political right a priori assumes that all such accusations lack merit. What we so desperately need is a middle ground: An analysis that deals honestly with the racism which does exist without inflating it. This is what Coleman Hughes does.

What makes his book excellent is, ironically, the mundane manner in which he evaluates race. Hughes carefully dissects the arguments of neoracism, an ideology he defines as “discrimination in favor of non-whites…justified on account of the hardships they endure—and hardships their ancestors endured—at the hands of whites.” Reading Hughes is a breath of fresh air. He gives neoracism the long overdue hearing it deserves, one that is fair but critical.

Hughes forces readers to think in terms of counterfactuals. This is typically missing from public discourse but is essential for evaluating double-standards and identifying what philosophers call the “special pleading” fallacy where rules are inconsistently applied. For example, the author points out that Yale University did not denounce the racism of a psychiatrist who gave a talk saying, “I had fantasies of unloading a revolver into the head of any white person that got in my way, burying their body and wiping my bloody hands as I walked away relatively guiltless.” Yet, as Hughes puts it, “Suppose [the speaker] had described fantasies about shooting black people in the head, burying them, and walking away… Is there any doubt that the Yale administration would have condemned her racism?”

In chapter five, Hughes delves into seven central tenets of neoracism, such as “Racial disparities provide direct evidence of systemic racism” and “White people have power in society, but Black people don’t.” Reading the book, and this chapter in particular, felt like following the author on a tour of three-legged stools. Each claim seems believable on its face, but Hughes raises compelling arguments against them. For instance, consider the racial disparity tenet mentioned above, which includes the claim that “there would be no racial disparities, or at least large ones, in a fair society.” It is easy to see the appeal: Blacks have been historically discriminated against and are on the short end of many troubling disparities. However, Hughes discusses factors other than racism that could explain group differences. Perhaps most critical is the age gap, such that the median White person is 10 years older than the median Black person. Wilfred Reilly points out that the age gap is even more striking (31 years) when comparing the modal (most common) age.2 Might that play a role in some disparities, such as wealth or incarceration rates?

One of the joys of reading Hughes’ book, at least for myself as a research psychologist, is the way key psychological concepts are infused throughout—even if not explicitly named (and since the author has no formal background in the field, this is not surprising). The notion of tribalism, something my colleagues and I have studied in the context of politics,3 is depicted as key to the neoracist ideology “because it casts every event as an instance of us versus them, good versus evil, black versus white.” Sadly, one of the most important lessons4 from social psychology—the subfield largely devoted to understanding how humans interact with each other—of the past 50 years is the ease with which people separate into groups and develop preference for in-group members.

Elsewhere, in critiquing what he calls “chronic victimhood,” Hughes writes, “A wise therapist wouldn’t tell you to accept chronic victim status…and think of yourself as forever trapped in your experience of trauma. The wise therapist would instead help you develop strategies for moving past the trauma you’d suffered, empowering you to escape the trauma’s gravitational pull.” Here, the author is getting at the idea of mindset, a concept developed by Stanford Psychology Professor Carol Dweck. Hughes is correctly pointing out that victimhood and its downstream difficulties should be viewed in the context of a growth mindset—something that is malleable—rather than in the context of a fixed mindset, which is not changeable. Growth mindsets suggest that people have agency to change; unsurprisingly, research generally supports the idea that it leads to better outcomes. A study by Jessica Schleider, for example, found that a single 20–30 minute computer-based session focused on enhancing a growth mindset reduced depression among adolescents when evaluated nine months later.5

There was one observation Hughes made in passing that clearly reveals his status as a gifted intellectual with a keen eye for understanding how people think and behave. He starts by critiquing the position that America has failed to “acknowledge and atone for its past [racism]” by pointing out several facts that seem to contradict this assertion, including the adoption of Juneteenth and Martin Luther King Day as federal holidays, affirmative action programs, and, most critically, Congress issuing apologies for slavery. Hughes argues that “none of this paints a picture of a general public, or a government, that is resistant to historical soul-searching.” Several paragraphs later he continues, “To this day, it remains a talking point among media pundits that America has ‘never’ issued a formal apology for slavery.” And this is where he offers his insight:

We must realize that a game is being played here. Normally when someone demands an apology, they actually want one. But sometimes they don’t. Sometimes the ability to continue demanding the apology is worth more than the apology itself. Sometimes the debt is worth more unpaid than paid… This is why every new apology, program, or holiday that they demand is forgotten as soon as it’s achieved… It’s not clear to me whether neoracists play this game consciously or whether there is self-deception involved. But either way, we are indeed playing a game, and if we don’t realize it, then everyone loses.

If you took out the word “neoracist” and told me this passage was from Eric Berne’s seminal 1964 book, Games People Play, I would have believed you. Hughes is arguing that a game of shifting goalposts is occurring. One could argue that another instance of this occurred in the aftermath of George Floyd’s death. First, there were demands for Derek Chauvin to be convicted. After he was convicted, the guilty verdict was seen as insufficient. For instance, Bernie Sanders tweeted the common sentiment “The jury’s verdict delivers accountability for Derek Chauvin, but not justice for George Floyd.”6 If no game were occurring, which is to say that this opinion was also held before the conviction, then it seems to suggest that courts are unable to administer justice for victims. This raises challenging questions about how justice would be administered (if possible) and who would decide what constitutes justice.

My only substantive critique of the book is that, while it functions as a highly effective counter to ideas presented by radical neoracists, Hughes could have bolstered his argument in favor of colorblindness by also speaking more explicitly to moderates. I think many left-of-center people are put off by the ideas of activists such as Ibram X. Kendi and Robin DiAngelo, and are disturbed by the way race is discussed in elite circles. However, I also think most would still favor mild affirmative action programs that they believe are appropriately calibrated. People who fall into this camp might agree with 90 percent of the book and even agree that colorblindness is a better approach to race than our current one. Yet they might also argue that the best solution is to reduce, but not eliminate, the consideration of race.

Overall, I found The End of Race Politics to be an excellent read from a superb up-and-coming author. Those teaching classes on race who include Kendi’s How to Be an Antiracist on their syllabus should seriously consider adding this book to the reading list for a diversity of viewpoints. Students could then engage with scholars who hold diametrically opposing positions and debate the merits of each.

I doubt that will happen anytime soon, but will be delighted if proven wrong.

A review of The End of Race Politics: Arguments for a Colorblind America by Coleman Hughes

About the Author

Michael H. Bernstein is an experimental psychologist and an Assistant Professor at Brown University. His research is focused on the overlap of cognitive science with medicine. He is Director of the Brown Medical Expectations Lab and co-editor of The Nocebo Effect: When Words Make You Sick. For more information, visit michaelhbernstein.com.

References
  1. https://bit.ly/3Vi24Xv
  2. https://bit.ly/48UG3Be
  3. https://bit.ly/498cmNi
  4. https://bit.ly/4amn181
  5. https://bit.ly/4cddOR0
  6. https://bit.ly/3TuKcGu
Categories: Critical Thinking, Skeptic

Is Music Getting Simpler

neurologicablog Feed - Fri, 03/29/2024 - 5:27am

I don’t think I know anyone personally who doesn’t have strong opinions about music – which genres they like, and how the quality of music may have changed over time. My own sense is that music as a cultural phenomenon is incredibly complex, no one (in my social group) really understands it, and our opinions are overwhelmed by subjectivity. But I am fascinated by it, and often intrigued by scientific studies that try to quantify our collective cultural experience. And I know there are true experts in this topic, musicologists and even ethnomusicologists, but I haven’t found good resources for science communication in this area (please leave any recommendations in the comments).

In any case, here are some random bits of music culture science that I find interesting. A recent study analyzing 12,000 English-language songs from the last 40 years has found that songs have been getting simpler and more repetitive over time. They are using fewer words with greater repetition. Further, the structure of the lyrics is getting simpler, and they are more readable and easier to understand. Also, the use of emotional words has increased, and lyrics have become overall more negative and more personal. I have to note that this is a single study and there are some concerns about the software used in the analysis; while this is being investigated, the authors state that it is unlikely any glitch will alter their basic findings.

But taken at face value, it’s interesting that these findings generally fit with my subjective experience. This doesn’t necessarily make me more confident in the findings, and I do worry that I am just viewing these results through my confirmation bias filter. Still, it fits not only what I have perceived in music but also what I have perceived in culture in general, especially with social media. We should be wary of simplistic explanations, but I wonder if this is mainly due to a general competition for attention. Over time there is selective pressure for media that is more immediate, more emotional, and easier to consume. The authors also speculate that it may reflect our changing habits in terms of consuming media. There is a greater tendency to listen to music, for example, in the background, while doing other things (perhaps several other things).

I’m really trying to avoid any “these kids today” moments in this piece, but I do have children and have been exposed through them (and other contexts) to their generation. It is common for them to be consuming 3–4 types of media at once. They may listen to music while having a YouTube video running in the background, while playing a video game or watching TV. I wonder if it is just comforting for people raised surrounded by so much digital media. This would tend to support the authors’ hypothesis.

Our digital world has given us access to lots of media and information. But I have to wonder if that means there is a trend over time to consume more media more superficially. When I was younger I would listen to a much narrower range of music – I would buy an album of an artist I liked and listen to the entire album dozens or even hundreds of times. Now, when I listen to music, it’s mostly radio or streaming. Even when I listen to my own playlists, there are thousands of songs from hundreds of artists.

Or there may be other factors at play. Another study, for example, looking at film found that the average shot length in movies from 1945 was 13 seconds, while today it is about 4 seconds. I like to refer to this phenomenon as “short attention span theater”. But in reality I know this is about more than attention span. Directors and editors have become more skilled at communicating to their audience through cinema, and there is an evolving cinematic language that both filmmaker and audience learn. Part of the reason for the decreased shot length is that it is possible to convey an idea, emotion, or character element much more quickly and efficiently. I also think editing has just become tighter and more efficient.

I watch a lot of movies, and again having children meant I revisited many classics with them. It is amazing how well a really good classic film can hold up over time, even decades (the word “timeless” is appropriate). Simultaneously, it is amazing how dated and crusty not-so-classic movies become over time. The difference, I think, is between artistic films and popular flicks. Watch popular movies from any past decade, for example, and you will be able to identify their time period very easily. They are the opposite of timeless – they are embedded in their culture and time in a very basic way. You will likely also note that movies from past decades may tend to drag, even becoming unwatchable at times. I am OK with slow movies (2001 is still a favorite), if they are well done and the long shots have an artistic purpose. But older movies can have needlessly long scenes, actors mugging the camera for endless seconds, pointless action and filler, and a story that is just plodding.

The point is that shorter, quicker, and punchier media may not be all about short attention-span consumers. There is also a positive aspect to this – greater efficiency and a shared language. There may also be shifting habits of consumption, with the media just adapting to changing use.

But I still can’t help the subjective feeling that with music something is being lost as well. I am keenly aware of the phenomenon known as “neural nostalgia”. What may be happening is that the media we consume between the ages of 12 and 22 gets ingrained onto a brain that is rapidly developing and laying down pathways. This then becomes the standard by which we judge anything we consume for the rest of our lives. So everyone thinks that the music of their youth was the best, and that music has only gotten worse since then. This is a bias that we have to account for.

But neural nostalgia does not mean that music has not objectively changed. It’s just difficult to tease apart real change from subjective perception, and also to avoid the bias of thinking of any change as a worsening (rather than just a difference). More emotional and personal song lyrics are not necessarily a bad thing, or a good thing – they’re just a thing. Simpler lyrics may sound annoyingly repetitive and mindless to boomers, but older lyrics may seem convoluted and difficult to understand, let alone follow, to younger generations.

I do think music can be an interesting window onto culture. It reflects the evolving lives of each generation and how cultural norms and technology are affecting every aspect of their experience.

 

The post Is Music Getting Simpler first appeared on NeuroLogica Blog.

Categories: Skeptic

The Experience Machine Thought Experiment

neurologicablog Feed - Tue, 03/26/2024 - 5:05am

In 1974 Robert Nozick published the book, Anarchy, State, and Utopia, in which he posed the following thought experiment: If you could be plugged into an “experience machine” (what we would likely call today a virtual reality or “Matrix”) that could perfectly replicate real-life experiences, but was 100% fake, would you do it? The question was whether you would do this irreversibly for the rest of your life. What if, in this virtual reality, you could live an amazing life – perfect health and fitness, wealth and resources, and unlimited opportunity for adventure and fun?

Nozick hypothesized that people generally would not elect to do this (as summarized in a recent BBC article). He gave three reasons: we want to actually do certain things, not just have the experience of doing them; we want to be a certain kind of person, and that can only happen in reality; and we want meaning and purpose in our lives, which is only possible in reality.

A lot has happened in the last 50 years and it is interesting to revisit Nozick’s thought experiment. I would say I basically disagree with Nozick, but there is a lot of nuance that needs to be explored. For me there are two critical variables, only one of which I believe was explicitly addressed by Nozick. In his thought experiment, once you go into the experience machine you have no memory of doing so, and therefore you would believe the virtual reality to be real. I would not want to do this. So in that sense I agree with him – but he did not give this as a major reason people would reject the choice. I would be much more likely to go into a virtual reality if I retained knowledge of the real world and of the fact that I was in a virtual world.

Second – are there other people in this virtual reality with me, or is every other entity an AI? To me the worst case scenario is that I know I am in a virtual reality and that I am alone with nothing but AIs. That is truly a lonely and pointless existence, and no matter how fun and compelling it would be, I think I would find that realization hard to live with indefinitely. But if I didn’t know that I was living in a virtual reality, then it wouldn’t matter that I was alone, at least not to the self in the virtual reality. But would I condemn myself to such an existence, even knowing I would be blissfully unaware? Then there is what I would consider to be the best case scenario – I know I am living in a virtual reality and there are other actual people in here with me. There is actually another variable – does anything that happens in the virtual reality have the potential to affect the real world? If I write a book, could that book be published in the real world?

Nozick’s thought experiment, I think, was pure in that you would not know you are in a virtual reality, there is no one else in there with you, and you are forever cut off from the real world. In that case I think the ultimate pointlessness of such an existence would be too much. I would likely only consider opting for this at the end of my life, especially if I were ill or disabled to a significant degree. This would be a great option in many cases. But let’s consider other permutations, with 50 years of additional experience.

I also think that at the other end of the spectrum – where people know they are in virtual reality, there are real people together in this virtual world, and it is connected to the real world – most people would find living large parts of their life in virtual reality acceptable and enjoyable. This is the “Ready Player One” scenario. We know from experience that people already do some version of this, spending lots of time playing immersive video games or engaging in virtual communities on social media. People find meaning in their virtual lives.

What about the AI variable? I think we have to distinguish general AI from narrow AI. Are the AIs sentient? If so, then I think it doesn’t matter that they are AI. If they are just narrow AI algorithms, the knowledge of that would be bothersome. But could people be fooled by narrow AI? I think the answer there is unequivocally yes. People have a tendency to anthropomorphize, and we generally accept and respond to the illusion of human interaction. People are already falling in love with narrow AIs and virtual characters that don’t actually exist.

What about the “Matrix” scenario? This is something else to consider – is all of humanity in the virtual reality? In Nozick’s thought experiment, the Matrix was run by benign and well-meaning overlords who just want us to have an enjoyable existence, without malevolent intent. I do think it would matter whether only a subset of humanity were in the Matrix, with other people still advancing technology, art, science, and philosophy and running civilization. It is quite another thing for humanity in its entirety to check out of reality and just exist in a Matrix. Civilization would essentially be over. Some futurists speculate that this may be the ultimate fate of many civilizations, turning inward and creating a virtual civilization. The advantages may just be too massive to ignore, and some civilizations may decide that they have achieved the ultimate end already and go down the path of becoming a virtual civilization.

In the end I think Nozick’s solution to his own thought experiment was too simplistic and one-sided. I do agree with him that people need a sense of purpose and meaning. But on the other hand, I think we know a lot more now about how compelling and appealing virtual reality can be, that people will respond emotionally to a sufficiently convincing illusion, and that people will find fulfillment even in a virtual reality.

What I think this means for the future of humanity, at least in the short run, is something close to the Ready Player One scenario. We will build increasingly sophisticated and compelling virtual realities, and as a result people will spend more and more time there. But this virtual reality will be seamlessly integrated into physical reality. Yes, some people will use it as an escape, but it will also be just another aspect of actual reality.

The post The Experience Machine Thought Experiment first appeared on NeuroLogica Blog.

Categories: Skeptic
