Skeptoid #928: EMDR: Looking Past the Pain

Skeptoid Feed - Tue, 03/19/2024 - 2:00am

This controversial treatment for PTSD involves moving the eyes side to side.

Categories: Critical Thinking, Skeptic

Dan Stone — An Unfinished History of the Holocaust

Skeptic.com feed - Tue, 03/19/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss415_Dan_Stone_2024_03_19.mp3 Download MP3

The Holocaust is much discussed, much memorialized, and much portrayed. But there are major aspects of its history that have been overlooked.

Spanning the entirety of the Holocaust, this sweeping history deepens our understanding. Dan Stone—Director of the Holocaust Research Institute at Royal Holloway, University of London—reveals how the idea of “industrial murder” is incomplete: many were killed where they lived in the most brutal of ways. He outlines the depth of collaboration across Europe, arguing persuasively that we need to stop thinking of the Holocaust as an exclusively German project. He also considers the nature of trauma the Holocaust engendered, and why Jewish suffering has yet to be fully reckoned with. And he makes clear that the kernel to understanding Nazi thinking and action is genocidal ideology, providing a deep analysis of its origins.

Drawing on decades of research, The Holocaust: An Unfinished History upends much of what we think we know about the Holocaust. Stone draws on Nazi documents, but also on diaries, post-war testimonies, and even fiction, urging that, in our age of increasing nationalism and xenophobia, it is vital that we understand the true history of the Holocaust.

Dan Stone is Professor of Modern History and Director of the Holocaust Research Institute at Royal Holloway, University of London. He is the author or editor of numerous articles and books, including: Histories of the Holocaust (Oxford University Press); The Liberation of the Camps: The End of the Holocaust and its Aftermath (Yale University Press); and Concentration Camps: A Very Short Introduction (Oxford University Press). His new book is The Holocaust: An Unfinished History.

Shermer and Stone discuss:

  • Why this book now? What is unfinished in the history of the Shoah?
  • Holocaust denial: according to a poll by The Economist, 20% of Americans under 30 believe the Holocaust is a myth, and another 20% believe it is exaggerated
  • Just as “Nazism was the most extreme manifestation of sentiments that were quite common, and for which Hitler acted as a kind of rainmaker or shaman”, suggests Stone, the defeat of his regime has left us with “a dark legacy, a deep psychology of fascist fascination and genocidal fantasy that people turn to instinctively in moments of crisis – we see it most clearly in the alt-right and the online world, spreading into the mainstream, of conspiracy theory”
  • What was the Holocaust and why did it happen: intentionalism vs. functionalism
  • Ideological roots of Nazism and German anti-Semitism
  • “ideology, understood as a kind of phantasmagorical conspiracy theory, as the kernel of Nazi thinking and action”
  • From ideas to genocide: magical thinking
  • Blood and soil
  • Hitler’s willing executioners
  • The Holocaust as a continent-wide crime
  • Motivations of the executioners
  • Polish law prohibiting accusations that Poles were complicit in the Holocaust
  • Industrial genocide vs. low-tech mass murder
  • The banality of evil
  • Nearly half of the Holocaust’s six million victims died of starvation in the ghettos or in “face-to-face” shootings in the east.
  • Jews were constrained by a profusion of demeaning legislation. They were forbidden to keep typewriters, musical instruments, bicycles and even pets. The sheer variety of persecution was bewildering. It was also chillingly deceptive, persuading some law-abiding Jews that survival was a matter of falling into line. Stone quotes the wrenching letter of a woman reassuring her loved one that getting transported to Theresienstadt, in German-occupied Czechoslovakia, might be better than living in Germany. “My future place of residence represents a sort of ghetto,” she explained. “It has the advantage that, if one obeys all the rules, one lives in some ways without the restrictions one has here.”
  • Wannsee Conference of Jan. 20, 1942
  • In March 1942, “75 to 80 percent of the Holocaust’s victims were still alive.” Eleven months later, “80 percent of the Holocaust’s victims were dead.”

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Energy Demand Increasing

neurologicablog Feed - Mon, 03/18/2024 - 5:14am

For the last two decades electricity demand in the US has been fairly flat. While it has been increasing overall, the increase has been very low. This has been largely attributed to the fact that as the use of electrical devices has increased, the efficiency of those devices has also increased. The introduction of LED bulbs, increased building insulation, and more energy-efficient appliances has largely offset increased demand. However, the most recent reports show that US electricity demand is turning up, and there is real fear that this recent spike is not a short-term anomaly but the beginning of a long-term trend. For example, the projected increase in energy demand by 2028 has nearly doubled from the 2022 estimate to the 2023 estimate – "from 2.6% to 4.7% growth over the next five years."
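
To make those two forecasts concrete, here is a minimal back-of-envelope sketch in Python (the 4,000 TWh baseline for current US annual electricity demand is an assumed round number, not a figure from the report):

    # Minimal sketch: compare the 2022 and 2023 five-year demand-growth forecasts
    # cited above. The 4,000 TWh baseline is an assumed round number.
    baseline_twh = 4_000                                           # assumed current US annual demand, TWh
    forecasts = {"2022 estimate": 0.026, "2023 estimate": 0.047}   # growth over five years

    for label, growth in forecasts.items():
        demand_2028 = baseline_twh * (1 + growth)
        annualized = (1 + growth) ** (1 / 5) - 1
        print(f"{label}: ~{demand_2028:,.0f} TWh in 2028 (~{annualized * 100:.2f}% per year)")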

First, I have to state my usual skeptical caveat – these are projections, and we have to be wary of projecting short term trends indefinitely into the future. The numbers look like a blip on the graph, and it seems weird to take that blip and extrapolate it out. But these forecasts are not just based on looking at such graphs and then extending the line of current trends. These are based on an industry analysis which includes projects that are already under way. So there is some meat behind these forecasts.

What are the factors that seem to be driving this current and projected increase in electricity demand? They are all the obvious ones you might think. First, something which I and other technology-watchers predicted is the increase in the use of electric vehicles. In the US there are more than 2.4 million registered electric vehicles. While this is only about 1% of the US fleet, EVs represent about 9% of new car sales, and growing. If we are successful in somewhat rapidly (it will still take 20-30 years) changing our fleet of cars from gasoline engines to electric or hybrid, that represents a lot of demand on the electricity grid. Some have argued that EV charging is mostly at night (off peak), so this will not necessarily require increased electricity production capacity, but that is only partly true. Many people will still need to charge up on the road, or will charge up at work during the day, for example. It’s hard to avoid the fact that EVs represent a potential massive increase in electricity demand. We need to factor this in when planning future electricity production.
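
As a rough illustration of why EVs loom so large in these projections (the per-vehicle consumption and total fleet size below are assumptions for the sake of arithmetic, not figures from the article):

    # Back-of-envelope sketch of EV electricity demand. The ~3,500 kWh/year per
    # vehicle figure is an assumed average, and the fleet sizes are approximate.
    kwh_per_ev_per_year = 3_500
    current_evs = 2_400_000
    full_fleet = 280_000_000          # rough size of the total US light-vehicle fleet

    for label, count in [("today's ~2.4M EVs", current_evs), ("a fully electric fleet", full_fleet)]:
        twh = count * kwh_per_ev_per_year / 1e9
        print(f"{label}: ~{twh:,.0f} TWh per year")
    # Roughly 8 TWh today versus roughly 1,000 TWh for a full fleet, on the order
    # of a quarter of current US electricity demand.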

Another factor is data centers. The world’s demand for computer cycles is increasing, and there are already plans for many new data centers, which are a lot faster to build than the plants to power them. Recent advances in AI only increase this demand. Again we may mitigate this somewhat by prioritizing computer advances that make computers more energy efficient, but this will only be a partial offset. We do also have to think about applications, and whether they are worth it. The one that gets the most attention is crypto – by one estimate Bitcoin mining alone used 121 terawatt-hours of electricity in 2023, the same as the Netherlands (with a population of 17 million people).
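
A quick sanity check of that comparison, using only the numbers quoted above, shows how large the per-person figure is:

    # Quick check of the Bitcoin/Netherlands comparison quoted above.
    bitcoin_twh = 121                      # estimated Bitcoin mining electricity use, 2023
    netherlands_population = 17_000_000    # approximate
    kwh_per_person = bitcoin_twh * 1e9 / netherlands_population
    print(f"~{kwh_per_person:,.0f} kWh per Dutch resident per year")   # roughly 7,000 kWh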

Other factors increasing US energy demand include recent investments in industry through the Inflation Reduction Act, the infrastructure bill, and the CHIPS and Science Act. Part of the goal of these bills was to bring manufacturing back to the US, and to the extent that they are working this comes with an increased demand for electricity. And fourth, there is a factor that was predicted and that we are now starting to feel: as the Earth warms, the demand for air conditioning increases.

All of these factors are likely to increase going forward. Also, in general there is a move to electrify as many processes as possible, as an approach to decarbonize our civilization – moving from gas stoves and heating to electric, for example. Even in industry, reducing the carbon footprint of steel making involves using a lot more electricity.

What all this means is that as we plan to decarbonize over the next 25 years, we need to expect that electricity demand will dramatically increase. This is true even in a country like the US, and even if our population remains stable over this time. Worldwide the situation is even worse, as many populations are trying to industrialize and world population is projected to grow (probably peaking at around 10 billion). The problem is that the rate at which we are building renewable low carbon energy is just treading water – we are essentially building enough to meet the increase in demand, but not enough to replace existing demand. This means that fossil fuel use worldwide is not dropping, in fact it is still increasing. These new energy demand projections may mean that we fall further behind.

Most concerning about these recent reports is that we are currently unable to meet this new projected increase in demand with renewables. Keep in mind, this is still far better than relying entirely on fossil fuel. Wind, solar, hydroelectric, geothermal, and nuclear capacity all replace fossil fuel capacity, and are helping to mitigate CO2 release and climate change. But it has not been enough so far to actually reduce fossil fuel demand, and it’s going to get more challenging. The problem we are facing is bottlenecks in building new infrastructure. The primary limiting factor is the grid. It takes too long to build new grid projects. They are slowed by the patchwork of regulations and bickering among states over who is paying for what. New renewable energy projects are therefore delayed by years.

What needs to happen to fix the situation? First, we need more massive investment in electric grid infrastructure. There is some of this in the bills I mentioned, but not enough. We need perhaps a standalone bill investing billions in new grid projects. But also, this legislation should probably include new Federal authority to approve and enact such projects, to reduce local bottlenecks. We need Federal legislation to essentially enact eminent domain to rush through new grid projects. The report estimates that we will need to triple our existing grid capacity by 2035 to meet growing demand.

This analysis also reinforces the belief by many that wind and solar, while great sources of energy, are not going to get us to our goals. The problem is simply that they require a lot of new grid infrastructure and new connections to the grid. We will simply not be able to build them out in time. Residential solar is probably the best option, because it can use existing connections to the grid and is distributed to where it is used. This is especially true if you plan to switch to an electric vehicle – pair that with some solar panels. But still, this is not going to get us to our goals.

What we need is the big centralized power plants that can replace coal, oil, and natural gas plants – and this means nuclear, geothermal, and hydroelectric. The latter two are limited geographically, as there is limited potential to expand them, at least for now. Perhaps we may top out at 15% or so (that is, of existing demand). This leaves nuclear. I know I have beat this drum for a while, but the most compelling and logical analyses I read all indicate that we will not get to our decarbonization goals without nuclear. Nuclear can generate the amount of electricity we need, can be plugged into existing connections to the grid, and can go anywhere. The main limitation with nuclear is that the regulations make building new plants really slow – but this is fixable with the stroke of a pen. We need to streamline the regulation process for all zero carbon power plants – a project warp speed for energy. The bottom line really comes down to this – do you want a coal-fired plant or a nuclear plant? That is the real practical choice we face. To some extent the choice is also between nuclear and natural gas, which is a lot better than coal but is still a fossil fuel, with the pollution and CO2 that come with it.

As the report indicates, many states are keeping coal-fired plants open longer to meet the increased demand. Or they are building natural gas fired plants, because the technology is proven, they are the fastest to build, and they are the most profitable. This has to change. It needs to be feasible to build nuclear plants instead. Some of this is happening, but not nearly enough.

We are dealing with hard numbers here, and the numbers are telling a very consistent and compelling story.

The post Energy Demand Increasing first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #975 - Mar 16 2024

Skeptics Guide to the Universe Feed - Sat, 03/16/2024 - 9:00am
Tax Scams; News Items: Pentagon UFO Report, Microplastic Risks, Parasite Cleanse, Gut Microbe Communication, Interstellar Meteorite; Who's That Noisy; Your Questions and E-mails: Thou, Mach Effect Drive; Science or Fiction
Categories: Skeptic

Eric Schwitzgebel — The Weirdness of the World

Skeptic.com feed - Sat, 03/16/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss414_Eric_Schwitzgebel_2024_03_16.mp3 Download MP3

Do we live inside a simulated reality or a pocket universe embedded in a larger structure about which we know virtually nothing? Is consciousness a purely physical matter, or might it require something extra, something nonphysical? According to the philosopher Eric Schwitzgebel, it’s hard to say. In The Weirdness of the World, Schwitzgebel argues that the answers to these fundamental questions lie beyond our powers of comprehension. We can be certain only that the truth—whatever it is—is weird. Philosophy, he proposes, can aim to open—to reveal possibilities we had not previously appreciated—or to close, to narrow down to the one correct theory of the phenomenon in question. Schwitzgebel argues for a philosophy that opens.

According to Schwitzgebel’s “Universal Bizarreness” thesis, every possible theory of the relation of mind and cosmos defies common sense. According to his complementary “Universal Dubiety” thesis, no general theory of the relationship between mind and cosmos compels rational belief. Might the United States be a conscious organism—a conscious group mind with approximately the intelligence of a rabbit? Might virtually every action we perform cause virtually every possible type of future event, echoing down through the infinite future of an infinite universe? What, if anything, is it like to be a garden snail? Schwitzgebel makes a persuasive case for the thrill of considering the most bizarre philosophical possibilities.

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside. He is the author of A Theory of Jerks and Other Philosophical Misadventures; Perplexities of Consciousness; and Describing Inner Experience?

Schwitzgebel has studied the behavior of philosophers, particularly ethicists, using empirical methods. The articles he has published investigate whether ethicists behave more ethically than other populations. In a 2009 study, Schwitzgebel investigated the rate at which ethics books were missing from academic libraries compared to similar philosophy books. The study found that ethics books were in fact missing at higher rates than comparable texts in other disciplines. Subsequent research has measured the behavior of ethicists at conferences, the perceptions of other philosophers about ethicists, and the self-reported behavior of ethicists. Schwitzgebel’s research did not find that the ethical behavior of ethicists differed from the behavior of professors in other disciplines. In addition, his research found that the moral beliefs of professional philosophers were just as susceptible to being influenced by irrelevant factors as those of non-philosophers. Schwitzgebel has concluded that, “Professional ethicists appear to behave no differently than do non-ethicists of similar social background.”

Shermer and Schwitzgebel discuss:

  • bizarreness
  • skepticism
  • consciousness and sentience
  • AI, Turing Test, sentience, existential threat
  • idealism, materialism and the ultimate nature of reality
  • solipsism and experimental evidence for the existence of an external world
  • Are we living in a computer simulation?
  • mind-body problem
  • truths: external, internal, objective, subjective, and mind-altering drugs
  • anthropic principles and fine-tuning of the universe
  • theism, atheism, agnosticism, deism, pantheism, panpsychism
  • free will, determinism, compatibilism
  • Is the universe predetermined?
  • entropy, the arrow of time, and causality
  • infinity
  • souls and immortality, mind uploading
  • multiverse, parallel universes, and many worlds hypothesis
  • why there is something rather than nothing.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

What Is a Grand Conspiracy?

neurologicablog Feed - Fri, 03/15/2024 - 5:09am

Ah, the categorization question again. This is an endless, but much needed, endeavor within human intellectual activity. We have the need to categorize things, if for no other reason than we need to communicate with each other about them. Often skeptics, like myself, talk about conspiracy theories or grand conspiracies. We also often define exactly what we mean by such terms, although not always exhaustively or definitively. It is too cumbersome to do so every single time we refer to such conspiracy theories. To some extent there is a cumulative aspect to discussions about such topics, either here or, for example, on my podcast. To some extent I expect regular readers or listeners to remember what has come before.

For blog posts I also tend to rely on links to previous articles for background, and I have little patience for those who cannot bother to click these links to answer their questions or before making accusations about not having properly defined a term, for example. I don’t expect people to have memorized my entire catalogue, but click the links that are obviously there to provide further background and explanation. Along those lines, I suspect I will be linking to this very article in all my future articles about conspiracy theories.

What is a grand conspiracy theory? First, a bit more background about categorization itself. There are two concepts I find most useful when thinking about categories – operational definition and defining characteristics. An operational definition is essentially a list of inclusion and exclusion criteria, a formula that, if you follow it, will determine whether something fits within the category or not. It’s not a vague description or general concept – it is a specific list of criteria that can be followed "operationally". This comes up a lot in medicine when defining a disease. For example, the operational definition of "essential hypertension" is persistent (three readings or more) systolic blood pressure over 130 or diastolic blood pressure over 80.
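
To illustrate what "followed operationally" means, here is a minimal sketch of that hypertension definition written as an explicit, checkable rule (illustrative only, not clinical guidance):

    # Minimal sketch: the operational definition of essential hypertension described
    # above, expressed as an explicit rule. Illustrative only, not clinical guidance.
    def meets_essential_hypertension_criteria(readings):
        """readings: list of (systolic, diastolic) blood pressure pairs."""
        elevated = [(s, d) for s, d in readings if s > 130 or d > 80]
        # "Persistent" is operationalized here as three or more elevated readings.
        return len(elevated) >= 3

    print(meets_essential_hypertension_criteria([(142, 88), (135, 79), (138, 84)]))   # True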

Operational definitions often rely upon so-called “defining characteristics” – those features that we feel are essential to the category. For example, how do we define “planet”? Well, astronomers had to agree on what the defining characteristics of “planet” should be, and it was not entirely obvious. The one that created the most controversy was the need to gravitationally clear out one’s orbit – the defining characteristic that excluded Pluto from the list of planets.

There is therefore some subjectivity in categories, because we have to choose the defining characteristics. Also, such characteristics may have fuzzy or non-obvious boundaries. This leads to what philosophers call the “demarcation problem” – there may be a fuzzy border between categories. But, and this is critical, this does not mean the categories themselves don’t exist or are not meaningful.

With all that in mind, how do we operationally define a "grand conspiracy", and what are its defining characteristics? A grand conspiracy has a particular structure, but I think the key defining characteristic is the conspirators themselves. The conspirators are a secret group that has way more power than it should have or than any group realistically could have. Further, they are operating for their own nefarious goals and are deceiving the public about their existence and their true goals. This shadowy group may operate within a government, or represent a shadow government themselves, or even a secret world government. They can control the media and other institutions as necessary to control the public narrative. They are often portrayed as diabolically clever, able to orchestrate elaborate deceptions and false flag operations, down to tiny details.

But of course there would be no conspiracy theory if such a group were entirely successful. So there must also be an "army of light" that has somehow penetrated the veil of the conspirators; they see the conspiracy for what it is and try to expose it. Then there is everyone else, the "sheeple", who are naive and deceived by the conspiracy.

That is the structure of a grand conspiracy. Functionally, psychologically, the grand conspiracy theory operates to insulate the belief of the "conspiracy theorist". Any evidence that contradicts the conspiracy theory is dismissed as a "false flag" operation, meant to cast doubt on the conspiracy. The utter lack of direct evidence for the conspiracy is due to the extensive ability of the conspirators to cover up any and all such evidence. So how, then, do the conspiracy theorists even know that the conspiracy exists? They rely on pattern recognition, anomaly hunting, and hyperactive agency detection – not consciously or explicitly, but that is what they do. They look for apparent alignments, or for anything unusual. Then they assume a hidden hand operating behind the scenes, and give it all a sinister interpretation.

Here is a good recent example – Joe Rogan recently "blew" his audience’s mind by claiming that the day before 9/11, Donald Rumsfeld said in a press conference that the Pentagon had lost 2.3 trillion dollars. Then, the next day, a plane crashes into the part of the Pentagon that was carrying out the very audit of those missing trillions. Boom – a grand conspiracy is born (of course fitting into another existing conspiracy that 9/11 was an inside job). The coincidence was the press conference the day before 9/11, which is not much of a coincidence because you can go anomaly hunting by looking at any government activity in the days before 9/11 for anything that can be interpreted in a sinister way.

In this case, Rumsfeld did not say the Pentagon lost $2.3 trillion. He was criticizing the outdated technology in use by the DOD, saying it is not up to the modern standards used by private corporations. An analysis – released to the public one year earlier – concluded that because of the outdated accounting systems, as much as 2.3 trillion dollars in the Pentagon budget cannot be accurately tracked and documented. But of course, Rogan is just laying out a sinister-looking coincidence, not telling a coherent story. What is he actually saying? Was Rumsfeld speaking out of school? Was 9/11 orchestrated in a single day to cover up Rumsfeld’s accidental disclosure? Is Rumsfeld a rebel who was trying to expose the coverup? Would crashing into the Pentagon sufficiently destroy any records of DOD expenditures to hide the fact that $2.3 trillion was stolen? Where is the press on this story? How can anyone make $2.3 trillion disappear? How did the DOD operate with so much money missing from their budget?

Such questions should act as a "reality filter" that quickly marks the story as implausible and even silly. But the grand conspiracy reacts to such narrative problems by simply expanding the scope, depth, and power of the conspiracy. So now we have to hypothesize the existence of a group within the government, complicit with many people in the government, that can steal $2.3 trillion from the federal budget, keep it from the public and the media, and orchestrate and carry out elaborate distractions like 9/11 when necessary.

This is why, logically speaking, grand conspiracy theories collapse under their own weight. They must, by necessity, grow in order to remain viable, until you have a vast multi-generational conspiracy spanning multiple institutions with secret power over many aspects of the world. And they can keep it all secret by exerting unbelievable control over the thousands and thousands of individuals who would need to be involved. They can bribe, threaten, and kill anyone who would expose them. Except, of course, for the conspiracy theorists themselves, who apparently can work tirelessly to expose them without fear.

This apparent contradiction has even led to a meta conspiracy theory that all conspiracy theories are in fact false flag operations, meant to discredit conspiracy theories and theorists so that the real conspiracies can operate in the shadows.

Being a "grand" conspiracy is not just about size. As I have laid out, it is about how such conspiracies allegedly operate, and the intellectual approach of the conspiracy theorists who believe in them. This can fairly easily be distinguished from actual conspiracies, in which more than one person or entity agrees to carry out some secret illegal activity. Actual conspiracies can even become fairly extensive, but the bigger they get the greater the risk that they will be exposed, which they are all the time. Of course, we can’t know about the conspiracies that were never exposed, by definition, but certainly there are a vast number of conspiracies that do ultimately get exposed. That makes it hard to believe that a conspiracy orders of magnitude larger can operate for decades without similarly being exposed.

Ultimately the grand conspiracy theory is about the cognitive style and behavior of the conspiracy theorists – the subject of a growing body of psychological research.

The post What Is a Grand Conspiracy? first appeared on NeuroLogica Blog.

Categories: Skeptic

Psychotherapy Redeemed: A Response to Harriet Hall’s “Psychotherapy Reconsidered”

Skeptic.com feed - Fri, 03/15/2024 - 12:00am

While not going so far as arguing, as some have, that psychotherapy is always effective, I’d like to present some data and offer some contrasting considerations to Harriet Hall’s article: “Psychotherapy Reconsidered” (in Skeptic 28.1). Probably no other area within social science practice has been so inordinately and unfortunately praised and damned. Many of us working in the field have long been acutely aware of the difficulties to which Hall and others point, as well as other problems. However, we also regularly observe the positive changes in clients’ lives that psychotherapy—properly practiced—has produced, and in many cases, the lives it has saved.

In her article, the late Harriet Hall, whose work I and all skeptics admire and now miss, stated that no-one can provide an objective report about the field, indeed, that there “…aren’t even any basic numbers,” that we don’t know whether psychotherapy works, that it is not based on solid science, and that there is “…no rational basis for choosing a therapy or therapist.”

Hall and other sources she quotes are quite correct in saying that there is much we still don’t know about human psychology, and much that we don’t understand about how the mind and psychotherapy work. Yet it’s also necessary to look at the data and analyses which demonstrate that psychotherapy does work. The case for the defense is made in detail in The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work by Bruce Wampold and Zac Imel, and also in Psychotherapy Relationships That Work, edited by John Norcross and Michael Lambert, both of which present decades of meta-analyses. They review conclusions from an impressive number of psychotherapy studies and show how humans heal in a social context, as well as offer a compelling alternative to the conventional approach to psychotherapy research, which typically concentrates on identifying the most effective treatment for specific disorders by placing an emphasis on the particular components of treatment.

This is a misguided point in Hall’s argument, as she was looking at the differences between treatments rather than between therapists. Studies that previously claimed the superiority of one method over another ignored who the treatment provider was.1 We know that these wrong research questions arise from using the medical model, where it is imperative to know which treatment is the most effective for a particular disorder. In psychotherapy, and to some extent in medicine generally, the person administering the treatment is absolutely critical. Indeed, in psychotherapy the most important factor is the skill, confidence, and interpersonal flexibility of the therapist delivering the treatment, not the model, method, or "school" they use, their number of years in practice, or even the amount of professional development they’ve had. How we train and supervise therapists has little impact on the outcomes of psychotherapy, unless each therapist routinely collects outcome data in every session and adjusts their approach to accommodate each client’s feedback.

The Bad News About Psychotherapy

Hall is right on the point that psychotherapy outcomes have not improved much over the last 50 years. Hans Eysenck’s classic study debunking psychotherapy was published in 1952.2 His view was not challenged until 1977, when a meta-analysis showed that psychotherapy was effective, and that Eysenck was wrong.3 It found that the effect size (ES) for psychotherapy was 0.8 standard deviations above the mean of the untreated sample. Recent meta-analyses show that this ES has remained the same over the intervening 50 years, despite the proliferation of diagnoses and treatment models.4

Hall was also accurate in saying that much conflicting data exists from studies about the efficacy of the hundreds of types of psychotherapy. Yet she was incorrect in saying that we don’t even have basic numbers. We now have decades of meta-analyses showing what works and what doesn’t work in psychotherapy.5, 6, 7, 8, 9, 10

Hall was also mostly on-target when she stated, "…proponents of each modality of psychotherapy give us their…impressions about the success of their chosen method." Decades of clinical trials comparing treatment A to treatment B point to the conclusion that all bona fide psychotherapy models work equally well. This is consistently replicated in trials comparing therapists who use two different yet coherent, convincing, and structured treatments, as long as these treatments provide an explanation for what’s bothering the client and lay out a treatment plan for the client to work hard at overcoming their difficulties. Psychotherapy research clearly shows that all models contribute 0–1 percent towards the outcomes of psychotherapy.11 This means that proponents of Cognitive Behavioral Therapy—or any model—who claim its superiority to other treatments are not basing their claims on the available evidence.

Another correct statement of Hall’s is that most therapists have no evidence to show that what they’re doing is effective. This lack of evidence led others to conclude that, “Beyond personal impressions and informal feedback, the majority of therapists have no hard, verifiable evidence that anything they do makes a difference…Absent a valid and reliable assessment of their performance, it also stands to reason they cannot possibly know what kind of instruction or guidance would help them improve.”12

For decades, free pen-and-paper measures by which therapists can track their outcomes have been available,13 recently superseded by online versions.14 These Feedback Informed Treatment (FIT) online platforms are easy to use and have been utilized by thousands of therapists around the world to get routine feedback from every client on each session. The result: Data from hundreds of thousands of clients is continually being updated. Regrettably, those of us who use these methods are still a small minority of therapists practicing around the world compared to the unknown numbers who, as Hall rightly pointed out, provide psychotherapy in its manifold (and perhaps unregulated) forms.

The online outcome measurement platforms mentioned above are recommended by the International Center for Clinical Excellence (ICCE).15 For decades, the ICCE has been aggregating data from therapists around the world and so providing evidence that corroborates some of Hall’s critical claims about psychotherapy. Current data show that dropout rates, defined as clients unilaterally stopping treatment without experiencing reliable clinical improvement, are between 20–22 percent among adult populations (even when therapists use FIT).16 Dropout rates are typically higher (40–60 percent) for child and adolescent populations. This raises the unfortunate possibility that dropout rates for therapists who don’t get routine feedback from clients are probably higher still.

Hall was, however, incorrect in stating that we don’t know about the harms of psychotherapy. There are many examples of discussions and analyses of what doesn’t work in psychotherapy and what can cause harm.17 One study of aggregated data shows that the percentage of people who are reliably worse while in treatment is 5–10 percent.18

Regrettably, the data indicate that the average clinician’s outcomes plateau relatively early in their career, despite their thinking they are improving. One review found no evidence that therapists improve beyond their first 50 hours of training in terms of their effectiveness, and a number of studies have found that paraprofessionals with perhaps six weeks of training achieve outcomes on par with psychologists holding a PhD, which is equal to five years of training.19 These data support Hall’s statement that unless they are measuring their outcomes, no therapist knows whether their method is more (or less) effective than the methods used by others. Even then, there is a tendency to attribute any success to the method rather than to the therapist. Studies also show that students often achieve outcomes that are on par with or better than their instructors’. These facts are amply demonstrated in Witkowski’s discussion with Vikram H. Patel,20 whose mental health care manual Where There Is No Psychiatrist is used primarily in developing countries by non-specialist health workers and volunteers.21

Further, there is now evidence that psychotherapists who have been in practice for a few years see themselves as improving even though the data show no such improvement.22 Psychotherapists are not immune either to cognitive biases or to the Dunning-Kruger effect, and a majority rate themselves as being above average. In other words, psychotherapists generally overestimate their abilities. Finally, meta-analyses show that there is a large variation in effectiveness between clinicians, with a small minority of top performing therapists routinely getting superior outcomes with a wide range of clients. Unfortunately, these “supershrinks” are a rare breed.23

To balance the bad news above, following is some of the data which shows that psychotherapy works.

The Good News About Psychotherapy

Psychotherapy works. It does help people. Since Eysenck’s time and in response to the numerous sources cited by Hall, many studies have demonstrated that the average treated client is better off than eighty percent of the untreated sample.24 That doesn’t mean that psychotherapy is eighty percent effective, but it does mean that if you take the average treated person and you compare them to those in an untreated sample, that average treated person is doing better than eighty percent of people in the untreated sample. This effect size means that psychotherapy outcomes are equivalent to those for coronary artery bypass surgery and four times greater than those for the use of fluoride in preventing tooth decay. As discussed earlier, this has remained constant for 50 years, regardless of the problem being tested or the method being employed.
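
That 80 percent figure follows directly from the 0.8 effect size mentioned earlier, assuming roughly normal outcome distributions; here is a minimal sketch of the calculation:

    # Minimal sketch: an effect size of 0.8 implies the average treated client
    # scores above roughly 80% of an untreated comparison group, assuming
    # approximately normal outcome distributions.
    from statistics import NormalDist

    effect_size = 0.8
    percentile = NormalDist().cdf(effect_size)
    print(f"Average treated client exceeds ~{percentile * 100:.0f}% of the untreated group")   # ~79%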

Just as in surgery, the tools that psychotherapists use are only as effective as the hands that use them. How effective are psychotherapists? Real world studies have looked at this question, asking clinicians to measure their outcomes on a routine basis with each client in every session. They’ve compared these outcomes against those in randomized clinical trials (RCTs). It must be noted that in RCTs researchers have many advantages that real world practitioners do not. These include: (a) a highly select clientele, in that many published studies have a single unitary diagnosis while clinicians routinely deal with clients with two or more comorbidities; (b) a lower caseload; and (c) ongoing supervision and consultation with some of the world’s leading experts on psychotherapy. Despite all this, the data document that psychotherapy outcomes are equivalent to those of RCTs.25

Therapists around the world, including me, have been using Feedback Informed Treatment (FIT) for decades. I have been seeing clients since 1981 and my clinical outcomes started to improve when I started incorporating FIT into my practice nearly 20 years ago. Those of us who use FIT routinely get quantitative feedback from every client at the beginning of every session. We ask about the client’s view of the outcomes of therapy in four areas of their life: (1) their individual wellbeing; (2) their close personal relationships; (3) their social interactions; and (4) their overall functioning. This measure is termed the Outcome Rating Scale or ORS.26 At the end of every session, we also get quantitative feedback about four items to gauge the client’s experience of: (1) whether they felt heard, understood, and respected by us in that session; (2) whether we talked about what the client wanted to discuss; (3) whether the therapist’s approach/method was a good fit for the client; and (4) an overall rating for the session, also asking if there was anything missing in that session. This measure is termed the Session Rating Scale or SRS.27 The resulting feedback is successively incorporated into the therapy, ensuring that the client’s voice and preferences are privileged.
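
As a hypothetical sketch of the kind of per-session record this produces (the field names and any scoring scale are illustrative assumptions, not the actual ORS/SRS instruments):

    # Hypothetical sketch of a per-session Feedback Informed Treatment record.
    # Field names and scoring conventions are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class SessionFeedback:
        # Outcome Rating Scale (ORS): client's view of the past week, four areas
        ors_individual: float        # individual wellbeing
        ors_relationships: float     # close personal relationships
        ors_social: float            # social interactions
        ors_overall: float           # overall functioning
        # Session Rating Scale (SRS): client's experience of this session, four items
        srs_heard: float             # felt heard, understood, and respected
        srs_topics: float            # talked about what the client wanted to discuss
        srs_approach_fit: float      # therapist's approach/method was a good fit
        srs_overall: float           # overall rating, including anything missing

        def ors_total(self) -> float:
            return self.ors_individual + self.ors_relationships + self.ors_social + self.ors_overall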

Research shows that individual therapists vary widely in their ability to achieve positive outcomes in therapy, so which therapist a client sees is a big factor in determining the outcome of their therapy. Data gathered over a 2.5-year period from nearly 2,000 clients and 91 therapists documented significant variation in effectiveness among the clinicians in the study and found certain high-performing therapists were 10 times more effective than the average clinician.28 One variable that strongly accounted for this difference in outcome effectiveness was the amount of time these therapists devoted outside of therapy to deliberately practicing objectives which were just beyond their level of proficiency.29

What these studies show is that we’ve been looking in the wrong place for the answers as to why the outcomes of psychotherapy have not improved over the last 50 years. We’ve been studying the effects within the therapy room rather than what happens outside of the therapy room, i.e., what clients bring into their therapy and what therapists do before and after they see their clients.

Indeed, clients and their extra-therapeutic factors contribute 87 percent to the outcomes of psychotherapy!30 Extra-therapeutic factors comprise the client’s personality, their daily environment, their friends, family, work, good relationships, and community support. On average clients spend less than one hour per week with a therapist. The extra-therapeutic factors are the components of the client’s life to which they return, and which make up the other 167 hours of their week. This raises the question: does this mean that there’s nothing we can do about it? The key is for therapists to a) attune to these outside factors and resources, and b) tap into them. The remaining 13 percent of the treatment effect that accounts for positive outcomes in therapy is made up of: the individual therapist, between 4–9 percent; the working alliance (relationship) between therapist and client, 4.9–8 percent; the expectancy/placebo and rationale for treatment, 4 percent; while the model of therapy contributes an insignificant 0–1 percent. This highlights that who the therapist is and how they relate to their clients is the main variable accounting for positive outcomes outside of the client’s extra-therapeutic factors.

So, how should you choose a therapist?

There is now a movement led by eminent researchers, educators, policymakers, and supervisors in the psychotherapy field to ensure that after graduation therapists consciously and intentionally engage in ongoing Deliberate Practice—critically analyzing their own skills and therapy session performance, continuously practicing their skillset (particularly training their in-the-moment responses to emotionally challenging clients and situations), and seeking expert feedback. Deliberate Practice is based on three decades of research by K. Anders Ericsson (who made a name for himself as "the expert on expertise") into the components of expertise in many domains of activity, including sport, medicine, music, mathematics, business, education, computer programming, and other fields. Building on research in other professional domains such as sports, music, and medicine, a 2015 study was conducted to understand what differentiated top performing therapists from average ones.31 It found that top performing therapists spent 2.5 times more time in Deliberate Practice before and after their client sessions than did average therapists, and 14 times more time in Deliberate Practice than the least effective therapists!

This article appeared in Skeptic magazine 28.4

Experts in the field encourage therapists, supervisors, educators, and licensing bodies to “change the rules” about how psychotherapists are trained and how psychotherapy is practiced.32 The research reviewed here highlights that we can do this in two main ways: first, by making our clients’ voices the central focus of psychotherapy by routinely engaging in Feedback Informed Treatment with every client in every session to create a culture of feedback; and second, by each therapist receiving guidance from a coach who uses Deliberate Practice. To ensure accountability to clients, health insurance companies, and the psychotherapy field itself, this should be the basis for all practice, training, accreditation, and ongoing licensing of therapists.

In summary, psychotherapy does work. For readers who are curious to explore why psychotherapy works and which factors contribute to it doing so, I’d highly recommend Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness33 and its accompanying Field Guide to Better Results.34

About the Author

Vivian Baruch is a relationship coach, counselor, psychotherapist, and clinical supervisor specializing in relationship issues for singles and couples. She has been practicing since 1981, has been a psychotherapy educator at the Australian College of Applied Psychology, and taught supervision to psychotherapists at the University of Canberra. In 2004, she trained with Scott D. Miller, and has been using Feedback Informed Treatment (FIT) for 20 years to routinely incorporate her clients’ feedback into her psychotherapy and supervision work.

References
  1. https://rb.gy/iw4yb
  2. https://rb.gy/4y3su
  3. https://rb.gy/bc9u9
  4. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  5. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  6. Norcross, J. C., & Lambert, M. J. (Eds.). (2019). Psychotherapy Relationships That Work: Volume 2: Evidence-Based Therapist Responsiveness. Oxford University Press.
  7. https://rb.gy/qm2hz
  8. https://rb.gy/x7bm9
  9. https://rb.gy/rfq74
  10. https://rb.gy/rz91t
  11. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  12. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  13. https://rb.gy/edpb6
  14. https://rb.gy/ktioc
  15. https://rb.gy/2bjuy
  16. https://rb.gy/6f55y
  17. https://rb.gy/tpuo2
  18. https://rb.gy/uqp3k
  19. https://rb.gy/obhfg
  20. Witkowski, T. (2020). Shaping Psychology: Perspectives on Legacy, Controversy and the Future of the Field. Springer Nature.
  21. Patel, V. (2003). Where There Is No Psychiatrist: A Mental Health Care Manual. RCPsych publications.
  22. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  23. Ricks, D. F. (1974). Supershrink: Methods of a Therapist Judged Successful on the Basis of Adult Outcomes of Adolescent Patients. In D.F. Ricks, A. Thomas, & M. Roff (Eds.), Life History Research in Psychopathology: III. University of Minnesota Press.
  24. https://rb.gy/obhfg
  25. https://rb.gy/uulpw
  26. https://rb.gy/d5mbx
  27. Ibid.
  28. https://rb.gy/0hvy3
  29. https://rb.gy/rkr85
  30. Wampold, B.E., & Imel, Z.E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work. Routledge.
  31. https://rb.gy/ye406
  32. https://rb.gy/r2jb8
  33. Miller, S.D., Hubble, M.A., & Chow, D. (2020). Better Results: Using Deliberate Practice to Improve Therapeutic Effectiveness. American Psychological Association.
  34. https://rb.gy/f3c3e
Categories: Critical Thinking, Skeptic

Pentagon Report – No UFOs

neurologicablog Feed - Tue, 03/12/2024 - 5:06am

In response to a recent surge in interest in alien phenomena and claims that the US government is hiding what it knows about extraterrestrials, the Pentagon established a committee to investigate the question – the All-Domain Anomaly Resolution Office (AARO). They have recently released volume I of their official report – their conclusion:

“To date, AARO has not discovered any empirical evidence that any sighting of a UAP represented off-world technology or the existence of a classified program that had not been properly reported to Congress.”

They reviewed evidence from 1945 to 2023, including interviews, reports, classified and unclassified archives, spanning all “official USG investigatory efforts” regarding possible alien activity. They found nothing – nada, zip, goose egg, zero. They did not find a single credible report or any physical evidence. They followed up on all the fantastic claims by UFO believers (they now use the term UAP for unidentified anomalous phenomena), including individual sightings, claims of secret US government programs, claims of reverse engineering alien technology or possessing alien biological material.

They found that all eyewitness accounts were either misidentified mundane phenomena (military aircraft, drones, etc.), or simply lacked enough evidence to resolve. Eyewitness accounts of secret government programs were all misunderstood conversations or hearsay, often referring to known and legitimate military or intelligence programs. Their findings are familiar to any experienced skeptic – people misinterpret what they see and hear, fitting their misidentified perceptions into an existing narrative. This is what people do. This is why we need objective evidence to know what is real and what isn’t.

I know – this is a government report saying the government is not hiding evidence of aliens. This is likely to convince no hard-core believer. Anyone using conspiracy arguments to prop up their claims of aliens will simply incorporate this into their conspiracy narrative. Grand conspiracy theories are immune to evidence and logic, because the conspiracy can be used to explain away anything – any lack of evidence, or any disconfirming evidence. It is a magic box in which any narrative can be true without the burden of evidence or even internal consistency.

But the report is devastating to those who claim the government has known for a long time that aliens exist and are in possession of alien tech. It also means that in order to maintain such a belief, you have to enlarge the conspiracy, give it more power and scope. You have to believe the secret program is secret even from Congress and the executive branch, and that it is either secret from the defense and intelligence communities or they are fully involved at every level. At some point, it’s not really even a government program, but a rogue program somehow existing secretly within the government.

This is how grand conspiracy theories fail. In order to be maintained against negative evidence, they have to be enlarged and deepened. They then quickly collapse under their own weight. Imagine what it would take to fund and maintain such a program over decades, over multiple administrations and generations. How total would their control need to be to keep something this huge secret for so long? There have been no leaks to the mainstream press, like the Pentagon papers, or the Snowden leaks, or even the Discord fiasco. And yet, some rando UFO researchers know all about it. There is no way to make this story make sense.

I also don’t buy the alleged motivation. Why would such an agency keep the existence of aliens secret for so long? I can see keeping it a secret for a short time, until they had a chance to wrap their head around what was going on – but half a century? The notion that the public is “not ready” for the revelation is just silly. We’ve been ready for decades. If they want to keep the tech secret, they can do that without keeping the very existence of aliens secret. Besides, wouldn’t the principle of deterrence mean that we would want our enemies to know – hey, we have reverse-engineered alien technology, so don’t mess with us?

Also, the conspiracy theories often ignore the fact that the US is not the only government in the world. So do all countries in the world who might come into possession of alien artifacts have similarly powerful and long-lived secret organizations within their government? Some conspiracy theorists solve this contradiction by, again, widening the conspiracy. This leads to “secret world government” territory. Perhaps the lizard aliens are really in charge, and they are trying to keep their own existence secret.

I’ll be interested to see what the effect of the report will be (especially in our social-media post-truth world). Interest in UFOs waxes and wanes over the years. It seems each generation has a flirtation with the idea then quickly grows bored, leaving the hard core believers to keep the flame alive until a new generation comes up. This creates a UFO boom and bust cycle. The claims, blurry photos, faked evidence, and breathless eyewitness accounts all seem superficially fascinating. I got sucked into this when I was around 10. I remember thinking that something this huge, aliens visiting the Earth, would come out eventually. All the suggestive evidence was interesting, but I knew deep down none of it was conclusive. At some point we would need the big reveal – unequivocal evidence of alien visitation.

As the years rolled by, the suggestive blurry evidence and wild speculation became less and less interesting. You can only maintain such anticipation for so long. Eventually all it took was for me to hear Carl Sagan say that all the UFO evidence was crap, and the entire house of cards collapsed. Now, 40 years later, nothing has changed. We have mostly the same cast of dubious characters making the same tired claims, citing mostly the same incidents with the same conspiracy theories. The only difference is that their audience is a new generation that hasn’t been through it all before.

Perhaps the boom and bust cycle is faster now because of social media and the relatively short attention span of the public. I suspect the Pentagon report will have the effect of forcing those with a more casual interest off the fence – either you have to admit there is simply no evidence for alien visitation, or you have to go the other way and embrace the grand UFO conspiracy theory. Or perhaps the current generation simply does not care about evidence, logic, and internal consistency and will just believe whatever narrative generates the most clicks on TikTok.

The post Pentagon Report – No UFOs first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #927: I Can't Believe They Did That: Human Guinea Pigs #3

Skeptoid Feed - Tue, 03/12/2024 - 2:00am

Part 3 in our roundup of scientists who took the ultimate plunge and experimented on themselves.

Categories: Critical Thinking, Skeptic

The Story of Female Empowerment & Getting Canceled: Elite Commando and Kickboxing World Champion Leah Goldstein

Skeptic.com feed - Tue, 03/12/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss413_Leah_Goldstein_2024_03_12.mp3 Download MP3

A conversation with Leah Goldstein on becoming a kickboxing world champion, ultra-endurance cyclist, and an elite commando combating terrorism. For this she was to be honored at the International Women’s Day event… until she was disinvited and canceled.

This is her story.

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Mach Effect Thrusters Fail

neurologicablog Feed - Mon, 03/11/2024 - 5:07am

When thinking about potential future technology, one way to divide possible future tech is into probable and speculative. Probable future technology involves extrapolating existing technology into the future, such as imagining what advanced computers might be like. This category also includes technology that we know is possible, we just haven’t mastered it yet, like fusion power. For these technologies the question is more when than if.

Speculative technology, however, may or may not even be possible within the laws of physics. Such technology is usually highly disruptive, seems magical in nature, but would be incredibly useful if it existed. Common technologies in this group include faster than light travel or communication, time travel, zero-point energy, cold fusion, anti-gravity, and propellantless thrust. I tend to think of these as science fiction technologies, not just speculative. The big question for these phenomena is how confident are we that they are impossible within the laws of physics. They would all be awesome if they existed (well, maybe not time travel – that one is tricky), but I am not holding my breath for any of them. If I had to bet, I would say none of these exist.

That last one, propellantless thrust, does not usually get as much attention as the other items on the list. The technology is rarely discussed explicitly in science fiction, but often it is portrayed and just taken for granted. Star Trek’s “impulse drive”, for example, seems to lack any propellant. Any ship that zips into orbit like the Millennium Falcon likely is also using some combination of anti-gravity and propellantless thrust. It certainly doesn’t have large fuel tanks or display any exhaust similar to a modern rocket.

In recent years NASA has tested two speculative technologies that claim to be able to produce thrust without propellant – the EM drive and the Mach Effect thruster (MET). For some reason the EM drive received more media attention (including from me), but the MET was actually the more interesting claim. All existing forms of internal thrust involve throwing something out the back end of the ship. The conservation of momentum means that there will be an equal and opposite reaction, and the ship will be thrust in the opposite direction. This is your basic rocket. We can get more efficient by accelerating the propellant to higher and higher velocity, so that you get maximal thrust from each atom of propellant your ship carries, but there is no escape from the basic physics. Ion drives are perhaps the most efficient thrusters we have, because they accelerate charged particles to extremely high exhaust velocities, but they produce very little thrust. So they are good for moving ships around in space but cannot get a ship off the surface of the Earth.

The problem with propellant is the rocket equation – you need to carry enough fuel to accelerate the fuel, and more fuel for that fuel, etc. It means that in order to go anywhere interesting very fast you need to carry massive amounts of fuel. The rocket equation also sets a lot of serious limits on space travel, in terms of how fast and far we can go, how much we can lift into orbit, and even if it is possible to escape from a strong gravity well (chemical rockets have a limit of about 1.5 g).
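To make the "fuel to push the fuel" point concrete, here is a minimal sketch of the Tsiolkovsky rocket equation. The numbers are round illustrative figures of my own, not values from any specific mission:

```python
# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m_final)
# Rearranged: the liftoff-to-payload mass ratio grows exponentially with delta_v.
import math

def mass_ratio(delta_v: float, exhaust_velocity: float) -> float:
    """Initial-to-final mass ratio needed for a given delta-v (same units for both)."""
    return math.exp(delta_v / exhaust_velocity)

delta_v_to_orbit = 9_400   # m/s, rough delta-v to reach low Earth orbit
chemical_ve = 4_400        # m/s, a good chemical engine
ion_ve = 30_000            # m/s, a typical ion thruster (far too little thrust to launch, though)

print(f"Chemical rocket: {mass_ratio(delta_v_to_orbit, chemical_ve):.1f} kg at liftoff per kg in orbit")
print(f"Ion-drive exhaust velocity: {mass_ratio(delta_v_to_orbit, ion_ve):.2f} kg per kg (if only it could lift off)")
```

The exponential is the key: every additional bit of delta-v multiplies, rather than adds to, the propellant you have to haul along.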

If it were possible to create thrust directly from energy without the need for propellant, a so-called propellantless or reactionless drive, that would free us from the rocket equation. This would make space travel much easier, and even make interstellar travel possible. We can accomplish a similar result by using external thrust, for example with a light sail. The thrust can come from a powerful stationary laser that pushes against the light sail of a spacecraft. This may, in fact, be our best bet for long distance space travel. But this approach has limits as well, and having an onboard source of thrust is extremely useful.

The problem with propellantless drives is that they probably violate the laws of physics, specifically the conservation of momentum. Again, the real question is – how confident are we that such a drive is impossible? Saying we don’t know how it could work is not the same as saying we know it can’t work. The EM drive is alleged to work using microwaves in a specially designed cone so that as they bounce around they push slightly more against one side than the other, generating a small amount of net thrust (yes, this is a simplification, but that’s the basic idea). It was never a very compelling idea, but early tests did show some possible net thrust, although very tiny.

The fact that the thrust was extremely tiny, to me, was very telling. The problem with very small effect sizes is that it’s really easy for them to be errors, or to have extraneous sources. This is a pattern we frequently see with speculative technologies, from cold fusion to free-energy machines. The effect is always super tiny, with the claim that the technology just needs to be “scaled up”. Of course, the scaling up never happens, because the tiny effect was a tiny error. So this is always a huge red flag to me, one that has proven extremely predictive.

And in fact when NASA tested the EM drive under rigorous testing conditions, they could not detect any anomalous thrust. With new technology there are two basic types of studies we can do to explore them. One is to explore the potential underlying physics or phenomena – how could such technology work. The other is to simply test whether or not the technology works, regardless of how. Ideally both of these types of evidence will align. There is often debate about which type of evidence is more important, with many proponents arguing that the only thing that matters is if the technology works. But the problem here is that often the evidence is low-grade or ambiguous, and we need the mechanistic research to put it into context.

But I do agree, at the end of the day, if you have sufficiently high level rigorous evidence that the phenomenon either exists or doesn’t exist, that would trump whether or not we currently know the mechanism or the underlying physics. That is what NASA was trying to do – a highly rigorous experiment to simply answer the question – is there anomalous thrust. Their answer was no.

The same is true of the MET. The theory behind the MET is different, and is based on some speculative physics. The idea stems from a question in physics for which we do not currently have a good answer – what determines inertial frames of reference? For example, if you have a bucket of water in deep intergalactic space (sealed at the top to contain the water), and you spin it, centrifugal force will cause the water to climb up the sides of the bucket. But how can we prove physically that the bucket is spinning and the universe is not spinning around it? In other words – what is the frame of reference? We might intuitively feel like it makes more sense that the bucket is spinning, but how do we prove that with physics and math? What theory determines the frame of reference?

One speculative theory is that the inertial frame of reference is determined by the total mass-energy of the universe – that inertia derives from an interaction between an object and the rest of the universe. If this is the case then perhaps you can change that inertia by pushing against the rest of the universe, without expelling propellant. If this is all true, then the MET could theoretically work. This seems to be one step above the EM drive in that the EM drive likely violates the known laws of physics, while the MET is based on unknown laws.

Well, NASA tested the MET also and – no anomalous thrust. Proponents, of course, could always argue that the experimental setup was not sensitive enough. But at some point, teeny tiny becomes practically indistinguishable from zero.

It seems that we do not have a propellantless drive in our future, which is too bad. But the idea is so compelling that I also doubt we have seen the end of such claims, as with perpetual motion machines and free energy. There are already other claims, such as the quantum drive. There are likely to be more. What I typically say to proponents is this – scale it up first, then come talk to me. Since “scaling up” tends to be the death of all of these claims, that’s a good filter.

The post Mach Effect Thrusters Fail first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #974 - Mar 9 2024

Skeptics Guide to the Universe Feed - Sat, 03/09/2024 - 8:00am
Quickie with Bob: Colliding Neutron Stars; News Items: Sinking Cities, Hypervaccination, Conspiracy Theories and Disease X, Celebrities and Flat Earth, Superconducting Magnets and Fusion; Who's That Noisy; Your Questions and E-mails: IVF, Moon's Orbit; Science or Fiction
Categories: Skeptic

Mohamad Jebara — Who Wrote the Qur’an, Why, and What Does it Really Say?

Skeptic.com feed - Sat, 03/09/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss412_Mohamad_Jebara_2024_03_09.mp3 Download MP3

Over a billion copies of the Qur’an exist – yet it remains an enigma. Its classical Arabic language resists simple translation, and its non-linear style of abstract musings defies categorization. Moreover, those who champion its sanctity and compete to claim its mantle offer widely diverging interpretations of its core message – at times with explosive results.

Building on his intimate portrait of the Qur’an’s prophet in Muhammad the World-Changer, Mohamad Jebara returns with a vivid profile of the book itself. While viewed in retrospect as the grand scripture of triumphant empires, Jebara reveals how the Qur’an unfolded over 22 years amidst intense persecution, suffering, and loneliness. The Life of the Qur’an recounts this vivid drama as a biography examining the book’s obscured heritage, complex revelation, and contested legacy.

The author believes that the Qur’an re-emerges with clarity as a dynamic life force that seeks to inspire human beings to unleash their dormant potential despite often-overwhelming odds – in order to transform themselves and the world.

Mohamad Jebara is a scriptural philologist and prominent exegetist known for his eloquent oratory style as well as his efforts to bridge cultural and religious divides. A semanticist and historian of Semitic cultures, he has served as Chief Imam as well as headmaster of several Qur’anic and Arabic language academies. Jebara has lectured to diverse audiences around the world; briefed senior policy makers; and published in prominent newspapers and magazines. A respected voice in Islamic scholarship, Jebara advocates for positive social change.

Shermer and Jebara discuss:

  • Who wrote the Qur’an and why?
  • Do Muslims believe it was written by Muhammad divinely inspired, or is it supposed to be the literal words of God/Allah?
  • Why do we need a new translation and interpretation of the Qur’an?
  • What inspired a Westerner raised in public school to write a biography of Muhammad and a history of the Islamic holy book?
  • Is the Muslim world stagnating? And how does the biography of the Quran and Islam’s founder aim to help the situation?
  • What is the “Semitic mindset”?
  • How is the Qur’an the first “Post-Modern” book?
  • Many Westerners believe that the Qur’an endorses violence, Jihad, and Sharia Law over secular laws and constitutions. What does it really say?
  • Christianity and Judaism went through the Enlightenment and came out the other side much more tolerant and peaceful. Has Islam had its Enlightenment? Do Islam and the Muslim world need reforming?
  • the meaning of “Allahu Akbar”
  • women in Islam
  • female genital mutilation
  • What percentage of Muslims want Sharia Law, and where in the world?

Sharia, or Islamic law, offers moral and legal guidance for nearly all aspects of life – from marriage and divorce, to inheritance and contracts, to criminal punishments. Sharia, in its broadest definition, refers to the ethical principles set down in Islam’s holy book (the Quran) and examples of actions by the Prophet Muhammad (sunna).

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

The Future of Medicine & Wellness

Skeptic.com feed - Fri, 03/08/2024 - 12:00am

Skeptic: Let’s start with the big questions. What is the problem to be solved? And why is systems biology the right method to find the answer?

Leroy Hood: The problem is this great complexity. Reductionism is the approach where you take an element of a complex system and study that element in enormous detail. However, studying one element in a complex system gives you no insight into how the complex system works. Systems biology highlights something extremely important—namely, biological networks underlie all of the complex responses and phenotypes of human beings. So, we first identify the network components and then study their dynamics. Systems biology takes a global, holistic view of a problem by thinking in terms of the networks that encode the information that is responsible for each phenotype, and so forth. The most fascinating part of the systems approach is that it can be applied to any kind of complex problem—physiological, psychological, or sociological.

Skeptic: Take DNA. Crick & Watson drilled down to the molecular structure—that’s reductionism. But then you have to build back out to the phenotype and the entire body, and how it interacts with systems both within the body and externally.

Hood: Correct. That’s systems biology. The first thing to figure out is what are the elements of information that DNA encodes—the genes. Once you’ve identified the 20,000 or so genes, you figure out how these genes connect to form these networks. Finally, you watch the networks operate during the dynamics of what you are studying. The really important thing about systems is that they operate across multiple scales. A system can be thought of at the level of one molecule, one cell, an organ, or at the level of the whole organism, and then you really begin to see how the various hierarchical levels operate differently in space and time.

Nathan Price: There has been tension between molecular biologists and systems biologists, especially in the early days, because molecular biology sometimes can feel very satisfying and concrete: "Here's the protein…and here is its sequence." In contrast, when building a system, you often see very complex relationships amongst all these components.

Skeptic: OK. Let’s consider weight loss and diets. Why are diets faddish? And during a particular fad, why does a particular diet work for some, but not others?

Price: Let me give a specific example. First, consider studies in which we compared people who went on to lose weight with those who didn't. In the thousands of measurements we've made, was there anything predictive about whether or not people would lose weight? When we looked at metabolites and proteins—once you normalize for BMI—nothing. But in the microbiome, two features were predictive.

The first was how fast your microbiome was growing. If your bacteria are growing fast, every calorie you eat is a calorie either for yourself or your microbiome. If your microbiome is consuming more, it is easier to lose weight. The second big factor is whether the genes in your microbiome are more likely to break down complex carbohydrates. Let’s say you eat a sweet potato. Some microbiomes will break that down into simple sugars that will spike your insulin more, making it harder to lose weight. But if you have a microbiome that will break down those same complex carbohydrates into short-chain fatty acids, it’s easier to lose weight. That was quite predictive and captured a fair amount of the variance between individuals.

Another big example is trying to lower something like LDL cholesterol. Some of that is genetically encoded and you can predict the blood level of LDL cholesterol from the genome, without knowing anything about a person’s diet or lifestyle. So we looked at whether people going through a wellness program could lower their LDL cholesterol without medication. If you had high LDL but your genome predicted low, you could lower it. But if your genome predicted high and you were high, you couldn’t. That is an incredibly useful tool that lets us know what you can change easily and what is going to be hard to change.

In short, we have a totally new method to look at your genetic potential versus your actual outcome. You get a roadmap of which lifestyle changes will make the greatest difference to your health. That’s big!

Skeptic: Hopefully, that kind of test will soon be available in every doctor’s office.

Hood: Exactly! That’s why I’ve proposed a second genome initiative where we take a million people for 10 years and conduct all these analyses. This will give us all the correlations for 150 different genetic risks, and all the correlations with the phenotype, so we can show unequivocally how this transforms the quality of your life. I guarantee that today we’re giving drugs to many people who should never be taking them. They could manage themselves just by diet and exercise.

Skeptic: You write that the 10 most popular drugs in the U.S. work for only about 10 percent of the people treated. Seriously?

Hood: Isn’t that absolutely striking? Yet it is true. A critical outcome of the million-person project is that we’ll have blood biomarkers that can tell us, unequivocally, which individuals are going to respond to particular drugs and which are not. That’s something pharma companies would hate because their bottom line likes this idea of one drug for everybody.

Price: Another factor is that your microbiome transforms about 13 percent of the drugs you take. That means you could be taking a drug, but if you have the wrong microbiome, it could change that compound so that you’re not even on the drug you think you are. This is a big problem that drug companies need to start thinking about systemically.

Skeptic: So, what exactly is the microbiome?

Price: Understanding the microbiome is one of the hottest areas in health and has just emerged quite recently. The microbiome is the bacteria and other small organisms that live on your skin and in your gut. Everything you take into your body—food, a supplement, a drug—passes through the microbiome before it gets to you. There are now tests that provide useful information about your microbiome.

For starters, you can see how your microbiome affects your digestion. You might have microbes that are making too much ammonia, which will cause your stomach to not be acidic enough to break down your food the way you need to. We can evaluate that. Recently, we ran a trial on people with Irritable Bowel Syndrome (IBS). They can now get a test, implement personalized interventions, and resolve their symptoms—in most cases—over the course of about a month.

Skeptic: Now that’s individualized medicine. When this knowledge and technology aren’t available, we have to resort to large-scale treatments. “Here’s the problem. Here’s our drug. Give it to everybody with that problem and hope maybe half get better.”

Hood: And today, thanks to the million-person genome project, the genome itself is going to reveal a whole series of diseases. For example, there are roughly 7,000 rare diseases. Many are single-gene defects. For each of those, we’re going to have to find a drug that works. And it’s going to be essential to examine the genome as early in life as possible because some cause disease during childhood or infancy. You want to know immediately which gene defects you have. Your physician will hopefully have these data and keep track of these things for you.

Skeptic: In that example, what would be the financial model for pharmaceutical companies?

Hood: First, the small molecule drugs today can only attack five percent of the proteins encoded in your genome. That's a very small number. The pharmaceutical companies will need to generate lots of new drugs that can attack more than five percent. Second, they must figure out how to scale the research. If the clinical trial is going to cost three billion dollars per trial, it's never going to work for individualized medicine. You need really efficient ways of generating lots of drugs and screening them effectively. Third, the federal government is probably going to have to help financially. A disease that's devastating in infancy wipes out the productivity of that person for life. Avoid that and you have a productive, creative, functional citizen. You can make compelling arguments for being able to deal with these diseases at an appropriate time.

These are things pharmaceutical companies are just barely beginning to think about at a proper scale.

Skeptic: So we’re really at an intersection between academia, private industry, and government to make the transition from raw knowledge to application to industrial-size production of such kits and applications.

Price: We can progressively get access to information in ways that are simpler and cheaper than before. We already have the microbiome test and the blood measurement device that we hope will soon be approved in the United States. It’s already FDA-approved for a supervised blood draw, in which you need another person to stand next to you while you do it. We have successfully tested this in trials, including a big one of nearly 20,000 people with the University of Cambridge. So far, there have been no reported adverse events. And we have a 99.9 percent success rate of being able to get a measurement off the device when people use it at home. You can obtain your own blood sample and drop it in the mail.

Skeptic: This sounds somewhat similar to Elizabeth Holmes and Theranos.

Price: There are two different paths you can take on blood analysis. One is that you can try to get conventional clinical lab tests, which are typically done on large volumes of blood, and you try to miniaturize that. That’s what Theranos tried. And they failed. In fact, they were fraudulent all along the way, which is why Elizabeth Holmes is now in prison. Pretty much all other companies that tried to go down that route have also failed.

But there’s another path—the one that we’re pursuing—in which you use small volumes of blood to do what are called omics-based measures. These are your metabolomics (measuring all the metabolites out of the blood) or proteomics (measuring all the proteins). Those technologies, based on mass spectrometry or on capture agents, are only done on very small volumes.

We can make thousands of measurements out of that small volume. The challenge is how to interpret all of that information. Working with that data, tying it to health outcomes, connecting it to electronic health records, and monitoring people’s health is a much bigger challenge. It’s much more of an AI-data problem than anything else. And I’d much rather try to solve the information technology challenge than go down the Theranos road.

Hood: What Elizabeth Holmes projected is going to be done. I just want to say that it is a valid way of thinking about the technology. Technically, it’s far more difficult. But we are going to learn how. It’s going to take 10 or 15 years. It’s just a matter of getting appropriate, miniaturized measurements in microfluidics or nanotechnology.

Skeptic: Let’s talk about the future of medicine. What is CRISPR?

Price: CRISPR allows you to go into a genome and essentially edit any base pairs that you want. Incredibly powerful, it holds the potential to end all genetic-related disease, or at least monogenic defects. Huntington's disease is a compelling example. Unlike most genetic traits, it has essentially 100 percent penetrance. If you have the gene for Huntington's, you will get Huntington's disease. But you can CRISPR it out. Even in the embryonic stage! That eliminates the gene, and all your progeny forever will not carry it. That's amazing.

Skeptic: What about stem cells?

Price: Stem cells are the body’s raw materials—cells from which all other cells with specialized functions derive. They are very powerful because they are generative. They grow all other cell types. Of particular interest are induced pluripotent stem cells which are programmed to grow tissue. And so, you put them in—in a way taking a leap of faith—trusting in the intelligence of the stem cells to act on their programs and to rebuild the right tissue. It’s all evidence-based, but we don’t fully understand the logic of how they do it.

Skeptic: What about cancer?

Price: Most of us may have already had cancers in our lives, but they started and then our immune system cleared them out. However, when you develop a tumor, it is because your immune system didn't recognize it. The cancer fooled your immune system. Immunotherapies—not trying to kill the cancer off with a drug, but rather teaching the immune system to get the cancer that it missed—are one of the most exciting developments.

If you know the molecular properties of the cells that have become cancers and were missed by your immune system, you can create a vector that will look for certain gene expressions or molecular properties or some combinatoric aspect of those cells, then go in and initiate a program that will display a molecule on the outside of the cell. That molecule will act as a signal to macrophages, which are cells in the immune system that come and eat other cells, in this case destroying the tumor cells.

Skeptic: What about stories of injecting tumors with vaccines? Jimmy Carter was treated this way for his brain tumor.

Price: It’s a similar idea. You take pieces of the cancer cells, and then you create antibodies. You’re placing a signal into the immune system that says these are fragments of something foreign. Your immune system can then look for those specific tumor cells and kill them. It doesn’t yet work for the majority of cancers, but for the fraction that it does, it’s amazing.

Skeptic: Is part of the problem that there are so many different kinds of cancers, and they’re different in each body?

Price: Exactly. And that is why precision medicine is probably more advanced in cancer research than any other area. You can pull out the tumor and you can sequence it. You can look at its gene expression, its metabolites, and its proteins. You can take your genome and the genome of your cancer and design a map of exactly what’s different about those cells. You can do this on a per-person basis, and it’s at the core of individualized therapy.

I think the term cancer is a total misnomer. We should move away from terms like prostate cancer, lung cancer, or breast cancer. It should always be cancers. Cancers are a huge, massively heterogeneous, highly diverse set of diseases and conditions and molecular mutations, not at all a single phenomenon.

Skeptic: Starting today, what can each one of us do to be healthier?

Price: Exercise. You lose between zero and one percent of your muscle mass per year as you age. Decade after decade, that adds up. You become frail in your later years. There are things you really want to be able to do when you’re older—for example, to stand up from the ground without any assistance. When you’re young, that’s easy; when you’re older, it gets harder. If you lose that ability, you’re putting yourself at much greater risk. Balance and posture are also important. So are stretching and range of mobility.

Skeptic: And what can we do to keep our minds healthy?

Price: I think that we have really been on the wrong track in Alzheimer's research for a long time.

First, people almost always say amyloid plaques cause Alzheimer’s. I don’t think that’s true. I think the biggest factor is metabolism and what the brain has to do to maintain energy. Your brain is only two percent of your body’s biomass, but it uses 20 percent of your body’s energy. That’s 10 times more metabolically demanding than the body average. So, every second of every day, you’ve got to supply it with energy. As you get older, your ability to perfuse oxygen into your brain through your blood vessels goes down. It decreases, just like muscle mass.

As that happens, certain regions of the brain become lower in oxygen. The amount of energy that you can create goes below the amount that you need, so certain neurons start dying. As they die, you put more demand on the remaining neurons. Their demand goes up while their supply stays the same. Then they die, which puts even more pressure on those remaining. So you get this cascade of cell death.

The second big factor that everyone knows about is that if you have the APOE4 variant of the APOE gene, you have a high risk of Alzheimer's. How can your neurons keep making a lot of energy under these low-oxygen conditions? Supporting cells, called astrocytes, are really important here. There is a 9:1 ratio of astrocytes to neurons, and they support energy generation in the neurons. So you want to keep the cholesterol level in the astrocytes low in order to keep energy generation high in neurons under low-oxygen conditions.

APOE has a role in the transport of cholesterol out of astrocytes. APOE4 does it slowly, APOE2 does it fast. And if all you do is take those two facts—the lowering of oxygen and the difference in the efficiency of keeping positive energy balance under that condition—you can recapitulate the ages at which all the different genotypes get Alzheimer’s disease with very close accuracy. That’s true for a whole range of genetic and environmental backgrounds.

Then there’s a gene called TREM2 that has to do with the energetics of what’s required to clear debris. If these cells are dying, you get all this debris that you’ve got to clean out. That comes at an energetic cost. And if you spend energy doing that, you don’t have as much energy left to protect the nerve. As your neurons die and you lose synapses, as that synapse firing goes down, you cross a threshold where you can no longer do what’s called Hebbian learning—what fires together, wires together. But you don’t have enough firing to learn, so the brain has to secrete a molecule in order to recruit additional synapses— amyloid beta. It is brought in by the brain or made by the brain in order to recruit these synapses so you can keep cognition going. Now, as a byproduct, these things glom together and form amyloid plaques. Amyloid can embed in your blood vessels and constrict them, which then gets us back to the central problem of limited energy because it’s limiting oxygenation into the brain. But the plaques aren’t the cause of the disease.

Skeptic: So, what can be done to prevent that or treat it?

Price: Exercise. Under the model I just described, it’s obvious why.

Skeptic: How do you see the future of health and wellness?

Hood: I think any brand-new idea can almost never be achieved in the context of an existing bureaucracy. Bureaucracies are honed by the past, can barely deal with the present, and have difficulty dealing with the future. Initially, 80 percent of the biologists in the U.S. were opposed to the Human Genome Project.

As a young assistant professor at Caltech, I started thinking about where I wanted my future scientific career. I had a real interest in human biology and disease. It was 1970 and I was dismayed by the complexity of the problem and by the lack of tools we had for dealing with it. I decided to develop instruments that allowed one to read and write DNA, which could decipher that complexity by generating big data from individual humans. Put simply, lots of information on individuals that—when analyzed—could lead to insights into wellness and prevention.

I hope we were able to convince you that this sort of thinking is absolutely mandatory for improving healthcare in the future, and that scientific research is now on the right track.

This print interview has been edited from a longer conversation on The Michael Shermer Show.

About the Interviewees

Leroy Hood developed the automated DNA sequencer, which enabled the reading of the entire human genetic code as part of the Human Genome Project. His work was also instrumental in the creation of treatments for AIDS. Hood founded the discipline of systems biology and is one of only 15 individuals elected to all three U.S. National Academies (the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine).

Nathan Price is Chief Science Officer of Thorne HealthTech and Professor at the Institute for Systems Biology. Selected as an Emerging Leader in Health and Medicine by the National Academy of Medicine, he received the Grace A. Goldsmith Award for his work on scientific wellness, and is the author of more than 200 peer-reviewed scientific publications.

Categories: Critical Thinking, Skeptic

Is the AI Singularity Coming?

neurologicablog Feed - Thu, 03/07/2024 - 4:49am

Like it or not, we are living in the age of artificial intelligence (AI). Recent advances in large language models, like ChatGPT, have helped put advanced AI in the hands of the average person, who now has a much better sense of how powerful these AI applications can be (and perhaps also their limitations). Even though they are narrow AI, not sentient in a human way, they can be highly disruptive. We are about to go through the first US presidential election where AI may play a significant role. AI has revolutionized research in many areas, performing months or even years of research in mere days.

Such rapid advances legitimately make one wonder where we will be in 5, 10, or 20 years. Computer scientist Ben Goertzel, who popularized the term AGI (artificial general intelligence), recently stated during a presentation that he believes we will achieve not only AGI but an AGI singularity involving a superintelligent AGI within 3-8 years. He thinks it is likely to happen by 2030, but could happen as early as 2027.

My reaction to such claims, as a non-expert who follows this field closely, is that this seems way too optimistic. But Goertzel is an expert, so perhaps he has some insight into research and development that's happening in the background that I am not aware of. So I was very interested to see his line of reasoning. Will he hint at research that is on the cusp of something new?

Goertzel laid out three lines of reasoning to support his claim. The first is simply extrapolating from the recent exponential growth of narrow AI. He admits that LLM systems and other narrow AI are not themselves on a path to AGI, but they show the rapid advance of the technology. He aligns himself here with Ray Kurzweil, who apparently has a new book coming out, The Singularity is Nearer. Kurzweil has a reputation for predicting advances in computer technology that were overly optimistic, so that is not surprising.

I find this particular argument not very compelling. Exponential growth in one area of technology at one particular time does not mean that this is a general rule about technology for all time. I know that is explicitly what Kurzweil argues, but I disagree with it. Some technologies hit roadblocks, or experience diminishing returns, or simply peak. Stating exponential advance as a general rule did not mean that the hydrogen economy was coming 20 years ago. It has not made commercial airline travel any faster over the last 50 years. Rather, history is pretty clear that we need to do a detailed analysis of individual technologies to see how they are advancing and what their potential is. Even still, this only gives us a roadmap for a certain amount of time, and is not useful for predicting disruptive technologies or advances.

So that is strike one, in my opinion. Recent rapid advances in narrow AI do not predict, in and of themselves, that AGI is right around the corner. It's also strike two, actually, because he argues that one line of evidence to support his thesis is Kurzweil's general rule of exponential advance, and the other is the recent rapid advances in LLM narrow AIs. So what is his third line of evidence?

This one I find the most compelling, because at least it deals with specific developments in the field. Goertzel here is referring to his own work: “OpenCog Hyperon,” as well as associated software systems and a forthcoming AGI programming language, dubbed “MeTTa”. The idea here is that you can create an AGI by stitching together many narrow AI systems. I think this is a viable approach. It’s basically how our brains work. If you had 20 or so narrow AI systems that handled specific parts of cognition and were all able to communicate with each other, so that the output of one algorithm becomes the input of another, then you are getting close to a human brain type of cognition.
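As a purely illustrative sketch of that stitching idea (my own toy code, not a description of OpenCog Hyperon or MeTTa), chaining narrow modules through a shared state might look something like this:

```python
# A toy "cognitive pipeline": each narrow module reads the shared state and adds to it,
# so the output of one system becomes the input of the next.
from typing import Callable, Dict, List

State = Dict[str, object]
Module = Callable[[State], State]

def perception(state: State) -> State:
    state["objects"] = ["cup", "table"]            # stand-in for a vision model
    return state

def language(state: State) -> State:
    objs = state["objects"]
    state["description"] = f"A {objs[0]} is on the {objs[1]}."   # stand-in for a language model
    return state

def planner(state: State) -> State:
    state["plan"] = ["reach", f"grasp {state['objects'][0]}", "lift"]  # stand-in for a planner
    return state

def run(modules: List[Module], state: State) -> State:
    for module in modules:
        state = module(state)
    return state

print(run([perception, language, planner], {}))
```

Whether wiring together dozens of real systems like this yields anything resembling general intelligence, rather than a bundle of narrow tools running in parallel, is exactly the open question.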

But saying this approach will achieve AGI in a few years is a huge leap. There is still a lot we don’t know about how such a system would work, and there is much we don’t know about how sentience emerges from the activity of our brains. We don’t know if linking many narrow AI systems together will cause AGI to emerge, or if it will just be a bunch of narrow AIs working in parallel. I am not saying there is something unique about biological cognition, and I do think we can achieve AGI in silicon, but we don’t know all the elements that go into AGI.

If I had to predict I would say that AGI is likely to happen both slower and faster than we predict. I highly doubt it will happen in 3-8 years. I suspect it is more like 20-30 years. But when it does happen, like with the LLMs, it will probably happen fast and take us by surprise. Goertzel, to his credit, admits he may be wrong. He says we may need a "quantum computer with a million qubits or something." To me that is a pretty damning admission, that all his extrapolations actually mean very little.

Another aspect of his predictions is what happens after we achieve AGI. He, as many others have also predicted, said that if we give the AGI the ability to write its own code then it could rapidly become superintelligent, like a single entity with the cognitive ability of all human civilization. Theoretically, sure. But having an AGI that powerful is more than about writing better code, right? It’s also limited by the hardware, and the availability of training data, and perhaps other variables as well. But yes, such an AGI would be a powerful tool of science and technology that could be turned toward making the AGI itself more advanced.

Will this create a Kurzweil-style “singularity”? Ultimately I think that idea is a bit subjective, and we won’t really know until we get there.

The post Is the AI Singularity Coming? first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #926: The Chicago O'Hare Airport UFO

Skeptoid Feed - Tue, 03/05/2024 - 2:00am

In 2006, a flying saucer spent minutes literally hovering right above Chicago's O'Hare International Airport... so the story goes.

Categories: Critical Thinking, Skeptic

Samuel Wilkinson — What Evolution and Human Nature Imply About the Meaning of Our Existence

Skeptic.com feed - Tue, 03/05/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss411_Samuel_Wilkinson_2024_03_05.mp3 Download MP3

Generations have been taught that evolution implies there is no overarching purpose to our existence, that life has no fundamental meaning. We are merely the accumulation of tens of thousands of intricate molecular accidents. Some scientists take this logic one step further, suggesting that evolution is intrinsically atheistic and goes against the concept of God.

With respect to our evolution, nature seems to have endowed us with competing dispositions, what Wilkinson calls the dual potential of human nature. We are pulled in different directions: selfishness and altruism, aggression and cooperation, lust and love.

By using principles from a variety of scientific disciplines, Yale Professor Samuel Wilkinson provides a framework for human evolution that reveals an overarching purpose to our existence.

Wilkinson claims that this purpose, at least one of them, is to choose between the good and evil impulses that nature has created within us. Our life is a test. This is a truth, as old as history it seems, that has been espoused by so many of the world’s religions. From a certain framework, Wilkinson believes that these aspects of human nature—including how evolution shaped us—are evidence for the existence of a God, not against it.

Closely related to this is meaning. What is the meaning of life? Based on the scientific data, it would seem that one such meaning is to develop deep and abiding relationships. At least that is what most people report are the most meaningful aspects of their lives. This is a function of our evolution. It is how we were created.

Samuel T. Wilkinson is Assistant Professor of Psychiatry at Yale University, where he also serves as Associate Director of the Yale Depression Research Program. He received his MD from Johns Hopkins School of Medicine. His articles have been featured in the New York Times, the Washington Post, and the Wall Street Journal. He has been the recipient of many awards, including Top Advancements & Breakthroughs from the Brain and Behavior Research Foundation; Top Ten Psychiatry Papers by the New England Journal of Medicine, the Samuel Novey Writing Prize in Psychological Medicine (Johns Hopkins); the Thomas Detre Award (Yale University); and the Seymour Lustman Award (Yale University). His new book is Purpose: What Evolution and Human Nature Imply About the Meaning of our Existence.

Shermer and Wilkinson discuss:

  • evolution: random chance or guided process?
  • selfishness and altruism
  • aggression and cooperation
  • inner demons and better angels
  • love and lust
  • free will and determinism
  • the good life
  • the good society
  • empirical truths, mythic truths, religious truths, pragmatic truths
  • Is there a cosmic courthouse where evil will be corrected in the next life?
  • theodicy and the problem of evil: Why do bad things happen to good people?

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic

Climate Sensitivity and Confirmation Bias

neurologicablog Feed - Mon, 03/04/2024 - 6:02am

I love to follow kerfuffles between different experts and deep thinkers. It's great for revealing the subtleties of logic, science, and evidence. Recently there has been an interesting online exchange between a physicist and science communicator (Sabine Hossenfelder) and some climate scientists (Zeke Hausfather and Andrew Dessler). The dispute is over equilibrium climate sensitivity (ECS) and the recent "hot model problem".

First let me review the relevant background. ECS is a measure of how much climate warming will occur as CO2 concentration in the atmosphere increases, specifically the temperature rise in degrees Celsius with a doubling of CO2 (from pre-industrial levels). This number is of keen significance to the climate change problem, as it essentially tells us how much and how fast the climate will warm as we continue to pump CO2 into the atmosphere. There are other variables as well, such as other greenhouse gases and multiple feedback mechanisms, making climate models very complex, but the ECS is certainly a very important variable in these models.

There are multiple lines of evidence for deriving ECS, such as modeling the climate with all variables and seeing what the ECS would have to be in order for the model to match reality – the actual warming we have been experiencing. Therefore our estimate of ECS depends heavily on how good our climate models are. Climate scientists use a statistical method to determine the likely range of climate sensitivity. They take all the studies estimating ECS, creating a range of results, and then determine the 90% confidence range – it is 90% likely, given all the results, that ECS is between 2-5 C.
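As a toy illustration of that statistical step (this is not the IPCC's actual assessment procedure, and the numbers are invented), pooling a set of ECS estimates and reading off a 90% range might look like this:

```python
# Pool hypothetical ECS estimates (deg C per CO2 doubling) from different studies/models
# and report the central value and a 90% range.
import numpy as np

ecs_estimates = np.array([2.4, 2.7, 2.9, 3.0, 3.1, 3.3, 3.6, 3.9, 4.3, 4.8])

central = np.median(ecs_estimates)
low, high = np.percentile(ecs_estimates, [5, 95])
print(f"Central estimate: {central:.1f} C, 90% range: {low:.1f}-{high:.1f} C")
```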

Hossenfelder did a recent video discussing the hot model problem. This refers to the fact that some of the recent climate models, ones that are ostensibly improved over older models by incorporating better physics and cloud modeling, produced estimates for ECS outside the 90% confidence interval, with ECSs above 5.0. Hossenfelder expressed grave concern that if these models are closer to the truth on ECS we are in big trouble. There is likely to be more warming sooner, which means we have even less time than we thought to decarbonize our economy if we want to avoid the worst climate change has in store for us. Some climate scientists responded to her video, and then Hossenfelder responded back (links above). This is where it gets interesting.

To frame my take on this debate a bit, when thinking about any scientific debate we often have to consider two broad levels of issues. One type of issue is generic principles of logic and proper scientific procedure. These generic principles can apply to any scientific field – P-hacking is P-hacking, whether you are a geologist or chiropractor. This is the realm I generally deal with, basic principles of statistics, methodological rigor, and avoiding common pitfalls in how to gather and interpret evidence.

The second relevant level, however, is topic-specific expertise. Here I do my best to understand the relevant science, defer to experts, and essentially try to understand the consensus of expert opinion as best I can. There is often a complex interaction between these two levels. But if researchers are making egregious mistakes on the level of basic logic and statistics, the topic-specific details do not matter very much to that fact.

What I have tried to do over my science communication career is to derive a deep understanding of the logic and methods of good science vs bad science from my own field of expertise, medicine. This allows me to better apply those general principles to other areas. At the same time I have tried to develop expertise in the philosophy of science, and understanding the difference between science and pseudoscience.

In her response video Hossenfelder is partly trying to do the same thing, take generic lessons from her field and apply them to climate science (while acknowledging that she is not a climate scientist). Her main point is that, in the past, physicists had grossly underestimated the uncertainty of certain measurements they were making (such as the half-life of neutrons outside a nucleus). The true value ended up being outside the earlier uncertainty range – how did that happen? Her conclusion was that it was likely confirmation bias – once a value was determined (even if just preliminary) then confirmation bias kicks in. You tend to accept later evidence that supports the earlier preliminary evidence while investigating more robustly any results that are outside this range.

Here is what makes confirmation bias so tricky and often hard to detect. The logic and methods used to question unwanted or unexpected results may be legitimate. But there is often some subjective judgement involved in deciding which methods are best or most appropriate, and there can be bias in how they are applied. It's like P-hacking – the statistical methods used may be individually reasonable, but if you are using them after looking at the data, their application will be biased. Hossenfelder correctly, in my opinion, recommends deciding on all research methods before looking at any data. The same recommendation now exists in medicine, with pre-registration of methods before collecting data and reviewers now looking at how well this process was complied with.
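The generic point is easy to demonstrate with a toy simulation of my own (nothing to do with any actual climate analysis): allow yourself several post-hoc analysis choices on pure noise and the false-positive rate climbs above the nominal 5%.

```python
# Compare a single pre-registered test with a "flexible" analysis that tries several
# post-hoc sample cutoffs and keeps the best p-value - on data with no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, cutoffs = 2000, (20, 30, 40, 50, 60)
prereg_hits = flexible_hits = 0

for _ in range(n_sims):
    a = rng.normal(0, 1, 60)
    b = rng.normal(0, 1, 60)
    if stats.ttest_ind(a, b).pvalue < 0.05:      # one analysis, fixed in advance
        prereg_hits += 1
    best_p = min(stats.ttest_ind(a[:k], b[:k]).pvalue for k in cutoffs)
    if best_p < 0.05:                             # pick whichever cutoff "worked"
        flexible_hits += 1

print(f"Pre-registered false positives:   {prereg_hits / n_sims:.1%}")
print(f"Flexible-analysis false positives: {flexible_hits / n_sims:.1%}")
```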

So Hausfather and Dessler make valid points in their response to Hossenfelder, but interestingly this does not negate her point. Their points can be legitimate in and of themselves, but biased in their application. The climate scientists point out (as others have) that the newer hot models do a relatively poor job of predicting historic temperatures and also a poor job of modeling the most recent glacial maximum. That sounds like a valid point. Some climate scientists have therefore recommended that, when all the climate models are averaged together to produce a probability curve for ECS, models which are better at predicting historic temperatures be weighted more heavily than models that do a poor job. Again, sounds reasonable.

But – this does not negate Hossenfelder’s point. They decided to weight climate models after some of the recent models were creating a problem by running hot. They were “fixing” the “problem” of hot models. Would they have decided to weight models if there weren’t a problem with hot models? Is this just confirmation bias?

None of this means that their fix is wrong, or that the hot models are right. But what it means is that climate scientists should acknowledge exactly what they are doing. This opens the door to controlling for any potential confirmation bias. The way this works (again, a generic scientific principle that could apply to any field) is to look at fresh data. Climate scientists need to agree on a consensus method – which models to look at, how to weight their results – and then do a fresh analysis including new data. Any time you make any change to your methods after looking at the data, you cannot really depend on the results. At best you have created a hypothesis – maybe this new method will give more accurate results – but then you have to confirm that method by applying it to fresh data.
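For what it's worth, here is a minimal sketch of what a skill-weighted combination could look like, using invented model ECS values and invented hindcast errors; it illustrates the general idea, not the weighting scheme climate scientists actually use.

```python
# Down-weight models whose hindcast of historic warming is poor, then combine ECS values.
import numpy as np

model_ecs = np.array([2.9, 3.2, 3.5, 5.2, 5.6])            # hypothetical ECS per model (C)
hindcast_error = np.array([0.08, 0.10, 0.12, 0.30, 0.35])  # hypothetical error vs. observed warming (C)

weights = np.exp(-(hindcast_error / 0.15) ** 2)   # one arbitrary choice of skill kernel
weights /= weights.sum()

print(f"Unweighted mean ECS:     {model_ecs.mean():.2f} C")
print(f"Skill-weighted mean ECS: {np.dot(weights, model_ecs):.2f} C")
```

Note that nothing in the arithmetic tells you whether down-weighting the hot models was justified; that is precisely the judgment call where confirmation bias can sneak in.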

Perhaps climate scientists are doing this kind of pre-specified re-analysis (I suspect they will eventually), although Hausfather and Dessler did not explicitly address it in their response.

It’s all a great conversation to have. Every scientific field, no matter how legitimate, could benefit from this kind of scrutiny and questioning. Science is hard, and there are many ways  bias can slip in. It’s good for scientists in every field to have a deep and subtle understanding of statistical pitfalls, how to minimize confirmation bias and p-hacking, and the nature of pseudoscience.

The post Climate Sensitivity and Confirmation Bias first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #973 - Mar 2 2024

Skeptics Guide to the Universe Feed - Sat, 03/02/2024 - 8:00am
News Items: First Private Landing on the Moon, Sex Difference in the Brain, Bee Venom for Breast Cancer, Learning Empathy, Brightest Object; Who's That Noisy; Quotation Game; Your Questions and E-mails: Correction; Science or Fiction
Categories: Skeptic

Byron Reese — How Humanity Functions as a Single Superorganism

Skeptic.com feed - Sat, 03/02/2024 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss410_Byron_Reese_2024_03_02.mp3 Download MP3

Could humans unknowingly be a part of a larger superorganism—one with its own motivations and goals, one that is alive, and conscious, and has the power to shape the future of our species? This is the fascinating theory from author and futurist Byron Reese, who calls this human superorganism “Agora.”

In We Are Agora, Reese starts by asking the question, “What is life and how did it form?” From there, he looks at how multicellular life came about, how consciousness emerged, and how other superorganisms in nature have formed. Then, he poses eight big questions based on the Agora theory, including:

  • If ants have colonies, bees have hives, and we have our bodies, how does Agora manifest itself? Does it have a body?
  • Can Agora explain things that happen that are both under our control and near universally undesirable, such as war?
  • How can Agora theory explain long-term progress we’ve made in the world?

In this unique and ambitious work that spans all of human history and looks boldly into its future, Reese melds science and history to look at the human species from a fresh new perspective. We Are Agora will give readers a better understanding of where we’ve been, where we’re going, and how our fates are intertwined.

Byron Reese is an Austin-based entrepreneur with a quarter-century of experience building and running technology companies. A recognized authority on AI who holds a number of technology patents, Byron is a futurist with a strong conviction that technology will help bring about a new golden age of humanity. He gives talks around the world about how technology is changing work, education, and culture. He is the author of four books on technology; his previous title The Fourth Age was described by the New York Times as “entertaining and engaging.” Bloomberg Businessweek credits Reese with having “quietly pioneered a new breed of media company.” The Financial Times reported that he “is typical of the new wave of internet entrepreneurs out to turn the economics of the media industry on its head.” He and his work have been featured in hundreds of news outlets, including the New York Times, Washington Post, Entrepreneur, USA Today, Reader’s Digest, and NPR.

Shermer and Reese discuss:

  • What is an organism and what is a superorganism?
  • What is life?
  • Why do things die?
  • the origins of life, multicellular life, and complex organisms
  • What is the self?
  • emergence
  • consciousness
  • social insects: bees, ants, termites
  • Is the Internet a superorganism?
  • Will AI create a superorganism?
  • Is AI an existential threat?
  • Could AI become sentient or conscious?
  • the hard problem of consciousness
  • cities as superorganisms
  • planetary superorganisms
  • Are we living in a simulation?
  • Why are we here?

If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.

Categories: Critical Thinking, Skeptic
