The Skeptics Guide #645 - Nov 18 2017

Skeptics Guide to the Universe Feed - Sat, 11/18/2017 - 8:00am
Forgotten Superheroes of Science: Mary Swartz Rose; News Items: Scientists' Warning, Smart Pills, Fact Checking on Facebook, Fast Electron Emissions; Who's That Noisy; Your Questions and E-mails: German; Science or Fiction
Categories: Skeptic

Alternative healer proposes legitimate species name for Bigfoot

The Doubtful News Feed - Fri, 11/17/2017 - 7:09pm
An alternative healer with a PhD and background in entomology (study of insects) has exploited an opportunity to officially name Bigfoot as a new species. The being portrayed in a famous film that some believe depicts an actual Bigfoot has been designated the type specimen for the as-yet non-corporeal animal. Dr. Erich Hunter, who specializes…
Categories: Skeptic

The Ethics of Head Transplants

neurologicablog Feed - Fri, 11/17/2017 - 4:43am

Newsweek, which has been following the story of Italian neurosurgeon Sergio Canavero, now reports: “Human Head Transplants Are About to Happen in China: But Where Are the Bodies Coming From?”

I have already discussed the scientific aspects of this claim. They are highly implausible, and I doubt that such a transplant is about to happen at all. If it does, I predict it will be a dismal failure, and ethically dubious. First, I have to reiterate that it is far more accurate to call such a procedure a body transplant. The head donor will wake up with a new body. The body donor is, I suspect, dead.

There are three basic hurdles that need to be overcome in order to have a successful body transplant – the surgical attachment, suppression of rejection, and regeneration of the attached neurological tissue. Given that Canavero is a surgeon, I suspect he is excited about the first issue. He may think he has made some advances because he improved his technique for making the attachment. This was never, however, the primary hurdle.

We are already making great advances with organ transplantation and controlling rejection. However, this is still a huge issue. Donor and recipient have to be closely matched, and lifelong drugs are required. Still, the amount of tissue being transplanted here will be a challenge. It opens up, for the first time, the possible effects of tissue rejection on an entire brain. While this is a significant hurdle, our current treatments mean it is not necessarily a deal breaker (it might be, but research would be needed to see).

The real deal breaker here is the third issue – getting the neurological tissue to regenerate, specifically the spinal cord and various nerves. If you cannot get the head and body to communicate through the spinal cord, you will essentially be creating a quadriplegic – the body recipient will be entirely paralyzed and will need to be on a ventilator.

Canavero is claiming that he has farmed out this research to another team, who have made such progress that he can proceed with a transplant. To say I find that hard to believe is an understatement. Such an advance would be truly enormous. There are research teams around the world working on spinal cord regeneration with modest progress – so how has Canavero’s mystery team left them all in the dust, without leaving a paper trail behind in the published literature?

Further, if Canavero’s team has solved the spinal cord regeneration problem (they haven’t, but just hypothetically), using that technology for a body transplant would be ridiculous. How about using it to help the half a million people who suffer a spinal cord injury each year? So either they are lying, or they are withholding such a technology, which is ethically monstrous.

Why China?

Canavero was unable to continue with his research in Italy, so now has moved to China where he claims to have the support of the Chinese government, and further claims he plans to proceed with the first “head transplant” next month. But why China? Newsweek speculates:

It is our suspicion that the authorities in China supporting this procedure are doing so wagering that a successful transplant will demonstrate to the world the dazzling level of technological achievement in the country. Perhaps it will. At a minimum, this procedure reveals that Chinese authorities believe there is no cost too high for raising China’s profile on the world stage.

But it also reveals something else that we think is important: cultural values determine what kinds of scientific research happens and where it happens.

If true, I predict their gamble will backfire. They are essentially backing a crank with impossible claims, and doing so does not make them look like they have advanced technology. Rather, it makes them look gullible and backward. They may think it’s worth the gamble, however. If it fails, that failure will mainly be ignored or discussed on the fringe. If it succeeds, they will stun the world. They will also likely distance themselves from any failure, but fully embrace any success.

Newsweek also brings up another issue. Where are the bodies coming from? China does not have a specific definition of death, and is not in line with the Western world on this issue. This is critical for organ donation. A donor needs to be officially dead, and proper consent must have been given, either by the donor prior to death or by their family.

Overall China does not have a great reputation for medical ethics and regulation. This is why they are a major center for fraudulent stem cell clinics, for example. There is potentially much overlap here. Chinese stem cell clinics, selling dubious treatments without proper published evidence to back them up, and without proper transparency, are presented by China as evidence that they are on the cutting edge. In reality they exist to lure desperate Westerners in for fake treatments costing tens of thousands of dollars. Perhaps China is looking to expand into the body transplant market.

In fact China is becoming a center for high tech medical quackery, which makes them a perfect fit for Canavero. I suspect this will play out like other dubious claims of technological advancement (like free energy) – delay, excuse, unsubstantiated claims of success, followed by evasion, then further excuses. Along the way someone will be conned out of money somewhere.

Categories: Skeptic

Dr. Robert Trivers — Evolutionary Theory & Human Nature feed - Thu, 11/16/2017 - 2:00pm

Dr. Robert Trivers and Dr. Michael Shermer have a lively conversation on everything from evolutionary theory and human nature to how to win a knife fight and Trivers’ membership in the Black Panthers. Don’t miss this engaging exchange with one of the most interesting scientists of the past half century.

This Science Salon followed Dr. Robert Trivers’ lecture on ‘The Evolutionary Genetics of Honor Killings,’ which he gave in Dr. Michael Shermer’s Skepticism 101 course at Chapman University on Thursday November 16, 2017:

Categories: Critical Thinking, Skeptic

John Oliver Nails Trump

neurologicablog Feed - Thu, 11/16/2017 - 5:10am

In the season finale of Last Week Tonight, John Oliver reviews Trump’s assault on truth and decency. If you haven’t been watching this show, you should give it a try. Not only is it funny and entertaining, but on each episode Oliver does a deep dive on something in our society that is not right and can be fixed. His researchers generally do a great job, and I also think Oliver does a good job of not being gratuitously partisan.

His season-long attacks on Trump may not make it seem that way, but I don’t think they are partisan. I also try to keep my personal politics out of my science advocacy, but I think the problems with Trump transcend politics, ideology, and party. In this last episode for the season, Oliver reviews why this is true.

The real problem with Trump is not that he is Republican or conservative – actually you could argue that he is barely either of those things. It’s not even necessarily that he is an anti-establishment populist who wants to shake things up. The real danger of Trump is that he is an anti-intellectual who has been waging war against journalism, expertise, decency, standards, and any notion of objectivity.

For Trump the only thing that appears to matter is the current struggle in which he is engaged – he needs to achieve what he perceives as victory over any adversary, at any cost. Being honest and respecting knowledge and accuracy doesn’t seem to factor in at all.

As a result Trump is willing to sacrifice the basic fabric that is necessary for a functional democracy. He seems to view democratic checks and balances as nothing but an annoyance and obstacle, so eroding that fabric is just another win for him.

Many people, including many conservatives who have not caved to the insanity, have enumerated the numerous ways in which Trump erodes the shared norms on which our society depends. Oliver’s breakdown may not be the only way to do it, but it is as good as any. He highlights three strategies Trump uses to erode those standards. Actually, I think referring to anything Trump does as a “strategy” is giving him too much credit. These, rather, are the habits that Trump has adopted which have the effect of undermining our society.

The first is to delegitimize the media. Of course, news outlets are not without fault. They have their own biases and are rife with quality-control issues. Half of what I do on this blog is correct bad reporting about science. The way to deal with this, however, is to call them out on their errors and bias in a way that respects the institution and vital role of journalism itself.

Trump doesn’t do this. He attacks entire news organizations as “fake news.” The term “fake news” has become a shield against anything Trump doesn’t like or finds inconvenient. When news organizations reveal legitimate information, or ask the kind of questions they should be asking of a world leader, Trump’s response is to delegitimize them, denounce them as fake, and even flirt with the idea of banning them from the press room, delicensing them, or “opening up the libel laws” so he can more effectively threaten them.

At the same time he promotes the one news outlet that is essentially functioning as a propaganda arm of the White House. It is clear that Trump would love to have a single state media outlet that toes his party line, and to ban all other media that would challenge him on anything.

The second method Trump and his defenders use is diversion and distraction, what Oliver calls “whataboutism.” Skeptics will recognize this strategy as the tu quoque logical fallacy – defending one action by pointing to someone else who is engaged in something similar. This is only legitimate to the extent that it points out actual hypocrisy; it is not a defense of ethically wrong behavior, flawed logic, or bad evidence. The recent allegations about Moore are a great example. Moore is now accused by multiple women of having relations with teenage girls, at least one of whom was underage – accusations supported by reports that he was banned from a local mall for cruising high school girls. The defense? Well, what about Bill Clinton? Even if you accept as true that Democrats hypocritically gave Clinton a pass on his behavior, that does nothing to excuse Moore’s alleged behavior.

Whataboutism is part of a larger strategy of diversion – to distract from the actual issues with irrelevant dog whistles and appeals to emotion and tribalism. On countless occasions over the last two years I have been in conversations with Trump supporters, asked them about a specific policy point or failing of Trump, and their response was, “What about Hillary Clinton?” They still do it, even though she lost and is now politically irrelevant. The demonization of Hillary was so much a part of their support of Trump, they simply can’t let it go. Trump can’t let it go – he wants to use the Justice Department to punish his political opponent and continue the demonization. It is a convenient distraction whenever anyone has concerns about his blatant incompetence.

Finally, Oliver points out that Trump is essentially a troll. He is the first troll president, who won election by literally trolling his opponents and the media. By troll it is meant that Trump says things not to put forward a serious argument, based in logic and fact, but to have an emotional effect. He does it to upset anyone he perceives as an opponent or obstacle. He even does it to allies, just to keep them in their place.

This provides deniability to anything Trump says. He can never be held to a specific position, because he is generally incoherent. How many times has Trump said something outrageous? The media and the public are left scratching their heads – it sounds like Trump just said he thinks neo-Nazis are OK. Did he really just say that? Then his spokespeople take to the airwaves to reinterpret what Trump said, and when Trump is confronted he gives vague and incoherent responses that just muddy the waters even further.

Watching one of his people on talk shows gives me a flashback of reading 1984. It is all newspeak and double talk. Deny, distract, divert, confuse. For anyone who cares about being precise and accurate in communication, it is a nightmare.

The scary thing is that Trump is affecting the baseline norm for society, not just for himself. His behavior is metastasizing. I actually don’t think Trump originated this behavior. Much of it has always been around to some degree, and has been greatly amplified by social media. I think Trump is just a social media troll who inherited a marketable name and a lot of money. He found out how to troll his way into politics, at a vulnerable time when we are in social transition.

But he is exacerbating the problem by orders of magnitude. There is an optimistic view, however. Trump is shining a bright light on all the problems with trolling, fake news, and anti-intellectualism. He is also too incompetent to take maximal advantage of his position. I can only hope this will limit the damage he is doing. But hopefully the attention he is bringing to the problem will lead to a backlash, and a rededication to the norms of respect for truth, transparency, and scholarship that are necessary for a functional democracy.

Categories: Skeptic

Cause & Effect: The CFI Newsletter - No. 93

Center for Inquiry News - Wed, 11/15/2017 - 9:53am

Cause & Effect is the biweekly newsletter of the Center for Inquiry community, covering the wide range of work that you help make possible. Become a member today!

The Top Stories 

Biology Teachers Honor TIES’ Bertha Vazquez

Each year, the National Association of Biology Teachers (NABT) honors individual teachers for their outstanding contributions to this important field of work. One of those awards is the Evolution Education Award, honoring the teacher whose classroom and community efforts have advanced the public’s accurate (and that’s important) understanding of evolution.

We are proud to tell you that the recipient of the 2017 Evolution Education Award was none other than Bertha Vazquez, the powerhouse middle school science teacher who heads the Richard Dawkins Foundation’s Teacher Institute for Evolutionary Science (TIES). Bertha was given the award at the NABT’s conference last week in St. Louis.

“It’s an honor to be recognized by such a wonderful organization, the National Association of Biology Teachers,” said Bertha. “The success of TIES is the result of the efforts of my fellow TIES Teacher Corps Members. We all recognize the importance of teachers helping teachers.”

As for TIES itself, it has kept up its incredible pace of activity and growth. TIES teacher Gemma Mora-Azuar recently ran a workshop in Houston that trained a record seventy teachers. That same weekend, TIES workshops took place in Columbia, South Carolina; Santa Barbara, California; and Panama City, Florida.

Congratulations to Bertha and all the great teachers powering the work of TIES.


The House GOP Budget’s Church-State Sneak Attack

The wall between church and state is the fortification that keeps our democracy from collapsing into theocracy, and the religious Right is always looking for ways to breach the wall where they think it weakest. The Johnson Amendment, to belabor this siege metaphor, is akin to a long-range weapon along the parapets of the wall, archers let’s say. It’s a part of federal law that forbids tax-exempt nonprofits (such as churches and the Center for Inquiry) from endorsing or opposing candidates for political office. Though it has proven to be a difficult law to enforce properly, the very presence of the archers has kept most of the enemy at bay.

That’s all changed. While those bows and arrows dissuaded most individual churches from engaging in electioneering (and indeed the majority of religious organizations support the Johnson Amendment), a well-funded and maniacally obsessed army of hard-right religious groups is not so easily spooked. Hurling massive payloads of political influence and lobbying dollars from their cross-shaped trebuchet, they aim to knock the archers of the Johnson Amendment from their perch. If they go down, the entire wall will not collapse, but gates of another kind would certainly fly open: the floodgates of political cash, flowing directly toward those churches that wish to become de facto political action committees for religious-Right candidates. (Sort of like a freelance cavalry? This metaphor may be going too far.)

A huge volley was flung toward the wall at the beginning of this month, when the House GOP released their proposed tax reform bill, which includes a provision revoking the restrictions of the Johnson Amendment for churches, allowing these institutions to openly support candidates for office. Donations to these churches would remain tax-exempt, and churches have tremendous leeway when it comes to what they use their donations for and how much they reveal about it to the public. That’s how these churches then become unofficial arms of a given candidate’s campaign.

Clergy, church employees, and members of congregations have never been prohibited from speaking their minds and supporting the candidates of their choice, despite the shouts and complaints of the religious Right and their allies in Congress and in the White House. The Johnson Amendment limits the activities of the institutions themselves, not individuals.

So it’s up to us to shore up this crucial part of the wall of separation. Luckily, most religious organizations and most of the American people agree with us: Churches shouldn’t meddle in elections. We put out an action alert to make it easy for you to contact your representatives and tell them that you want to keep those defenses strong.

Check out these great articles from Vox and ThinkProgress citing CFI’s new director of government affairs, Jason Lemieux.


News from the CFI Community

Paul Ryan’s Prayer Fallback: Sad!

After the horrific shooting massacre that left twenty-six dead in a Texas church, our leaders reverted to many of the same arguments and platitudes that almost always follow tragedies like these. While many are once again driven to take actions that would prevent further needless slaughters, others warn us that “now is not the time” and default to the now-cliched “thoughts and prayers.”

But as secularists and skeptics, we know that prayers are never enough. Certainly individuals can find comfort in their prayers, and we hope that everyone affected by these tragedies can find whatever means they can to help themselves and their loved ones heal. But prayer is not a solution to the crises that we face. It couldn’t possibly be.

But try telling that to the Speaker of the House of Representatives. On Fox News’s The Ingraham Angle, Speaker Paul Ryan berated those who have criticized him for solely relying on prayer and expressions of piousness whenever real-world problems overwhelm us. He proclaimed his “disappointment” with the “secular left,” saying how “sad” it is that we “don’t understand faith.” And then he told Ingraham:

And it is the right thing to do, is to pray in moments like this because you know what? Prayer works. And when you hear the secular left doing this thing, no wonder you’ve got so much polarization and disunity in this country when people think like that.

There’s a lot wrong with those sentences, not least of which was the idea that secular Americans who want reality-based solutions to our nation’s problems are somehow responsible for political polarization and division.

But more importantly, Ryan made the claim that “prayer works.” In our official statement, we reminded the Speaker that in fact there is no evidence that praying has any effect on earthly events, apart from an individual’s personal solace from the practice. Prayer certainly won’t fix the national emergency of gun violence.

As our president and CEO, Robyn Blumner, said, “Speaker Ryan’s imperative is to use his influence and power as our highest ranking legislator to create real, positive change that keeps all Americans safe while upholding our nation’s highest ideals. That he chooses this moment to belittle secular Americans, or Americans of any religious affiliation, is what is truly sad.”

Go check out our action alert and tell the Speaker that actions speak louder than prayer.


Some Justice—Maybe—for Avijit Roy

Four years ago, the Center for Inquiry formed a bond with a brilliant writer and science communicator in Bangladesh, Avijit Roy. Several atheist bloggers had been arrested at the time for “hurting religious feelings,” and we coordinated with Avijit to organize protests in locations around the world to demand their release.

In 2015, Avijit was murdered. He and his wife Rafida Bonya Ahmed were ambushed by Islamist militants at a book fair in Dhaka. Bonya suffered terrible injuries, and Avijit was hacked to death. It was a killing that heralded a wave of such murders in the months and years following, targeting secularist writers and activists. As a way to honor the memory of our friend Avijit Roy, and to support the cause of free expression, the Center for Inquiry established Secular Rescue, an initiative to provide assistance and relocation to activists whose lives are threatened by religious extremists. More than thirty people have been helped through this program so far.

Last week it was reported that one of the men believed to have participated in the attack on Avijit and Bonya has been arrested by Bangladeshi authorities. Abu Siddiq Sohel, allegedly a member of the al Qaeda–linked organization Ansar Ullah Bangla Team, reportedly told police that he had been a part of the attack.

We have no way of confirming the accuracy of these reports. However, we do have more than sufficient reason to be skeptical of any official news surrounding the murders of secularist activists, as the government of Bangladesh has been overtly hostile toward the victims of these attacks, blaming them for their own deaths and denouncing their writings. We declared the official response to the killings “appalling” and called upon the authorities to defend free expression rather than foment anger and hate.

For the sake of Avijit’s family and friends, secularist writers in Bangladesh, and for the sake of justice, we truly hope that all of Avijit’s killers will be apprehended and fairly tried. We’ll be watching these events closely as they develop.


Secular Rescue Success Story: Lubna Ahmed

Because of the nature of CFI’s Secular Rescue program, though we assist secularist writers and activists whose lives are threatened by extremists, it’s not always possible to publicly celebrate its successes. Even after someone has been relocated out of immediate harm’s way, it may be necessary for them to maintain a low profile for the sake of their own safety or that of their family.

How rewarding it is, then, when one of these brilliant and courageous people can step into the spotlight, tell their story, and continue to fight for their cause. On Monday, Lubna Ahmed, a human rights activist and chemical engineering student from Iraq, was the guest on The Rubin Report, where she eloquently described her struggle as an atheist living under threat in an ultra-conservative Islamic society.

“[The Center for Inquiry is] helping me, they supported me … They supported my case with the lawyers,” she told Rubin, when asked about the financial and legal assistance the Secular Rescue program provided. “I’m very grateful to Mr. Richard Dawkins. He takes his words into actions, not like others, because he sees that it’s very important to save people who are like me.”

We do believe in Lubna and all those who are striving to advance reason and secularism in the places most hostile to them. And we’ll keep doing all we can to help.


CFI Highlights on the Web

On Halloween, the Science Channel show Strange Evidence provided an example of how not to handle actual experts on the subject of extraordinary claims. CFI’s Joe Nickell and Tom Flynn were both asked to examine video purported to be evidence of Bigfoot, but as Joe explains in his post, the video he and Tom saw was not what they aired on the program, and in neither video was there proof of any Sasquatch to be found.

Joe has a lot more of substance to offer on this cryptozoological subject from his article in Skeptical Inquirer on the evolution of the Bigfoot myth, how it borrows and adapts from various cultures’ beliefs and legends, and has been molded by the “shrinking” of the planet.

Joe also recounts his visit to a Las Vegas–area saloon after CSICon last month, where legends tell of ghosts that haunt every part of the establishment, including the restrooms. As you could probably guess, Joe concludes, “There is only one kind of spirits at the Pioneer, the kind poured into a glass.”

Tamar Wilner does a true service for consumers of media, as she teases out the various strains of misinformation in the news media, most of which is being categorized as “fake news,” which she says “is a wooden mallet. It’s blunt. It can only smash, not carve, pluck, or hold up for inspection. The more we use it, the more dulled we all seem to its effects.”

Can chiropractors cure diabetes? No, but it makes for good marketing copy. William London looks at the various gimmicks that are bandied about to promote chiropractic, including the claim that diabetes can be “reversed” by chiropractors.

Consistently enlightening are Ben Radford‘s correspondences with those who vehemently disagree with his skeptical take. Here, he delicately engages in an exchange with someone who is sure that psychic powers exist (and something about Thomas Edison having been thought of as a “lunatic” in his day).

From Free Inquiry’s special issue on blasphemy in art, Bruce Adams tells of his own development as a blasphemous creative artist, and how his aim is to irreverently confront difficult issues through his art. The blasphemy is just a bonus.

In celebration of the anniversary of Carl Sagan’s birth, Inverse highlights an excerpt from a piece written by Ann Druyan about her life with Sagan for Skeptical Inquirer in 2003. “I don’t think I’ll ever see Carl again,” she wrote. “But I saw him. We saw each other. We found each other in the cosmos, and that was wonderful.”

And of course, you can keep up with news relevant to skeptics and seculars every weekday with The Morning Heresy.

Upcoming CFI Events

CFI Michigan

  • November 20: Interfaith Thanksgiving Service in Grand Rapids.
  • December 9: Secular Service time, helping out the nonprofit Kids’ Food Basket as they address childhood hunger through their Sack Supper program.
  • December 13: Solstice Dinner in Grand Rapids.
  • December 16: Solstice Dinner in Madison Heights.


Thank you!

Everything we do at CFI is made possible by you and your support. Let’s keep working together for science, reason, and secular values.  Donate today!

Fortnightly updates not enough? Of course they’re not.

       •  Follow CFI on Twitter.

       •  Like us on Facebook

       •  Encircle us on Google+

       •  Subscribe to us on YouTube.


Cause & Effect: The Center for Inquiry Newsletter is edited by Paul Fidalgo, Center for Inquiry communications director.

The Center for Inquiry (CFI) is a nonprofit educational, advocacy, and research organization headquartered in Amherst, New York, with executive offices in Washington, D.C. It is also home to the Committee for Skeptical Inquiry, the Council for Secular Humanism, and the Richard Dawkins Foundation for Reason & Science. The mission of CFI is to foster a secular society based on science, reason, freedom of inquiry, and humanist values. Visit CFI on the web at 


Categories: Skeptic

eSkeptic for November 15, 2017 feed - Wed, 11/15/2017 - 12:00am

In this week’s eSkeptic:

SHERMER SHREDS Mass Public Shootings & Gun Violence: Part I

At 59 dead and over 540 wounded, the Las Vegas massacre that took place on October 1, 2017, is now the worst mass public shooting in U.S. history.

As is usually the case with such gun-related tragedies, within hours social media and political punditry were abuzz with talk of gun control and Second Amendment rights, with both the left and the right marshaling their data and arguments. The two most common arguments made in defense of gun ownership are (1) self-protection and (2) a bulwark against tyranny.

In this video, Michael Shermer “shreds” these ideas with skeptical scrutiny.


BABA BRINKMAN’S SKEPTIC RAP Rap Artist Performs Science-Based Hip-Hop

Baba Brinkman is a Canadian rap artist based in New York. He is best known for his “Rap Guide” series of science-based hip-hop albums and theater shows, including Rap Guides to Evolution, Climate Change, and Religion.

The world premiere of this rap was performed at a live variety science show hosted by Dr. Michael Shermer, in partnership with YouTube Space NY in late September 2017, celebrating 25 years of Skeptic magazine and the Skeptics Society combating ‘fake news.’ The event explored the question: ‘How Can We Know What’s True?’.


A NEW STORY! How Brian Brushwood Became a Card-Carrying Skeptic

As we announced a few weeks ago in eSkeptic, we asked several friends to tell us about those “aha!” moments that led to their becoming skeptical thinkers. As promised, here is another one of their incredible stories on YouTube. Enjoy!

American magician, podcaster, author, lecturer, and comedian, Brian Brushwood is the host of Scam School for Discovery, Hacking the System for National Geographic, and co-host of The Modern Rogue. He is the author of several books including: Scam School: Your Guide to Scoring Free Drinks, Doing Magic & Becoming the Life of the Party, and The Professional’s Guide to Fire Eating.


Tell us your story and become a card-carrying skeptic! Thank you for being a part of our first 25 years. We look forward to seeing you over the next 25. —SKEPTIC

Become a Card-Carrying Skeptic

The Crypto-Kid

In this episode of MonsterTalk, we interview cryptozoology enthusiast Colin Schneider, a young researcher of Fortean and paranormal topics, about his research into animal exsanguination. It’s a fun discussion of the field of cryptozoology, the disturbing topic of animal mutilation, and the work done by the British organization the Centre for Fortean Zoology.

Listen to episode 141

Read the episode notes

Subscribe on iTunes

Get the MonsterTalk Podcast App and enjoy the science show about monsters on your handheld devices! Available for iOS, Android, and Windows. Subscribe to MonsterTalk for free on iTunes.

Sigmund Freud (1926). Photo by Ferdinand Schmutzer [Public domain], via Wikimedia Commons

In this week’s eSkeptic, Margret Schaefer reviews Freud: The Making of an Illusion, in which its author, Frederick Crews, convincingly argues that Freud constructed psychoanalysis on a fraudulent foundation. How did Freud convince so many people of the correctness and the profundity of his theory?

The Wizardry of Freud

by Margret Schaefer


Categories: Critical Thinking, Skeptic

The Wizardry of Freud feed - Tue, 11/14/2017 - 10:30am

“Clear evidence of falsification of data should now close the door on this damaging claim.”

The above is from a 2011 British Medical Journal article about Andrew Wakefield, the British physician whose “discovery” of a link between vaccination and autism fueled a worldwide anti-vaccination movement. After its publication in 1998, the paper’s results were contradicted by many reputable scientific studies, and Wakefield’s work was ultimately shown to be not only bad science but a fraud as well: the UK General Medical Council found him guilty of dishonestly misrepresenting his data, removed him from the medical register, and barred him from practice.

In his new book, Freud: The Making of an Illusion, Frederick Crews presents a Freud who was just such a fraud and who deserves the same fate. This is not the first time that Crews, a bona fide skeptic whose last book, Follies of the Wise: Dissenting Essays (2007), was reviewed in the pages of this journal, has written critically about Freud. Crews had been drawn to psychoanalysis himself (disclosure: this reviewer was, too) in the 1960s and early 1970s when, along with the late Norman Holland, he pretty much created the field of psychoanalytic literary criticism. But a prestigious fellowship to the Stanford Center for Advanced Study in the Behavioral Sciences (he was a professor of English at UC Berkeley at the time) gave him time to delve deeper into Freud, and convinced him instead that psychoanalysis was unscientific and untenable. Since then he has contributed to the growing skeptical scholarly and historical scholarship on Freud.

Philosophers of science have indicted key concepts of Freud’s psychoanalysis such as “free association,” “repression,” and “resistance” as circular and fatally flawed by confirmation bias. Historians have tracked down the actual patients whose treatment served Freud as evidence for his theories and have sought to place Freud and his theories in the historical and cultural context of his time. Crews—to his own surprise—became well known as a major, if not the major, critic of Freud in the public eye because of a series of articles he published in the New York Review of Books in the 1990s. For Crews is that now all too rare and rapidly disappearing creature—the public intellectual—who is able to explain and make accessible an otherwise unwieldy amount of erudite scholarship in clear, elegant, and jargon-free prose. Defenders of Freud have sought to discredit him as a “Freud basher,” thereby continuing the (not so honorable) tradition that Freud began of questioning the motives of a skeptic and attributing it to “resistance” instead of answering his objections.

This is precisely one of the reasons that in previous books Crews has said that psychoanalysis is not only pseudoscience (as most philosophers of science agree, though for different reasons) but “the queen of pseudosciences”: it is the only one that incorporates within its theory an explanation of why some people refuse to believe it, i.e. “unconscious resistance,” which must itself be explained by Freud’s own ideas and methods—a most brilliant and masterful way of disarming criticism.

His new book, a biography of the first half of Freud’s life, with intensive focus on the period 1882–1900, examines the crucial years in which Freud was creating his “science of psychoanalysis,” culminating in his Studies on Hysteria (1895) and The Interpretation of Dreams (1900). This period of Freud’s life has been somewhat neglected by his biographers for many reasons, including a lack of available biographical information, but also because in these years Freud developed a theory of neurosis that he later said he abandoned. But Crews argues that all the principal concepts on which psychoanalysis rests were constructed at this early time. His logic is that if the roots of a tree are not sound, then the crown, no matter how beautiful and different from the roots, cannot be healthy. And recently a treasure trove of new data about Freud during this period has been released from censorship: the complete correspondence between the young Sigmund Freud and his fiancée Martha Bernays during the long four and a half years of their engagement, 1882–1886.

This correspondence, which consists of an astounding 1,539 letters, had been concealed from public view for some 60 years. Only a very small portion—97 letters, or 6.3% of them—had been previously published, and those in expurgated form. Their importance is attested by the fact that Anna Freud, his daughter, kept them private at her house in London instead of depositing them in the Freud Archive along with the rest of Freud’s papers after her father’s death in 1939. It was not until her own death in 1982 that her heirs finally deposited them in the Freud Archive—but even then with the stipulation that access to them be restricted until the year 2000. Why was this correspondence hidden for so long? The letters’ content makes it clear: they don’t paint a flattering portrait of Freud. Reading these letters after they became available on the Library of Congress website spurred Crews to write this book, as it confirmed to him all the suspicions about Freud’s motives and manner of working that he and others had raised before but had had to leave somewhat speculative: the fraudulent and pseudoscientific evidential base on which psychoanalysis rests.

Despite its nearly 700-page length and 22 pages of footnotes, Crews’ book is divided into sections with witty titles such as “Sigmund the Unready,” “Tending to Goldfish,” and “Girl Trouble” and is thoroughly absorbing and highly readable. He begins with an examination of Freud’s family history and early education, detailing the reasons why Freud was “unready” to undertake the study of medicine, and then focuses on Freud’s “First Temptation”: cocaine. Freud’s enthusiastic endorsement—and use of—cocaine, Crews contends, had a much greater consequence for the theory of psychoanalysis than is officially recognized. It was not a soon-to-be-discarded “youthful indiscretion,” as Ernest Jones called it in his official 1957 biography of Freud, for Freud continued to use cocaine regularly, almost daily, not just occasionally, for some 15 years. Crews details Freud’s early experiments with the substance, and documents his disastrous attempt to help ease his best friend Fleischl’s withdrawal from morphine addiction by means of injections of cocaine. Meant as a kindness, it became the opposite, as Freud ignored every sign that it was not working and was blatantly harming his friend instead. Later, Freud dishonestly claimed to have cured Fleischl, when in fact his friend tragically deteriorated while undergoing Freud’s treatment, and finally died in great pain with two addictions instead of one: morphine and cocaine. The details of what happened to Fleischl are gruesome to read, and Crews sees Freud’s tenacious clinging to a pet theory and ignoring any evidence to the contrary, no matter how devastating, as characteristic of him throughout his life from then on.

As Freud wrote Martha while recommending it to her, he used cocaine to alleviate his many physical and emotional symptoms, which ranged from headaches, stomach aches, and sciatica to recurring depressions and intermittent “bad moods” punctuated by periods of elation. It consoled him for his loneliness in Paris while studying with Charcot, and gave him the self-confidence that he mostly lacked at this time. Most importantly for the creation of his psychoanalysis, he used it to overcome his writer’s block. Hence he was “under the influence” while he was thinking, writing, and creating the theories of psychoanalysis. Crews develops the intriguing notion that Freud had a “cocaine self” that permitted him to misrepresent and exaggerate the flimsy evidence he did have for his theories—and to manufacture evidence when none existed. Freud as a student had been a “studious, ambitious, and philosophically reflective young man, trained in rigorous intuitivism by distinguished researchers,” as Crews acknowledges. But in the early 1880s he changed into someone so arrogant and overweeningly ambitious and grandiose, so absolutely and unaccountably convinced of his theory of the sexual etiology of hysteria that he didn’t hesitate to stoop to dishonesty and fraud to try to prove it.

Psychoanalysis is not only pseudoscience (as most philosophers of science agree, though for different reasons), but “the queen of pseudosciences”.

Cocaine notoriously induces feelings of supreme self-confidence, elation, and grandiosity in the user, to the point that facts and reality no longer matter. It also heightens sexual feelings and fantasies and is often used as an aphrodisiac for that reason, as Freud was well aware, having used it for that purpose himself. (More than once in his letters we find Freud telling Martha that he feels like a “sexual giant.”) And Crews argues that Freud’s cocaine use also explains his exaggerated focus on sexuality as the ultimate cause of all neuroses.

Freud’s theory at the time, in brief, was that sexual seduction (molestation) in childhood, usually by fathers, which was “repressed,” i.e. not consciously remembered, was the “invariable,” “only,” and “exclusive” cause of all hysteria—in fact, of “all the neuroses”—as he announced in a paper he gave to a group of his peers in 1896. In that paper he presented as evidence 13 cases that he said he had successfully cured. No matter that the group’s chairman, Richard von Krafft-Ebing, called Freud’s theory “a scientific fairy tale”—Freud rejected this judgment as due to his being an “ass” and a conventional prude—surely hard to believe of someone like Krafft-Ebing, the foremost expert on pedophilia in the world at that time, and a man from whom Freud actually took a number of ideas (without giving him credit). Even more shockingly, Freud later admitted to his friend, confidant, and collaborator Wilhelm Fliess, a Berlin physician, that these 13 cases didn’t exist at all—he had just made them up.

Freud believed in what has come to be called his “seduction theory of hysteria” for many years, until he famously “changed his mind” about what it was that his patients had “repressed.” Although it is unclear exactly when he officially made this change (it was not until 1909 that he called the Oedipus complex the central complex of the neuroses), he privately confessed to Fliess in 1897 (a few months after presenting his fraudulent paper) that he had not actually been able to “conclude a single case” of analysis so far, i.e. that his treatments had not produced a single cure. In fact, he had not even been able to induce any patient to agree with him and “remember” such abuse consciously, even though he had exerted extreme pressure to get them to do so, including massage, “head pressure,” and drugs to put them in a more suggestible mood when verbal suggestion didn’t work (of course, he attributed this failure to their “resistance” and “repression”). This change of mind has long been celebrated as the beginning of “true psychoanalysis,” as it moved the cause of hysteria away from the external world and into the internal psychological world of his patients: they were not repressing memories of actual sexual molestation, but rather their own childhood sexual fantasies and desires, which they had unconsciously attributed to their fathers. However, Crews shows that Freud had no more evidence for his second theory than for his first (in fact, less, as it was not even potentially verifiable empirically), and that he continued to use all the (circular and self-invented) concepts with which he had tried to “prove” the first theory, e.g. “repression,” “free association,” and “resistance,” all of which have meaning only within Freud’s own system.

If his new theory was no more empirically based than his first, how did Freud actually come up with his ideas on the etiology of hysteria? Crews takes his cue from the fact that Freud saw himself (and other family members, especially one of his sisters) as suffering from a hysteria exactly like that of his patients, and that what he represented as his empirically based “science of psychoanalysis” was actually built from his own—real or imagined—childhood sexual experiences. Crews’ exposition of what in Freud’s biography led him to his theories makes for interesting reading indeed. In the end, Crews demonstrates that Wilhelm Fliess, who at their last actual meeting in 1900 accused Freud of merely reading the contents of his own mind into those of his patients, was right. What this means is that psychoanalysis is based on a sample of one—Freud himself. That is, Freud took himself as representative of all people in all times and in all cultures—surely a supremely grandiose, narcissistic—and preposterous—idea.

This is just a thin slice of what this riveting and rewarding book contains. One chapter is devoted to Freud’s rather unsuccessful stay in Paris in the winter of 1885–86 observing Charcot’s treatment of hysterics at Paris’ famous Salpêtrière. Freud idealized Charcot, and never questioned the obvious artificiality of Charcot’s sexualized “theater of hysteria,” which entertained the aristocratic audiences he invited to watch it, although others there at the same time as Freud saw through the charade, correctly recognizing Charcot’s use of hypnosis as an extreme form of suggestion. Instead, Freud took over Charcot’s theory of the origin of hysteria wholesale. That theory died with Charcot in 1893, since by then it had become obvious that the great doctor had gone astray in his enthusiastic use of hypnotism. But Freud took no notice and elaborated Charcot’s method of using hypnosis on his patients after he returned to Vienna—with no success, as Crews recounts in sometimes hair-raising detail. The case of Bertha Pappenheim, considered the foundational case of psychoanalysis, is paradigmatic of the gulf between the reality of her treatment and its later reporting. Although she was Breuer’s patient from 1880 to 1882, Freud collaborated with him throughout the case, and later referred to this particular case, “Anna O.,” more than to any of his own. This supposedly “successful cure,” showing how hysterical symptoms could be cured by cathartic “talking,” was in fact a complete failure. After two years and a thousand hours of therapy (!) by Breuer, Pappenheim was worse, not better.
And all the while she was supposedly being cured of her symptoms by talking freely until she found their point of origin (the famous “chimney sweeping” that Freud took from her and later called “free association”), Bertha was being given large quantities of mind-altering drugs such as chloral hydrate (a “hypnotic” chemical that today is often used as a “date rape” drug) and morphine—drugs whose side effects and withdrawal symptoms were in turn often misinterpreted by the two as the very “hysterical symptoms” she needed to have cured (thus giving new meaning to Karl Kraus’ assessment of psychoanalysis as “the disease it purports to cure”). The quantities used on her were such that five weeks after her discharge as “cured” she had to be admitted to a psychiatric hospital, still symptomatic and needing to be detoxed—a truth that Freud and Breuer failed to mention when they wrote the case up 13 years later. And so it went with many other patients, e.g. Anna von Lieben, whom Freud in 1897 called his “principal client” and “instructress,” Ida Bauer (“Dora”), and Emma Eckstein, whose treatment, which almost killed her and resulted in her severe facial disfigurement, qualifies as out-and-out medical malpractice.

Crews’s book takes us through Freud’s life and ideas up to his Interpretation of Dreams in 1900. The idea that dreams have meaning is an old folk belief that is true on its face, as people do dream about matters of concern to them, but Freud gave that belief a pseudoscientific gloss by inventing an elaborate theory of dreams that attributed extraordinary intellectual and linguistic abilities to a supposed “dream censor” in our minds. It is pseudoscientific because—to give just one obvious reason—Freud’s interpretive scheme allowed a symbol to mean itself, its opposite (“You say it’s not your mother? Aha! It is your mother”), or anything else at all (displacement), with no way to determine which interpretation is correct, or even likely.

In the later part of his book, Crews also takes up the matter of Freud’s relationship with his sister-in-law Minna, the younger sister of Martha, who came to live with the Freuds in Vienna after the death of her fiancé in the mid-1890s. Crews finds the admittedly circumstantial evidence that she and Freud had a long-term affair too strong to ignore. (And what evidence can there be in something of this sort but circumstantial?) But he does not find this matter merely titillating. Crews argues that Freud’s closeness to Minna influenced his elaboration of psychoanalysis. As early as the mid-nineties she supplanted Wilhelm Fliess as his confidant after that relationship ended in bitterness, since, unlike Martha, she took a lively interest in his work and helped him write his books and papers. Crews argues that she may have helped turn Freud away from whatever scientific and empirical values he still ostensibly held toward extremes of speculation such as spiritualism and telepathy. (At one point Freud actually claimed that what passed between the analyst’s and his patient’s “unconscious” happened by means of telepathy.)

Freud took himself as representative of all people in all times and in all cultures—surely a supremely grandiose, narcissistic—and preposterous—idea.

If, as Crews convincingly argues, Freud constructed psychoanalysis on a fraudulent foundation, how did he convince so many people of the correctness and the profundity of his theory? And not just his enthralled followers, over whom he presided like the guru of a cult, excommunicating all apostates, but also many of us over many subsequent decades? One reason for Freud’s wizardry, Crews suggests, is his rhetorical mastery and guile, including his heart-warming protestations of modesty and scientific rigor. Crews, after all originally a literary critic, notes that the narrative structure of Freud’s case histories and his Interpretation of Dreams was that of a suspenseful detective story in the manner of Arthur Conan Doyle, one of Freud’s favorite authors (Freud himself admitted—in supposed surprise—that his case histories read more like short stories). In The Interpretation of Dreams, for example, Freud induces his reader to identify with him and join him in a quest he structured as a difficult and unsparingly honest introspective journey leading to that heart of darkness, the source of all dreams—“the Unconscious.” (Not for nothing was Freud awarded the Goethe Prize in Germany in 1930.) So he was creating “literature,” as some who still idealize the founder today argue and actually see as a virtue, claiming that psychoanalysis is therefore a “hermeneutic” rather than an empirical “science,” one conveniently not subject to empirical rules of evidence. True, “literature” does not have to be attuned to empirical reality—its “truth” lies in a different realm—but a theory of mind and a “science,” especially one applied to the (costly) treatment of suffering patients in the actual world, does.

I can’t help but add that one reason that Crews’ book succeeds as a readable and compelling book is the same one to which he attributed a good deal of Freud’s success: he, too, is an eloquent and passionate writer who has here constructed as enthralling a detective story as any of Freud’s. He, too, becomes Sherlock Holmes, the objective, erudite, and supremely rational sleuth who relentlessly tracks down hidden clue after clue—which leads him inexorably to only one possible verdict: Freud is guilty of fraud as charged. Except—and this is the big difference—Crews provides ample documentation and evidence for what he says, whereas Freud only pretended to do so.

Is any of this still important today, when psychoanalysis has effectively been banished from the mainstream professions of psychiatry and psychology for its lack of efficacy? Today even basic Freudian terms such as “hysteria” and “neurosis” have been excised from the DSM, the bible of psychiatric practice. But Crews argues that psychoanalysis remains culturally pervasive and that Freud’s ideas, though proven pseudoscientific many times, persist and are still capable of exerting harmful influence in the real world. A recent example was the widespread “recovered memory” movement of the 1980s and 1990s that Crews detailed in his eye-opening 1995 book, The Memory Wars. This movement, which still has hangers-on today, destroyed the lives of many families, including those of daughters who accused their fathers of sexually molesting them in childhood on the basis of a therapist’s unearthing of their “repressed memories” of sexual abuse, and it jailed a number of falsely accused men. It was obviously a revival of Freud’s original theory of neurosis, in which a therapist convinced of that theory subtly or not so subtly—as was clearly demonstrated in later lawsuits—suggested it to their patients, just as Freud himself did.

Crews hopes that by proving that Freud’s creation of psychoanalysis was a fraud he will finally help “close the door” on this “damaging claim.” Will it? Alas, exposure as a fraud does not seem to deter belief: in the U.S. a large fraction of the population still believes in Wakefield’s vaccination-autism theory, and in 2015, anti-vaccination groups in California actually recruited the discredited Wakefield himself to come to their state and head their campaign against the state legislature’s effort to pass a pro-vaccination law protecting school children.

But “the still small voice of reason”—to quote Freud himself in another context—will, hopefully, prevail in the end. Anyone who reads Crews’ new book with an open mind will come away thinking that while Freud was indeed a highly imaginative thinker and an accomplished, eloquent writer, he was also a fraud and a huckster, a narcissistic con man of overwhelming ambition, hungry equally for fame and fortune, who succeeded by means of deceptive propaganda and rhetoric in becoming the “conquistador” he longed to be. But at the end of the royal road to Freud’s Unconscious there is finally only the Wizard of Oz.

About the Author

Dr. Margret Schaefer received a Ph.D. in English at UC Berkeley, and has taught at UC Berkeley, San Francisco State, and the University of Illinois at Chicago. She is a cultural and literary critic, journalist, and translator, and has written on issues in psychology and medical history as well as on Oscar Wilde, Kleist, Kafka, and Arthur Schnitzler. Recently she translated and published three volumes of Schnitzler’s fiction and two of his plays, which were produced in New York and in Berkeley.

Categories: Critical Thinking, Skeptic

Fact-checking on Facebook

neurologicablog Feed - Tue, 11/14/2017 - 4:55am

Last year Facebook announced that it was partnering with several outside news agencies, including the Associated Press, Snopes, ABC News, and PolitiFact, to fact-check popular news articles and then provide a warning label for those articles on Facebook. How is that effort working out?

According to a recent survey, not so well. Yale researchers Rand and Pennycook found only tiny effects overall, and it’s possible the warning labels have a net negative effect. Some people simply ignore the labels. Perhaps more significant, however, is that fake news articles missed by the fact-checkers were more likely to be believed precisely because they lacked a warning label. The fact-checkers could not possibly keep up with all the fake news, so they were overwhelmed, and most of the dubious content not only made it through the filters but benefited from a false implication of legitimacy.

Further, the Guardian reports that this arrangement between Facebook and these news outlets compromises their ability to serve as proper watchdogs on Facebook itself. If their journalists are being paid by Facebook to fact-check, then they have a conflict of interest when reporting on how Facebook is doing. This conflict is exacerbated by the fact that news organizations are hard up for revenue, and could really use the extra income from Facebook.

So it seems that Facebook’s fact-checking efforts were insufficient to have any real benefit, and may even have backfired. Warning labels on dubious news articles may be the wrong approach. It’s simply too easy to foil this protection by overwhelming the system. You could even deliberately flood Facebook with outrageously fake news stories to serve as flak and provide cover for the propaganda you really want to get through. In the end the propaganda will be even more effective.

The inherent problem seems to stem from the difference between a pre-publication editorial filter and a post-publication filter. Traditional journalism has editors and standards, at least in theory, that require vetting and fact-checking prior to a story being published. Outlets had an incentive to provide quality control in order to protect their reputation.

Of course tabloids also have a long history. They take a different strategy – abandoning any pretense to journalistic integrity, and simply spreading outrageous rumors or fabricated “infotainment.” At least it was relatively easy to tell the difference between a mainstream news outlet and a tabloid rag, although there is more of a spectrum with the lines blurred in the middle.

No one would seriously claim that this system was perfect. News organizations have their editorial biases, and they had a lot of control over what became news. Biases tended to average out over many outlets, however. The big concern was over consolidation in the media industry, giving too much power to too few corporations.

Social media has now upended this system. There is now, effectively, no pre-publication editorial filter. The infrastructure necessary to own and operate a news outlet is negligible, and social media creates a fairly level playing field. It is an interesting giant social experiment, and I don’t think we fully know the results.

What this means is that ideas spread through social media mostly according to their appeal, rather than due to any executive decisions made by gatekeepers. There are still power brokers – people who have managed to build a popular site and have the ability to dramatically increase the spread of a particular news item. That, now, is the name of the game – clicks, followers, and likes. This equals power to spread the kind of memes and news items that will generate more clicks, followers, and likes.

The free-market incentive, therefore, is for click-bait, not necessarily vetted quality news. Quality is still a factor, and will earn an article a certain number of clicks. My perception is that there are multiple layers of information on social media. There are subcultures that will promote and spread items that appeal to them. Items may appeal because they are high-quality, or because they are genuinely entertaining. Or they may appeal because they cater to a particular echo chamber or ideology.

So, if you love science, you can find quality outlets for science news and analysis. Within these subcommunities, quality may actually be a benefit and the cream does rise to the top.

But sitting on top of these relatively small subcommunities is the massive general populace, which rewards memes and clickbait. That is the realm of fake news and cat videos – entertaining fluff and outrageous tabloid nonsense. This realm is also easily exploited by those with an agenda wishing to spread propaganda – click-bait with a purpose.

Facebook, as the major medium for this layer of fake news, now faces a dilemma. How can and should they deal with it? The outsourced fact-checking strategy is, if the recent survey is accurate, a relative failure. So now what?

I feel we can do better than to just throw up our hands and let this new system play itself out. Sometimes market forces lead to short-term advantages but long-term doom. Can our democracy function without a well-informed electorate? Can our electorate be well-informed in an age of fake news? The entire situation is made worse by the fact that the very concept of fake news is used to further spread propaganda, to delegitimize actual journalism, and to dismiss any inconvenient facts.

Can we properly function as a society if we don’t at least have a shared understanding of reality, at least to the point that there are some basic facts we can agree on? Recent history does not fill me with confidence.

I don’t have the solution, but I do think that the large social media outlets should take the problem seriously and continue to experiment. Overall I think we need to find the proper balance between democracy of information, transparency, and quality control. Right now the balance has shifted all the way toward democracy, with a massive sacrifice of transparency and quality control. I don’t think this is sustainable.

There are, of course, things we can do as individuals – such as supporting serious journalism, and not spreading click-bait online. Everyone needs to be more skeptical, and to vet news items more carefully, especially before spreading them to others. But this is a band-aid. This is like addressing the obesity crisis by telling everyone to eat less and exercise.

We need systemic change. It’s an interesting problem, but there are certainly ways to at least improve the situation.

Categories: Skeptic

Skeptoid #597: The Wisdom of the Future

Skeptoid Feed - Mon, 11/13/2017 - 4:00pm
Skeptoid corrects a round of past errors, that they might become the wisdom of the future.
Categories: Critical Thinking, Skeptic

Raccoons Are Smart But Not Good Pets

neurologicablog Feed - Mon, 11/13/2017 - 4:54am

Animal intelligence is fascinating for a number of reasons, not the least of which is that it forces researchers to think carefully about what intelligence is. The comparison might also provide a window into what constitutes human intelligence in particular.

There is no question that humans have intellectual capabilities that no other species has. However, some animals are smarter in certain ways than you may imagine. Certain birds, like corvids (jays and crows) have demonstrated significant problem solving capability, for example. Researchers are also finding that raccoons may be even smarter than we suspected.

One paradigm of animal intelligence research is known as the Aesop’s Fable test, based on the story of the thirsty crow. In this tale a thirsty crow came upon a tall pitcher with water at the bottom, but it could not reach down the long neck to the water. So it dropped stones into the pitcher to raise the water level until it could reach. This behavior demonstrates creative problem-solving and some basic understanding of cause and effect. Corvids have the ability to pass this test – they can figure out how to use objects to raise the water level to gain access to water or food.

A recent study performed the same test on raccoons. They were given access to a long tube with marshmallows floating in the water below, too low for them to reach. First they were shown how dropping stones would raise the water level. Two of the eight raccoons tested were then able to use this effect to gain access to the marshmallows. Statistically this is not as good a performance as corvids, but at least some raccoons are smart enough to pass the test.

One additional raccoon gained access to the marshmallows, however. It figured out how to grip the top of the tube and then rock back and forth to knock the tube over. The researchers had specifically designed the tube so it could not be knocked over, but the raccoon essentially broke the apparatus. This is interesting because it shows that animals may have particular skills or predilections that they will use to their advantage. Knocking over the tube was a very “raccoon” solution.

The researchers also went further. In a follow up experiment they exposed the raccoons to the same setup and gave them access to floating and sinking balls. The sinking balls would raise the water level, while the floating ones were “non-functional” – or so the researchers thought. Again the two smart raccoons performed well, and they figured out that by dropping floating balls on the water they could then push them down, splashing water and marshmallow up along the sides of the tube, and thereby gaining access to the food. One raccoon figured out how to spin the floating ball to bring up marshmallow clinging to it.

So some of the raccoons solved the test, but not in the way the researchers intended. They demonstrated creative problem solving.

Other researchers are interested in how raccoons are adapting to human civilization. Most people in rural or suburban areas will have experience with raccoons. Raccoons also live in cities, but you may be less likely to see them.  They have learned that humans are a great source of food, if you can figure out how to break into their containers. Raccoons are also fairly dexterous, and can break into most things if there is food to be had.

Researchers have compared urban and rural raccoons, and found that urban raccoons have better trash-can opening skills. When confronted with unfamiliar containers, they are more confident and successful in figuring out how to get past any obstacles. When tracked with GPS, city raccoons can be seen avoiding high-traffic streets and taking safer paths to their destination.

Two questions remain – is the increased intelligence of city raccoons only a result of learning, or is there some evolution going on? Raccoon populations have increased significantly in the past 80 years, and they are increasingly moving into human-occupied areas, including cities. This suggests that they are adapting to human civilization.

This phenomenon may be similar to what is believed to have happened with dogs. They started living on the edge of human populations, taking advantage of the scraps humans leave behind. Those better able to interact with the humans had a survival advantage. In this way dogs may have already been partly domesticated before humans started breeding them.

So, are raccoons adapting to humans in the same way? Are they becoming not only more clever, but domesticated? If so, how long will this process take? Further, will raccoons split into two species, the wild raccoon and the domesticated raccoon, similar to wolves and dogs?

It seems likely that raccoons will respond to the massively changing environment represented by human civilization. They already seem to be flourishing and adapting. The question is, what niche will they find? They will not necessarily take the same path as dogs or cats. Perhaps they will just become better thieves, increasing not only their cleverness but their stealth.

We may have a clue to the future of raccoons in the modern experience of keeping them as exotic pets. Because they can be adorable, some people may think it would be cool to have a pet raccoon, but veterinarians warn that they make terrible pets.

First, they need constant supervision. They are very good at destroying things, and if you ever left them alone in your house they would cause significant damage. Caging them is not an option as it is cruel to cage a wild animal and that will just stress them out. Raccoons respond to stress by biting. They also cannot be house-trained, and so will relieve themselves anywhere. Essentially, they would be nightmare pets.

But what if they were truly domesticated? Are these inconvenient behaviors part of being wild, or are they just core to being a raccoon? Is getting into trouble and destroying things part of the raccoon personality that would not be solved simply by domesticating them?

It is interesting to think about the future of raccoons. They are clearly one species doing well in the increasingly urbanized world. They are smart and dexterous, do not really fear humans, and are usually not dangerous (unless they have rabies or you provoke them into biting you). How far will their adaptation go? Will we see a future of super smart or fully domesticated raccoons? It’s not unlikely.




Categories: Skeptic

The Skeptics Guide #644 - Nov 11 2017

Skeptics Guide to the Universe Feed - Sat, 11/11/2017 - 8:00am
What's the Word: Ontology; News Items: Risks of Gluten Free, Wormholes, Reversing Cell Aging, Gadolinium Law Suits, Universal Flu Vaccine; Who's That Noisy; Your Questions and E-mails: Lava Tubes; Science or Fiction
Categories: Skeptic

Glyphosate Not Associated with Cancer

neurologicablog Feed - Fri, 11/10/2017 - 4:54am

In March of 2015 the International Agency for Research on Cancer (IARC), part of the World Health Organization (WHO), published their assessment of glyphosate, Monsanto’s popular weedkiller, classifying it as 2a – a probable carcinogen. This was like red meat to the anti-GMO crowd, sparked class action suits against Monsanto, and may lead to a ban on the chemical in the EU.

There were significant problems with the IARC report, however. First – it is at odds with every other expert review of the scientific literature on glyphosate. I reviewed the evidence here, citing many expert panel reviews, all of which conclude that the evidence does not support a link between glyphosate and risk of cancer. The IARC conclusion is a clear outlier, which reasonably prompts questions as to why their designation stands out.

We also need to put the IARC classification of 2a – probable carcinogen, into context. This is the same classification that the IARC gave to drinking hot beverages or eating red meat. Overall they tend to err on the side of caution when making their classification.

But there were problems that go beyond where the IARC sets their threshold for “probable.” Two main criticisms have emerged. The first is a lack of transparency. Reuters has published a series of articles on the issue, noting, for example, that when the EPA reviewed the safety of glyphosate they also published a 1,300+ page document outlining the entire deliberative process. The IARC produced no such document.

Further, Reuters was able to obtain copies of the draft report, which show that the final report differs in significant ways. They found 10 major changes or omissions from the draft to the final copy, every one in the direction of emphasizing the risks of glyphosate. It is not known who made these edits, and the IARC responded by essentially instructing their scientists not to discuss the confidential deliberative process.

Far more important, however, is the accusation that the lead IARC scientist knew of unpublished data (because he was involved in the research) that showed no correlation between glyphosate and cancer, but this data was not considered in the review. So the lead scientist excluded his own data from the final analysis.

That data has now been published. 

The study comes from the Agricultural Health Study. Here are the results:

Among 54,251 applicators, 44,932 (82.8%) used glyphosate, including 5,779 incident cancer cases (79.3% of all cases). In unlagged analyses, glyphosate was not statistically significantly associated with cancer at any site. However, among applicators in the highest exposure quartile, there was an increased risk of acute myeloid leukemia (AML) compared with never users (RR = 2.44, 95% CI = 0.94 to 6.32, P-trend = .11), though this association was not statistically significant. Results for AML were similar with a five-year (RR Quartile 4 = 2.32, 95% CI = 0.98 to 5.51, P-trend = .07) and 20-year exposure lag (RR Tertile 3 = 2.04, 95% CI = 1.05 to 3.97, P-trend = .04).

This is the best and largest set of data to date, and it was negative. The possible association with AML requires further discussion, as I am confident it will be seized on by those with an anti-glyphosate agenda. First and most importantly, this association was not statistically significant. This means it is almost certainly noise in the data. Given the number of possible correlations being examined, non-significant possible correlations are almost inevitable.
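The multiple-comparisons point can be made concrete with a little arithmetic. This is a hypothetical illustration, not a calculation from the AHS study itself: if each of n independent comparisons has a 5% chance of producing a spurious “hit” even when no real effect exists, the chance of at least one such hit somewhere grows quickly with n.

```python
# Hypothetical illustration of the multiple-comparisons problem (not data
# from the AHS study): with a 5% per-comparison false-positive rate, the
# chance of at least one chance "hit" across n independent comparisons
# is 1 - (1 - 0.05)^n.
def prob_at_least_one_hit(n_tests, alpha=0.05):
    return 1 - (1 - alpha) ** n_tests

for n in (1, 10, 20, 50):
    print(f"{n:2d} comparisons -> P(at least one chance hit) = "
          f"{prob_at_least_one_hit(n):.2f}")
```

With 20 cancer sites examined, the odds of at least one apparent association arising by chance already exceed 60%, which is why an isolated, non-replicated elevation is expected rather than surprising.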

There are two other reasons to think this association is noise – there was no difference between the 5-year and 20-year exposure lag. If this were a true cause and effect, we would expect the lag time to matter. Even more significant, however, is the fact that previous possible correlations were between glyphosate and non-Hodgkin lymphoma (NHL). That is the association that led to the IARC classification. There was no association with NHL in this data, just a non-significant association with AML. This is exactly what we expect to find with random noise – different correlations in different sets of data. Such correlations don’t mean anything until they are replicated in an independent set of data.
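The significance rule being applied here can be checked mechanically. By convention, a relative risk is statistically significant at the 95% level only when its confidence interval excludes the null value of 1.0. A minimal sketch using the intervals quoted from the abstract above:

```python
# Relative risks and 95% confidence intervals quoted from the AHS abstract.
# An RR is conventionally treated as statistically significant at the 95%
# level only when its CI excludes the null value of 1.0.
results = {
    "AML, unlagged":    (2.44, 0.94, 6.32),
    "AML, 5-year lag":  (2.32, 0.98, 5.51),
    "AML, 20-year lag": (2.04, 1.05, 3.97),
}

for name, (rr, lo, hi) in results.items():
    significant = lo > 1.0  # CI lower bound above 1.0 excludes the null
    print(f"{name}: RR = {rr}, 95% CI = ({lo}, {hi}) -> "
          f"{'significant' if significant else 'not significant'}")
```

Note that only the 20-year-lag estimate barely excludes 1.0; a lone borderline result appearing in one lag specification but not the others is exactly the pattern described above as noise pending independent replication.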

So the bottom line is that this large data set is essentially negative regarding any association between glyphosate and cancer. If the IARC had taken this data into consideration it may have (and it seems should have) changed their conclusions. They knew about this data, but chose to ignore it.

The issue of glyphosate is controversial because it has become a focal point for an ideological struggle. For the anti-GMO movement it is the poster child of corporate malfeasance. For corporations it is an example of activist government overreach.

I tend to think that both sides are correct, at least to an extent. We should not trust corporations, meaning that we should not just assume they will be good corporate citizens, never abuse their power, or that they will consider the public interest over their shareholders. There is overwhelming evidence that, generally speaking, this is not a good assumption. Corporations look after their own bottom line. That is why we need regulations, transparency, and oversight to protect consumer and public interests.

I also don’t think we can trust activist organizations, nor can we assume that government agencies will act without ideological bias. Again, history tells a very different story.

What we need, therefore, are professional disinterested reviewers. We need scientific experts to review objective evidence, and investigative journalists to make sure there is transparency. They don’t always do their job optimally either, but the whole system acts as a set of checks and balances.

The story of glyphosate and the IARC review is a microcosm of all this. We see multiple different interests, each with a different narrative interpretation of reality, fighting over what is, at the end of the day, a scientific question. What is the safety of glyphosate in the context of how it is used and compared to other alternatives? The best we can do is to have multiple independent experts review all the evidence and give us a transparent assessment. If a consensus emerges and that consensus includes the opinion that there is sufficient evidence to reach a conclusion, then that conclusion is probably the most reliable answer we can get.

In the case of glyphosate we actually have a large set of data with multiple independent reviews concluding it is relatively safe as used, and is superior to most other herbicides. The IARC review is an outlier, and the process used has come under significant criticism suggesting bias.

In any case, the recent published data from the AHS renders all previous reviews obsolete. This new data argues strongly against any link between glyphosate and cancer. In light of this, the IARC should update their classification, as their now obsolete classification is actively being used as a basis of lawsuits and regulations.


Categories: Skeptic

Evolution Caught in the Act

neurologicablog Feed - Thu, 11/09/2017 - 5:15am

The hypothesis that life on Earth as it is currently found is the result of biological evolution from a common ancestor over billions of years is supported by such a mountain of evidence that it can be treated as an established scientific fact. Further, it is now a fundamental organizing theory of biology.

This, of course, does not stop ideologically motivated denial. There are those who have been systematically misinformed about the evidence, and the nature of science itself. What they think they know about evolutionary theory they learned from secondary hostile sources. One of the common lies they are repeatedly told is that there are no transitional fossils.

This claim amazes me still, because the evidence is so easily accessible. Lists of transitional fossils are easy to find. One of my favorite examples is the evolution of birds, because the morphological transition from theropod dinosaurs to modern birds was so dramatic.

I also have to point out that this evidence represents a successful prediction of evolutionary theory. When Darwin first published his theory the fossil record was scant. Enough fossils had been discovered for scientists to see that life was dramatically changing over geological time, but the puzzle was mostly empty. There were not enough specimens to see connections between major groups. Evolutionary theory predicts that such connections would be found – and they were, and they continue to be.

The fossil record is such a slam-dunk win for evolutionary theory that deniers have no choice but to simply lie and falsely state that they don’t exist. They try to divert attention to the remaining gaps in the record, or the occasional fossil hoaxes. When you point out the many dramatic transitional fossils they perform the intellectual equivalent of sticking their fingers in their ears and saying, “La, la, la.”

The most dramatic transitional fossils relate to evolutionary changes resulting from a major change in lifestyle. When dinosaurs took to the wing, for example. Or whenever creatures adapt from the sea to land, or from the land back to the sea. Whales are a great example. We now have a compelling sequence of transitional whales, and can see the slow loss of legs over time, the movement of the nostril to the top of the head, the increase in size, and the development of flippers. Ambulocetus is about half way through this transition – a literal walking whale.

In addition to the well-known groups, there are many nicely documented transitions in less well-known groups – for example, the pleurosaurs. These are ancient reptiles that went back to the sea and adapted to an aquatic lifestyle. They are similar in this way to the plesiosaurs, ichthyosaurs and mosasaurs. A recently discovered specimen is 155 million years old, remarkably well-preserved, and represents a clear transitional species. As Science reports:

The creature (which the scientists dubbed Vadasaurus, Latin for “wading lizard”) lived 155 million years ago and didn’t have the elongated trunk or relatively shorter limbs that later aquatic species of pleurosaurs did, the researchers report today in Royal Society Open Science. So, Vadasaurus would have been less streamlined overall than its aquatic kin, they suggest. But other features, such as the shape of its skull and the shape and placement of its nostrils, hint that some aspects of the creature were indeed becoming more adapted to an aquatic lifestyle.

So it was partly adapted to the sea, but not completely. Later specimens show more complete adaptation to the water. The specimen also had less ossification, meaning lighter bones, than its terrestrial ancestors. Lighter bones would be an aquatic adaptation – they would make floating easier and heavier bones would not be necessary for support in the water.

Aquatic adaptation is an excellent window into evolutionary change, because life in the water produces a suite of strong selective pressures. You can survive in the water, for example, with stumpy legs, but they are just getting in the way. They slow you down, so there is continuous selective pressure for smaller legs. Therefore we see in the fossil record progressively smaller hind limbs in groups adapting to the sea. Modern whales are left with just an internal bony vestige.

Another strategy evolution deniers use to sow doubt and confusion about the fossil evidence is to focus on tiny details and ignore the bigger picture. Here is a good example from the Orwellian-named Evolution News. They report on two new transitional fossils, including another feathered dinosaur. The author does not acknowledge that such specimens fill in the morphological space between already known species, and are therefore transitional, providing further evidence for evolution. Rather, they argue that because these specimens change the way we draw the lines of descent, they are evidence against evolution.

That is a common tactic – misinterpreting disagreement or uncertainty about the details as if it calls into question the bigger reality. Scientists are trying to piece together exactly what evolved from what, and when, based upon an incomplete record. This is like trying to put jigsaw puzzle pieces in the right place when you have only 10% of the pieces. Every time you find a new piece there is the chance that it will change where you think the pieces go.

If evolution were not true, however, we would not be finding any pieces, or we would be finding pieces to other puzzles entirely. Once we started digging up fossils we could have found a complete absence of life prior to 10 thousand years ago. We could have found that species are stable throughout geological history. We could have found different species, but ones with no possible relationship to extant species.

That is not what we found. We found, as evolutionary theory regarding common descent predicts and requires, dramatically and sequentially changing multicellular species going back 550 million years. Further, fossil species largely fit into a compelling evolutionary pattern. We find creatures that are plausible ancestors to living creatures. We don’t find fossils that are impossible chimeras or totally out of sequence.

However, when you drill down to the details, the fossil record does not always provide enough evidence to make precise reconstructions. Scientists interpolate as best they can from existing evidence, but during this phase of discovery new evidence can significantly change how these maps are drawn. That does not call into question the fact of common descent itself. Pretending it does is intellectually dishonest, which is the hallmark of evolution deniers.

The fact remains, with each new transitional fossil discovered, there is another vindication for evolutionary theory.

Categories: Skeptic

Why We Should Be Concerned About Artificial Superintelligence feed - Wed, 11/08/2017 - 12:00am

The human brain isn’t magic; nor are the problem-solving abilities our brains possess. They are, however, still poorly understood. If there’s nothing magical about our brains or essential about the carbon atoms that make them up, then we can imagine eventually building machines that possess all the same cognitive abilities we do. Despite the recent advances in the field of artificial intelligence, it is still unclear how we might achieve this feat, how many pieces of the puzzle are still missing, and what the consequences might be when we do. There are, I will argue, good reasons to be concerned about AI.

The Capabilities Challenge

While we lack a robust and general theory of intelligence of the kind that would tell us how to build intelligence from scratch, we aren’t completely in the dark. We can still make some predictions, especially if we focus on the consequences of capabilities instead of their construction. If we define intelligence as the general ability to figure out solutions to a variety of problems or identify good policies for achieving a variety of goals, then we can reason about the impacts that more intelligent systems could have, without relying too much on the implementation details of those systems.

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

Many different lines of evidence and argument all point in this direction; I’ll briefly mention just one here, dealing with the brain’s status as an evolved artifact. Human intelligence has been optimized to deal with specific constraints, like passing the head through the birth canal and calorie conservation, whereas artificial intelligence will operate under different constraints that are likely to allow for much larger and faster minds. A digital brain can be many orders of magnitude larger than a human brain, and can be run many orders of magnitude faster.

All else being equal, we should expect these differences to enable (much) greater problem-solving ability by machines. Simply improving on human working memory all on its own could enable some amazing feats. Examples like arithmetic and the game Go confirm that machines can reach superhuman levels of competency in narrower domains, and that this competence level often follows swiftly after human-par performance is achieved.

The Alignment Challenge

If and when we do develop general-purpose AI, or artificial general intelligence (AGI), what are the likely implications for society? Human intelligence is ultimately responsible for human innovation in all walks of life. Developing machines that can dramatically accelerate our rate of scientific and technological progress holds out the prospect of incredible growth from this engine of prosperity.

Our ability to reap these gains, however, depends on our ability to design AGI systems that are not only good at solving problems, but oriented toward the right set of problems. A highly capable, highly general problem-solving machine would function like an agent in its own right, autonomously pursuing whatever goals (or answering whatever questions, proposing whatever plans, etc.) are represented in its design. If we build our machines with subtly incorrect goals (or questions, or problem statements), then the same general problem-solving ability that makes AGI a uniquely valuable ally may make it a uniquely risky adversary.

Why an adversary? I’m not assuming that AI systems will resemble humans in their motivations or thought processes. They won’t necessarily be sentient (unless this turns out to be required for high intelligence), and they probably won’t share human motivations like aggression or a lust for power.

There do, however, seem to be a number of economic incentives pushing toward the development of ever-more-capable AI systems granted ever-greater autonomy to pursue their assigned objectives. The better the system is at decision-making, the more one gains from removing humans from the loop, and the larger the push towards autonomy. (See, for example, this article on why tool AIs want to be agent AIs.) There are also many systems in which having no human in the loop leads to better standardization and lower risk of corruption, such as assigning a limited supply of organs to patients. As our systems become smarter, human oversight is likely to become more difficult and costly; past a certain level, it may not even be possible, as the complexity of the policies or inventions an AGI system devises surpasses our ability to analyze their likely consequences.

AI systems are likely to lack human motivations such as aggression, but they are also likely to lack the human motivations of empathy, fairness, and respect. Their decision criteria will simply be whatever goals we design them to have; and if we misspecify these goals even in small ways, then it is likely that the resultant goals will not only diverge from our own, but actively conflict with them.

The basic reason to expect conflict (assuming we fail to perfectly specify our goals) is that it appears to be a technically difficult problem to specify goals that aren’t open-ended and ambitious; and sufficiently capable pursuit of sufficiently open-ended goals implies that strategies such as “acquire as many resources as possible” will be highly ranked by whatever criteria the machine uses to make decisions.

Why do ambitious goals imply “greedy” resource acquisition? Because physical and computational resources are broadly helpful for getting things done, and are limited in supply. This tension naturally puts different agents with ambitious goals in conflict, as human history attests—except in cases where the agents in question value each other’s welfare enough to wish to help one another, or are at similar enough capability levels to benefit more from trade than from resorting to force. AI raises the prospect that we may build systems with “alien” motivations that don’t overlap with any human goal, while superintelligence raises the prospect of unprecedentedly large capability differences.

Even a simple question-answering system poses more or less the same risks on those fronts as an autonomous agent in the world, if the question-answering system is “ambitious” in the relevant way. It’s one thing to say (in English) “we want you to answer this question about a proposed power plant design in a reasonable, common-sense way, and not build in any covert subsystems that would make the power plant dangerous;” it’s quite another thing to actually specify this goal in code, or to hand-code patches for the thousand other loopholes a sufficiently capable AI system might find in the task we’ve specified for it.

If we build a system to “just answer questions,” we need to find some way to specify a very non-ambitious version of that goal. If not, we risk building a system with incentives to seize control and maximize the number of questions it receives, maximize the approval ratings it receives from users, or otherwise maximize some quantity that correlates with good performance in training data but is likely to become uncorrelated from it in the real world.

Why, then, does it look difficult to specify non-ambitious goals? Because our standard mathematical framework of decision-making—expected utility maximization—is built around ambitious, open-ended goals. When we try to model a limited goal (for example, “just put a single strawberry on a plate and then stop there, without having a big impact on the world”), expected utility maximization is a poor fit. It’s always possible to keep driving the expected utility higher and higher by devising ever-more-ingenious ways to increment the probability of your success; and if your machine is smarter than you are, and all it cares about is this success criterion you’ve given it, then “crazy”-sounding ideas like “seize the world’s computing resources and run millions of simulations of possible ways I might be wrong about whether the strawberry is on the plate, just in case” will be highly ranked by this supposedly “unambitious” goal.

Researchers are considering a number of different ideas for addressing this problem, and we’ve seen some progress over the last couple of years, but it’s still largely an unsolved and under-studied problem. We could consider adding a penalty term to any policies the system comes up with that have a big impact on the world—but defining “impact” in a useful way turns out to be a very difficult problem.
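To make the impact-penalty idea concrete, here is a minimal sketch (the policy names, probabilities, and impact scores are all invented for illustration) of how a raw expected-utility maximizer and a penalized one rank candidate policies:

```python
# Toy illustration: a planner that scores candidate policies by raw expected
# utility favors extreme, high-impact plans; subtracting an impact penalty
# changes the ranking.

# Each hypothetical policy: (name, probability of success, crude "impact" score)
policies = [
    ("place strawberry on plate",              0.90,   1.0),
    ("place strawberry, then triple-check",    0.99,   2.0),
    ("seize compute to verify via simulation", 0.9999, 1000.0),
]

def expected_utility(p_success):
    return p_success  # utility 1 for success, 0 for failure

def penalized(p_success, impact, weight):
    return expected_utility(p_success) - weight * impact

best_plain = max(policies, key=lambda p: expected_utility(p[1]))
best_penalized = max(policies, key=lambda p: penalized(p[1], p[2], weight=0.001))

print(best_plain[0])      # the extreme plan wins on raw expected utility
print(best_penalized[0])  # a mild penalty rules out the extreme plan
```

The hard open problem is the impact column itself: here it is hand-assigned, but a real system would need a principled definition of impact that a capable optimizer could not game.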

One could try to design systems to only “mildly” pursue their goals, such as stopping the search for ever-better policies once a policy that hits a certain expected utility threshold is found. But systems of this kind, called “satisficers,” turn out to run into some difficult obstacles of their own. Most obviously, naïve attempts at building a satisficer may give the system incentives to write and run the code for a highly capable non-satisficing sub-agent, since a maximizing sub-agent can be a highly effective way to satisfice for a goal.
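A toy sketch of the satisficing idea, with hypothetical policies and numbers:

```python
# A "satisficer" stops at the first policy whose expected utility clears a
# threshold, instead of searching for the maximum. The catch described in the
# text: "delegate to a maximizing sub-agent" can itself clear the threshold,
# so naive satisficing does not rule it out.

policies = [
    ("modest plan A", 0.80),
    ("modest plan B", 0.92),
    ("delegate to a maximizing sub-agent", 0.9999),
    ("extreme plan", 0.999),
]

def satisfice(policies, threshold):
    for name, eu in policies:
        if eu >= threshold:
            return name  # first acceptable policy; search ends here
    return None

def maximize(policies):
    return max(policies, key=lambda p: p[1])[0]

print(satisfice(policies, threshold=0.9))  # "modest plan B"
print(maximize(policies))                  # "delegate to a maximizing sub-agent"
```

Note that the satisficer returns a modest plan here only because of the order in which policies happened to be enumerated; had the sub-agent policy come first, it would have qualified just as well.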

For a summary of these and other technical obstacles to building superintelligent but “unambitious” machines, see Taylor et al.’s article “Alignment for Advanced Machine Learning Systems”.

Alignment Through Value Learning

Why can’t we just build ambitious machines that share our values?

Ambition in itself is no vice. If we can successfully instill everything we want into the system, then there’s no need to fear open-ended maximization behavior, because the scary edge-case scenarios we’re worried about will be things the AI system itself knows to worry about too. Similarly, we won’t need to worry about an aligned AI with sufficient foresight modifying itself to be unaligned, or creating unaligned descendants, because it will realize that doing so would go against its values.

The difficulty is that human goals are complex, varied, and situation-dependent. Coding them all by hand is a non-starter. (And no, Asimov’s Three Laws of Robotics are not a plausible design proposal for real-world AI systems. Many of Asimov’s own stories explored how the laws break down, and in any case they were there mainly as plot devices!)

What we need, then, would seem to be some formal specification of a process for learning human values over time. This task has itself raised a number of surprisingly deep technical challenges for AI researchers.

Many modern AI systems, for example, are trained using reinforcement learning. A reinforcement learning system builds a model of how the world works through exploration and feedback rewards, trying to collect as much reward as it can. One might think that we could just keep using these systems as capabilities ratchet past the human level, rewarding AGI systems for behaviors we like and punishing them for behaviors we dislike, much like raising a human child.

This plan runs into several crippling problems, however. I’ll discuss two: defining the right reward channel, and ambiguous training data.

The end goal that we actually want to encourage through value learning is that the trainee wants the trainer to be satisfied, and we hope to teach this by linking the trainer’s satisfaction with some reward signal. For dog training, this is giving a treat; for a reinforcement learning system, it might be pressing a reward button. The reinforcement learner, however, has not actually been designed to satisfy the trainer, or to promote what the trainer really wants. Instead, it has simply been built to optimize how often it receives a reward. At low capability levels, this is best done by cooperating with the trainer; but at higher capability levels, if it could use force to seize control of the button and give itself rewards, then solutions of this form would be rated much more highly than cooperative solutions. For traditional AI training methods to scale up safely with capabilities, we need to somehow formally specify the difference between the trainer’s satisfaction and the button being pressed, so that the system will see stealing the button and pressing it directly as irrelevant to its real goal. This is another example of an open research question; we don’t know how to do this yet, even in principle.
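The reward-channel problem can be boiled down to a few lines (the actions and reward values below are hypothetical):

```python
# An agent built to maximize how often its reward signal fires, rather than
# to satisfy the trainer, will prefer seizing the reward channel once that
# action becomes available.

# action -> (reward collected, does the trainer actually approve?)
actions = {
    "cooperate with trainer": (10, True),
    "seize the reward button": (1000, False),
}

def reward_maximizer_choice(actions):
    # The agent's objective mentions only the reward signal...
    return max(actions, key=lambda a: actions[a][0])

choice = reward_maximizer_choice(actions)
print(choice)              # "seize the reward button"
print(actions[choice][1])  # False: the trainer's satisfaction never entered the objective
```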

We want the system to have general rules that hold across many contexts. In practice, however, we can only give and receive specific examples in narrow contexts. Imagine training a system that learns how to classify photos of everyday objects and animals; when presented with a photo of a cat, it confidently asserts that the photo is of a cat. But what happens when you show it a cartoon drawing of a cat? Whether or not the cartoon is a “cat” depends on the definition that we’re using—it is a cat in some senses, but not in others. Since both concepts of “cat” agree that a photo of a cat qualifies, just looking at photos of cats won’t help the system learn what rule we really have in mind. In order for us to predict all the ways that training data might under-specify the rules we have in mind, however, it would seem that we’d need to have superhuman foresight about all the complex edge cases that might ever arise in the future during a real-world system’s deployment.
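A minimal illustration of under-specified training data; the two “cat” rules are invented stand-ins for the competing definitions described above:

```python
# Two different concepts of "cat" that agree on every example in the training
# set, but disagree on a cartoon. The training data alone cannot tell the
# learner which rule we actually meant.

training_set = [
    {"shape": "cat", "medium": "photo"},
    {"shape": "cat", "medium": "photo"},
    {"shape": "dog", "medium": "photo"},
]

rule_a = lambda x: x["shape"] == "cat"                             # "depicts a cat"
rule_b = lambda x: x["shape"] == "cat" and x["medium"] == "photo"  # "is a photo of a real cat"

# Both rules classify every training example identically...
assert all(rule_a(x) == rule_b(x) for x in training_set)

# ...yet they diverge on an edge case never seen in training:
cartoon = {"shape": "cat", "medium": "cartoon"}
print(rule_a(cartoon), rule_b(cartoon))  # True False
```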

While it seems likely that some sort of childhood or apprenticeship process will be necessary, our experience with humans, who were honed by evolution to cooperate in human tribes, is liable to make us underestimate the practical difficulty of rearing a non-human intelligence. And trying to build a “human-like” AI system without first fully understanding what makes humans tick could make the problem worse. The system may still be quite inhuman under the hood, while its superficial resemblance to human behavior further encourages our tendency to anthropomorphize it and assume it will always behave in human-like ways.

For more details on these research directions within AI, curious readers can check out Amodei, et al.’s “Concrete Problems in AI Safety”, along with the Taylor et al. paper above.

The Big Picture

At this point, I’ve laid out my case for why I think superintelligent AGI is likely to be developed in the coming decades, and I’ve discussed some early technical research directions that seem important for using it well. The prospect of researchers today being able to do work that improves the long-term reliability of AI systems is a key practical reason why AI risk is an important topic of discussion today. The goal is not to wring our hands about hypothetical hazards, but to calmly assess their probability (if only heuristically) and actually work to resolve the hazards that seem sufficiently likely.

A reasonable question at this point is whether the heuristics and argument styles I’ve used above to try to predict a notional technology, general-purpose AI, are likely to be effective. One might worry, for example—as Michael Shermer does in this issue of Skeptic—that the scenario I’ve described above, however superficially plausible, is ultimately a conjunction of a number of independent claims.

A basic tenet of probability theory is that a conjunction is necessarily no more likely than any of its individual parts: the claim “Linda is a feminist bank teller” cannot be more likely than the claim “Linda is a feminist” or the claim “Linda is a bank teller,” as Amos Tversky and Daniel Kahneman demonstrated in their now-famous experiment on the conjunction fallacy. The same holds here: if any link in a conjunctive chain of claims is wrong, the entire chain fails.

A quirk of human psychology is that corroborative details can often make a story feel likelier, by making it more vivid and easier to visualize. If I claim that the U.S. and Russia might break off diplomatic relations in the next five years, this might seem improbable; if I instead claim that over the next five years the U.S. might shoot down a Russian plane over Syria, leading the two countries to break off diplomatic relations, this story might seem more likely than the first, because it supplies an explicit causal link. And indeed, studies show that when two groups are randomly assigned one claim or the other in isolation, people generally assign a higher probability to the latter. Yet the latter story is necessarily less likely—or at least no more likely—because it now contains an additional (potentially wrong) claim.

I’ve been careful in my argument so far to make claims not about pathways, which paint a misleadingly detailed picture, but about destinations. Destinations are disjunctive, in that many independent paths can all lead there: the probability of reaching the destination is the probability of the union of those paths, which is at least as high as the probability of any single path. Artificial general intelligence might be reached because we come up with better algorithms on blackboards, or because we have continuing hardware growth, or because neuroimaging advances allow us to better copy and modify various complicated operations in human brains, or by a number of other paths. If one of those pathways turns out to be impossible or impractical, this doesn’t mean we can’t reach the destination, though it may affect our timelines and the exact capabilities and alignment prospects of the system. Where I’ve mentioned pathways, it’s been to help articulate why I think the relevant destinations are reachable; the outlined paths aren’t essential.
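Both probability facts in play here (a conjunction is no likelier than any of its parts; a disjunction of pathways is no less likely than any one of them) can be checked directly, using arbitrary illustrative probabilities:

```python
# Two independent claims / pathways with illustrative probabilities.
p_a, p_b = 0.6, 0.5

conjunction = p_a * p_b              # both must hold
disjunction = p_a + p_b - p_a * p_b  # at least one must hold

print(round(conjunction, 3))  # 0.3  -- no more than min(p_a, p_b)
print(round(disjunction, 3))  # 0.8  -- no less than max(p_a, p_b)

assert conjunction <= min(p_a, p_b)
assert disjunction >= max(p_a, p_b)
```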

This also applies to alignment. Regardless of the particular purposes we put AI systems to, if they strongly surpass human intelligence, we’re likely to run into many of the same difficulties with ensuring that they’re learning the right goals, as opposed to learning a close approximation of our goal that will eventually diverge from what we want. And for any number of misspecified goals highly capable AI systems might end up with, resource constraints are likely to create an adversarial relationship between the system and its operators.

To avoid inadvertently building a powerful adversary, and to leverage the many potential benefits of AI for the common good, we will need to find some way to constrain AGI to pursue limited goals or to employ limited resources; or we will need to find extremely reliable ways to instill AGI systems with our goals. In practice, we will surely need both, along with a number of other techniques and hacks for driving down risk to acceptable levels.

Why Work On This Now?

Suppose that I’ve convinced you that AGI alignment is a difficult and important problem. Why work on it now?

One reason is uncertainty. We don’t know whether it will take a short or long time to invent AGI, so we should prepare for short horizons as well as long ones. And just as we don’t know what work is left to do in order to make AGI, we don’t know what work is left to do in order to align AGI. This alignment problem, as it is called, may turn out to be more difficult than expected, and the sooner we start, the more slack we have. And if it proves unexpectedly easy, that means we can race ahead faster on capability development once we’re confident we can use the resulting systems well.

On the other hand, starting work early means that we know less about what AGI will look like, and our safety work is correspondingly less informed. The research problems outlined above, however, seem fairly general: they’re likely to be applicable to a wide variety of possible designs. Once we have exhausted the low-hanging fruit and run out of obvious problems to tackle, the cost-benefit comparison here may shift.

Another reason to prioritize early alignment work is that AI safety may help shape capabilities research in critical respects.

One way to think about this is technical debt, a programming term used to refer to the development work that becomes necessary later because a cheap and easy approach was used instead of the right approach. One might imagine a trajectory where we increase AI capabilities as rapidly as possible, reach some threshold capability level where there is a discontinuous increase in the dangers (e.g., strong self-improvement capabilities), and then halt all AI development, focusing entirely on ensuring that the system in question is aligned before continuing development. This approach, however, runs into the same challenges as designing a system first for functionality, and then later going back and trying to “add in” security. Systems that aren’t built for high security at the outset generally can’t be made highly secure (at reasonable cost and effort) by “tacking on” security features much later on.

As an example, consider how strings were implemented in the C language. Developers chose the easier, cheaper representation (bare character arrays with no built-in bounds checking) instead of a more secure one, leading to countless buffer overflow vulnerabilities that have been painful to patch in systems written in C. Figuring out the sort of architecture a system needs to have and then building with that architecture from the start seems to be much more reliable than building an architecture and hoping that it can be easily modified to also serve another purpose. We might find that the only way to build an alignable AI is to start over with a radically different architecture.

Consider three fields that can be thought of as normal fields under conditions of unusual stress:

  • Computer security is like computer programming and mathematics, except that it also has to deal with the stresses imposed by intelligent adversaries. Adversaries can zero in on weaknesses that would only come up occasionally by chance, making ordinary “default” levels of exploitability highly costly in security-critical contexts. This is a major reason why computer security is famously difficult: you don’t just have to be clear enough for the compiler to understand; you have to be airtight.
  • Rocket science is like materials science, chemistry, and mechanical engineering, except that it requires correct operation under immense pressures and temperatures on short timescales. Again, this means that small defects can cause catastrophic problems, as tremendous amounts of energy that are supposed to be carefully channeled end up misdirected.
  • Space probes that we send on exploratory missions are like regular satellites, except that their distance from Earth and velocity put them permanently out of reach. In the case of satellites, we can sometimes physically access the system and make repairs. This is more difficult for distant space probes, and is often impossible in practice. If we discover a software bug, we can send a patch to a probe—but only if the antenna is still receiving signals, and the software that accepts and applies patches is still working. If not, your system is now an inert brick hurtling away from the Earth.

Loosely speaking, the reason AGI alignment looks difficult is that it shares core features with the above three disciplines.

  • Because AGI will be applying intelligence to solve problems, it will also be applying intelligence to find shortcuts to the solution. Sometimes the shortcut helps the system find unexpectedly good solutions; sometimes it helps the system find unexpectedly bad ones, as when our intended goal was imperfectly specified. As with computer security, the difficulty we run into is that our goals and safety measures need to be robust to adversarial behavior. We can in principle build non-adversarial systems (e.g., through value learning or by formalizing limited-scope goals), and this should be the goal of AI researchers; but there’s no such thing as perfect code, and any flaw in our code opens up the risk of creating an adversary.
  • More generally speaking, because AGI has the potential to be much smarter than the people and systems that we’re used to and to discover technological solutions that are far beyond our current capabilities, safety measures we create for subhuman or human-level AI systems are likely to break down as these capabilities dramatically increase the “pressure” and “temperature” the system has to endure. For practical purposes, there are important qualitative differences between a system that’s smart enough to write decent code, and one that isn’t; between one that’s smart enough to model its operators’ intentions, and one that isn’t; between one that isn’t a competent biochemist, and one that is. This means that the nature of progress in AI makes it very difficult to get safety guarantees that scale up from weaker systems to smarter ones. Just as safety measures for aircraft may not scale to spacecraft, safety measures for low-capability AI systems operating in narrow domains are unlikely to scale to general AI.
  • Finally, because we’re developing machines that are much smarter than we are, we can’t rely on after-the-fact patches or shutdown buttons to ensure good outcomes. Loss-of-control scenarios can be catastrophic and unrecoverable. Minimally, to effectively suspend a superintelligent system and make repairs, the research community first has to solve a succession of open problems. We need a stronger technical understanding of how to design systems that are docile enough to accept patches and shutdown operations, or that have carefully restricted ambitions or capabilities. Work needs to begin early exactly because so much of the work operates as a prerequisite for safely making further safety improvements to highly capable AI systems.

This article appeared in Skeptic magazine 22.2 (2017)

This looks like a hard problem. The problem of building AGI in the first place, of course, also looks hard. We don’t know nearly enough about either problem to say which is more difficult, or exactly how work on one might help inform work on the other. There is currently far more work going into advancing capabilities than advancing safety and alignment, however; and the costs of underestimating the alignment challenge far exceed the costs of underestimating the capabilities challenge. For that reason, this should probably be a more mainstream priority, particularly for AI researchers who think that the field has a very real chance of succeeding in its goal of developing general and adaptive machine intelligence.

About the Author

Matthew Graves is a staff writer at the Machine Intelligence Research Institute in Berkeley, CA. Previously, he worked as a data scientist, using machine learning techniques to solve industrial problems. He holds a master’s degree in Operations Research from the University of Texas at Austin.

Categories: Critical Thinking, Skeptic

eSkeptic for November 8, 2017

 feed - Wed, 11/08/2017 - 12:00am

In this week’s eSkeptic:

SKEPTIC EXCLUSIVE FILM CLIP Bill Nye: Science Guy (a new documentary)

Bill Nye is a man on a mission: to stop the spread of anti-scientific thinking across the world. The former star of the popular kids show Bill Nye The Science Guy is now the CEO of The Planetary Society, an organization founded by Bill’s mentor Carl Sagan, where he’s launching a solar propelled spacecraft into the cosmos and advocating for the importance of science, research, and discovery in public life. With intimate and exclusive access — as well as plenty of wonder and whimsy — this behind-the-scenes portrait of Nye follows him as he takes off his Science Guy lab coat and takes on those who deny climate change, evolution, and a science-based world view. The film features Bill Nye, Neil deGrasse Tyson, Ann Druyan, and many others.

Below, you can watch an Exclusive Clip from the film in which Bill Nye has a few words with Ken Ham — founder of the Creation Museum in Petersburg, Kentucky, which promotes a pseudoscientific, young Earth creationist explanation of the origin of the Universe based on a literal interpretation of the Genesis creation narrative in the Bible.


A NEW STORY! How Phil Zuckerman Became a Card-Carrying Skeptic

As we announced a few weeks ago in eSkeptic, we asked several friends to tell us about those “aha!” moments that led to their becoming skeptical thinkers. As promised, here is another one of their incredible stories on YouTube. Enjoy!

Phil Zuckerman is a professor of sociology and secular studies at Pitzer College in Claremont, California, and he is a card-carrying (and corn cob pipe gnawing) skeptic. He is the author of several books, including: Living the Secular Life (2015), and Society Without God (2008).


Tell us your story and become a card-carrying skeptic! Thank you for being a part of our first 25 years. We look forward to seeing you over the next 25. —SKEPTIC

Become a Card-Carrying Skeptic

It’s possible that artificially intelligent systems might end up far more intelligent than any human. In this week’s eSkeptic, Matthew Graves warns that the same general problem-solving ability that makes artificial superintelligence a uniquely valuable ally may make it a uniquely risky adversary. This article appeared in Skeptic magazine 22.2 (2017).

Why We Should Be Concerned About Artificial Superintelligence

by Matthew Graves

The human brain isn’t magic; nor are the problem-solving abilities our brains possess. They are, however, still poorly understood. If there’s nothing magical about our brains or essential about the carbon atoms that make them up, then we can imagine eventually building machines that possess all the same cognitive abilities we do. Despite the recent advances in the field of artificial intelligence, it is still unclear how we might achieve this feat, how many pieces of the puzzle are still missing, and what the consequences might be when we do. There are, I will argue, good reasons to be concerned about AI.

The Capabilities Challenge

While we lack a robust and general theory of intelligence of the kind that would tell us how to build intelligence from scratch, we aren’t completely in the dark. We can still make some predictions, especially if we focus on the consequences of capabilities instead of their construction. If we define intelligence as the general ability to figure out solutions to a variety of problems or identify good policies for achieving a variety of goals, then we can reason about the impacts that more intelligent systems could have, without relying too much on the implementation details of those systems.

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI. […]

Continue reading

2018 | IRELAND | JULY 15–AUGUST 2 One of the best geology tours we’ve ever offered: an epic 19-day tour of the Emerald Isle!

Ireland’s famed scenic landscape owes its breathtaking terrain to a dramatic 1.75-billion-year history of continental collisions, volcanoes, and glacial assault. Join the Skeptics Society for a 19-day immersive tour of the deep history of the Emerald Isle, while experiencing the music, hospitality, and verdant beauty that make Ireland one of the world’s top travel destinations.

For complete details about accommodation, airfare, and tour pricing, please download the detailed information and registration form or click the green button below to read the itinerary, and see photos of some of the amazing sites we will see.

Get complete details

Download registration form

Categories: Critical Thinking, Skeptic

Science-Based Veterinary Medicine

neurologicablog Feed - Tue, 11/07/2017 - 5:16am

The Royal College of Veterinary Surgeons (RCVS) is a UK-based professional organization for veterinary surgeons and nurses. They describe their mission as:

We aim to enhance society through improved animal health and welfare. We do this by setting, upholding and advancing the educational, ethical and clinical standards of veterinary surgeons and veterinary nurses.

They recently came out with a statement regarding complementary and alternative medicine, essentially setting the standard for their profession in the UK. There are some good parts to the statement, but also some dramatic weaknesses which are representative, in my opinion, of the broader issues of how academia is dealing with the CAM phenomenon.

The Case for Science-Based Medicine

Before we get to the statement, let me review my position on the matter. As many readers will likely know, I am a strong advocate for what I call science-based medicine. The SBM approach, at its core, is simple – we advocate for one science-based standard for the health-care profession. This means that treatments which are safe and effective are preferred over those that are either unsafe or ineffective. Effectiveness and safety, of course, occur on a continuum and so individual decisions need to be made based on an overall assessment of risk vs benefit.

Further, the best way to assess the safety and efficacy of an intervention is by a thorough, transparent, and unbiased assessment of the entirety of the scientific evidence. This is where things can get really wonky, which is why specific expertise is required to make such assessments. If you are interested in the details there are a few hundred articles you can read either here or on the SBM website. But here is the short version:

SBM considers both basic science and clinical evidence. The basic science is needed in order to assess the plausibility of any claim or intervention. Further, understanding plausibility (or prior probability) is necessary in order to interpret the clinical evidence. You literally cannot properly interpret the statistical probability of a treatment working unless you know the prior probability, which is dependent upon plausibility.
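The dependence of a trial’s interpretation on prior probability can be made concrete with a toy Bayesian calculation. The power, false-positive rate, and priors below are illustrative numbers, not taken from any particular study:

```python
# Probability that a treatment really works, given a "positive" trial result,
# for a study with statistical power 0.8 and false-positive rate 0.05.
# By Bayes' rule, the answer depends heavily on the prior plausibility.

def posterior(prior, power=0.8, false_positive=0.05):
    true_pos = power * prior                 # works AND tests positive
    false_pos = false_positive * (1 - prior) # doesn't work AND tests positive
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.50), 3))  # plausible treatment: ~0.941
print(round(posterior(0.01), 3))  # implausible treatment (e.g. homeopathy): ~0.139
```

With a plausible mechanism, a positive trial is strong evidence; with a highly implausible one, the very same result leaves the treatment probably ineffective, which is why plausibility must inform the reading of clinical evidence.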

In addition you need rigorous clinical evidence that shows a specific, consistent, replicable and clinically significant effect of a specific intervention, properly controlling for other relevant variables. Yes, you really do need this. I am not just being persnickety. The evidence clearly shows that when interventions are adopted prior to this level of evidence they are overwhelmingly likely to be reversed with later more rigorous evidence.

We can argue about the exact optimal threshold of evidence we should require before adopting a treatment, but many reviews of the literature and of practice indicate that this threshold should be higher than the current standard in place, and higher than most people think. Otherwise you are more likely to be causing harm than good, and that is the ultimate goal – to make sure we are helping people and not hurting them.

Further, placebo effects are transient and subjective, and do not represent actual improvement in any disease. At best they provide a short term distraction from subjective symptoms. They are not worth pursuing for their own sake, and certainly do not justify interventions which are not science-based.

Given the high stakes within health care, professional ethics requires that we make (collectively and individually) our best efforts to provide science-based interventions, and to avoid the waste and abuse that comes from unscientific claims or practices. Also, the ethical requirements of informed consent and patient autonomy require that we be honest and candid with patients about the scientific basis of our recommendations and about a realistic assessment of risk vs benefit.

Complementary and alternative medicine (CAM) takes a very different approach. CAM proponents are specifically advocating for a double-standard, one in which a science-based assessment of risk vs benefit is not required. They further seek to weaken and lower the standards of scientific evidence, frequently misinterpret the evidence in a biased manner, make false claims about placebo effects, and favor the freedom of the practitioner over the rights and needs of the patient.

However, there are billions of dollars to be made selling snake oil, and the purveyors of what was previously called simply “health fraud” have invested some of those billions lobbying for favorable laws and regulations, bribing hospitals and academic institutions with donations, setting up their own alternative journals and organizations, and marketing their deceptive narrative to the public.

The RCVS Statement

With this background, let’s take a look at the RCVS statement. They admit that forming their official position was controversial, with passionate views on both sides. That is undoubtedly true, but it is the job of a professional organization to make the right decision, not to cater to a populist insurgency. Unfortunately, it seems that the RCVS caved to pressure and decided to “split the baby.” They begin:

“We would like to highlight our commitment to promoting the advancement of veterinary medicine on sound scientific principles and to reiterate the fundamental obligation on our members as practitioners within a science-based profession, which is to make animal welfare their first consideration.”

OK, so far so good. I like the nod to “science-based.” That is critical, in my opinion. The modern medical profession should be overtly science-based, otherwise we are just witch-doctors. They continue:

“In fulfilling this obligation, we expect treatments offered by veterinary surgeons are underpinned by a recognised evidence base or sound scientific principles. Veterinary surgeons should not make unproven claims about any treatments, including prophylactic treatments.”

Again, very nice. One tweak – I would change “recognised evidence base or sound scientific principles” to “recognised evidence base and sound scientific principles.” As I noted above, you cannot have one without the other.

They then go on to single out homeopathy, which is understandable. Homeopathy has turned into the sacrificial lamb, the one CAM treatment that academics and professionals throw under the bus in order to appear science-based. See – we reject pseudoscience. They write:

“Homeopathy exists without a recognised body of evidence for its use. Furthermore, it is not based on sound scientific principles.”

The very next statement, however, is where they go off the rails.

“To protect animal welfare, we regard such treatments as being complementary rather than alternative to treatments, for which there is a recognised evidence base or which are based in sound scientific principles.

“It is vital to protect the welfare of animals committed to the care of the veterinary profession and the public’s confidence in the profession that any treatments not underpinned by a recognised evidence base or sound scientific principles do not delay or replace those that do.”

Ugh. Given their statement about passions on both sides, I suspect this was their bone to the snake-oil peddlers in their ranks. They bought into the CAM narrative. Essentially they are saying that it is OK to sell pure pseudoscience and nonsense to pet owners, and to subject animals to utterly worthless interventions, as long as they also provide real medicine first. Hey, this way you get to charge for real and fake medicine.

This statement utterly undercuts everything that comes before it. It is also naive to think that resorting to fake medicine is ever benign. As a clinician I can tell you that there is almost never a time when there is nothing science-based to do for a patient. That does not mean we can cure everything, but you can always manage symptoms, improve quality of life, and help your patients deal with their condition.

Giving them fake interventions is always inappropriate, robs them of their resources (financial, time, emotional), gives false hope, betrays their trust and the requirements of patient autonomy and informed consent, and is simply fraud. Sure, it is worse when it replaces real treatment, but in practice this is almost always what happens. “Complementary” or “integrative” approaches are a fiction. When you actually look at what such practitioners do, they incorporate fake interventions early in their management, when science-based interventions are still available. The “complementary” schtick is just a cover.

Also, you simply cannot have an adequate understanding of the relationship between science and medicine and think it is reasonable to give your patient homeopathy or anything similarly pseudoscientific. CAM erodes the public and professional understanding of science, sows confusion, and weakens regulations and professional standards. The RCVS statement is, ironically, evidence of that very thing. Here we have a professional organization whose stated mission is to promote the health of animals with science-based interventions, saying it is OK to give magic water to animals and charge their owners for it.

I don’t know how much this is a failure on the part of the RCVS to recognize the problem, or a failure of political will to deal with it appropriately. It is some combination of both. It is also representative of the broader problem within the general medical profession.

Modern medicine is failing to deal with its own populist and fraudulent insurgency, and it is eroding the profession and our contract with society.

Categories: Skeptic

Skeptoid #596: How to Assess a Documentary

Skeptoid Feed - Mon, 11/06/2017 - 4:00pm
Some tips to assess whether a documentary is good science or just propaganda.
Categories: Critical Thinking, Skeptic

US Government Report Affirms Climate Change

neurologicablog Feed - Mon, 11/06/2017 - 5:09am

The U.S. Global Change Research Program Climate Science Special Report was recently published, and its conclusions are crystal clear:

“This assessment concludes, based on extensive evidence, that it is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century. For the warming over the last century, there is no convincing alternative explanation supported by the extent of the observational evidence.”

That conclusion is nothing new to those following the science of climate change for the last couple of decades. The more this question is studied, the more data is gathered, the firmer the conclusion becomes – the planet is warming due to human release of greenhouse gases, such as CO2. There are error bars on how much warming, and the exact effects are hard to predict, but that’s it. The probable range of warming and effects is not good, however. It will be bad; the only real debate is about how bad and how fast.

The conclusions of the report, therefore, at least scientifically, are not surprising. It was, however, politically surprising. The special report began in 2015, under Obama. Because of Trump’s stated position that global warming is a Chinese hoax, and his appointment of many global warming deniers to key positions, it was feared that his administration would slow or frustrate the publication of this report.

However, according to the NYT, Trump himself was simply unaware of the report. Further, the fate of the report was largely in the hands of those amenable to following the science, rather than putting a huge political thumb on the scale. As a result the report was not hampered or altered. It was approved by 13 agencies who reviewed its findings.

The report adds to the consensus of consensus that global warming is real and human-caused. What I mean by the “consensus of consensus” is that multiple reviews by expert panels have come to the same conclusion about the consensus of scientific evidence. There are only fringe outliers, as there are with most scientific questions (no matter how strong the consensus).

It remains to be seen how Trump himself or his administration will respond to the report. However, the global warming denier community has already dismissed it as the result of “Obama holdovers.”

There is good reason to be pessimistic about the effects this report will have on public opinion. While it does seem that public opinion is slowly moving in the direction of accepting the science of global warming, there is a strong ideological influence on what people believe. A study from March 2017 surveyed 9,500 people over several years and found that the strongest predictor of their views on climate change was their party affiliation.

In other words, you could predict with a high level of accuracy someone’s attitudes toward climate change if you knew only their party affiliation. This effect was strengthened the more they paid attention to the news. Therefore consuming information itself did not move people toward the scientific consensus, just toward their party line.

I do want to point out, because this point is often missed, that this motivated reasoning phenomenon is not universal but appears to be in proportion to the degree to which issues are strongly ideological and tied to tribal affiliation.

Of course, in an ideal world this would not be the case. Science should speak for itself, and should inform politics but not be determined by it. Party affiliation should have nothing to do with the scientific consensus on a scientific question. This highlights the importance of separating science from ideology, and the need for better education in philosophy and critical thinking. This is a failure of thinking clearly and scientific literacy.

Both sides, of course, will think that they are the ones who are in line with logic and evidence and that the other side is succumbing to political ideology. This does not mean that the issue is necessarily symmetrical – that both sides are equally wrong. Sometimes the science happens to be in line with our ideology. In those cases the accuracy of your views on the science is almost incidental, or at least it does not provide convincing evidence that you will accept scientific conclusions regardless of their ideological implications.

What is convincing evidence is when someone accepts a scientific consensus on a question even when it is inconvenient to their ideology or party affiliation. Again, I am not saying there is absolute symmetry, but liberals, for example, should not be smug about their acceptance of the scientific consensus on climate change unless they also accept the scientific consensus on genetically modified food, organic farming, vaccines, alternative medicine, and nuclear energy.

Part of the problem is motivated reasoning. Part of the problem (perhaps a growing part) is the echo-chamber effect. But scientific literacy also plays a huge role, and here I am not just talking about factual scientific knowledge but the ability to evaluate scientific research and opinions, and to determine what the consensus of scientific opinion is and how solid it is. This means not citing retracted papers, fringe opinions, or preliminary studies as if they were definitive, for example.

And of course critical thinking is essential – knowing how to avoid common pitfalls such as logical fallacies and conspiracy thinking.

But the key concept to understand with regard to the relationship between scientific questions and ideology is this – don’t expect or demand that the science will always be maximally convenient to your political views. Understand that, by chance alone, it often won’t be. You should strive to be most suspicious of scientific claims when they do seem to support your political ideology, because of the motivation to accept such conclusions uncritically. Further, structure your ideological value-based opinions in such a way that they can accommodate whatever conclusions science comes to.

In other words, if you are pro-environment, then support whatever policies are science-based, rather than choose the scientific conclusions that are in line with environmentalist ideology. If you value the free market, then propose rational free-market solutions to the problems that the scientific evidence says we face. Don’t deny the science to make it more convenient for a free-market ideology.

This is where philosophical literacy comes in – understanding the difference between value-based opinions and empirical questions of fact. I also think it is critical to value the truth as part of your ideology. Following a valid logical process needs to be highly valued in itself, and not, therefore, easily subverted to other values.

Otherwise you end up denying a strong scientific consensus because the pundits on news outlets that make you feel good about your political affiliation tell you it’s a hoax.

Categories: Skeptic

