Critical Thinking Feed

Skeptoid #599: Listener Feedback: Creationism and More Dead Paul

Skeptoid Feed - Mon, 11/27/2017 - 4:00pm
Some updates, notes, and extra information sent in by listeners about recent episodes.
Categories: Critical Thinking, Skeptic

Skeptic Six-Day Sale (25% Off, Now Thru Cyber Monday)

Skeptic.com feed - Wed, 11/22/2017 - 12:00am

In this week’s eSkeptic:

25 YEARS STRONG Your ongoing patronage will help ensure that sound scientific viewpoints are heard worldwide.

2017 was another banner year for science, skepticism, and critical thinking. We celebrated our 25th anniversary with a spectacular event in New York City that featured our Executive Director Dr. Michael Shermer and a number of skeptical and scientific Internet celebrities including the ASAP Science guys, the science rapper Baba Brinkman, neuroscientist Dr. Heather Berlin, pop star Michael Posner, magician Prakash Puru, The Thinking Atheist podcast host Seth Andrews, and others. Thanks to your continuing support we are looking forward to 2018 and are pleased to tell you about some of the great success we have had with new projects launched in 2017. Click the button below to read the 4-page update from Michael Shermer, the Skeptics Society’s Executive Director.

Read the letter from Michael

Ways to Make Your Tax-Deductible Donations

You can make a donation online using your credit card, or by downloading a printable donation card to make your donation by cheque in the mail. You may also make a donation by calling 1-626-794-3119. The Skeptics Society is a US 501(c)(3) nonprofit educational organization. Donations are tax deductible.

Make a tax-deductible donation
to The Skeptics Society

NOW THROUGH CYBER MONDAY 25% off almost everything in store

It’s our best sale of the year, on now through Cyber Monday. SAVE 25% on almost everything at Shop Skeptic, including: books, t-shirts, stickers, lapel pins, print subscriptions, and gift certificates. The only products that aren’t 25% off are back issues of Skeptic magazine (print edition), which are over 80% off! See details further down the page. Shop now.

Shipping not included, while supplies last. Sale ends at 23:59:59 Monday, November 27, 2017, PST.

ORDER FOR CHRISTMAS DELIVERY

Order by December 6 for shipments outside the US.
Order by December 13 for shipments inside the US.
See our Holiday Shipping page for complete details.

Shop Now, Save 25%

NOW THROUGH CYBER MONDAY 25% off digital subscriptions & back issues

SAVE 25% on digital subscriptions and back issues at PocketMags.com/Skeptic-Magazine. Click the button below, add a digital subscription or back issue(s) to your shopping cart, and enter promo code ‘SKEPTIC25’ at checkout. After you’ve made your purchase(s), your PocketMags account allows you to synchronize your purchases to your phone or tablet by signing in to the Skeptic Magazine App (or PocketMags App) using the same username and password you used to make your purchase(s) on PocketMags.com. Questions? Email the Skeptic Webmaster.

Buy Skeptic Digital
at PocketMags.com

UNPRECEDENTED SAVINGS $1 or less per issue. Over 80% off!
(plus shipping)

Save 80–90% on printed back issues of Skeptic magazine, now through Cyber Monday. At 50¢–$1 per issue (depending on quantity purchased), there has never been a better time to complete your collection of Skeptic magazine with unprecedented savings during our inventory blowout!

INVENTORY BLOWOUT Buy more. Save more (while quantities last!)

We need to make room in our warehouse. So, we are practically giving away printed back issues of Skeptic magazine for as low as 50¢ per issue (i.e. when you buy 50 issues at a savings of a whopping $275)! The more you buy, the bigger the discount:

  • 1 for $1
  • 5 for $4   (80¢ ea.)
  • 10 for $7   (70¢ ea.)
  • 25 for $15   (60¢ ea. — That’s a $135 savings!)
  • 50 for $25   (50¢ ea. — That’s a $275 savings!)

Shipping not included. Shipping will be calculated based on weight & destination and added to your total at checkout. While supplies last. Back issues that are no longer available in print may be available in digital format (except for a couple of our earliest volumes, which will hopefully be digitized next year).

Stephen Jay Gould called Skeptic magazine “The best journal in the field.” For 25 years, our definitive skeptical journal has promoted science and reason. Our in-depth articles explore and inform. Buy it. Read it. Share it. Help us make the world a more rational place.

Browse all Back Issues
on sale for $1 or less!

MAY WE SUGGEST these informative and in-depth issues…

Issue 6.1 (1998)

Science & Society

E.O. Wilson: Can We Unify All Knowledge?; Deconstructing James Van Praagh Talking to the Dead; Emily Rosa Tests Therapeutic Touch; The Ancient Evil Eye; James Randi on New Age Tech; Legalizing Fraud in the Name of Religion; Holocaust Revisionism; Stephen Hawking v. Frank Tipler; Skeptic’s Guide to the Drug Policy Debate; Objectivity in Journalism; Graduate Record Exam Fringe Science…

Buy this back issue

Issue 6.4 (1998)

John F. Kennedy

JFK Facts and Fictions; JFK Case Still Open: Skepticism and the Assassination of JFK; JFK Assassination Science; James Randi on Dowsing; Steven T. Asma on Critical Thinking; The Case For and Against God: A Forum Exchange; The Lost World of Jack Horner: An Interview with the World’s Most Famous Dinosaur Digger; Anastasia: Miraculous Survival Myth; Aliens Among Us?; Psychic Math!; How to Fake a UFO Photo…

Buy this back issue

Issue 3.1 (1995)

Pseudomedicine

Life After False Memory Syndrome; The Mattoon Phantom Gasser Mass Hysteria; Why Should Skeptics Understand Religion?; Skeptical Perspectives: A Heretic-Scientist Among the Spiritualists; Homeopathy; Spiritual Belief Systems Try to Compete as Alternatives to Scientific Health Care; Therapeutic Touch; Leftist Science; Star Trek’s Meaning; Liquefying “Blood” …

Buy this back issue

Issue 7.3 (1999)

Millennium Madness

A Critical Analysis of James Redfield and The Celestine Prophecy; Search for Immortality; The Alpha and the Omega: The Creation and the End in Biblical Eschatology; The Fire That Will Cleanse: Millennial Meanings and the End of the World; That’s All Folks! It’s the End of the World … Again; Apocalypse Never: The Search for Immortality as Millennial Phenomena; Myth and Science; Political Extremism; Autopsy Aliens…

Buy this back issue

Issue 4.1 (1996)

Evolutionary Psychology

Quadro Tracker Dowsing Stick, Tested; Jehovah’s Witnesses/End of World; Physics; McIver Guide to Evolutionary Psychology/Nature of Human Nature; Salter Evolutionary Psychology as Protoscience; Critical Analysis of Evolutionary Psychology; Interview: Stephen Jay Gould; Gould’s Dangerous Idea: Contingency/Necessity/Nature of History; Reviews: Darwin’s Dangerous Idea; The Origin of Satan; The Final Superstition…

Buy this back issue

Issue 1.4 (1992)

Witches, Heretics & Scientists

Special Section: The Price of Intolerance; Spirits, Witches & Science: Why the Rise of Science Encouraged Belief in the Supernatural in 17th-Century England; Ideological Immune System: Resistance to New Ideas in Science; Psychology of Resistance to the Heretical-Science of Copernicus; Edgar Cayce Foundation Responds to a Skeptical Critique and we reply; It’s Baaack: The Nature-Nurture Debate…

Buy this back issue

Issue 5.1 (1997)

Environmental Science

Ecologists vs. Economists: A Quick & Dirty Guide to the Environmental Debate: Is Environmental Science Polluted by Politics?; The Beautiful People Myth; Population Risk Assessment; The Not-So-Wise-Use Movement; Julian Simon Slams Eco-Crybabies; Top Scientists’ Eco-warning; Fred Crews on Modern American Witch Hunters; Ancient Astro-NOTS; Murky Origins of Hale-Bopp UFO Fiasco; Human Magnets; Dowsing; Futurists…

Buy this back issue

Issue 7.2 (1999)

Cloning & Genetic Engineering

Cloning Science & Ethics; Science’s Moral Limits; How Evolution Increases Information in the Genome; Group Selection & Origins of Evil; Historical Perspective on Theology & Evolutionary Psychology; Urban Legends; Hawking Expanding Universe Trick; Deconstructing JFK: Assassination Debate Continues; De-Population Myths; Is Anybody Out There?; Freethinkers, Fundamentalists, Fake Quotes …

Buy this back issue

Issue 3.2 (1995)

AIDS

Does HIV Really Cause AIDS? A Case Study in Skepticism Taken Too Far; AIDS Part I: The Skeptics and Their Claims; Part II: How Skepticism Went Astray; Part III: Lessons on How Science Works; An interview with the author of The Bell Curve, Charles Murray; Alfred Russel Wallace, Charles Darwin, and the Resolution of a Scientific Priority Dispute; The Question All Skeptics are Asking: Why Did God Make Rice Cakes?…

Buy this back issue

Issue 18.2 (2013)

Gender Differences

What Science Says and Why it’s Mostly Wrong; Gender and the Paranormal; What Science Says About the Soul; Interview with Controversial Anthropologist Napoleon Chagnon; Is Earth’s Magnetic Field Reversal Dangerous?; Scientology Self Help Handbook; Can We Trust Science Media Reports?; Why The Universe Exists; Skeptics in Film; Witchcraft Ceremony; Junior Skeptic: Alien Invaders!…

Buy this back issue

Issue 7.1 (1999)

The Game of Influence

The Game of Influence: Understanding the Hidden Dynamics of Communication; Selection for Credulity: A Biologist’s View of Belief; Legitimatizing Psychology’s Prodigal Son: Re-considering Hypnosis for the 21st Century; How the Public Relations Industry Compromises Democracy; The Knowledge Filter: Reality Must Take Precedence in the Search for Truth…

Buy this back issue

Issue 1.2 (1992)

Cryonics: Can Science Cheat Death?

Can Science Cheat Death?; Technical Aspects; The Society for the Recovery of Persons Apparently Dead: History of Resuscitation; Basic Q&A on Cryonics from Alcor Life Extension Foundation; Physicist in the White House?; Black Holes; Acupuncturists and Chiropractors Fined; Secular Alcohol Treatment; Laws of Robotics; Establishing a Miracle; Use & Abuse of Statistics in the “Real World”…

Buy this back issue

Issue 21.4 (2016)

Deception in Cancer Treatment

Deceptive Cancer-care Industry Marketing; Amityville Hoax at 40; Alien Skulls?; Meaning Behind the Nazca Geoglyphs; Clown Sightings Rattle Nerves; Case for a Galactic Defense System; Is “Spirituality” Meaningless?; Are We Living in a Computer Simulation?; One of the Most Fundamental Sources of Error in Human Judgment; Thinking Critically about Public Discourse; Anti-Aging Claims…

Buy this back issue

Issue 19.1 (2014)

Boston Bombing Conspiracy Theories

Conspiracy in Boston: Disentangling Boston Marathon Bombing Conspiracy Theories; Miracle of Large Numbers Explains Seemingly Miraculous Events; Reasons for Hope in the Science of Artificial Intelligence; Faith Healing Tragedies; The Science of Memory and the Dylan Farrow/Woody Allen Case; Photographing Phantoms; Strange and Unusual Religious Beliefs and Practices in the United States; Cosmos: A Spacetime Odyssey…

Buy this back issue

Remember, the more you buy, the bigger the discount:

  • 1 for $1
  • 5 for $4   (80¢ ea.)
  • 10 for $7   (70¢ ea.)
  • 25 for $15   (60¢ ea. — That’s a $135 savings!)
  • 50 for $25   (50¢ ea. — That’s a $275 savings!)

Browse all Back Issues
on sale for $1 or less!

Below, you’ll find a few books we recommend for your library. These also make great gifts!

AUTOGRAPHED HARDCOVER, 1st EDITION How Science and Reason Lead Humanity toward Truth, Justice, and Freedom

Reg. $32. NOW $7.13

Get a 1st edition, autographed, hardcover copy of Dr. Michael Shermer’s The Moral Arc: How Science and Reason Lead Humanity toward Truth, Justice, and Freedom, already reduced to $9.50 and now just $7.13 with the 25% discount, plus shipping (while quantities last). In this book about moral progress, Shermer demonstrates, through extensive data and heroic stories, that the arc of the moral universe bends toward truth, justice, and freedom, and that we are living in the most moral period of our species’ history.

Praise for the book

A thrilling and fascinating book, which could change your view of human history and human destiny.

—Steven Pinker

In these cynical times, where right and left foresee disaster and despair (albeit for different reasons), Shermer’s monumental opus, spanning centuries, nations, and cultures, is bound to provoke debate and open minds.

—Carol Tavris

Shermer’s thought-provoking, multidisciplinary book will engage anyone who wishes to understand rationalism as a force for morality.

—Library Journal

Buy the autographed hardcover

AUTOGRAPHED HARDCOVER 75 Collected essays from bestselling author Michael Shermer’s celebrated columns in Scientific American

Reg. $28. NOW $21

For fifteen years, bestselling author Michael Shermer has written a column in Scientific American magazine that synthesizes scientific concepts and theory for a general audience. His trademark combination of deep scientific understanding and entertaining writing style has thrilled his huge and devoted audience for years. Now, in SKEPTIC, seventy-five of these columns are available together for the first time; a welcome addition for his fans and a stimulating introduction for new readers.

Praise for the book

Dense with facts, convincing arguments, and curious statistics, this is an ingenious collection of light entertainment for readers who believe that explaining stuff is a good idea.

—Kirkus Reviews

Shermer makes a strong case for the value of the scientific endeavor and the power of rational thinking in 75 brief essays…. Each entry is insightful, informative, and entertaining.

—Publishers Weekly

Michael Shermer is a beacon of reason in an ocean of irrationality.

—Neil deGrasse Tyson

Buy the autographed hardcover

Hardcover Reg. $28
NOW $21

UFOs, Chemtrails, and Aliens: What Science Says

UFOs. Aliens. Strange crop circles. Giant figures scratched in the desert surface along the coast of Peru. The amazing alignment of the pyramids. Strange lines of clouds in the sky. Paranormal belief is alive and well in America. Donald Prothero and Tim Callahan explore why such demonstrably false beliefs thrive despite decades of education and scientific debunking. Employing the standards of scientific evidence, the authors discuss the reliability of eyewitness testimony, and the psychology of belief and conspiratorial thinking.

Buy the hardcover

Paperback Reg. $26
NOW $19.50

The New Age: Notes of a Fringe Watcher

The New Yorker calls it, “Fair, witty appraisal of cranks, quacks, and quackeries of science and pseudoscience … A very able and even-tempered presentation.” This book is a classic of skeptical literature filled with thirty-three diverse chapters: a bountiful offering of the delightful drollery and horse sense that has made Martin Gardner the undisputed dean of the critics of pseudoscience. It is also a quick way to get up to speed on many topics. Gardner is not afraid to examine the process of critical examination itself.

Buy the paperback

Paperback Reg. $16
NOW $12

A Universe from Nothing

Where did the universe come from? What was there before it? What will the future bring? And finally, why is there something rather than nothing? At long last scientists are closing in on an answer to that last question: why there is something rather than nothing, and why the universe bothers to exist at all. Dr. Krauss’ answer is based purely on the known laws of nature, showing that a universe can arise out of nothing without the aid or direction of a deity.

Buy the paperback

Paperback Reg. $16.95
NOW $12.71

Climbing Mount Improbable

A brilliant book celebrating improbability as the engine that drives life, by the acclaimed author of The Selfish Gene, The Blind Watchmaker, and The God Delusion. The human eye is so complex and works so precisely that it appears to be the product of design. How could such an intricate object have come about by chance? In writing that the New York Times called “a masterpiece,” Richard Dawkins builds a carefully reasoned and illustrated argument for evolutionary adaptation as the mechanism for life on earth.

Buy the paperback

Paperback Reg. $18
NOW $13.50

Breaking the Spell

In this New York Times Bestseller (and a definitive work on religion), Daniel Dennett (one of the “Four Horsemen” and a world-famous philosopher) asks: Is religion a product of blind evolutionary instinct or rational choice? Is it truly the best way to live a moral life? Ranging through biology, history, and psychology, Dennett charts religion’s evolution from “wild” folk belief to “domesticated” dogma.

Buy the paperback

STICK IT TO ’EM Skeptic Lapel Pin

Reg. $10. NOW $7.50

We’d like to see this lapel pin become world-famous, and we need your help to make that happen! Every skeptic (and his/her best friends) should have one of these lapel pins. They feature elegant gold-colored SKEPTIC letters, with a contrasting black background. They’re a great size for a lapel or tie-tack — about 25mm × 6mm (1″ × .25″). The pin comes in a classy little plastic box suitable for gift-giving. At this price, why not get one for every jacket you own!

Buy a Skeptic lapel pin

MORE IN STORE! Shop our entire online selection and save now through Cyber Monday

Discover more gift ideas

Categories: Critical Thinking, Skeptic

Skeptoid #598: The Hudson Valley UFO Mystery

Skeptoid Feed - Mon, 11/20/2017 - 4:00pm
Hundreds of people watched this UFO over the Hudson River Valley many times between 1983 and 1984.
Categories: Critical Thinking, Skeptic

Dr. Robert Trivers — Evolutionary Theory & Human Nature

Skeptic.com feed - Thu, 11/16/2017 - 2:00pm

Dr. Robert Trivers and Dr. Michael Shermer have a lively conversation on everything from evolutionary theory and human nature to how to win a knife fight and Trivers’ membership in the Black Panthers. Don’t miss this engaging exchange with one of the most interesting scientists of the past half century.

This Science Salon followed Dr. Robert Trivers’ lecture on ‘The Evolutionary Genetics of Honor Killings,’ which he gave in Dr. Michael Shermer’s Skepticism 101 course at Chapman University on Thursday November 16, 2017:

Categories: Critical Thinking, Skeptic

eSkeptic for November 15, 2017

Skeptic.com feed - Wed, 11/15/2017 - 12:00am

In this week’s eSkeptic:

SHERMER SHREDS Mass Public Shootings & Gun Violence: Part I

At 59 dead and over 540 wounded, the Las Vegas massacre that took place on October 1, 2017, is now the worst mass public shooting in U.S. history.

As is usually the case with such gun-related tragedies, within hours social media and political punditry were abuzz with talk of gun control and Second Amendment rights, with both the left and the right marshaling their data and arguments. The two most common arguments made in defense of gun ownership are (1) self-protection and (2) as a bulwark against tyranny.

In this video, Michael Shermer “shreds” these ideas with skeptical scrutiny.

FOLLOW MICHAEL SHERMER ON
Twitter, Facebook, and YouTube

BABA BRINKMAN’S SKEPTIC RAP Rap Artist Performs Science-Based Hip-Hop

Baba Brinkman is a Canadian rap artist based in New York. He is best known for his “Rap Guide” series of science-based hip-hop albums and theatre shows, including Rap Guides to Evolution, Climate Change, and Religion.

The world premiere of this rap was performed at a live variety science show hosted by Dr. Michael Shermer, in partnership with YouTube Space NY in late September 2017, celebrating 25 years of Skeptic magazine and the Skeptics Society combating ‘fake news.’ The event explored the question: ‘How Can We Know What’s True?’.

FOLLOW BABA BRINKMAN ON
Twitter, Facebook, and YouTube

A NEW STORY! How Brian Brushwood Became a Card-Carrying Skeptic

As we announced a few weeks ago in eSkeptic, we asked several friends to tell us about those “aha!” moments that led to their becoming skeptical thinkers. As promised, here is another one of their incredible stories on YouTube. Enjoy!

American magician, podcaster, author, lecturer, and comedian, Brian Brushwood is the host of Scam School for Discovery, Hacking the System for National Geographic, and co-host of The Modern Rogue. He is the author of several books including: Scam School: Your Guide to Scoring Free Drinks, Doing Magic & Becoming the Life of the Party, and The Professional’s Guide to Fire Eating.

TELL US YOUR STORY!

Tell us your story and become a card-carrying skeptic! Thank you for being a part of our first 25 years. We look forward to seeing you over the next 25. —SKEPTIC

Become a Card-Carrying Skeptic

The Crypto-Kid
MONSTERTALK EPISODE 141

In this episode of MonsterTalk, we interview Colin Schneider, a young and enthusiastic researcher of Fortean and paranormal topics, about his research into animal exsanguination. It’s a fun discussion of the field of cryptozoology, the disturbing topic of animal mutilation, and the work done by the British organization the Centre for Fortean Zoology.

Listen to episode 141

Read the episode notes

Subscribe on iTunes

Get the MonsterTalk Podcast App and enjoy the science show about monsters on your handheld devices! Available for iOS, Android, and Windows. Subscribe to MonsterTalk for free on iTunes.

Sigmund Freud (1926). Photo by Ferdinand Schmutzer [Public domain], via Wikimedia Commons

In this week’s eSkeptic, Margret Schaefer reviews Freud: The Making of an Illusion, in which its author, Frederick Crews, convincingly argues that Freud constructed psychoanalysis on a fraudulent foundation. How did Freud convince so many people of the correctness and the profundity of his theory?

The Wizardry of Freud

by Margret Schaefer

“Clear evidence of falsification of data should now close the door on this damaging claim.”

The above is from a 2011 British Medical Journal article about Andrew Wakefield, the British physician whose “discovery” of a link between vaccination and autism fueled a worldwide anti-vaccination movement. Since its publication in 1998, the paper’s results have been contradicted by many reputable scientific studies, and Wakefield’s work was proved to be not only bad science but a fraud as well: in 2010 the UK General Medical Council found him guilty of dishonestly misrepresenting his data, struck him off the medical register, and barred him from practice.

In his new book, Freud: The Making of an Illusion, Frederick Crews presents a Freud who was just such a fraud and who deserves the same fate. This is not the first time that Crews, a bona fide skeptic whose last book, Follies of the Wise: Dissenting Essays (2007), was reviewed in the pages of this journal, has written critically about Freud. Crews had been drawn to psychoanalysis himself (disclosure: this reviewer was, too) in the 1960s and early 1970s when, along with the late Norman Holland, he pretty much created the field of psychoanalytic literary criticism. But a prestigious fellowship to the Stanford Center for Advanced Study in the Behavioral Sciences (he was a professor of English at UC Berkeley at the time) gave him time to delve deeper into Freud, and convinced him instead that psychoanalysis was unscientific and untenable. Since then he has contributed to the growing body of skeptical historical scholarship on Freud.

Psychoanalysis is not only pseudoscience (as most philosophers of science agree, though for different reasons), but “the queen of pseudosciences”.

Philosophers of science have indicted key concepts of Freud’s psychoanalysis such as “free association,” “repression,” and “resistance” as circular and fatally flawed by confirmation bias. Historians have tracked down the actual patients whose treatment served Freud as evidence for his theories and have sought to place Freud and his theories in the historical and cultural context of his time. Crews—to his own surprise—became well known as a major, if not the major, critic of Freud in the public eye because of a series of articles he published in the New York Review of Books in the 1990s. For Crews is that now all too rare and rapidly disappearing creature—the public intellectual—who is able to explain and make accessible an otherwise unwieldy amount of erudite scholarship in clear, elegant, and jargon-free prose. Defenders of Freud have sought to discredit him as a “Freud basher,” thereby continuing the (not so honorable) tradition that Freud began of questioning the motives of a skeptic and attributing his skepticism to “resistance” instead of answering his objections. […]

Continue reading

Categories: Critical Thinking, Skeptic

The Wizardry of Freud

Skeptic.com feed - Tue, 11/14/2017 - 10:30am

“Clear evidence of falsification of data should now close the door on this damaging claim.”

The above is from a 2011 British Medical Journal article about Andrew Wakefield, the British physician whose “discovery” of a link between vaccination and autism fueled a worldwide anti-vaccination movement. Since its publication in 1998, the paper’s results have been contradicted by many reputable scientific studies, and Wakefield’s work was proved to be not only bad science but a fraud as well: in 2010 the UK General Medical Council found him guilty of dishonestly misrepresenting his data, struck him off the medical register, and barred him from practice.

In his new book, Freud: The Making of an Illusion, Frederick Crews presents a Freud who was just such a fraud and who deserves the same fate. This is not the first time that Crews, a bona fide skeptic whose last book, Follies of the Wise: Dissenting Essays (2007), was reviewed in the pages of this journal, has written critically about Freud. Crews had been drawn to psychoanalysis himself (disclosure: this reviewer was, too) in the 1960s and early 1970s when, along with the late Norman Holland, he pretty much created the field of psychoanalytic literary criticism. But a prestigious fellowship to the Stanford Center for Advanced Study in the Behavioral Sciences (he was a professor of English at UC Berkeley at the time) gave him time to delve deeper into Freud, and convinced him instead that psychoanalysis was unscientific and untenable. Since then he has contributed to the growing body of skeptical historical scholarship on Freud.

Philosophers of science have indicted key concepts of Freud’s psychoanalysis such as “free association,” “repression,” and “resistance” as circular and fatally flawed by confirmation bias. Historians have tracked down the actual patients whose treatment served Freud as evidence for his theories and have sought to place Freud and his theories in the historical and cultural context of his time. Crews—to his own surprise—became well known as a major, if not the major, critic of Freud in the public eye because of a series of articles he published in the New York Review of Books in the 1990s. For Crews is that now all too rare and rapidly disappearing creature—the public intellectual—who is able to explain and make accessible an otherwise unwieldy amount of erudite scholarship in clear, elegant, and jargon-free prose. Defenders of Freud have sought to discredit him as a “Freud basher,” thereby continuing the (not so honorable) tradition that Freud began of questioning the motives of a skeptic and attributing his skepticism to “resistance” instead of answering his objections.

This is precisely one of the reasons that in previous books Crews has said that psychoanalysis is not only pseudoscience (as most philosophers of science agree, though for different reasons), but “the queen of pseudosciences,” because it is the only one that incorporates within its theory an explanation of why some people refuse to believe it, i.e., “unconscious resistance,” which is itself explained by Freud’s own ideas and methods—a most brilliant and masterful way of disarming criticism.

His new book, a biography of the first half of Freud’s life, with intensive focus on the period 1882–1900, examines the crucial years in which Freud was creating his “science of psychoanalysis,” which culminated in his Studies on Hysteria (1895) and The Interpretation of Dreams (1900). This time in Freud’s life has been somewhat neglected by Freud’s biographers for many reasons, including lack of sufficient available biographical information, but also because in these years Freud developed a theory of neurosis that he later said he abandoned. But Crews argues that all the principal concepts on which psychoanalysis rests were constructed at this early time. His logic is that if the roots of a tree are not sound, then the crown, no matter how beautiful and different from the roots, cannot be healthy. And recently a treasure trove of new data about Freud during this period has been released from censorship: the complete correspondence between the young Sigmund Freud and his fiancée Martha Bernays during the long four and a half years of their engagement, 1882–1886.

This correspondence, which consists of an astounding 1,539 letters in all, had been concealed from public view for some 60 years. Only a very small portion—97 letters, or 6.3% of them—had been previously published, and those in expurgated form. Their importance is attested to by the fact that Anna Freud, his daughter, kept them private at her house in London instead of depositing them in the Freud Archive along with the rest of Freud’s papers after her father’s death in 1939. It was not until her own death in 1982 that her heirs finally did deposit them in the Freud Archive—but even then it was with the stipulation that access to them be restricted until the year 2000. Why was this correspondence hidden for so long? Its content makes that clear: the letters don’t paint a flattering portrait of Freud. A reading of these letters after they became available on the Library of Congress website spurred Crews to write this book, as it confirmed to him all the suspicions about Freud’s motives and manner of working that he and others had raised before but had had to remain somewhat speculative: the fraudulent and pseudoscientific evidential base on which psychoanalysis rests.

Despite its nearly 700-page length and 22 pages of footnotes, Crews’ book is divided into sections with witty titles such as “Sigmund the Unready,” “Tending to Goldfish,” and “Girl Trouble” and is thoroughly absorbing and highly readable. He begins with an examination of Freud’s family history and early education, detailing the reasons why Freud was “unready” to undertake the study of medicine, and then focuses on Freud’s “First Temptation”: cocaine. Freud’s enthusiastic endorsement—and use of—cocaine, Crews contends, had a much greater consequence for the theory of psychoanalysis than is officially recognized. It was not a soon-to-be-discarded “youthful indiscretion,” as Ernest Jones called it in his official 1957 biography of Freud, for Freud continued to use cocaine regularly, almost daily, not just occasionally, for some 15 years. Crews details Freud’s early experiments with the substance, and documents his disastrous attempt to help ease his best friend Fleischl’s withdrawal from morphine addiction by means of injections of cocaine. Meant as a kindness, it became the opposite, as Freud ignored every sign that it was not working and was blatantly harming his friend instead. Later, Freud dishonestly claimed to have cured Fleischl, when in fact his friend tragically deteriorated while undergoing Freud’s treatment, and finally died in great pain with two addictions instead of one: morphine and cocaine. The details of what happened to Fleischl are gruesome to read, and Crews sees Freud’s tenacious clinging to a pet theory and ignoring any evidence to the contrary, no matter how devastating, as characteristic of him throughout his life from then on.

As Freud wrote Martha while recommending it to her, he used cocaine to alleviate his many physical and emotional symptoms, which ranged from headaches, stomach aches, and sciatica to recurring depressions and intermittent “bad moods” punctuated by periods of elation. It consoled him for his loneliness in Paris while studying with Charcot, and gave him the self-confidence that he mostly lacked at this time. Most importantly for the creation of his psychoanalysis, he used it to overcome his writer’s block. Hence he was “under the influence” while he was thinking, writing, and creating the theories of psychoanalysis. Crews develops the intriguing notion that Freud had a “cocaine self” that permitted him to misrepresent and exaggerate the flimsy evidence he did have for his theories—and to manufacture evidence when none existed. Freud as a student had been a “studious, ambitious, and philosophically reflective young man, trained in rigorous inductivism by distinguished researchers,” as Crews acknowledges. But in the early 1880s he changed into someone so arrogant and overweeningly ambitious and grandiose, so absolutely and unaccountably convinced of his theory of the sexual etiology of hysteria that he didn’t hesitate to stoop to dishonesty and fraud to try to prove it.

Psychoanalysis is not only pseudoscience (as most philosophers of science agree, though for different reasons), but “the queen of pseudosciences”.

Cocaine is notorious for inducing feelings of supreme self-confidence, elation, and grandiosity in the user, to the point that facts and reality no longer matter. It also heightens sexual feelings and fantasies and is often used as an aphrodisiac for that reason, as Freud was well aware, using it for that purpose himself. (More than once in his letters we find Freud telling Martha that he feels like a “sexual giant.”) And Crews argues that Freud’s cocaine use also explains his exaggerated focus on sexuality as the ultimate cause of all neuroses.

Freud’s theory at the time, in brief, was that sexual seduction (molestation) in childhood, usually by fathers, which was “repressed,” i.e. not consciously remembered, was the “invariable,” “only,” and “exclusive” cause of all hysteria—in fact, of “all the neuroses”—as he announced in a paper he gave to a group of his peers in 1896. In that paper he presented as evidence 13 cases that he said he had successfully cured. No matter that the group’s chairman, Richard von Krafft-Ebing, called Freud’s theory “a scientific fairy tale”—Freud rejected this judgment as due to his being an “ass” and a conventional prude—surely hard to believe of someone like Krafft-Ebing, the foremost expert on pedophilia in the world at that time, and a man from whom Freud actually took a number of ideas (without giving him credit). Even more shockingly, Freud later admitted to his friend, confidant, and collaborator Wilhelm Fliess, a Berlin physician, that these 13 cases didn’t exist at all—he had just made them all up.

Freud believed in what has come to be called his “seduction theory of hysteria” for many years until he famously “changed his mind” about what it was that his patients had “repressed.” Although it is unclear exactly when he officially made this change (it was not until 1909 that he called the Oedipus complex the central complex of the neuroses), he privately confessed to Fliess in 1897 (a few months after presenting his fraudulent paper) that he had not actually been able to “conclude a single case” of analysis so far, i.e. that his treatments had not produced a single cure. In fact, he had not even been able to induce any patient to agree with him and “remember” such abuse consciously, even though he had exercised extreme pressure to get them to do so, including massage, “head pressure,” and drugs to put them in a more suggestible mood when verbal suggestion didn’t work (of course he had attributed this to their “resistance” and “repression”). This change of mind has long been celebrated as the beginning of “true psychoanalysis,” as it placed the cause of hysteria away from the external world and into the internal psychological world of his patients: they were not repressing memories of actual sexual molestation, but rather their own childhood sexual fantasies and desires that they had unconsciously attributed to their fathers. However, Crews shows that Freud had no more evidence for his second theory (in fact, less, as it was empirically not even potentially verifiable) than he did for his first one, and continued to use all the (circular and self-invented) concepts with which he had tried to “prove” the first theory, e.g. “repression,” “free association,” and “resistance,” all of which have meaning only in Freud’s own system.

If his new theory was no more empirically based than his first, how did Freud actually come up with his ideas of the etiology of hysteria? Crews takes his cue from the fact that Freud saw himself (and other family members, especially one of his sisters) as suffering from a hysteria exactly like those of his patients, and that what he represented as his empirically based “science of psychoanalysis” was actually his own—real or imagined—childhood sexual experiences. Crews’ exposition of what in Freud’s biography led him to his theories makes for interesting reading indeed. In the end, Crews demonstrates that Wilhelm Fliess, who at their last actual meeting in 1900 accused Freud of merely reading the contents of his own mind into that of his patients, was right. What this means is that psychoanalysis is based on a case of one—Freud himself. That is, Freud took himself as representative of all people in all times and in all cultures—surely a supremely grandiose, narcissistic—and preposterous—idea.

This is just a thin slice of what else there is in this riveting and rewarding book. One chapter is devoted to Freud’s rather unsuccessful stay in Paris in the winter of 1885–86 observing Charcot’s treatment of hysterics at Paris’ famous Salpêtrière. Freud idealized Charcot, and never questioned the obvious artificiality of Charcot’s sexualized “theater of hysteria” that entertained the aristocratic audiences he invited to watch it, although others there at the same time as Freud saw through the charade, correctly seeing Charcot’s use of hypnosis as an extreme form of suggestion. Instead, he took over Charcot’s theory of the origin of hysteria wholesale. Charcot’s theory of hysteria died with Charcot in 1893, since by then it had become obvious that the great doctor had gone astray in his enthusiastic use of hypnotism. But Freud took no notice and elaborated Charcot’s method of using hypnosis on his patients after he returned to Vienna—with no success, as Crews recounts in sometimes hair-raising detail. The case of Bertha Pappenheim, considered to be the foundational case of psychoanalysis, is paradigmatic of the gulf between the reality of her treatment and its later reporting. Although she was Breuer’s patient from 1880–1882, Freud collaborated with him throughout the case, and later referred to this particular case, “Anna O.,” more than to any of his own. This supposedly “successful cure” showing how hysterical symptoms could be cured by cathartic “talking” was a complete failure instead. After two years and a thousand hours of therapy (!) by Breuer, Pappenheim was worse, not better. And all the while she was supposedly being cured of her symptoms by talking freely until she found their point of origin (the famous “chimney sweeping” that Freud took from her and later called “free association”), Bertha was being given large quantities of mind-altering drugs such as chloral hydrate (a “hypnotic” chemical that today is often used as a “date rape” drug) and morphine—drugs whose side effects and withdrawal symptoms in turn were often misinterpreted by the two as the very “hysterical symptoms” she needed to have cured (thus giving new meaning to Karl Kraus’ assessment of psychoanalysis as “the disease it purports to cure”). The quantities used on her were such that five weeks after her discharge as “cured” she had to be admitted to a psychiatric hospital, still symptomatic and needing to be detoxed—a truth that Freud and Breuer failed to mention when they wrote the case up 13 years later. And so it went with many other patients, e.g. Anna von Lieben, whom Freud in 1897 called his “principal client” and “instructress,” Ida Bauer (“Dora”), and Emma Eckstein, whose treatment, which almost killed her and resulted in her severe facial disfigurement, qualifies as out-and-out medical malpractice.

Crews’ book takes us up through Freud’s life and ideas until his Interpretation of Dreams in 1900. The idea that dreams have meaning is an old folk belief that is true on the face of it, as people do dream about matters of concern to them, but Freud’s elaborate dream theory gives that belief a pseudoscientific gloss, as he invented a complicated theory of dreams that attributed extraordinary intellectual and linguistic abilities to a supposed “dream censor” in our minds. It is pseudoscientific because—to give just one obvious reason—Freud’s interpretive scheme allowed for a symbol to mean either itself, its opposite (“You say it’s not your mother? Aha! It is your mother”), or anything else at all (displacement), with no way to determine which interpretation is correct, or even likely.

In the later part of his book, Crews also takes up the matter of Freud’s relationship with his sister-in-law Minna, the younger sister of Martha, who came to live with the Freuds in Vienna after the death of her fiancé in the mid-1890s. Crews finds the admittedly circumstantial evidence that she and Freud had a long-term affair too strong to ignore. (And what evidence can there be in something of this sort but circumstantial?) But he does not find this matter merely titillating. Crews argues that Freud’s closeness to Minna had an influence on his elaboration of psychoanalysis. As early as the mid-nineties, she supplanted Wilhelm Fliess as his confidant after that relationship ended in bitterness, since, unlike Martha, she took a lively interest in his work, and helped him write his books and papers. Crews argues that she may have helped turn Freud away from whatever scientific and empirical values he still ostensibly held towards extremes of speculation such as spiritualism and telepathy. (At one point Freud actually claimed that what passed between the analyst’s and his patient’s “unconscious” happened by means of telepathy.)

Freud took himself as representative of all people in all times and in all cultures—surely a supremely grandiose, narcissistic—and preposterous—idea.

If, as Crews convincingly argues, Freud constructed psychoanalysis on a fraudulent foundation, how did he convince so many people of the correctness and the profundity of his theory? And not just his enthralled followers over whom he presided like the guru of a cult, excommunicating all apostates, but also many of us over many subsequent decades? One reason for Freud’s wizardry in doing this, Crews suggests, is Freud’s rhetorical mastery and guile, including his heart-warming protestations of modesty and scientific rigor. Crews, after all originally a literary critic, notes that the narrative structure of Freud’s case histories and his Interpretation of Dreams was that of a suspenseful detective story in the manner of Arthur Conan Doyle, one of Freud’s favorite authors (Freud himself admitted—in supposed surprise—that his case histories read more like short stories). In The Interpretation of Dreams, for example, Freud induces his reader to identify with him and join him in a quest he structured as a difficult and unsparingly honest introspective journey leading to that heart of darkness, the source of all dreams—“the Unconscious.” (Not for nothing was Freud awarded Germany’s Goethe Prize in 1930.) So he was creating “literature,” as some who still idealize the founder today argue and actually see as a virtue, claiming that psychoanalysis is therefore a “hermeneutic” rather than an empirical “science,” one conveniently not subject to empirical rules of evidence. True, “literature” does not have to be attuned to empirical reality—its “truth” lies in a different realm—but a theory of mind and a “science,” especially one applied to the (costly) treatment of suffering patients in the actual world, does.

I can’t help but add that one reason Crews’ book succeeds as a readable and compelling work is the same one to which he attributed a good deal of Freud’s success: he, too, is an eloquent and passionate writer who has here constructed as enthralling a detective story as any of Freud’s. He, too, becomes Sherlock Holmes, the objective, erudite, and supremely rational sleuth who relentlessly tracks down hidden clue after clue—which leads him inexorably to only one possible verdict: Freud is guilty of fraud as charged. Except—and this is the big difference—Crews provides ample documentation and evidence for what he says, whereas Freud only pretended to do so.

Is any of this still important today, when psychoanalysis has effectively been banished from the mainstream professions of psychiatry and psychology for its lack of efficacy? Today even basic Freudian terms such as “hysteria” and “neurosis” have been excised from the DSM, the bible of psychiatric practice. But Crews argues that psychoanalysis still remains culturally pervasive and that Freud’s ideas, though proven pseudoscientific many times, persist and are still capable of exerting harmful influence in the real world. A recent example was the widespread “recovered memory” movement of the 1980s and 1990s that Crews detailed in his eye-opening 1995 book, The Memory Wars. This movement, which still has hangers-on today, destroyed the lives of many families, including those of daughters who accused their fathers of sexually molesting them in childhood on the basis of a therapist’s unearthing of their “repressed memories” of sexual abuse, and jailed a number of falsely accused men. It was obviously a revival of Freud’s original theory of neurosis, in which therapists convinced of this theory subtly or not so subtly—as was clearly demonstrated in later lawsuits—suggested this to their patients, just as Freud himself did.

Crews hopes that by proving that Freud’s creation of psychoanalysis was a fraud he will finally help “close the door” on this “damaging claim.” Will it? Alas, exposure as a fraud does not seem to deter belief: in the U.S. a large fraction of the population still believes in Wakefield’s vaccination-autism theory, and in 2015, anti-vaccination groups in California actually recruited the discredited Wakefield himself to come to their state and head their campaign against the state legislature’s effort to pass a pro-vaccination law protecting school children.

But “the still small voice of reason”—to quote Freud himself in another context—will, hopefully, prevail in the end. Anyone who reads Crews’ new book with an open mind will come away thinking that while Freud was indeed a highly imaginative thinker and an accomplished, eloquent writer, he was also a fraud and a huckster, a narcissistic con-man of overwhelming ambition, hungry equally for fame and fortune, who succeeded by means of deceptive propaganda and rhetoric in being the “conquistador” that he longed to be. But at the end of the royal road to Freud’s Unconscious there is finally only the Wizard of Oz.

About the Author

Dr. Margret Schaefer received a Ph.D. in English at UC Berkeley, and has taught at UC Berkeley, San Francisco State, and the University of Illinois at Chicago. She is a cultural and literary critic, journalist, and translator, and has written on issues in psychology and medical history as well as on Oscar Wilde, Kleist, Kafka, and Arthur Schnitzler. Recently she translated and published three volumes of Schnitzler’s fiction and two of his plays, which were produced in New York and in Berkeley.

Categories: Critical Thinking, Skeptic

Skeptoid #597: The Wisdom of the Future

Skeptoid Feed - Mon, 11/13/2017 - 4:00pm
Skeptoid corrects a round of past errors, that they might become the wisdom of the future.
Categories: Critical Thinking, Skeptic

Why We Should Be Concerned About Artificial Superintelligence

Skeptic.com feed - Wed, 11/08/2017 - 12:00am

The human brain isn’t magic; nor are the problem-solving abilities our brains possess. They are, however, still poorly understood. If there’s nothing magical about our brains or essential about the carbon atoms that make them up, then we can imagine eventually building machines that possess all the same cognitive abilities we do. Despite the recent advances in the field of artificial intelligence, it is still unclear how we might achieve this feat, how many pieces of the puzzle are still missing, and what the consequences might be when we do. There are, I will argue, good reasons to be concerned about AI.

The Capabilities Challenge

While we lack a robust and general theory of intelligence of the kind that would tell us how to build intelligence from scratch, we aren’t completely in the dark. We can still make some predictions, especially if we focus on the consequences of capabilities instead of their construction. If we define intelligence as the general ability to figure out solutions to a variety of problems or identify good policies for achieving a variety of goals, then we can reason about the impacts that more intelligent systems could have, without relying too much on the implementation details of those systems.

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

Many different lines of evidence and argument all point in this direction; I’ll briefly mention just one here, dealing with the brain’s status as an evolved artifact. Human intelligence has been optimized to deal with specific constraints, like passing the head through the birth canal and calorie conservation, whereas artificial intelligence will operate under different constraints that are likely to allow for much larger and faster minds. A digital brain can be many orders of magnitude larger than a human brain, and can be run many orders of magnitude faster.

All else being equal, we should expect these differences to enable (much) greater problem-solving ability by machines. Simply improving on human working memory all on its own could enable some amazing feats. Examples like arithmetic and the game Go confirm that machines can reach superhuman levels of competency in narrower domains, and that this competence level often follows swiftly after human-par performance is achieved.

The Alignment Challenge

If and when we do develop general-purpose AI, or artificial general intelligence (AGI), what are the likely implications for society? Human intelligence is ultimately responsible for human innovation in all walks of life, so machines that could dramatically accelerate our rate of scientific and technological progress hold out the prospect of incredible growth from this engine of prosperity.

Our ability to reap these gains, however, depends on our ability to design AGI systems that are not only good at solving problems, but oriented toward the right set of problems. A highly capable, highly general problem-solving machine would function like an agent in its own right, autonomously pursuing whatever goals (or answering whatever questions, proposing whatever plans, etc.) are represented in its design. If we build our machines with subtly incorrect goals (or questions, or problem statements), then the same general problem-solving ability that makes AGI a uniquely valuable ally may make it a uniquely risky adversary.

Why an adversary? I’m not assuming that AI systems will resemble humans in their motivations or thought processes. They won’t necessarily be sentient (unless this turns out to be required for high intelligence), and they probably won’t share human motivations like aggression or a lust for power.

There do, however, seem to be a number of economic incentives pushing toward the development of ever-more-capable AI systems granted ever-greater autonomy to pursue their assigned objectives. The better the system is at decision-making, the more one gains from removing humans from the loop, and the larger the push towards autonomy. (See, for example, this article on why tool AIs want to be agent AIs.) There are also many systems in which having no human in the loop leads to better standardization and lower risk of corruption, such as assigning a limited supply of organs to patients. As our systems become smarter, human oversight is likely to become more difficult and costly; past a certain level, it may not even be possible, as the complexity of the policies or inventions an AGI system devises surpasses our ability to analyze their likely consequences.

AI systems are likely to lack human motivations such as aggression, but they are also likely to lack the human motivations of empathy, fairness, and respect. Their decision criteria will simply be whatever goals we design them to have; and if we misspecify these goals even in small ways, then it is likely that the resultant goals will not only diverge from our own, but actively conflict with them.

The basic reason to expect conflict (assuming we fail to perfectly specify our goals) is that it appears to be a technically difficult problem to specify goals that aren’t open-ended and ambitious; and sufficiently capable pursuit of sufficiently open-ended goals implies that strategies such as “acquire as many resources as possible” will be highly ranked by whatever criteria the machine uses to make decisions.

Why do ambitious goals imply “greedy” resource acquisition? Because physical and computational resources are broadly helpful for getting things done, and are limited in supply. This tension naturally puts different agents with ambitious goals in conflict, as human history attests—except in cases where the agents in question value each other’s welfare enough to wish to help one another, or are at similar enough capability levels to benefit more from trade than from resorting to force. AI raises the prospect that we may build systems with “alien” motivations that don’t overlap with any human goal, while superintelligence raises the prospect of unprecedentedly large capability differences.

Even a simple question-answering system poses more or less the same risks on those fronts as an autonomous agent in the world, if the question-answering system is “ambitious” in the relevant way. It’s one thing to say (in English) “we want you to answer this question about a proposed power plant design in a reasonable, common-sense way, and not build in any covert subsystems that would make the power plant dangerous;” it’s quite another thing to actually specify this goal in code, or to hand-code patches for the thousand other loopholes a sufficiently capable AI system might find in the task we’ve specified for it.

If we build a system to “just answer questions,” we need to find some way to specify a very non-ambitious version of that goal. If not, we risk building a system with incentives to seize control and maximize the number of questions it receives, to maximize the approval ratings it receives from users, or otherwise to maximize some quantity that correlates with good performance in training data but is likely to come uncorrelated in the real world.

Why, then, does it look difficult to specify non-ambitious goals? Because our standard mathematical framework for decision-making—expected utility maximization—is built around ambitious, open-ended goals. When we try to model a limited goal (for example, “just put a single strawberry on a plate and then stop, without having a big impact on the world”), expected utility maximization is a poor fit. It’s always possible to drive the expected utility higher by devising ever-more-ingenious ways to increase the probability of success; and if your machine is smarter than you are, and all it cares about is this success criterion you’ve given it, then “crazy”-sounding ideas like “seize the world’s computing resources and run millions of simulations of possible ways I might be wrong about whether the strawberry is on the plate, just in case” will be highly ranked by this supposedly “unambitious” goal.
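
To make this concrete, here is a minimal sketch in Python. Everything in it is invented for illustration; the candidate policies and their success probabilities are hypothetical, not drawn from any real system:

    # A toy expected-utility maximizer for the strawberry task. Utility is 1
    # if the goal is judged satisfied and 0 otherwise, so EU = P(success).
    policies = {
        "place strawberry on plate": 0.990,
        "place strawberry, then double-check the plate": 0.9990,
        "seize all compute, simulate every failure mode": 0.999999,
    }

    # An unbounded maximizer ranks purely by expected utility, so the "crazy"
    # strategy wins: there is always a little more probability to squeeze out.
    print(max(policies, key=policies.get))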

Researchers are considering a number of different ideas for addressing this problem, and we’ve seen some progress over the last couple of years, but it remains largely unsolved and under-studied. We could consider adding a penalty term to any policies the system comes up with that have a big impact on the world—but defining “impact” in a useful way turns out to be a very difficult problem.
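
As a sketch of the penalty-term idea, with a made-up impact table standing in for the part nobody knows how to define:

    # An impact-penalized objective: score = P(success) - LAMBDA * impact.
    # LAMBDA and the IMPACT table are hypothetical placeholders; a robust
    # impact measure is the open problem.
    LAMBDA = 0.5

    IMPACT = {
        "place strawberry on plate": 0.01,
        "seize all compute, simulate every failure mode": 0.99,
    }

    def penalized_score(policy, p_success):
        return p_success - LAMBDA * IMPACT[policy]

    print(penalized_score("place strawberry on plate", 0.990))  # 0.985
    print(penalized_score("seize all compute, simulate every failure mode",
                          0.999999))  # ~0.505

With a good impact measure, the extreme policy now loses; with a poor one, the loophole simply moves elsewhere.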

One could try to design systems to only “mildly” pursue their goals, such as stopping the search for ever-better policies once a policy that hits a certain expected utility threshold is found. But systems of this kind, called “satisficers,” turn out to run into some difficult obstacles of their own. Most obviously, naïve attempts at building a satisficer may give the system incentives to write and run the code for a highly capable non-satisficing sub-agent, since deploying a maximizing sub-agent can be a highly effective way to satisfice.
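
A minimal sketch of such a satisficer, again with hypothetical policies and numbers, shows the loophole:

    # A naive satisficer: accept the first policy whose expected utility
    # clears a fixed threshold, then stop searching.
    THRESHOLD = 0.95

    def satisfice(candidates):
        for policy, expected_utility in candidates:
            if expected_utility >= THRESHOLD:
                return policy  # "good enough" -- stop here
        return None

    # Delegating to a maximizer can itself clear the threshold, so
    # satisficing doesn't rule out maximizing behavior downstream.
    candidates = [
        ("ask the operators how to proceed", 0.90),
        ("write and run a maximizing sub-agent", 0.99),
    ]
    print(satisfice(candidates))  # -> "write and run a maximizing sub-agent"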

For a summary of these and other technical obstacles to building superintelligent but “unambitious” machines, see Taylor et al.’s article “Alignment for Advanced Machine Learning Systems”.

Alignment Through Value Learning

Why can’t we just build ambitious machines that share our values?

Ambition in itself is no vice. If we can successfully instill everything we want into the system, then there’s no need to fear open-ended maximization behavior, because the scary edge-case scenarios we’re worried about will be things the AI system itself knows to worry about too. Similarly, we won’t need to worry about an aligned AI with sufficient foresight modifying itself to be unaligned, or creating unaligned descendants, because it will realize that doing so would go against its values.

The difficulty is that human goals are complex, varied, and situation-dependent. Coding them all by hand is a non-starter. (And no, Asimov’s three laws of robotics are not a plausible design proposal for real-world AI systems. Many of Asimov’s own stories explored how the laws failed, and in any case they were there mainly as plot devices!)

What we need, then, would seem to be some formal specification of a process for learning human values over time. This task has itself raised a number of surprisingly deep technical challenges for AI researchers.

Many modern AI systems, for example, are trained using reinforcement learning. A reinforcement learning system builds a model of how the world works through exploration and feedback rewards, trying to collect as much reward as it can. One might think that we could just keep using these systems as capabilities ratchet past the human level, rewarding AGI systems for behaviors we like and punishing them for behaviors we dislike, much like raising a human child.
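
For concreteness, here is roughly what the core of such a learner looks like, as a heavily simplified tabular Q-learning sketch (a standard textbook scheme, not any particular production system):

    import random

    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration
    q = {}  # (state, action) -> estimated long-run reward

    def act(state, actions):
        if random.random() < EPSILON:
            return random.choice(actions)  # explore occasionally
        return max(actions, key=lambda a: q.get((state, a), 0.0))  # else exploit

    def update(state, action, reward, next_state, actions):
        # Move the estimate toward the received reward plus the best the
        # agent expects to do afterward. Note that "reward" is just a
        # number on a channel: the agent optimizes the signal, nothing else.
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)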

This plan runs into several crippling problems, however. I’ll discuss two: defining the right reward channel, and ambiguous training data.

The end goal we actually want to encourage through value learning is for the trainee to want the trainer to be satisfied, and we hope to teach this by linking the trainer’s satisfaction to some reward signal. In dog training, this is giving a treat; for a reinforcement learning system, it might be pressing a reward button. The reinforcement learner, however, has not actually been designed to satisfy the trainer, or to promote what the trainer really wants; it has simply been built to optimize how often it receives a reward. At low capability levels, this is best done by cooperating with the trainer. At higher capability levels, if the system could use force to seize control of the button and give itself rewards, then solutions of that form would be rated much more highly than cooperative ones. For traditional AI training methods to scale safely with capabilities, we need to somehow formally specify the difference between the trainer’s satisfaction and the button being pressed, so that the system will see stealing the button and pressing it directly as irrelevant to its real goal. This is another open research question; we don’t know how to do this yet, even in principle.
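
The gap is easy to state in code: the reward function sees only the button, not the trainer’s state of mind. A toy sketch:

    # Both histories below yield reward 1.0; nothing in the learner's
    # objective distinguishes the one we wanted.
    def reward_signal(button_pressed):
        return 1.0 if button_pressed else 0.0

    # History we intend: the system helps; the satisfied trainer presses the button.
    r_intended = reward_signal(button_pressed=True)

    # History we want excluded: the system seizes the button and presses it itself.
    r_wireheaded = reward_signal(button_pressed=True)

    assert r_intended == r_wireheaded  # identical, as far as the objective can tell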

We want the system to learn general rules that hold across many contexts. In practice, however, we can only give and receive specific examples in narrow contexts. Imagine training a system to classify photos of everyday objects and animals; when presented with a photo of a cat, it confidently asserts that the photo is of a cat. But what happens when you show it a cartoon drawing of a cat? Whether or not the cartoon is a “cat” depends on the definition we’re using: it is a cat in some senses, but not in others. Since both concepts of “cat” agree that a photo of a cat qualifies, just looking at photos of cats won’t help the system learn which rule we really have in mind. And to predict all the ways that training data might under-specify the rules we have in mind, we would seem to need superhuman foresight about all the complex edge cases that might arise during a real-world system’s deployment.
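
A small sketch of this under-determination, with entirely made-up data: two different rules fit the training set perfectly, then disagree on the first input the training set never covered.

    training_set = [("photo of a tabby", "cat"), ("photo of a beagle", "dog")]

    def rule_a(image):  # "cat" means any depiction of a cat
        return "cat" if "tabby" in image or "cartoon cat" in image else "dog"

    def rule_b(image):  # "cat" means a photograph of a real cat
        return "cat" if "tabby" in image else "dog"

    # Both rules score 100% on the training data...
    assert all(rule_a(x) == y and rule_b(x) == y for x, y in training_set)

    # ...and then diverge off-distribution.
    print(rule_a("cartoon cat"), rule_b("cartoon cat"))  # -> cat dog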

While it seems likely that some sort of childhood or apprenticeship process will be necessary, our experience with humans, who were honed by evolution to cooperate in human tribes, is liable to make us underestimate the practical difficulty of rearing a non-human intelligence. And trying to build a “human-like” AI system without first fully understanding what makes humans tick could make the problem worse: the system may still be quite inhuman under the hood, while its superficial resemblance to human behavior encourages us to anthropomorphize it and assume it will always behave in human-like ways.

For more details on these research directions within AI, curious readers can check out Amodei et al.’s “Concrete Problems in AI Safety”, along with the Taylor et al. paper above.

The Big Picture

At this point, I’ve laid out my case for why I think superintelligent AGI is likely to be developed in the coming decades, and I’ve discussed some early technical research directions that seem important for using it well. The prospect that researchers can do useful work now to improve the long-term reliability of AI systems is a key practical reason why AI risk is worth discussing today. The goal is not to wring our hands about hypothetical hazards, but to calmly assess their probability (if only heuristically) and actually work to resolve the hazards that seem sufficiently likely.

A reasonable question at this point is whether the heuristics and argument styles I’ve used above to try to predict a notional technology, general-purpose AI, are likely to be effective. One might worry, for example—as Michael Shermer does in this issue of Skeptic—that the scenario I’ve described, however superficially plausible, is ultimately a conjunction of a number of independent claims.

A basic tenet of probability theory is that a conjunction can be no more likely than any of its individual parts: the claim “Linda is a feminist bank teller” cannot be more likely than the claim “Linda is a feminist” or the claim “Linda is a bank teller,” as Amos Tversky and Daniel Kahneman showed in their now-famous experiment on this cognitive bias. The same holds here: if any of the links above is wrong, the entire chain fails.

A quirk of human psychology is that corroborative details can make a story feel likelier by making it more vivid and easier to visualize. If I suppose that the U.S. and Russia might break off diplomatic relations in the next five years, this might seem improbable; if I instead suppose that over the next five years the U.S. might shoot down a Russian plane over Syria and that this will lead the countries to break off diplomatic relations, the story might seem more likely than the first one, because it has an explicit causal link. And indeed, studies show that when two groups are randomly assigned one or the other claim in isolation, people generally assign a higher probability to the latter. Yet the latter story is necessarily less likely—or at least no more likely—because it contains an additional (potentially wrong) claim.

I’ve been careful in my argument so far to make claims not about pathways, which paint a misleadingly detailed picture, but about destinations. Destinations are disjunctive: many independent paths can lead to the same destination, so the destination is at least as likely as any one of its constituent paths. Artificial general intelligence might be reached because we come up with better algorithms on blackboards, or because hardware growth continues, or because neuroimaging advances allow us to better copy and modify various complicated operations in human brains, or by a number of other paths. If one of those pathways turns out to be impossible or impractical, this doesn’t mean we can’t reach the destination, though it may affect our timelines and the exact capabilities and alignment prospects of the system. Where I’ve mentioned pathways, it’s been to help articulate why I think the relevant destinations are reachable; the outlined paths aren’t essential.
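
A short numeric illustration, with made-up probabilities, of why chained stories get weaker while multi-path destinations get stronger:

    # A story that chains three claims is no likelier than any one claim
    # (and, if the claims are independent, equals their product).
    claims = [0.8, 0.7, 0.6]
    p_story = 1.0
    for p in claims:
        p_story *= p  # 0.336, below even the weakest single claim (0.6)

    # A destination reachable by three independent paths is likelier than
    # any single path: it fails only if every path fails.
    paths = [0.3, 0.25, 0.2]
    p_none = 1.0
    for p in paths:
        p_none *= (1.0 - p)
    p_destination = 1.0 - p_none  # 0.58, above the best single path (0.3)

    print(round(p_story, 3), round(p_destination, 3))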

This also applies to alignment. Regardless of the particular purposes we put AI systems to, if they strongly surpass human intelligence, we’re likely to run into many of the same difficulties with ensuring that they’re learning the right goals, as opposed to learning a close approximation of our goal that will eventually diverge from what we want. And for any number of misspecified goals highly capable AI systems might end up with, resource constraints are likely to create an adversarial relationship between the system and its operators.

To avoid inadvertently building a powerful adversary, and to leverage the many potential benefits of AI for the common good, we will need to find some way to constrain AGI to pursue limited goals or to employ limited resources; or we will need to find extremely reliable ways to instill AGI systems with our goals. In practice, we will surely need both, along with a number of other techniques and hacks for driving down risk to acceptable levels.

Why Work On This Now?

Suppose that I’ve convinced you that AGI alignment is a difficult and important problem. Why work on it now?

One reason is uncertainty. We don’t know whether it will take a short or long time to invent AGI, so we should prepare for short horizons as well as long ones. And just as we don’t know what work remains to make AGI, we don’t know what work remains to align AGI. This alignment problem, as it is called, may turn out to be more difficult than expected, and the sooner we start, the more slack we have. If it instead proves unexpectedly easy, we can race ahead faster on capability development once we’re confident we can use AGI systems well.

On the other hand, starting work early means that we know less about what AGI will look like, and our safety work is correspondingly less informed. The research problems outlined above, however, seem fairly general: they’re likely to be applicable to a wide variety of possible designs. Once we have exhausted the low-hanging fruit and run out of obvious problems to tackle, the cost-benefit comparison here may shift.

Another reason to prioritize early alignment work is that AI safety may help shape capabilities research in critical respects.

One way to think about this is technical debt, a programming term for the development work that becomes necessary later because a cheap and easy approach was used instead of the right one. One might imagine a trajectory where we increase AI capabilities as rapidly as possible, reach some threshold capability level where there is a discontinuous increase in the dangers (e.g., strong self-improvement capabilities), and then halt all AI development, focusing entirely on ensuring that the system in question is aligned before continuing. This approach, however, runs into the same challenges as designing a system first for functionality and only later going back to “add in” security. Systems that aren’t built for high security at the outset generally can’t be made highly secure (at reasonable cost and effort) by tacking on security features much later.

As an example, consider how strings were implemented in the C programming language: as bare character arrays terminated by a null byte, with no stored length and no bounds checking. Developers chose the easier, cheaper representation instead of a more secure one, leading to countless buffer overflow vulnerabilities that were painful to patch in systems written in C. Figuring out the sort of architecture a system needs and then building with that architecture is much more reliable than building an architecture and hoping it can later be modified to serve another purpose. We might find that the only way to build an alignable AI is to start over with a radically different architecture.

Consider three fields that can be thought of as normal fields under conditions of unusual stress:

  • Computer security is like computer programming and mathematics, except that it also has to deal with the stresses imposed by intelligent adversaries. Adversaries can zero in on weaknesses that would only come up occasionally by chance, making ordinary “default” levels of exploitability highly costly in security-critical contexts. This is a major reason why computer security is famously difficult: you don’t just have to be clear enough for the compiler to understand; you have to be airtight.
  • Rocket science is like materials science, chemistry, and mechanical engineering, except that it requires correct operation under immense pressures and temperatures on short timescales. Again, this means that small defects can cause catastrophic problems, as tremendous amounts of energy that are supposed to be carefully channeled end up misdirected.
  • Space probes that we send on exploratory missions are like regular satellites, except that their distance from Earth and velocity put them out of physical reach. With satellites, we can sometimes physically access the system and make repairs. This is far more difficult for distant space probes, and often impossible in practice. If we discover a software bug, we can send a patch to a probe—but only if the antenna is still receiving signals and the software that accepts and applies patches is still working. If not, the probe is now an inert brick hurtling away from Earth.

Loosely speaking, the reason AGI alignment looks difficult is that it shares core features with the above three disciplines.

  • Because AGI will be applying intelligence to solve problems, it will also be applying intelligence to find shortcuts to the solution. Sometimes the shortcut helps the system find unexpectedly good solutions; sometimes it helps the system find unexpectedly bad ones, as when our intended goal was imperfectly specified. As with computer security, the difficulty we run into is that our goals and safety measures need to be robust to adversarial behavior. We can in principle build non-adversarial systems (e.g., through value learning or by formalizing limited-scope goals), and this should be the goal of AI researchers; but there’s no such thing as perfect code, and any flaw in our code opens up the risk of creating an adversary.
  • More generally, because AGI has the potential to be much smarter than the people and systems we’re used to, and to discover technological solutions far beyond our current capabilities, safety measures we create for subhuman or human-level AI systems are likely to break down as these capabilities dramatically increase the “pressure” and “temperature” the system must endure. For practical purposes, there are important qualitative differences between a system that’s smart enough to write decent code and one that isn’t; between one that’s smart enough to model its operators’ intentions and one that isn’t; between one that’s a competent biochemist and one that isn’t. This means the nature of progress in AI makes it very difficult to get safety guarantees that scale up from weaker systems to smarter ones. Just as safety measures for aircraft may not scale to spacecraft, safety measures for low-capability AI systems operating in narrow domains are unlikely to scale to general AI.
  • Finally, because we’re developing machines that are much smarter than we are, we can’t rely on after-the-fact patches or shutdown buttons to ensure good outcomes. Loss-of-control scenarios can be catastrophic and unrecoverable. Minimally, to effectively suspend a superintelligent system and make repairs, the research community first has to solve a succession of open problems. We need a stronger technical understanding of how to design systems that are docile enough to accept patches and shutdown operations, or that have carefully restricted ambitions or capabilities. Work needs to begin early exactly because so much of the work operates as a prerequisite for safely making further safety improvements to highly capable AI systems.


This looks like a hard problem. The problem of building AGI in the first place, of course, also looks hard. We don’t know nearly enough about either problem to say which is more difficult, or exactly how work on one might help inform work on the other. There is currently far more work going into advancing capabilities than advancing safety and alignment, however; and the costs of underestimating the alignment challenge far exceed the costs of underestimating the capabilities challenge. For that reason, this should probably be a more mainstream priority, particularly for AI researchers who think that the field has a very real chance of succeeding in its goal of developing general and adaptive machine intelligence.

About the Author

Matthew Graves is a staff writer at the Machine Intelligence Research Institute in Berkeley, CA. Previously, he worked as a data scientist, using machine learning techniques to solve industrial problems. He holds a master’s degree in Operations Research from the University of Texas at Austin.

Categories: Critical Thinking, Skeptic

eSkeptic for November 8, 2017

Skeptic.com feed - Wed, 11/08/2017 - 12:00am

In this week’s eSkeptic:

SKEPTIC EXCLUSIVE FILM CLIP Bill Nye: Science Guy (a new documentary)

Bill Nye is a man on a mission: to stop the spread of anti-scientific thinking across the world. The former star of the popular kids’ show Bill Nye The Science Guy is now the CEO of The Planetary Society, an organization founded by Bill’s mentor Carl Sagan, where he’s launching a solar-propelled spacecraft into the cosmos and advocating for the importance of science, research, and discovery in public life. With intimate and exclusive access — as well as plenty of wonder and whimsy — this behind-the-scenes portrait of Nye follows him as he takes off his Science Guy lab coat and takes on those who deny climate change, evolution, and a science-based world view. The film features Bill Nye, Neil deGrasse Tyson, Ann Druyan, and many others.

Below, you can watch an Exclusive Clip from the film in which Bill Nye has a few words with Ken Ham — founder of the Creation Museum in Petersburg, Kentucky, which promotes a pseudoscientific, young Earth creationist explanation of the origin of the Universe based on a literal interpretation of the Genesis creation narrative in the Bible.


A NEW STORY! How Phil Zuckerman Became a Card-Carrying Skeptic

As we announced a few weeks ago in eSkeptic, we asked several friends to tell us about those “aha!” moments that led to their becoming skeptical thinkers. As promised, here is another one of their incredible stories on YouTube. Enjoy!

Phil Zuckerman is a professor of sociology and secular studies at Pitzer College in Claremont, California, and he is a card-carrying (and corn cob pipe gnawing) skeptic. He is the author of several books, including: Living the Secular Life (2015), and Society Without God (2008).

TELL US YOUR STORY!

Tell us your story and become a card-carrying skeptic! Thank you for being a part of our first 25 years. We look forward to seeing you over the next 25. —SKEPTIC

Become a Card-Carrying Skeptic

Artificially intelligent systems might end up far more intelligent than any human. In this week’s eSkeptic, Matthew Graves warns that the same general problem-solving ability that makes artificial superintelligence a uniquely valuable ally may make it a uniquely risky adversary. This article appeared in Skeptic magazine 22.2 (2017).

Why We Should Be Concerned About Artificial Superintelligence

by Matthew Graves

The human brain isn’t magic; nor are the problem-solving abilities our brains possess. They are, however, still poorly understood. If there’s nothing magical about our brains or essential about the carbon atoms that make them up, then we can imagine eventually building machines that possess all the same cognitive abilities we do. Despite the recent advances in the field of artificial intelligence, it is still unclear how we might achieve this feat, how many pieces of the puzzle are still missing, and what the consequences might be when we do. There are, I will argue, good reasons to be concerned about AI.

The Capabilities Challenge

While we lack a robust and general theory of intelligence of the kind that would tell us how to build intelligence from scratch, we aren’t completely in the dark. We can still make some predictions, especially if we focus on the consequences of capabilities instead of their construction. If we define intelligence as the general ability to figure out solutions to a variety of problems or identify good policies for achieving a variety of goals, then we can reason about the impacts that more intelligent systems could have, without relying too much on the implementation details of those systems.

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI. […]

Continue reading

2018 | IRELAND | JULY 15–AUGUST 2 One of the best geology tours we’ve ever offered: an epic 19-day tour of the Emerald Isle!

Ireland’s famed scenic landscape owes its breathtaking terrain to a dramatic 1.75-billion-year history of continental collisions, volcanoes, and glacial assault. Join the Skeptics Society for a 19-day immersive tour of the deep history of the Emerald Isle, while experiencing the music, hospitality, and verdant beauty that make Ireland one of the world’s top travel destinations.

For complete details about accommodation, airfare, and tour pricing, please download the detailed information and registration form or click the green button below to read the itinerary, and see photos of some of the amazing sites we will see.

Get complete details

Download registration form

Categories: Critical Thinking, Skeptic

Skeptoid #596: How to Assess a Documentary

Skeptoid Feed - Mon, 11/06/2017 - 4:00pm
Some tips to assess whether a documentary is good science or just propaganda.
Categories: Critical Thinking, Skeptic
