News Feeds

Skeptoid #1033: Skeptifying BBC's Uncanny

Skeptoid Feed - Tue, 03/24/2026 - 2:00am

Supplying some much-needed skepticism to an episode of the BBC podcast Uncanny.

Categories: Critical Thinking, Skeptic

Physicists just turned glass into a powerful quantum security device

Computers and Math from Science Daily Feed - Tue, 03/24/2026 - 12:43am
Scientists have turned simple glass into a powerful quantum communication device that could safeguard data against future quantum attacks. The chip combines stability, speed, and versatility—handling both ultra-secure encryption and record-breaking random number generation in one compact system.
Categories: Science

New light trap design supercharges atom-thin semiconductors

Computers and Math from Science Daily Feed - Tue, 03/24/2026 - 12:25am
Scientists have found a clever way to supercharge ultra-thin semiconductors by reshaping the space beneath them rather than altering the material itself. By placing a single-atom-thick layer of tungsten disulfide over tiny air cavities carved into a crystal, they created miniature “light traps” that dramatically boost brightness and optical effects—up to 20 times stronger emission and 25 times stronger nonlinear signals. These hollow structures, called Mie voids, concentrate light exactly where the material sits, overcoming a major limitation of atomically thin devices.
Categories: Science

A surprising foam discovery could change everyday products

Matter and energy from Science Daily Feed - Mon, 03/23/2026 - 8:44pm
Foams have long baffled scientists because liquid drains from them far sooner than theory predicts. New research shows the reason: the bubbles don’t stay put—they rearrange, opening pathways for liquid to escape. The key factor is the pressure needed to shift bubbles, not just push liquid through them. This insight reshapes how we understand foams and could improve everyday products.
Categories: Science

Astronomers reconstruct a galaxy’s 12-billion-year history using chemical clues

Space and time from Science Daily Feed - Mon, 03/23/2026 - 7:46pm
For the first time, scientists have reconstructed the full history of a galaxy outside the Milky Way using chemical clues. By analyzing oxygen across NGC 1365 and comparing it with simulations, they traced its growth over 12 billion years. The findings show how its core formed early while its outer regions were built through repeated mergers. This new approach could transform how astronomers study galaxy evolution.
Categories: Science

Giant Craters May Reveal if Psyche is a Lost Planetary Core

Universe Today Feed - Mon, 03/23/2026 - 7:18pm

When we think of asteroids, we almost immediately picture giant rocks bouncing around like the iconic chase scene in The Empire Strikes Back, and we often hear that they are remnants from the birth of the solar system. But the asteroids that comprise the Main Asteroid Belt of our solar system are not only spread far apart from each other; they are also not all made of rock. One asteroid roughly the size of Massachusetts, called 16 Psyche, is made of metal, which planetary scientists hypothesize could be the remnant core of a protoplanet that never grew into a full-fledged planet. But how did such a unique asteroid form?

Categories: Science

Parabolic Flight Experiments Delve into Planetary Formation

Universe Today Feed - Mon, 03/23/2026 - 4:46pm

What happens in a protoplanetary disk to create planetesimals around a star? We know the general story -- the material begins to clump together and eventually grows from dust grains to rocky bodies capable of sticking together to make planets. But how does that dust begin the aggregation journey? That's what a research team from Switzerland wanted to know. So, they did experiments aboard parabolic microgravity flights to find an answer.

Categories: Science

Rubin Alert Leads to First Follow-Up Observations and Detection of Four Supernovae

Universe Today Feed - Mon, 03/23/2026 - 2:59pm

NSF NOIRLab has completed end-to-end runs of its ecosystem for following up on alerts from NSF–DOE Vera C. Rubin Observatory. The runs demonstrated how multiple NOIRLab-developed software tools, plus a network of telescopes around the globe, will enable quick follow-up observations of the countless transient objects that Rubin will uncover during its ten-year survey.

Categories: Science

Family Resemblance: Why Intelligent Extraterrestrials May Look Strangely Familiar

Skeptic.com feed - Mon, 03/23/2026 - 1:03pm

There’s a kind of storytelling tariff that sci-fi thrillers pay: the alien has to be visually—and physiologically—“other.” The more it resembles us, the less it feels like an invasion, and the less it sells popcorn. So, filmmakers crank the dials. Alien is the perfect example: a creature engineered for maximum dread—extra jaws, parasitic reproduction, and even acid for blood, a brilliant idea because it turns injury into a terrifying weapon. Great cinema. Bad biology.

The alien as a monster

Constraints, Not Monsters

But biology isn’t a special-effects studio. Evolution doesn’t get to pick any chemistry, any anatomy, any habitat, and call it a day. It’s boxed in by constraints: what molecules can build durable, information-rich structures; what solvents allow complex reactions; what temperatures keep chemistry running without shredding it; what gravity and atmosphere allow efficient movement; what energy sources are stable long enough for complexity to accumulate. And here’s the part science fiction usually skips: only a limited range of environments in the universe are likely to be hospitable to the long, fragile process that produces intelligent life at all. If that’s true, then the number of viable “starting conditions” shrinks—and the range of plausible outcomes shrinks with it. In other words, the universe may not be a boundless zoo of monster anatomies. It may be a narrower set of workable habitats repeatedly producing a narrower set of workable body plans—ones that, at a distance, start to look surprisingly familiar.

Carbon is the first and biggest constraint. If you want a system capable of building large, stable molecules that can both store information and do chemistry, carbon is the standout: it forms strong chains and rings, bonds flexibly with common elements (H, O, N, S, P), and supports the kind of combinatorial complexity life seems to require.1 Silicon gets invoked in sci-fi because it sits under carbon on the periodic table, but careful technical reviews conclude that silicon biochemistry faces steep hurdles compared with carbon—especially when you ask for the chemical diversity, solvent compatibility, and long-term stability you’d need for an evolving biosphere rather than a one-off laboratory curiosity.2 Carbon, by contrast, isn’t just “what we have”—it’s the best scaffolding for life that the periodic table has to offer.

And carbon chemistry, at least as far as we understand it, almost certainly needs a liquid reaction medium. You can think of a solvent as evolution’s workshop: it transports reactants, buffers temperature swings, enables compartmentalization (membranes), and keeps chemistry running long enough for complexity to accumulate. NASA astrobiology treatments make the key point crisply: water is not merely “wet background”; its physical and chemical properties are unusually helpful for life-like chemistry.3 That doesn’t mean life must use water—serious work examines alternatives—but it does mean that when you ask where complex life is most likely to arise, you’re pulled toward a relatively narrow band of worlds with long-lived liquids, stable energy gradients, and conditions that support molecular complexity rather than constantly tearing it down.4

Carbon, by contrast, isn’t just “what we have”—it’s the best scaffolding for life that the periodic table has to offer.

Once you accept those constraints, the “anything goes” alien starts to look less likely. A restricted set of workable environments tends to funnel evolution toward a restricted set of workable solutions—especially once organisms get big, mobile, and cognitively complex. From there, the argument becomes a cascade: mobility favors efficient body plans; efficient body plans often converge on bilateral symmetry for streamlined, directional movement; and bilateral movers tend to concentrate sensors and processing at the leading end—cephalization—because that’s the part that encounters the world first.5 

Finally, any lineage that’s going to build technology needs not just brains, but some way to manipulate the world with precision—one or more appendages capable of fine control. And Earth at least shows that “high intelligence” is not a one-time miracle: complex brains and sophisticated cognition have evolved multiple times in very different lineages, which is exactly what you’d expect if evolution keeps rediscovering similar solutions to similar problems.6

It Takes a Long Time

For most of Earth’s history, life was microbial. There are abundant signs of life by around 3.5 billion years ago, with plausible evidence reaching back toward 3.8 billion years ago and possibly earlier, meaning single-celled organisms dominated the planet for the overwhelming majority of its existence.7 Complex multicellular life—and especially animals with nervous systems—arrives strikingly late by comparison: the Ediacaran record pushes recognizable multicellular complexity to roughly 600 million years ago, and the Cambrian explosion (around 540 million years ago) is where diverse animal body plans and their organ systems, including nervous systems, become conspicuous in the fossil record.8 Even “brains,” in any familiar sense, are a comparatively recent evolutionary product of animal history.
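The proportions in that timeline are easy to check with a few lines of arithmetic. This is a back-of-the-envelope sketch using the rounded ages quoted above, not a data analysis:

```python
# Back-of-the-envelope timeline arithmetic; ages are the rounded
# approximations from the text, in billions of years before present.
FIRST_ABUNDANT_LIFE = 3.5   # abundant microbial evidence
EDIACARAN = 0.6             # recognizable multicellular complexity
CAMBRIAN = 0.54             # diverse animal body plans appear

# Span during which life, once established, stayed microbial-only:
microbial_only = FIRST_ABUNDANT_LIFE - EDIACARAN
fraction = microbial_only / FIRST_ABUNDANT_LIFE

print(f"Microbial-only span: {microbial_only:.1f} Gyr "
      f"({fraction:.0%} of life's history so far)")
```

On these numbers, life was microbial-only for well over four-fifths of its history, which is the sense in which complex animals are a late arrival.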

And yet, despite billions of years of evolutionary “experimentation” across oceans, lakes, microbial mats, reefs, forests, and ice ages, technological intelligence—the kind that builds radios, telescopes, and spacecraft—emerged only once, and only under a narrow set of ecological circumstances. That doesn’t prove intelligence is unique in the universe, but it strongly suggests that it’s constrained: not every habitable world is equally likely to produce it, and not every habitable environment on a given world is equally likely to nurture it. In other words, the universe may contain places where life is possible, but far fewer where the long chain of transitions to technology can reliably occur.

Evolution is repeatedly solving the same engineering problems under similar constraints.

Long before our ancestors spent most of their time on the ground, their life was shaped in trees—an environment that rewards three-dimensional vision, fine depth perception, color discrimination, and exquisitely controlled hands, arms, and digits for climbing, grasping, and precise manipulation. When some of those primates began living in woodland–savanna mosaics, bipedal walking freed the already dexterous hands for carrying and tool use, effectively repurposing “arboreal skills” into a terrestrial, cumulative technology pathway. That transition—tree-built perception and manipulation deployed on open ground—may be a rare ecological combination, and it helps explain why large brains can evolve in many settings, yet only once has intelligence ratcheted up into an industrial civilization.9

If only a limited set of planetary and ecological conditions can support the long chain from chemistry to cognition, then evolution is repeatedly solving the same engineering problems under similar constraints. And once you narrow the environments where intelligence is even plausible, you also narrow the range of bodies that can thrive there. That doesn’t point to identical aliens—but it does make wildly un-Earthlike “monster designs” (think War of the Worlds with Tom Cruise) less likely, and a recognizable family resemblance—convergent, familiar motifs—more likely.

How the Ratchet Turns

As soon as hominins became more committed to life in woodland–savanna mosaics, a new class of problems moved to center stage: social problems. On open ground, survival often depends less on a single clever trick than on navigating alliances, rivalries, status, reciprocity, and betrayal inside a group—and sometimes between groups. That framing goes back to classic arguments that intellect evolved largely to manage social life.10 It’s also the logic behind the “social brain” tradition: as group life becomes more demanding, selection favors minds better at tracking relationships, intentions, and reputations at scale.11 

In that world, intelligence isn’t just tool-use; it’s the ability to detect cheaters and liars, anticipate others’ moves, and calibrate cooperation—exactly the kind of psychological machinery psychologists Leda Cosmides and John Tooby argued would be favored in repeated social exchange.12 And once you have minds built for social exchange, you have the psychological preconditions for reciprocal altruism—the willingness to help now in expectation of help later—which is one of the foundations of large-scale human cooperation that builds civilizations.13, 14 And when resources are patchy and competition is real, intergroup conflict can further raise the stakes, selecting for coordination, cohesion, and strategic behavior within coalitions. 

Intelligence exists in many lineages; an industrial pathway likely requires intelligence plus a controllable, high-energy lever and a dry-work environment where tools can persist, accumulate, and improve.

Language doesn’t merely label the world; it lets individuals coordinate plans, negotiate alliances, transmit know-how, and build reputations—turning individual cognition into group cognition.15 Most importantly, humans crossed a threshold into cumulative culture: shared intentions, teaching, and high-fidelity social learning allow useful innovations to persist and improve across generations, creating the technological “ratchet” that other smart animals rarely achieve. Humans are distinctive because our know-how doesn’t reset each generation; it accumulates—tools beget better tools in a cultural “ratchet.”16 But brains are expensive tissue, so any species that evolves them must solve an energy-budget problem—through diet quality, provisioning, and other tradeoffs that reliably pay the bill.17, 18

This is where fire and cooking matter: cooking increases the calories you can extract from food and reduces the time and gut investment needed to process it, freeing energy for a larger brain.19 Just as important, controlled fire is a gateway technology—warmth, protection, nighttime sociality, and eventually high-temperature chemistry.20 Intelligence exists in many lineages; an industrial pathway likely requires intelligence plus a controllable, high-energy lever and a dry-work environment where tools can persist, accumulate, and improve.

A skeptic might object that oceans already produce impressive intelligence—dolphins and whales, for example—so why didn’t technology take off there? The point isn’t that marine brains can’t be sophisticated; it’s that an industrial pathway needs more than cognition: it needs persistent tool chains and a controllable high-energy lever.

The decisive step wasn’t just smarter brains—it was solving the problem of memory across generations.

And that points to a subtle filter. Oceans can produce impressive cognition—on Earth in the form of cetaceans and, perhaps, octopus—but water is hostile to the industrial ratchet: fire is hard to control, durable toolkits are harder to store and transport, and metallurgy is effectively off the table.21 On land—especially in variable, resource-patchy habitats—portable tools, teaching, and cooperative planning can compound. That’s why the story is less “savanna created intelligence” than “a particular ecological combination made technology cumulative.”

The decisive step wasn’t just smarter brains—it was solving the problem of memory across generations. Most animals, even very intelligent ones, learn largely within a lifetime. When the individual dies, much of that hard-won knowledge dies with it. Humans broke that bottleneck. We became a species whose best ideas can outlive their inventors, because we can store information—in other minds, in shared practices, and eventually in artifacts and symbols—and then transmit it with unusually high fidelity. That’s the ratchet: innovation that doesn’t evaporate.

This requires more than imitation. It requires teaching, joint attention, and shared goals—what some researchers call “shared intentionality”—so that skills can be transferred efficiently and improvements can accumulate rather than drift. Once a lineage crosses that threshold, technology starts to behave less like a set of clever tricks and more like a compounding system.22 

Language then acts as a compression algorithm for culture. It turns “watch me do this” into “here’s the rule,” making know-how portable, scalable, and teachable to people who never saw the original problem. It also enables coordination at scale—plans, roles, promises, reputations—so groups can build things no individual could.23, 24

And on land, cultural memory can be externalized. Tools can be cached, improved, standardized, and inherited. Eventually information migrates into marks, symbols, and writing—literal memory outside the brain. At that point, progress accelerates, because each generation starts not from scratch, but from a platform built by those before it.

So, What Might ET Look Like?

What does all of this imply about the appearance of extraterrestrial intelligence? Not that aliens will be “human,” as if evolution everywhere is destined to reproduce our exact anatomy. Evolution is too contingent for that. But it’s not completely random. If intelligence that builds technology is constrained by chemistry, physics, and ecology – and if similar constraints repeatedly force similar solutions—then truly alien intelligence may come with a surprisingly familiar set of design motifs.

Humans broke that bottleneck. We became a species whose best ideas can outlive their inventors, because we can store information … and then transmit it with unusually high fidelity.

Start with the big one: directional movement in a complex world. Once organisms become large, mobile, and behaviorally flexible, the “engineering problem” of getting around efficiently tends to favor bilateral symmetry—a front and a back, a left and a right—because it streamlines movement and organizes the body around a direction of travel.25 Bilateral movers also tend toward cephalization: concentrating senses and information processing at the leading end, because that’s the part that meets the environment first.26 In plain terms, if something is navigating the world and making decisions quickly, it’s likely to be built around a “front end” where sensing and control are concentrated (and, less glamorously, but no less practically, a “waste end” where, well, waste products are dispensed).

Then comes the key requirement for technology: manipulation. A brain can model the world all day, but technology requires a high-bandwidth interface between mind and matter: appendages capable of precise, repeatable control. On Earth, that role is played by hands and digits—originally honed for climbing and grasping in trees—later repurposed for shaping objects, carrying toolkits, and building cumulative tool traditions. This doesn’t mandate five fingers, or even “arms” in the human sense. But it strongly suggests that technological intelligence will be paired with one or more manipulators—structures evolved for fine control, not just locomotion.

Finally, technological intelligence requires culture that compounds. If each generation must rediscover the basics from scratch, there is no sustained trajectory toward industry. The transition to cumulative culture—high-fidelity social learning, teaching, shared intentions, and the ability to preserve and improve innovations—creates the technological ratchet.27, 28, 29 Once a lineage crosses that threshold, intelligence becomes more than cleverness; it becomes a system that accumulates, and that accumulation eventually externalizes into tools, structures, symbols, and records. In other words: even if the bodies vary, a technological species will likely have something analogous to language, teaching, and external memory—because without those, the ratchet stalls.30, 31

Put those pieces together and a rough “family resemblance” emerges: not humans exactly, of course (there’s contingency again), but mobile, bilateral organisms with front-loaded sensing/processing, manipulators, and a cultural transmission system that lets knowledge outlive individuals. That is the opposite of the cinematic monster. It’s less a nightmare creature and more a familiar engineering solution—built under unfamiliar skies.

Caveats and Conclusions

A skeptic’s first objection is an obvious one, namely that Earth is a sample size of one. Any story about extraterrestrial biology risks generalizing from the particular to the universal. That caution is warranted. Our lineage’s specific path—arboreal heritage, bipedalism, the woodland–savanna mosaic—may be historically contingent. Different worlds could produce intelligence by different routes (although it is not clear how), and even on Earth, high cognition appears in multiple lineages.32 So, the claim here should be modest: not “ET must look like us,” but “constraints bias evolution toward a limited menu of workable solutions.”

The Grey is a popular alien figure because it’s a humanoid distilled to a few cues: bilateral symmetry, a head-dominated body plan, and exaggerated eyes. Those broad motifs actually align with what a constraint-based view would predict. But the specific “Grey” is also a cultural icon with a traceable modern history—especially after Whitley Strieber’s Communion (1987) and its widely reproduced cover image. So, it’s better understood as a modern cultural meme than as a biologically derived prediction.

The “Grey” alien.

A second objection is this: what if technology doesn’t require fire and metallurgy? Perhaps some species develop a different high-energy lever or a different materials pathway. That’s possible. But the broader point still holds: industrial-scale technology requires some means of harnessing scalable energy and building durable tool chains. Whatever substitutes exist, they still must operate under the same physical logic: persistent artifacts, repeatable processes, and the ability to store and transmit complex know-how over long spans of time. 

For example, we know Earth’s atmosphere didn’t always permit fire because oxygen arrived late—and we can see that transition written in the rocks. For much of the Archean, oceans carried abundant dissolved ferrous iron (Fe²⁺); when oxygen produced by early photosynthesizers (e.g., cyanobacteria, once known as blue-green algae) began reaching surface waters, it oxidized Fe²⁺ to insoluble ferric iron (Fe³⁺) that precipitated in vast banded iron formations (BIFs), essentially recording oxygen’s first sustained appearance as it was “soaked up” by iron sinks. Around 2.4 to 2.3 billion years ago—during the Great Oxidation Event—atmospheric O₂ rose from trace levels to much more significant amounts, while BIF deposition eventually waned as the ocean’s iron sink diminished and broader oxygenation progressed.
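One commonly cited net reaction for that iron sink—a textbook simplification, since real BIF deposition involved a range of iron minerals and pathways—is:

```latex
% Oxidation of dissolved ferrous iron by photosynthetic oxygen,
% precipitating as ferric hydroxide that settles into BIFs:
4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O}
  \;\longrightarrow\; 4\,\mathrm{Fe(OH)_3}\!\downarrow + 8\,\mathrm{H^+}
```

The point of the stoichiometry is simply that each molecule of free oxygen was consumed by four ferrous ions, which is why atmospheric O₂ could not accumulate until the dissolved-iron reservoir was largely exhausted.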

That history matters for our argument because recognizable, combustion-driven technology depends not just on brains, but on a planet reaching an oxygen state that reliably supports open-air fire and high-temperature chemistry. This “oxygen bottleneck” means that technospheres are not guaranteed by intelligence alone—they depend on planetary conditions that enable certain kinds of energy use.33

So, the claim is not inevitability, but probability. Constrain the environments, and you constrain the solutions. And that means the wildest designs of monster cinema are not the most realistic expectation. They are the least constrained.
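The “constrain the environments, constrain the solutions” logic is, at bottom, a product of successive filters, in the spirit of the Drake equation. The sketch below uses entirely invented probabilities, purely to show how even modest filters compound; none of the numbers come from the article or from any dataset:

```python
# Hypothetical filter probabilities -- invented for illustration only.
filters = {
    "stable carbon-based chemistry": 0.5,
    "long-lived liquid solvent": 0.3,
    "dry land for persistent tool chains": 0.2,
    "cumulative culture (the 'ratchet')": 0.1,
}

share = 1.0  # share of candidate worlds passing every filter so far
for name, p in filters.items():
    share *= p
    print(f"after {name!r}: {share:.4f}")

# Each filter multiplies down the qualifying share, so the joint
# probability is far smaller than any single factor.
```

Whatever the true values, the multiplicative structure is the argument: a handful of independent constraints, each individually permissive, still leaves only a narrow slice of worlds where the whole chain can run.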

Science fiction thrives on the alien as shock: the creature that breaks every rule and looks like nothing that ever walked, swam, or crawled on Earth. Alien is a masterpiece precisely because it is so unconstrained—a physiology engineered for dread. Great theater. But real evolution does not have that freedom. Biology is boxed in by chemistry, by solvents, by energy budgets, by gravity and materials, by the logic of movement and sensing, and by the requirements of cultural accumulation.

The details will be alien. The motifs may not be.

That’s why the best prediction for extraterrestrial intelligence is not a monster, but a constrained organism that has solved a familiar set of problems in a workable way: a body built for efficient movement, sensors and processing concentrated forward, appendages capable of precise manipulation, and a culture that can store and transmit information across generations so that technology compounds. The details will be alien. The motifs may not be.

If we ever detect a true technosignature—or one day meet its makers—the surprise may not be how strange they are. The surprise may be how recognizable the underlying design logic feels.

Categories: Critical Thinking, Skeptic

This Ancient Star In A Low-Mass Galaxy Is A Precious Find

Universe Today Feed - Mon, 03/23/2026 - 12:35pm

To understand the Universe we see around us today, we have to understand its past. Some hard-to-find ancient stars, called Population II stars, preserve evidence from the ancient Universe. Astronomers finally found one.

Categories: Science

Genetic clues tell the story of Neanderthals' decline

New Scientist Feed - Mon, 03/23/2026 - 12:00pm
The Neanderthal population shrank during a cold spell around 75,000 years ago, and the loss of genetic diversity may have contributed to their eventual extinction
Categories: Science

Warmer ocean is driving the Antarctic sea ice 'regime shift'

New Scientist Feed - Mon, 03/23/2026 - 12:00pm
Since 2016, Antarctic sea ice extent has been declining sharply – now scientists are piecing together how strong winds and warm deep water have played a part in this abrupt transition
Categories: Science

“Little Darlin'”: cover or parody? Or is it two, two, two songs in one?

Why Evolution is True Feed - Mon, 03/23/2026 - 9:30am

Reader Thomas from Marginal REVOLUTION sent me the linked piece, which shows a video of the 1957 doo-wop hit “Little Darlin'” by the Diamonds. If you’re my age, you’ll remember it. I always thought it was a serious song, and apparently so did most Americans, but the first comment says this:

From what I recall, the Diamonds were a conventional vocal group who recorded “Little Darlin'” at the end of a session as a kind of joke song – a parody of the Ink Spots and Mills Brothers. Fortunately for them, the studio musicians joined in with perfectly irresistible beat and the song became a rock and roll hit. Early rock and roll was born as parody as much as anything else, like Elvis’s Jokey rendition of Bill Monroe’s “Blue Moon of Kentucky” turning out to be the founding document of rockabilly.

Really? And then they present the video, which, unlike any music video I know from that era, shows the group clowning around. Have a look at the video for a hit song (more cowbell!). But yes, this is the original song, which went to #2 on the Billboard charts.

To get to the bottom of this, I turned to AI, in particular Grok. Sure enough, the bot pronounced it a parody after scanning what’s published about it.

“Was “Little Darlin'” by the Diamonds a parody song?”

Yes, “Little Darlin'” by The Diamonds (1957) is widely regarded as a stylistic parody/satire of the doo-wop genre—not a lyrical parody like a Weird Al track (the words and melody are the same), but a deliberately exaggerated, campy cover that poked fun at the over-the-top clichés of the style. Here’s the story, straight from music historians:

  • The song was written by 16-year-old Maurice Williams and first recorded in January 1957 by his group The Gladiolas (on the Excello label). Their version was a straight, heartfelt R&B/doo-wop track with a Latin-tinged rhythm, falsetto, spoken bridge, and percussion. It charted modestly (#11 R&B, minor pop crossover).
  • The Diamonds (a Canadian quartet) rushed out their cover just a month later on Mercury Records. It was reportedly a last-minute, end-of-session lark—they had about 15 minutes left, no drummer, so they improvised with castanets. Lead singer Dave Somerville and the group leaned hard into the nonsense syllables (“ai-ya-yi-yai-yai-ya,” “wella-wella,” etc.), hamming it up with theatrical flair. They even performed it in an over-the-top way on TV (like The Steve Allen Show).

AllMusic critic Stephen Thomas Erlewine noted that the vocal phrases “verge on self-parody… which may be why The Diamonds’ version is superior.” Radio historian John Gilliland (on the Pop Chronicles series) explicitly stated that their recording “was in fact a parody of the genre.” Other sources describe it as a “joke song,” “satire of doo-wop,” or “intended as a satire” with “campy vocal delivery.”

Despite (or because of) the tongue-in-cheek approach, it became a monster hit: #2 on the Billboard Hot 100 for eight weeks (behind only Elvis’s “All Shook Up”), the biggest version by far, and now the definitive classic. The original Gladiolas take is respected by R&B fans but far less known.

It’s a classic example of a white cover version (common in the 1950s) that amplified the style for mainstream appeal—here with humor that stuck. Later, Elvis Presley covered the Diamonds’ arrangement on his final album Moody Blue, and Joan Baez even recorded an actual lyrical parody.

From Wikipedia, we learn that the hit version was actually a re-recording of a song recorded just a month earlier by a black group, the Gladiolas, one of whose members wrote the song. There’s also a brief note about the parody aspect:

It was written by a 16-year-old Maurice Williams with both melody and doo-wop accompaniment strongly emphasizing a calypso rhythm. First recorded in January 1957 by Williams’ group the Gladiolas, it was quickly released as a single on Excello Records, a small swamp blues label owned by Nashville record man Earnie Young, who was responsible for creating the song’s Latin feel, naming the group and ensuring Williams would retain the song’s publishing.

The recording, inspired by a book Williams was writing, and originally called “Little Darlin’/ The Beginning,” is noted for its trademark doo-wop falsetto by Fred Mangum and its spoken bridge by Williams (“My Darlin’ I need you …”). The Gladiolas were from Lancaster, South Carolina, where they had been together since high school.

The Diamonds’ successful cover version followed a month later. The Diamonds were a Canadian pop group that evolved into a doo-wop group. The Diamonds’ version reached number two in sales for eight weeks on the Billboard Hot 100. Billboard ranked this version as the No. 3 song for 1957. In Canada, the song was No. 11 on the premiere CHUM Chart, May 27, 1957.

The Diamonds’ version is generally considered superior. AllMusic critic Stephen Thomas Erlewine argues that the Diamonds “Little Darlin'” is an unusual example of a cover being better than the original:

[T]he Diamonds’ take remained the bigger hit, and over the years, the better-known version. Normally, this would have been an outrage, but there’s a reason why the Diamonds’ version has sustained its popularity over the years: it’s a better, fiercer recording. Both versions are good, even if they’re a little silly, because it’s a good doo wop song, giving each member of the quartet a lot to do. At times, the vocal phrases verge on self-parody — the “ai-ya-yi-yai-yai-ya”s or the “wella-wella”s — which may be why The Diamonds’ version is superior.

On the Pop Chronicles, host John Gilliland claimed that their version was in fact a parody of the genre. Nonetheless, “Little Darlin'” (primarily the Diamonds’ version, but to some extent the Gladiolas’ version) remains an all-time rock ‘n roll R&B classic.

Here’s the original version by The Gladiolas, and sure enough, it’s pretty much like the parody (or cover), including the talking interlude. It was not a hit. Is this cultural appropriation?

Categories: Science

The simple questions cracking the hard problem of consciousness

New Scientist Feed - Mon, 03/23/2026 - 9:00am
Do we all see the same red? Or feel joy and sadness alike? Mapping how our inner experiences relate to one another could finally reveal how physical processes in the brain give rise to consciousness
Categories: Science

Black Hole Mergers Test the Limits of General Relativity

Universe Today Feed - Mon, 03/23/2026 - 7:47am

We can now use the gravitational waves of black holes to test general relativity and look for evidence of alternative theories of gravity.

Categories: Science

“The right war is being waged by the wrong people, for the wrong reasons”: Sam Harris on the conflict with Iran

Why Evolution is True Feed - Mon, 03/23/2026 - 7:15am

Sam Harris is widely demonized by the know-nothings, and I’m not quite sure why. Yes, he discussed the possibility of torture in certain circumstances, but this was a philosophical rumination which is perfectly justifiable if you have a utilitarian or consequentialist view of ethics, and in fact a similar discussion appears in the Stanford Encyclopedia of Philosophy. His views are speculative and nuanced, and he thinks torture by the American government should remain illegal. Nevertheless, over and over again I’ve seen him damned for advocating willy-nilly torture, as if he were some kind of latter-day Josef Mengele. And that is what I’ve seen people emphasize, though Sam has also discussed the possibility of American preemptive nuclear strikes, the nature of Islam (he’s been deemed an “Islamophobe”), and profiling groups of people for airline security. That has been more fodder for going after him, though people always neglect the nuance.

I myself have criticized his position on “objective morality,” but that would never make me dismiss Sam, as his writings are always measured and thoughtful—and largely philosophical. They promote thought, and that is, after all, the goal of philosophy. But I guess when an avowed atheist dips into philosophy, he’s almost automatically damned.  So be it; I will continue to read him.

Yesterday Sam posted his views about our war with Iran, and his overall take is summarized in the title of this post. You can read his discussion by clicking on the link below, or by reading the piece archived here.

I think the moral confusion about the war, which I see as the conflict between feeling it’s a just war and despairing at our administration’s rationalizations for it and at Trump’s repeated lies about what’s going on and what he intends to do, is the cause of the increasing anomie many of us feel about our engagement, as well as of the widespread opposition to the war among Americans. As Sam says, “To think clearly about this war, we need to hold two sets of ideas in our minds at the same moment: the Iranian regime is evil, and the Trump administration is dangerously amoral, corrupt, and incompetent.”

I’ll give a few quotes from Sam (indented). Text that is flush left is mine, as are the bold headings:

Why it’s a morally just war

The Islamic Republic has tormented its own people for forty-seven years. It has hanged dissidents from cranes, crushed peaceful protests with live ammunition, tortured political prisoners, and funded jihadist proxies throughout the Middle East and beyond. When Salman Rushdie was nearly killed by a knife-wielding fanatic, after living for thirty-three years under the shadow of the Ayatollah’s imbecilic curse, this was a direct export from the theocracy in Tehran—which has grown increasingly unpopular with the Iranian people. The protests of 2025 and 2026 reminded the world, yet again, of the Iranian majority’s desperation to be free. The assassination of Ayatollah Khamenei on the first day of this war was greeted with celebrations in Tehran, Isfahan, and among the Iranian diaspora in Los Angeles and New York. Whatever else one thinks about the decision to strike Iran, it is obscene to pretend that there was no moral or pragmatic argument for doing so.

Note that a credible figure for civilians killed by the Iranian regime for protesting is 30,000+.  The regime apprehends political prisoners for protesting the war, including the families of protestors, and tries to assassinate critics of the regime even when they are overseas (e.g., Masih Alinejad).  There’s no doubt that the Iranian people, overall, want to acquire the freedoms they had before the Revolution. All this, and the determination of Iran to export terrorism throughout the Middle East, makes this a just war.

The lame criticism of this war. (Sam sees this as a failure to recognize moral evil (or recognizing it and ignoring it), as he did when he wrote about the Gaza War)

And yet, most critics of the war speak as though Iran was a peaceful nation attacked by foreign aggressors. Notions of “sovereignty” and “international law” are invoked as though the Islamic Republic were Sweden. Almost no prominent critic of this war has anything cogent to say about the decades of misery the mullahs have inflicted on their own citizens, the threat that Iran’s network of proxy militias poses to the entire region, or the inconceivability of establishing deterrence once a jihadist death cult acquires nuclear weapons. If your opposition to this war cannot acknowledge the evil we are facing, your opposition is not morally sane.

I’ll add to this something that Sam takes up later: some of the motivation for criticizing this war comes from the fact that we’re allies with Israel, seen as a settler-colonialist and evil apartheid regime, and of course a Jewish state.

The justified criticism of this war.

But there is a serious case to be made against this war. One might believe, along with Damon Linker, that the risk of Iran becoming a failed state—a larger, more dangerous version of post-invasion Iraq—far outweighs the benefits of toppling the regime today. One could point to the apparent absence of a credible plan for what comes next, or to the fact that three weeks of bombardment haven’t produced anything resembling the “unconditional surrender” that Trump once demanded. Iranian state media has reported that the conflict has already killed more than 1,500 people, including over 200 children. Whatever the actual numbers, there can be no doubt that the humanitarian toll is real and mounting. These are intelligible concerns, and they deserve to be taken seriously.

But this is not the argument that most opponents of this war are making. They are making a much lazier set of claims—and often treating any American use of force as inherently unjust. Most critics are simply ignoring the question of what the world should do about a jihadist regime that has spent decades aspiring to commit genocide, views any peace as a temporary interval in which to gain the upper hand, and happens to be on the verge of acquiring nuclear weapons.

Admittedly, it is difficult to separate the ethics of destroying the Iranian regime, and seeking to liberate the Iranian people, from the staggering incompetence and callousness of the reality-television stars who now sit atop our vast machinery of war and wield it for no clearly stated purpose.

Sam then recounts a lot of Trump’s lies (or confusions), including his back-and-forth on whether we’ve destroyed the regime and its nuclear program, whether or not he’ll demand “unconditional surrender”, his failure to include Congress in his deliberations, and his failure to prepare the American people for the conflict.  This leads to feelings of both anomie and impotence:

Congressional Democrats now face the agonizing reality that they cannot stop a war that is already underway—the question is whether they can impose constitutional oversight on an administration that started it without asking anyone’s permission.

How the war increased antisemitism

I think we all know that antisemitism is now a tenacious termite in Western democracies. Though not ubiquitous, it is growing in America, in Canada, in Europe, and in Australia. I’m not sure whether the conflict with Iran has actually increased it, or simply given antisemites an excuse to parade their views more openly. Sam blames Trump, whose incompetence has created an explanatory vacuum that’s been filled with Jew-hating.

Finally, all of this bluster, confusion, dishonesty, and strategic incoherence has been a gift to the world’s antisemites. In the three weeks since the war began, antisemitic incidents worldwide have spiked by 34 percent. The resignation of Joe Kent, former director of the National Counterterrorism Center—who blamed “high-ranking Israeli officials and influential members of the American media” for encouraging the conflict—gave mainstream respectability to the conspiracy theory that most of our wars are fought at the behest of perfidious Jews. Far-right figures have seized on the joint U.S.-Israeli operation to push the narrative that American soldiers are dying for a foreign power. And the antiwar Left, rather than reject this framing, has amplified it. While the similarities between the far Right and the far Left may be overstated, for the Jews, “horseshoe theory” is now a daily reality.

And Trump’s chaotic messaging bears much of the blame. When a president cannot clearly articulate why his country has gone to war, he leaves a vacuum that conspiracy theories will fill. A competent leader would have made the case for this war on its merits—the destruction of a terroristic theocracy’s military infrastructure, the elimination of its nuclear program, the liberation of 88 million people from a regime that jails and tortures women for the crime of uncovering their hair. Instead, Trump has offered a jumble of contradictions: Iran’s military is destroyed, but we need more troops; the war is almost over, but we may yet put boots on the ground; Iran wants to talk, but there is no one to talk to. This moral and logical void has become a vessel for antisemitic paranoia on both the Right and the Left. Needless to say, it doesn’t help that Trump and his family have been accepting personal gifts and payments from the Gulf states—a plane from Qatar, secretive investments from the UAE—while American forces protect those same regimes. The man is simultaneously waging a war of choice and running what appears to be an extortion racket.

The resultant ambivalence.

I think the first paragraph below, which involves us believing things that seem at odds with each other, explains at least my ambivalence about the war. A nutjob is in charge, someone who continually contradicts himself and acts on impulse, and yet he is waging a just war.

It is possible, even necessary, to believe all of the following at once: the Iranian regime is a monstrosity that should be destroyed; the Iranian people deserve to be free; the risk that this war will end in catastrophe is real, largely because of the character of those who are waging it; and the rising tide of antisemitism that this conflict has unleashed is yet another moral emergency that people on both sides of the debate have a responsibility to confront.

The tragedy of this moment is that the right war is being waged by the wrong people, for the wrong reasons. And the opponents of the war, rather than making this case, have mostly opted for blinkered pacifism and conspiracy theories, while refusing to grapple with the manifest evil of the Iranian regime. Of course, the Iranian people, caught between their own tyrants, a reckless American president, and his feckless critics, will pay the heaviest price.

Categories: Science

Another Bold Battery Claim

neurologicablog Feed - Mon, 03/23/2026 - 7:03am

In the decades before the Wright brothers’ historic 1903 flight at Kitty Hawk there were many claims of powered heavier-than-air flying machines. There were also many false sightings of “airships,” amounting to a form of mass delusion. But the false claims and false sightings do not change the fact that the technology for powered flight was right on the cusp, and that the Wright brothers crossed that threshold in 1903, leading ultimately to the massive industry we have today. This is not surprising. There is often a sense, in the industry and spreading to the public, that the technological pieces are in place for a significant application breakthrough. Today this is more true than ever, with a vibrant industry of tech news, showcases, conferences, blogs, podcasts, etc. I cover plenty of tech news here. It’s interesting to try to glimpse what technology is right around the corner. Any technology that is closely watched and much anticipated is likely to generate lots of premature hype and false claims.

This is definitely true for battery technology. We are arguably in the middle of a massive effort to electrify as much of our industry as possible, especially transportation. Maximizing intermittent renewable sources of energy would also be greatly facilitated by advances in energy storage. Meanwhile electronic devices are becoming increasingly integrated into our daily lives. Advances in battery technology can have a dramatic impact on all these sectors, and battery tech is likely to be a critical technology for the next century. So it’s no surprise that there is a lot of hype surrounding battery tech, some of it legitimate, some of it fake, and some just premature. But this hype does not change the fact that battery technology is rapidly improving and the hype will become reality soon enough (just like the Wright flyer).

When it comes to EV batteries we all have a wish-list of features we would like to see. I now own two EVs, and they are the best cars I have ever owned. At least for my personal situation (I live in an exurb and own my own parking spots), EVs are great, and current battery technology is more than adequate for them. But sure, I live every day with the reality of how advances in battery tech would make EVs even more convenient and useful. I have detailed the wish-list before, but here it is again: increased capacity, in terms of volume but especially weight (specific energy), to decrease the weight while increasing the potential range of EVs; faster charging (with the holy grail being the ability to fully recharge an EV as fast as you can fill a car with gas); long charge-discharge cycle lifespan (longer than the lifespan of the car); usefulness across a wide range of temperatures; stability (does not spontaneously catch fire); and cheapness, which is tied to being made from cheap and abundant elements. This last feature also means that the battery is not dependent on rare elements whose supply line is largely controlled by hostile or conflict-ridden countries.

Making a significant breakthrough in any one of these features is big news. This is why Donut Labs’ claim to have simultaneously improved all of these wish-list features was met with so much skepticism. (I will give a quick update on Donut Labs at the end of this post.) Now we have another bold claim, this one from a US company based in Dallas. Their claim focuses on just one feature of EV batteries, the recharge time; however, they also claim a reduced need for cobalt, which is nice. The company is OMI, which claims to have innovated a new iron-based cathode that allows an EV to recharge from empty to full in 3 minutes. That would be huge – 3 minutes is the holy grail, about as long as it takes to fill a tank of gas. Technically they claim a 20C recharge rate. The “C” is based on a convention, with 1C meaning that a battery can fully charge in 1 hour. So a 20C battery, by definition, would recharge fully in 3 minutes. For reference, most fast-charging EV batteries today are rated at 8-12C, or a 7.5 to 5 minute recharge time. This is already pretty good, and as you can see there is a diminishing return with increased C rating when translated into recharge time. Of note, however, these ratings are under ideal conditions. In the real world we are still looking at 10-12 minute recharge times for the fastest recharging batteries.
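The C-rate arithmetic above is easy to sanity-check: a rating of N C means a full charge takes 1/N hours. A minimal sketch (the function name is mine, not any industry API):

```python
def full_charge_minutes(c_rate: float) -> float:
    """Minutes to charge from empty to full at a constant C-rate.

    By convention, 1C = full charge in 1 hour, so minutes = 60 / C.
    """
    return 60.0 / c_rate

print(full_charge_minutes(1))    # 1C  -> 60.0 minutes
print(full_charge_minutes(20))   # 20C -> 3.0 minutes (OMI's claimed rate)
print(full_charge_minutes(8))    # 8C  -> 7.5 minutes
print(full_charge_minutes(12))   # 12C -> 5.0 minutes
```

The diminishing return is visible in the numbers: going from 8C to 12C saves 2.5 minutes, while going from 12C to 20C saves only 2 more. And again, these are ideal-conditions figures, not real-world charge times.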

To me this is not a big deal at all. Even when I use a charger that requires 20 minutes to go from 20-80% charge, it’s rare I am doing that on the road (only during long trips), and it’s relatively easy to plan that around a pit stop anyway. Go to the restroom, get a snack, and by the time you get back to your car you are done or almost done. Any improvement from there is icing on the cake. Ten to twelve minutes would be fantastic. Three minutes is insane. Keep in mind, 99% of the time I am slow charging my EVs at home. But sure, that occasional time you are driving home late at night and you need a top off to make it home, and you have nothing to do but wait there while your car recharges, faster is definitely better.

So how reliable is this claim from OMI? It looks pretty credible. They are calling the technology LnFP (lithium nano-ferrophosphate). This is a variation on the established LMFP technology, which uses manganese in the cathode. Doping the cathode with manganese allows for faster charging. OMI is not revealing the exact chemistry of their new cathode (industry secrets and all), but will only say that it is nano-structured, hence the “nano.” Nothing there breaks the laws of physics, and this all seems reasonably incremental. But again, prematurely hyping plausible incremental advances, ones that would give a company dominance in an industry, is not uncommon. Claim unlimited free energy and you are just an obvious crank or a fraudster. Claim a plausible incremental advance, and you generate excitement in the industry. But that still leaves the question – did they really achieve this, or are they hyping a lab phenomenon, or are they pulling a “fake it till you make it” maneuver to goose funding?

The broader context here is that OMI is not one of the major players in battery technology, which are investing billions in a global race to push the industry forward and grab market share. They are a small startup, although they have been providing components to large companies like Harley-Davidson. Are we seeing the democratization of battery tech, with spunky small startups leveraging creativity and innovation to challenge the major players? Or is this mostly small startups trying to make a quick score by making bold claims and either attracting big funding or getting snapped up by one of the big boys? OMI claims their battery claims are validated, but I cannot find any independent third-party validation. They also claim they will go into production in 2027. That is the ultimate test – can they mass produce these batteries at a competitive price, and will they actually work as advertised in products?

Speaking of which, two months ago Donut Labs announced to the world a dream solid-state battery with all the wish-list features. Now they are claiming independent testing and validation, but again it is not quite worthy of the hype they are putting out. Finland’s state-owned VTT Technical Research Centre has tested some of its features. It tested the rapid recharge time, revealing a 0-80% charge in 4.5 minutes, with a 5C rating. Testing has also demonstrated that their solid-state battery is not a supercapacitor, which was one of the theories. But that, so far, is it. The 400 Wh/kg specific energy has not been validated, and that is really the main feature. So far we have more of a glimpse than total verification. So I am still withholding ultimate judgement until all the evidence is in, but it still seems sketchy to me. I hope that everyone is wrong, and Donut Labs has really achieved what they claim. But that hope, I think, is the point.
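For context, a partial fast-charge figure can be converted into the average C-rate it implies over that window. A small sketch (the helper name is mine; note that a burst 0-80% window and a sustained full-charge C-rating measure different things, which is my reading of why the two Donut Labs numbers can coexist, since charging typically slows as a battery fills):

```python
def average_c_rate(fraction_charged: float, minutes: float) -> float:
    """Average C-rate implied by adding `fraction_charged` of full
    capacity in `minutes` (1C = full capacity in 60 minutes)."""
    return fraction_charged / (minutes / 60.0)

# VTT's reported figure for the Donut Labs cell: 0-80% in 4.5 minutes
print(round(average_c_rate(0.8, 4.5), 1))  # 10.7 (C, averaged over that window)
```

So the 4.5-minute figure corresponds to roughly a 10.7C average over the fast portion of the charge curve, while the quoted 5C presumably describes a sustained rating; the two are not directly comparable.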

The post Another Bold Battery Claim first appeared on NeuroLogica Blog.

Categories: Skeptic

Readers’ wildlife photos

Why Evolution is True Feed - Mon, 03/23/2026 - 6:15am

Send ’em in if you got ’em.  The photo situation is dire.

But today we have whale photos by reader Ephriam Heller. His captions and IDs are indented, and you can enlarge his photos by clicking on them.

The best whale watching I have experienced is observing gray whales (Eschrichtius robustus) in the San Ignacio Lagoon on the Pacific coast of the Baja peninsula in Mexico. These whales are curious and “friendly,” often swimming up to boats to observe their occupants and even allowing themselves to be touched. Here is an example of an interaction between two species that each appear to exhibit curiosity and intelligence:

This is what a gray whale looks like (Image courtesy of International Whaling Commission):

They engage in numerous photogenic behaviors, such as rolling, riding the surf, waving their flippers and flukes in the air, and spyhopping to observe their surroundings.

The gray whale has longitudinal double blowholes. People claim that they form a heart shape, but I think a heart with this shape needs immediate treatment:

When the sunlight hits their spray just right, one sees “rainblows”:

The gray whale has the most parasites of any whale, carrying up to 180 kg. At birth, babies have no barnacles or sea lice, but quickly acquire them from their mothers. The older the whale, the more barnacles and lice they collect. The whales rub along the seabed and piers to try to rid themselves of the parasites.

The whales carry one species of barnacle and four species of whale lice. The barnacles are Cryptolepas rhachianecti (whale barnacles) which are specific to gray whale hosts (i.e., they rarely occur on any other species), and they die when the whale dies.

There are four species of “whale lice,” which are not true lice (which are insects) but are amphipods in family Cyamidae: Cyamus scammony (the most common), Cyamus kessleri, and Cyamus eschrichtii are all found only on gray whales. Cyamus ceti is found on gray and bowhead whales. These cause minor irritation to healthy whales. Researchers view cyamid coverage and distribution (e.g., heavy clusters near blowhole, mouthline, genital slit) as indicators of stress, nutritional status, and chronic skin disease rather than as a primary cause of these problems.

There are two populations. The larger Eastern North Pacific population migrates along the continental coast between its breeding grounds in Baja, Mexico and its feeding grounds in Alaska. The small Western North Pacific population migrates along the Pacific coast of Asia. Gray whales hold the record for the longest migration of any mammal, with typical round-trip distances of about 20,000 km annually (although this isn’t close to the 70,000 km migration of the arctic tern).

Whales fall into two suborders: baleen (Mysticeti) and toothed (Odontoceti). Gray whales are in Mysticeti and use their baleen to feed on amphipods and plankton on the seafloor. During the six-month summer feeding season, adults consume over 1 ton of food per day. They then fast for the remainder of the year, including the migration and winter birthing/breeding season. They exhibit “handedness,” in that most gray whales feed by scooping up sediments from the seafloor with the right side of their heads, resulting in their right sides having fewer adhering barnacles and sea lice.

They live up to ~70 years. Bigg’s transient killer whales (orcas) kill up to 35% of the calf population annually. Based on scarring, researchers speculate that almost every gray whale has been attacked by orcas. Most attacks occur as the young calves migrate north through Monterey Bay, California and Unimak Pass, Alaska.

The Eastern North Pacific population dropped to ~1,000 individuals around 1885 due to whaling, but has since recovered to ~27,000 in 2015-2016. The Western North Pacific population is tiny, comprising just a few hundred individuals. North Atlantic populations were extirpated (perhaps by whaling at the end of the medieval warm period) on the European coast in the 12th to 14th centuries, and on the American and African Atlantic coasts around the late 17th to early 18th centuries. Remains of gray whales from the time of the Roman empire have been found in the Mediterranean Sea, and they are still rarely seen there in modern times.

The gray whale has a dark slate-gray color and is covered by characteristic gray-white patterns, which are scars left by parasites that drop off in its cold feeding grounds. Individuals can be identified by their pigmentation patterns and their scars. I got this great photo of a whale’s tail; but it was just a fluke:

In case you are the kind of person who is interested in this sort of thing, this is what it looks like when whales mate:

Anyone with a younger brother will recognize this as the “head butt” greeting, a conserved behavior across all mammal species:

And this is the view when you saddle up a gray whale (I use a western saddle):

The eyes of gray whales are unlike the eyes of any other mammal I have seen, with what appear to be tangled filaments. My AI friend assures me that this is not the case and that they do not have any “extra” organs in their eyes: “The ‘tangled filaments’ you’re seeing are structures in the gray whale’s iris and surrounding tissues that become visible because the eye is small, very dark, and strongly three‑dimensional, so you are effectively looking across folded, ridged iris and ciliary tissues rather than through a flat, open pupil as in most mammals you see up close.”

Categories: Science

Mysterious comet disintegration caught by telescope after lucky break

New Scientist Feed - Mon, 03/23/2026 - 6:04am
Catching a comet in the process of falling apart is difficult, but a coincidence let astronomers see one in more detail than ever before using the Hubble Space Telescope – and revealed a mystery
Categories: Science


Subscribe to The Jefferson Center  aggregator