Some people try to feed their dogs the same alternative diet they eat themselves... not necessarily so good for the dog.
Dogs dressed up in bonnets. Diamond-studded iPhone cases shaped like unicorns. Donut-shaped purses. Hello Kitty shoes, credit cards, engine oil, and staplers. My Little Pony capsule hotel rooms. Pikachu parades. Hedgehog cafes. Pink construction trucks plastered with cartoon eyes. Miniature everything. Emojis everywhere. What is going on here?
Top left to right: Astro Boy, Hello Kitty credit card, Hello Kitty backpack, SoftBank’s Pepper robot, Pikachu Parade, Hello Kitty hat, film still from Ponyo by Studio Ghibli

Such merch, and more, is a manifestation of Japan’s kawaii culture of innocence, youthfulness, vulnerability, playfulness, and other childlike qualities. Placed in certain contexts, however, it can also underscore a darker reality—a particular denial of adulthood through a willful indulgence in naïveté, commercialization, and escapism. Kawaii can be joyful and happy, but it is also a way to avoid confronting the realities of life.
The roots of kawaii can be traced back to Japan’s Heian (“peace” or “tranquility”) period (794–1185 CE), a time when aristocrats appreciated delicate and endearing aesthetics in literature, art, and fashion.1 During the Edo period (1603–1868 CE), art and culture began to emphasize aesthetics, beauty, and playfulness.2 Woodblock prints (ukiyo-e) often depicted cute and whimsical characters.3 The modern iteration of kawaii began to take shape during the student protests of the late 1960s,4 particularly against the backdrop of the rigid culture of post-World War II Japan. In acts of defiance against academic authority, university students boycotted lectures and turned to children’s manga—a type of comic or graphic novel—as a critique of traditional educational norms.5
After World War II, Japan experienced significant social and economic changes. The emerging youth culture of the 1960s and 1970s began to embrace Western influences, leading to a blend of traditional Japanese aesthetics with Western pop culture.6 During the economic boom of the 1970s and 1980s, consumer subcultures flourished, and the aesthetic of cuteness found expression in playful handwriting, speech patterns, fashion, products, and themed spaces like cafes and shops. The release of Astro Boy (Tetsuwan Atomu) in 1952, created by Osamu Tezuka, is regarded by scholars as a key moment in the development of kawaii culture.7 The character’s large eyes, innocent look, and adventurous spirit resonated with both children and adults, setting the stage for the rise of other kawaii characters in popular culture. Simultaneously, as Japanese women gained more prominence in the workforce, the “burikko” archetype8—an innocent, childlike woman—became popular. This persona, exuding charm and nonthreatening femininity, was seen as enhancing her desirability in a marriage-centric society.9
Left to right: burikko handwriting, bento box, Kumamon mascot
Another catalyst for kawaii culture was the 1970s emergence of burikko handwriting among teenage girls.10 This playful, childlike, rounded style of writing incorporated hearts, stars, and cartoonish doodles. To the chagrin of educators, it became a symbol of youthful rebellion and a break from rigid societal expectations.
Japanese culture is deeply rooted in tradition, with strict social norms governing behavior and appearance. If you drop something, it’s common to see people rush to retrieve it for you. Even at an empty intersection with no car in sight, a red light will rarely be ignored. Business cards are exchanged with a sense of deference, and social hierarchies are meticulously observed. Conformity is highly valued, while femininity is often dismissed as frivolous. Against this backdrop, the emergence of kawaii can be seen as an act of quiet resistance.
The rise of shōjo (girls’) manga in the 1970s introduced cute characters with large eyes and soft rounded faces with childlike features, popularizing the kawaii aesthetic among young girls.11 Then, in 1974, along came Sanrio’s Hello Kitty,12 commercializing and popularizing kawaii culture beyond Japan’s borders. While it started as a product range for children, it soon became popular with teens and adults alike.
Kawaii characters like Hello Kitty are often depicted in a simplistic style, with oversized eyes and minimal facial expressions. This design invites people to project their own feelings and emotions onto the characters. As a playful touch, Hello Kitty has no mouth—ensuring she’ll never reveal your secrets!
By the 1980s and 1990s, kawaii had permeated stationery, toys, fashion, digital communications, games, and beyond. Franchises like Pokémon, anime series such as Sailor Moon, and the whimsical works of Studio Ghibli exported a sense of childlike wonder and playfulness to audiences across the globe. Even banks and airlines embraced cuteness as a strategy to attract customers, as did major brands like Nissan, Mitsubishi, Sony, and Nintendo. What may have begun as an organic expression of individuality was quickly commodified by industry.
Construction sites, for example, frequently feature barricades shaped like cartoon animals or flowers, softening the visual impact of urban development.13 They also display signs with bowing figures apologizing for any inconvenience. These elements are designed to create a sense of comfort for those passing by. Similarly, government campaigns use mascots like Kumamon,14 a cuddly bear, to promote tourism or public health initiatives. Japanese companies and government agencies use such cute mascots, referred to as Yuru-chara, to create a friendly image and foster a sense of connection. You’ll even find them in otherwise harsh environments like high-security prisons and the Tokyo Metropolitan Police; the Japanese Sewage Association uses them too.15
Kawaii aesthetics have also appeared in high-tech domains. Robots designed for elder care, such as SoftBank’s Pepper,16 often adopt kawaii traits to appear less intimidating and foster emotional connections. In the culinary world, bento boxes featuring elaborately arranged food in cute and delightful shapes have become a creative art form, combining practicality with aesthetic pleasure—and turning ordinary lunches into whimsical and joyful experiences.
Sanrio Puroland (website)

Kawaii hasn’t stayed confined to Japan’s borders. It has become popular in other countries like South Korea and has had a large influence in the West as well. It has become a global representation of Japan, so much so that it helps draw in tourism, particularly to the Harajuku district in Tokyo and theme parks like Sanrio Puroland. In 2008, Hello Kitty was even named Japan’s official tourism ambassador.17
The influence of kawaii extends beyond tourism. Taiwanese airline EVA Air celebrated Hello Kitty’s 40th birthday with a special edition Boeing 777-300ER, featuring Hello Kitty-themed designs, menus, and crew uniforms on its Paris-Taipei route.18 Even the Vatican couldn’t resist the power of cute: In its appeal to younger generations, it introduced Luce, a cheerful young girl with big eyes, blue hair, and a yellow raincoat, as the mascot for the 2025 Jubilee Year and the Vatican’s pavilion at Expo 2025.19
Could anime and kawaii culture become vehicles for Catholicism? Writing for UnHerd, Katherine Dee suggests that Luce represents a global strategy to transcend cultural barriers in ways that traditional symbols, like the rosary, cannot. She points out that while Europe’s Catholic population has been shrinking, the global Catholic community continues to grow by millions.20 But while Luce may bring more attention to the Vatican, can she truly inspire deeper connections to God or spirituality?
All that said, the bigger question remains: Why does anyone find any of this appealing or cute?
One answer comes from the cultural theorist Sianne Ngai, who said that there’s a “surprisingly wide spectrum of feelings, ranging from tenderness to aggression, that we harbor toward ostensibly subordinate and unthreatening commodities.”21 That’s a fancy way of saying that humans find babies cute. The Austrian zoologist and ethologist Konrad Lorenz, who shared the 1973 Nobel Prize in Physiology or Medicine for his research on animal behavior, proposed the “baby schema”22 (or Kindchenschema) to explain how and why certain infantile facial and physical traits are seen as cute. These features include an overly large head, rounded forehead, large eyes, and protruding cheeks.23 Lorenz argued that this is so because such features trigger a biological response within us—a desire to nurture and protect because we view them as proxies for vulnerability. The more such features, the more we are wired to care for those who embody them.24 Simply put, when these traits are projected onto characters or art or products, they promote the same kind of response in us as seeing a baby.
Modern research supports Lorenz’s theory. A 2008 brain imaging study showed that viewing infant faces, but not adult ones, triggered a response in the orbitofrontal cortex linked to reward processing.25 Another brain imaging study conducted at Washington University School of Medicine26 investigated how different levels of “baby schema” in infant faces—characteristics like big eyes and round cheeks—affect brain activity. Researchers discovered that viewing baby-like features activates the nucleus accumbens, a key part of the brain’s reward system responsible for processing pleasure and motivation. This effect was observed in women who had never had children. The researchers concluded that this activation of the brain’s reward system is the neurophysiological mechanism that triggers caregiving behavior.
A very different type of study,27 conducted in 2019, further confirmed that seeing baby-like features triggers a strong emotional reaction. In this case, the reaction is known as “kama muta,” a Sanskrit term that describes the feeling of being deeply moved or touched by love. This sensation is often accompanied by warmth, nostalgia, or even patriotism. The researchers found that videos featuring cute subjects evoked significantly more kama muta than those without such characteristics. Moreover, when the cute subjects were shown “interacting affectionately,” the feeling of kama muta was even stronger compared to when the subjects were not engaging in affectionate behavior.
In 2012, Osaka University professor Hiroshi Nittono led a research study that found that “cuteness” has an impact on observers, increasing their focus and attention.28 It also speaks to our instinct to nurture and protect that which appears vulnerable—which cute things, with their more infantilized traits, do. After all, who doesn’t love Baby Yoda? Perhaps that’s why some of us are so drawn to purchase stuffed dolls of Eeyore—it makes us feel as if we are rescuing him. When we see something particularly cute, many of us feel compelled to buy it. Likewise, it’s possible, at least subconsciously, that those who engage in cosplay around kawaii do so out of a deeper need to feel protected themselves. Research shows that viewing cute images improves moods and is associated with relaxation.29
Kawaii may well be useful in our fast-paced and stressful lives. For starters, when we find objects cute or adorable, we tend to treat them better and give them greater care. There’s also a contagious happiness effect. Indeed, could introducing more kawaii into our environments make people happier? Might it encourage us to care more for each other and our communities? The kawaii aesthetic could even be used in traditionally serious spaces—like a doctor’s waiting room or emergency room—to help reduce anxiety. Instead of staring at a blank ceiling in the dentist’s chair, imagine looking up at a whimsical kawaii mural instead.
Consider also the Tamagotchi digital pet trend of the 1990s. Children were obsessed with taking care of this virtual pet, tending to its needs ranging from food to entertainment. Millions of these “pets” were sold and were highly sought after. There’s something inherently appealing to children about mimicking adult roles, especially when it comes to caregiving. It turns out that children don’t just want to be cared for by their parents—they also seem to have an innate desire to nurture others. This act of caregiving can make them feel capable, empowered, and useful, tapping into a deep sense of responsibility and connection.
At Chuo University in Tokyo, there’s an entire new field of “cute studies” founded by Dr. Joshua Dale, whose book summarizes his research: Irresistible: How Cuteness Wired Our Brains and Changed the World.30 According to Dale, four traditional aesthetic values of Japanese culture contributed to the rise of kawaii: (1) valuing the diminutive, (2) treasuring the transient, (3) a preference for simplicity, and (4) an appreciation of the playful.31 His work emphasizes that kawaii is not just about cuteness; it expresses a deeply rooted cultural philosophy that reflects Japanese views on beauty, life, and emotional expression.
In other words, there’s something about kawaii that goes beyond a style or a trend. It is a reflection of deeper societal values and emotional needs. In a society that has such rigid hierarchies, social structures, decorum, and an intense work culture, kawaii provides a form of escapism—offering a respite from the harsh realities of adulthood and a return to childlike innocence. It is a safe form of vulnerability. Yet, does it also hint at an inability to confront the realities of life?
The “cult of cute” can lead people to seek refuge from responsibility and avoid confronting uncomfortable emotions. By surrounding themselves with cuteness and positivity, they may be trying to shield themselves from darker feelings and worries. In some cases, people even adapt their own personal aesthetics to appear cuter, as this can make them seem more innocent and in need of help—effectively turning cuteness into a protective layer.
Kawaii also perpetuates infantilization, particularly among women who feel pressured to conform to kawaii aesthetics, which often places them in a submissive role. This is especially evident in subgenres like Lolita fashion—a highly detailed, feminine, and elegant style inspired by Victorian and Rococo fashion, but with a modern and whimsical twist. While this style is adopted by many women with the female gaze in mind, the male gaze remains inescapable.
Japanese Lolita fashion

As a result, certain elements of kawaii can sometimes veer into the sexual, both intentionally and as an unintended distortion of innocence. Maid cafes, for example, though not designed to be sexually explicit, often carry sexual undertones that undermine their seemingly innocent and cute appeal. In these cafes, maids wear form-fitting uniforms and play into fantasies of servitude and submission—particularly when customers are addressed as “masters” and flirtatious interactions are encouraged.
It’s important to remember that things that look sweet and cute can also be sinister. The concept of “cute” often evokes feelings of trust, affection, and vulnerability, which can paradoxically make it a powerful tool for manipulation, subversion, and even control. Can kawaii be a Trojan horse?
When used in marketing to sell products, it may seem harmless, but how much of the rational consumer decision-making process does it override? And what evil lurks behind all the sparkle? In America, cuteness manifests itself even more boldly and aggressively. The designer Lisa Frank built an entire empire in the 1980s and 1990s on vibrant neon colors and whimsical artwork: rainbow-colored animals, dolphins, glitter, and unicorns adorning stickers, backpacks, and other merchandise. Her work is closely associated with a sense of nostalgia for millennials who grew up in that era. Yet, as later discovered and recently recalled in the Amazon documentary Glitter and Greed: The Lisa Frank Story, avarice ultimately led to a toxic work environment, poor working conditions, and alleged abuse.
Worse, can kawaii be used to mask authoritarian intentions or erase the memory of serious crimes against humanity?
As Japan has gained prominence in global culture, its World War II and earlier atrocities have been largely overshadowed, causing many to overlook these grave historical events.32 When we think of Japan today, we often think of cultural exports like anime, manga, Sanrio, geishas, and Nintendo. Even though Japan was once an imperial power, today it exercises “soft power” in the sociopolitical sphere. This concept, introduced by American political scientist Joseph Nye,33 refers to influencing others by promoting a nation’s culture and values to make foreign audiences more receptive to its perspectives.
Japan began leveraging this strategy in the 1980s to rehabilitate its tarnished postwar reputation, especially in the face of widespread anti-Japanese sentiment in neighboring Asian nations. Over time, these attitudes shifted as Japan used “kawaii culture” and other forms of pop-culture diplomacy to reshape its image and move beyond its violent, imperialist past.
Kawaii also serves as a way to neutralize our fears by transforming things we might typically find unsettling into endearing and approachable forms—think Casper the Friendly Ghost or Monsters, Inc. This principle extends to emerging technologies, such as robots. Deep down, we harbor anxieties about how technology might impact our lives or what could happen if it begins to operate independently. By designing robots to look cute and friendly, we tend to assuage such fear and discomfort. Embedding frightening concepts with qualities that evoke happiness or safety allows us to navigate the interplay between darkness and light, innocence and danger, in a more approachable way. In essence, it’s a coping mechanism for our primal fears.
An interesting aspect of this is what psychologists call the uncanny valley—a feeling of discomfort that arises when something is almost humanlike, but not quite. Horror filmmakers have exploited this phenomenon by weaponizing cuteness against their audiences with characters like the Gremlins and the doll Chucky. The dissonance between a sweet appearance and sinister intent creates a chilling effect that heightens the horror.
Ultimately, all this speaks to kawaii’s many layers. It is more than an aesthetic; it is a cultural phenomenon that reflects both societal values and emotional needs. Its ability to evoke warmth and innocence can also be a means of emotional manipulation. It can serve as an unassuming guise for darker intentions or meanings. It can be a medium for individual expression, and yet simultaneously it has been commodified and overtaken by consumerism. It can be an authentic expression, yet mass production has also made it a symbol of artifice. It’s a way to embrace the innocent and joyful, yet it can also be used to avoid facing the harsher realities of adulthood. When we embrace kawaii, are we truly finding joy, or are we surrendering to an illusion of comfort in an otherwise chaotic world?
It’s worth asking whether the prevalence of kawaii in public and private spaces reflects a universal desire for escapism or if it serves as a tool to maintain conformity and compliance. Perhaps, at its core, kawaii holds up a mirror to society’s collective vulnerabilities—highlighting not just what we nurture, but also what we are willing to overlook for the sake of cuteness.
Last week I wrote about the de-extinction of the dire wolf by a company, Colossal Biosciences. What they did was pretty amazing – sequence ancient dire wolf DNA and use that as a template to make 20 changes to 14 genes in the gray wolf genome via CRISPR. They focused on the genetic changes they thought would have the biggest morphological effect, so that the resulting pups would look as much as possible like the dire wolves of old.
This achievement, however, is somewhat tainted by overhyping of what was actually achieved, by the company and many media outlets. The pushback began immediately, and there is plenty of reporting about the fact that these are not exactly dire wolves (as I pointed out myself). Still, I do think we should not fall into the pattern of focusing on the controversy and the negative and missing the fact that this is a genuinely amazing scientific accomplishment. It is easy to become blasé about such things. Sometimes it’s hard to know in reporting what the optimal balance is between the positive and the negative, and as skeptics we definitely can tend toward the negative.
I feel the same way, for example, about artificial intelligence. Some of my skeptical colleagues have taken the approach that AI is mostly hype, focusing on what the recent crop of AI apps are not (they are not sentient, they are not AGI) rather than on what they are. In both cases I think it’s important to remember that science and pseudoscience are a continuum, and just because something is being overhyped does not mean it gets tossed in the pseudoscience bucket. That is just another form of bias. Sometimes that amounts to substituting cynicism for more nuanced skepticism.
Getting back to the “dire wolves”, how should we skeptically view the claims being made by Colossal Biosciences? First let me step back a bit and talk about de-extinction – bringing back species that have gone extinct from surviving DNA remnants. There are basically three approaches to achieve this. They all start with sequencing DNA from the extinct species. This is easier for recently extinct species, like the passenger pigeon, where we still have preserved biological samples. The more ancient the DNA, the harder it is to recover and sequence. Some research has estimated that the half-life of DNA (in good preserving conditions) is 521 years. This leads to an estimate that all base pairs will be gone by 6.8 million years. This means – no non-avian dinosaur DNA. There are controversial claims of recovered dino DNA, but that’s a separate discussion; for now let’s focus on the non-controversial DNA, of thousands to at most a few million years old.
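To make the half-life figure concrete, here is a minimal sketch (my own illustration, not from the cited research) of simple exponential decay with a 521-year half-life over the relevant timescales. The 6.8-million-year case underflows to zero in floating point, which is rather the point:

```python
# Fraction of DNA backbone bonds surviving after t years, assuming simple
# exponential decay with a 521-year half-life (the estimate cited above;
# real decay rates vary greatly with temperature and preservation).
def surviving_fraction(years: float, half_life: float = 521.0) -> float:
    return 0.5 ** (years / half_life)

for t in (13_000, 72_000, 1_000_000, 6_800_000):
    print(f"{t:>12,} years: {surviving_fraction(t):.2e} of bonds intact")
```

At 13,000 years (the age of the dire wolf samples) roughly one bond in 30 million survives, which is why ancient DNA comes back as tiny fragments rather than intact chromosomes.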
Species on the short list for de-extinction include the dire wolf (extinct for about 13,000 years), the woolly mammoth (about 10,000 years), the dodo (about 360 years), and the thylacine (about 90 years). The best way (not the most feasible way) to fully de-extinct a species is to completely sequence its DNA and then use that to make a full clone. No one would argue that a cloned woolly mammoth is not a woolly mammoth. There has been discussion of cloning the woolly mammoth and other species for decades, but the technology is very tricky. We would need a complete woolly mammoth genome – which we have. However, the DNA is degraded, making cloning not possible with current technology. But this is one potential pathway. It is more feasible for the dodo and thylacine.
A second way is to make a hybrid – take the woolly mammoth genome and use it to fertilize the egg from a modern elephant. The result would be half woolly mammoth and half Asian or African elephant. You could theoretically repeat this procedure with the offspring, breeding back with woolly mammoth DNA, until you have a creature that is mostly woolly mammoth. This method requires an extant relative that is close enough to produce fertile young. This is also tricky technology, and we are not quite there yet.
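To see why repeated backcrossing converges on “mostly woolly mammoth,” here is a minimal sketch of the expected arithmetic (an idealized average of my own; real inheritance of chromosome blocks is lumpier than this):

```python
# Expected woolly mammoth fraction of the genome across generations,
# assuming each generation is bred back with mammoth DNA. Generation 1
# is the initial half-mammoth hybrid; each backcross halves the
# remaining elephant contribution (idealized average, for illustration).
for generation in range(1, 6):
    mammoth_fraction = 1 - 0.5 ** generation
    print(f"Generation {generation}: ~{mammoth_fraction:.1%} mammoth on average")
```

After four or five backcross generations the elephant contribution drops below a few percent, though 100 percent is never reached this way.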
The third way is the “dino-chicken” (or chickenosaurus) method, promoted initially (as far as I can tell, but I’m probably wrong) by Jack Horner. With this method you start with an extant species and then make specific changes to its genome to “reverse engineer” an ancestor or close relative species. There are actually various approaches under this umbrella, but all involve starting with an extant species and making genetic changes. There is the Jurassic Park approach, which takes large chunks of “dino DNA” and plugs them into an intact genome from a modern species (why they used frog DNA instead of bird DNA is not clear). There is also the dino-chicken approach, which simply tries to figure out the genetic changes that happened over evolutionary time to result in the morphological changes that turned, for example, a theropod dinosaur into a chicken. Then, reverse those changes. This is more like reverse engineering a dinosaur by understanding how genes result in morphology.
Then we have the dire wolf approach – use ancient DNA as a template to guide specific CRISPR changes to an extant genome. This is very close to the dino-chicken approach, but uses actual ancient DNA as a template. All of these approaches (perhaps the best way to collectively describe these methods is the genetic engineering approach) do not result in a clone of the extinct species. They result in a genetically engineered approximation of the extinct species. Once you get past the hype, everyone acknowledges this is a fact.
The discussion that flows from the genetic engineering method is – how do we refer to the resulting organisms? We need some catchy shorthand that is scientifically accurate. The three wolves produced by Colossal Biosciences are not dire wolves. But they are not just gray wolves – they are wolves with dire wolf DNA resulting in dire wolf morphological features. They are engineered dire wolf “sims”, “synths”, “analogs”, “echoes”, “isomorphs”? Hmmm… A genetically engineered dire wolf isomorph. I like it.
Also, my understanding is that the goal of using the genetic engineering method of de-extinction is not to make a few changes and then stop, but to keep going. By my quick calculation the dire wolf and the gray wolf differ by about 800-900 genes out of 19,000 total. Our best estimate is that dire wolves had 78 chromosomes, like all modern canids, including the gray wolf, so that helps. So far 14 of those genes have been altered from gray wolf to dire wolf (at least enough to function like a dire wolf). There is no reason why they can’t keep going, making more and more changes based upon dire wolf DNA. At some point the result will be more like a dire wolf than a gray wolf. It will still be a genetic isomorph (it’s growing on me) but getting closer and closer to the target species. Is there any point at which we can say – OK, this is basically a dire wolf?
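A quick back-of-the-envelope calculation with the figures quoted above shows how early in the process these wolves are (a sketch; the 850-gene midpoint of the 800-900 range is my assumption):

```python
# Rough progress estimate for the gray wolf -> dire wolf editing project,
# using the approximate figures quoted above (illustrative only).
total_genes = 19_000   # approximate genes in the genome
divergent_genes = 850  # midpoint of the 800-900 genes that differ
edited_genes = 14      # genes altered so far

print(f"Genes that differ: {divergent_genes / total_genes:.1%} of the genome")
print(f"Edited so far: {edited_genes / divergent_genes:.1%} of the differing genes")
```

By this rough accounting the project is under 2 percent of the way through the divergent genes, which is why “isomorph” seems more honest than “dire wolf” at this stage.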
It’s also important to recognize that species are not discrete things. They are temporary, dynamic, and shifting islands of interbreeding genetic clusters. We should also not confuse taxonomy for reality – it is a naming convention that is ultimately arbitrary. Cladistics is an attempt to have a fully objective naming system, based entirely on evolutionary branching points. However, using that method is a subjective choice, and even within cladistics the break between species is not always clear.
I find this all pretty exciting. I also think the technology can be very important. Its best uses, in my opinion, are to de-extinct (as close as possible) species recently driven extinct by human activity, ones where something close to their natural ecosystem is still in existence (such as the dodo and thylacine). It can also be used to increase the genetic diversity of endangered species and reduce their risk of extinction.
Using it to bring back extinct ancient species, like the mammoth and dire wolf (or non-avian dinosaurs, for that matter), I see as a research project. And sure, I would love to see living examples that look like ancient extinct species, but that is mostly a side benefit. This can be an extremely useful research project, advancing genetics, cloning, and genetic engineering technology, and improving our understanding of ancient species.
This recent controversy is an excellent opportunity to teach the public about this technology and its implications. It’s also an opportunity to learn about categorization, terminology, and evolution. Let’s not waste it by overreacting to the hype and being dismissive.
We may have a unique opportunity to make an infrastructure investment that can demonstrably save money over the long term – by burying power and broadband lines. This is always an option, of course, but since we are in the early phases of rolling out fiber optic service, and also trying to improve our grid infrastructure with reconductoring, now may be the perfect time to upgrade our infrastructure by burying many of these lines.
This has long been a frustration of mine. I remember over 40 years ago seeing new housing developments (my father was in construction) with all the power lines buried. I hadn’t realized what a terrible eyesore all those telephone poles and wires were until they were gone. It was beautiful. I was led to believe this was the new trend, especially for residential areas. I looked forward to a day without the ubiquitous telephone poles, much like the transition to cable eliminated the awful TV antennae on top of every home. But that day never came. Areas with buried lines remained, it seems, a privilege of upscale neighborhoods. I get further annoyed every time there is a power outage in my area because of a downed line.
The reason, ultimately, had to be cost. Sure, there are lots of variables that determine that cost, but at the end of the day developers, towns, and utility companies were taking the cheaper option. But what price do we place on the aesthetics of the places we live, and on the inconvenience of regular power outages? I also hate the fact that the utility companies have to come around every year or so and carve ugly paths through large beautiful trees.
So I was very happy to see this study, which argues that the benefits of aggressively co-undergrounding electric and broadband lines outweigh the costs. First, they found that co-undergrounding (simply burying broadband and power lines at the same time) saves about 40% over doing each individually. This seems pretty obvious, but it’s good to put a number on it. But more importantly, they found that the whole project can save money over the long term. They modeled one town in Massachusetts and found:
“Over 40 years, the cost of an aggressive co-undergrounding strategy in Shrewsbury would be $45.4 million, but the benefit from avoiding outages is $55.1 million.”
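Putting the study’s two headline numbers side by side (a minimal sketch using only the quoted 40-year totals; no discounting or sensitivity analysis):

```python
# The modeled Shrewsbury totals quoted above, compared directly
# (40-year figures; no discounting - illustrative only).
cost = 45.4e6     # aggressive co-undergrounding cost, USD
benefit = 55.1e6  # value of avoided outages, USD

print(f"Net benefit: ${(benefit - cost) / 1e6:.1f} million over 40 years")
print(f"Benefit-cost ratio: {benefit / cost:.2f}")
```

That works out to roughly a $9.7 million net benefit and a benefit-cost ratio of about 1.2, before counting any reconductoring or broadband gains.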
The savings come mostly from avoided power outages. This means that areas most prone to power outages would benefit the most. What they mean by “aggressive” is co-undergrounding even before existing power lines are at the end of their lifespan. They do not consider the benefits of reconductoring – meaning increasing the carrying capacity of power lines with more modern construction. The benefit here can be huge as well, especially in facilitating the move to less centralized power production. We can further include the economic benefits of upgrading to fiber optic broadband, or even high-end cable service.
This is exactly the kind of thing that governments should be doing – thoughtful public investments that will improve our lives and save money in the long term. The up front costs are also within the means of utility companies and local governments. I would also like to see subsidies at the state and federal level to spread the costs out even more.
Infrastructure investments, at least in the abstract, tend to have broad bipartisan support. Even when they fight over such proposals, in the end both sides will take credit for them, because the public generally supports infrastructure that makes their lives better. For undergrounding there are the immediate benefits of improved aesthetics – our neighborhoods will look prettier. Then we will also benefit from improved broadband access, which can be connected to the rural broadband project, which has stalled. Investments in the grid can help keep electricity costs down. For those of us living in areas at high risk of power outages, the lack of such outages will also make an impression over time. We will tell our kids and grandkids stories about the time an ice storm took down power lines, which were lying dangerously across the road, and we had no power for days. What did we do with ourselves, they will ask. You mean – there was no heat in the winter? Did people die? Why yes, yes they did. It will seem barbaric.
This may not make sense for every single location, and obviously some long distance lines are better above ground. But for residential neighborhoods, undergrounding power and broadband seems like a no-brainer. It seemed like one 40 years ago. I hope we don’t miss this opportunity. This could also be a political movement that everyone can get behind, which would be a good thing in itself.
This article was originally published in Skeptic in 1997.
Presented here for the first time are the complete texts of two letters that Einstein wrote regarding his lack of belief in a personal god.
Just over a century ago, near the beginning of his intellectual life, the young Albert Einstein became a skeptic. He states so on the first page of his Autobiographical Notes (1949, pp. 3–5):
Thus I came—despite the fact I was the son of entirely irreligious (Jewish) parents—to a deep religiosity, which, however, found an abrupt ending at the age of 12. Through the reading of popular scientific books I soon reached the conviction that much in the stories of the Bible could not be true. The consequence was a positively fanatic [orgy of] freethinking coupled with the impression that youth is intentionally being deceived… Suspicion against every kind of authority grew out of this experience, a skeptical attitude … which has never left me….

We all know Albert Einstein as the most famous scientist of the 20th century, and many know him as a great humanist. Some have also viewed him as religious. Indeed, in Einstein’s writings there is well-known reference to God and discussion of religion (1949, 1954). Although Einstein stated he was religious and that he believed in God, it was in his own specialized sense that he used these terms. Many are aware that Einstein was not religious in the conventional sense, but it will come as a surprise to some to learn that Einstein clearly identified himself as an atheist and as an agnostic. If one understands how Einstein used the terms religion, God, atheism, and agnosticism, it is clear that he was consistent in his beliefs.
Part of the popular picture of Einstein’s God and religion comes from his well-known statements, such as:
“God is cunning but He is not malicious.” (Also: “God is subtle but he is not bloody-minded.” Or: “God is slick, but he ain’t mean.”) (1946)

“God does not play dice.” (On many occasions.)

“I want to know how God created the world. I am not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts, the rest are details.” (Unknown date.)

It is easy to see how some got the idea that Einstein was expressing a close relationship with a personal god, but it is more accurate to say he was simply expressing his ideas and beliefs about the universe.
Figure 1

Einstein’s “belief” in Spinoza’s God is one of his most widely quoted statements. But quoted out of context, like so many of these statements, it is misleading at best. It all started when Boston’s Cardinal O’Connell attacked Einstein and the General Theory of Relativity and warned the youth that the theory “cloaked the ghastly apparition of atheism” and “befogged speculation, producing universal doubt about God and His creation” (Clark, 1971, 413–414). Einstein had already experienced heavier-duty attacks against his theory in the form of anti-Semitic mass meetings in Germany, and he initially ignored the Cardinal’s attack. Shortly thereafter though, on April 24, 1929, Rabbi Herbert Goldstein of New York cabled Einstein to ask: “Do you believe in God?” (Sommerfeld, 1949, 103). Einstein’s return message is the famous statement:
“I believe in Spinoza’s God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with fates and actions of human beings” (103). The Rabbi, who was intent on defending Einstein against the Cardinal, interpreted Einstein’s statement in his own way when writing:
Spinoza, who is called the God-intoxicated man, and who saw God manifest in all nature, certainly could not be called an atheist. Furthermore, Einstein points to a unity. Einstein’s theory if carried out to its logical conclusion would bring to mankind a scientific formula for monotheism. He does away with all thought of dualism or pluralism. There can be no room for any aspect of polytheism. This latter thought may have caused the Cardinal to speak out. Let us call a spade a spade (Clark, 1971, 414).

Both the Rabbi and the Cardinal would have done well to note Einstein’s remark, of 1921, to Archbishop Davidson in a similar context about science: “It makes no difference. It is purely abstract science” (413).
The American physicist Steven Weinberg (1992), in critiquing Einstein’s “Spinoza’s God” statement, noted: “But what possible difference does it make to anyone if we use the word “God” in place of “order” or “harmony,” except perhaps to avoid the accusation of having no God?” Weinberg certainly has a valid point, but we should also forgive Einstein for being a product of his times, for his poetic sense, and for his cosmic religious view regarding such things as the order and harmony of the universe.
But what, at bottom, was Einstein’s belief? The long answer exists in Einstein’s essays on religion and science as given in his Ideas and Opinions (1954), his Autobiographical Notes (1949), and other works. What about a short answer?
In the summer of 1945, just before the bombs of Hiroshima and Nagasaki, Einstein wrote a short letter stating his position as an atheist (Figure 1, above). Ensign Guy H. Raner had written Einstein from mid-Pacific requesting a clarification on the beliefs of the world-famous scientist (Figure 2, below). Four years later Raner again wrote Einstein for further clarification and asked: “Some people might interpret (your letter) to mean that to a Jesuit priest, anyone not a Roman Catholic is an atheist, and that you are in fact an orthodox Jew, or a Deist, or something else. Did you mean to leave room for such an interpretation, or are you from the viewpoint of the dictionary an atheist; i.e., ‘one who disbelieves in the existence of a God, or a Supreme Being?’” Einstein’s response is shown in Figure 3.
Figure 2

Combining key elements from Einstein’s first and second responses, there is little doubt as to his position:
From the viewpoint of a Jesuit priest I am, of course, and have always been an atheist…. I have repeatedly said that in my opinion the idea of a personal God is a childlike one. You may call me an agnostic, but I do not share the crusading spirit of the professional atheist whose fervor is mostly due to a painful act of liberation from the fetters of religious indoctrination received in youth. I prefer an attitude of humility corresponding to the weakness of our intellectual understanding of nature and of our being.

I was fortunate to meet Guy Raner, by chance, at a humanist dinner in late 1994, at which time he told me of the Einstein letters. Raner lives in Chatsworth, California and has retired after a long teaching career. The Einstein letters, a treasured possession for most of his life, were sold in December, 1994, to a firm that deals in historical documents (Profiles in History, Beverly Hills, CA). Five years ago a very brief letter (Raner & Lerner, 1992) describing the correspondence was published in Nature. But the two Einstein letters have remained largely unknown.
Curiously enough, the wonderful and well-known biography Albert Einstein, Creator and Rebel, by Banesh Hoffmann (1972) does quote from Einstein’s 1945 letter to Raner. But maddeningly, although Hoffmann quotes most of the letter (194–195), he leaves out Einstein’s statement: “From the viewpoint of a Jesuit Priest I am, of course, and have always been an atheist.”!
Hoffmann’s biography was written with the collaboration of Einstein’s secretary, Helen Dukas. Could she have played a part in eliminating this important sentence, or was it Hoffmann’s wish? I do not know. However, Freeman Dyson (1996) notes “…that Helen wanted the world to see, the Einstein of legend, the friend of school children and impoverished students, the gently ironic philosopher, the Einstein without violent feelings and tragic mistakes.” Dyson also notes that he thought Dukas “…profoundly wrong in trying to hide the true Einstein from the world.” Perhaps her well-intentioned protectionism included the elimination of Einstein as atheist.
Figure 3

Although not a favorite of physicists, Einstein, The Life and Times, by the professional biographer Ronald W. Clark (1971), contains one of the best summaries on Einstein’s God: “However, Einstein’s God was not the God of most men. When he wrote of religion, as he often did in middle and later life, he tended to … clothe with different names what to many ordinary mortals—and to most Jews—looked like a variant of simple agnosticism….This was belief enough. It grew early and rooted deep. Only later was it dignified by the title of cosmic religion, a phrase which gave plausible respectability to the views of a man who did not believe in a life after death and who felt that if virtue paid off in the earthly one, then this was the result of cause and effect rather than celestial reward. Einstein’s God thus stood for an orderly system obeying rules which could be discovered by those who had the courage, the imagination, and the persistence to go on searching for them” (19).
Einstein continued to search, even to the last days of his 76 years, but his search was not for the God of Abraham or Moses. His search was for the order and harmony of the world.
This really is just a coincidence – I posted yesterday about using AI and modern genetic engineering technology, with one application being the de-extinction of species. I had not seen the news from yesterday about a company that just announced it has cloned three dire wolves from ancient DNA. This is all over the news, so here is a quick recap before we discuss the implications.
The company, Colossal Biosciences, has long announced its plans to de-extinct the woolly mammoth. This was the company that recently announced it had made a woolly mouse by inserting a gene for wooliness from recovered woolly mammoth DNA. This was a proof-of-concept demonstration. But now they say they have also been working on the dire wolf, a species of wolf closely related to the modern gray wolf that went extinct 13,000 years ago. We mostly know about them from skeletons found in the La Brea Tar Pits (some of which are on display at my local Peabody Museum). Dire wolves are about 20% bigger than gray wolves, have thicker, lighter coats, and are more muscular. They are the bad-ass ice-age version of wolves that coexisted with saber-toothed tigers and woolly mammoths.
The company was able to recover DNA from a 13,000-year-old tooth and a 72,000-year-old skull. With that DNA as a template, they engineered wolf DNA at 20 sites over 14 genes, then used that DNA to create an embryo, which they gestated in a dog. They actually did this twice, the first time creating two males, Romulus and Remus (now six months old), and the second time making one female, Khaleesi (now three months old). The wolves are kept in a reserve. The company says they have no current plan to breed them, but do plan to make more in order to create a full pack to study pack behavior.
The company acknowledges these puppies are not the exact dire wolves that were alive up to 13,000 years ago, but they are pretty close. They started pretty close – gray wolves share 99.5% of their DNA with dire wolves, and now they are even closer, replicating the key morphological features of the dire wolf. So not a perfect de-extinction, but pretty close. Next up is the woolly mammoth. They also plan to use the same techniques to de-extinct the dodo and the thylacine.
What is the end-game of de-extincting these species? That’s a great question. I don’t anticipate that a breeding population of dire wolves will be released into the wild. While they did coexist with gray wolves, and could again, this species was not driven to extinction by humans but likely by changing environmental conditions. They are no longer adapted to this world, and would likely be a highly disruptive invasive species. The same is true of the woolly mammoth, although it is not a predator so the concerns are not as – dire (sorry, couldn’t resist). But still, we would need to evaluate their effect on any ecosystem we place them in.
The same is not true for the thylacine or dodo. The dodo in particular seems benign enough to reintroduce. The challenge will be getting it to survive. It went extinct not just from human predation, but also because it nested on the ground and was not prepared for the rats and other predators that we introduced to its island. So first we would need to return its habitat to a livable state. Thylacines might be the easiest to reintroduce, as they went extinct very recently and their habitat still largely exists.
So – for those species we have no intention of reintroducing into the wild, or for which this would be an extreme challenge – what do we do with them? We could keep them on a large preserve to study them and to be viewed by tourists. Here we might want to follow the model of Zealandia – a wildlife sanctuary in New Zealand. I visited Zealandia and it is amazing. It is a 500+ acre ecosanctuary, completely walled off from the outside. The goal is to recreate the native plants and animals of pre-human New Zealand, and to keep out all introduced predators. It serves as a research facility, sanctuary for endangered species, and tourist and educational site.
I could imagine other similar ecosanctuaries. The island of Mauritius, where the dodo once lived, is now populated, but vast parts of it are wild. It might be feasible to create an ecosanctuary there, safe for the dodo. We could do a similar project in North America, one that would serve not only as a preserve for some modern species but could also contain compatible de-extincted species. Having large and fully protected ecosanctuaries is not a bad idea in itself.
There is a fine line between an ecosanctuary and a Jurassic Park. It really is a matter of how the park is managed and how people interact with it, and it’s more of a continuum than a sharp demarcation. It really isn’t a bad idea to take an otherwise barren island, perhaps a recent volcanic island where life has not been established yet, and turn it into an isolated ecosanctuary, then fill it with a bunch of ancient plants and animals. This would be an amazing research opportunity, a way to preserve biodiversity, and an awesome tourist experience, which then could fund a lot of research and environmental initiatives.
I think the bottom line is that de-extinction projects can work out well, if they are managed properly. The question is – do we have faith that they will be? The chance that they are is increased if we engage in discussions now, including some thoughtful regulations to ensure ethical and responsible behavior all around.
Many of our preconceived notions about immigrants likely bear very little resemblance to the facts.
Tariff policy has been a contentious issue since the founding of the United States. Hamilton clashed with Jefferson and Madison over tariff policy in the 1790s, South Carolina threatened to secede from the union over tariff policy in 1832, and the Hawley-Smoot tariff generated outrage in 1930. Currently, Trump is sparking heated debates about his tariff policies.
To understand the ongoing tariff debate, it is essential to grasp the basics: Tariffs are taxes levied by governments on imported goods. They have been the central focus of U.S. trade policy since the federal government was established in 1789. Historically, tariffs have been used to raise government revenue, protect domestic industries, and influence the trade policies of other nations. The history of U.S. tariffs can be understood in three periods corresponding with these three uses.
From 1790 until the Civil War in 1861, tariffs primarily served as a source of federal revenue, accounting for about 90 percent of government income (since 2000, however, tariffs have generated less than 2 percent of the federal government’s income).1 Both the Union and the Confederacy enacted income taxes to help finance the Civil War. After the war, public resistance to income taxes grew, and Congress repealed the federal income tax in 1872. Later, when Congress attempted to reinstate an income tax in 1894, the Supreme Court struck it down in Pollock v. Farmers’ Loan & Trust Co. (1895), ruling it unconstitutional. To resolve this issue, the Sixteenth Amendment was ratified in 1913, granting Congress the authority to levy income taxes. Since then, federal income taxes have provided a much larger source of revenue than tariffs, allowing for greater federal government expenditures. The shift away from tariffs as the primary revenue source began during the Civil War and was further accelerated by World War I, which required large increases in federal spending.
Before the Civil War, the North and South had conflicting views on tariffs. The North, with its large manufacturing base, wanted higher tariffs to protect domestic industries from foreign competition. This protection would decrease the amount of competition Northern manufacturers faced, allowing them to charge higher prices and encounter less risk of being pushed out of business by more efficient foreign producers. By contrast, the South, with an economy rooted in agricultural exports (especially cotton), favored low tariffs, as it benefited from cheaper imported manufactured goods. These imports were largely financed by selling Southern cotton, produced by enslaved labor, to foreign markets, particularly Great Britain. The North-South tariff divide eventually led to the era of protective tariffs (1860-1934) after the Civil War, when the victorious North gained political power, and protectionist policies dominated U.S. trade.
For more than half a century after the Civil War, U.S. trade policy was dominated by high protectionist tariffs. Republican William McKinley, a strong advocate of high tariffs, won the presidency in 1896 with support from industrial interests. Between 1861 and the early 1930s, average tariff rates on dutiable imports rose to around 50 percent and stayed elevated for decades. As a point of comparison, average tariffs had declined to about 5 percent by the early 21st century.
Republicans passed the Hawley-Smoot Tariff in 1930, which coincided with the Great Depression. While it is generally agreed among economists that the Hawley-Smoot Tariff did not cause the Great Depression, it further hurt the world economy during the economic downturn (though many observers at the time thought that it was responsible for the global economic collapse). The widely disliked Hawley-Smoot Tariff, along with the catastrophic effects of the Great Depression, allowed the Democrats to gain political control of both Congress and the Presidency in 1932. They passed the Reciprocal Trade Agreements Act (RTAA) in 1934, which gave the president the power to negotiate reciprocal trade agreements.
The RTAA shifted some of the power over trade policy (i.e., tariffs) away from Congress and to the President. Whereas the constituencies of specific members of Congress are in certain regions of the U.S., the entire country can vote in Presidential elections. For that reason, regional producers generally have less political power over the President than they do over their specific members of Congress, and therefore the President tends to be less responsive to their interests and more responsive to the interests of consumers and exporters located across the nation. Since consumers and exporters generally benefit from lower tariffs, the President has an incentive to decrease them. Thus, the RTAA contributed to the U.S. lowering tariff barriers around the world. This marked the beginning of the era of reciprocity in U.S. tariff policy (1934-2025), in which the U.S. has generally sought to reduce tariffs worldwide.
World War II and its consequences also pushed the U.S. into the era of reciprocity. The European countries, which had been some of the United States’ strongest economic competitors, were decimated after two World Wars in 30 years. Exports from Europe declined and the U.S. shifted even more toward exporting after the Second World War. As more U.S. firms became larger exporters, their political power was aimed at lowering tariffs rather than raising them. (Domestic companies that compete with imports have an interest in lobbying for higher tariffs, but exporting companies have the opposite interest.)
The World Trade Organization (WTO) was founded in 1995. Photo © WTO.

The end of WWII left the U.S. concerned that yet another World War could erupt if economic conditions were unfavorable around the world. America also sought increased trade to stave off the spread of Communism during the Cold War. These geopolitical motivations led the U.S. to seek increased trade with non-Communist nations, which was partially accomplished by decreasing tariffs. This trend culminated in the creation of the General Agreement on Tariffs and Trade (GATT) in 1947, which was then superseded by the World Trade Organization (WTO) in 1995. These successive organizations helped reduce tariffs and other international trade barriers.
Although there is a strong consensus among economists that tariffs do more harm than good,2,3,4 there are some potential benefits of specific tariff policies.
Although tariffs have some theoretical benefits in specific situations, the competence and incentives of the U.S. political system often do not allow these benefits to come to fruition. Tariffs almost always come with the cost of economic inefficiency, which is why economists generally agree that tariffs do more harm than good. Does the increase in U.S. tariffs, particularly on China, since 2016 mark the end of the era of reciprocity, or is it just a blip? The answer will affect the economic well-being of Americans and people around the world.
The history of tariffs described in this article is largely based on Clashing Over Commerce by Douglas Irwin (2017).
The author would like to thank Professor John L. Turner at the University of Georgia for his invaluable input.
Throughout the early modern period—from the rise of the nation state through the nineteenth century—the predominant economic ideology of the Western world was mercantilism, or the belief that nations compete for a fixed amount of wealth in a zero-sum game: the +X gain of one nation means the –X loss of another nation, with the +X and –X summing to zero. The belief at the time was that in order for a nation to become wealthy, its government must run the economy from the top down through strict regulation of foreign and domestic trade, enforced monopolies, regulated trade guilds, subsidized colonies, accumulation of bullion and other precious metals, and countless other forms of economic intervention, all to the end of producing a “favorable balance of trade.” Favorable, that is, for one nation over another nation. As President Donald Trump often repeats, “they’re ripping us off!” That is classic mercantilism and economic nationalism speaking.
Adam Smith famously debunked mercantilism in his 1776 treatise An Inquiry into the Nature and Causes of the Wealth of Nations. Smith’s case against mercantilism is both moral and practical. It is moral, he argued, because: “To prohibit a great people…from making all that they can of every part of their own produce, or from employing their stock and industry in the way that they judge most advantageous to themselves, is a manifest violation of the most sacred rights of mankind.”1 It is practical, he showed, because: “Whenever the law has attempted to regulate the wages of workmen, it has always been rather to lower them than to raise them.”2
Producers and Consumers
Adam Smith’s The Wealth of Nations was one long argument against the mercantilist system of protectionism and special privilege that in the short run may benefit producers but which in the long run harms consumers and thereby decreases the wealth of a nation. All such mercantilist practices benefit the producers, monopolists, and their government agents, while the people of the nation—the true source of a nation’s wealth—remain impoverished: “The wealth of a country consists, not of its gold and silver only, but in its lands, houses, and consumable goods of all different kinds.” Yet, “in the mercantile system, the interest of the consumer is almost always constantly sacrificed to that of the producer.”3
Adam Smith statue in Edinburgh, Scotland. Photo by K. Mitch Hodge / Unsplash
The solution? Hands off. Laissez Faire. Lift trade barriers and other restrictions on people’s economic freedoms and allow them to exchange as they see fit for themselves, both morally and practically. In other words, an economy should be consumer driven, not producer driven. For example, under the mercantilist zero-sum philosophy, cheaper foreign goods benefit consumers but they hurt domestic producers, so the government should impose protective trade tariffs to maintain the favorable balance of trade.
But who is being protected by a protective tariff? Smith showed that, in principle, the mercantilist system only benefits a handful of producers while the great majority of consumers are further impoverished because they have to pay a higher price for foreign goods. The growing of grapes in France, Smith noted, is much cheaper and more efficient than in the colder climes of his homeland, for example, where “by means of glasses, hotbeds, and hotwalls, very good grapes can be raised in Scotland” but at a price thirty times greater than in France. “Would it be a reasonable law to prohibit the importation of all foreign wines, merely to encourage the making of claret and burgundy in Scotland?” Smith answered the question by invoking a deeper principle:
What is prudence in the conduct of every private family, can scarce be folly in that of a great kingdom. If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them.4
This is the central core of Smith’s economic theory: “Consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to, only so far as it may be necessary for promoting that of the consumer.” The problem is that the system of mercantilism “seems to consider production, and not consumption, as the ultimate end and object of all industry and commerce.”5 So what?
When production is the object, and not consumption, producers will appeal to top-down regulators instead of bottom-up consumers. Instead of consumers telling producers what they want to consume, government agents and politicians tell consumers what, how much, and at what price the products and services will be that they consume. This is done through a number of different forms of interventions into the marketplace. Domestically, we find examples in tax favors for businesses, tax subsidies for corporations, regulations (to control prices, imports, exports, production, distribution, and sales), and licensing (to control wages, protect jobs).6 Internationally, the interventions come primarily through taxes under varying names, including “duties,” “imposts,” “excises,” “tariffs,” “protective tariffs,” “import quotas,” “export quotas,” “most-favored nation agreements,” “bilateral agreements,” “multilateral agreements,” and the like.
Such agreements are never between the consumers of two nations; they are between the politicians and the producers of the nations. Consumers have no say in the matter, with the exception of indirectly voting for the politicians who vote for or against such taxes and tariffs. And they all sum to the same effect: the replacement of free trade with “fair trade” (fair for producers, not consumers), which is another version of the mercantilist “favorable balance of trade” (favorable for producers, not consumers). Mercantilism is a zero-sum game in which producers win by the reduction or elimination of competition from foreign producers, while consumers lose by having fewer products from which to choose, along with higher prices and often lower quality products. The net result is a decrease in the wealth of a nation.
The principle is as true today as it was in Smith’s time, and we still hear the same objections Smith did: “Shouldn’t we protect our domestic producers from foreign competition?” And the answer is the same today as it was two centuries ago: no, because “consumption is the sole end and purpose of all production.”
Nonzero Economics
The founders of the United States and the framers of the Constitution were heavily influenced by the Enlightenment thinkers of England and the continent, including and especially Adam Smith. Nevertheless, it was not long after the founding of the country before our politicians began to shift the focus of the economy from consumption to production. In 1787, the United States Constitution was ratified, which included Article 1, Section 8: “The Congress shall have the power to lay and collect taxes, duties, imposts, and excises to cover the debts of the United States.” As an amusing exercise in bureaucratic wordplay, consider the common usages of these terms in the Oxford English Dictionary.
Tax: “a compulsory contribution to the support of government”
Duty: “a payment to the public revenue levied upon the import, export, manufacture, or sale of certain commodities”
Impost: “a tax, duty, imposition levied on merchandise”
Excise: “any toll or tax.”
(Note the oxymoronic phrase “compulsory contribution” in the first definition.)
A revised Article 1, Section 8 reads: “The Congress shall have the power to lay and collect taxes, taxes, taxes, and taxes to cover the debts of the United States.”
Photo by Anthony Garand / Unsplash
In the U.K. and on the continent, mercantilists dug in while political economists, armed with the intellectual weapons provided by Adam Smith, fought back, wielding the pen instead of the sword. The nineteenth-century French economist Frédéric Bastiat, for example, was one of the first political economists after Smith to show what happens when the market depends too heavily on top-down tinkering from the government. In his wickedly raffish The Petition of the Candlemakers, Bastiat satirizes special interest groups—in this case candlemakers—who petition the government for special favors:
We are suffering from the ruinous competition of a foreign rival who apparently works under conditions so far superior to our own for the production of light, that he is flooding the domestic market with it at an incredibly low price.... This rival... is none other than the sun.... We ask you to be so good as to pass a law requiring the closing of all windows, dormers, skylights, inside and outside shutters, curtains, casements, bull’s-eyes, deadlights and blinds; in short, all openings, holes, chinks, and fissures.7
Zero-sum mercantilist models hung on through the nineteenth and twentieth centuries, even in America. Since the income tax was not passed until 1913 through the Sixteenth Amendment, for most of the country’s first century the practitioners of trade and commerce were compelled to contribute to the government through various other taxes. Since foreign trade was not able to meet the growing debts of the United States, and in response to the growing size and power of the railroads and political pressure from farmers who felt powerless against them, in 1887 the government introduced the Interstate Commerce Commission. The ICC was charged with regulating the services of specified carriers engaged in transportation between states, beginning with railroads, but then expanded the category to include trucking companies, bus lines, freight carriers, water carriers, oil pipelines, transportation brokers, and other carriers of commerce.8 Regardless of its intentions, the ICC’s primary effect was interference with the freedom of people to buy and sell between the states of America.
The ICC was followed in 1890 with the Sherman Anti-Trust Act, which declared: “Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony,” resulting in a massive fine, jail, or both.
When stripped of its obfuscatory language, the Sherman Anti-Trust Act and the precedent-setting cases that have been decided in the courts in the century since it was passed allow the government to indict an individual or a company on one or more of four crimes:
This was Katy-bar-the-door for anti-business legislators and their zero-sum mercantilist bureaucrats to restrict the freedom of consumers and producers to buy and sell, and they did with reckless abandon.
Completing Smith’s Revolution
Tariffs are premised on a win-lose, zero-sum, producer-driven economy, which ineluctably leads to consumer loss. By contrast, a win-win, nonzero, consumer-driven economy leads to consumer gain. Ultimately, Smith held, a consumer-driven economy will produce greater overall wealth in a nation than will a producer-driven economy. Smith’s theory was revolutionary because it is counterintuitive. Our folk economic intuitions tell us that a complex system like an economy must have been designed from the top down, and thus it can only succeed with continual tinkering and control from the top. Smith amassed copious evidence to counter this myth—evidence that continues to accumulate two and a half centuries later—to show that, in the modern language of complexity theory, the economy is a bottom-up self-organized emergent property of complex adaptive systems.
Adam Smith launched a revolution that has yet to be fully realized. A week does not go by without a politician, economist, or social commentator bemoaning the loss of American jobs, American manufacturing, and American products to foreign jobs, foreign manufacturing, and foreign products. Even conservatives—purportedly in favor of free markets, open competition, and less government intervention in the economy—have few qualms about employing protectionism when it comes to domestic producers, even at the cost of harming domestic consumers.
Citing the need to protect the national economic interest—and Harley-Davidson—Ronald Reagan raised tariffs on Japanese motorcycles from 4.4 percent to 49.4 percent. Photo by Library of Congress / Unsplash
Even the icon of free market capitalism, President Ronald Reagan, compromised his principles in 1982 to protect the Harley-Davidson Motor Company when it was struggling to compete against Japanese motorcycle manufacturers that were producing higher quality bikes at lower prices. Honda, Kawasaki, Yamaha, and Suzuki were routinely undercutting Harley-Davidson by $1500 to $2000 a bike in comparable models.
On January 19, 1983, the International Trade Commission ruled that foreign motorcycle imports were a threat to domestic motorcycle manufacturers, finding injury by a 2-to-1 vote on a petition by Harley-Davidson, which complained that it could not compete with foreign motorcycle producers.10 On April 1, Reagan approved the ITC recommendation, explaining to Congress, “I have determined that import relief in this case is consistent with our national economic interest,” thereby raising the tariff from 4.4 percent to 49.4 percent for a year, a more than ten-fold tax increase on foreign motorcycles that was absorbed by American consumers. The protective tariff worked to help Harley-Davidson recover financially, but it was American motorcycle consumers who paid the price, not Japanese producers. As the ITC Chairman Alfred E. Eckes explained about his decision: “In the short run, price increases may have some adverse impact on consumers, but the domestic industry’s adjustment will have a positive long-term effect. The proposed relief will save domestic jobs and lead to increased domestic production of competitive motorcycles.”11
Photo by Lisanto 李奕良 / Unsplash
Whenever free trade agreements are proposed that would allow domestic manufacturers to produce their goods cheaper overseas and thereby sell them domestically at a much lower price than they could have with domestic labor, politicians and economists, often under pressure from trade unions and political constituents, routinely respond disapprovingly, arguing that we must protect our domestic workers. Recall Presidential candidate Ross Perot’s oft-quoted 1992 comment in response to the North American Free Trade Agreement (NAFTA) about the “giant sucking sound” of jobs being sent to Mexico from the United States.
In early 2007, the Nobel laureate economist Edward C. Prescott lamented that economists invest copious time and resources countering the myth that it is “the government’s economic responsibility to protect U.S. industry, employment and wealth against the forces of foreign competition.” That is not the government’s responsibility, says Prescott, echoing Smith, which is simply “to provide the opportunity for people to seek their livelihood on their own terms, in open international markets, with as little interference from government as possible.” Prescott shows that “those countries that open their borders to international competition are those countries with the highest per capita income” and that open economic borders “is the key to bringing developing nations up to the standard of living enjoyed by citizens of wealthier countries.”12
“Protectionism is seductive,” Prescott admits, “but countries that succumb to its allure will soon have their economic hearts broken. Conversely, countries that commit to competitive borders will ensure a brighter economic future for their citizens.” But why exactly do open economic borders, free trade, and international competition lead to greater wealth for a nation? Writing over two centuries after Adam Smith, Prescott reverberates the moral philosopher’s original insight:
It is openness that gives people the opportunity to use their entrepreneurial talents to create social surplus, rather than using those talents to protect what they already have. Social surplus begets growth, which begets social surplus, and so on. People in all countries are motivated to improve their condition, and all countries have their share of talented risk-takers, but without the promise that a competitive system brings, that motivation and those talents will only lie dormant.13
The Evolutionary Origins of Tariffs and Zero-Sum Economics
Why is mercantilist zero-sum protectionism so pervasive and persistent? Bottom-up invisible hand explanations for complex systems are counterintuitive because of our folk economic propensity to perceive designed systems to be the product of a top-down designer. But there is a deeper reason grounded in our evolved social psychology of group loyalty. The ultimate reason that Smith’s revolution has not been fulfilled is that we evolved a propensity for in-group amity and between-group enmity, and thus it is perfectly natural to circle the wagons and protect one’s own, whoever or whatever may be the proxy for that group. Make America Great Again!
For the first 90,000 years of our existence as a species we lived in small bands of tens to hundreds of people. In the last 10,000 years some bands evolved into tribes of thousands, some tribes developed into chiefdoms of tens of thousands, some chiefdoms coalesced into states of hundreds of thousands, and a handful of states conjoined together into empires of millions. The attendant leap in food-production and population that accompanied the shift to chiefdoms and states allowed for a division of labor to develop in both economic and social spheres. Full-time artisans, craftsmen, and scribes worked within a social structure organized and run by full-time politicians, bureaucrats, and, to pay for it all, tax collectors. The modern state economy was born.
In this historical trajectory our group psychology evolved and along with it a propensity for xenophobia—in-group good, out-group bad. In the Paleolithic social environment in which our moral commitments evolved, one’s fellow in-group members consisted of family, extended family, friends, and community members who were well known to each other. To help others was to help oneself. Those groups who practiced in-group harmony and between-group antagonism would have had a survival advantage over those groups who experienced within-group social divide and decoherence, or haphazardly embraced strangers from other groups without first establishing trust. Because our deep social commitments evolved as part of our behavioral repertoire of responses for survival in a complex social environment, we carry the seeds of such in-group inclusiveness today. The resulting within-group cohesiveness and harmony carries with it a concomitant tendency for between-group xenophobia and tribalism that, in the context of a modern economic system, leads to protectionism and mercantilism.
And tariffs. We must resist the tribal temptation.
I think it’s increasingly difficult to argue that the recent boom in artificial intelligence (AI) is mostly hype. There is a lot of hype, but don’t let that distract you from the real progress. The best indication of this is applications in scientific research, because the outcomes are measurable and objective. AI applications are particularly adept at finding patterns in vast sets of data, accomplishing in hours what might have required months of traditional research. We recently discussed on the SGU using AI to sequence proteins, which is the direction that researchers are going in. Compared to the traditional method, AI analysis is faster and better at identifying novel proteins (those not already in the database).
One SGU listener asked an interesting question after our discussion of AI and protein sequencing that I wanted to explore – can we apply the same approach to DNA, and can this result in reverse-engineering the genetic sequence from the desired traits? AI is already transforming genetic research. AI apps allow for faster, cheaper, and more accurate DNA sequencing, while also allowing for the identification of gene variants that correlate with a disease or a trait. Genetics is in the sweet spot for these AI applications – using large databases to find meaningful patterns. How far will this tech go, and how quickly?
We have already sequenced the DNA of over 3,000 species. This number is increasing quickly, accelerated by AI sequencing techniques. We also have a lot of data about gene sequences and the resulting proteins, non-coding regulatory DNA, gene variants and disease states, and developmental biology. If we trained an AI on all this data, could it then make predictions about the effects of novel gene variants? Could it also go from a desired morphological trait back to the genetic sequence that would produce that trait? Again, this sounds like the perfect application for AI.
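To make the idea concrete, here is a minimal toy sketch of what “train a forward model on sequence-to-trait data, then search candidate edits toward a desired trait” could look like. Everything here is invented for illustration – synthetic sequences, a made-up trait, a generic off-the-shelf regressor – and bears no resemblance to a real genomics pipeline:

```python
# Toy sketch: learn sequence -> trait, then score in-silico edits.
# Synthetic data and an invented trait -- illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Flatten a DNA string into a one-hot feature vector."""
    return np.array([[b == base for base in BASES] for b in seq], dtype=float).ravel()

# Fake "genomes" and a made-up trait driven by two positions plus noise.
seqs = ["".join(rng.choice(list(BASES), size=20)) for _ in range(500)]
trait = np.array([2.0 * (s[3] == "G") + 1.0 * (s[11] == "A") + rng.normal(0, 0.1)
                  for s in seqs])

# Forward model: sequence -> predicted trait.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(np.stack([one_hot(s) for s in seqs]), trait)

# "Reverse engineering" as search: try every single-base edit and keep
# the one the model predicts lands closest to the desired trait value.
def best_edit(seq: str, target: float) -> str:
    candidates = [seq[:i] + b + seq[i + 1:]
                  for i in range(len(seq)) for b in BASES if b != seq[i]]
    preds = model.predict(np.stack([one_hot(c) for c in candidates]))
    return candidates[int(np.argmin(np.abs(preds - target)))]

print(best_edit(seqs[0], target=3.0))  # should flip position 3 to G or 11 to A
```

Real systems would presumably swap the toy regressor for deep sequence models trained on vast curated datasets, but the shape of the problem – a forward predictor plus a search over candidate edits – is the same.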
In the short run this approach is likely to accelerate genetic research and allow us to ask questions that would have been impractical otherwise. This will build the genetic database itself. In the medium term this could also become a powerful tool of genetic modification. We won’t necessarily need to take a gene from one species and put it into another. We could simply predict which changes would need to be made to the existing genes of a cultivar to get the desired trait. Then we can use CRISPR (or some other tool) to make those specific changes to the genome.
How far will this technology go? At some point in the long term could we, for example, ask an AI to start with a chicken genome and then predict which specific genetic changes would be necessary to change that chicken into a velociraptor? Could we change an elephant into a woolly mammoth? Could this become a realistic tool of deextinction? Could we reduce the risk of extinction in an endangered species by artificially increasing the genetic diversity in the remaining population?
What I am describing so far is actually the low-hanging fruit. AI is already accelerating genetics research. It is already being used for genetic engineering, to help predict the net effects of genetic changes to reduce the chance of unintended consequences. This is just one step away from using AI to plan the changes in the first place. Using AI to help increase genetic diversity in at-risk populations and for deextinction is a logical next step.
But that is not where this thought experiment ends. Of course whenever we consider making genetic changes to humans the ethics becomes very complicated. Using AI and genetic technology for designer humans is something we will have to confront at some point. What about entirely artificial organisms? At what point can we not only tweak or even significantly transform existing species, but design a new species from the ground up? The ethics of this are extremely complicated, as are the potential positive and negative implications. The obvious risk would be releasing into the wild a species that would be the ultimate invasive species.
There are safeguards that could be created. All such creatures, for example, could be not just sterile but completely unable to reproduce. I know – this didn’t work out well in Jurassic Park, nature finds a way, etc., but there are potential safeguards so complete that no mutation could undo them, such as completely lacking reproductive organs or gametes. There is also the “lysine contingency” – essentially some biological factor that would prevent the organism from surviving for long outside a controlled environment.
This all sounds scary, but at some point we could theoretically get to acceptable safety levels. For example, imagine a designer pet, with a suite of desirable features. This creature cannot reproduce, and if you don’t regularly feed it special food it will die, or perhaps just go into a coma from which it can be revived. Such pets might be safer than playing genetic roulette with random breeding of domesticated predators. This goes not just for pets but for a variety of work animals.
Sure – PETA will have a meltdown. There are legitimate ethical considerations. But I don’t think they are unresolvable.
In any case, we are rapidly hurtling toward this future. We should at least head into this future with our eyes open.
Annie Dawid’s most recent novel revisits the Jonestown Massacre from the perspective of the people who were there, taking the spotlight off cult leader Jim Jones and rehumanizing the “mindless zombies” who followed one man from their homes in the U.S. to their deaths in Guyana. But as our notion of victimhood improves, we are also forced to confront an ugly truth: in the almost fifty years since Jonestown, large-scale cult-related death has not gone away.
On the 18th of November, 2024, fiction author Annie Dawid’s sixth book, Paradise Undone: A Novel of Jonestown, celebrated its first birthday on the same day as the forty-sixth anniversary of its subject matter, an incident that saw the largest intentional loss of U.S. civilian life in the 20th century and introduced the world to the horrors and dangers of cultism—the Jonestown Massacre.
A great deal has been written on Jonestown since 1978, although mostly non-fiction, and the books Raven: The Untold Story of the Rev. Jim Jones and His People (1982) and The Road to Jonestown: Jim Jones and Peoples Temple (2017) are considered some of the most thorough investigations into what happened in the years leading up to the massacre. Many historical and sociological studies of Jonestown focus heavily on the psychology and background of the man who ordered 917 men, women, and children to die with him in the Guyanese jungle—The Reverend Jim Jones.
For cult survivors beginning the difficult process of unpacking and rebuilding after their cult involvement—or for those who lose family members or friends to cult tragedy—the shame of cult involvement and the public’s misconception that cult recruitment stems from a psychological or emotional fault are challenges to overcome.
And when any subsequent discussions of cult-related incidents can result in a disproportionate amount of attention given to cult leaders, often classified as pathological narcissists or having Cluster-B personality disorders, there’s a chance that with every new article or book on Jonestown, we’re just feeding the beast—often at the expense of recognizing the victims.
An aerial view of the dead in Jonestown.
Annie Dawid, however, uses fiction to avoid the trap of revisiting Jonestown through the lens of Jones, essentially removing him and his hold over the Jonestown story.
“He’s a man that already gets too much air time,” she says, “The humanity of 917 people gets denied by omission. That’s to say their stories don’t get told, only Jones’ story gets told over and over again.”
“I read so many books about him. I was like enough,” she says, “Enough of him.”
Jones of Jonestown
By all accounts, Jones, in his heyday, was a handsome man.
An Internet image search for Jones pulls up an almost iconic, counter-culture cool black-and-white photo of a cocksure man in aviator sunglasses and a dog collar, his lips parted as if the photographer has caught him in the middle of delivering some kind of profundity.
Jones’s signature aviator sunglasses may have once been a fashion statement, a hip priest amongst the Bay Area kids, but now he never seems to be without them as an increasing amphetamine and tranquilizer dependency has permanently shaded the areas under his eyes.
Jim Jones in 1977. By Nancy Wong
“Jim Jones is not just a guy with an ideology; he was a preacher with fantastic charisma,” says cult expert Mike Garde, director of Dialogue Ireland, an independent Irish charity that educates the public on cultism and assists its victims. “And this charisma would have been unable to bring people to Guyana if he had not been successful at doing it in San Francisco,” he adds.
Between January 1977 and August 1978, almost 900 members of the Peoples Temple gave up their jobs and life savings, and left family members behind in the U.S., relocating to Guyana to begin moving into their new home: the Peoples Temple Agricultural Mission, an agricultural commune inspired by Soviet socialist values.
On November 19th, 1978, U.S. Channel 7 interrupted its normal broadcast with a special news report, and presenter Tom Van Amburg encouraged viewer discretion and described the horror of hardened newsmen upon seeing the scenes at Jonestown that had “shades of Auschwitz.”
As a story, the details of Jonestown feel like a work of violent fiction, like a prototype Cormac McCarthy novel: a Heart of Darkness-esque cautionary tale of Wild-West pioneering gone wrong in a third-world country, with Jones cast in the lead role.
“I feel like there’s a huge admiration for bad boys, and if they’re good-looking, that helps too,” Dawid says, “This sort of admiration of the bad boy makes it that we want to know, we’re excited by the monster—we want to know all about the monster.”
Dawid understands Jones’ allure, his hold over the Jonestown narrative as well as the public’s attention, but “didn’t want to indulge that part of me either,” she says.
“But I wasn’t tempted to because I learned about so many interesting people that were in the story but never been the subjects of the story,” she adds, “So I wanted to make them the subjects.”
Screenshot of the website for the award-winning film Jonestown: The Life and Death of Peoples Temple by Stanley Nelson, Marcia Smith, and Noland Walker
The People of the Peoples Temple
For somebody who was there from the modest Pentecostal beginnings of the Peoples Temple in 1954 until the end in Guyana, very little attention had ever been paid to Marceline Jones in the years after Jonestown.
“She was there—start to finish. For me, she made it all happen, and nobody wrote anything about her,” Dawid says, “The woman behind the man doesn’t exist.”
Even for Garde, Marceline was another anonymous victim of no significance beyond the surname connecting her to her husband: “My initial read of Marceline was that she was just a cipher, she wasn’t a real person,” he says, “She didn’t even register on my dial.”
Dawid gives Marceline an existence, and in her book, she’s a “superwoman” juggling her duties as a full-time nurse and the Peoples Temple—a caring, selfless individual who lives in the service of others, mainly the children and the elderly of the Peoples Temple.
“In the sort of awful way, she’s this smart, interesting, energetic woman, but she can’t escape the power of her husband,” Dawid says, “It’s just very like domestic violence where the woman can’t get away from the abuser [and] I have had so much feedback from older women who felt that they totally related to her.”
Selfless altruism was a shared characteristic of the Peoples Temple, as members spent most of their time involved in some kind of charity work, from handing out food to the homeless to organizing clothes drives.
“You know, I did grow to understand the whole sort of social justice beginnings of Peoples Temple,” Dawid says, “I came to admire the Peoples Temple as an organization.”
“Social justice, racism, and caring for old people, that was a big part of the Peoples Temple. And so it made sense why an altruistic, smart, young person would say, ‘I want to be part of this,’” she adds.
Guyana
For Dawid, where it all went down is just as important—and arguably just as overlooked in the years after 1978—as the people who went there.
Acknowledging the incredible logistical feat of moving almost 1000 people, many of them passport-less, to a foreign country, Dawid sees the small South American country as another casualty of Jonestown: “I had to have a Guyanese voice in my book because Guyana was another victim of Jones,” Dawid says.
The English-speaking Guyana—recently free of British colonial rule and leaning towards socialism under Prime Minister Forbes Burnham—offered Jones a haven from the increasing scrutiny back in the U.S. amidst accusations of fraud and sexual abuse, and was “a place to escape the regulation of the U.S. and enjoy the weak scrutiny of the Guyanese state,” according to Garde.
“He was not successful at covering up the fact he had a dual model: he was sexually abusing women, taking money, and accruing power to himself, and he had to do it in Guyana,” Garde adds, “He wanted a place where he could not be observed.”
There may be a temptation to overstate what happened in 1978 as leaving an indelible, defining mark on the reputation of a country during its burgeoning years as an independent nation, but in the columns of many newspapers on the breakfast tables of American households in the years afterward, one could not be discussed without the other: “So it used to be that if you read an article that mentioned Guyana, it always mentioned Jonestown,” Dawid says.
In the few reports interested in the Guyanese perspective after Jonestown, the locals have gone through a range of feelings, from wanting to forget the tragedy ever happened to turning the site into a destination for dark tourism.
However, the country’s 2015 discovery of offshore oil means that—in the pages of some outlets and the minds of some readers—Jonestown is no longer the only thing synonymous with Guyana: “I read an article in the New York Times about Guyana’s oil,” Dawid says, “and it didn’t mention Jonestown.”
From victimhood to survivorship: out of the darkness and into the light…
Victimhood to Survivorship
According to Garde, the public’s perception of cult victims as mentally defective, obsequious followers, or—at worst—somehow deserving of their fate is not unique to victims of religious or spiritual cults.
“Whenever we use the words ‘cult’, ‘cultism’ or ‘cultist’ we are referring solely to the phenomenon where troubling levels of undue psychological influence may exist. This phenomenon can occur in almost any group or organization,” reads Dialogue Ireland’s mission statement.
“Victim blaming is something that is now so embedded that we take it for granted. It’s not unique to cultism contexts—it exists in all realms where there’s a victim-perpetrator dynamic,” Garde says, “People don’t want to take responsibility or face what has happened, so it can be easier to ignore or blame the victim, which adds to their trauma.”
While blaming and shaming prevent victims from reporting crimes and seeking help, there do seem to be recent improvements in their treatment, regardless of the type of abuse:
“We do seem to be improving our concept of victims, and we are beginning to recognize the fact that the victims of child sexual abuse need to be recognized, the #MeToo movement recognizes what happened to women,” says Garde, “They are now being seen and heard. There’s an awareness of victimhood and at the same time, there’s also a movement from victimhood to survivorship.”
Paradise Undone: A Novel of Jonestown focuses on how the survivors process and cope with the fallout of their traumatic involvement with or connection to Jonestown, making the very poignant observation that cult involvement does not end when you escape or leave—the residual effects persist for many years afterward.
“It’s an extremely vulnerable period of time,” Garde points out, “If you don’t get out of that state, in that sense of being a victim, that’s a very serious situation. We get stuck in the past or frozen in the present and can’t move from being a victim to having a future as a survivor.”
Support networks and resources are flourishing online to offer advice and comfort to survivors: “I think the whole cult education movement has definitely humanized victims of cults,” Dawid points out, “And there are all these cult survivors who have their own podcasts and cult survivors who are now counseling other cult survivors.”
At the very least, these can help reduce the stigma around abuse or kickstart the recovery process; however, Garde sees a potential issue in the cult survivors counseling cult survivors dynamic: “There can be a danger of those operating such sites thinking that, as former cult members, they have unique insight and don’t recognize the expertise of those who are not former members,” he says, “We have significant cases where ex-cultists themselves become subject to sectarian attitudes and revert back to cult behavior.”
And while society’s treatment and understanding of cult victims may be changing, Garde is frustrated with the overall lack of support the field of cult education receives; all warnings seem to fall on deaf ears, as they once did in the lead-up to Jonestown:
The public’s understanding seems to be changing, but the field of cult studies still doesn’t get the support or understanding it needs from the government or the media. I can’t get through to journalists and government people, or they don’t reply. It’s so just unbelievably frustrating in terms of things not going anywhere.
One fundamental issue remains; some might say that things have gotten worse in the years post-Jonestown: “The attitude there is absolutely like pro-survivor, pro-victim, so that has changed,” Dawid says, “You know, it does seem like there are more cults than ever, however.”
A History of Violence
The International Cultic Studies Association’s (ICSA) Steve Eichel estimates there are around 10,000 cults operating in the U.S. alone. Regardless of the number, in the decades since Jonestown, there has been no shortage of cult-related tragedies resulting in a massive loss of life in the U.S. and abroad.
The trial of Paul Mackenzie, the Kenyan pastor behind the 2023 Shakahola Forest Massacre (also known as the Kenyan starvation cult), is currently underway. Mackenzie has pleaded not guilty to charges of murder, child torture, and terrorism over the deaths of 448 people, as Kenyan pathologists are still working to identify all of the exhumed bodies.
“It’s frustrating and tragic to see events like this still happening internationally, so it might seem like we haven’t progressed in terms of where we’re at,” Garde laments.
Jonestown may be seen as the progenitor of the modern cult tragedy, the incident against which other cult incidents are compared, but for Dawid, the 1999 Colorado school shooting that left 13 people dead and 24 injured would shock American society in the same way, and leave behind a similar legacy.
“I see a kind of similarity in the impact it had,” Dawid says, “Even though there had been other school shootings before Columbine….I think it did a certain kind of explosive number on American consciousness in the same way that Jones did, not just on American consciousness, but world consciousness about the danger of cults.”
Just as everyone understands that Jonestown refers to the 917 dead U.S. citizens in the Guyanese jungle, the word “Columbine” is now a byword for school shootings. However, if you want to use their official, unabbreviated titles, you’ll find both events share the same surname—massacre.
“All cult stories will mention Jonestown, and all school shootings will [mention] Columbine,” Dawid points out.
In Memoriam
The official death toll on November 18, 1978, is 918, but that figure includes the man who couldn’t bring himself to follow his own orders.
According to the evidence, Jim Jones and the nurse Annie Moore were the only two to die of gunshot wounds at Jonestown. The entry wound on Jones’ left temple meant there was a very good chance the shot was not fired by Jones’ own right hand (he was right-handed). It is believed that Jones ordered Moore to shoot him first, confirming for Garde Jones’ cowardice: “We saw his pathetic inability to die as he set off a murder-suicide. He could order others to kill themselves, but he could not take the same poison. He did not even have the guts to shoot himself.”
On the anniversary of Jonestown (also International Cult Awareness Day), people gather at the Jonestown Memorial at Evergreen Cemetery in Oakland, California, but the 2011 unveiling of the memorial revealed something problematic. Nestled among the engraved names of the victims is the name of the man responsible for it all: James Warren Jones.
The inclusion of Jones’ name has outraged many in attendance, and there are online petitions calling for it to be removed. Garde agrees, and just as Dawid retired Jones from his lead role in the Jonestown narrative, he believes Jones’ name should be physically removed from the memorial.
“He should be definitely excluded and there should be a sign saying very clearly he was removed because of the fact that it was totally inappropriate for him to be connected to this,” he says, “It’s like the equivalent of a murderer being added as if he’s a casualty.”
In the years since she first started researching the book, Dawid feels that the focus on Jones has begun to loosen: “There’s been a lot written since then, and I feel like some of the material that’s been published since then has tried to branch out from that viewpoint,” she says.
Modern re-examinations challenge the long-time framing of Jonestown as a mass suicide, with “murder-suicide” providing a better description of what unfolded, and the 2018 documentary Jonestown: The Women Behind the Massacre explores the actions of the female members of Jones’ inner circle.
While it may be difficult to look at Jonestown and see anything positive, with every new examination of the tragedy that avoids making him the central focus, Jones’ power over the Peoples Temple, and the story of Jonestown, seems to wane.
And looking beyond Jones reveals acts of heroism that otherwise go unnoticed: “The woman who escaped and told everybody in the government that this was going to happen. She’s a hero, and nobody listened to her,” Dawid says.
That person is Jonestown defector Deborah Layton, the author of the Jonestown book Seductive Poison, whose 1978 affidavit warned the U.S. government of Jones’ plans for a mass suicide.
And in the throes of the chaos of November 18, a single person courageously stood up and denounced the actions that would define the day.
Dawid’s book is dedicated to the memory of the sixty-year-old Christine Miller, the only person known to have spoken out that day against Jones and his final orders. Her protests can be heard on the 44-minute “Death Tape”—an audio recording of the final moments of Jonestown.
The dedication on the opening page of Paradise Undone: A Novel of Jonestown reads: “For Christine, who refused to submit.”
Perceptions of Jonestown may be changing, but I ask Dawid how the survivors and family members of the victims feel about how Jonestown is represented after all these years.
“It’s a really ugly piece of American history, and it had been presented for so long as the mass suicide of gullible, zombie-like druggies,” Dawid says, “We’re almost at the 50th anniversary, and the derision of all the people who died at Jonestown as well as the focus on Jones as if he were the only important person, [but] I think they’re encouraged by how many people still want to learn about Jonestown.”
“They’re very strong people,” Dawid tells me.
Yes – it is well-documented that in many industries the design of products incorporates a plan for when the product will need to be replaced. A blatant example was in 1924 when an international meeting of lightbulb manufacturers decided to limit the lifespan of lightbulbs to 1,000 hours, so that consumers would have to constantly replace them. This artificial limitation did not end until CFLs and then LED lightbulbs largely replaced incandescent bulbs.
But – it’s more complicated than you might think (it always is). Planned obsolescence is not always about gimping products so they break faster. It often is – products are made so they are difficult to repair or upgrade, and arbitrary fashions change specifically to create demand for new versions. But often there is a rational decision to limit product quality. Some products, like kids’ clothes, have a short use timeline, so consumers prefer cheap to durable. There is also a very good (for the consumer) example of true obsolescence – sometimes the technology simply advances, offering better products. Durability is not the only nor the primary attribute determining the quality of a product, and it makes no sense to build in expensive durability for a product that consumers will want to replace. So there is a complex dynamic among various product features, with durability being only one feature.
We can also ask the question, for any product or class of products: is durability actually decreasing over time? Consumers are now on the alert for planned obsolescence, and this may produce the confirmation bias of seeing it everywhere, even when it’s not true. A recent Norwegian study looking at big-ticket appliances shows how complex this question can be, tracking the lifespan of large appliances over decades, starting in the 1950s.
First, they found that for most large appliances, there was no decrease in lifespan over this time period. So the phenomenon simply did not exist for the items that homeowning consumers care the most about: their expensive appliances. There were two exceptions, however – ovens and washing machines. Each has its own explanation.
For washing machines, the researchers found another plausible explanation for the decrease in lifespan from 19.2 to 10.6 years (a decrease of 45%). The researchers found that over the same time, the average number of loads a household of four did increased from 2 per week in 1960 to 8 per week by 2000. So if you count lifespan not in years but in number of loads, washing machines had become more durable over this time. I suspect that washing habits were formed in the years when many people did not have washing machines, and doing laundry was brutal work. Once the convenience of doing laundry in the modern era settled in (and perhaps also once it became more than women’s work), people did laundry more often. How many times do you wear an article of clothing before you wash it? Lots of variables there, but at some point it’s a judgment call, and this likely also changed culturally over time.
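To make the arithmetic explicit, here is a quick back-of-the-envelope check, assuming (as a simplification) that the reported loads-per-week figures held over each machine’s whole life:

```python
# Back-of-the-envelope: measure washing machine lifespan in loads rather
# than years (assumes the loads-per-week rate holds for the machine's
# whole life -- a simplification for illustration).
WEEKS_PER_YEAR = 52

def lifetime_loads(years: float, loads_per_week: float) -> float:
    return years * WEEKS_PER_YEAR * loads_per_week

old = lifetime_loads(19.2, 2)  # ~2,000 loads for a 1960s-era machine
new = lifetime_loads(10.6, 8)  # ~4,400 loads for a 2000s-era machine
print(f"{old:,.0f} vs {new:,.0f} loads -> {new / old:.1f}x more loads per machine")
```

By that measure the newer machines survive roughly twice as many loads before dying – the opposite of what the planned-obsolescence narrative predicts.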
For ovens there appear to be a few answers. One is that ovens have become more complex over the decades. For many technologies there is a trade-off between simple but durable, and complex but fragile. Again – there is a tradeoff, not a simple decision to gimp a product to exploit consumers. But there are two other factors the researchers found. Over this time the design of homes has also changed. Kitchens are increasingly connected to living spaces with a more open design. In the past kitchens were closed off and hidden away. Now they are where people live and entertain. This means that the fashion of kitchen appliances is more important. People might buy new appliances to make their kitchen look more modern, rather than because the old ones are broken.
If this were true, however, then we would expect the lifespans of all large kitchen appliances to converge. As people renovate their kitchens, they are likely to buy all new appliances that match and have an updated look. This is exactly what the researchers found – the lifespans of large kitchen appliances have tended to converge over the years.
They did not find evidence that the manufacturers of large appliances were deliberately reducing the durability of their products to force consumers to replace them at regular intervals. But this is the narrative that most people have.
There is also a bigger issue of waste and the environment. Even when the tradeoffs for the consumer favor cheaper, more stylish and fashionable, or more complex products with lower durability, is this a good thing for the world? Landfills are overflowing with discarded consumer products. This is a valid point, and should be considered in the calculus when making purchasing decisions and also for regulation. Designing products to be recyclable, repairable, and replaceable is also an important consideration. I generally replace my smartphone when the battery life gets too short, because the battery is not replaceable. (This is another discussion unto itself.)
But replacing old technology with new is not always bad for the environment. Newer dishwashers, for example, are much more energy and water efficient than older ones. Refrigerators are notorious energy hogs, and newer models are substantially more energy efficient than older models. This is another rabbit hole – exactly when do you replace rather than repair an old appliance? – but generally if a newer model is significantly more efficient, replacing may be best for the environment. Refrigerators, for example, probably should be upgraded every 10 years with newer and more efficient models – so then why build them to last 20 or more?
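As a sketch of that replace-or-keep calculus – every number below is a placeholder I made up for illustration, not a measurement:

```python
# Illustrative energy breakeven for replacing an old refrigerator.
# All figures are hypothetical placeholders, not real measurements.
old_use_kwh_per_year = 600.0   # hypothetical annual use of the old unit
new_use_kwh_per_year = 300.0   # hypothetical annual use of a new unit
manufacturing_kwh = 1000.0     # hypothetical embodied energy of the new unit

annual_savings = old_use_kwh_per_year - new_use_kwh_per_year
print(f"breakeven after ~{manufacturing_kwh / annual_savings:.1f} years")  # ~3.3
```

If the efficiency gap is large, the new unit can pay back its manufacturing footprint within a few years; if the gap is small, keeping and repairing the old one wins.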
I like this new research and this story primarily because it’s a good reminder that everything is more complex than you think, and not to fall for simplistic narratives.
Frans de Waal was one of the world’s leading primatologists. He was named one of TIME magazine’s 100 Most Influential People. The author of Are We Smart Enough to Know How Smart Animals Are?, as well as many other works, he was the C.H. Candler Professor in Emory University’s Psychology Department and director of the Living Links Center at the Yerkes National Primate Research Center.
Skeptic: How can we know what another mind is thinking or feeling?
Frans de Waal: My work is on animals that cannot talk, which is both a disadvantage and advantage. It’s a disadvantage because I cannot ask them how they feel and what their experiences are, but it is an advantage because I think humans lie a lot. I don’t trust humans. I’m a biologist but I work in a psychology department, and all my colleagues are psychologists. Most psychologists nowadays use questionnaires, and they trust what people tell them, but I don’t. So, I’d much rather work with animals where instead of asking how often they have sex, I just count how often. That’s more reliable.
That said, I distinguish between emotions and feelings because you cannot know the feelings of any animal. But I can deduce them, guess at them. Personally, I feel it’s very similar with humans. Humans can tell me their feelings, but even if you tell me that you are sad, I don’t know if that’s the same sadness that I would feel under the same circumstances, so I can only guess what you feel. You might even be experiencing mixed feelings, or there may be feelings you’re not even aware of, and so you’re not able to communicate them. We have the same problem in non-human species as we do in humans, because feelings are less accessible and require guesswork.
That said, sometimes I’m perfectly comfortable guessing at the feelings of animals, even though you must distinguish them from the things you can measure. I can measure facial expressions. I can measure blood pressure. I can measure their behavior, but I can never really measure what they feel. But then, psychologists can’t do that with people either.
Skeptic: Suppose I’m feeling sad and I’m crying at some sort of loss. And then I see you’ve experienced a loss and that you’re crying … Isn’t it reasonable to infer that you feel sad?
FdW: Yes. And so that same principle of being reasonable can be applied to other species. And the closer that species is to you, the easier it is. Chimpanzees and bonobos cry and laugh. They have facial expressions— the same sort of expressions we do. So it’s fairly easy to infer the feelings behind those expressions and infer they may be very similar to our own. If you move to, say, an elephant, which is still a mammal, or to a fish, which is not, it becomes successively more difficult. Fish don’t even have facial expressions. That doesn’t mean that fish don’t feel anything. It would be a very biased view to assume that an animal needs to show facial expressions as evidence that it feels something.
At the same time, research on humans has argued that we have six basic emotions based on the observation that we have six basic facial expressions. So, there the tie between emotions and expressions has been made very explicit.
In my work, I tend to focus on the expressive behavior. But behind it, of course, there must be similar feelings. At least that’s what Darwin thought.
Skeptic: That’s not widely known, is it? Darwin published The Expression of the Emotions in Man and Animals in 1872, but it took almost a century before the taboo against it started to lift.
FdW: It’s the only book of Darwin’s that disappeared from view for a century. All the other books were celebrated, but that book was placed under some sort of taboo. Partly because of the influence of the behaviorist school of B.F. Skinner, Richard Herrnstein, and others, it was considered silly to think that animals would have the same sort of emotions as we do.
Biologists, including my own biology professors, however, found a way out. They didn’t need to talk about emotions because they would talk about the function of behavior. For example, they would not say “the animal is afraid” but rather that “the animal escapes from danger.” They phrased everything in functional terms—a semantic trick that researchers still often use.
If you were to say that two animals “love each other” or that “they’re very attached to each other,” you’re likely to receive significant criticism, if not ridicule. So why even describe it that way? Instead, you objectively report that the animals bonded and they benefited from doing so. Phrasing it functionally has, well, functioned as a sort of preferred safe procedure. But I have decided not to employ it anymore.
Skeptic: In most of your books you talk about the social and political context of science. Why do you think the conversation about animal emotions was held back for almost a century?
FdW: World War II had an effect on the study of aggression, which became a very popular topic in the 1960s and 70s. Then we got the era of “the selfish gene” and so on. In fact, the silencing of the study of mental processes and emotions in animals started before the war. It actually started in the 1920s and 30s. And I think it’s because scientists such as Skinner wanted the behavioral sciences to be like the physical sciences. They operated under the belief that it provided a certain protection against criticism to get away from anything that could be seen as speculation. And there was a lot of speculation going on in the so-called “depth psychologies,” some of it rather wild.
However, there are a lot of invisible things in science that we assume to be true, for example, evolutionary theory. Evolution is not necessarily visible, at least most of the time it isn’t, yet still, we believe very strongly that evolution happened. Continental drift is unobservable, but we now accept that it happened. The same principle can be applied to animal feelings and animal consciousness. You assume it as a sort of theory and see if things fit. And, research has demonstrated that things fit quite well.
Skeptic: Taking a different angle, can Artificial Intelligence (AI) experience emotions? Was IBM’s Watson “thrilled” when it beat Ken Jennings, the all-time champion of Jeopardy!? Well, of course not. So what do you think about programming such internal states into an artificial intelligence?
FdW: I think researchers developing AI models are interested in affective programs because of the way we biologists look at emotions. Emotions trigger actions that are adaptive. Fear is an adaptive emotion because it may trigger certain behaviors such as hiding, escaping, etc., so we look at emotions as being the stimulus that elicits certain specific types of behavior. Emotions organize behavior, and I think that’s what the AI people are interested in. Emotions are actually a very smart system, compared to instincts. Someone might argue that instincts also trigger behavior. However, while instincts are inflexible, emotions are different.
Let’s say you are afraid of something. The emotion of fear doesn’t trigger your behavior. An emotion just prepares the body for certain behaviors, but you still need to make a decision. Do I want to escape? Do I want to fight? Do I want to hide? What is the best behavior under these circumstances? And so, your emotion triggers the need for a response, and then your cognition takes over and searches for the best solution. It’s a very, very nice system and creators of AI models are interested in such an organizational system of behavior. I’m not sure they will ever construct the feelings behind the emotions—it’s not an easy thing to do—but certainly organizing behavior according to emotions is possible.
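[As a rough illustration of the organizational scheme de Waal describes – this is our own toy sketch, not any actual AI architecture – an “emotion” narrows the candidate behaviors, and a separate “cognitive” step picks among them given the context:]

```python
# Toy sketch of emotion-as-organizer (illustration only, not a real AI
# system): an emotion prepares a set of candidate behaviors, then a
# "cognitive" step scores them against the current context.

CANDIDATES = {
    # emotion -> behaviors it prepares the body for
    "fear":  ["hide", "flee", "freeze", "fight"],
    "anger": ["threaten", "fight", "chase"],
}

def score(behavior: str, context: dict) -> float:
    """Hypothetical context scoring -- placeholder heuristics."""
    if behavior == "flee" and context.get("escape_route"):
        return 0.9
    if behavior == "hide" and context.get("cover_nearby"):
        return 0.8
    if behavior == "fight" and context.get("opponent_smaller"):
        return 0.7
    return 0.1

def respond(emotion: str, context: dict) -> str:
    # The emotion demands a response; "cognition" picks the best option.
    options = CANDIDATES[emotion]
    return max(options, key=lambda b: score(b, context))

print(respond("fear", {"cover_nearby": True}))   # -> "hide"
print(respond("fear", {"escape_route": True}))   # -> "flee"
```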
Skeptic: Are emotions created from the bottom-up? How do you scale from something very simple up to much higher levels of complexity?
FdW: Humans have a complex emotional system—we mix a lot of emotions, sort them, regulate them. Well, sometimes we don’t actually regulate them, and that is something that really interests me in my work with animals: what kind of regulation do they have over their emotions? People often say that we have emotions and can suppress them, whereas animals have emotions that they must follow. However, experiments have demonstrated that’s not really the case. For example, we give apes the marshmallow test. Briefly, that’s where you put a child in a situation where they can either eat a marshmallow immediately or wait and get a second one later. Kids are willing to wait for 15 minutes. If you do the same experiment with apes, they’re also willing to wait for 15 minutes. So they can control their emotions. And like children, apes seek distractions from the situation because they’re aware that they’re dealing with certain emotions. Therefore, we know that apes have a certain awareness of their emotions and a certain level of control over them. The whole idea that regulation of emotions is specifically human, while animals can only follow them, is wrong.
The emotional farewell between the chimpanzee Mama and her caretaker, Jan van Hooff
That’s actually the reason I wrote Mama’s Last Hug. The starting point of the book was when Prof. Jan van Hooff came on TV and showed a little clip that everyone has seen by now, in which he and a chimpanzee called Mama hug each other. Both he and I were shocked when the clip went viral and generated such a response. Many people cried and wrote to us to say they were deeply moved by what they saw. The truth is Mama was simply showing perfectly normal chimpanzee behavior. It was a very touching moment, obviously, but for those familiar with chimps there was nothing surprising about it. And so I wrote this book partly because I noticed that people did not know how human-like the expressions of the apes are. Embracing, hugging, calming someone down, and having a big smile on your face are all common behaviors in primates and are not unique to humans.
Skeptic: Your famous experiment with capuchin monkeys, where you offer them a grape or a piece of cucumber, is along similar lines. When the monkey got the cucumber instead of the grape, he got really angry. He threw the cucumber back, then proceeded to pound on the table and the walls … He was clearly ticked off at the injustice he felt had been done to him, just as a person would be.
A still from the famous capuchin monkey fairness experiment (Source: Frans de Waal’s TED Talk)
FdW: The funny thing is that primates, including those monkeys, have all the same expressions and behaviors as we do. And so, they shake their cage and throw the cucumber at you. The behavior is just so extremely similar, and the circumstances are so similar … I always say that if related species behave in a similar way under similar circumstances, you have to assume a shared psychology lies behind it. It is just not acceptable in this day and age of Darwinian philosophy, so to speak, to assume anything else. If people want to make the point that it’s maybe not similar, that maybe the monkey was actually very happy while he was throwing the stuff … they’ll have a lot of work to do to convince me of that.
Skeptic: What’s the date of the last common ancestor humans shared with chimps and bonobos?
FdW: It’s about 6 million years ago.
Skeptic: So, these are indeed pretty ancient emotions.
FdW: Oh, they go back much further than that! Like the bonding mechanism based on oxytocin—the neuropeptides in bonding go back to rodents, and probably even back to fish at some point. These neuropeptide circuits involved in attachment and bonding are very ancient. They’re even older than mammals themselves.
Skeptic: One emotion that seems uniquely human is disgust. If a chimp or bonobo comes across a pile of feces or vomit, what do they do?
FdW: When we do experiments where we put interesting food on top of feces to see whether chimps are willing to take it, they don’t. They refuse. The facial expression of the chimps is the same one we have for disgust, with the wrinkly nose and all that. Chimps also show it, for example, when it rains; they don’t like rain. And they sometimes show it when they encounter a rat. So, some of these emotions have been proposed as uniquely human, but I disagree. Disgust, I think, is a very old emotion.
If related species behave in a similar way under similar circumstances, you have to assume a shared psychology lies behind it.
Disgust is an interesting case because we know that both in chimps and humans a specific part of the brain called the insula is involved. If you stimulate the insula in a monkey who’s chewing on good fruit, he’ll spit it out. If you put humans in a brain scanner and show them piles of feces or things they don’t want to see, the insula is likewise activated. So here we have an emotion that is triggered under the same circumstances, that is shown in the face in the same way, and that is associated with the same specific area of the brain. So we have to assume it’s the same emotion across the board. That’s why I disagree with those scientists who have declared disgust uniquely human.
Skeptic: In one of your lectures, you show photos of a horse wrinkling up its nose and baring its teeth. Is that a smile or something else?
FdW: The baring of the teeth is very complex. In many primates it is a fearful signal, shown when they’re afraid, intimidated by a dominant, or displaying submission. We think it then became a signal of appeasement and non-hostility, basically saying, “I’m not hostile. Don’t expect any trouble from me.” Over time, especially in apes and then in humans, it became more and more of a friendly signal. So it’s not necessarily a fear signal, although we still say that if someone smiles too much, they’re probably nervous.
Skeptic: Is it true that you can determine whether someone’s giving you a fake smile or a real smile depending on whether the corners of their eyes are pulled down?
FdW: Yes, this is called the Duchenne smile. Duchenne was a 19th-century French neurologist. He studied patients who had lost sensation in the face: the muscles were intact, but they could not feel anything. This allowed him to place electrodes on their faces and stimulate them. He methodically contracted different muscles and found he could produce a smile in his subjects. Yet he was never quite happy with that smile; it just didn’t look real. Then one day he told a subject a joke, a very good joke, I suppose, and all of a sudden he got a real, full-blown smile. That’s when Duchenne concluded that there needs to be a contraction and a narrowing around the eyes for a smile to be a real smile. So we now distinguish between the fake smile and the Duchenne smile.
Skeptic: So, smiling involves a whole complex suite of muscles. Do humans have more muscles in the face than other species?
FdW: Do we have far more muscles in the face than a chimpanzee? I heard that all my life, until researchers who analyzed chimpanzee faces found exactly the same number of muscles there as in a human face. So that whole story doesn’t hold up. I think the confusion originated because when we look at the human face we can interpret so many little details of it, and I think chimps do that with each other too, but when we look at a chimp we only see the bold, more flamboyant expressions.
Skeptic: Have we evolved in the way we treat other animals?
FdW: The Planet of the Apes movies provide a good example of that. I’m so happy that Hollywood has found a way of featuring apes in movies without involving real animals. There was a time when Hollywood had trainers who described what they did as affective training. Not effective, but affective. They used cattle prods and that sort of thing. People used to think that seeing apes dressed up or producing silly grins was hilarious. No longer. We’ve come a long way from that.
Skeptic: The Planet of the Apes films show apes that are quite violent, maybe even brutal. You actually studied the darker side of emotion in apes. Can you describe it?
FdW: Most of the books on emotions in animals dwell on the positive: they show how animals love each other, how they hug each other, how they help each other, how they grieve … and I do think that’s all very impressive. However, the emotional life of animals—just like that of humans—includes a lot of nasty emotions.
We do not treat animals very well, certainly not in the agricultural industry.
In my years observing chimpanzee politics I have witnessed those very dark emotions. Chimpanzees can kill each other. One of the killings I witnessed was in captivity, so when it happened I thought it might be a product of captivity. Some colleagues said to me, “What do you expect if you lock them up?” But we now know that wild chimpanzees do exactly the same thing. Sometimes, if a male leader loses his position or other chimps are not happy with him, they will brutally kill him. At the same time, chimpanzees can also be good friends, help each other, and defend their territory together, just like people who on occasion hate or even kill each other but otherwise coexist peacefully.
The more important point is that we do not treat animals very well, certainly not in the agricultural industry. And we need to do something about that.
Skeptic: Are you a vegetarian or vegan?
FdW: No. Well, I do try to avoid eating meat. For me, however, the issue is not so much the eating, it’s the treatment of animals. As a biologist, I see the cycle of life as a natural thing. But it bothers me how we treat animals.
Skeptic: What’s next for you?
FdW: I’m going to retire! In fact, I’ve already stopped my research. I’m going to travel with my wife, and write.
Dr. Frans de Waal passed away on March 14, 2024, aged 75. In Loving Memory.
It is generally accepted that the transition from hunter-gatherer communities to agriculture was the single most important event in human history, ultimately giving rise to all of civilization. The transition started to take place around 12,000 years ago in the Middle East, China, and Mesoamerica, leading to the domestication of plants and animals, a stable food supply, permanent settlements, and the ability to support people not engaged full time in food production. But why, exactly, did this transition occur when and where it did?
Existing theories focus on external factors. At the end of the last glacial period, a changing climate produced fertile, well-watered land even as food sources for hunting and gathering became scarce. The same climate favored the thriving of cereals, providing plenty of raw material for domestication. There was therefore both the opportunity and the drive to find another reliable food source. There also, however, needs to be the means: humanity at that time had the requisite technology to begin farming, and agricultural technology advanced steadily.
A new study looks at another aspect of the rise of agriculture: demographic interactions. How were these new agricultural communities interacting with hunter-gatherer communities, and with each other? The study is mainly about developing and testing an inferential model to examine these questions. Here is a quick summary from the paper:
“We illustrate the opportunities offered by this approach by investigating three archaeological case studies on the diffusion of farming, shedding light on the role played by population growth rates, cultural assimilation, and competition in shaping the demographic trajectories during the transition to agriculture.”
In part, the transition to agriculture occurred through the increased population growth of agricultural communities and the cultural assimilation of hunter-gatherer groups competing for the same physical space. Mostly, though, the authors were validating the model by checking whether it matched empirical data in test cases, which apparently it does.
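To give a feel for the kind of demographic dynamics such a model tracks, here is a minimal toy sketch: farmers grow logistically, assimilate hunter-gatherers on contact, and compete with them for the same land. The equations and every rate below are invented for illustration; this is not the paper’s actual inferential model.

```python
# Toy two-population sketch of the farming transition. All rates are
# invented for illustration; this is not the model from the paper.

GROWTH_F, GROWTH_H = 0.03, 0.01   # intrinsic annual growth rates
ASSIMILATION = 0.005              # hunter-gatherers converted per contact
COMPETITION = 0.004               # extra pressure on foragers from farmers
CAPACITY = 1000.0                 # shared carrying capacity of the region

def step(farmers, foragers, dt=1.0):
    """Advance one year: logistic growth plus contact-driven
    assimilation and competition."""
    crowding = 1 - (farmers + foragers) / CAPACITY
    contact = farmers * foragers / CAPACITY
    d_farm = GROWTH_F * farmers * crowding + ASSIMILATION * contact
    d_forage = (GROWTH_H * foragers * crowding
                - (ASSIMILATION + COMPETITION) * contact)
    return farmers + d_farm * dt, foragers + d_forage * dt

farmers, foragers = 10.0, 500.0   # a few farmers among many foragers
for year in range(1001):
    if year % 250 == 0:
        print(f"year {year:4d}: farmers {farmers:6.1f}, foragers {foragers:6.1f}")
    farmers, foragers = step(farmers, foragers)
```

Even a crude sketch like this reproduces the qualitative pattern at issue: the farming population expands while the forager population shrinks through a mix of growth, assimilation, and competition, not through any single factor alone.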
I don’t think there is anything revolutionary about the findings. I read many years ago that cultural exchange and assimilation were critical to the development of agriculture. I think the new bit here is a statistical approach to demographic changes. So basically the shift was even more complex than we thought, and we have to remember to consider internal as well as external factors.
It does remain a fascinating part of human history, and it seems there is still a lot to learn about something that unfolded over such a long stretch of time and space. There are bound to be many moving parts. I have always found it interesting to imagine the very early attempts at agriculture, before we had developed a catalogue of domesticated plants and animals. Most of the food we eat today has been cultivated beyond recognition from its wild counterparts. We took many plants that were barely edible and turned them into crops.
In addition, we had to learn how to combine different foods into a nutritionally adequate diet without any basic knowledge of nutrition or biochemistry. In fact, for thousands of years the shift to agriculture led to a worse diet and negative health outcomes, due to a significant reduction in dietary diversity. Each culture (at least the ones that survived) had to figure out a combination of staple crops that would provide adequate nutrition. For example, many cultures have staple dishes that pair a starch with a legume, like lentils and rice, or corn and beans. Little by little we plugged the nutritional holes, like adding carrots for vitamin A (even before we knew what vitamin A was).
Food preparation and storage technology also advanced. When you think about it, we have a few months to grow enough food to survive an entire year; we have to store the food and save enough seed to plant the next season. In many parts of the developed world we take for granted that we can ship food around the globe and store it under refrigeration or in sterile containers. Imagine living 5,000 years ago without any modern technology: one bad crop could mean mass starvation.
This made cultural exchange and trade critical. The more different communities could share knowledge, the better everyone could deal with the challenges of subsistence farming. Trade also allowed communities to spread out their risk: you could survive a bad year if a neighbor had a bumper crop, knowing the roles would eventually reverse. The ancient world had a far greater trading system than we previously knew or most people imagine. The Bronze Age, for example, required bringing together tin and copper from distant mines around Eurasia. There was still a lot of fragility in this system (which is why the Bronze Age collapsed, and why other civilizations often collapsed), but in the aggregate civilization obviously survived and thrived.
Agricultural technology was so successful that it now supports a human population of over 8 billion people, and our population will likely peak at about 10 billion.
The post The Transition to Agriculture first appeared on NeuroLogica Blog.
All of the ways you've heard that deep space wants to kill us — and how plausible or likely each scenario is.
This is an interesting concept with an interesting history, and I have heard it quoted many times recently: “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore whether this is true, and then ask what we can do to get better representatives.
The quote itself originated with Joseph de Maistre, who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary: he believed divine monarchy was the best way to instill order and felt that philosophy, reason, and the Enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson made a similar statement: “The government you elect is the government you deserve.”
Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true but can also be misused. What is true is that in a democracy each citizen has a civic responsibility to cast an informed vote. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that), then we bear some responsibility for bad government. In the US we still have fair elections; the evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.
This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples: gerrymandering is a way for political parties to choose their voters rather than voters choosing their representatives; the Electoral College means that, for president, some votes have more power than others; and primary elections tend to produce more radical options. Further, the power of voters depends on access to accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and to hold government accountable.
So while there is some truth to the notion that we elect the government we deserve, the notion can be “weaponized” to distract from legitimate systemic issues or to shift blame from individual bad behavior among politicians. We still need to examine and improve the system itself. Actual experts could write books about this topic, but again, to list a few of the more obvious fixes: I do think we should, at a federal level, ban gerrymandering. It is fundamentally anti-democratic; in general, those directly affected by the rules should not be able to set those rules and rig them in their own favor. We all need to agree ahead of time on rules that are fair for everyone. I also think we should get rid of the Electoral College. Elections are determined in a handful of swing states, and voters in small states have disproportionate power (which they already have with two senators). Ranked-choice voting would also be an improvement and would lead to outcomes that better reflect the will of the voters (a bare-bones example of how such a count works appears below). We need Supreme Court reform and better ethics rules and enforcement, and don’t get me started on mass and social media.
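For readers unfamiliar with how ranked-choice voting actually counts ballots, here is a bare-bones sketch of the most common variant, instant runoff. The ballots are invented, and real election rules add tie-breaking and other details this omits.

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal instant-runoff count: eliminate the last-place candidate
    each round and transfer those ballots to each voter's next surviving
    choice, until someone holds a majority. (No tie-breaking rules.)"""
    alive = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in alive)
            for ballot in ballots
            if any(c in alive for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        alive.discard(min(tally, key=tally.get))

# Invented ballots: A leads on first choices, but C's supporters
# prefer B, so B wins once C is eliminated.
ballots = [("A",), ("A",), ("A",), ("A",),
           ("B",), ("B",), ("B", "A"),
           ("C", "B"), ("C", "B")]
print(instant_runoff(ballots))  # prints: B
```

The example shows the appeal: the plurality leader does not automatically win; the winner is the candidate a majority can actually live with once lower-ranked preferences are counted.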
This is all a bit of a catch-22: how do we get systemic change from within a broken system? Most representatives from both parties benefit from gerrymandering, for example. I think it would take a massive popular movement, but those require good leadership too, and the topic is a bit wonky for bumper stickers. Still, I would love to see greater public awareness of this issue and support for reform. Meanwhile, we can be more thoughtful about how we use the vote we have. Voting is the ultimate feedback loop in a democracy, and the outcomes it produces depend on the signals we send through it. Voters reward and punish politicians, and politicians, to some extent, do listen to voters.
The rest is just a shoot-from-the-hip thought experiment about how we might more thoughtfully evaluate our politicians. Thinking is generally better than feeling, going with a vague vibe, or relying on blind hope. So here are my thoughts about what a voter should weigh when deciding whom to vote for. This can also make for some interesting discussion. I like to break things down, so here are some categories of features to consider.
Overall competence: This has to do with the basic ability of the politician. Are they smart and curious enough to understand complex issues? Are they politically savvy enough to get things done? Are they diligent and generally successful?
Experience: This is related to competence, but I think is distinct. You can have a smart and savvy politician without any experience in office. While obviously we need to give fresh blood a chance, experience also does count. Ideally politicians will gain experience in lower office before seeking higher office. It also shows respect for the office and the complexity of the job.
Morality: This has to do with the overall personality and moral fiber of the person. Do they have the temperament of a good leader and a good civil servant? Will they put the needs of the country first? Are they liars and cheaters? Do they have a basic respect for the truth?
Ideology: What is the politician’s governing philosophy? Are they liberal, conservative, progressive, or libertarian? What are their proposals on specific issues? Are they ideologically flexible, willing and able to make pragmatic compromises, or are they an uncompromising radical?
There is more, but I think most features fit into one of those four categories. I feel as if most voters, most of the time, rely too heavily on the fourth feature, ideology, and use political party as a marker for it. In fact, many voters just vote for their team, leaving a relatively small percentage of “swing voters” to decide elections (in those regions where one party does not have a lock). This is unfortunate, because it short-circuits the voter feedback loop. It also means that many elections are determined during the primary, which tends to produce more radical candidates, especially in winner-take-all elections.
It seems to me, having closely followed politics for decades, that in the past voters would primarily consider ideology, but the other features had a floor: if a politician demonstrated a critical lack of competence, experience, or morality, that would be disqualifying. What seems to be the case now (not entirely, but clearly more so) is that the electorate is more “polarized,” which functionally means people vote for the team (not even really ideology as much), and there is no apparent floor on the other features. This is a very bad thing for American politics. If politicians do not pay a political price for moral turpitude, stupidity, or recklessness, they will adjust their behavior accordingly. If voters reward team players above all else, then that is what we will get.
We need to demand more from the system, and we need to push for reform to make the system work better. But we also have to take responsibility for how we vote and to more fully realize what our voting patterns will produce. The system is not absolved of responsibility, but neither are the voters.
The post The Politicians We Deserve first appeared on NeuroLogica Blog.
A team led by Corrado Malanga from the University of Pisa and Filippo Biondi from the University of Strathclyde recently claimed to have found huge structures beneath the Pyramids of Giza using Synthetic Aperture Radar (SAR) technology.
These structures are said to be up to 10 times larger than the pyramids; if real, they would rewrite our understanding of ancient Egyptian history.
However, many archaeologists and Egyptologists, including prominent figures, have expressed doubt, highlighting the lack of peer-reviewed evidence and the technical challenges of such deep imaging.
Photo by Michael Starkie / Unsplash
Dr. Zahi Hawass, a renowned Egyptologist and former Egyptian Minister of Antiquities, has publicly rejected the findings, calling them “completely wrong” and “baseless” and arguing that the techniques used are not scientifically validated. Other experts, like Professor Lawrence Conyers, have questioned whether SAR can penetrate the dense limestone to the depths claimed, noting that decades of prior studies using other methods found no such evidence.
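The penetration question can be sanity-checked with a textbook back-of-the-envelope estimate. For a low-loss dielectric, the depth at which a radar signal’s amplitude decays by a factor of 1/e is approximately

\[ \delta_p \;\approx\; \frac{c}{\pi f \sqrt{\varepsilon_r}\,\tan\delta}, \]

where \(c\) is the speed of light, \(f\) the radar frequency, \(\varepsilon_r\) the relative permittivity of the rock, and \(\tan\delta\) its loss tangent. Plugging in illustrative values for dry limestone, say \(\varepsilon_r \approx 6\) and \(\tan\delta \approx 0.03\) (figures that vary with moisture and porosity), at an L-band frequency of roughly 1.3 GHz gives \(\delta_p\) on the order of one meter. Even under generous assumptions, that is orders of magnitude short of the depths the claimed structures would require, which is the substance of the experts’ doubt.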
The claims have reignited interest in fringe theories, such as the idea that the pyramids were ancient power grids or energy hubs, with comparisons to Nikola Tesla’s wireless energy transmission ideas. Mythological correlations have also been drawn, to the Halls of Amenti and to references in the Book of the Dead.
The research has not been published in a peer-reviewed scientific journal, which is a critical step for validation. The findings were announced via a press release on March 15, 2025, and discussed in a press conference.
What to make of it all?
For a deep dive into this fascinating claim, Skeptic magazine Editor-in-Chief Michael Shermer appeared on Piers Morgan Uncensored, alongside Jay Anderson from Project Unity, archaeologist and YouTuber Dr. Flint Dibble, Jimmy Corsetti from the Bright Insight Podcast, Dan Richards from DeDunking the Past, and archaeologist and YouTuber Milo Rossi (AKA Miniminuteman).
Watch the discussion here: