Quantifying Privilege: What Research on Social Mobility Tells Us About Fairness in America

Skeptic.com feed - Wed, 12/27/2023 - 12:00am

Is it more of a disadvantage to be born poor or Black? Is it worse to be brought up by rich parents in a poor neighborhood, or by poor parents in a rich neighborhood? The answers to these questions lie at the very core of what constitutes a fair society. So how do we know if it is better to have wealthy parents or to grow up in a wealthy neighborhood when “good” things often go together (i.e., kids with rich parents grow up in rich neighborhoods)? When poverty, being Black, and living in a neighborhood with poor schools all predict worse outcomes, how can we disentangle them? Statisticians call this problem multicollinearity, and a number of straightforward methods using some of the largest databases on social mobility ever assembled provide surprisingly clear answers to these questions—the biggest obstacle children face in America is having the bad luck of being born into a poor family.

The immense impact of parental income on the future earnings of children has been established by a tremendous body of research. Raj Chetty and colleagues, in one of the largest studies of social mobility ever conducted,1 linked census data to federal tax returns to show that your parents’ income when you were a child is by far the best predictor of your own income as an adult. The authors write, “On average, a 10 percentile increase in parent income is associated with a 3.4 percentile increase in a child’s income.” This is a huge effect; a child whose parents sit at the top of the income distribution can expect to rank roughly 34 percentiles higher than a child whose parents sit at the bottom. The effect holds across all races, and Black children born in the top income quintile are more than twice as likely to remain there as White children born in the bottom quintile are to rise to the top. In short, for children of any race, the chances of occupying the top rungs of the economic ladder are lowest for those who grow up poor and highest for those who grow up rich. These earnings differences have a broad impact on wellbeing and are strongly correlated with both health and life expectancy.2 The wealthiest men live 15 years longer than the poorest men, and the wealthiest women are expected to live 10 years longer than the poorest women—five times the effect of cancer!
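
To make the arithmetic concrete, here is a minimal sketch (a hypothetical illustration, not Chetty's estimation code) of what a rank-rank slope of 0.34 implies. Because percentile ranks average 50 in both generations, the implied intercept is 50 * (1 - 0.34) = 33, so the predicted gap between children of top-of-distribution and bottom-of-distribution parents is about 34 percentiles of rank, not 34 percent of earnings:

```python
# Hedged illustration of Chetty et al.'s rank-rank slope of 0.34.
# Percentile ranks average 50 in every generation, so a linear
# rank-rank model with slope 0.34 must have intercept 50 * (1 - 0.34).

SLOPE = 0.34
INTERCEPT = 50 * (1 - SLOPE)  # = 33.0

def expected_child_rank(parent_rank: float) -> float:
    """Predicted child income percentile given the parent income percentile."""
    return INTERCEPT + SLOPE * parent_rank

# A child of median-rank parents is predicted to land near the median ...
mid = expected_child_rank(50)
# ... while top-of-distribution parents confer about a 34-percentile
# head start over bottom-of-distribution parents.
gap = expected_child_rank(100) - expected_child_rank(0)
print(round(mid, 1), round(gap, 1))
```

The slope, not the intercept, is the mobility statistic: a perfectly mobile society would have a slope of 0, a perfectly rigid caste system a slope of 1.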

Why is having wealthy parents so important? David Grusky at Stanford, in a paper on the commodification of opportunity, writes:

Although parents cannot directly buy a middle-class outcome for their children, they can buy opportunity indirectly through advantaged access to the schools, neighborhoods, and information that create merit and raise the probability of a middle-class outcome.3

In other words, opportunity is for sale to those who can afford it. This simple point is so obvious that it is surprising that so many people seem to miss it. Indeed, it is increasingly common for respected news outlets to cite statistics about racial differences without bothering to control for class. This is like conducting a study showing that taller children score higher on math tests without controlling for age. Just as age is the best predictor of a child’s mathematical ability, a child’s parents’ income is the best predictor of their future adult income.

Although there is no substitute for being born rich, outcomes for children from families with the same income differ in predictable and sometimes surprising ways. After controlling for household income, the largest racial earnings gap is between Asians and Whites: Whites who grew up poor earn approximately 11 percent less than their Asian peers at age 40, poor Hispanics earn about two percent less than poor Whites, and poor Blacks earn roughly another 11 percent less still. Some of these differences, however, result from how we measure income. Using “household income,” in particular, conceals crucial differences between homes with one or two parents, and this alone explains much of the residual difference between racial groups. Indeed, marriage rates across races uncannily recapitulate these same earnings gaps: Asian children have a 65 percent chance of growing up in a household with two parents, followed by 54 percent for Whites, 41 percent for Hispanics, and 17 percent for Blacks.4 The Black-White income gap shrinks from 13 percent to 5 percent5 after we control for income differences between single- and two-parent households.

Just as focusing on household income obscures differences in marriage rates between races, focusing on all children conceals important sex differences: boys who grow up poor are far more likely to remain that way than their sisters.6 This is especially true for Black boys, who earn 9.7 percent less than their White peers, while Black women actually earn about one percent more than White women born into families with the same income. Chetty writes:

Conditional on parent income, the black-white income gap is driven entirely by large differences in wages and employment rates between black and white men; there are no such differences between black and white women.7

So, what drives these differences? If it is racism, as many contend, it is a peculiar type. It seems to benefit Asians, hurts Black men, and has no detectable effect on Black women. A closer examination of the data reveals their source. Almost all of the remaining differences between Black men and men of other races lie in neighborhoods. These disadvantages could be caused either by what is called an “individual-level race effect” whereby Black children do worse no matter where they grow up, or by a “place-level race effect” whereby children of all races do worse in areas with large Black populations. Results show unequivocal support for a place-level effect. Chetty writes:

The main lesson of this analysis is that both blacks and whites living in areas with large African-American populations have lower rates of upward income mobility.8

Multiple studies have confirmed this basic finding, revealing that children who grow up in families with similar incomes and comparable neighborhoods have the same chances of success. In other words, poor White kids and poor Black kids who grow up in the same neighborhood in Los Angeles are equally likely to become poor adults. Disentangling the effects of income, race, family structure, and neighborhood on social mobility is a classic case of multicollinearity (i.e., correlated predictors), with race effectively masking the real cause of reduced social mobility—parents’ income. The residual effects are explained by family structure and neighborhood. Black men have the worst outcomes because they grow up in the poorest families and the worst neighborhoods, with the highest prevalence of single mothers. Asians, meanwhile, have the best outcomes because they have the richest parents, the lowest rates of divorce, and grow up in the best neighborhoods.

The impact that family structure has on the likelihood of success first came to national attention in 1965, when the Moynihan Report9 concluded that the breakdown of the nuclear family was the primary cause of racial differences in achievement. Daniel Patrick Moynihan, an American sociologist serving as Assistant Secretary of Labor (who later served as Senator from New York) argued that high out-of-wedlock birth rates and the large number of Black children raised by single mothers created a matriarchal society that undermined the role of Black men. In 1965, he wrote:

In a word, a national effort towards the problems of Negro Americans must be directed towards the question of family structure. The object should be to strengthen the Negro family so as to enable it to raise and support its members as do other families.10

A closer look at these data, however, reveals that the disadvantage does not come from being raised by a single mom but rather from growing up in neighborhoods without many active fathers. In other words, it is not really about whether your own parents are married; children who grow up in two-parent households in these neighborhoods have similarly low rates of social mobility. Rather, what matters is growing up in a neighborhood with a lot of single parents. Chetty, in a nearly perfect replication of Moynihan’s findings, writes:

black father presence at the neighborhood level strongly predicts black boys’ outcomes irrespective of whether their own father is present or not, suggesting that what matters is not parental marital status itself but rather community-level factors.11

Although viewing the diminished authority of men as a primary cause of social dysfunction might seem antiquated today, evidence supporting Moynihan’s thesis continues to mount. The controversial report, which was derided by many at the time as paternalistic and racist, has been vindicated12 in large part because the breakdown of the family13 is being seen among poor White families in rural communities today14 with similar results. Family structure, like race, often conceals underlying class differences too. Across all races, the chances of living with both parents fall from 85 percent if you are born in an upper-middle-class family to 30 percent if you are in the lower-middle class.15 The take-home message from these studies is that fathers are a social resource and that boys are particularly sensitive to their absence.16 Although growing up rich seems to immunize children against many of these effects, when poverty is combined with absent fathers, the negative impacts are compounded.17

The fact that these outcomes are driven by family structure and the characteristics of communities that impact all races similarly poses a serious challenge to the bias narrative18—the belief that anti-Black bias or structural racism underlies all racial differences19 in outcomes—and suggests that the underlying reasons behind the racial gaps lie further up the causal chain. Why then do we so frequently use race as a proxy for the underlying causes when we can simply use the causes themselves? Consider by analogy the fact that Whites commit suicide at three times the rate of Blacks and Hispanics.20 Does this mean that being White is a risk factor for suicide? Indeed, the link between the income of parents and their children may seem so obvious that it can hardly seem worth mentioning. What would it even mean to study social mobility without controlling for parental income? It is the elephant in the room that needs to be removed before we can move on to analyze more subtle advantages. It is obvious, yet elusive; hidden in plain sight.

If these results are so clear, why is there so much confusion around this issue? In a disconcertingly ignorant tweet, New York Times writer Nikole Hannah-Jones, citing the Chetty study, wrote:

Please don’t ever come in my timeline again bringing up Appalachia when I am discussing the particular perils and injustice that black children face. And please don’t ever come with that tired “It’s class, not race” mess again.21

Is this a deliberate attempt to serve a particular ideology or just statistical illiteracy?22 And why are those who define themselves as “progressive” often the quickest to disregard the effects of class? University of Pennsylvania political science professor Adolph Reed put what he called “the sensibilities of the ruling class” this way:

the model is that the society could be one in which one percent of the population controls 95 percent of the resources, and it would be just, so long as 12 percent of the one percent were black and 14 percent were Hispanic, or half women.23

Perhaps this view, and the conviction shared by many elites that economic redistribution is a non-starter, accounts for the laser focus on racism while material conditions are ignored. Racial discrimination can be fixed by simply piling on more sensitivity training or enforcing racial quotas. Class inequities, meanwhile, require real sacrifices by the wealthy, such as more progressive tax codes, wider distribution of the property taxes used to fund public schools, or the elimination of legacy admissions at elite private schools.24 The fact that corporations and an educated upper class of professionals,25 which Thomas Piketty has called “the Brahmin left,”26 have enthusiastically embraced this type of race-based identity politics is another tell. Now, America’s rising inequality,27 where the top 0.1 percent have the same wealth as the bottom 90 percent, can be fixed under the guidance of Diversity, Equity and Inclusion (DEI) policies and enforced by Human Resources departments. These solutions pose no threat to corporations or to the comfortable lives of the elites who run them. We are obsessed with race because being honest about class would be too painful.

There are, however, also a number of aspects of human psychology that make the powerful impact of the class into which we are born difficult to see. First, our preference for binary thinking,28 which is less cognitively demanding, makes it easier to conjure up easily divisible, discrete, and visible racial categories (e.g., Black, White, Asian) than the continuous and often less visible metric of income. We run into problems when we think about continuous variables such as income, which are hard to categorize and can change across our lifetimes. For example, what is the cutoff between rich and poor? Is $29,000 a year poor but $30,000 middle class? This may also help to explain why we are so reluctant to discuss other highly heritable traits that impact our likelihood of success, like attractiveness and intelligence. Indeed, a classic longitudinal study by Blau and Duncan in 1967,29 which followed children across the course of their development, suggests that IQ might be an even better predictor of adult income than parental income. More recently, Daniel Belsky found that an individual’s education-linked genetics consistently predicted a change in their social mobility, even after accounting for social origins.30 Any discussion of IQ or innate differences in cognitive abilities has become much more controversial, however, and research into possible cognitive differences between populations is practically taboo today. This broad denial of the role of genetic factors in social mobility is puzzling, as it perpetuates the myth that those who have succeeded have done so primarily through their own hard work and effort, and not because they happened to be beneficiaries of both environmental and genetic luck. We have no more control over our genetic inheritance than we do over the income of our parents, their marital status, or the neighborhoods in which we spend our childhoods.
Nevertheless, if cognitive differences or attractiveness were reducible to clear and discrete categories (e.g., “dumb” vs. “smart” or “ugly” vs. “attractive”), we might be more likely to notice them and recognize their profound effects. Economic status is also harder to discern simply because it is not stamped on our skin, while race tends to be seen as an immutable category that is fixed at birth. Race is therefore less likely to be seen as the fault of the hapless victim. Wealth, however, which is viewed as changeable, is more easily attributed to some fault of the individual, who therefore bears some of the responsibility for being (or even growing up) poor.

We may also fail to recognize the effects of social class because of the availability bias31 whereby our ability to recall information depends on our familiarity with it. Although racial segregation has been falling32 since the 1970s, economic segregation has been rising.33 Although Americans are interacting more with people from different races, they are increasingly living in socioeconomic bubbles. This can make things such as poverty and evictions less visible to middle-class professionals who don’t live in these neighborhoods and make problems with which they may have more experience, such as “problematic” speech, seem more pressing.

Still, even when these studies are published, and the results find their way into the media, they are often misinterpreted. This is because race can mask the root causes of more impactful disadvantages, such as poverty, and understanding their inter-relations requires a basic understanding of statistics, including the ability to grasp concepts such as multicollinearity.
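
The masking effect at the heart of this argument can be demonstrated with a toy simulation. This is a hypothetical sketch using simulated data, not the actual census or tax records: the outcome below depends only on parental income, yet a group label correlated with income appears to carry a large penalty until income is added to the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical setup: the child's outcome depends ONLY on parental income,
# but a 0/1 group label is correlated with parental income.
group = rng.binomial(1, 0.5, n).astype(float)
parent_income = rng.normal(size=n) - 0.8 * group    # group 1 is poorer on average
child_income = 0.34 * parent_income + rng.normal(size=n)

def ols(y, *predictors):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Regressing on the group label alone shows a sizable "group gap" ...
gap_naive = ols(child_income, group)[1]

# ... which collapses toward zero once parental income is controlled for,
# because income, not the label, is the real cause in this simulation.
gap_controlled = ols(child_income, group, parent_income)[1]

print(f"naive gap: {gap_naive:+.2f}, controlled gap: {gap_controlled:+.2f}")
```

With the label alone, least squares attributes the income effect to the label; adding the true cause as a second predictor collapses the spurious gap, which is exactly why controlling for class matters before interpreting racial gaps.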

Of course, none of this is to say that historical processes have not played a crucial role in producing the large racial gaps we see today. These causes, however, all too easily become a distraction that provides little useful information about how to solve these problems. Perhaps reparations for some people, or certain groups, are in order, but for most people it simply doesn’t matter whether your grandparents were impoverished tenant farmers or aristocrats who squandered it all before you were born. Although we are each born with our own struggles and advantages, the conditions into which we are born, not those of our ancestors, are what matter, and any historical injustices that continue to harm those currently alive will almost always materialize as economic disparities. An obsession with historical oppression that fails to improve conditions on the ground is a luxury34 we cannot afford. While talking about tax policy may be less emotionally satisfying than talking about the enduring legacy of slavery, redistributing wealth in some manner to the poor is critical to solving these problems. These are hard problems, and solutions will require acknowledging their complexity. We will need to move away from a culture that locks people into an unalterable hierarchy of suffering, pitting the groups we were born into against one another, and towards a healthier identity politics that emphasizes economic interests and our common humanity.

Most disturbing, perhaps, is the fact that the institutions most likely to promote the bias narrative and preach about structural racism are those best positioned to help poor children. Attending a four-year college is unrivaled in its ability to level the playing field for the most disadvantaged kids of any race and is the most effective path out of poverty,35 nearly eliminating any other disadvantage that children experience. Indeed, the poorest students who are lucky enough to attend elite four-year colleges end up earning only 5 percent less than their richest classmates.36 Unfortunately, while schools such as Harvard University tout their anti-racist admissions policies,37 admitting Black students in exact proportion to their representation in the U.S. population (14 percent), Ivy League universities are 75 times more likely38 to admit children born in the top 0.1 percent of the income distribution than children born in the bottom 20 percent. If Harvard were as concerned with economic diversity as with racial diversity, it would accept five times as many students from poor families as it currently does. Tragically, the path most certain to help poor kids climb out of poverty is closed to those most likely to benefit.

This article appeared in Skeptic magazine 28.3

Decades of social mobility research have come to the same conclusion: the income of your parents is by far the best predictor of your own income as an adult. By using some of the largest datasets ever assembled and isolating the effects of different environments on social mobility, research reveals again and again how race effectively masks parental income, neighborhood, and family structure. These studies describe the material conditions of tens of millions of Americans. We are all accidents of birth, imprisoned by circumstances over which we had no control, born into an economic caste system in which privilege is imposed on us by the class into which we are helplessly born. The message from this research is that race is not a determinant of economic mobility on an individual level.39 Even though a number of factors other than parental income also affect social mobility, they operate at the level of the community.40 And although upward mobility is lower for individuals raised in areas with large Black populations, this affects everyone who grows up in those areas, including Whites and Asians. Growing up in an area with a high proportion of single parents also significantly reduces rates of upward mobility, but once again this effect operates at the level of the community, and children of single parents do just as well as long as they live in communities with a high percentage of married couples.

One thing these data do reveal—again, and again, and again—however, is that privilege is real. It’s just based on class, not race.

About the Author

Robert Lynch is an evolutionary anthropologist at Penn State who specializes in how biology, the environment, and culture transact to shape life outcomes. His scientific research includes the effect of religious beliefs on social mobility, sex differences in social relationships, the impact of immigration on social capital, how social isolation can promote populism, and the evolutionary function of laughter.

References
  1. https://rb.gy/n0b2s
  2. https://rb.gy/hyrbb
  3. https://rb.gy/e72y9
  4. https://rb.gy/borp3
  5. https://rb.gy/hhbv7
  6. https://rb.gy/4y12m
  7. https://rb.gy/ws3ri
  8. https://rb.gy/885jf
  9. https://rb.gy/swsnm
  10. https://rb.gy/fqske
  11. https://rb.gy/xamwr
  12. https://rb.gy/6hgl4
  13. https://rb.gy/gyd8f
  14. https://rb.gy/wevmn
  15. https://rb.gy/8603b
  16. https://rb.gy/j31um
  17. https://rb.gy/njjfe
  18. https://rb.gy/zey0m
  19. Ibid.
  20. https://rb.gy/tvgor
  21. https://rb.gy/m8d6d
  22. https://rb.gy/hjnr1
  23. https://rb.gy/vhiqi
  24. https://rb.gy/ci5jd
  25. https://rb.gy/1x19z
  26. https://rb.gy/il8nx
  27. https://rb.gy/5wkgb
  28. https://rb.gy/du3le
  29. https://rb.gy/ayncj
  30. https://rb.gy/6h3e4
  31. https://rb.gy/kav1r
  32. https://rb.gy/sp0vu
  33. https://rb.gy/d61g7
  34. https://rb.gy/6n3r3
  35. https://rb.gy/7wi4s
  36. https://rb.gy/dd5gp
  37. https://rb.gy/bwrqt
  38. https://rb.gy/5jsod
  39. https://rb.gy/wg63i
  40. https://rb.gy/dj43h
Categories: Critical Thinking, Skeptic

Skeptoid #916: Ask Me Anything, 2023 Edition, Part 2

Skeptoid Feed - Tue, 12/26/2023 - 2:00am

Skeptoid rapid fires a bunch of mini-episodes in answer to your questions.

Categories: Critical Thinking, Skeptic

The Skeptics Guide #963 - Dec 23 2023

Skeptics Guide to the Universe Feed - Sat, 12/23/2023 - 4:00am
Special Guest: Eli Bosnick; Quickie with Steve: Iron Spheres are Terrestrial; News Items: Lunar Roads, Misinformation vs Disinformation, Gravitational Waves As Fast As Light, Fluoride and IQ, Dark GPT; Skeptics and Paranormal Experiences; Science or Fiction
Categories: Skeptic

Science News in 2023

neurologicablog Feed - Thu, 12/21/2023 - 5:14am

This is not exactly a “best of” because I don’t know how that applies to science news, but here are what I consider to be the most impactful science news stories of 2023 (or at least the ones that caught my biased attention).

This was a big year for medical breakthroughs. We are seeing technologies that have been in the works for decades come to fruition with specific applications. The FDA recently approved a CRISPR treatment for sickle cell anemia. The UK already approved this treatment for sickle cell and beta thalassemia. This is the first CRISPR-based treatment approval. The technology itself is fascinating – I have been writing about CRISPR since it was developed; it is a technique for making specific alterations to DNA at a specific target site. It can be used to permanently inactivate a gene, insert a new gene, or reversibly turn a gene off and then on again. Importantly, the technology is faster and cheaper than prior technologies. It is a powerful genetics research tool and a boon to genetic engineering. But since the beginning we have also speculated about its potential as a medical intervention, and now we have proof of concept.

The procedure is to take bone marrow from the patient, then use CRISPR to silence a specific gene that turns off the production of fetal hemoglobin. The altered blood stem cells are then transplanted back into the patient. Both of these diseases, sickle cell and thalassemia, are caused by genetic mutations affecting adult hemoglobin; fetal hemoglobin is unaffected. By turning back on the production of fetal hemoglobin, the treatment effectively reduces or even eliminates the negative effects of the mutations: sickle cell patients do not go into crisis and thalassemia patients do not need constant blood transfusions.

This is an important milestone – we can control the CRISPR technique sufficiently that it is a safe and effective tool for treating genetically based diseases. This does not mean we can now cure all genetic diseases. There is still the challenge of getting the CRISPR to the right cells (using some vector). Bone-marrow-based disease is low-hanging fruit because we can take the cells to the CRISPR. But still – this is a lot of potential disease targets – anything blood or bone marrow based. Also, any place in the body where we can inject CRISPR into a contained space, like the eye, is an easy target. Other targets will not be as easy, but that technology is advancing as well. This all opens up a new type of medical intervention, through precise genetic alteration. Every future story about this technology will likely refer back to 2023 as the year of the first approved CRISPR treatment.

Another similar milestone was the fact that 2023 saw the first fully approved drugs that are disease-modifying for Alzheimer’s disease (AD). There are now three monoclonal antibodies designed to slow or even reverse some aspects of AD, a first for a neurodegenerative disease. This is partly because our understanding of the disease has been improving over the decades. But these targets have been known and tried before, with multiple failures. Experts believe, with good reason, that these new treatments are likely showing an effect because they are simply more powerful than the chemical drugs we tried previously. Monoclonal antibodies, as the name implies, are antibodies that can be engineered to target a specific receptor or protein. This technology has been developed over decades and, again, is a powerful research tool. In the last 10 years there has been an explosion of monoclonal antibody-based treatments, and they are proving to be incredibly effective. They also often have fewer side effects than chemically based drugs – they don’t have to be metabolized by the liver or cleared by the kidneys, and they can be more precisely targeted. It’s simply a great technology. The only major downside is that monoclonal antibody treatments are still very expensive.

To me these are the two biggest science news stories of 2023, and they are very good news. There were many other medical advances, but these signal the beginning of perhaps new eras in medicine.

There were many other science news stories I found interesting, and I can only mention a few here. One story is the continuing saga of global warming. On the one hand, it seems that we have turned a corner in terms of public perception. The dedicated denialists are still out there, but what has changed is that global warming has gone from something that will happen in the future to something that feels like it is happening now. Heat waves, forest fires, melting ice, and extreme weather are becoming more and more obvious. Over the summer we were plagued by smoke from Canadian forest fires even down in CT where I live. It was bizarre.

Also, 2023 will definitely be the warmest year on record. Those who deny even that warming is happening (they are still out there) are sounding more and more disconnected from reality and desperate. How many warmest years on record can we have before it’s just too much to ignore? This is not some fluctuation, and there was never any global warming pause. The scientists said it was going to continue to warm, and it is, pretty much in the middle of their projections. However, the manifestations of global warming seem to be ahead of schedule. It is impacting agriculture and infrastructure, and will result in more and more climate refugees.

At the same time, our political class has never seemed more feckless and dysfunctional. COP28 was, in my opinion, largely a failure. They failed to get meaningful agreements, something more than vague showy statements. We need specifics. We need agreements with teeth. We need a plan. This year will likely also be the year with the greatest CO2 emissions ever. We are still increasing fossil fuel use and CO2 emissions. It’s maddening.

It is increasingly clear that we do not have the luxury of simply waiting for technology to solve this problem for us. It is already mostly too late not to blow past 1.5 C. That is pretty much a done deal. The question now is, how far will peak warming go beyond that, and how many tipping points will we trigger? I feel like we are living in slow motion through the first act of every disaster movie ever.

There is a silver lining – the Inflation Reduction Act (IRA – really a climate bill) seems to be working better than expected. Biden chose an all carrot approach to industrial policy aimed at accelerating the change to green technology. No sticks. This garnered some criticism from both sides of the ideological spectrum. But it is working – industry, with incentives and reduced risk through guaranteed loans, is investing heavily in green technology, which includes (very pragmatically, in my opinion) nuclear power. This includes next generation advanced nuclear, like the Natrium plant being built in Wyoming, to replace a coal-fired plant.

This is all good, but it’s not enough. We probably need to increase this type of investment by an order of magnitude, and spread it to the other major emitters around the world, like China and India. We need to shut down coal burning as quickly as possible. This will not be easy, as it’s a cheap form of energy for much of the world, but it has to be a priority. We already have the technology, and that technology is constantly incrementally getting better. We need to invest smartly in new infrastructure – those are investments that will pay off many fold. We just need the vision and political will to make those investments. Otherwise we are simply negligently displacing huge costs onto future generations.

So 2023 has seen a dramatic increase in the gap between both the reality and perception of global warming, with the dysfunctional too-little too-late actions of the governments of the world. I fear 2023 may be seen as the last opportunity we had to avoid climate disaster, one that we threw away.

The post Science News in 2023 first appeared on NeuroLogica Blog.

Categories: Skeptic

A Vision for Comprehensive Educational Reform: Where Learners Control Their Own Education

Skeptic.com feed - Wed, 12/20/2023 - 12:00am

Everyone knows the problems with American education; there is no point in rehashing them. Identifying the source of those problems, however, is essential to any meaningful reform. At every level, educational innovation is choked off by bureaucratic administrators who benefit from the current structure’s inefficiencies. Let’s be clear: there is no grand administrative conspiracy—both game theory and public choice economic theory predict that when a structure empowers a certain group1 (in this case, educational bureaucrats), the structure will gradually evolve to manifest the priorities of the group in power. Understanding this simple point leads to an understanding of how educational reform could occur. True educational reform would require creating a structure that empowers the learner, not the administrative bureaucracy. This article describes in detail a workable plan for doing so. Every one of these ideas is feasible for implementation right now. Beginning with higher education, here is a vision for what American education could be.

Higher Education

My 2021 Skeptic article on post-pandemic higher education2 described how the same market forces that had made entertainment cheap and constantly accessible had done the same for educational content. Indeed, learning has never been cheaper or more accessible—unless you need certification in the form of a degree. American colleges and universities, emboldened by a scam-of-the-century system in which students could pay for higher education with easy-to-get Pell Grants, and in which schools could keep the money even when students defaulted on loans,3 leveraged their ability to verify the transaction of education (through degrees). School “leaders” expanded their bureaucracies, football stadiums, campus amenities, and diversity efforts, while undercutting the teaching faculty by hiring adjuncts and teaching assistants to do the actual instructing.4 Students who went to college to enjoy four years of a lazy river ride,5 or to attend mega-sporting events, seemed not to mind, but cynics saw the whole system of higher education as beyond saving. There was, however, a notable exception—a new program, led by a visionary at the Massachusetts Institute of Technology (MIT).

Sanjay Sarma, who served in a variety of roles at MIT’s Open Learning Department from 2012 to 2022, developed an open-learning “micromasters” program. In his book, Grasp: The Science Transforming How We Learn,6 he pointed out that the educational structure is designed both to teach students and to “winnow” them into next-level institutions based on judgment and performance. The “winnowing” function has now largely been eliminated because high-level education can be made accessible to just about anyone at any time.

Anyone can “get into” MIT right now on the Open Learning website.7 If you pass the Open Learning course, and perhaps even take a test, MIT will award you a micro-credential indicating that you have mastered the content in, say, supply chain management to the extent that MIT’s faculty thinks sufficient to earn MIT verification.* The cost is between $1,000 and $1,200. In 2021, I predicted that the disjunction between the high cost of college education and the lowered cost of learning could not last. Just a few months later, edX, the online learning platform that MIT co-founded, was sold to 2U for 800 million dollars.8

*In some cases, workplaces have begun offering in-house credentials to employees. However, employers don’t like to reward credentials that employees can then use at another job somewhere else. Workforce needs should inform the creation of educational programs and the development of credentials, but a neutral third-party verification system through colleges and universities is probably best for workers.

The Open Learning concept has given birth to two new players in the “game” of higher education. The first is Axiom, which delivers high-quality instruction through an open-learning model.9 The second is the Digital Credentials Consortium, an initiative dedicated to finding ways to use blockchain technology to verify educational transactions.10

Currently, a teacher or professor passes knowledge and skills on to students, but it is their school or college that verifies the “transaction” by issuing diplomas bearing its imprimatur. Blockchain verification, administered largely through rigorous mastery-level testing, could eliminate the need for diplomas. In practice, digital credentialing would look something like peer-to-peer lending verified by a blockchain, where a bank is not necessary to act as a third-party verifier.
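
To make the analogy concrete, here is a minimal, hypothetical sketch in Python of how an append-only hash chain could record and verify micro-credentials. Everything here (the class, names, and data fields) is invented for illustration, and it is far simpler than anything the Digital Credentials Consortium is actually building, but it shows the core idea: each record commits to everything issued before it, so tampering anywhere breaks verification.

```python
import hashlib
import json

def record_hash(payload: dict, prev_hash: str) -> str:
    # Deterministic hash of a credential record plus the previous record's hash
    data = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(data.encode()).hexdigest()

class CredentialLedger:
    """Toy append-only hash chain: each entry commits to all earlier entries."""
    def __init__(self):
        self.chain = []  # list of (payload, hash) pairs

    def issue(self, learner: str, credential: str, issuer: str) -> str:
        prev = self.chain[-1][1] if self.chain else "genesis"
        payload = {"learner": learner, "credential": credential, "issuer": issuer}
        h = record_hash(payload, prev)
        self.chain.append((payload, h))
        return h

    def verify(self, learner: str, credential: str) -> bool:
        # Recompute every hash in order; tampering anywhere breaks the chain
        prev, found = "genesis", False
        for payload, h in self.chain:
            if record_hash(payload, prev) != h:
                return False  # chain integrity violated
            if payload["learner"] == learner and payload["credential"] == credential:
                found = True
            prev = h
        return found

ledger = CredentialLedger()
ledger.issue("alice", "Supply Chain Management micromasters", "MIT Open Learning")
print(ledger.verify("alice", "Supply Chain Management micromasters"))  # prints: True
```

Because each hash depends on the one before it, a forged or altered credential cannot be slipped into the chain without invalidating every record after it, which is what lets verification proceed without a trusted third-party registrar.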

If peer-to-peer lending has been around for a while, why do banks still exist? The answer is that institutions that have built up a century’s worth of legal leverage by aggressively lobbying politicians don’t die easily. And that’s what makes Sarma and MIT all the more significant. It is as if the CEO of a major banking or insurance institution decided to cut profits and deliver better services. MIT has the credentialing power, thanks to its earned reputation, to verify a transaction of learning through an open-source model.

After my 2021 paper was published, I contacted Dr. Sarma and he invited me to the MIT campus. I met with him and several members of MIT’s Open Learning faculty. At that time, I had just finished my second decade as a public high school teacher and had spent several years developing education programs for in-service STEM teachers. The entire secondary structure seemed strained to the point of collapse (a subject I wrote about for the Skeptic Reading Room in 2022),11 and I left MIT in June of 2022 believing their Open Learning system could save education.

Education at all levels seemed to be on the verge of a “Netflix-Blockbuster” moment. When Netflix started streaming in 2007, Blockbuster stores could be found in every hamlet in America. Only a few early adopters recognized what streaming services would do to the home entertainment business; at that moment Blockbuster still looked like a strong business model. Blockbuster filed for bankruptcy in 2010, and by 2014 its last corporate stores had closed, as home entertainment turned to a cheaper and more agile model of entertainment delivery. Between 2007 and 2014 there was little technological change on the part of Netflix. Instead, those seven years represent the amount of time it took users to recognize, understand, and use the new streaming service. A critical mass of users had to be reached before the business model turned.

Even if Axiom and digital credentialing offer an impressive new world of Open-Source education, students who graduate from traditional schools won’t know how to use or access this type of content. The current Open-Source model needs to begin early in a student’s education for students, teachers, guidance counselors, and parents to understand how it works and become comfortable with it.

Open-Source learning does have some weaknesses. It lacks a face-to-face component, and because Open-Source learning is universal, it is not local. These are issues that can be effectively addressed by connecting the model to existing educational institutions, but in order to explain how, we first need to connect Open-Source learning to K–12 education.

Open-Source Learning and Secondary Education

Before offering a new vision for education, I submit there are three educational myths that need to be dispelled. First—online education and face-to-face education only exist as an either/or construct. Second—interest-based education, where students develop an interest in a topic and then explore content and develop skills around that interest, does not constitute a serious method for learning. Third—an educational structure must be hyper-competitive (the winnowing function) because it is the pressure that forces students to learn complicated topics. Let us examine these three myths, and explore solutions:

1. Online Education and Face-to-Face Education

In Grasp, Sanjay Sarma explained that education currently operates within a Seat-Time Model (STM) where students are generally rewarded with grades after they have spent a certain amount of time in a class. He advocates a Mastery-Learning Model (MLM), or a competence-based model for education, one where students leave a course of study once they have demonstrated mastery. If the goal is “mastering” certain material, methods, or procedures (for example, the alphabet, adding fractions, changing a tire or a diaper), it would make sense to combine the best aspects of online and face-to-face learning so that a student attains the most complete education possible.

Students who study through an MLM module can still read actual books and work with pencil and paper, but their progress would be tracked through constant testing. An MLM treats tests as living parts of the learning process, not as “educational autopsies” administered after students have absorbed content. Again, the MLM lacks a face-to-face component, and it is not localized, though these gaps can be filled by teachers.
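
The difference between seat time and mastery can be made concrete with a toy rule (purely illustrative, and not Sarma’s actual model): a learner advances when recent test performance crosses a threshold, no matter how long or short that takes.

```python
# Toy sketch of mastery-based progression (hypothetical, illustrative only):
# a learner advances when a rolling average of test scores crosses a mastery
# threshold, regardless of how much "seat time" that takes.
def has_mastered(scores, threshold=0.9, window=3):
    """Mastery = average of the last `window` test scores >= threshold."""
    recent = scores[-window:]
    return len(recent) == window and sum(recent) / window >= threshold

# A struggling learner simply keeps testing until mastery, with no grade assigned
assert not has_mastered([0.6, 0.7])              # too few attempts yet
assert not has_mastered([0.95, 0.6, 0.7])        # recent average below threshold
assert has_mastered([0.5, 0.92, 0.9, 0.95])      # mastery demonstrated; advance
```

Under a seat-time model, the first learner above would receive a failing grade when the term ends; under a mastery model, the clock simply keeps running until the threshold is met.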

If students work through an MLM, teachers would no longer have to assign them grades. Nor would teachers be subject to the various external forces that often cause or reward grade inflation. An MLM tracks both student progress and mastery. Teachers, then, would need to localize the curriculum by showing how the content students are learning in an MLM connects to the local community and workforce.

In practice, this means that an MLM on, say, chemistry would have “gaps” built into it. Teachers would no longer be record-keepers (no grading) but rather would need to be connected to workplaces and universities, to access the content-area knowledge necessary to guide students. Teachers would therefore need to be sustained through a new type of professional development that begins with them being exposed to the intellectual community around them and ends with their creating a Teacher-Generated Curriculum.12 Ideally, community members would take an active part in student education. When a local business leader comes into a high school classroom and says, “I’m paying people to do what you just did for your classroom project,” it is certainly a powerful moment.

This brings up an important problem for traditional methods of education—there are populations of students who are unlikely to go past high school given the current educational model. Lowering costs and increasing accessibility for higher education can help reach them, but if their only experience of high school involves failing classes and a revolving door in the teaching staff, they are unlikely to advance, either in school or at work. Students need to have a positive, engaging experience, and have someone from a post-secondary institution come in and explain that if they find school engaging, they’re likely to continue on to the next level.

2. Interest-based education is not a serious method.

Eventually, if learners are empowered, then a mastery-learning “ecosystem” will evolve.13 The ecosystem will offer mastery education to anyone at any time. It’s likely that grade-level distinctions for mastery will fade. A top-down, seat-time approach to educating students will cease to make sense in such an educational environment. Currently, most state educational standards are designed by experts in various fields with little concept of what students can accomplish. (Meet every stated requirement for high school graduation, and I will congratulate you on being the smartest person on the planet.) But with education available at all times, schools really should just teach two things: how to become interested in an academic subject and how to use the educational ecosystem to saturate that interest level.

Obviously, the state has a right and duty to enforce a basic level of educational proficiency in schools, but a traditional top-down educational model has simply not worked.14, 15 Students at all levels cheat the system, either through a skim-and-scam approach to content material,16 or through outright cheating. A large survey conducted in 1992 found that 67 percent of students on 31 college campuses self-reported having cheated on classwork and exams.17 This was all before students began using ChatGPT and other artificial intelligence systems to write their essays.

Cheating is the most cynical act in which a student can engage because it shows they see no value at all in the actual content of education, only in obtaining the necessary grade (“even a D will do”) and/or credential. Traditionally, this problem has been addressed by a hodgepodge of educational assessments. College admissions officers recognize that grades, for which standards vary widely from school to school and teacher to teacher, are not a very good indicator of a student’s abilities. To correct for this, the ACT, SAT, and Advanced Placement exams were (until recently) adopted by local educational systems as a means of providing a national standard for predicting success in further education.

This creates an unnecessarily exhausting situation for students, teachers, and guidance counselors in high schools. Grades must be kept so that students can meet local graduation requirements, but because those grades are only a limited predictor of either present subject mastery or future performance, the College Board† tests offer a more reliable and valid metric. Students, for example, who take an AP course earn a grade in the class from the teacher/school, then also receive a score (1–5) on the AP exam. The grade does not affect the score, nor the score the grade.

†The College Board is an American not-for-profit organization that develops and administers standardized tests, such as the widely used SAT (originally the Scholastic Aptitude Test, then later the Scholastic Assessment Test) and the Graduate Record Exam (GRE), as well as curricula used by K–12 and post-secondary education institutions as part of the college admissions process. It also provides resources, tools, and services to students, parents, colleges, and universities for college planning, recruitment and admissions, financial aid, and retention.

The College Board has, in the past, provided an important function, but one that, I submit, is now obsolete for three reasons:

  1. The College Board has moved to unify its testing requirements across the AP classes, and the purpose of most of the classes now seems to be to teach students how to effectively meet the requirements of the College Board rubrics. Students taking the short answer questions on the AP history exams, for example, are exhorted to use a three-sentence Answer-Cite-Explain model. This is not necessarily the best means of showcasing an understanding of history, but it does make it easier to grade the exams through an assembly line approach.
  2. The College Board is invested in the idea of testing a student’s college potential. Testing students for potential was only necessary when colleges needed to winnow out the most likely to benefit from attending a college campus and learning from the small amount of information that could be transferred in a classroom/lecture hall over four years. When education is available to everyone at all times, it is useless to measure potential or aptitude.
  3. The College Board’s educational model has always been political and legally unstable. The AP courses essentially set national curricula, and states have had to adapt their local standards and laws to allow students to take AP courses. Controversies, particularly in history, become inevitable.18 It would seem impossible to get Americans, many of whom are more politically charged (in whatever direction) than ever, to agree on a set of content standards for a U.S. history course.

An educational ecosystem could save American schools from curricular paralysis. Simply turn the questions over to the students, and then let them access the educational ecosystem to answer those questions. For example, students in a U.S. History course might be asked “What is the main narrative in United States history from 1877 to the present?” Students would need to answer the question with a central thesis but address three other points of view in the process. A student who stated that the African American experience is the main narrative would need to show a mastery of other perspectives and explain the primacy of the African American experience in their final work.

When people read because they are interested in the subject matter, they don’t cheat. They also don’t compete. This brings us to the final myth.

3. The educational structure must be competitive.

Please consider, for a moment, the absurdity of an educational system that implicitly tells teenagers that their entire future (for kids who might live another 75 years!) will be determined by how well they learn the formula for the volume of a cylinder, or by their ability to retain the definition of the word “catholic” before they turn 18. For most students and parents, the system is bewildering and frustrating. However, the worst thing about a competitive educational system is what it does to the people who succeed in it.

Ivy League professor William Deresiewicz noted that a competitive structure ultimately deprives students of the ability to be happy for another’s successes.19 This might sound trivial, but if someone is jealous that another person has made a creative breakthrough, then it becomes impossible to build upon that breakthrough. How can someone become a genuinely creative thinker in that context?

Removing the winnowing function from education would create opportunities for cooperative education where students learn to converse rather than debate. Again, in an educational ecosystem no one is trying to “get in” anymore. You can access the ecosystem at any time. This is not to say that face-to-face learning will disappear or that brick-and-mortar universities will cease to exist.

One can imagine an educational structure where students learn and test to mastery and then receive a micro-credential that verifies their learning. Those micro-credentials would accrue in a centrally administered digital transcript, sometimes called an “achievement wallet.” Institutions might require that students earn their way onto campus (and thereby access to valuable face-to-face learning) by obtaining a specified set of micro-credentials. This would embed fairness into college admissions, drastically reduce administrative costs by eliminating cumbersome and controversial admissions procedures, and ensure that students enter campus with the right knowledge.

The current system of “grading” students is not only useless for verifying the transaction of content and skills, it actively harms the educational process itself20 by distracting both teachers and students from instructing and learning.

$2 Million Is All It Took to Get the Ball Rolling

This entire concept of a cooperative educational future, with mastery-learning micro-credentials, and a new definition of what it means to be a teacher might sound fantastic in both senses of the word. However, all of this technology already exists. Currently, however, decision-making power rests with an educational bureaucracy spread out across almost 14,000 K–12 school districts and nearly 4,000 colleges and universities. The decentralized nature of the system keeps credentialing power in the hands of administration. Placing that credentialing power in the hands of the learner, by means of micro-credentials accrued through blockchain in a digital transcript, would rapidly reshape the educational landscape.

Eventually, students will go to school for the purpose of learning how to become interested in subjects and how to navigate a Mastery-Learning Model. They will have to learn to deal with frustration, how to study independently, how to read at a deep level, and how to converse with other students about content.

Teachers and guidance counselors will no longer need to assign grades or keep records. All of that will be done through the ecosystem, and students will prove mastery via testing. Teachers, then, could connect with their colleges and universities through new forms of professional development and be respected as knowledge producers and content masters. Guidance counselors, freed from record keeping and college admissions, could work to help students emotionally and intellectually navigate their actual learning process.

By implementing this vision, we can reduce pressure on the teen years. A student who hates math at 15 might develop an interest in math at 40. There’s nothing preventing it. If teenagers go through a trauma, or a growth spurt, or sometimes just feel paralyzed by “all the drama,” it would be possible to back off for a while, or to develop alternate means of education. (Many a fifteen-year-old boy might benefit from six months off from school to learn a trade, or just work, or see how professionals in an area in which they have an academic interest actually go about their day and earn a living). They could always come back later and pick up on their more traditional style of learning.

Still, this much change seems overwhelming, and educational change in the U.S. will, by law, have to happen on a state-by-state basis. After leaving the MIT campus in 2022, I concluded that a process of mastery learning should begin with in-service teachers, a population of adults who have constant contact with students. If the teachers understand how mastery learning and micro-credentialing work, they can gradually make students and parents understand.

At the start of the 2023 legislative session, Sanjay and I met with leadership in the Indiana Senate and House. As a result of these discussions, two million dollars was allocated, over the biennium, to develop education programs for in-service teachers. The money will be awarded in competitive grants to Indiana’s colleges and universities; the programs must focus on the study of content (History, English, STEM, etc.) and may include a workforce education component.21 Teachers can earn a micro-credential after completing a combination of online and in-person education that culminates in a Teacher-Generated Curriculum ready for the classroom. Programs will be overseen by the Indiana Department of Education and the Indiana Commission for Higher Education.

This article appeared in Skeptic magazine 28.3

The state already had a contract with a vendor for “achievement wallets”22 and was looking to overhaul the high school experience by making new forms of work-based education, with new forms of credentials, possible. Why not issue every teacher in the state a digital transcript, and allow them to accrue micro-credentials for both extra pay and for license renewal?

Just like that, a new type of education program, a new type of credential, and a new kind of transcript were all encoded into law and overseen by centralized institutions. An experimental group of over 60,000 teachers now has access to a new form of affordable (indeed compensated, since teachers are paid to complete coursework) and accessible education largely devoid of messy bureaucracy. If the teachers become familiar with such an efficient system, how long before they, and then their students, and the parents of those students, start to demand it?

About the Author

Chris Edwards teaches World History, English, and Mathematics at a public high school in Indiana. He is a frequent contributor to Skeptic on a variety of topics and has had his original connect-the-dots teaching methodology published with the National Council for the Social Studies. He is the author of numerous books, including the young adult STEM title All About the Moon Landing (Blue River Press, 2023), the fantasy novella The Strongman’s Tale (See Sharp Press, 2023), and Self-Taught: Moving from a Seat Time Model to a Mastery-Learning Model (Rowman & Littlefield Education, 2022) about educational reform.

References
  1. Yoeli, E., & Hoffman, M. (2022). Hidden Games: The Surprising Power of Game Theory to Explain Irrational Behavior. Basic Books.
  2. Edwards, C. (2021). The Future of Higher Education: Reengineering Learning for a Post-Pandemic World. Skeptic (Vol. 26, No. 1).
  3. https://rb.gy/sajwc
  4. https://rb.gy/60i5n
  5. https://rb.gy/sjmyw
  6. Sarma, S. (2021). Grasp: The Science Transforming How We Learn. Anchor Books.
  7. https://rb.gy/qg96g
  8. https://rb.gy/8uyzn
  9. https://rb.gy/i10mm
  10. https://rb.gy/hs9df
  11. https://rb.gy/9xrsc
  12. Edwards, C. (2011). Three Cheers for Teachers: Educational Reform Should Come From Within the Classroom and Science Can Inform Our Reforms. Skeptic (Vol. 17, No. 1).
  13. https://rb.gy/9xrsc
  14. Caplan, B.D. (2018). The Case Against Education: Why the Education System Is a Waste of Time and Money. Princeton University Press.
  15. Arum, R., & Roksa, J. (2011). Academically Adrift: Limited Learning on College Campuses. The University of Chicago Press.
  16. Carr, N.G. (2020). The Shallows: What the Internet Is Doing to Our Brains. W.W. Norton & Company.
  17. Williams, A.E. & Janosik, S.M. (2007, November). An Examination of Academic Dishonesty Among Sorority and Nonsorority Women. Journal of College Student Development, 48(6), 706–714.
  18. https://rb.gy/nskn6
  19. Deresiewicz, W. (2015). Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life. Free Press.
  20. https://rb.gy/bb6ba
  21. https://rb.gy/izx1l
  22. https://rb.gy/q0dgs
Categories: Critical Thinking, Skeptic

An Earth-like Climate is Fragile

neurologicablog Feed - Tue, 12/19/2023 - 5:04am

One of the biggest questions of exoplanet astronomy is how many potentially habitable planets are out there in the galaxy. By one estimate the answer is 6 billion Earth-like planets in the Milky Way. But of course we have to set parameters and make estimates, so this number can vary significantly depending on details.

And yet – how many exoplanets have we discovered so far that are “Earth-like”, meaning they are a rocky world orbiting a sun-like star in the habitable zone, not tidally locked to their parent star, with the potential for liquid water on the surface? Zero. Not a single one, out of the over 5,500 exoplanets confirmed so far. This is not a random survey, however, because it is biased by the techniques we use to discover exoplanets, which favor larger worlds and worlds closer to their stars. But still, zero is a pretty disappointing number.

I am old enough to remember when the number of confirmed exoplanets was also zero, and when the first one was discovered in 1995. Basically since then I have been waiting for the first confirmed Earth-like exoplanet. I’m still waiting.

A recent simulation, if correct, may mean there are even fewer Earth-like exoplanets than we think. The study looks at the transition from a planet like Earth to one like Venus, where a runaway greenhouse effect leads to a dry and sterile planet with a surface temperature of hundreds of degrees. The question being explored by this simulation is this – how delicate is the equilibrium we have on Earth? What would it take to tip the Earth into a similar climate as Venus? The answer is – not much.

There have been studies modeling the process, but this is the first one to model the transition. What essentially happens is that there is a positive feedback loop. Increasing surface temperature increases the evaporation of water. Water vapor is itself a powerful greenhouse gas, which therefore increases the amount of warming, which causes further evaporation. Within a certain range of temperatures, a range that Earth is within now, this process leads to a new equilibrium temperature. This is because the planet is still able to cool itself by radiating heat away into space. The hotter the planet becomes, the more heat radiates away, until that equilibrium point is reached.

However, at some point the blanket of water vapor around the planet becomes so thick that the planet no longer loses heat this way, and there is nothing to stop the process outlined above. This is the “runaway” heating point, which doesn’t stop until all the surface water has evaporated. We know this happens in part because it is what happened on Venus, which has a surface temperature of 464 degrees C.
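
The feedback logic can be captured in a deliberately crude toy calculation (my illustration, not the study’s simulation): treat each increment of warming as producing a further increment scaled by a feedback gain. Below a gain of 1, the resulting geometric series converges to a new, stable equilibrium; at a gain of 1 or more it diverges, which is the runaway regime.

```python
# Toy positive-feedback model (illustration only, not the study's simulation).
# Each degree of warming triggers `gain` more degrees via the water-vapor
# greenhouse effect, so an initial forcing dT0 yields dT0 * (1 + g + g^2 + ...).
def total_warming(dT0, gain, max_iter=10000, tol=1e-6):
    total, step = 0.0, dT0
    for _ in range(max_iter):
        total += step
        step *= gain          # feedback: warming begets more warming
        if step < tol:
            return total      # series converged: radiative cooling wins
    return float("inf")       # gain >= 1: runaway, no equilibrium exists

print(total_warming(1.0, 0.5))  # ~2.0 C: settles at a new equilibrium
print(total_warming(1.0, 1.1))  # inf: runaway greenhouse
```

The real simulation resolves clouds and radiation in three dimensions, but the qualitative behavior is the same: a stable equilibrium below some threshold, and runaway heating above it.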

How much would the Earth have to heat in order to reach this point? That is what the new study addresses. The researchers found that if the surface temperature of the Earth increased by a few tens of degrees due to increased solar radiance, we would tip over into runaway heating. That is not something we need to worry about, at least not any time soon. Even in a worst-case scenario, AGW leads to a few degrees of warming, maybe 6 degrees C with tipping points and positive feedbacks. Also, the model used increased solar radiance as the initial cause of heating; the researchers plan a follow-up study to see at what temperature we would hit runaway heating if the initial cause were increased CO2. Still, it is not reassuring that the range of temperatures keeping a planet in a habitable state is fairly narrow.

The biggest implication of the current model is for our search for Earth-like exoplanets: it may affect the estimate of the number of habitable worlds out there. The researchers also plan to determine whether a planet in equilibrium, or one in the grips of runaway heating, leaves distinctive signatures that we could detect in exoplanets. This will further help us refine our estimates of the number of Earth-like planets out there.

It would be nice if I live to see the confirmation of an Earth-like exoplanet clearly in a habitable zone of an orange or yellow star with liquid water on the surface. It is hard to extrapolate from zero. We may find that the Earth is far more rare and precious than we previously imagined.

The post An Earth-like Climate is Fragile first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #915: Ask Me Anything, 2023 Edition, Part 1

Skeptoid Feed - Tue, 12/19/2023 - 2:00am

A year-end Ask Me Anything session to tell you everything you ever wanted to know about Skeptoid and more.

Categories: Critical Thinking, Skeptic

Caylan Ford — Good and Evil, Human Nature, Education Reform, and Cancel Culture

Skeptic.com feed - Tue, 12/19/2023 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss395_Caylan_Ford_2023_11_16.mp3 Download MP3

Caylan Ford is a documentary filmmaker, writer, researcher, charter school founder, and a former political candidate. She is interested in the problem of political and philosophical evil, and most of her work is animated by a desire to help people recover their roots in reality and their orientation toward the divine. She was born in Calgary, Canada, and earned a Bachelor’s degree (Hons.) in Chinese history at the University of Calgary. From there she obtained a Master’s degree in International Affairs from the George Washington University, and worked on and off as a senior policy advisor for Canada’s foreign ministry for about ten years. Between the birth of her two children she earned another Master’s in International Human Rights Law at the University of Oxford. She hopes one day to do a degree in comparative eschatology.

A very large part of her life has been spent working, volunteering and consulting in the international human rights field, including by increasing access to anti-surveillance and censorship tools in Iran, China, Myanmar, and elsewhere; working with civil rights lawyers representing political dissidents; supporting refugee and asylum claimants; and conducting and publishing original research on the repression of religious minorities in China. She has written and co-produced two feature documentary films on the themes of religious and political persecution, censorship, forced labor, scapegoating, and mass persuasion under totalitarian regimes.

Her new documentary film, When the Mob Came, focuses on her experience of cancel culture following a catastrophic bid for political office in 2019. (Read her account of events.) Relatedly, she is the plaintiff in an ongoing $7 million defamation claim against several Canadian media and political institutions, and her case has so far resulted in the recognition of a new tort of civil harassment in Alberta. (Read about her litigation efforts.)

In 2022 she founded Canada’s first tuition-free classical charter school, Calgary Classical Academy, and she hopes to open a new campus in Edmonton. Watch a short video introduction to the Academy’s work and how it aims to promote knowledge of things that are true, good, and enduring.

Shermer and Ford discuss:

  • education reform
  • Can education be value-free?
  • public vs. private vs. charter schools
  • human nature and the blank slate
  • Thomas Sowell’s Constrained Vision vs. Unconstrained Vision
  • conservatism vs. liberalism
  • French Revolution vs. American Revolution
  • in defense of truth, justice, and reality
  • what promotes humanity and what degrades it
  • sex, gender, trans
  • transhumanism
  • political correctness and identity politics
  • cancel culture, witch crazes, and virtue signaling
  • totalitarianism and preference falsification
  • free speech, hate speech and slippery slopes
  • how to stand up to cancel culture.
Show Notes
Cancel Culture Defined

Cancel culture operates on a similar principle to the Inferno. It delivers perpetual punishment, without any possibility of redemption, for heretics (or perceived heretics) against an emerging ideological orthodoxy: people who used the wrong word, defended inherited wisdom, or attempted to critically examine a topic that had been declared off-limits to philosophical inquiry. But cancel culture reflects and heightens the inverted priorities of the world. It incentivizes cruelty and performative outrage, and so suffocates humility, generosity, and openness. It asks us to go in search of grievances, and to look for the bad in others, but never to reproach ourselves. It inflames tribal hatreds and artificial divisions. It demands that people lie, that they confess to crimes they did not commit, and that they conceal their true beliefs to preserve themselves. Envy, hubris, and wrath are rewarded, while prudence and a slowness to judgement are treated with suspicion, as though they are evidence of an insufficient commitment to the cause. In the name of love and tolerance and solidarity, it asks us to hate our enemies, inform on our neighbors, and desert our friends.

Alberta Classical Academy

Inside the classrooms, expert teachers guide students through a knowledge-rich curriculum using the proven teaching methods of explicit instruction and the Socratic method. Beginning in grade 5, our students study Latin, and all wear uniforms. But don’t let that throw you: we’re not a private school. As a public charter school, The Classical Academy is tuition-free and open to all learners who seek moral and intellectual excellence without regard to their background, financial means, or postal code. We focus on the cultivation of virtues like courage, integrity, benevolence, fortitude, temperance, and magnanimity. We use an enhanced classical curriculum, at the heart of which is a great books program. Our students read whole texts: books with depth and enduring value that illuminate those unchanging aspects of the human condition. They are exposed to the best that has been thought and said in East and West, and are ennobled through the study of classical fine and performing arts like poetry, drama, music, dance, painting, and sculpture. They have an opportunity to directly study and observe the natural world and to contemplate the majesty and wonder of creation. Notably absent from our classrooms are the ubiquitous screens and the distraction of smartphones, because we value deep learning and concentration. We use screen-based technologies in a minimal, deliberate way, and smartphones are prohibited at The Classical Academy. Our goal is not just to help students achieve academically; it is to prepare their minds and souls for liberty. We understand that our students are not just future workers. They are future friends, neighbors, spouses, parents, and citizens. They are bearers of divine souls which thirst after knowledge of what is true, good, and enduring. Our mission is to help them grow in virtue and in wisdom so that they may live well and with purpose.

From the chapter on political beliefs in Michael Shermer’s 2011 book The Believing Brain

In his book A Conflict of Visions, the economist Thomas Sowell argues that these two clusters of moral values are intimately linked to the vision one holds about human nature, either as constrained (conservative) or unconstrained (liberal), and so he calls these the Constrained Vision and the Unconstrained Vision. Sowell shows that controversies over a number of seemingly unrelated social issues such as taxes, welfare, social security, health care, criminal justice, and war repeatedly reveal a consistent ideological dividing line along these two conflicting visions. “If human options are not inherently constrained, then the presence of such repugnant and disastrous phenomena virtually cries out for explanation—and for solutions. But if the limitations and passions of man himself are at the heart of these painful phenomena, then what requires explanation are the ways in which they have been avoided or minimized.”

Which of these natures you believe is true will largely shape which solutions to social ills will be most effective. “In the unconstrained vision, there are no intractable reasons for social evils and therefore no reason why they cannot be solved, with sufficient moral commitment. But in the constrained vision, whatever artifices or strategies restrain or ameliorate inherent human evils will themselves have costs, some in the form of other social ills created by these civilizing institutions, so that all that is possible is a prudent trade-off.” It’s not that conservatives think that we’re evil and liberals believe we’re good. “Implicit in the unconstrained vision is the notion that the potential is very different from the actual, and that means exist to improve human nature toward its potential, or that such means can be evolved or discovered, so that man will do the right thing for the right reason, rather than for ulterior psychic or economic rewards,” Sowell elaborates. “Man is, in short, ‘perfectible’—meaning continually improvable rather than capable of actually reaching absolute perfection.”1

In his masterpiece analysis of human nature, The Blank Slate, the Harvard psychologist Steven Pinker re-labels these two visions the Tragic Vision and the Utopian Vision, and reconfigures them slightly: “The Utopian Vision seeks to articulate social goals and devise policies that target them directly: economic inequality is attacked in a war on poverty, pollution by environmental regulations, racial imbalances by preferences, carcinogens by bans on food additives. The Tragic Vision points to the self-interested motives of the people who would implement these policies—namely, the expansion of their bureaucratic fiefdoms—and to their ineptitude at anticipating the myriad consequences, especially when the social goals are pitted against millions of people pursuing their own interests.” The distinct Left-Right divide consistently cleaves the (respectively) Utopian Vision and Tragic Vision along numerous specific contests, such as the size of the government (big versus small), the amount of taxation (high versus low), trade (fair versus free), healthcare (universal versus individual), environment (protect it versus leave it alone), crime (caused by social injustice versus caused by criminal minds), the constitution (judicial activism for social justice versus strict constructionism for original intent), and many others.2

Personally I agree with Sowell and Pinker that the unconstrained vision is utopian, which in its original Greek means “no place.” An unconstrained utopian vision of human nature largely accepts the blank slate model and believes that custom, law, and traditional institutions are sources of inequality and injustice and should therefore be heavily regulated and constantly modified from the top down; it holds that society can be engineered through government programs to release the natural unselfishness and altruism within people; it deems physical and intellectual differences largely to be the result of unjust and unfair social systems that can be re-engineered through social planning, and therefore people can be shuffled across socioeconomic classes that were artificially created through unfair and unjust political, economic, and social systems inherited from history. I believe that this vision of human nature exists in literally No Place.

Although some liberals embrace just such a vision of human nature, I strongly suspect that when pushed on specific issues most liberals realize that human behavior is constrained to a certain degree—especially those educated in the biological and evolutionary sciences who are aware of the research in behavior genetics—so the debate turns on degrees of constraint. Rather than there being two distinct and unambiguous categories of constrained and unconstrained (or tragic and utopian) visions of human nature, I think there is just one vision with a sliding scale. Let’s call this the Realistic Vision. If you believe that human nature is partly constrained in all respects—morally, physically, and intellectually—then you hold a Realistic Vision of human nature. In keeping with the research from behavioral genetics and evolutionary psychology, let’s put a number on that constraint at 40 to 50 percent. In the Realistic Vision, human nature is relatively constrained by our biology and evolutionary history, and therefore social and political systems must be structured around these realities, accentuating the positive and attenuating the negative aspects of our natures.

A Realistic Vision rejects the blank slate model that people are so malleable and responsive to social programs that governments can engineer their lives into a great society of its design, and instead believes that family, custom, law, and traditional institutions are the best sources for social harmony. The Realistic Vision recognizes the need for strict moral education through parents, family, friends, and community because people have a dual nature of being selfish and selfless, competitive and cooperative, greedy and generous, and so we need rules and guidelines and encouragement to do the right thing. The Realistic Vision acknowledges that people vary widely both physically and intellectually—in large part because of natural inherited differences—and therefore will rise (or fall) to their natural levels. Therefore governmental redistribution programs are not only unfair to those from whom the wealth is confiscated and redistributed, but the allocation of the wealth to those who did not earn it cannot and will not work to equalize these natural inequalities.

I think most moderates on both the left and the right embrace a Realistic Vision of human nature. They should, as should the extremists on both ends, because the evidence from psychology, anthropology, economics, and especially evolutionary theory and its application to all three of these sciences supports the Realistic Vision of human nature. There are at least a dozen lines of evidence that converge to this conclusion:3

  1. The clear and quantitative physical differences among people in size, strength, speed, agility, coordination, and other physical attributes, which translate into some people being more successful than others; at least half of these differences are inherited.
  2. The clear and quantitative intellectual differences among people in memory, problem-solving ability, cognitive speed, mathematical talent, spatial reasoning, verbal skills, emotional intelligence, and other mental attributes, which translate into some people being more successful than others; at least half of these differences are inherited.
  3. The evidence from behavior genetics and twin studies indicating that 40 to 50 percent of the variance among people in temperament, personality, and many political, economic, and social preferences is accounted for by genetics.
  4. The failed communist and socialist experiments around the world throughout the 20th century revealed that top-down draconian controls over economic and political systems do not work.
  5. The failed communes and utopian community experiments tried at various places throughout the world over the past 150 years demonstrated that people by nature do not adhere to the Marxian principle “from each according to his ability, to each according to his need.”
  6. The power of family ties and the depth of connectedness between blood relatives. Communities that have tried to break up the family and have children raised by others provide counterevidence to the claim that “it takes a village” to raise a child. As well, the continued practice of nepotism further reinforces the principle that “blood is thicker than water.”
  7. The principle of reciprocal altruism—“I’ll scratch your back if you’ll scratch mine”—is universal; people do not by nature give generously unless they receive something in return, even if what they receive is social status.
  8. The principle of moralistic punishment—I’ll punish you if you do not scratch my back after I have scratched yours—is universal; people do not long tolerate free riders who continually take but almost never give.
  9. The almost universal nature of hierarchical social structures—egalitarianism only works (barely) among tiny bands of hunter-gatherers in resource-poor environments where there is next to no private property, and when a precious game animal is hunted, extensive rituals and religious ceremonies are required to ensure equal sharing of the food.
  10. The almost universal nature of aggression, violence, and dominance, particularly on the part of young males seeking resources, women, and especially status, and how status-seeking in particular explains so many heretofore unexplained phenomena, such as high risk taking, costly gifts, excessive generosity beyond one’s means, and especially attention seeking.
  11. The almost universal nature of within-group amity and between-group enmity, wherein the rule-of-thumb heuristic is to trust in-group members until they prove to be untrustworthy, and to distrust out-group members until they prove to be trustworthy.
  12. The almost universal desire of people to trade with one another, not for the selfless benefit of others or the society, but for the selfish benefit of one’s own kin and kind; it is an unintended consequence that trade establishes trust between strangers and lowers between-group enmity, as well as produces greater wealth for both trading partners and groups.

The founders of our Republic established our system of government as they did based on this Realistic Vision of human nature, knowing full well that the tension between individual liberty and social cohesiveness could never be resolved to everyone’s satisfaction, and so the moral pendulum swings Left and Right and politics is played mostly between the two 40-yard lines of the political playing field. This tension between freedom and security, in fact, would explain why third parties have such a difficult time finding a toe-hold on the political rock face of America, and typically crater after an election, or cower in the shadows of two behemoths that have come to define the Left-Right system. Even in Europe, where third, fourth, and even fifth parties receive substantial support at the polls, they are, in fact, barely distinguishable from the parties on either side of them, and political scientists find that they can easily classify them as largely emphasizing either liberal or conservative values. Haidt’s data on the differing foundational values of American liberals and conservatives, in fact, generalizes to all countries that have been tested, and the chart lines from country to country are virtually indistinguishable from one another.

I believe that the Realistic Vision of human nature is what James Madison was thinking of when he penned (literally) his famous dictum in the Federalist Paper Number 51: “If men were angels, no government would be necessary.4 If angels were to govern men, neither external nor internal controls on government would be necessary.” Abraham Lincoln also had something like the Realistic Vision in mind when he wrote in his first inaugural address in March of 1861, on the eve of the bloodiest conflict in our nation’s history that he knew would unleash the demons within: “Though passion may have strained, it must not break our bonds of affection. The mystic chords of memory, stretching from every battlefield and patriot grave to every living heart and hearthstone all over this broad land, will yet swell the chorus of the Union, when again touched, as surely they will be, by the better angels of our nature.”5

References
  1. Sowell, Thomas. 1987. A Conflict of Visions: Ideological Origins of Political Struggles. New York: Basic Books, 24-25.
  2. Pinker, Steven. 2002. The Blank Slate: The Modern Denial of Human Nature. New York: Viking, 290-291.
  3. I present this data in much greater detail in two of my books: Shermer, Michael. 2003. The Science of Good and Evil. New York: Henry Holt/Times Books. And: Shermer, Michael. 2008. The Mind of the Market. New York: Henry Holt/Times Books.
  4. Madison, James. 1788. “The Federalist No. 51: The Structure of the Government Must Furnish the Proper Checks and Balances Between the Different Departments.” Independent Journal, Wednesday, February 6.
  5. Inaugural Addresses of the Presidents of the United States. Washington, D.C.: U.S. G.P.O.: for sale by the Supt. of Docs., U.S. G.P.O., 1989; Bartleby.com, 2001. https://www.bartleby.com/124/.
Categories: Critical Thinking, Skeptic

The Conversation Gets it Wrong on GMOs

neurologicablog Feed - Mon, 12/18/2023 - 5:14am

Even high-quality media outlets will get it wrong from time to time. I notice this tends to happen when there is a mature and sophisticated propaganda campaign that has had enough time and reach to essentially gaslight a major portion of the public, and where particular expertise is required to understand why the propaganda is false. This is true, for example, for acupuncture, where even medical experts don’t have sufficient topic expertise to know why the claims being made are largely pseudoscience.

Where there is arguably the biggest gap between the scientific evidence and public opinion is genetically modified organisms (GMOs). There has been a well-funded and unfortunately successful campaign to unfairly and unscientifically demonize GMO technology, largely funded by the organic lobby but also environmental groups. Scientific pushback has ameliorated this somewhat. Further, the more time that goes by without the predicted “GMO apocalypse” the less urgent the fearmongering seems. Plus, genetic engineering works and is safe and is producing results, and people may be just getting more comfortable with it over time.

But it seems to me that there are still some people who are stuck in the anti-GMO narrative, and they are making increasingly poor and unconvincing arguments to sustain their negative attitude. An example is a recent article in The Conversation – Genetically modified crops aren’t a solution to climate change, despite what the biotech industry says. The article is by Barbara Van Dyck, a longtime anti-GMO activist who has even participated in disruptions of field trials. Let’s dive into her recent article.

Her premise is that the biotech industry is overpromising on the ability for GMOs to adapt to and mitigate climate change, but to make this point she sets up a strawman. She writes:

“They argue that by enhancing crops’ resistance to drought or improving their ability to capture carbon, climate change may no longer seem such a daunting challenge.”

and later:

“But, perhaps most importantly, genetically modified plants aren’t the solution to the climate crisis.”

They argue nothing of the sort (read it for yourself), but she puts it in sufficiently flowery prose that it may seem to be a bit of hyperbole or just poetic license. But she is clearly setting up a weak argument to knock down: that GMOs are a “solution to climate change”. Nowhere in the paper do the authors argue this is a significant solution to climate change or will render it “less daunting”. They simply lay out the ways in which genetic engineering can be used to adapt to and mitigate climate change, and they make solid arguments, so she has to exaggerate their claims in order to make it seem as if they are overpromising. You could do the same thing for any of the hundreds of approaches that moderately contribute to mitigating climate change. None of them, by themselves, are “the solution” or make climate change “less daunting”.

Another type of argument she makes is essentially self-referential – justifying the anti-GMO movement because of the attitudes that the anti-GMO movement helped create. She also seems upset that the biotech industry is fighting back against the propaganda:

“The first step was to rebrand the techniques they are using, aiming to distance themselves from the bad reputation of genetic modification. Biotech firms started to use more innocent terms like gene editing and precision breeding instead.”

But she and other anti-GMO activists created the “bad reputation” – it’s not based on science or reality, and in fact is highly disconnected from the opinion of scientists. So she is upset that the biotech industry is moving away from the terms that she and others worked so hard to unfairly demonize. These terms are not inherently more “innocent”; they just haven’t been targeted for the last 20 years by the anti-GMO movement. But she tries to make it seem like there is some deception at work by the biotech industry by using terms like “gene editing”. How is “gene editing” not a completely technically accurate description of the technology?

With regard to “precision breeding”, I looked up the technical definition. It is gene editing, but “these changes must be equivalent to those that could have been made using traditional plant or animal breeding methods.” Yet her criticism is based on the notion that new genetic techniques can alter the traits of a species “to an extent that would be impossible, or at least very unlikely, using conventional breeding.” Well, then it’s not precision breeding; it’s gene editing. She appears to be the one playing loose with the terms here.

Her main argument against GMOs as a technology that can help adapt to and mitigate climate change is:

“These crops are designed for an agricultural model centred on the large-scale cultivation of single crop varieties destined for the global market.

This agricultural model relies on staggering amounts of fuel for distribution and places farmers in a state of dependence on heavy machinery and farm inputs (like artificial fertilisers and pesticides) derived from fossil fuels.”

This is a form of bait and switch. This problem has nothing directly to do with GMOs or genetic engineering, but with agricultural systems. This is the same logical problem as with people who argue against GMOs because they don’t like the fact that they are patented. But genetic engineering technology is largely agnostic toward agricultural systems. The former is a tool, the latter is an application. If she wants to argue for a decentralized food production system that relies less on monocropping, go ahead. I also happen to think this is folly, and cannot feed the world, or will rely on a dramatic increase in land usage. There is a reason why we make most of our calories through monocropping. But I am open to making adjustments to make agriculture more sustainable.

None of this has anything to do with GMOs or GE technology. She argues:

“Typical examples include patents for soybeans with increased protein content, waxy corn, or rice that is tolerant to herbicides. These crops are designed for an agricultural model centred on the large-scale cultivation of single crop varieties destined for the global market.”

Really? How does increasing the protein content of soybeans specifically apply to monocropping for a global market? How would it be incompatible with another approach to agriculture? Even herbicide tolerance, the favorite boogeyman of the anti-GMO crowd, could be used with integrated pest management or any sustainable system you wish to favor. It may not be compatible with organic farming, which I think is the real point for many, but “organic” farming is pure marketing (with a lot of pseudoscience) and should not be confused with sustainable farming.

Finally, she appeals to the precautionary principle, which is the governing philosophy for the EU, and which I think is not appropriate and is over-applied. In the US we take a more risk vs benefit model, which I think is more practical. But sure, there is a conversation to be had about how best to balance the needs of ensuring a safe food supply without unnecessarily hindering an industry. Right now I think GMO regulations are unnecessarily onerous, and based largely on fearmongering. So far there has been a grand total of zero humans harmed by GMOs. The totality of the scientific research shows that the technology is largely safe. But sure, having some safety net for each individual new crop introduced to the food system is a good idea, as long as it is proportional to the risk.

But we also have to put this into context, and this is what anti-GMO activists fail to do consistently. One big problem with the GMO term is that it is arbitrary and unfair. Why should certain types of genetic engineering be subject to far more scrutiny and regulation than, say, mutation farming or forced hybridization? In fact, GMO technology is sometimes safer, because we can predict the outcomes better. Mutation farming literally uses radiation or chemicals to mutate crops, and then selects the lucky improvement out of a large bunch. Talk about unintended changes. But apparently mutation farming is fine, while turning off one specific gene is inherently risky.

That is a main problem with the anti-GMO rhetoric. Non GMO plants can be patented, and have the same risks of introducing new food into the ecosystem. There really is no reason to think that GE technology is inherently riskier, and in fact there have been instances of unsafe non-GMO foods being introduced. So far, GMOs have a better track record. The alleged problems that activists point to are not unique to GMOs, nor are they inherent or universal to GMOs. But “GMO” is the term they spent billions demonizing, and they don’t want to give up all that marketing.

Meanwhile, here are the traits that they talk about in the paper Van Dyck linked to:

  • Abiotic stress – making crops more heat- and drought-resistant
  • Resistance to pests and pathogens (absolutely necessary for agriculture, no matter what system you use)
  • Increased carbon fixation
  • Improved efficiency of photosynthesis
  • De novo domestication – to increase the biodiversity of our food system
  • Making crops more sustainable through seed bioengineering – allowing more kinds of crops to meet our nutritional needs

Sounds like a horror story, right? How dare they? She can’t really make an argument against any proposed GE technology, so she has to attack strawmen and false targets. This was a total fail for The Conversation, because they printed an article from an activist at one end of the ideological spectrum, not someone who could provide a neutral expert perspective.


The post The Conversation Gets it Wrong on GMOs first appeared on NeuroLogica Blog.


The Skeptics Guide #962 - Dec 16 2023

Skeptics Guide to the Universe Feed - Sat, 12/16/2023 - 8:00am
Interview with Jessica McCabe of How to ADHD; Special Segment: Zelle Scams; News Items: Neuromorphic Supercomputer, Sodium Ion Batteries, Lab Grown Coffee, Lunar Anthropocene; Who's That Noisy; Your Questions and E-mails: Nazi Synthetic Fuel; Who Said That; Science or Fiction

Michael Greger — How Not to Age

Skeptic.com feed - Sat, 12/16/2023 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss394_Michael_Greger_2023_11_13.mp3 Download MP3

When Dr. Michael Greger, founder of NutritionFacts.org, dove into the top peer-reviewed anti-aging medical research, he realized that diet could regulate every one of the most promising strategies for combating the effects of aging. We don’t need Big Pharma to keep us feeling young―we already have the tools. In How Not to Age, the internationally renowned physician and nutritionist breaks down the science of aging and chronic illness and explains how to help avoid the diseases most commonly encountered in our journeys through life.

Physicians have long treated aging as a malady, but getting older does not have to mean getting sicker. There are eleven pathways for aging in our bodies’ cells and we can disrupt each of them. Processes like autophagy, the upcycling of unusable junk, can be boosted with spermidine, a compound found in tempeh, mushrooms, and wheat germ. Senescent “zombie” cells that spew inflammation and are linked to many age-related diseases may be cleared in part with quercetin-rich foods like onions, apples, and kale. And we can combat the effects of aging without breaking the bank. Why spend a small fortune on vitamin C and nicotinamide facial serums when you can make your own for up to 2,000 times less?

Inspired by the dietary and lifestyle patterns of centenarians and residents of “Blue Zone” regions where people live the longest, Dr. Greger presents simple, accessible, and evidence-based methods to preserve the body functions that keep you feeling youthful, both physically and mentally. Brimming with expertise and actionable takeaways, How Not to Age lays out practical strategies for achieving ultimate longevity.

A founding member and Fellow of the American College of Lifestyle Medicine, Michael Greger, MD, is a physician, New York Times bestselling author, and internationally recognized speaker on nutrition, food safety, and public health issues. He has lectured at the Conference on World Affairs, testified before Congress, and was invited as an expert witness in the defense of Oprah Winfrey in the infamous “meat defamation” trial. In 2017, Dr. Greger was honored with the ACLM Lifestyle Medicine Trailblazer Award. He is a graduate of Cornell University School of Agriculture and Tufts University School of Medicine. His first book How Not to Die became an instant New York Times Best Seller. He is also the author of The How Not to Die Cookbook, How Not to Diet, The How Not to Diet Cookbook, and How to Survive a Pandemic. He has videos on more than 2,000 health topics freely available at NutritionFacts.org, with new videos and articles uploaded every day. All proceeds he receives from his books, DVDs, and speaking engagements are donated to charity.

Shermer and Greger discuss:

  • low trust in medicine and public health post-Covid
  • why we age and die
  • Is aging a disease?
  • No one dies from old age?
  • lifespan vs. healthspan vs. life expectancy
  • longevity escape velocity
  • leading causes of death: heart disease, cancer, stroke, dementia
  • aging in the Paleolithic vs. Civilization
  • how to determine causality in aging science: genes, environment, diet, and luck
  • dietary and nutrition fads in history
  • the diet and anti-aging industry: multibillion-dollar behemoths: $88 billion US, $292 billion worldwide
  • diet and beverages
  • what centenarians eat
  • Mediterranean Diet
  • Okinawan Diet
  • Red, White, and Blue Zones
  • plant-based eating
  • meat, dairy, eggs, etc.
  • lifestyle
  • exercise
  • weight control
  • sleep
  • stress
  • the Anti-Aging 8: nuts, greens, berries, prebiotics and postbiotics
  • cholesterol and statins
  • vaccines: shingles, flu, etc.
  • brain supplements
  • sunscreen
  • sunglasses and UV
  • alcohol
  • Alzheimer’s and dementia
  • social ties, friendships, and marriage
Excerpt from the Book

Even if all forms of cancer were eliminated, the average life expectancy in the United States would only go up about three years. Because dodging cancer would just mean delaying death from something like a heart attack or stroke. If one age-related ailment doesn’t get us, another will. Rather than playing “whack-a-mole” by tackling each disease separately, progress in decelerating aging could address all these issues simultaneously.

How long do you wish to live: 85, 120, 150, or indefinitely? Two-thirds said 85. But when the question was reframed as “How long do you wish to live in guaranteed mental and physical health?”, the most popular answer switched to an unlimited lifespan.

Only about 18 percent of people can be described as undergoing “successful aging.” Studies have found the prevalence of multimorbidity, the coexistence of multiple chronic diseases, ranges between 55 percent and 98 percent among older individuals. By age eighty-five, more than 90 percent may have at least one disease and, on average, about four diseases. And just like 85 percent of cancer patients tend to overestimate their survival, so, too, do those with other chronic diseases. Those suffering from heart failure or chronic obstructive lung diseases like emphysema are about three times more likely to die within the subsequent year than they predicted.

A twenty-year-old in 1998 could expect to live about fifty-eight more years, while a twenty-year-old in 2006 could look forward to fifty-nine more years. However, the twenty-year-old from the 1990s might live ten of those years with chronic disease, whereas now it’s more like thirteen years. So it feels like one step forward, three steps back. The researchers also noted that we’re living two fewer functional years—that is, years in which we’re still able to perform basic life activities, such as walking a quarter of a mile, standing or sitting for two hours without having to lie down, or standing without special equipment.

In terms of life expectancy, the United States ranked down around 27 or 28 out of the 34 top free-market democracies. People in Slovenia live longer than we do. That was in 2010, down from ranking 20th in 1990. More recently, U.S. life expectancy dipped to 43rd in the world and is expected to drop to 64th by 2040, despite spending trillions on healthcare a year, more than anyone else around the globe. The problem isn’t healthcare access. The Mayo Clinic estimates that nearly 70 percent of Americans are on prescription drugs. The problem is that those trillions in healthcare spending aren’t addressing the root cause. The leading risk factor for death in the United States is what we eat. It’s the food. The standard American diet is just to die for. Literally.

According to one industry group, 60 percent of Americans sixty-five and older are pursuing anti-aging interventions, yet, according to the director of the Institute for Biomedical Aging Research, in almost all instances, these interventions are not supported by science. They sound like they are, though. Scientific breakthroughs exploited by the sensationalist press have long been opportunistically repackaged by profiteers.

“Simply put,” the American Academy of Anti-Aging Medicine’s official response to the criticism read, “the death cult of gerontology desperately labors to sustain an arcane, outmoded stance that aging is natural and inevitable.”

The odds of living to age one hundred have risen from approximately one in twenty million to as high as one in fifty. Why do some make it to their hundredth birthday but others don’t? It’s not just a matter of picking better parents. Studies following identical twins suggest that no more than 20 to 30 percent of the variance in lifespan is explained by gene inheritance.

Categories: Critical Thinking, Skeptic

Solution Aversion Fallacy

neurologicablog Feed - Fri, 12/15/2023 - 5:05am

I like to think deeply about informal logical fallacies. I write about them a lot, and even have an occasional segment of the SGU dedicated to them. They are a great way to crystallize our thinking about the many ways in which logic can go wrong. Formal logic deals with arguments that are always true, by their very construction. If A=B and B=C then A=C is always true. Informal logical fallacies, on the other hand, are context dependent. They are a great way to police the sharpness of arguments, but they require a lot of context-dependent judgment.

For example, the Argument from Authority fallacy recognizes that a fact claim is not necessarily true just because an authority states it, but it could be true. And recognizing meaningful authority is useful, it’s just not absolute. It’s more of a probability and the weight of opinion. But authority itself does not render a claim true or false. The same is true of the Argument ad Absurdum, or the Slippery Slope fallacy – they are about taking an argument to an unjustified extreme. But what’s “extreme”? This subjectivity might cause one to think that they are therefore not legitimate logical fallacies at all, but that’s just the False Continuum logical fallacy – the notion that if there is not a sharp demarcation line somewhere along a spectrum, then we can ignore or deny the extremes of the spectrum.

I recently received the following question about a potential informal fallacy:

I’ve noticed this more and more with politics. Party A proposes a ludicrous solution to an issue. Party B objects to the policy. Party A then accuses Party B of being in favour of the issue. It’s happening in the immigration debate in the UK where the government are trying to deport asylum seekers to Rwanda (current capacity around 200 per year) in order to solve our “record levels of immigration” (over 700,000 this year of which less than 50,000 are asylum seekers). When you object to the policy you are accused of being in support of unlimited immigration.

This feels like something wider, that may even be a logical fallacy, but I’ve not been able to locate anything describing it. I can imagine it comes up in Skepticism relatively often.

When trying to define a potential fallacy we need to first acknowledge the lumper/splitter problem. This comes up with many attempts at categorization – how finely do we chop up the categories? Do we have big categories with lots of variation within each type, or do we split them up by every small nuanced difference? In medicine, for example, this comes up all the time in terms of naming diseases. Is ALS one disease? If there are subtypes, how do we break them up? How big a difference is enough to justify a new designation?

The same is true of the informal logical fallacies. We can have very general types of fallacies, or split them up into specific variations. I tend to remain agnostic on this particular debate and embrace both approaches. I like to start as big as possible and then focus in – so I both lump and then split. I find it helps me think about the core problem with any particular argument. I fit what the e-mailer is describing above into a category that includes the basic strategy of saying something unfairly negative about a position you want to argue against, and then falsely concluding that the position must be wrong, or at least unsavory. I tend to use the term “Strawman Fallacy” to refer generically to this strategy – trying to argue against an unfairly weakened version of a position.

But there are variations that are worth noting. Poisoning the Well is very similar – attaching the position to another position, or an ideology or even a specific person that is wrong or unpopular. One version of this is so common it has received its own name – the Argument ad Hitlerum (now that’s splitting). Well, you believe that, and so did Hitler, so, you know, there’s that.

The Argument ad Absurdum may also fall under the Strawman category. This is when you take a position and extrapolate it to the extreme. You believe in free speech? Well, what if someone constantly threatened to murder you and your family in horrific ways, is that speech protected? This is a legitimate way to stress-test any position. But this strategy can also be perverted. You can say – you are in favor of free speech, therefore you think that fraud and violence are OK. Or – you want to lower the drinking age to 18? What’s next, allowing 10-year-olds to drive?

What we have above, saying that if you oppose a specific proposed solution you must therefore support the identified “problem” or not think it’s a big deal, is a type of Strawman fallacy. I don’t think it’s an Argument ad Absurdum, however, although it can have that flavor. It may be its own subtype of Strawman. We encounter two versions of this – one has already been recognized as “solution aversion” – if you don’t like the solution, then you deny the problem. Or, you use a strawman version of a proposed solution to argue that the problem is not real or serious. We see this a lot with global warming. The “Libs” just want to take over industry and stop everyone from eating steak and driving cars, therefore global warming isn’t real.

This is more of a “solution strawman” – as the e-mailer says, if you don’t like my solution, then you must support the problem. You don’t think we should have armed police officers on every street corner? You must not care about crime. You are “pro-crime”. This type of argument is extremely common in politics, and I think it is mostly deliberate. This is what politicians consider “spin” – a deliberate strategy, a way of painting your opponents in the most negative light possible. It really becomes a fallacy when partisan members of the public internalize the argument as if it were valid.

We also have terms for when a logical fallacy is raised to extreme levels. When a series of strawman fallacies and poisoning the well strategies are deployed against a person or group, we call that “demonizing” or creating a “cardboard villain”. Recently another term has become popular – when there is an organized campaign of deploying multiple fallacies combined with misinformation and disinformation to create an entire false reality, we call that “gaslighting”.

It’s a shame we need to have terms for all of these things, but understanding them is the first line of defense against them. Most important of all, we need to recognize them in ourselves, so that we don’t unknowingly fall into them. But knowledge cuts both ways – it also gives some people a greater ability to deliberately deploy these fallacies as a political strategy. All the more reason we must recognize and defend against them.

The post Solution Aversion Fallacy first appeared on NeuroLogica Blog.

Categories: Skeptic

Deep South – A Neuromorphic Supercomputer

neurologicablog Feed - Thu, 12/14/2023 - 5:04am

Australian researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University have announced they are building what they are calling Deep South (a play on IBM’s Deep Blue). This will be the world’s largest neuromorphic supercomputer, capable of 228 trillion synaptic operations per second. This won’t be the fastest supercomputer in the world, which is currently reaching the exascale with quintillions of operations per second. So then what’s the big deal? It probably has something to do with the “neuromorphic” part.

The basic definition of a neuromorphic computer is one based on or inspired by the design of biological systems – something more closely resembling neurons and synapses. Conventional computers have central processing units and separate memory storage. But they seem to get the job done. Neuromorphic computers have components that function more like neurons and synapses which are both memory and processing at the same time. Also, the process is massively parallel and distributed. But if this is not necessarily faster than conventional computers, why bother?

There are two reasons, but the first is efficiency. The human brain is a powerful computer, and yet it runs on only 20 watts of power. That is vastly more energy efficient than any computer, and researchers believe this is because of the architecture. Right now specific computer applications use the energy of small countries. Crypto uses 127 terawatt hours per year, more than Norway. Data centers use 340 TWh. A ChatGPT query uses 15 times the energy of a normal Google query, and AI is likewise already getting to small-country levels of energy use. The AI revolution, whatever else you may think about it, is energy hungry.

By one recent estimate, using a neuromorphic computer to process the same information is 4-16 times more energy efficient than a non-neuromorphic system. If we take an average figure and say that neuromorphic computers would use about 10% of the energy of conventional computers, that is a massive change. There is obviously a cost efficiency here, but also this could be a major efficiency breakthrough in terms of mitigating global warming. The most environmentally friendly energy is the energy you don’t use. It is hard to overestimate the potential benefit here, especially as AI systems are ramping up, complete with massive energy demand.
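As a rough back-of-the-envelope sketch of that estimate (using the figures cited above, and assuming “N times more energy efficient” simply means using 1/N the energy for the same workload):

```python
def neuromorphic_energy_twh(conventional_twh: float, efficiency_factor: float) -> float:
    """Energy a neuromorphic system would use for the same workload,
    treating 'N times more efficient' as 1/N the energy."""
    return conventional_twh / efficiency_factor

data_centers_twh = 340  # rough annual data-center usage cited above

# 4x to 16x more efficient -> between 25% and ~6% of conventional energy use
best_case = neuromorphic_energy_twh(data_centers_twh, 16)
worst_case = neuromorphic_energy_twh(data_centers_twh, 4)
print(f"Same workload: {best_case:.2f}-{worst_case:.2f} TWh instead of {data_centers_twh} TWh")
```

That range (roughly 6–25 percent of conventional energy) is why “about 10% of the energy” is a reasonable average figure.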

This relates also to the other benefit of the neuromorphic design – there are some applications where they are computationally more efficient than conventional computers, not just energy efficient but faster and more powerful. You can, theoretically, simulate any computational system virtually, but that can take a massive amount of computing power and is slower. That is basically doing it the hard way (by, ironically, doing it in software rather than hardware). Designing the hardware for the specific application is just more powerful and efficient in every way.

And guess which applications the neuromorphic design is optimal for – many types of AI computing.

Deep South will be operational by April 2024, so we don’t have to wait long to see it come online. I will then want to track how it performs, both in terms of computing power and energy efficiency. If it turns out to be a successful experiment, which I hope it is, then perhaps this will accelerate massive adoption of neuromorphic computing, starting with the applications where it makes the most sense.

As a side note, I wonder if this will take some pressure off the GPU market – graphics processing units. It turns out that GPUs are fastest when it comes to certain kinds of computing (like crypto), and so suddenly all the high-end graphics cards were being bought up for these purposes, leaving us poor gamers in the lurch. Graphics cards are now harder to get and more expensive as a result. A lot of this is end users (crypto miners) who won’t be affected by neuromorphic computing (not anytime soon). But a lot is also large data centers, which can transition to neuromorphic systems.

It will be interesting to see how this pans out. But I like that we have this technological option, and it does offer some hope of avoiding the coming AI energy apocalypse.

The post Deep South – A Neuromorphic Supercomputer first appeared on NeuroLogica Blog.

Categories: Skeptic

Say Goodbye to 2023

Skeptoid Feed - Thu, 12/14/2023 - 2:00am

Help us say goodbye to 2023, Skeptoid style.

Categories: Critical Thinking, Skeptic

The Kill Your Brother Game: Playful Dramas & Unintended Consequences of Censorship

Skeptic.com feed - Wed, 12/13/2023 - 12:00am

During her sojourns among the Inuit throughout the 1960s and 70s, pioneering anthropologist Jean Briggs observed some peculiar parenting practices. In a chapter she contributed to The Anthropology of Peace and Nonviolence, a collection of essays from 1994, Briggs describes various methods the Inuit used to reduce the risk of physical conflict among community members. Foremost among them was the deliberate cultivation of modesty and equanimity, along with a penchant for reframing disputes or annoyances as jokes. “An immodest person or one who liked attention,” Briggs writes, “was thought silly or childish.” Meanwhile, a critical distinction held sway between seriousness and playfulness. “To be ‘serious’ had connotations of tension, anxiety, hostility, brooding,” she explains. “On the other hand, it was highest praise to say of someone: ‘He never takes anything seriously’.”1 The ideal then was to be happy, jocular, and even-tempered.

This distaste for displays of anger applied in the realm of parenting as well. No matter how unruly children’s behavior, adults would refrain from yelling at them. So, it came as a surprise to Briggs that Inuit adults would often purposely instigate conflicts among the children in their charge. One exchange Briggs witnessed involved an aunt taking her three-year-old niece’s hand and putting it in another child’s hair while telling her to pull it. When the girl refused, the aunt gave it a tug herself. The other child, naturally enough, turned around and hit the one she thought had pulled her hair. A fight ensued, eliciting laughter and cheers from the other adults, who intervened before anyone was hurt. None of the other adults who witnessed this incident seemed to think the aunt had done anything wrong.

On another occasion, Briggs witnessed a mother picking up a friend’s baby and saying to her own nursling, “Shall I nurse him instead of you?” The other mother played along, offering her breast to the first woman’s baby, saying, “Do you want to nurse from me? Shall I be your mother?”2 The nursling shrieked in protest, and both mothers burst into laughter. Briggs witnessed countless more of what she calls “playful dramas” over the course of her research. Westerners might characterize what the adults were doing in these cases as immature, often cruel pranks, even criminal acts of child abuse. What Briggs came to understand, however, was that the dramas served an important function in the context of Inuit culture. Tellingly, the provocations didn’t always involve rough treatment or incitements to conflict but often took the form of outrageous or disturbing lines of questioning. This approach is reflected in the title of Briggs’s chapter, “‘Why Don’t You Kill Your Baby Brother?’ The Dynamics of Peace in Canadian Inuit Camps.” However, even these gentler sessions were more interrogation than thought experiment, the clear goal being to arouse intense emotions in the children.

From interviews with adults in the communities hosting her, Briggs gleaned that the purpose of these dramas was to force children to learn how to handle difficult social situations. The term they used is isumaqsayuq, meaning “to cause thought,” which Briggs notes is a “central idea of Inuit socialization.” “More than that,” she goes on, “and as an integral part of thought, the dramas stimulate emotion.” The capacity for clear thinking in tense situations—and for not taking the tension too seriously—would help the children avoid potentially dangerous confrontations. Briggs writes:

The games were, themselves, models of conflict management through play. And when children learned to recognize the playful in particular dramas, people stopped playing those games with them. They stopped tormenting them. The children had learned to keep their own relationships smoother—to keep out of trouble, so to speak— and in doing so, they had learned to do their part in smoothing the relationships of others.3

The parents, in other words, were training the children, using simulated and age-calibrated dilemmas, to develop exactly the kind of equanimity and joking attitude they would need to mature into successful adults capable of maintaining a mostly peaceful society. They were prodding at the kids’ known sensitivities to teach them not to take themselves too seriously, because taking yourself too seriously makes you apt to take offense, and offense can often lead to violence.

Are censors justified in their efforts at protecting children from the wrong types of lessons?

The Inuit’s aversion to being at the center of any drama and their penchant for playfulness in potentially tense encounters are far removed from our own culture. Yet their approach to socialization relies on an insight that applies universally, one that’s frequently paid lip service in the West but even more frequently lost sight of. Anthropologist Margaret Mead captures the idea in her 1928 ethnography Coming of Age in Samoa, writing, “The children must be taught how to think, not what to think.”4 People fond of spouting this truism today usually intend to communicate something diametrically opposite to its actual meaning, with the suggestion being that anyone who accepts rival conclusions must have been duped by unscrupulous teachers. However, the crux of the insight is that education should not focus on conclusions at all. Thinking is not about memorizing and being able to recite facts and propositions. Thinking is a process. It relies on knowledge to be sure, but knowledge alone isn’t sufficient. It also requires skills.

Cognitive psychologists label knowing that and knowing how as declarative and procedural knowledge, respectively.5 Declarative knowledge can be imparted by the more knowledgeable to the less knowledgeable—the earth orbits the sun—but to develop procedural knowledge or skills you need practice. No matter how precisely you explain to someone what goes into riding a bike, for instance, that person has no chance of developing the requisite skills without at some point climbing on and pedaling. Skills require training, which to be effective must incorporate repetition and feedback.

What the Inuit understood, perhaps better than most other cultures, is that morality plays out far less in the realm of knowing what than in the realm of knowing how. The adults could simply lecture the children about the evils of getting embroiled in drama, but those children would still need to learn how to manage their own aggressive and retributive impulses. And explaining that the most effective method consists of reframing slights as jokes is fine, but no child can be expected to master the trick on first attempt. So it is with any moral proposition. We tell young children it’s good to share, for instance, but how easy is it for them to overcome their greedy impulses? And what happens when one moral precept runs up against another? It’s good to share a toy sword, but should you hand it over to someone you suspect may use it to hurt another child? Adults face moral dilemmas like this all the time. It’s wrong to cheat on your spouse, but what if your spouse is controlling and threatens to take your children if you file for divorce? It’s good to be honest, but should you lie to protect a friend? There’s no simple formula that applies to the entire panoply of moral dilemmas, and even if there were, it would demand herculean discipline to implement.

Unfortunately, Western children have a limited range of activities that provide them opportunities to develop their moral skillsets. Perhaps it’s testament to the strength of our identification with our own moral principles that few of us can abide approaches to moral education that are in any regard open-ended. Consider children’s literature. As I write, political conservatives in the U.S. are working to impose bans on books6 they deem inappropriate for school children. Meanwhile, more left-leaning citizens are being treated to PC bowdlerizations7 of a disconcertingly growing8 list of classic books. One side is worried about kids being indoctrinated with life-deranging notions about race and gender. The other is worried about wounding kids’ and older readers’ fragile psyches with words and phrases connoting the inferiority of some individual or group. What neither side appreciates is that stories can’t be reduced to a set of moral propositions, and that what children are taught is of far less consequence than what they practice.

Do children’s books really have anything in common with the playful dramas Briggs observed among the Inuit? What about the fictional stories adults in our culture enjoy? One obvious point of similarity is that stories tend to focus on conflict and feature high-stakes moral dilemmas. The main difference is that reading or watching a story entails passively witnessing the actions of others, as opposed to actively participating in the plots. Nonetheless, the principle of isumaqsayuq comes into play as we immerse ourselves in a good novel or movie. Stories, if they’re at all engaging, cause us to think. They also arouse intense emotions. But what could children and adults possibly be practicing when they read or watch stories? If audiences were simply trying to figure out how to work through the dilemmas faced by the protagonists, wouldn’t the outcome contrived by the author represent some kind of verdict, some kind of lesson? In that case, wouldn’t censors be justified in their efforts at protecting children from the wrong types of lessons?

To answer these questions, we must consider why humans are so readily held rapt by fictional narratives in the first place. If the events we’re witnessing aren’t real, why do we care enough to devote time and mental resources to them? The most popular stories, at least in Western societies, feature characters we favor engaging in some sort of struggle against characters we dislike—good guys versus bad guys. In his book Just Babies: The Origins of Good and Evil, psychologist Paul Bloom describes a series of experiments9 he conducted with his colleague Karen Wynn, along with their then graduate student Kiley Hamlin. They used what he calls “morality plays” to explore the moral development of infants. In one experiment, the researchers had the babies watch a simple puppet show in which a tiger rolls a ball to one rabbit and then to another. The first rabbit rolls the ball back to the tiger and a game ensues. But the second rabbit steals away with the ball at first opportunity. When later presented with both puppets and encouraged to reach for one to play with, the babies who had witnessed the exchanges showed a strong preference for the one who had played along. What this and several related studies show is that by as early as three months of age, infants start to prefer characters who are helpful and cooperative over those who are selfish and exploitative.

That such a preference would develop so early and so reliably in humans makes a good deal of sense in light of how deeply dependent each individual is on other members of society. Throughout evolutionary history, humans have had to cooperate to survive, but any proclivity toward cooperation left them vulnerable to exploitation. This gets us closer to the question of what we’re practicing when we enjoy fiction. In On the Origin of Stories: Evolution, Cognition, and Fiction, literary scholar Brian Boyd points out that animals’ play tends to focus on activities that help them develop the skills they’ll need to survive, typically involving behaviors like chasing, fleeing, and fighting. When it comes to what skills are most important for humans to acquire, Boyd explains:

Even more than other social species, we depend on information about others’ capacities, dispositions, intentions, actions, and reactions. Such “strategic information” catches our attention so forcefully that fiction can hold our interest, unlike almost anything else, for hours at a stretch.10

Fiction, then, can be viewed as a type of imaginative play that activates many of the same evolved cognitive mechanisms as gossip, but without any real-world stakes. This means that when we’re consuming fiction, we’re not necessarily practicing to develop equanimity in stressful circumstances as do the Inuit; we’re rather honing our skills at assessing people’s proclivities and weighing their potential contributions to our group. Stories, in other words, activate our instinct for monitoring people for signals of selfish or altruistic tendencies, while helping us develop the underlying skillset. The result of this type of play would be an increased capacity for cooperation, including an improved ability to recognize and sanction individuals who take advantage of cooperative norms without contributing their fair share.

Ethnographic research into this theory of storytelling is still in its infancy, but the anthropologist Daniel Smith and his colleagues have conducted an intensive study11 of the role of stories among the Agta, a hunter-gatherer population in the Philippines. They found that 70 percent of the Agta stories they collected feature characters who face some type of social dilemma or moral decision, a theme that appears roughly twice as often as interactions with nature, the next most common topic. It turned out, though, that separate groups of Agta invested varying levels of time and energy in storytelling. The researchers saw this as an opportunity to see what the impact of a greater commitment to stories might be. In line with the evolutionary account laid out by Boyd and others, the groups that valued storytelling more outperformed the other groups in economic games that demand cooperation among the players. This would mean that storytelling improves group cohesion and coordination, which would likely provide a major advantage in any competition with rival groups. A third important finding from this study is that the people in these groups knew who the best storytellers were, and they preferred to work with these talented individuals on cooperative endeavors, including marriage and childrearing. This has obvious evolutionary implications.

What do children learn from parents’ concern that single words may harm or corrupt them?

Remarkably, the same dynamics at play in so many Agta tales are also prominent in classic Western literature. When literary scholar Joseph Carroll and his team surveyed thousands of readers’ responses to characters in 200 novels from authors like Jane Austen and Charles Dickens, they found that people see in them the basic dichotomy between altruists and selfish actors. They write:

Antagonists virtually personify Social Dominance—the self-interested pursuit of wealth, prestige, and power. In these novels, those ambitions are sharply segregated from prosocial and culturally acquisitive dispositions. Antagonists are not only selfish and unfriendly but also undisciplined, emotionally unstable, and intellectually dull. Protagonists, in contrast, display motive dispositions and personality traits that exemplify strong personal development and healthy social adjustment. They are agreeable, conscientious, emotionally stable, and open to experience.12

Interestingly, openness to experience may be only loosely connected to cooperativeness and altruism, just as humor is only tangentially related to peacefulness among the Inuit. However, being curious and open-minded ought to open the door to the appreciation of myriad forms of art, including different types of literature, leading to a virtuous cycle. So, the evolutionary theory, while focusing on cooperation, leaves ample room for other themes, depending on the cultural values of the storytellers.

In a narrow sense then, cooperation is what many, perhaps most, stories are about, and our interest in them depends to some degree on our attraction to more cooperative, less selfish, individuals. We obsessively track the behavior of our fellow humans because our choices of who to trust and who to team up with are some of the most consequential in our lives. This monitoring compulsion is so powerful that it can be triggered by opportunities to observe key elements of people’s behavior—what they do when they don’t know they’re being watched—even when those people don’t exist in the real world. But what keeps us reading or watching once we’ve made our choices of which characters to root for? And, if one of the functions of stories is to help us improve our social abilities, what mechanism provides the feedback necessary for such training to be effective?

In Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction, literary scholar William Flesch theorizes that our moment-by-moment absorption in fictional plots can be attributed to our desire to see cooperators rewarded and exploiters punished. Citing experiments that showed participants were willing to punish people they had observed cheating other participants—even when the punishment came at a cost13 to the punishers—Flesch argues that stories offer us opportunities to demonstrate our own impulse to enforce norms of fair play. Within groups, individual members will naturally return tit for tat when they’ve been mistreated. For a norm of mutual trust to take hold, however, uninvolved third parties must also be willing to step in to sanction violators. Flesch calls these third-party players “strong reciprocators” because they respond to actions that aren’t directed at them personally. He explains that

the strong reciprocator punishes or rewards others for their behavior toward any member of the social group, and not just or primarily for their individual interactions with the reciprocator.14

His insight here is that we don’t merely attend to people’s behavior in search of clues to their disposition. We also watch to make sure good and bad alike get their just deserts. And the fact that we can’t interfere in the unfolding of a fictional plot doesn’t prevent us from feeling that we should. Sitting on the edge of your seat, according to this theory, is evidence of your readiness to step in.
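The logic of strong reciprocity can be made concrete with a little arithmetic. The sketch below is a toy model constructed purely for illustration (the payoffs, the fine, the number of observers, and the fraction of punishers are all invented, not taken from the experiments Flesch cites). It shows why cheating stops paying once enough third parties are willing to bear a cost to punish it.

```python
# Toy illustration (hypothetical numbers, not from the cited experiments):
# expected payoff of cheating vs. cooperating when some fraction of
# bystanders are "strong reciprocators" who pay a cost to fine cheaters
# they merely observe.
B_CHEAT = 3.0       # assumed gain from exploiting a partner
B_COOP = 2.0        # assumed gain from an honest exchange
FINE = 4.0          # fine each observing punisher imposes on a cheater
PUNISH_COST = 1.0   # cost a punisher pays to impose that fine

def expected_cheater_payoff(p_punisher, n_observers=5):
    """Cheating pays B_CHEAT, minus an expected fine from every
    third-party observer who happens to be a strong reciprocator."""
    expected_fines = n_observers * p_punisher * FINE
    return B_CHEAT - expected_fines

# With no third-party punishers, cheating beats cooperating...
assert expected_cheater_payoff(0.0) > B_COOP
# ...but even a modest share of strong reciprocators flips the incentive.
assert expected_cheater_payoff(0.2) < B_COOP
print(expected_cheater_payoff(0.0), expected_cheater_payoff(0.2))
```

Note that the punishers themselves end up worse off by `PUNISH_COST`, which is exactly why this behavior is puzzling from a narrowly self-interested standpoint, and why signaling a disposition to punish (the subject of the next paragraph) matters.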

Another key insight emerging from Flesch’s work is that humans don’t merely monitor each other’s behavior. Rather, since they know others are constantly monitoring them, they also make a point of signaling that they possess desired traits, including a disposition toward enforcing cooperative norms. Here we have another clue to why we care about fictional characters and their fates. It doesn’t matter that a story is fictional if a central reason for liking it is to signal to others that we’re the type of person who likes the type of person portrayed in that story. Reading tends to be a solitary endeavor, but the meaning of a given story paradoxically depends in large part on the social context in which it’s discussed. We can develop one-on-one relationships with fictional characters for sure, but part of the enjoyment we get from these relationships comes from sharing our enthusiasm and admiration with nonfictional others.

This brings us back to the question of where feedback comes into the social training we get from fiction. One feedback mechanism relies on the comprehensibility and plausibility of the plot. If a character’s behavior strikes us as arbitrary or counter to their personality as we’ve assessed it, then we’re forced to think back and reassess our initial impressions—or else dismiss the story as poorly conceived. A character’s personality offers us a chance to make predictions, and the plot either confirms or disproves them. However, Flesch’s work points to another type of feedback that’s just as important. The children at the center of Inuit playful dramas receive feedback from the adults in the form of laughter and mockery. They learn that if they take the dramas too seriously and thus get agitated, then they can expect to be ridiculed. Likewise, when we read or watch fiction, we gauge other audience members’ reactions, including their reactions to our own reactions, to see if those responses correspond with the image of ourselves we want to project. In other words, we can try on traits and aspects of an identity by expressing our passion for fictional characters who embody them. The outcome of such experimentation isn’t determined solely by how well the identity suits the individual fan, but also by how well that identity fits within the wider social group.

Parents worried that their children’s minds are being hijacked by ideologues will hardly be comforted by the suggestion that teachers and peers mitigate the impact of any book they read. Nor will those worried that their children are being inculcated with more or less subtle forms of bigotry find much reassurance in the idea that we’re given to modeling15 our own behavior on that of the fictional characters we admire. Consider, however, the feedback children receive from parents who respond to the mere presence of a book in a school library with outrage. What do children learn from parents’ concern that single words may harm or corrupt them?

Today, against a backdrop of increasing vigilance and protectiveness among parents, kids are graduating high school and moving on to college or the workforce with historically unprecedented rates of depression16 and anxiety,17 having had far fewer risky but rewarding experiences18 such as dating, drinking alcohol, getting a driver’s license, and working for pay. It’s almost as though the parents who should be helping kids learn to work through difficult situations by adopting a playful attitude have themselves become so paranoid and humorless that the only lesson they manage to impart is that the world is a dangerous place, one young adults with their fragile psyches can’t be trusted to navigate on their own.

Parents should, however, take some comfort from the discovery that even pre-verbal infants are able to pick out the good guys from the bad. As much as young Harry Potter fans discuss which Hogwarts House the Sorting Hat would place them in, you don’t hear19 many of them talking enthusiastically about how cool it was when Voldemort killed all those filthy Muggles. The other thing to keep in mind is that while some students may embrace the themes of a book just because the teacher assigned it, others will reject them for the same reason. It depends on the temperament of the child and the social group they hope to achieve status in.

This article appeared in Skeptic magazine 28.3

Should parents let their kids read just anything? We must acknowledge that books, like playful dramas, need to be calibrated to the maturity levels of the readers. However, banning books deemed dangerous deprives children not only of a new perspective. It deprives them of an opportunity to train themselves for the difficulties they’ll face in the upcoming stages of their lives. If you’re worried your child might take the wrong message from a story, you can make sure you’re around to provide some of your own feedback on their responses. Maybe you could even introduce other books to them with themes you find more congenial. Should we censor words or images—or cease publication of entire books—that denigrate individuals or groups? Only if we believe children will grow up in a world without denigration. Do you want your children’s first encounter with life’s ugliness to occur in the wild, as it were, or as they sit next to you with a book spread over your laps?

What should we do with great works by authors guilty of terrible acts? What about mostly good characters who sometimes behave badly? What happens when the bad guy starts to seem a little too cool? These are all great prompts for provoking thought and stirring emotions. Why would we want to take these training opportunities away from our kids? It’s undeniable that books and teachers and fellow students and, yes, even parents themselves really do influence children to some degree. That influence, however, may not always be in the intended direction. Parents who devote more time and attention to their children’s socialization can probably improve their chances of achieving desirable ends. However, it’s also true that the most predictable result of any effort at exerting complete control over children’s moral education is that their social development will be stunted.

About the Author

Dennis J. Junk holds degrees in anthropology and psychology and a Masters in British and American literature. His first book is He Borara: a Novel about an Anthropologist among the Yąnomamö.

References
  1. https://rb.gy/hjcpn
  2. Ibid.
  3. Ibid.
  4. Mead, M. (1928). Coming of Age in Samoa. William Morrow and Co.
  5. https://rb.gy/i7a0h
  6. https://rb.gy/sx5oh
  7. https://rb.gy/wq45k
  8. https://rb.gy/6vi33
  9. https://rb.gy/6w2nt
  10. Boyd, B. (2010). On the Origin of Stories: Evolution, Cognition, and Fiction. Belknap Press.
  11. https://rb.gy/n3sxn
  12. Carroll, J., Gottschall, J., Johnson, J.A., & Kruger, D. (2012). Graphing Jane Austen: The Evolutionary Basis of Literary Meaning. Palgrave Macmillan.
  13. https://rb.gy/1az75
  14. Flesch, W. (2008). Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction. Harvard University Press.
  15. https://rb.gy/dw9gt
  16. https://rb.gy/jgaf2
  17. https://rb.gy/c3gs4
  18. https://rb.gy/hejk8
  19. Vezzali, L., Stathi, S., Giovannini, D., Capozza, D., & Trifiletti, E. (2014). The Greatest Magic of Harry Potter: Reducing Prejudice. Journal of Applied Social Psychology, 45(2), 105–121.
Categories: Critical Thinking, Skeptic

Virtual Reality for Mice

neurologicablog Feed - Tue, 12/12/2023 - 5:02am

Scientists have developed virtual reality goggles for mice. Why would they do this? For research. The fact that it’s also adorable is just a side effect.

One type of neuroscience research is to expose mice in a laboratory setting to specific tasks or stimuli while recording their brain activity. An implant, for example, can measure brain activity while the mouse runs a maze. However, having the mouse run around an environment puts limits on the kind of real-time brain scanning you can do. So researchers have been using VR (virtual reality) for about 15 years to simulate an environment while keeping the mouse in a more controlled setting, allowing for better brain imaging.

However, this setup is also limiting. The VR is really just surrounding wrap-around screens. It is technically challenging to have overhead screens, because that is where the scanning equipment is, and there are still visual cues that the mouse is in a lab, not the virtual environment. So this is an imperfect setup.

The solution was to build tiny VR goggles for mice. The mouse does not wear the goggles like a human wears a VR headset. They can’t get them that small yet. Rather, the goggles are mounted, and the mouse is essentially placed inside the goggles while standing on a treadmill. The mouse can therefore run in place on the treadmill while keeping its head in the mounted VR goggles. This has several advantages over existing setups.

First, the VR experience is 3-D, because a different image can be presented to each eye. This makes the environment seem much more compelling and real. Further, the goggles take up the entire visual field of the mouse. There are no visual cues that the mouse is actually in a lab. Finally, the VR setup allows the researchers to present stimuli from anywhere, including overhead. This has actually allowed researchers to study the neurological response of the mouse to an overhead visual threat for the first time. This is an important system in the mouse brain, as reacting to overhead predators (like a raptor) is critical to survival.

This first study is an important proof-of-concept for this experimental setup, which seems to work really well. This will now make it much easier for researchers to ask a host of questions about how the mouse brain responds to specific situations. They can swap in any VR environment and stimuli. They next want to study how the mouse responds as a predator, hunting for flies to eat.

VR neuroscience research already exists in humans as well. In fact, it’s easier in humans because VR systems made for humans already exist. There are three basic types of research. The first is to explore the phenomenon of VR itself. It’s important to know if people respond to VR the same way they respond to reality; otherwise VR research would be introducing an artifact. I have discussed previously that the VR experience can be very real and visceral. My brain seems to completely buy into the reality that the VR experience is presenting, and reacts accordingly. In fact, a lot of initial neuroscience research using VR also used subjective reports to measure how “real” the VR experience is.

But that’s not good enough for science – we need some objective measure to see if a VR experience is real enough to justify neuroscience research, or if it is just a good simulation. At least one such study has been done, comparing real-world height exposure to VR to 2D stimulation. They found that the real world and VR experiences looked almost identical:

Behavioral and psychophysiological results suggest that identical exogenous and endogenous cognitive as well as emotional mechanisms are deployed to process the real-life and virtual experience. Specifically, alpha- and theta-band oscillations in line with heart rate variability, indexing vigilance, and anxiety were barely indistinguishable between those two conditions, while they differed significantly from the laboratory setup. Sensory processing, as reflected by beta-band oscillations, exhibits a different pattern for all conditions, indicating further room for improving VR on a haptic level.

There was only a small difference in sensory processing, which they attribute not to the visual stimuli but to the tactile or “haptic” stimuli. This can be fixed by pairing VR goggles with some appropriate physical sensation (haptic feedback). We know from prior research (and subjective experience) that combining two or more sensory modalities creates a very compelling illusion of reality. VR combines sight and sound. But add even a little haptic sensation, like wind on the face, and the experience is all the more real. I think the bottom line at this point is that VR is real enough to conduct neuroscience research, even if there is a little room for improvement.

The second is similar to this mouse study, looking at how the brain responds to a task or stimuli presented in VR. You can have a human, for example, navigate a maze in VR to see how the brain processes visuospatial information. VR is perfect for visuospatial processing, because that is the information the VR headset is presenting. This is the “low-hanging fruit” for VR neuroscience research. But really there are endless possibilities, especially if you add other sensory modalities.

Also, for mice it is easy to put them on a small treadmill. For a human, the ultimate VR experience would be similar to what was portrayed in Ready Player One – the VR user is in a harness and standing on a multi-directional treadmill, wearing a full-body haptic-feedback suit. This way your movements are translated into the VR experience, and anything that happens in VR can be sensed by the user. If you pick something up, you feel it in your hand, perhaps even a simulation of the weight. If you sit down on a virtual chair, you are sitting on the harness, which holds you up. With my current headset-only VR setup, I do have to remind myself that the virtual world is not really there. I cannot lean up against a virtual wall.

The third type of VR research is looking at VR as a tool of rehabilitation or psychological intervention. VR has been tested for phobias, PTSD, anxiety, and depression. Overall the research shows that VR therapy is effective. It may be a good adjunct to in-person therapy. It is not necessarily more effective than in-person therapy, and I think researchers are still sorting out how best to use it, but so far it seems like a great additional option to have. There is even research looking at VR for physical rehabilitation, with promising results.

I think the bottom line is that VR is effective. Remember, our brains construct our internal sense of reality by processing multiple sensory streams. Swapping out sensory input for virtual input still leads to constructing a compelling reality, based on the virtual information. Our brains buy the illusion, because all perception is ultimately an illusion of sorts. We are already at the point where VR is more than adequate to work, and it will only get more compelling as the technology improves. Beyond entertainment, this is already proving to be a boon to neuroscience research.

The post Virtual Reality for Mice first appeared on NeuroLogica Blog.

Categories: Skeptic

Skeptoid #914: Stopping Hiccups with Science

Skeptoid Feed - Tue, 12/12/2023 - 2:00am

Does science have a way to reliably cure the hiccups?

Categories: Critical Thinking, Skeptic

Andrew Shtulman — Learning to Imagine: The Science of Discovering New Possibilities

Skeptic.com feed - Tue, 12/12/2023 - 12:00am
https://traffic.libsyn.com/secure/sciencesalon/mss393_Andrew_Shtulman_2023_10_10.mp3 Download MP3

Imagination is commonly thought to be the special province of youth―the natural companion of free play and the unrestrained vistas of childhood. Then come the deadening routines and stifling regimentation of the adult world, dulling our imaginative powers. In fact, Andrew Shtulman argues, the opposite is true. Imagination is not something we inherit at birth, nor does it diminish with age. Instead, imagination grows as we do, through education and reflection.

The science of cognitive development shows that young children are wired to be imitators. When confronted with novel challenges, they struggle to think outside the box, and their creativity is rigidly constrained by what they deem probable, typical, or normal. Of course, children love to “play pretend,” but they are far more likely to simulate real life than to invent fantasy worlds of their own. And they generally prefer the mundane and the tried-and-true to the fanciful or the whimsical.

Children’s imaginations are not yet fully formed because they necessarily lack knowledge, and it is precisely knowledge of what is real that provides a foundation for contemplating what might be possible. The more we know, the farther our imaginations can roam. As Learning to Imagine demonstrates, the key to expanding the imagination is not forgetting what you know but learning something new. By building upon the examples of creative minds across diverse fields, from mathematics to religion, we can consciously develop our capacities for innovation and imagination at any age.

Andrew Shtulman is Professor of Psychology at Occidental College where he directs the Thinking Lab. His award-winning research has been featured in the New York Times and the Wall Street Journal. His previous book was Scienceblind: Why Our Intuitive Theories About the World Are So Often Wrong. His new book is Learning to Imagine: The Science of Discovering New Possibilities.

Shermer and Shtulman discuss:

  • Are we rational, irrational, or both?
  • Did our senses and brain evolve for veridical perception or just fitness to get our genes into the next generation?
  • Are children natural-born scientists experimenting with the world?
  • Imagination defined: the capacity to generate alternatives to reality
  • Imagination evolved
  • Imagination’s purpose
  • Imagination’s structure
  • Anomalies
  • Counterfactuals
  • Expanding imagination by example: Testimony, Technology, Empirical Discovery
  • Expanding imagination by principle: Scientific, Mathematical, Ethical
  • Expanding imagination by model: pretense, fiction, religion
  • Children

    • Children and development of imagination: are they natural-born innovators?
    • Are children highly gullible or skeptical? Rational or emotional/intuitive?
    • How children understand causality
    • How children develop morality and moral principles
    • Pretend play’s purpose
    • Children’s preference for nonfiction vs. fiction, prosaic stories vs. unusual ones, realism vs. fantasy
    • Why children do not have a more expansive imagination than adults
    • Are children more exploratory than adults?
    • Are children natural-born scientists?
  • Religion

    • Images of God, Heaven, Hell, the afterlife, etc., from anthropomorphic to abstract
    • Children’s images of God etc.
    • Theory of mind and the mind of God
    • Baptism
    • Faith Frame: Tanya Luhrmann: religious practices are not byproducts of belief but means of achieving it by turning vague abstractions into palpable experiences
  • AI and creativity
  • The Beatles and creativity
  • Education: Montessori.
Categories: Critical Thinking, Skeptic

Cultural Blindness

neurologicablog Feed - Mon, 12/11/2023 - 5:14am

[Image caption: Not a crow.]

One of the core tenets of scientific skepticism is what I call neuropsychological humility – the recognition that while the human brain is a powerful information processing machine, it also has many frailties. One of those frailties is perception – we do not perceive the world in a neutral or objective way. Our perception of the world is constructed from multiple sensory streams processed together and filtered through internal systems that include our memories, expectations, biases, assumptions and (critically) attention. In many ways, we see what we know, what we are looking for, and what we expect to see. Perhaps the most internet-famous example of this is the invisible gorilla, a dramatic example of inattentional blindness.

Far more subtle is what might be called cultural blindness – we can perceive differences that we already know exist or with which we are very familiar, but otherwise may miss differences as a background blur. On my personal intellectual journey, one dramatic example I often refer to is my perception before and after becoming a birder. For most of my life birds were something in the background I paid little attention to. My internal birding map consisted of a few local species and broad groups. I could recognize cardinals, blue jays, crows, pigeons, and mourning doves. Any raptor was a “hawk”. There were ducks and geese, and then there was – everything else. I would probably call any small bird a sparrow, if I thought to call it anything at all. I knew of other birds from nature shows, but they were not part of my world.

The birding learning curve was very steep, and completely changed my perception. What I called “crows” consisted not only of crows but ravens and at least two types of grackle. I can identify the field markings of several hawks and two vultures. I can tell the subtle differences between a downy and hairy woodpecker. At first I had difficulty telling a chickadee from a nuthatch, now the difference is obvious. I can even tell some sparrow species apart. My internal birding map is vastly different, and that affects how I perceive the world.

The same is true culturally for different groups of people. As a child I started out with broad vague categories, and as I matured I realized that these categories contained a lot of diversity. I further realized how much personal experience affects how much detail we perceive. It’s almost embarrassing now to think about how little I knew about certain cultures, but of course we all start with maximal ignorance and can’t be blamed for that. But we can take responsibility for our own education and attempts to grow beyond whatever subculture we grew up in.

One interesting example I remember – one of those profound moments of realization – has to do with Arab culture. Again, to my younger self, the Middle East was a vast undifferentiated region of the world. I could recognize when someone was from that part of the world, and I knew the names of some of the countries, but little else. One example of this is that I knew that Middle Eastern men wore what I called a “turban”, and that was about all I knew about it. I perceived no further detail. I was later introduced to Arab culture through my wife’s family. I remember seeing a poster on the wall of one of her relatives, which included pictures of 30 or so different Arab men, each with a different style of head wrap (keffiyeh is the general term), “turbans” being only one subtype, each from a different part of the Middle East or its surrounding regions. I was struck by how different and specific they were, and was metaphorically slapped in the face with my previous ignorance and cultural blindness.

One of the lessons I also learned as a skeptic is that these specific examples of our intellectual world opening up are not isolated or quirky events. They are ubiquitous. We should take these individual experiences and extrapolate them to the rest of the world. Confronting the depths of our own ignorance – to the point where we literally cannot even perceive the details of reality – should be humbling. We like to think that our “gut instincts” are largely about logic and common sense, and sometimes they may be, but they are also about bias and perception. When our instincts brush up against reality, we should not assume our instincts are the ones that are right. That is a time for exploration and humility.

One topic where my instincts proved to be mostly wrong, because of cultural blindness, is the topic of race – more specifically, the notion that race does not really exist. My intuition tells me that I can identify at least the continent of origin of someone without difficulty. Certainly, this must mean something. But, being a science communicator, I knew I had to get this right, and I was extremely curious what the experts had to say on the issue. After an extensive deep dive I realized that my initial instinct was, essentially, cultural blindness. You may be having this reaction right now also, and that’s OK. We all perceive the world from our own perspective. The power of science is to give us a new and hopefully more objective perspective.

The point is not that there aren’t genetic differences among subpopulations of humans. There are. The point, rather, is that there is no objective level of genetic grouping we can call race, and the traditional “races” that we speak about are cultural constructs, not genetic reality. There are two main reasons for this. The first is that there is a lot of genetic mixing of the various human subpopulations. They are therefore not that distinct. They have also not been separated very long, evolutionarily speaking. The genetic result is that there is much more genetic diversity within any group than there is between groups. So even if various groupings have some reality in terms of genetic clustering, they are much more superficial than you might think.

But even more importantly, if we took a genetic map of humanity, looking at all the branching points, clusterings, and degree of disparity and diversity, there is no objective way to divide them into what people generally think about in terms of race. If you looked at such a map, just of genetic diversity, without looking at what people looked like, you would not divide that map up into existing “races”. Depending on how you analyze it, something like 83% of all genetic diversity is within African populations, with the rest of the world representing only 17%. From this perspective, a geneticist would divide the world into at least four genetic groups of Africans, and one group for all non-Africans. And even then, the genetic diversity between the African groups would be much greater than the non-African group.

But from our perspective, we use the most obvious outward marker of genetic diversity (skin color) to divide the world, and that is all perception. From an African perspective, I can imagine there are vast differences between different populations, and everyone outside Africa looks roughly similar. The bottom line – there is no objective genetic reality to the current scheme of “races”, which are cultural constructs based mostly on continent of origin. Races are a matter of perception, not genetic reality.
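The within-versus-between claim is, at bottom, a standard sum-of-squares decomposition from statistics. Here is a minimal numerical sketch using invented toy values (the group labels and numbers are hypothetical, not real genetic data) showing how nearly all of the variation can sit inside groups even when the group means differ:

```python
# Toy data: three hypothetical groups whose members vary a lot,
# while the group averages sit close together.
groups = {
    "A": [1.0, 4.0, 9.0, 2.0, 8.0],
    "B": [2.0, 5.0, 10.0, 3.0, 7.0],
    "C": [1.5, 4.5, 9.5, 2.5, 8.5],
}

all_values = [v for vs in groups.values() for v in vs]
grand_mean = sum(all_values) / len(all_values)

# Standard decomposition: total sum of squares = within + between.
ss_within = sum((v - sum(vs) / len(vs)) ** 2
                for vs in groups.values() for v in vs)
ss_between = sum(len(vs) * (sum(vs) / len(vs) - grand_mean) ** 2
                 for vs in groups.values())
ss_total = sum((v - grand_mean) ** 2 for v in all_values)

assert abs(ss_total - (ss_within + ss_between)) < 1e-9
within_share = ss_within / ss_total
print(f"within-group share of variation: {within_share:.0%}")
```

On these made-up numbers, roughly 99 percent of the variation lies within groups. The numbers are invented, but the structure of the calculation is the same one behind statements like “most human genetic diversity is found within, not between, populations.”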

Lumping all Africans into one race is similar to thinking that all small brown birds are “sparrows” or all large predatory birds are “hawks”.

The most important change for me personally, I think, is that I now love having my eyes opened to a perception to which I was previously blind. It is no longer disconcerting; it’s expected. It’s welcome. It is the only attitude to have, because we all start out maximally ignorant, with a very narrow perception. Our only real choices are to stay stagnant in that narrow perception, or to learn and grow.

The post Cultural Blindness first appeared on NeuroLogica Blog.

Categories: Skeptic

The Skeptics Guide #961 - Dec 9 2023

Skeptics Guide to the Universe Feed - Sat, 12/09/2023 - 7:00am
Quickie with Bob: Ceramic Storage; News Items: Quantum Gravity, X-Prize for Health Span, ECT Effects on the Brain, Building New Materials with AI and Robots; From Tik Tok: Electric Car Without Charging; Who's That Noisy; Science or Fiction
Categories: Skeptic
