
Why Do Species Evolve to Get Bigger or Smaller

Fri, 01/19/2024 - 4:58am

Have you heard of Cope’s Rule or Foster’s Rule? American paleontologist Edward Drinker Cope first noticed a trend in the fossil record that certain animal lineages tend to get bigger over evolutionary time. Most famously this was noticed in the horse lineage, beginning with small dog-sized species and ending with the modern horse. Bristol Foster noticed a similar phenomenon specific to islands – populations that find their way to islands tend to either increase or decrease in size over time, depending on the availability of resources. This may also be called island dwarfism or gigantism (or insular dwarfism or gigantism).

When both of these things happen in the same place there can be some interesting results. On the island of Flores a human lineage, Homo floresiensis (the Hobbit species), experienced island dwarfism, while the local rats experienced island gigantism. The result was people living with rats the relative size of large dogs.

Based on these observations, two questions emerge. The first (always important and not to be skipped) is – are these trends actually real, or are the initial observations just quirks of hyperactive pattern recognition? For example, there are many horse lineages, and not all of them got bigger over time. Is it just cherry-picking to notice the one lineage that survives today as modern horses? If some lineages are getting bigger and some are getting smaller, is this just random evolutionary change without any specific trend? I believe this question has been answered, and the consensus is that these trends are real, although more complicated than first observed.

This leads to the second question – why? We have to go beyond just saying “evolutionary pressure” to determine if there is any unifying specific evolutionary pressure that is a dominant determinant of trends in body size over time. Of course, it’s very likely that there isn’t one answer in every case. Evolution is complex and contingent, and statistical trends in evolution over time can emerge from many potential sources. But we do see these body size trends a lot, and it does suggest there may be a common factor.

Also, the island dwarfism/gigantism effect seems to be real, and the fact that these trends correlate so consistently with migrating to an island suggests a common evolutionary pressure. Foster, who published his ideas in 1964, thought it was due to resources. Species get smaller if island resources are scarce, or they get bigger if island resources are abundant due to a relative lack of competition. Large mainland species that find themselves on an island have a smaller region in which to operate, with far fewer resources, so smaller individuals have the advantage. Also, smaller species can have shorter gestation times and more rapid generational turnover, which can be an advantage in a stressed environment. Predator species may then become smaller in order to adapt to smaller prey (which apparently is also a thing).

At the small end of the size spectrum, getting bigger has advantages. Larger animals can go longer between meals and can roam across a larger range looking for food. Again, this is what we see: the largest animals become smaller and the smallest animals become larger, meeting in the middle (hence the Hobbits with dog-sized rats).

Now a recent study looks at these ideas with computer evolutionary simulations. The simulations pretty much confirm what I summarized above, but also add some new wrinkles. They show that a key factor, beyond the availability of resources, is competition for those resources. First, they showed a general trend of increasing body size driven by competition between species. When different species compete in the same niche, the larger animals tend to win out. They state this as Cope's Rule applying when interactions between species are determined largely by their body size.

The simulations also showed, however, that when the environment is stressed, the larger species were more vulnerable to extinction. Larger species, with their long gestation and generational times, had relatively fewer individuals, while smaller species could weather the strain better and bounce back quicker. The survivors then undergo a slow increase in size when the environment stabilizes. This leads to what they call a recurrent Cope's Rule – each subsequent pulse of gigantism gets even bigger.
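
To make this dynamic concrete, here is a minimal toy simulation in Python – my own sketch, not the authors' model, with every parameter invented for illustration. Size-biased competition pushes body size up between crises, while periodic stress pulses preferentially cull the largest species:

```python
# Toy model: competition favors larger species (Cope's Rule), but
# periodic environmental crises preferentially remove them.
# All parameters are illustrative, not taken from the study.
import random

random.seed(42)

POP = 200            # number of coexisting species
GENS = 2000          # simulated time steps
CRISIS_EVERY = 500   # interval between environmental stress pulses

sizes = [random.uniform(1.0, 2.0) for _ in range(POP)]

for gen in range(1, GENS + 1):
    # Competition: the larger of two random species displaces the
    # smaller, leaving a slightly mutated descendant in its place.
    a, b = random.sample(range(POP), 2)
    winner, loser = (a, b) if sizes[a] > sizes[b] else (b, a)
    sizes[loser] = max(0.1, sizes[winner] * random.gauss(1.0, 0.05))

    # Crisis: extinction risk scales with body size (fewer individuals,
    # slower reproduction); survivors repopulate the vacated niches.
    if gen % CRISIS_EVERY == 0:
        biggest = max(sizes)
        survivors = [s for s in sizes if random.random() > s / (biggest * 1.1)]
        if not survivors:                      # guard against total wipeout
            survivors = [min(sizes)]
        while len(survivors) < POP:
            survivors.append(random.choice(survivors) * random.gauss(1.0, 0.05))
        sizes = survivors
        print(f"step {gen}: mean size after crisis = {sum(sizes) / POP:.2f}")

print(f"final mean size = {sum(sizes) / POP:.2f}")
```

In runs of this toy, mean body size ratchets upward between crises and crashes at each pulse – a crude version of the recurrent pattern the study describes.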

The simulations also confirmed island dwarfism – species tend to shrink over time when there is overlap in niches and resource use, which decreases resource availability. They call this an inverse Cope's Rule. They don't refer to Foster's Rule, I think because the simulations were independent of being on an island or in any insular environment (which is the core of Foster's observation). Rather, species become smaller when their interactions are determined more by the environment and resource availability than by their relative body size (which could be the case on islands).

So the simulations don't really change anything dramatically. They largely confirm Cope's Rule and Foster's Rule, and add the layer that niche overlap and competition are important, not just the total availability of resources.


Converting CO2 to Carbon Nanofibers

Thu, 01/18/2024 - 4:56am

One of the dreams of a green economy, in which the amount of CO2 in the atmosphere is stable rather than slowly increasing, is the ability to draw CO2 from the atmosphere and convert it to a solid form. Often referred to as carbon capture, some form of this is going to be necessary eventually, and most climate projections include the notion of carbon capture coming online by 2050. Right now we don't have a way to pull significant CO2 from the air economically and on a massive industrial scale. There is some carbon capture in the US, for example, but it accounts for only 0.4% of CO2 emissions, and it is used near locations of high CO2 production, like coal-fired plants.

But there is a lot of research being done, mostly at the proof-of-concept stage. Scientists at the DOE's Brookhaven National Laboratory have published a process which seems to have promise. They can convert CO2 from the atmosphere into carbon nanofibers, a solid form of carbon with potential industrial uses. One potential use of these nanofibers would be as filler for concrete. This would bind up the carbon for at least 50 years, while making the concrete stronger.

In order to get from CO2 to carbon nanofibers they break the process up into two steps. They figured out a way, using an iron-cobalt catalyst, to turn carbon monoxide (CO) into carbon nanofibers. This is a thermocatalytic process operating at 400 degrees C. That's hot, but practical for industrial processes. It's also much lower than the 1000 degrees C required for a method that would go directly from CO2 to carbon nanofibers.

That's great, but first you have to convert the CO2 to CO, and that's actually the hard part. They decided to use a proven method with a commercially available catalyst: palladium supported on carbon. This is an electrocatalytic process that converts CO2 and H2O into CO and H2 (together called syngas). Both CO and H2 are high-energy molecules that are very useful in industry. Hydrogen, as I have written about extensively, has many uses, including in steel making, concrete, and energy production. CO is a feedstock molecule for many useful reactions creating a range of hydrocarbons.
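
As a rough sketch of the chemistry, here is my simplified reading of the two steps using standard textbook reactions – the paper's exact mechanism and stoichiometry may differ:

```latex
% Step 1 (electrocatalysis, Pd/C): overall cell reaction reducing
% CO2 and water to syngas, with oxygen evolved at the anode
\mathrm{CO_2 + H_2O \longrightarrow CO + H_2 + O_2}

% Step 2 (thermocatalysis, Fe-Co at 400 C): CO deposits solid carbon,
% e.g. via Boudouard disproportionation or hydrogen-assisted reduction
\mathrm{2\,CO \longrightarrow C_{(s)} + CO_2} \qquad
\mathrm{CO + H_2 \longrightarrow C_{(s)} + H_2O}
```

Both routes in step 2 are well-known ways that CO deposits solid carbon over iron-group catalysts; which one dominates in the new process is beyond what I can tell from the reporting.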

But as I said – conversion of CO2 and H2O to CO and H2 is the hard part. There has been active research for years into creating an industrial-scale, economical, and energy-efficient process to do this, and you can find many science news items reporting on different processes. It seems like this first step is the real game, and from what I can tell it is not the new innovation in this research, which focuses on the second part, going from CO to carbon nanofibers.

The electrocatalytic process that goes from CO2 to CO uses electricity. Other processes are thermocatalytic, and may use exothermic reactions to drive the process. Using a lot of energy is unavoidable, because essentially we are going from a low-energy molecule (CO2) to a higher-energy molecule (CO), which requires the addition of energy. This is the unavoidable reality of carbon capture in general – CO2 gets released in the process of making energy, and if we want to recapture that CO2 we need to put the energy back in.
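
To put a number on that energy cost (standard thermochemistry, not a figure from the paper): making CO from CO2 is just the reverse of burning CO, so at minimum,

```latex
\mathrm{CO_2 \longrightarrow CO + \tfrac{1}{2}\,O_2},
\qquad \Delta H^{\circ} \approx +283\ \mathrm{kJ/mol}
```

must be supplied for every mole of CO produced, before counting any process inefficiencies.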

The researchers (and pretty much all reporting on CO2 to CO conversion research) state that if the electricity were provided by a green energy source (solar, wind, nuclear) then the entire process itself can be carbon neutral. But this is exactly why any type of carbon capture like this is not going to be practical or useful anytime soon. Why have a nuclear power plant powering a carbon capture facility, that is essentially recapturing the carbon released from a coal-fired plant? Why not just connect the nuclear power plant to the grid and shut down the coal-fired plant? That’s more direct and efficient.

What this means is that any industrial scale carbon capture will only be useful after we have already converted our energy infrastructure to low or zero carbon. Once all the fossil fuel plants are shut down, and we get all our electricity from wind, solar, nuclear, hydro, and geothermal then we can make some extra energy in order to capture back some of the CO2 that has already been released. This is why when experts project out climate change for the rest of the century they figure in carbon capture after 2050 – after we have already achieved zero carbon energy. Carbon capture prior to that makes no sense, but after will be essential.

This is also why some in the climate science community think that premature promotion of carbon capture is a con and a diversion. The fossil fuel industry would like to use carbon capture as a way to keep burning fossil fuels, or to “cook their books” and make it seem like they are less carbon polluting than they are. But the whole concept is fatally flawed – why have a coal-fired plant to make electricity and a nuclear plant to recapture the CO2 produced, when you can just have a nuclear plant to make the electricity?

The silver lining here is that we have time. We won’t really need industrial scale carbon capture for 20-30 years, so we have time to perfect the technology and make it as efficient as possible. But then, the technology will become essential to avoid the worst risks of climate change.


Betavoltaic Batteries

Tue, 01/16/2024 - 5:08am

In 1964 Isaac Asimov, asked to imagine the world 50 years in the future, wrote:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes. The isotopes will not be expensive for they will be by-products of the fission-power plants which, by 2014, will be supplying well over half the power needs of humanity.”

Today nuclear fission provides about 10% of the world’s electricity. Asimov can be forgiven for being off by such a large amount. He, as a science fiction futurist, was thinking more about the technology itself. Technology is easier to predict than things like public acceptance, irrational fear of anything nuclear, or even economics (which even economists have a hard time predicting).

But he was completely off about the notion that nuclear batteries would be running most everyday appliances and electronics. This now seems like a quaint retro-futuristic vision, something out of the Fallout franchise. Here the obstacle to widespread adoption of nuclear batteries has been primarily technological (issues of economics and public acceptance have not even come into play yet). Might Asimov’s vision still come true, just decades later than he thought? It’s theoretically possible, but there is still a major limitation that for now appears to be a deal-killer – the power output is still extremely low.

Nuclear batteries that run on thermoelectric energy production have been in use for decades in the aerospace industry. These work by converting the heat generated by the decay of nuclear isotopes into electricity. Their main advantage is that they can last a long time, so they are ideal for deep space probes. But these batteries are heavy and operate at high temperatures – not suitable for powering your vacuum cleaner. There are also non-thermal nuclear batteries, which do not depend on a heat gradient to generate electricity. There are different types, depending on the decay particle and the mechanism for converting it into electricity. These can be small, cool devices that can function safely in commercial applications. In fact, for a while nuclear-powered pacemakers were in common use, until lithium-ion batteries became powerful enough to replace them.

One type of non-thermal nuclear battery is the betavoltaic battery, which is widely seen as the most likely to achieve widespread commercial use. These use beta particles as the source of energy –

“…energy is converted to electricity when the beta particles interact with a semiconductor p–n junction to create electron–hole pairs that are drawn off as current.”

Beta particles are essentially high-energy electrons or positrons emitted during certain types of radioactive decay. They are pretty safe, as radiation goes, and are most dangerous when inhaled. From outside the skin they are less dangerous, although high exposure can cause burns. The small amounts released within a battery are unlikely to be dangerous, and the whole idea is that they are captured and converted into electricity, not radiated away from the device. A betavoltaic device is often referred to as a “battery,” but it is not charged or recharged with energy. When made, it has a finite amount of energy that it releases over time – but that time can be years or even decades.

Imagine having a betavoltaic power source in your smartphone. This “battery” never has to be charged and can last for 20-30 years. In such a scenario you might have one such battery that you transfer to subsequent phones. Such an energy source would also be ideal for medical uses, for remote applications, as backup power, and for everyday use. If they were cheap enough, I could imagine such batteries being ubiquitous in everyday electronics. Imagine if most devices were self-powered. How close are we to this future?

I wish I could say that we are close or that such a vision is inevitable, but there is a major limiting factor for betavoltaics – they have low power output. This is suitable for some applications, but not most. A recent announcement by a Chinese company, Betavolt, reminded me of this challenge. Their press release reads like some grade A propaganda, but I tried to read between the lines.

Their battery uses nickel-63 as a power source, which decays safely into copper. The design incorporates a crystal diamond semiconductor, which is not new (nuclear diamond batteries have been in the news for years). In a device as small as a coin they can generate 100 microwatts (at 3 volts) for “50 years”. In reality the nickel-63 has a half-life of about 100 years, which is a more precise way to describe its lifespan: in 100 years it will be generating half the power it did when manufactured. So saying it has a functional life of 50 years is not unreasonable.
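
The underlying math is simple exponential decay; with the numbers above (my calculation):

```latex
P(t) = P_0 \, 2^{-t/t_{1/2}}
\quad\Rightarrow\quad
P(50\,\mathrm{yr}) = 100\ \mu\mathrm{W} \times 2^{-50/100} \approx 71\ \mu\mathrm{W},
\qquad
P(100\,\mathrm{yr}) = 50\ \mu\mathrm{W}
```

So a “50-year” rating just means the output is still within about 30% of spec at the half-century mark, not that the battery dies.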

The problem is the 100 microwatts. A typical smartphone requires 3-5 watts of power, so the Betavolt battery produces only about 1/30,000th of the power necessary to run your smartphone. That's more than four orders of magnitude. And yet, Betavolt claims they will produce a version of their battery that can produce 1 watt of power by 2025. Farther down in the article it says they plan –

“to continue to study the use of strontium-90, promethium-147 and deuterium and other isotopes to develop atomic energy batteries with higher power and a service life of 2 to 30 years.”

I suspect these two things are related. What I mean is that when it comes to powering a device with nuclear decay, the half-life is directly related to power output. If the radioisotope decays at half the rate, then it produces half the power (for a fixed mass). There are three variables that could affect power output. One is the starting mass of the isotope that is producing the beta particles. The second is the half-life of that isotope. And the third is the efficiency of conversion to electricity. I doubt there are four orders of magnitude to be gained in efficiency.
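
Those three variables can be rolled into one standard expression (textbook physics; the notation is mine): the electrical power is the conversion efficiency times the decay rate times the average energy per beta particle,

```latex
P_{el} = \eta \cdot \lambda N \cdot \bar{E}_{\beta},
\qquad \lambda = \frac{\ln 2}{t_{1/2}}
```

where η is the conversion efficiency, N is the number of radioactive atoms (set by the starting mass), and Ē_β is the mean beta-particle energy. A tenfold shorter half-life buys a tenfold higher power, at the cost of a tenfold shorter service life.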

From what I can find, betavoltaics are reaching about the 5% efficiency range. So maybe there is one order of magnitude to gain here, if we could design a device that is 50% efficient (which would be a massive gain). Where are the other three orders of magnitude coming from? If you use an isotope with a much shorter half-life, say 1 year instead of 100 years, there are two orders of magnitude. I just don't see where the remaining order of magnitude is coming from. You would need ten or more such batteries to run your smartphone, and even then, in one year you would be operating at half power.
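
Here is that back-of-envelope arithmetic laid out explicitly, using the round numbers from the post (all assumptions, not measured specs):

```python
# Sanity-check the power gap between Betavolt's coin cell and a phone,
# using the post's round numbers (illustrative assumptions throughout).
import math

phone_w = 3.0          # watts, low end of typical smartphone draw
cell_w = 100e-6        # watts, the claimed coin-sized cell output

gap = phone_w / cell_w
print(f"gap: {gap:,.0f}x  (~{math.log10(gap):.1f} orders of magnitude)")

efficiency_gain = 0.50 / 0.05   # 5% -> 50% conversion efficiency: 10x
half_life_gain = 100 / 1        # 100-year -> 1-year isotope: 100x decay rate

cells_needed = gap / (efficiency_gain * half_life_gain)
print(f"cells still needed after both gains: ~{cells_needed:.0f}")

# Output:
# gap: 30,000x  (~4.5 orders of magnitude)
# cells still needed after both gains: ~30
```

So even granting a heroic efficiency jump and a hundredfold faster isotope, you are still stacking tens of cells for a single phone.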

Also, nuclear batteries have constant energy output. You do not draw power from them as needed, as with a lithium-ion battery; they just produce electricity at a constant (and slowly decreasing) rate. Perhaps, then, such a battery could be paired with a lithium-ion battery (or other traditional battery). The nuclear battery slowly charges the traditional battery, which operates the device. This way the nuclear battery does not have to power the device directly, and can produce much less power than the device needs at peak. If you use your device 10% of the time, the nuclear battery can keep it charged. Even if the nuclear battery does not produce all the energy the device needs, you would be able to go much longer between charges, and you would never be dead in the water: in an emergency, or when far from any power source, you could always wait and build up some charge. So I can see a role for betavoltaic batteries, not only in devices that use tiny amounts of power, but in consumer devices as a source of “trickle” charging.
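
A quick plausibility check on that duty-cycle argument, with invented but reasonable numbers (and assuming Betavolt's claimed 1-watt cell ever materializes):

```python
# Energy balance for a hypothetical betavoltaic trickle charger paired
# with a lithium-ion buffer. All numbers are illustrative assumptions.
trickle_w = 1.0       # hoped-for betavoltaic output (Betavolt's 2025 target)
active_w = 4.0        # phone power draw while actively in use
duty_cycle = 0.10     # fraction of the day the phone is in active use

avg_draw_w = active_w * duty_cycle
surplus_w = trickle_w - avg_draw_w
print(f"average draw {avg_draw_w:.1f} W vs trickle {trickle_w:.1f} W "
      f"-> surplus {surplus_w:+.1f} W")
# average draw 0.4 W vs trickle 1.0 W -> surplus +0.6 W
# A positive surplus means the buffer battery stays topped up.
```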

At first this might be gimmicky, and we will have to see if it provides a real-world benefit that is worth the expense. But it's plausible. I can see it being very useful in some situations, and the real variable is how widely such a technology would be adopted.


Big Ring Challenges Cosmological Principle

Fri, 01/12/2024 - 4:44am

University of Central Lancashire (UCLan) PhD student Alexia Lopez, who two years ago discovered a giant arc of galaxy clusters in the distant universe, has now discovered a Big Ring. This (if real) is one of the largest structures in the observable universe at 1.3 billion light years in diameter. The problem is – such a large structure should not be possible based on current cosmological theory. It violates what is known as the Cosmological Principle (CP), the notion that at the largest scales the universe is uniform with evenly distributed matter.

The CP actually has two components. One is called isotropy, which means that whichever direction you look in the universe, the distribution of matter should be the same. The other is homogeneity, which means that wherever you are in the universe, the distribution of matter should be smooth. Of course, this is only true beyond a certain scale. At small scales, like within a galaxy or even a galaxy cluster, matter is not evenly distributed, and it does matter which direction you look. But at some point in scale, isotropy and homogeneity are the rule. Another way to look at this is that there is an upper limit to the size of any structure in the universe. The Giant Arc and the Big Ring are both too big. If the CP is correct, they should not exist. There are also a handful of other giant structures in the universe, so these are not the first to violate the CP.

The Big Ring is just that: an apparently two-dimensional structure in the shape of a near-perfect ring facing Earth (likely not a coincidence, but rather the reason it was discoverable from Earth) – although Lopez later determined that the ring is actually a corkscrew shape. The Giant Arc is just that, the arc of a circle. Interestingly, it is in the same region of space and at the same distance as the Big Ring, so the two structures exist at the same time and place. This suggests they may be part of an even bigger structure.

How certain are we that these structures are real, and not just a coincidence? Professor Don Pollacco, of the department of physics at the University of Warwick, said the probability of this being a statistical fluke is “vanishingly small”. But still, it seems premature to hang our hat on these observations just yet. I would like to see some replications and attempts at poking holes in Lopez’s conclusions. That is the normal process of science, and it takes time to play out. But so far, it seems like solid work.

If her observations hold up, then the Big Ring would be the seventh such giant structure that violates our current formulation of the CP. This is also how science works. The Cosmological Principle is based on both observation and our theories about how the universe works, as well as its origins and history. It's not just a guess or an aesthetic wish. Scientists have very good reasons for supporting the CP. But in recent years the number of apparent violations of the CP has been growing. This is often how it works in science – the number of problems or exceptions to a theory grows until the theory has to be abandoned or significantly modified. What might be going on here?

There are already a number of hypotheses to explain these giant structures. Perhaps there was some effect at work in the very early universe, shortly after the Big Bang, that created ripples in the cosmos. These ripples would be like pressure waves, which would affect star and galaxy formation. Are we seeing the distant echo of these ripples? More specifically, there is the cosmic string idea, with the strings being filamentary defects in the distribution of matter after the Big Bang. There is also Conformal Cyclic Cosmology (CCC), a cosmology proposed by Roger Penrose. Apparently such structures might be a sign of CCC.

What we have in the Big Ring is a possible piece to a very big puzzle. First astronomers will need to confirm that it is a real piece, and not some statistical error, systematic bias in data analysis, or statistical fluke. If it holds up then cosmologists will have to work out what the theoretical implications are, and propose testable alternative hypotheses that would account for this cosmology. And the process of science grinds on.


Categorization and What’s In a Name

Mon, 01/08/2024 - 5:07am

Categorization is critical in science, but it is also very tricky, often deceptively so. We need to categorize things to help us organize our knowledge, to understand how things work and relate to each other, and to communicate efficiently and precisely. But categorization can also be a hindrance – if we get it wrong, it can bias or constrain our thinking. The problem is that nature rarely cleaves in straight clean lines. Nature is messy and complicated, almost as if it is trying to defy our arrogant attempts at labeling it. Let’s talk a bit about how we categorize things, how it can go wrong, and why it matters.

We can start with what might seem like a simple category – what is a planet? Of course, any science nerd knows how contentious the definition of a planet can be, which is why it is a good example. Astronomers first identified planets as wandering stars – the points of light that were not fixed but seemed to wander across the sky. There was something different about them. This is often how categories begin – we observe a phenomenon we cannot explain, and so the phenomenon is the category. This is very common in medicine: we observe a set of signs and symptoms that seem to cluster together, and we give it a label. But once we had a more evolved idea about the structure of the universe, and we knew that there are stars and that stars have lots of stuff orbiting around them, we needed a clean way to divide all that stuff into different categories. One of those categories is “planet”. But how do we define planet in an objective, intuitive, and scientifically useful way?

This is where the concept of “defining characteristic” comes in. A defining characteristic is, “A property held by all members of a class of object that is so distinctive that it is sufficient to determine membership in that class. A property that defines that which possesses it.” But not all categories have a clear defining characteristic, and for many categories a single characteristic will never suffice. Scientists can and do argue about which characteristics to include as defining, which are more important, and how to police the boundaries of that characteristic.

Returning to planets, they clearly orbit the sun, but since many objects do, that is a necessary but insufficient criterion for the category. We don't want every asteroid to be a planet, so we can add that planets have to be big enough for gravity to pull them into a sphere. But a sphere to what tolerance? Also, there are some bodies that would be spheres if they were not rotating so fast, but their spin distorts them into ovals. Do they still count? And many moons are spherical, so we have to exclude objects that revolve around another object other than the sun. Are we there yet – spherical objects orbiting a parent star but not another object?

For a while, that was it. But it became apparent that there are potentially many hundreds of objects orbiting our sun that fit this definition, diluting the utility of the category of planet. Ceres, for example, the largest asteroid in the asteroid belt between Mars and Jupiter, fits this definition (and for a time was categorized as a planet). Is it a spherical asteroid, or is it a planet orbiting among the asteroids? But what triggered this controversy was the discovery of planetary objects in the Kuiper belt, and the realization there may be hundreds of them. So astronomers added another defining characteristic – a planet must also dominate and clear out its orbit around its parent star. That nicely excludes Ceres and all or at least most Kuiper belt objects. But it also excluded Pluto, which shares its orbital neighborhood with other Kuiper belt objects. Its moon Charon was also deemed too large to be just a moon – it is more accurate to say that Pluto and Charon orbit each other, and share an orbit around the sun. Astronomers famously created a new category, Dwarf Planet, and placed Ceres and Pluto into this category, along with several Kuiper belt objects.
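
The defining-characteristics approach is essentially a chain of predicates, and writing it out that way shows how each new exception forces a new rule. A minimal sketch (my own simplification of the IAU criteria, with made-up example data):

```python
# Sketch of the IAU-style planet definition as a chain of predicates.
# The bodies and their boolean flags are illustrative, not a real catalog.
from dataclasses import dataclass

@dataclass
class Body:
    name: str
    orbits_sun_directly: bool   # not a moon of another body
    is_spherical: bool          # massive enough for hydrostatic equilibrium
    clears_orbit: bool          # gravitationally dominates its orbital zone

def classify(b: Body) -> str:
    if not b.orbits_sun_directly:
        return "moon / satellite"
    if not b.is_spherical:
        return "small solar system body"
    if not b.clears_orbit:
        return "dwarf planet"
    return "planet"

for body in [
    Body("Earth", True, True, True),
    Body("Ceres", True, True, False),    # spherical, but among the asteroids
    Body("Pluto", True, True, False),    # spherical, but among Kuiper belt objects
    Body("Charon", False, True, False),  # or is it half of a binary?
]:
    print(f"{body.name}: {classify(body)}")
```

Every edge case in the post – spin-distorted spheres, Pluto and Charon as a binary – is a place where one of these boolean flags hides a judgment call.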

And we're just talking about clumps of rock and gas orbiting stars. Imagine how messy nature gets when we talk about something like biology. Linnaeus was the first scientist to attempt to categorize all life into a single system, in his Systema Naturae published in 1735. Linnaeus used a nested hierarchical classification system, which actually works well in biology, since the evolutionary branches of life are nested hierarchies. The challenge for Linnaeus is that he was operating before evolutionary theory, and before molecular biology and genetics. What he had at his disposal was gross morphology – how living things looked.

Most famously, he divided plants mainly by the number of their sexual organs, stamens and pistils. This made for easy categorization, but was largely arbitrary and was controversial even at the time. As we have learned more and more about biology, the classification system has evolved and become increasingly complex. But it has also become more accurate, reflecting actual relationships rather than just superficial characteristics. It is a great example of the challenges of categorization. Is the duck-billed platypus a mammal? Well, what are the defining characteristics of mammals? They are vertebrates, warm-blooded, have hair or fur, give birth to live young, and nurse their young with mammary glands. A platypus looks like a mammal, but it lays eggs. It's clearly not a reptile. So – do some mammals lay eggs? Or is the platypus in a category all its own (with the other monotremes, the echidnas)?

One solution is called cladistics – where all the messiness of biology is reduced to a single defining characteristic – evolutionary branching order. That’s it. This does allow for an unambiguous system and it does reflect an important underlying reality. But some biologists are not entirely happy with this system, because it does not consider things like morphological distinctiveness. In a purely cladistic system, all birds are just one tiny branch of dinosaurs. While this is evolutionarily true, it does not reflect their diversity, their disparity from other dinosaurs, and their importance as a category of animals. It’s the Pluto thing all over again.

What this reflects is that there is often no objectively correct or more scientific choice when it comes to categorization. There are just tradeoffs. We have to decide whether to prioritize precision vs utility, for example. What are we using the categories for? How do they guide science? How much fuzziness are we willing to accept, how much do we lump vs split, and which characteristics are truly defining? What do we do with the inevitable “exceptions”?

The true controversies, however, come into play once we try to categorize humans. These categories can have real implications for people’s lives. They are no mere abstract scientific exercise. That is one reason it is so important to recognize what categories truly are – they are ultimately choices we make that reflect biases and value judgements. They do not automatically reflect objective underlying reality. Some are better than others, but we have to define what “better” means.


Oxygen As A Technosignature

Thu, 01/04/2024 - 4:46am

This is one of the biggest thought experiments in science today – as we look for life elsewhere in the universe, what should we be looking for, exactly? Other stellar systems are too far away to examine directly, and even our most powerful telescopes can only resolve points of light. So how do we tell if there is life on a distant exoplanet? Also, how could we detect a distant technological civilization?

Here is where the thought experiment comes in. We know what life on Earth is like, and we know what human technology is like, so obviously we can search for other examples of what we already know. But the question is – how might life different from life on Earth be detected? What are the possible signatures of a planet covered in living things that perhaps look nothing like life on Earth? Similarly, what alien technologies might theoretically exist, and how could we detect them?

A recent paper explores this question from one particular angle – are there conditions on a planet that are necessary for the development of technology? The authors hypothesize that there is an “oxygen bottleneck”, a minimum concentration of oxygen in a planet's atmosphere that is necessary for the development of advanced technology. Specifically, they argue that open-air combustion, which requires a partial pressure of oxygen (PO2) of at least 18% (it's about 21% on Earth), is necessary for fire and metallurgy, and that these are necessary stepping stones on the path to advanced technology.

There are a lot of assumptions in this argument, but they do a decent job of defending it. I've long thought an aquatic species, even if very intelligent, would not be able to develop advanced technology under water. This is a similar argument. They also point out that oxygen-based life itself can exist at much lower PO2, so there may be many planets out there with an oxygen biosignature compatible with life, but too low to enable a technosignature. They suggest we should focus our efforts to find technosignatures on planets with high PO2.

They consider other types of atmospheres that might be compatible with life, but none are compatible with open air combustion – hence the notion of the oxygen bottleneck for the development of advanced technology.

While this is all a very interesting and potentially useful thought experiment, it is frustrating that we completely lack data. This is the same problem we have with the search for biosignatures – it challenges our imagination as to what is possible. The universe is an awfully big place, which provides for trillions of opportunities to experiment. This means that even extremely unlikely scenarios are still likely to exist, given enough opportunities. Therefore, saying that something is unlikely is not enough. We will likely find examples of everything out there in the universe unless it is so unlikely that it is essentially impossible.

It is useful to examine the physics of a hypothesis to determine if a certain type of life or technology is possible within the laws of physics. If it is truly impossible, then we can rule it out. But if it is possible, even if extremely unlikely, then we can’t.

But the big problem I have with this approach is that it is ultimately limited by our imagination. Even though the human imagination is quite expansive, it is still nothing compared to the scope of the universe. It is hard to capture in a thought experiment the real world experiment of trillions of worlds. Perhaps, for example, there might be other pathways to advanced technology, just not a type of technology that we are familiar with.

Here evolution is a good example. Biology has come up with countless clever solutions and workarounds to the challenges of life, ones that would be difficult to imagine and design from the top down. But given the countless opportunities for evolutionary experimentation, strange and unlikely solutions emerge. Similarly, perhaps most aquatic species never develop spacefaring technology, but if there are enough of them out there it may happen occasionally, through some weird path we have not imagined.

But even if the “oxygen bottleneck” is not absolute, it still may be a useful statistical realization. Open air combustion may not be necessary for the development of technology, but it does provide a relatively easy and high-probability pathway. Therefore we may be more likely to detect technosignatures on exoplanets with high oxygen levels. If we have a lot of choices we might as well start with the low-hanging fruit.

There is also the notion that we are more likely to recognize technosignatures for familiar technology. We can better imagine technology that results from open air combustion and what that might look like, and therefore know what the technosignatures might be. If an alien species did develop an alien technology through an alternate pathway, we might not even recognize it when we see it.

As long as we are stuck with an N of 1 when it comes to life and technology, all we can do is thought experiments, and keep looking for some actual data.

