neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

Frozen Embryos Are Not People

Tue, 02/27/2024 - 5:07am

Amid much controversy, the Alabama State Supreme Court ruled that frozen embryos are children. They did not support their decision with compelling logic, with cited precedent (their decision is literally unprecedented), with practical considerations, or with sound ethical judgement. They essentially referenced god. It was a pretty naked religious justification.

The relevant politics have been hashed out by many others. What I want to weigh in on is the relevant logic. Two years ago I wrote about the question of when a fetus becomes a person. I laid out the core question here – when does a clump of cells become a person? Standard rhetoric in the anti-abortion community is to frame the question differently, claiming that from the point of fertilization we have human life. But from a legal, moral, and ethical perspective, that is not the relevant question. My colon is human life, but it’s not a person. Similarly, a frozen clump of cells is not a child.

This point inevitably leads to the rejoinder that those cells have the potential to become a person. But the potential to become a thing is not the same as being a thing. If allowed to develop, those cells have the potential to become a person – but they are not a person. This would be analogous to pointing to a stand of trees and claiming it is a house. Well, the wood in those trees has the potential to become a house. It has to go through a process, and at some point you have a house.

That analogy, however, breaks down when you consider that the trees will not become a house on their own. An implanted embryo will become a child (if all goes well) unless you do something to stop it. True but irrelevant to the point. The embryo is still not a person. The fact that the process to become a person is an internal rather than external one does not matter. Also, the Alabama Supreme Court is extending the usual argument beyond this point – those frozen embryos will not become children on their own either. They would need to go through a deliberate, external, artificial process in order to have the full potential to develop into a person. In fact, they would not exist without such a process.

But again – none of this really matters. The potential to become something through some kind of process, whether internal or external, spontaneous or artificial, does not make one thing morally equivalent to something else. A frozen clump of cells is not a child.

The history of how evangelicals and conservatives came to this rigid position – that personhood begins at fertilization – is complex, but illuminating. The quick version is that nowhere in the bible does it say life or personhood begins at conception, and many pre-1980 Christians believed that the bible says personhood begins at birth. However, the idea that the soul enters the body at conception goes back to the ancient Greeks. This view was largely accepted by Catholics and rejected by Protestants – until Jerry Falwell and then others started linking the Catholic view with American political conservatives, making it into a cultural issue that was good for outraging and motivating donors and voters.

Now it is a matter of unalterable faith that human personhood begins at conception. This is what leads to the bizarre conclusion that a frozen embryo is a child. But this is not a biblical belief, not a historically universal belief, and is certainly not a scientific belief.

On some level, however, the religious right in America knows they cannot just legislate their faith. They really want to, and they have a couple of strategies for doing so. One is to argue against the separation of church and state. They will rewrite history, cherry pick references, and mostly just assert what they want to be true. When in power, such as in Alabama, they will just ignore the separation (unless and until slapped down by the Supreme Court).  But failing that they will sometimes argue that their religious view is actually the scientific view. This, of course, is when I become most interested.

One arena where they have done that extensively is in the teaching of evolution. They have legally failed on the separation of church and state arguments. They therefore pivoted to the scientific ones, with creationism and later Intelligent Design. But these are all warmed over religious views, and any attempt at sounding scientific is laughable and has completely failed. They do provide many object lessons in pseudoscience and poor logic, however.

I believe they are doing the same thing with the abortion issue. They are saying that the scientific view is that human life begins at conception. But again, this is a deceptive framing. That is not the question – the question is when personhood begins. Once again, frozen cells are not a person.


Odysseus Lands on the Moon

Fri, 02/23/2024 - 5:00am

On December 11, 1972, Apollo 17 soft-landed on the lunar surface, carrying astronauts Gene Cernan and Harrison Schmitt. That was the last time anything American soft-landed on the Moon, over 50 years ago. It seems amazing that it's been that long. On February 22, 2024, Odysseus soft-landed on the Moon near the south pole. This was the first time a private company has achieved this goal, and the first time an American craft has landed on the Moon since Apollo 17.

Only five countries have ever achieved a soft landing on the Moon: America, China, Russia, Japan, and India. Only America did so with a crewed mission; the rest were robotic. Even though this feat was first accomplished in 1966 by the Soviet Union, it is still an extremely difficult thing to pull off. Getting to the Moon requires a powerful rocket. Inserting into lunar orbit requires a great deal of control, on a craft that is too far away for real-time remote control. This means you either need pilots on the craft, or the craft must be able to carry out a pre-programmed sequence to accomplish this goal. Then landing on the lunar surface is tricky. There is no atmosphere to slow the craft down, but also no atmosphere to get in the way. As the ship descends it burns fuel, which constantly changes the weight of the vehicle. It has to remain upright with respect to the lunar surface and reduce its speed by just the right amount to touch down softly – either with a human pilot or all by itself.
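To make the changing-mass point concrete, here is a minimal one-dimensional sketch of a powered descent. The vehicle numbers (altitude, mass, fuel, thrust, specific impulse) are invented for illustration and are not Odysseus's actual specifications; the point is only that as fuel burns off, the same thrust decelerates the lighter ship more, so the burn has to be continuously adjusted to touch down softly.

```python
# Minimal 1D sketch of a powered lunar descent: thrust must fight lunar gravity
# while the ship gets lighter as fuel burns. All vehicle numbers are invented
# for illustration; they are not the specs of Odysseus or any real lander.

G_MOON = 1.62   # lunar surface gravity, m/s^2
G0 = 9.81       # standard gravity, used for specific impulse, m/s^2

def descend(alt=500.0, vel=-30.0, dry=1000.0, fuel=300.0,
            max_thrust=10_000.0, isp=310.0, dt=0.05):
    """Follow a simple slow-down-as-you-approach profile; return touchdown speed (m/s)."""
    mass = dry + fuel
    while alt > 0.0:
        v_target = -max(1.0, 0.08 * alt)                   # descend slower near the ground
        accel_cmd = G_MOON + 0.5 * (v_target - vel)        # thrust accel needed to track it
        thrust = min(max(accel_cmd * mass, 0.0), max_thrust) if fuel > 0 else 0.0
        vel += (thrust / mass - G_MOON) * dt
        alt += vel * dt
        fuel = max(fuel - thrust / (isp * G0) * dt, 0.0)   # burn fuel...
        mass = dry + fuel                                  # ...so the ship keeps getting lighter
    return vel

print(f"touchdown speed: {descend():.1f} m/s")
```

With these made-up numbers the controller eases the ship down to roughly a meter per second; with the wrong throttle profile, or without accounting for the shrinking mass, the same hardware slams into the surface instead.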

The Odysseus mission is funded by NASA as part of their program to develop private industry to send instruments and supplies to the Moon. It is the goal of their Artemis program to establish a permanent base on the Moon, which will need to be supported by regular supply runs. In January another company with a NASA grant under the same program, Astrobotic Technology, sent their own craft to the Moon, the Peregrine. However, a fuel leak prevented the craft from orienting its solar panels toward the sun, and the mission had to be abandoned. This left the door open for the Odysseus mission to claim the achievement of being the first private craft to land on the Moon.

One of the primary missions of Odysseus is to investigate the effect of the rocket's exhaust on the landing site. When the Apollo missions landed, the lander's exhaust blasted regolith from the lunar surface at up to 3-4 km/second, faster than a bullet. With no atmosphere to slow down these particles, they blasted everything in the area and traveled a long distance. When Apollo 12 landed somewhat near the Surveyor 3 robotic lander, the astronauts walked to the Surveyor to bring back pieces for study. They found that the Surveyor had been "sandblasted" by the lander's exhaust.

This is a much more serious problem for Artemis than Apollo. Sandblasting on landing is not really a problem if there is nothing else of value nearby. But with a permanent base on the Moon, and possibly even equipment from other nations' lunar programs, this sandblasting can be dangerous and harm sensitive equipment. We need to know, therefore, how much damage it does, and how close landers can land to existing infrastructure.

There are potential ways to deal with the issue, including landing at a safe distance, but also erecting walls or curtains to block the blasted regolith from reaching infrastructure. A landing pad that is hardened and free of loose regolith is another option. These options, in turn, require a high degree of precision in terms of the landing location. For the Apollo missions, the designated landing areas were huge, with the landers often being kilometers away from their target. If the plan for Artemis is to land on a precise location, eventually onto a landing pad, then we need to not only pull off soft landings, but we need to hit a bullseye.

Fortunately, our technology is no longer in the Apollo era. SpaceX, for example, now routinely pulls off similar feats, with their reusable rockets that descend back down to Earth after launching their payload, and make a soft landing on a small target such as a floating platform.

The Odysseus craft will also carry out other experiments and missions to prepare the way for Artemis. This is also the first soft landing for the US near the south pole. All the Apollo missions landed near the equator. The craft will also be placing a laser retroreflector on the lunar surface. This is a reflector that returns a laser beam pointed at it directly back to its source. Such reflectors have been left on the Moon before and are used to do things like measure the precise distance between the Earth and Moon. NASA plans to place many retroreflectors on the Moon to use as a positioning system for spacecraft and satellites in lunar orbit.

This is all part of building an infrastructure for a permanent presence on the Moon. This, I think, is the right approach. NASA knows they need to go beyond the “flags and footprints” style one-off missions. Such missions are still useful for doing research and developing technology, but they are not sustainable. We should be focusing now on partnering with private industry, developing a commercial space industry, advancing international cooperation, developing long term infrastructure and reusable technology. While I’m happy to see the Artemis program get underway, I also hope this is the last time NASA develops these expensive one-time use rocket systems. Reusable systems are the way to go.

 


AI Video

Thu, 02/22/2024 - 5:05am

Recently OpenAI launched a website showcasing their latest AI application, Sora. This app, based on prompts similar to what you would use for ChatGPT or the image creation applications, like Midjourney or Dalle-2, creates a one minute photorealistic video without sound. Take a look at the videos and then come back.

Pretty amazing. Of course, I have no idea how cherry picked these videos are. Were there hundreds of failures for each one we are seeing? Probably not, but we don’t know. They do give the prompts they used, and they state explicitly that these videos were created entirely by Sora from the prompt without any further editing.

I have been using Midjourney quite extensively since it came out, and more recently I have been using ChatGPT 4 which is linked to Dalle-2, so that ChatGPT will create the prompt for you from more natural language instructions. It’s pretty neat. I sometimes use it to create the images I attach to my blog posts. If I need, for example, a generic picture of a lion I can just make one, rather than borrowing one from the internet and risking that some German firm will start harassing me about copyright violation and try to shake me down for a few hundred Euros. I also make images for personal use, mostly gaming. It’s a lot of fun.

Now I am looking forward to getting my hands on Sora. They say that they are testing the app, having given it to some creators to give them feedback. They are also exploring ways in which the app can be exploited for evil and trying to make it safe. This is where the app raises some tricky questions.

But first I have a technical question – how long will it be before AI video creation is so good that it becomes indistinguishable (without technical analysis) from real video? Right now Sora is about as good at video as Midjourney is at pictures. It's impressive, but there are some things it has difficulty doing. It doesn't actually understand anything, like physics or cause and effect, and is just inferring in its way what something probably looks like. Probably the best representation of this is how they deal with words. They will create pseudo-letters and words, reconstructing word-like images without understanding language.

Here is a picture I made through ChatGPT and Dalle-2 asking for an advanced spaceship with the SGU logo. Superficially very nice, but the words are not quite right (and this is after several iterations). You can see the same kind of thing in the Sora videos. Often there are errors in scale, in how things relate to each other, and objects just spawn out of nowhere. The video of the birthday party is interesting – I think everyone is supposed to be clapping, but it's just weird.

So we are still right in the middle of the uncanny valley with AI generated video. Also, this is without sound. The hardest thing to do with photorealistic CG people is make them talk. As soon as their mouth starts moving, you know they are CG. They don’t even attempt that in these videos. My question is – how close are we to getting past the uncanny valley and fixing all the physics problems with these videos?

On the one hand it seems close. These videos are pretty impressive. But this kind of technology historically (AI driving cars, speech recognition) tends to follow a curve where the last 5% of quality is as hard or harder than the first 95%. So while we may seem close, fixing the current problems may be really hard. We will have to wait and see.

The trickier question is – once we do get through the uncanny valley and can essentially create realistic video, paired with sound, of anything that is indistinguishable from reality, what will the world be like? We can already make fairly good voice simulations (again, at the 95% level). OpenAI says they are addressing these questions, and that's great, but once this code is out there in the world, who is to say everyone will adhere to good AI hygiene?

There are some obvious abuses of this technology to consider. One is to create fake videos meant to confuse the public and influence elections or for general propaganda purposes. Democracy requires a certain amount of transparency and shared reality. We are already seeing what happens when different groups cannot even agree on basic facts. This problem also cuts both ways – people can make videos to create the impression that something happened that didn’t, but also real video can be dismissed as fake. That wasn’t me taking a bribe, it was an AI fake video. This creates easy plausible deniability.

This is a perfect scenario for dictators and authoritarians, who can simply create and claim whatever reality they wish. The average person will be left with no solid confidence in what reality is. You can’t trust anything, and so there is no shared truth. Best put our trust in a strongman who vows to protect us.

There are other ways to abuse this technology, such as violating other people’s privacy by using their image. This could also revolutionize the porn industry, although I wonder if that will be a good thing.

While I am excited to get my hands on this kind of software for my personal use, and I am excited to see what real artists and creators can do with the medium, I also worry that we again are at the precipice of a social disruption. It seems that we need to learn the lessons of recent history and try to get ahead of this technology with regulations and standards. We can’t just leave it up to individual companies. Even if most of them are responsible, there are bound to be ones that aren’t. Not only do we need some international standards, we need the technology to enforce them (if that’s even possible).

The trick is, even if AI generated videos can be detected and revealed, the damage may already be done. The media will have to take a tremendous amount of responsibility for any video they show, and this includes social media giants. At the very least any AI generated video should be clearly labeled as such. There may need to be several layers of detection to make this effective. At least we need to make it as difficult as possible, so not every teenager with a cellphone can interfere with elections. At the creation end, AI-created video can be watermarked, for example. There may also be several layers of digital watermarking to alert social media platforms so they can properly label such videos, or refuse to host them depending on content.

I don’t have the final answers, but I do have a strong feeling we should not just go blindly into this new world. I want a world in which I can write a screenplay, and have that screenplay automatically translated into a film. But I don’t want a world in which there is no shared reality, where everything is “fake news” and “alternative facts”. We are already too close to that reality, and taking another giant leap in that direction is intimidating.


Scammers on the Rise

Tue, 02/20/2024 - 5:06am

Good rule of thumb – assume it's a scam. If anyone contacts you, or in any unusual encounter, assume it's a scam and you will probably be right. Recently I was called on my cell phone by someone claiming to be from Venmo. They asked me to confirm whether I had just made two fund transfers from my Venmo account, both in the several hundred dollar range. I had not. OK, they said, these were suspicious withdrawals and if I did not make them then someone has hacked my account. They then transferred me to someone from the bank that my Venmo account is linked to.

I instantly knew this was a scam for several reasons, but even just the overall tone and feel of the exchange had my spidey senses tingling. The person was just a bit too helpful and friendly. They reassured me multiple times that they will not ask for any personal identifying information. And there was the constant and building pressure that I needed to act immediately to secure my account, but not to worry, they would walk me through what I needed to do. I played along, to learn what the scam was. At what point was the sting coming?

Meanwhile, I went directly to my bank account on a separate device and could see there were no such withdrawals. When I pointed this out they said that was because the transactions were still pending (but I could stop them if I acted fast). Of course, my account would show pending transactions. When I pointed this out I got a complicated answer that didn't quite make sense. They gave me a report number that would identify this event, and I could use that number when they transferred me to someone allegedly from my bank to get further details. Again, I was reassured that they would not ask me for any identifying information. It all sounded very official. The bank person confirmed (even though it still did not appear on my account) that there was an attempt to withdraw funds and sent me back to the Venmo person who would walk me through the remedy.

What I needed to do was open my Venmo account. Then I needed to hit the send button in order to send a report to Venmo. Ding, ding, ding! That was the sting. They wanted me to send money from my Venmo account to whatever account they tricked me into entering. "You mean the button that says 'send money', that's the button you want me to press?" Yes, because that would "send" a report to their fraud department to resolve the issue. I know, it sounds stupid, but it only has to work a fraction of the time. I certainly have elderly and not tech savvy relatives who I could see falling for this. At this point I confronted the person with the fact that they were trying to scam me, but they remained friendly and did not drop the act, so eventually I just hung up.

Digital scammers like this are growing in number, and getting more sophisticated. By now you may have heard about the financial advice columnist who was scammed out of $50,000. Hearing the whole story at the end, knowing where it is all leading, does make it seem obvious. But you have to understand the panic that someone can feel when confronted with the possibility that their identity has been stolen or their life savings are at risk. That panic is then soothed by a comforting voice who will help you through this crisis. The FBI documented $10.2 billion in online fraud in 2022. This is big business.

We are now living in a world where everyone needs to know how to defend themselves from such scams. First, don’t assume you have to be stupid to fall for a scam. Con artists want you to think that – a false sense of security or invulnerability plays into their hands.

There are many articles detailing good internet hygiene to protect yourself, but frequent reminders are helpful, so here is my list. As I said up top – assume it’s a scam. Whenever anyone contacts me I assume it’s a scam until proven otherwise. That also means – do not call that number, do not click that link, do not give any information, do not do anything that someone who contacted you (by phone, text, e-mail, or even snail mail) asks you to do. In many cases you can just assume it’s a scam and comfortably ignore it. But if you have any doubt, then independently look up a contact number for the relevant institution and call them directly.

Do not be disarmed by a friendly voice. The primary vulnerability of your digital life is not some sophisticated computer hack, but a social hack – someone manipulating you, trying to get you to act impulsively or out of fear. They also know how to make people feel socially uncomfortable. If you push back, they will make it seem like you are being unreasonable, rude, or stupid for doing so. They will push whatever social and psychological buttons they can. This means you have to be prepared, you have to be armed with a defense against this manipulation. Perhaps the best defense is simply protocol. If you don't want to be rude, then just say, "Sorry, I can't do that." Take the basic information and contact the relevant institution directly. Or – just hang up. Remember, they are trying to scam you. You owe them nothing. Even if they are legit, it's their fault for breaking protocol – they should not be asking you to do something risky.

When in doubt, ask someone you know. Don't be pressured by the alleged need to act fast. Don't be pressured into not telling anyone or contacting them directly. Always ask yourself – is there any possible way this could be a scam? If there is, then it probably is a scam.

It’s also important to know that anything can be spoofed. A scammer can make it seem like the call is coming from a legit organization, or someone you know. Now, with AI, it’s possible to fake someone’s voice. Standard protocol should always be, take the information, hang up, look up the number independently and contact them directly. Just assume, if they contacted you, it’s a scam. Nothing should reassure you that it isn’t.


Fake Fossils

Mon, 02/19/2024 - 4:46am

In 1931 a fossil lizard was recovered from the Italian Alps, believed to be a 280 million year old specimen. The fossil was also rare in that it appeared to have some preserved soft tissue. It was given the species designation Tridentinosaurus antiquus and was thought to be part of the Protorosauria group.

A recent detailed analysis of the specimen, hoping to learn more about the soft tissue elements of the fossil, revealed something unexpected. The fossil is a fake (at least mostly). What appears to have happened is that a real fossil which was poorly preserved was "enhanced" to make it more valuable. There are real fossilized femur bones and some bony scales on what was the back of the lizard. But the overall specimen was poorly preserved and not of much value. What the forger did was carve out the outline of the lizard around the preserved bones and then paint it black to make it stand out, giving the appearance of carbonized soft tissue.

How did such a fake go undetected for 93 years? Many factors contributed to this delay. First, there were real bones in the specimen and it was taken from an actual fossil deposit. Initial evaluation did reveal some kind of lacquer on the specimen, but this was common practice at the time as a way of preserving fossils, so it did not raise any red flags. Also, characterization of the nature of the black material required UV photography and microscopic examination using technology not available at the time. This doesn't mean they couldn't have revealed it as a fake back then, but it is certainly much easier now.

It also helps to understand how fossils are typically handled. Fossils are treated as rare and precious items. They are typically examined with non-destructive techniques. It is also common for casts to be made and photographs taken, with the original fossils then catalogued and stored away for safety. Not every fossil has a detailed examination before being put away in a museum drawer. There simply aren’t the resources for that.

No fossil fake can withstand detailed examination. There is no way to forge a fossil that cannot be detected by the many types of analysis that we have available today. Some fakes are detected immediately, usually because of some feature that a paleontologist will recognize as fake. Others require high tech analysis. The most famous fake fossil, Piltdown Man, was a chimera of modern human and ape bones aged to look old. The fraud was revealed by drilling into the bones revealing they were not fossilized.

There was also an entire industry of fake fossils coming out of China. These are mostly for sale to private collectors, exploiting the genuine fossil deposits in China, especially of feathered dinosaurs. It is illegal to export real fossils from China, but not fakes. In at least one case, paleontologists were fooled for about a year by a well-crafted fake. Some of these fakes were modified real (but not very valuable) fossils while others were entire fabrications. The work was often so good, they could have just sold them as replicas for decent amounts of money. But still, claiming they were real inflated the price.

Creationists would have you believe that all fossils are fake, and will point to known cases as evidence. But this is an absurd claim. The Smithsonian alone boasts a collection of 40 million fossil specimens. Most fossils are discovered by paleontologists looking for them in geological locations that correspond to specific periods of time and have conditions amenable to fossil preservation. There is transparency, documentation, and a provenance to the fossils that would make a forgery impossible.

There are a few features that fake fossils have in common that in fact reinforce the nature of genuine fossils. Fake fossils generally were not found by scientists. They were found by amateurs who claim to have gotten lucky. The source and provenance of the fossils are therefore often questionable. This does not automatically mean they are fakes. There is a lot of non-scientific activity that can dig up fossils or other artifacts by chance. Ideally as soon as the artifacts are detected scientists are called in to examine them first hand, in situ. But that does not always happen.

Perhaps most importantly, fake fossils rarely have an enduring impact on science. Many are just knock-offs, and therefore even if they were real they are of little scientific value. They are just copies of real fossils. Fakes purported to be of unique fossil specimens, like Piltdown, have an inherent problem. If they are unique, then they would tell us something about the evolutionary history of the group. But if they are fake, they can’t be telling us something real. Chances are the fakes will not comport to the actual fossil record. They will be enigmas, and likely will be increasingly out of step with the actual fossil record as more genuine specimens are found.

That is exactly what happened with Piltdown. Some paleontologists were immediately skeptical of the find, and it was always thought of as a quirky specimen that scientists did not know how to fit into the tree of life. As more hominid specimens were found Piltdown became increasingly the exception, until finally scientists had enough, pulled the original specimens out of the vault, and showed them to be fakes. The same is essentially true of the Tridentinosaurus antiquus specimen. Paleontologists could not figure out exactly where it fit taxonomically, and did not know how it had apparent soft tissue preservation. It was an enigma, which prompted the analysis that revealed it to be a fake.

Paleontology is essentially the world’s largest puzzle, with each fossil specimen being a puzzle piece. A fake fossil is either redundant or a puzzle piece that does not fit.


Biofrequency Gadgets are a Total Scam

Fri, 02/16/2024 - 4:51am

I was recently asked what I thought about the Solex AO Scan. The website for the product includes this claim:

AO Scan Technology by Solex is an elegant, yet simple-to-use frequency technology based on Tesla, Einstein, and other prominent scientists’ discoveries. It uses delicate bio-frequencies and electromagnetic signals to communicate with the body.

The AO Scan Technology contains a library of over 170,000 unique Blueprint Frequencies and created a hand-held technology that allows you to compare your personal frequencies to these Blueprints in order to help you achieve homeostasis, the body’s natural state of balance.

This is all hogwash (to use the technical term). Throwing out the names Tesla and Einstein, right off, is a huge red flag. This is a good rule of thumb – whenever these names (or Galileo) are invoked to hawk a product, it is most likely a scam. I guess you can say that any electrical device is based on the work of any scientist who had anything to do with electromagnetism.

What are "delicate bio-frequencies"? Nothing, they don't exist. The idea, which is an old one used in scam medical devices for decades, is that the cells in our bodies have their own unique "frequency" and you want these frequencies to be in balance and healthy. If the frequencies are blocked or off, in some way, this causes illness. You can therefore read these frequencies to diagnose diseases or illness, and you can likewise alter these frequencies to restore health and balance. This is all complete nonsense, not based on anything in reality.

Living cells, of course, do have tiny electromagnetic fields associated with them. Electrical potential is maintained across all cellular membranes. Specialized cells, like muscles and nervous tissue, use this potential as the basis for their function. But there is no magic “frequency” associated with these fields. There is no “signature” or “blueprint”. That is all made up nonsense. They claim to have researched 170,000 “Blueprint Frequencies” but the relevant science appears to be completely absent from the published literature. And of course there are no reliable clinical trials indicating that any type of frequency-based intervention such as this has any health or medical application.

As an aside, there are brainwave frequencies (although this is not what they are referring to). These are caused by networks of neurons in the brain all firing together with a regular frequency. We can also pick up the electrical signals caused by the contraction of the heart – a collection of muscle cells all firing in synchrony. When you contract a skeletal muscle, we can also record that electrical activity – again, because there are lots of cells activating in coordination. Muscle contractions have a certain frequency to them. Motor units don't just contract, they fire at an increasing frequency as they are recruited, peaking (in a healthy muscle) at 10 Hz. We measure these frequencies to look for nerve or motor neuron damage. If you cannot recruit as many motor units, the ones you can recruit will fire faster to compensate.

These are all specialized tests looking at specific organs with many cells firing in a synchronous fashion. If you are just looking at the body in general, not nervous tissue or muscles, the electrical signals are generally too tiny to measure and would just be white noise anyway. You will not pick up “frequencies”, and certainly not anything with any biological meaning.

In general, be very skeptical of any “frequency” based claims. That is just a science-sounding buzzword used by some to sell dubious products and claims.


Using AI and Social Media to Measure Climate Change Denial

Thu, 02/15/2024 - 5:14am

A recent study finds that 14.8% of Americans do not believe in global climate change. This number is roughly in line with what recent surveys have found, such as this 2024 Yale study which put the figure at 16%. In 2009, by comparison, the figure was at 33% (although this was a peak – the 2008 result was 21%). The numbers are also encouraging when we ask about possible solutions, with 67% of Americans saying that we should prioritize development of green energy and should take steps to become carbon neutral by 2050. The good news is that we now have a solid majority of Americans who accept the consensus on climate change and broadly support measures to reduce our carbon footprint.

But there is another layer to this study I first mentioned – the methods used in deriving the numbers. It was not a survey. It used artificial intelligence to analyze posts on X (Twitter) and their networks. The fact that the results align fairly well with more tried and true methods, like surveys, is somewhat validating of the approach. Of course surveys can be variable as well, depending on exactly how questions are asked and how populations are targeted. But multiple well designed surveys by experienced institutions, like Pew, can create an accurate picture of public attitudes.

The advantage of analyzing social media is that it can more easily provide vast amounts of data. The authors report:

We used a Deep Learning text recognition model to classify 7.4 million geocoded tweets containing keywords related to climate change. Posted by 1.3 million unique users in the U.S., these tweets were collected between September 2017 and May 2019.

That's a lot of data. As is almost always the case, however, there is a price to pay for using methods which capture such vast amounts of data – that data is not strictly controlled. It's observational. It is a self-selective group – people who post on X. It therefore may not be representative of the general population. Because the results broadly agree with more traditional survey methods, however, this does suggest that any such selective effects balanced out. Also, they adjusted for any skew toward certain demographic groups – so if younger people were overrepresented in the sample, they adjusted for that.
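The study's actual classifier was a deep learning model trained on a large labeled corpus of tweets. As a much smaller illustration of the general approach – label some example posts, vectorize the text, train a classifier, then score new posts and aggregate – here is a hedged sketch using scikit-learn. The example posts and labels are invented, not the study's data.

```python
# Toy stance classifier in the spirit of the study's approach (not their model).
# The real work used a deep learning classifier trained on millions of labeled
# geocoded tweets; this just shows the basic classify-then-aggregate pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = accepts climate change, 0 = denies it
train_texts = [
    "the evidence for human-caused warming keeps piling up",
    "we need to cut emissions and reach carbon neutral by 2050",
    "climate change is a hoax invented to raise taxes",
    "the climate has always changed, this is natural not man made",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Classify new (also invented) posts, then aggregate into a denial share
new_posts = ["global warming alarmism is junk science",
             "record heat again, the warming trend is obvious"]
pred = model.predict(new_posts)
denial_share = 1 - sum(pred) / len(pred)
print(f"estimated denial share in this sample: {denial_share:.0%}")
```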

The results also showed some more detail. Because the posts were geocoded the analysis can look at regional differences. They found broadly that acceptance of global warming science was highest on the coasts, and lower in the Midwest and South. There were also significant county level differences. They found:

Political affiliation has the strongest correlation, followed by level of education, COVID-19 vaccination rates, carbon intensity of the regional economy, and income.

Climate change denial, again in line with prior data, correlated strongly with identifying as a Republican. That was the dominant factor. It's likely that other factors, like COVID-19 vaccination rates, also derive from political affiliation. But it does suggest that those who reject the scientific consensus and expert opinion on climate change are more likely to do so on other issues as well.

Because they did network analysis they were also able to analyze who is talking to whom, and who the big influencers were. They found, again unsurprisingly, that there are networks of users who accept climate change and networks that reject climate change, with very little communication between the networks. This shows that the echo-chamber effect on social media is real, at least on this issue. This is a disturbing finding, perhaps the most disturbing of this study (even if we already knew this).
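As a toy illustration of what that kind of network analysis looks for, here is a sketch using networkx on an invented interaction graph (not the study's data or method): two densely connected camps joined by a single link, which a standard community-detection algorithm readily separates, with almost no edges crossing between them.

```python
# Toy echo-chamber illustration: two tight clusters, one cross-camp edge.
# Purely invented data, not the study's actual network or method.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
accept = ["a1", "a2", "a3", "a4"]
deny = ["d1", "d2", "d3", "d4"]
# Dense links within each camp
G.add_edges_from([(u, v) for i, u in enumerate(accept) for v in accept[i + 1:]])
G.add_edges_from([(u, v) for i, u in enumerate(deny) for v in deny[i + 1:]])
G.add_edge("a1", "d1")  # the rare cross-camp conversation

communities = greedy_modularity_communities(G)
cross_edges = [(u, v) for u, v in G.edges()
               if not any(u in c and v in c for c in communities)]
print("communities found:", [sorted(c) for c in communities])
print("edges between communities:", cross_edges)
```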

It reflects in data what many of us feel – that social media and the internet have transformed our society from one where there is a basic level of shared culture and facts to one in which different factions are siloed in different realities. There have always been different subcultures, with vastly different ideologies and life experiences. But the news was the news, perhaps with different spin and emphasis. Now it is possible for people to exist in completely different and relatively isolated information ecosystems. We don't just have different priorities and perspectives – we live in different realities.

The study also identified individual influencers who were responsible for many of the climate change denial posts. Number one among them was Trump, followed by conservative media outlets. Trump is, of course, a polarizing figure, a poster child for the echo-chamber social media phenomenon. For many he represents either salvation or the destruction of American democracy.

On the bright side, it does seem there is still the possibility of movement in the middle. The middle may have shrunk, but still holds some sway in American politics, and there does seem to be a number of people who can be persuaded by facts and reason. We have moved the needle on many scientific issues, and attitudes have improved on topics such as climate change, GMOs, and nuclear power. The next challenge is fixing our dysfunctional political system so we can translate solid public majorities into tangible action.


Flow Batteries – Now With Nanofluids

Tue, 02/13/2024 - 5:12am

Battery technology has been advancing nicely over the last few decades, with fairly predictable incremental improvements in energy density, charging time, stability, and lifecycle. We now have lithium-ion batteries with a specific energy of 296 Wh/kg – these are in use in existing Teslas. This translates to BEVs with ranges of 250-350 miles per charge, depending on the vehicle. That is more than enough range for most users. Incremental advances continue, and every year we should expect newer Li-ion batteries with slightly better specs, which add up quickly over time. But still, range anxiety is a thing, and batteries with that range are heavy.
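To put rough numbers on that weight, here is a back-of-the-envelope calculation of the cell mass needed for a 300-mile pack at different specific energies. The 250 Wh/mile consumption figure is my assumption for a typical EV, not a figure from the post; the 296, 500, and 850 Wh/kg values are the figures discussed in this post.

```python
# Back-of-the-envelope: cell mass needed for a given range at a given specific energy.
# The 250 Wh/mile consumption figure is an assumed value for a mid-size EV.
def pack_mass_kg(range_miles, wh_per_mile=250, specific_energy_wh_per_kg=296):
    return range_miles * wh_per_mile / specific_energy_wh_per_kg

for wh_kg in (296, 500, 850):   # today's cells, silicon anode, the nanofluid claim below
    print(f"{wh_kg} Wh/kg -> {pack_mass_kg(300, specific_energy_wh_per_kg=wh_kg):.0f} kg "
          "of cells for a 300-mile pack")
```

At 296 Wh/kg that is roughly 250 kg of cells just for the energy storage; doubling or tripling the specific energy cuts that mass proportionally.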

What would be nice is a shift to a new battery technology with a leap in performance. There are many battery technologies being developed that promise just that. We actually already have one, shifting from graphite anodes to silicon anodes in the Li-ion battery, with an increase in specific energy to 500 Wh/kg. Amprius is producing these batteries, currently for aviation but with plans to produce them for BEVs within a couple of years. Panasonic, which builds 10% of the world's EV batteries and contracts with Tesla, is also working on a silicon anode battery and promises to have one in production soon. That is basically a doubling of battery capacity from the average in use today, and puts us on a path to further incremental advances. Silicon anode lithium-ion batteries should triple battery capacity over the next decade, while also making a more stable battery that uses fewer (or no – they are working on this too) rare earth elements and no cobalt. So even without any new battery breakthroughs, there is a very bright future for battery technology.

But of course, we want more. Battery technology is critical to our green energy future, so while we are tweaking Li-ion technology and getting the most out of that tech, companies are working to develop something to replace (or at least complement) Li-ion batteries. Here is a good overview of the best technologies being developed, which include sodium-ion, lithium-sulphur, lithium-metal, and solid state lithium-air batteries. As an aside, the reason lithium is a common element here is because it is the third-lightest element (after hydrogen and helium) and the first that can be used for this sort of battery chemistry. Sodium is right below lithium on the periodic table, so it is the next lightest element with similar chemistry.

But for the rest of this article I want to focus on one potential successor to Li-ion batteries – flow batteries. Flow batteries are so called because they use two liquid electrochemical substances to carry their charge and create electrical current. Flow batteries are stable, less prone to fires than lithium batteries, and have a potential critical advantage – they can be recharged by swapping out the electrolyte. They can also be recharged in the conventional way, by plugging them in. So theoretically a flow battery could provide the same BEV experience as a current Li-ion battery, but with an added option. For "fast charging" you could pull into a station, connect a hose to your car, and swap out spent electrolyte for fresh electrolyte, fully charging your vehicle in the same time it would take to fill up a tank. This is the best of both worlds – for those who own their own off-street parking space (82% of Americans) routine charging at home is super convenient. But for longer trips, the option to just "fill the tank" is great.

But there is a problem. As I have outlined previously, battery technology is one of those tricky technologies that requires a suite of characteristics in order to be functional, and any one falling short is a deal-killer. For flow batteries the problem is that their energy density is only about 10% that of Li-ion batteries. This makes them unsuitable for BEVs. This is also an inherent limitation of chemistry – you can only dissolve so much solute in a liquid. However, as you likely have guessed based upon my headline, there is also a solution to this limitation – nanofluids. Nanoparticles suspended in a fluid can potentially have much greater energy density.

Research into this approach actually goes back to 2009, at Argonne National Laboratory and the Illinois Institute of Technology, who did the initial proof of concept. Then in 2013 ARPA-E gave a grant to the same team to build a working prototype, which they did. Those same researchers then spun off a private company, Influit Energy, to develop a commercial product, with further government contracts for such development. As an aside, we see here an example of how academic researchers, government funding, and private industry work together to bring new cutting edge technology to market. It can be a fruitful arrangement, as long as the private companies give back to the public in return for the public support they built upon.

Where is this technology now? John Katsoudas, a founder and chief executive of Influit, claims that they are developing a battery with a specific energy of 550 to 850 Wh/kg, with the potential to go even higher. That's roughly double to triple current EV batteries. They also claim these batteries (soup to nuts) will be cost competitive with Li-ion batteries. Of course, claims from company executives always need to be taken with a huge grain of salt, and I don't get too excited until a product is actually in production, but this does all look very promising.

Part of the technology involves how many nanoparticles they can cram into their electrolyte fluid. They claim they are currently up to 50% by weight, but believe they can push that to 80%. At 80% nanoparticles, the fluid would have the viscosity of motor oil.

A big part of any new technology, often neglected in the hype, is infrastructure. We are facing this issue with BEVs. The technology is great, but we need an infrastructure of charging stations. They are being built, but currently are a limiting factor to public acceptance of the technology (lack of chargers contributes to range anxiety). The same issue would exist with nanoparticle flow batteries. However, they would have at least as good an infrastructure for normal recharging as current BEVs. Plus they would benefit from pumping electrolyte fluid as a means of fast charging. Such fluid could be processed and recharged on site, but also could be trucked or piped as with existing gasoline infrastructure. Still, this is not like flipping a switch. It could take a decade to build out an adequate infrastructure. But again, meanwhile at least such batteries can be charged as normal.

I don’t know if this battery technology will be the one to displace lithium-ion batteries. A lot will depend on which technologies make it to market first, and what infrastructure investments we make. It’s possible that the silicon anode Li-ion batteries may improve so quickly they will eclipse their competitors. Or the solid state batteries may make a big enough leap to crush the competition. Or companies may decide that pumping fluid is the path to public acceptance and go all-in on flow batteries. It’s a good problem to have, and will be fascinating to watch this technology race unfold.

The only prediction that seems certain is that battery technology is advancing quickly, and by the 2030s we should have batteries for electric vehicles with 2-3 times the energy density and specific energy of those in common use today. That will be a different world for BEVs.

 


The Exoplanet Radius Gap

Mon, 02/12/2024 - 5:03am

As of this writing, there are 5,573 confirmed exoplanets in 4,146 planetary systems. That is enough exoplanets, planets around stars other than our own sun, that we can do some statistics to describe what’s out there. One curious pattern that has emerged is a relative gap in the radii of exoplanets between 1.5 and 2.0 Earth radii. What is the significance, if any, of this gap?

First we have to consider if this is an artifact of our detection methods. The most common method astronomers use to detect exoplanets is the transit method – carefully observe a star over time, precisely measuring its brightness. If a planet moves in front of the star, the brightness will dip, remain low while the planet transits, and then return to its baseline brightness. This produces a classic light curve that astronomers recognize as a planet orbiting that star in the plane of observation from the Earth. The first time such a dip is observed, it is a suspected exoplanet, and if the same dip is seen again that confirms it. This also gives us the orbital period. This method is biased toward exoplanets with short periods, because they are easier to confirm. If an exoplanet has a period of 60 years, that would take 60 years to confirm, so we haven't confirmed a lot of those.
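For a sense of scale, the fractional dip in brightness is roughly the square of the planet-to-star radius ratio, depth ≈ (R_planet / R_star)². A quick illustrative calculation for a Sun-sized star (the radius values are just examples spanning the gap):

```python
# Transit depth scales as the square of the planet-to-star radius ratio.
# Illustrative values only, for a Sun-sized star.
R_SUN_IN_EARTH_RADII = 109.2   # approximate

def transit_depth(planet_radius_earth, star_radius_sun=1.0):
    r_star_earth = star_radius_sun * R_SUN_IN_EARTH_RADII
    return (planet_radius_earth / r_star_earth) ** 2

for rp in (1.0, 1.5, 2.0, 11.2):   # Earth, the two gap edges, Jupiter-sized
    print(f"{rp:4.1f} Earth radii -> {transit_depth(rp) * 100:.4f}% dip in brightness")
```

An Earth-sized planet dims a Sun-like star by only about 0.008%, which is why the measurement has to be so precise, and why planets around smaller stars are easier to catch.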

There is also the wobble method. We can observe the path that a star takes through the sky. If that path wobbles in a regular pattern, that is likely due to the gravitational tug from a large planet or other dark companion that is orbiting it. This method favors more massive planets closer to their parent star. Sometimes we can also directly observe exoplanets by blocking out their parent star and seeing the tiny bit of reflected light from the planet. This method favors large planets distant from their parent star. There are also a small number of exoplanets discovered through gravitational microlensing, an effect of general relativity.

None of these methods, however, explain the 1.5 to 2.0 radii gap. It’s also likely not a statistical fluke given the number of exoplanets we have discovered. Therefore it may be telling us something about planetary evolution. But there are lots of variables that determine the size of an exoplanet, so it can be difficult to pin down a single explanation.

One theory has to do with the atmospheres of planets. Exoplanets that are small and rocky but larger than Earth are called super-earths. Here is an example of a recent super-earth discovered in the habitable zone of a nearby red star – TOI-715 b. It has a mass of 3.02 earth masses, and a radius 1.55 that of Earth. So it is right on the edge of the gap. I calculated the surface gravity of this planet, which is 1.25 g. It has an orbital period of 19.3 days, which means it is likely tidally locked to its parent star. This planet was discovered by the TESS telescope using the transit method.
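That surface gravity follows from scaling Earth's gravity by mass over radius squared (g ∝ M/R², with both in Earth units). A quick check of the arithmetic, which lands at about 1.26 g, in line with the post's figure:

```python
# Surface gravity in Earth units: g scales as M / R^2 (mass and radius in Earth units).
mass_earths = 3.02      # TOI-715 b mass, from the post
radius_earths = 1.55    # TOI-715 b radius, from the post

surface_gravity_g = mass_earths / radius_earths ** 2
print(f"TOI-715 b surface gravity: about {surface_gravity_g:.2f} g")   # about 1.26 g
```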

Planets like TOI-715 b, at or below the gap, likely are close to their parent stars and have relatively thin atmospheres (something like Earth or less). If the same planet were further out from its parent star, however, with that mass it would likely retain a thick atmosphere. This would increase the apparent radius of the planet using the transit method (which cannot distinguish a rocky world from a thick atmosphere), increasing its size to greater than two Earth radii – vaulting it across the gap. These worlds, above the gap, are called mini-Neptunes or sub-Neptunes. So according to this theory the main factor is distance from the parent star and whether or not the planet can retain a thick atmosphere. When small rocky worlds get big enough and far enough from their parent star, they jump to the sub-Neptune category by retaining a thick atmosphere.

But as I said, there are lots of variables here, such as the mass of the parent star.  A recent paper adds another layer – what about planets that migrate? One theory of planetary formation (mainly through simulations) holds that some planets may migrate either closer to or farther away from their parent stars over time. Also the existence of “hot Jupiters” – large gas planets very close to their parent stars – suggests migration, as such planets likely could not have formed where they are.  It is likely that Neptune and Uranus migrated farther away from the sun after their formation. This is part of a broader theory about the stability of planetary systems. Such systems, almost by definition, are stable. If they weren’t, they would not last for long, which means we would not observe many of them in the universe. Our own solar system has been relatively stable for billions of years.

There are several possible explanations for this remarkable stability. One is that this is how planetary systems evolve. The planets form from a rotating disc of material, which means they form roughly circular orbits all going in the same plane and same direction. But it is also possible that early stellar systems develop many more planets than ultimately survive. Those in stable orbits survive long term, while those in unstable orbits either fall into their parent star or get ejected from the system to become rogue planets wandering between the stars. There is therefore a selection for planets in stable orbits. There is also now a third process likely happening, and that is planetary migration. Planets may migrate to more stable orbits over time. Eventually all the planets in a system jockey into position in stable orbits that can last billions of years.

Observing exoplanetary systems is one way to test our theories about how planetary systems form and evolve. The relative gap in planet size is one tiny piece of this puzzle. With migrating planets, what the paper says is likely happening is that if you have sub-Neptunes that migrate closer to their parent star, the thick atmosphere will be stripped away, leaving behind a smaller rocky world below the gap. But they also hypothesize that a very icy world may migrate closer to its parent star, melting the ice and forming a thick atmosphere, jumping the gap to the larger planetary size.

What all of these theories of the gap have in common is the presence or absence of a thick atmosphere, which makes sense. There are some exoplanets in the gap, but it’s just much less likely. It’s hard to get a planet right in the gap, because either it’s too light to have a thick atmosphere, or too massive not to have one. The gap can be seen as an unstable region of planetary formation.

The more time that goes by the more data we will have and the better our exoplanet statistics will be. Not only will we have more data, but longer observation periods allow for the confirmation of planets with longer orbital periods, so our data will become progressively more representative. Also, better telescopes will be able to detect smaller worlds in orbits more difficult to observe, so again the data will become more representative of what’s out there.

Finally, I have to add, with greater than 5000 exoplanets and counting, we have still not found an Earth analogue – an exoplanet that is a small rocky world of roughly Earth size and mass in the habitable zone of an orange or yellow star. Until we find one, it's hard to do statistics, except to say that truly Earth-like planets are relatively rare. But I anxiously await the discovery of the first true Earth twin.


JET Fusion Experiment Sets New Record

Fri, 02/09/2024 - 5:06am

Don't get excited. It's always nice to see incremental progress being made with the various fusion experiments happening around the world, but we are still a long way off from commercial fusion power, and this experiment doesn't really bring us any closer, despite the headlines. Before I get into the "maths", here is some quick background.

Fusion is the process of combining light elements into heavier elements. This is the process that fuels stars. We have been dreaming about a future powered by clean abundant fusion energy for at least 80 years. The problem is – it's really hard. In order to get atoms to smash into each other with sufficient energy to fuse, you need high temperatures and pressures, like those at the core of our sun. We can't replicate the density and pressure at a star's core, so we have to compensate here on Earth with even higher temperatures.

There are a few basic fusion reactor designs. The tokamak design (like the JET reactor) is a torus, with a plasma of hydrogen isotopes (usually deuterium and tritium) inside the torus contained by powerful magnetic fields. The plasma is heated and squeezed by brute magnetic force until fusion happens. Another method, the pinch method, also uses magnetic fields, but with a stream of plasma that gets pinched at one point to high density and temperature. Then there is inertial confinement, which essentially uses an implosion created by powerful lasers to create a brief moment of high density and temperature. More recently a group has used sonic cavitation to create an instance of fusion (rather than sustained fusion). These methods are essentially in a race to create commercial fusion. It's an exciting (if very slow motion) race.

There are essentially three thresholds to keep an eye out for. The first is fusion – does the setup create any measurable fusion? You might think that this is the ultimate milestone, but it isn't. Remember, the goal for commercial fusion is to create net energy. Fusion creates energy through heat, which can then be used to run a conventional turbine. So just achieving fusion, while super nice, is not even close to where we need to get. If you are putting thousands of times the energy into the process as you get out, that is not a commercial power plant. The next threshold is "ignition", or sustained fusion in which the heat energy created by fusion is sufficient to sustain the fusion process. (This is not relevant to the cavitation method, which does not even try to sustain fusion.) A couple of labs have recently achieved this milestone.

But wait, there's more. Even though they achieved ignition, and (as was widely reported) produced net fusion energy, they are still far from a commercial plant. The fusion created more energy than went into the fusion itself. But the entire process still used about 100 times the total energy output. So we are only about 1% of the way toward the ultimate goal of total net energy. When framed that way, it doesn't sound like we are close at all. We need lasers or powerful magnets that are more than 100 times as efficient as the ones we are using now, or the entire method needs to pick up an order of magnitude or two of greater efficiency. That is no small task. It's quite possible that we simply can't do it with existing materials and technology. Fusion power may have to wait for some future unknown technology.

In the meantime we are learning an awful lot about plasmas and how to create and control fusion. It’s all good. It’s just not on a direct path to commercial fusion. It’s not just a matter of “scaling up”. We need to make some fundamental changes to the whole process.

So what record did the JET fusion experiment break? Using the tokamak design – a torus constrained by magnetic fields – they were able to create fusion and generate “69 megajoules of fusion energy for five seconds.” Although the BBC reports it produced “69 megajoules of energy over five seconds.” That is not a subtle difference. Was it 69 megajoules per second for five seconds, or was it 13.8 megajoules per second for five seconds, for a total of 69 megajoules? More to the point – what percentage of the energy input was this? I could not find anyone reporting it (and ChatGPT didn’t know). But I did find this – “In total, when JET runs, it consumes 700 – 800 MW of electrical power.” A joule is one watt of power for one second.

It’s easy to get the power vs energy units confused, and I’m trying not to do that here, but the sloppy reporting is no help. Watts are a measure of power. Watts multiplied by time are a measure of energy, so a watt-second or watt-hour is a unit of energy. From here:

1 Joule (J) is the MKS unit of energy, equal to the force of one Newton acting through one meter.
1 Watt is the power of a Joule of energy per second

Since joules are a measure of energy, it makes more sense that the figure is a total amount of energy created over 5 seconds (so the BBC was more accurate). And 700 MW of power for 5 seconds is 3,500 megajoules of energy input, compared to 69 megajoules of output. That is about 1.97%, which is close to where the best fusion reactors are, so I think I got that right. However, that’s only counting the energy to run the reactor for the 5 seconds it was fusing. What about all the energy for starting up the process and everything else, soup to nuts?
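
To make that arithmetic explicit, here is a minimal sketch of the back-of-the-envelope calculation, assuming (per the reporting quoted above) roughly 700 MW of input power for the 5 second run and 69 MJ of total fusion output. These are the reported figures, not official JET accounting.

```python
# Back-of-the-envelope check of the JET numbers quoted above.
# Assumptions (from the reporting, not official figures):
#   - input power while running: ~700 MW
#   - run duration: 5 seconds
#   - total fusion energy output: 69 MJ

input_power_mw = 700          # megawatts drawn while the reactor runs
duration_s = 5                # seconds of sustained fusion
output_energy_mj = 69         # megajoules of fusion energy produced

# Energy = power x time, so 700 MW for 5 seconds is 3,500 MJ of input energy.
input_energy_mj = input_power_mw * duration_s

fraction_returned = output_energy_mj / input_energy_mj
print(f"Input energy:  {input_energy_mj} MJ")
print(f"Output energy: {output_energy_mj} MJ")
print(f"Fraction returned: {fraction_returned:.2%}")   # ~1.97%
```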

This is not close to a working fusion power plant. Some reporting says the scientists hope to double the efficiency with better superconducting magnets. That would be nice – but double is still nowhere close. We need two orders of magnitude, at least, just to break even. We probably need closer to three orders of magnitude for the whole thing to be worth it, cradle to grave. We have to create all that tritium too, remember. Then there is the inefficiency of converting the excess heat energy to electricity. That may be an order of magnitude right there.

I am not down on fusion. I think we should continue to research it. Once we can generate net energy through fusion reactors, that will likely be our best energy source forever – at least for the foreseeable future. It would take super advanced technology to eclipse it. So it’s worth doing the research. But just being realistic, I think we are looking at the energy of the 22nd century, and maybe the end of this one. Not the 2040s as some optimists predict. I hope to be proven wrong on this one. But either way, premature hype is likely to be counterproductive. This is a long term research and development project. It’s possible no one alive today will see a working fusion plant.

At least, for the existing fusion reactor concepts I think this is true. The exception is the cavitation method, which does not even try to sustain fusion. They are just looking for a “putt putt putt” of individual fusion events, each creating heat. Perhaps this, or some other radical new approach, will cross over the finish line much sooner than anticipated and make me look foolish (although happily so).

 

The post JET Fusion Experiment Sets New Record first appeared on NeuroLogica Blog.

Categories: Skeptic

Weaponized Pedantry and Reverse Gish Gallop

Tue, 02/06/2024 - 4:45am

Have you ever been in a discussion where the person with whom you disagree dismisses your position because you got some tiny detail wrong or didn’t know the tiny detail? This is a common debating technique. For example, opponents of gun safety regulations will often use the relative ignorance of proponents regarding gun culture and technical details about guns to argue that they therefore don’t know what they are talking about and their position is invalid. But, at the same time, GMO opponents will often base their arguments on a misunderstanding of the science of genetics and genetic engineering.

Dismissing an argument because of an irrelevant detail is a form of informal logical fallacy. Someone can be mistaken about a detail while still being correct about a more general conclusion. You don’t have to understand the physics of the photoelectric effect to conclude that solar power is a useful form of green energy.

There are also some details that are not irrelevant, but may not change an ultimate conclusion. If someone thinks that industrial release of CO2 is driving climate change, but does not understand the scientific literature on climate sensitivity, that doesn’t make them wrong. But understanding climate sensitivity is important to the climate change debate; it just happens to align with what proponents of anthropogenic global warming have concluded. In this case you need to understand what climate sensitivity is, and what the science says about it, in order to understand and counter some common arguments deniers use to argue against the science of climate change.

What these few examples show is a general feature of the informal logical fallacies – they are context dependent. Just because you can frame someone’s position as a logical fallacy does not make their argument wrong (thinking this is the case is the fallacy fallacy). What logical fallacy is using details to dismiss the bigger picture? I have heard this referred to as a “Reverse Gish Gallop”. I don’t use this term because I don’t think it captures the essence of the fallacy. I have used the term “weaponized pedantry” before and I think that is better.

It’s OK to be a little pedantic if the purpose is to be precise and accurate. That is consistent with good science and good scholarship. But such pedantry must be fair and in context. This requires a fair assessment of the implications of the detail. It is good to get the details right for their own sake, but some details don’t matter to a particular argument or position. There are a couple of ways to weaponize pedantry, not to advocate for genuinely good scholarship, but as a hit job against a position you don’t like.

One way is to simply be biased in your search for and exposure of small mistakes. If you are only looking for them on one side or in one direction of an argument, then that is not good scholarship. It’s searching for ammunition to use as a weapon. The other method is to imply, or sometimes even explicitly state, that an error in a detail calls into question or even invalidates the bigger picture, even when it doesn’t. Sometimes this could just be a non sequitur argument – you made a mistake in describing the uranium cycle, therefore your opinion on nuclear power is not correct. And sometimes this can be an ad hominem fallacy – you don’t know the difference between a clip and a magazine so you are not allowed to have an opinion on gun safety.

Given this complexity, what is a good approach to pedantry about details and accuracy? First, I will reiterate my position that having a discussion or even an “argument” should not be about winning. Winning is for debate club and the courtroom. Having a discussion should be about understanding the other person’s position, understanding your own position better, understanding the topic better, and coming to as much common ground as possible. This means identifying the factual claims and resolving any differences, hopefully with reliable sources. Then you need to examine the logic of every claim and statement, including your own, to see if it is valid. You may also need to identify any value judgements that are subjective, or any areas where the facts are unknown or ambiguous.

With this approach, knowledge of logical fallacies is a good way to police your own arguments and thinking on a topic, and a good way to resolve differences and come to common ground. But if wielded as a rhetorical weapon, you are almost certain to commit the fallacy fallacy, including weaponized pedantry.

Specifically with reference to this fallacy – you need to ask the question, does this detail affect the larger claim? It may be entirely irrelevant, or it may require only a tiny tweak, or it may be truly critical to the claim. If someone falsely thinks that Monsanto sued farmers solely for accidental contamination, that is not a tiny detail – that is core to one anti-GMO argument. Try to be as fair and neutral as possible in making that call, and then be honest about it (to yourself and anyone else involved in the discussion).

It’s OK to be that person who says, “Well, actually.” It’s OK to get the details right for the sake of getting the details right. We all should have a dedication to accuracy and precision. But it’s very easy to disguise biased advocacy as a dedication to accuracy when it isn’t.

The post Weaponized Pedantry and Reverse Gish Gallop first appeared on NeuroLogica Blog.

Categories: Skeptic

Did They Find Amelia Earhart’s Plane

Mon, 02/05/2024 - 4:25am

Is this sonar image, taken at 16,000 feet below the surface about 100 miles from Howland Island, that of a downed Lockheed Model 10-E Electra? Tony Romeo hopes it is. He spent $9 million to purchase an underwater drone, the Hugin 6000, then hired a crew and scoured 5,200 square miles in a 100-day search hoping to find exactly that. He was looking, of course, for the lost plane of Amelia Earhart. Has he found it? Let’s explore how we answer that question.

First some quick background: most people know Amelia Earhart was a famous (and much beloved) early female pilot, the first woman to fly solo across the Atlantic. She was engaged in a mission to become the first female pilot to circumnavigate the globe (with her navigator, Fred Noonan). She started off in Oakland, California, flying east, and made it all the way to Papua New Guinea. From there her plan was to fly to Howland Island, then Honolulu, and back to Oakland – so she had three legs of her journey left. However, she never made it to Howland Island. This is a small island in the middle of the Pacific Ocean, and navigating to it is an extreme challenge. The last communication from Earhart was that she was running low on fuel.

That was the last anyone heard from her. The primary assumption has always been that she never found Howland Island, and her plane ran out of fuel and crashed into the ocean. This happened in 1937. But people love mysteries, and there has been endless speculation about what may have happened to her. Did she go off course and arrive at the Marshall Islands, 1,000 miles away? Was she captured by the Japanese (remember, this was right before WWII)? Every now and then a tidbit of suggestive evidence crops up, but it always evaporates on close inspection. It’s all just wishful thinking and anomaly hunting.

There have also been serious attempts to find her plane. However, assuming she was off course, and that’s why they never made it to their target, there is potentially a huge area of the Pacific Ocean where her plane could have ended up. Romeo’s effort is the latest to look for her plane, and his approach was entirely reasonable – sonar scan the bottom of the ocean around Howland Island. He and his crew did this starting in September 2023. After the scanning mission was over, while going through the images, they found the image you can see above. Is this Earhart’s plane?

There are three possibilities to consider. One is that the image is not that of a plane at all, but just a random geological formation or something else. Remember that Romeo and his team pored through tons of data looking for a plane-like image. It’s not all that surprising that they found something. This could just be an example of the Face on Mars or the Martian Bigfoot – if you look through enough images looking for stuff, you will find it.

The second possibility is that the sonar image is that of a plane, just not Earhart’s Lockheed Electra. There are lots of known missing aircraft. But more importantly perhaps, how many unknown missing aircraft are there? How many planes were lost during WWII and unaccounted for? There could be private unregistered planes, even drug smugglers. And of course, the third possibility is that this is Amelia Earhart’s plane. How can we know?

First, we can make some inferences from the information we have. Is the image that of a plane? I think this is a coin toss. It is reasonably symmetrical, with things that could be wings, a fuselage, and a tail. But again, it’s just a fuzzy image. It could just be a ledge and a rock. Neither outcome would shock me.

If it is a plane, could this be Earhart’s plane? The one data point that is in favor of this conclusion is the location – 100 miles off Howland Island. That is within the scope of where we would expect to find her plane. But there are two big things going against it being the Lockheed Electra. First, the Electra had straight wings, while, if this is a plane, the wings appear to be swept back. If this image is accurate, then the answer is no. But it is possible that the plane was damaged by the crash. Perhaps the wings broke and were pushed back by the fall through the water.

Also, the Lockheed Electra was a twin engine plane, with one large engine on each wing. They are not apparent in this image, and they should be. So we also have to speculate that the engines were lost in the process of the plane crashing and sinking, or that the image is too distorted to see them.

As you can see, speculation from the existing evidence is pretty thin. We need more data. What we have with the sonar image is not confirmatory evidence, just a clue that needs follow up. We need better images, hopefully with sufficient detail to provide forensic evidence. This will require a deep sea mission with lights and cameras, like the kind used to explore the wreckage of the Titanic. With such images it should be easy to tell if this is a Lockheed Electra. If it is, then it is almost certainly Earhart’s plane. But also, we may be able to read the registration numbers on the side of the plane, and that would be definitive.

Romeo is in the process of planning a follow up mission to investigate this sonar image. Unless and until this happens, we will not be able to say with any confidence if this is or is not Earhart’s plane.

The post Did They Find Amelia Earhart’s Plane first appeared on NeuroLogica Blog.

Categories: Skeptic

How To Prove Prevention Works

Fri, 02/02/2024 - 4:55am

Homer: Not a bear in sight. The Bear Patrol must be working like a charm.
Lisa: That’s specious reasoning, Dad.
Homer: Thank you, dear.
Lisa: By your logic I could claim that this rock keeps tigers away.
Homer: Oh, how does it work?
Lisa: It doesn’t work.
Homer: Uh-huh.
Lisa: It’s just a stupid rock.
Homer: Uh-huh.
Lisa: But I don’t see any tigers around, do you?
[Homer thinks of this, then pulls out some money]
Homer: Lisa, I want to buy your rock.
[Lisa refuses at first, then takes the exchange]

 

This memorable exchange from The Simpsons is one of the reasons the fictional character Lisa Simpson is a bit of a skeptical icon. From time to time on the show she does a decent job of defending science and reason, even toting a copy of “Jr. Skeptic” magazine (which was fictional at the time, then later created as a companion to Skeptic magazine).

What the exchange highlights is that it can be difficult to demonstrate (let alone “prove”) that a preventive measure has worked. This is because we cannot know for sure what the alternate history or counterfactual would have been. If I take a measure to prevent contracting COVID and then I don’t get COVID, did the measure work, or was I not going to get COVID anyway? Historically, the time this happened on a big scale was Y2K – a computer glitch set to go off when the year changed to 2000. Most computer code encoded the year as only two digits, assuming the first two digits were 19, so 1995 was encoded as 95. So when the year changed to 2000, computers around the world would think it was 1900 and chaos would ensue. Between $300 billion and $500 billion were spent worldwide to fix this bug by upgrading millions of lines of code to a four-digit year stamp.
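
To make the bug concrete, here is a minimal sketch of the kind of two-digit-year logic that caused the problem. The function and values are hypothetical, invented for illustration; the failure mode is the general one described above, not any specific system’s code.

```python
# Toy illustration of the Y2K bug: a record stores only the last two
# digits of the year, and the code assumes the century is always 19xx.

def legacy_age_in_years(birth_year_2digit, current_year_2digit):
    # Pre-Y2K style logic: prepend "19" to both two-digit years.
    birth_year = 1900 + birth_year_2digit
    current_year = 1900 + current_year_2digit
    return current_year - birth_year

# Works fine in 1999...
print(legacy_age_in_years(65, 99))   # 34 - correct

# ...but in 2000 the two-digit year rolls over to 00, which the code
# interprets as 1900, producing a nonsense negative age.
print(legacy_age_in_years(65, 0))    # -65 - the Y2K failure mode
```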

Did it work? Well, the predicted disasters did not happen, so from that perspective it did. But we can’t know for sure what would have happened if we had not fixed the code. This has led to speculation and even criticism about wasting all that time and money fixing a non-problem. There is good reason to think that the preventive measures worked, however.

At the other end of the spectrum, often doomsday cults, predicting that the world will end in some way on a specific date, have to deal with the day after. One strategy is to say that the faith of the group prevented doomsday (the tiger-rock strategy). They can now celebrate and start recruiting to prevent the next doomsday.

The question is – how do we know when our preventive efforts have been successful, and when they were not needed? In either scenario above you can use the absence of anything bad happening as evidence either that the problem was fake all along or that the preventive measures worked. The absence of disaster fits both narratives. The problem can get very complicated. When preventive measures are taken and negative outcomes happen anyway, can we argue that it would have been worse? Did the school closures during COVID prevent any deaths? What would have happened if we had tried to keep schools open? The absence of a definitive answer means that anyone can use the history to justify their ideological narrative.

How do we determine if a preventive measure works? There are several valid methods, mostly involving statistics. There is no definitive proof (you can’t run history back again to see what happens), but you can show convincing correlation. Ideally the correlation will be repeatable, with at least some control of confounding variables. For public health measures, for example, we can compare data from either a time or a place without the preventive measures to those with the preventive measures. This can vary by state, province, country, region, demographic population, or over historic time. In each country where the measles vaccine is rolled out, for example, there is an immediate sharp decline in the incidence of measles. And if vaccine compliance decreases, there is a rise in measles. If this happens often enough, the statistical data can be incredibly robust.
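
As a toy illustration of that kind of reasoning, the sketch below compares incidence before and after a rollout across several regions. The numbers are invented purely to show the shape of the analysis; they are not real measles data.

```python
# Toy example of the correlational logic described above.
# The incidence numbers are invented for illustration only.

# Cases per 100,000 people, (before, after) a hypothetical vaccine rollout.
incidence = {
    "Region A": (310, 12),
    "Region B": (275, 9),
    "Region C": (402, 20),
    "Region D": (190, 7),
}

for region, (before, after) in incidence.items():
    drop = 100 * (before - after) / before
    print(f"{region}: {before} -> {after} cases per 100k ({drop:.0f}% drop)")

# A sharp drop that repeats with each rollout (and reverses when coverage
# falls) is the pattern that makes the correlation convincing.
```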

This relates to a commonly invoked (but often misunderstood) logical fallacy, the confusion of correlation with causation. Often people will say “correlation does not equal causation.” This is true but can be misleading. Correlation is not necessarily due to a specific causation, but it can be. Over-applying this principle is a way to dismiss correlational data as useless – but it isn’t. The way scientists use correlation is to look for multiple correlations and triangulate to the one causation that is consistent with all of them. Smoking correlates with an increased risk of lung cancer. Duration and intensity also correlate, as does filtered vs unfiltered, and quitting correlates with reduced risk over time, back to baseline. There are multiple correlations that only make sense in total if smoking causes lung cancer. Interestingly, the tobacco industry argued for decades that this data does not prove smoking causes cancer, because it was just correlation.

Another potential line of evidence is simulations. We cannot rerun history, but we can simulate it to some degree. Our ability to do so is growing fast, as computers get more powerful and AI technology advances. So we can run the counterfactual and ask, what would have happened if we had not taken a specific measure. But of course, these conclusions are only as good as the simulations themselves, which are only as good as our models. Are we accounting for all variables? This, of course, is at the center of the global climate change debate. We can test our models both against historical data (would they have predicted what has already happened) and future data (did they predict what happened after the prediction). It turns out, the climate models have been very accurate, and are getting more precise. So we should probably pay attention to what they say is likely to happen with future release of greenhouse gases.

But I predict that if by some miracle we are able to prevent the worst of climate change through a massive effort of decarbonizing our industry, future deniers will argue that climate change was a hoax all along, because it didn’t happen. It will be Y2K all over again but on a more massive scale. That’s a problem I am willing to have, however.

Another way to evaluate claims for prevention is plausibility. The tiger-rock example that Lisa gives is brilliant for two reasons. First, the rock is clearly “just a stupid rock” that she randomly picked up off the ground. Second, there is no reason to think that there are any tigers anywhere near where they are. For any prevention claim, the empirical data from correlation or simulations has to be put into the context of plausibility. Is there a clear mechanism? The lower the plausibility (or prior probability, in statistical terms), the greater the need for empirical evidence to show probable causation.
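
To put that last point in rough Bayesian terms, here is a minimal sketch. The priors and likelihoods are invented for illustration; the point is only that the same weak evidence (“nothing bad happened”) barely moves a claim that starts with a very low prior probability.

```python
# Rough Bayesian illustration of why low prior plausibility demands
# stronger evidence. All numbers are made up for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' theorem: P(claim is true | evidence)
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# The evidence is "no disaster happened", which is fairly likely
# whether or not the preventive measure actually works.
likelihood_if_works = 0.95
likelihood_if_useless = 0.90

# Plausible mechanism (e.g. a well-understood software fix), prior 0.5:
print(f"{posterior(0.5, likelihood_if_works, likelihood_if_useless):.3f}")    # ~0.514

# Implausible mechanism (e.g. a tiger-repelling rock), prior 0.001:
print(f"{posterior(0.001, likelihood_if_works, likelihood_if_useless):.4f}")  # ~0.0011
```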

For Y2K, there was a clear and fully understood mechanism at play. They could also easily simulate what would happen, and computer systems did crash. For global climate change, there is a fairly mature science with thousands of papers published over decades. We have a pretty good handle on the greenhouse effect. We don’t know everything (we never do) and there are error-bars on our knowledge (climate sensitivity, for example) but we also don’t know nothing. Carbon dioxide does trap heat, and more CO2 in the atmosphere does increase the equilibrium point of the total heat in the Earth system. There is no serious debate about this, only about the precise relationship. Regarding smoking, we have a lot of basic science data showing how the carcinogens in tobacco smoke can cause cancer, so it’s no surprise that it does.

But if the putative mechanism is magic, then a simple unidirectional correlation would not be terribly convincing, and certainly not the absence of a single historical event.

Of course there are many complicated examples about which sincere experts can disagree, but it is good to at least understand the relevant logic.

The post How To Prove Prevention Works first appeared on NeuroLogica Blog.

Categories: Skeptic

Some Future Tech Possibilities

Thu, 02/01/2024 - 5:10am

It’s difficult to pick winners and losers in the future tech game. In reality you just have to see what happens when you try out a new technology in the real world with actual people. Many technologies that look good on paper run into logistical problems, have difficulty scaling, fall victim to economics, or discover that people just don’t like using the tech. Meanwhile, surprise hits become indispensable or can transform the way we live our lives.

Here are a few technologies from recent news that may or may not be part of our future.

Recharging Roads

Imagine recharging your electric vehicle wirelessly just by driving over a road. Sounds great, but is it practical and scalable? Detroit is running an experiment to help find out. On a 400-meter stretch of downtown road they installed induction cables under the ground and connected them to the city grid. EVs that have the $1,000 device attached to their battery can charge up while driving over this stretch of road.

The technology itself is proven, and is already common for recharging smartphones. It’s inductive charging, using a magnetic field to induce a current which recharges a battery. Is this a practical approach to range anxiety? Right now this technology costs $2 million per mile. Having any significant infrastructure of these roads would be incredibly costly, and it’s not clear the benefit is worth it. How much are they going to charge the EV? What is the efficiency? Will drivers fork out $1000 for minimal benefit?

I think this approach has a low probability of working. Where I think there might be a role, however, is in long stretches of interstate highway. This would still be an expensive option, but a 100-mile stretch of highway fitted with these coils would cost $200 million. Hopefully with mass production and advances the cost will come down, so maybe it will be only $100 million. That is not a bank breaker for a Federal infrastructure project. This could significantly extend the range of EVs on long trips along such highways. Busy corridors, like I-95, could potentially benefit. You could also put the coils under parking spaces at rest stations.
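
Here is the back-of-the-envelope version of that cost arithmetic, using only the figures quoted above (the $2 million per mile demonstration cost and a hoped-for halving with mass production); these are rough assumptions, not project estimates.

```python
# Rough cost arithmetic for an inductive-charging highway corridor,
# using the figures quoted above. Back-of-the-envelope only.

cost_per_mile_today = 2_000_000      # dollars, current demonstration cost
corridor_miles = 100                 # e.g. a busy stretch of interstate

cost_today = cost_per_mile_today * corridor_miles
print(f"At today's cost:       ${cost_today:,}")      # $200,000,000

# If mass production and advances roughly halve the per-mile cost:
cost_optimistic = cost_today // 2
print(f"With cost reductions:  ${cost_optimistic:,}")  # $100,000,000
```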

Will this be better and more efficient than just plugging in? Probably not. I give this a low probability, but it’s possible there may be some limited applications.

 

The Virtual Office

I like VR, and still use it for occasional gaming. I don’t use an app just because it’s VR, but some VR games and apps are great. The technology, however, is not yet fully mature. Companies have tried to promote a virtual office in the past. Again, it looks good on paper. Imagine having your office be a virtual space that you can configure any way you want, with everything you need right in front of you.

But these efforts all failed, because people simply don’t like wearing heavy goggles on their face for hours at a time. I get this – I can only play VR games for so long at once, then I need to stop. It can be exhausting (that is actually a feature for me, not a bug, to get off my chair, and at least stand up and move around). But for an 8 hour work day – no way.

Ideas that look good on paper often don’t die completely; they keep coming back. In this case, I think we will need to keep taking a look at this technology as it evolves. A recent spate of companies are doing just that, trying again for the virtual office. Now they are calling it “extended reality” or XR, which involves a combination of augmented reality and virtual reality. There are some real advantages – training is more effective in XR (than either in person or online). It is also cost effective to have remote, rather than in-person, meetings. It allows people to work more effectively from home, which also has potentially huge efficiency gains.

Still I think this is essentially a hardware problem. The goggles are still bulky and tiring. The experience is still limited by motion sickness. At some point, however, we will get to a critical point where the hardware is good enough for regular extended use, and then adoption may explode.

Apple is coming out with their long awaited entry – the Vision Pro is being released tomorrow, Feb 2. It still looks pretty bulky, but does look like a solid incremental advance. I would like the opportunity to test it out. If this does not turn out to be the killer tech, I think it’s inevitable that we will get there eventually.

 

AI Generated News Anchors

We have been talking about this for years now – when will AI generated characters get good enough to replace actors completely? Now we are starting to see AI generated news anchors. That makes sense, and is likely much easier than an AI character in a dramatic role in a movie. A TV anchor is often just a talking head (while on camera – I’m not saying they are not also sometimes serious journalists). But this way you completely separate the journalism from the good looking talking head part of TV news. The journalism is all done behind the scenes, and the attractive anchor is AI generated.

All they have to do is read the text, with a fairly narrow range of emotional expression. It’s actually perfect, if you think about it. I predict this will rapidly become a thing. Probably the biggest limiting factor is going to be protests, contracts, and other legal stuff. But the tech itself is ready, and perhaps perfectly suited to this application.

 

Those are just a few things in tech news that caught my attention this week. This will be a fun post to look back on in a few years to see how I did.

The post Some Future Tech Possibilities first appeared on NeuroLogica Blog.

Categories: Skeptic

Neuralink Implants Chip in Human

Tue, 01/30/2024 - 2:18pm

Elon Musk has announced that his company, Neuralink, has implanted their first wireless computer chip into a human. The chip, which they plan on calling Telepathy (not sure how I feel about that), connects with 64 thin, hair-like electrode threads, is battery powered, and can be recharged remotely. This is exciting news, but of course needs to be put into context. First, let’s get the Musk thing out of the way.

Because this is Elon Musk the achievement gets more attention than it probably deserves, but also more criticism. It gets wrapped up in the Musk debate – is he a genuine innovator, or just an exploiter and showman? I think the truth is a little bit of both. Yes, the technologies he is famous for advancing (EVs, reusable rockets, digging tunnels, and now brain-machine interfaces) all existed before him (at least potentially) and were advancing without him. But he did more than just gobble up existing companies or people and slap his brand on them (as his harshest critics claim). Especially with Tesla and SpaceX, he invested his own fortune and provided a specific vision, which pushed these companies through to successful products and very likely advanced their respective industries considerably.

What about Neuralink and BMI (brain-machine interface) technology? I think Musk’s impact in this industry is much less than with EVs and reusable rockets. But he is increasing the profile of the industry, providing funding for research and development, and perhaps increasing the competition. In the end I think Neuralink will have a more modest, but perhaps not negligible, impact on bringing BMI applications to the world. I think it will end up being a net positive, and anything that accelerates this technology is a good thing.

So – how big a deal is this one advance, implanting a wireless chip into a human brain? Not very, at least not yet. Just the mere fact of implanting a chip is not a big deal. The real test is how long it lasts, how long it maintains its function, and how well it functions – none of which has yet been demonstrated. Also, other companies (although only a few) are ahead of the game already.

Here is a list of five companies (in addition to Neuralink) working on BMI technology (and I have written about many of them before). Synchron is taking a different approach, with their stentrodes. Instead of implanting in the brain, which is very invasive, they place their electrodes inside veins inside the brain, which gets them very close to brain tissue, and critically inside the skull. They completed their first human implant in 2022.

Blackrock Neurotech has a similar computer chip with an array of tiny electrodes that gets implanted in the brain. They are farther along than Neuralink and are the favorite to have a product available for use outside a research lab setting. Clearpoint Neuro is working with Blackrock to develop a robot to automatically implant their chips with the precision necessary to optimize function. They are also developing their own applications for BMI, as well as implants for drug delivery to brain tissue.

Braingate has also successfully implanted arrays of electrodes into humans that allow them to communicate wirelessly with external devices, controlling computer interfaces or robotic limbs.

These companies are all focusing on implanted devices. There is also research into using scalp surface electrodes for a BMI connection. The advantage here is that nothing has to be implanted. The disadvantage is that the quality of the signal is much less. Which option is better depends on the application. Neurable is working on external BMI that you wear like headphones. They envision this will be used like a virtual reality application, but with neuro-reality (VR through a neurological connection, rather than goggles).

All of these advances are exciting, and I have been following them closely and reporting on them over the years. The Neuralink announcement adds them to the list of companies who have implanted a BMI chip into a human, a very exclusive club, but does not advance the cutting edge beyond where it already is.

What has me the most excited recently, actually, is advances in AI. What we need for fairly mature BMI technology, the kind that can allow a paralyzed person to communicate effectively or control robotic limbs, is an implant (surface electrodes are not enough for these applications) that has many connections, is durable, self-powered (or easily recharged), does not damage brain tissue, and maintains a consistent connection (does not move or migrate). We keep inching closer to this goal. The stentrode may be a great intermediary step, good enough for decades until we develop really good implantable electrodes, which will almost certainly have to be soft and flexible.

But as we slowly and incrementally advance toward this goal (basically the hardware) we also have to keep an eye on the software. I had thought that this basically peaked and was more than advanced enough for what it needed to do – translate brain signals into what the person is thinking with enough fidelity to provide communication and control. But recent AI applications are showing how much more powerful this software can be. This is what AI is good at – taking lots of data and making sense of it. The same way it can make a deep fake of someone’s voice, or recreate a work of art in the style of a specific artist, it can take the jumble of blurry signals from the brain and assemble it into coherent speech (at least that’s the goal). This essentially means we can do much more with the hardware we have.

This is the kind of thing that might make Stentrode the leader of the pack – they sacrifice a little resolution for being much safer and less invasive. But that sacrifice may be more than compensated for with a good AI interface.

The bottom line is that this industry is advancing nicely. We are at the cusp of going from the laboratory to early medical applications. From there we will go to more advanced medical applications, and then eventually to consumer applications. It should be exciting to watch.

 

The post Neuralink Implants Chip in Human first appeared on NeuroLogica Blog.

Categories: Skeptic

Controlling the Narrative with AI

Mon, 01/29/2024 - 5:08am

There is an ongoing battle in our society to control the narrative, to influence the flow of information, and thereby move the needle on what people think and how they behave. This is nothing new, but the mechanisms for controlling the narrative are evolving as our communication technology evolves. The latest addition to this technology is the large language model AIs.

“The media”, of course, has been a large focus of this competition. On the right there are constant complaints about the “liberal bias” of the media, and on the left there are complaints about the rise of right-wing media, which they feel is biased and radicalizing. The culture wars focus mainly on schools, because schools teach not only facts and knowledge but convey the values of our society. The left views DEI (diversity, equity, and inclusion) initiatives as promoting social justice, while the right views them as brainwashing the next generation with liberal propaganda. This is an oversimplification, but it is the basic dynamic. Even industry has been targeted by the culture wars – which narratives are specific companies supporting? Is Disney pro-gay? Which companies fly BLM or LGBTQ flags?

But increasingly “the narrative” (the overall cultural conversation) is not being controlled by the media, educational system, or marketing campaigns. It’s being controlled by social media. This is why, when the power of social media started to become apparent, many people panicked. Suddenly it seemed we had ceded control of the narrative to a few tech companies, who had apparently decided that destroying democracy was a price they were prepared to pay for maximizing their clicks. We now live in a world where YouTube algorithms can destroy lives and relationships.

We are not yet done panicking about the influence of social media and the tech giants who control it, and already another player has crashed the party – artificial intelligence, chatbots, and the large language models that run them. This is an extension of the social media infrastructure, but it is enough of a technological advance to be disruptive. Here is the concern – by shaping the flow of information to the masses, social media platforms and AI can have a significant effect on the narrative, enough to create populist movements, alter the outcome of elections, or make or destroy brands.

It seems likely that we will increasingly be giving control of the flow of information to AI. Now, instead of searching on Google for information, you can have a conversation with ChatGPT. Behind the scenes it’s still searching the web for information, but the interface is radically different. I have documented and discussed here many times how easy human brains are to fool. We have evolved circuits in our brain that construct our perception of reality and make certain judgements about how to do so. One subset of these circuits is dedicated to determining whether something out there in the world has agency (is it a person or just a thing), and once the agency-algorithm determines that something is an agent, that determination connects to the emotional centers of our brain. We then have feelings toward that apparent agent and treat them as if they were a person. This extends to cartoons, digital entities, and even abstract shapes. Physical form, or the lack thereof, does not seem to matter because it is not part of the agency algorithm.

It is increasingly well established that people respond to an even half-way decent chatbot as if that chatbot were a person. So now when we interface with “the internet”, looking for information, we may not just be searching for websites but talking with an entity – an entity that can sound friendly, understanding, and authoritative. Even though we may know completely that this is just an AI, we emotionally fall for it. It’s just how our brains are wired.

A recent study demonstrates the subtle power that such chatbots can have. The researchers asked subjects to talk with ChatGPT-3 about Black Lives Matter (BLM) and climate change, but gave them no other instructions. They also surveyed the subjects’ attitudes toward these topics before and after the conversation. Those who scored negatively toward BLM or climate change ranked their experience half a point lower on a five-point scale (which is significant), so they were unhappy when the AI told them things they did not agree with. But, more importantly, after the interaction their attitudes moved 6% in the direction of accepting climate change and the BLM movement. We don’t know from this study if this effect is enduring, or if it is enough to affect behavior, but at least temporarily ChatGPT did move the needle a little. This is a proof of concept.

So the question is – who controls these large language model AI chatbots, who we are rapidly making the gatekeepers to information on the internet?

One approach is to make it so that no one controls them (as much as possible). Through transparency, regulation, and voluntary standards, the large tech companies can try to keep their thumbs off the scale as much as possible, and essentially “let the chips fall where they may.” But this is a problem, and early indications are that this approach likely won’t work. The problem is that even if they are trying not to influence the behavior of these AIs, they can’t help but have a large influence on them through the choices they make about how to program and train them. There is no neutral approach. Every decision has a large influence, and they have to make choices. What do they prioritize?

If, for example, they prioritize the user experience, well, as we see in this study, one way to improve the user experience is to tell people what they want to hear, rather than what the AI determines is the truth. How much does the AI caveat what it says? How authoritative should it sound? How thoroughly should it source whatever information it gives? And how does it weight the different sources it is using? Further, we know that these AI applications can “hallucinate” – just make up fake information. How do we stop that, and to what extent (and how) do we build fact-checking processes into the AI?

These are all difficult and challenging questions, even for a well-meaning tech company acting in good faith. But of course, there are powerful actors out there who would not act in good faith. There is already deep concern about the rise of Tik Tok, and the ability of China to control the flow of information through that app to favor pro-China news and opinion. How long will it be before ChatGPT is accused of having a liberal bias, and ConservaGPT is created to combat that (just like the Conservapedia, or Truth Social)?

The narrative wars go on, but they seem to be increasingly concentrated in fewer and fewer choke points of information. That, I think, is the real risk. And the best solution may be an anti-trust approach – make sure there are lots of options out there, so no one or few options dominate.

The post Controlling the Narrative with AI first appeared on NeuroLogica Blog.

Categories: Skeptic

How Humans Can Adapt to Space

Fri, 01/26/2024 - 5:11am

My recent article on settling Mars has generated a lot of discussion, some of it around the basic concept of how difficult it is for humans to live anywhere but a thin envelope of air hugging the surface of the Earth. This is undoubtedly true, as I have discussed before – we evolved to be finely adapted to Earth. We are only comfortable in a fairly narrow range of temperature. We need a fairly high percentage of oxygen (Earth’s is 21%) at sufficient pressure, and our atmosphere can’t have too much of other gases that might cause us problems. We are protected from most radiation that bathes the universe. Our skin and eyes have adapted to the light of our sun, both in frequency and intensity. And we are adapted to Earth’s surface gravity, with anything significantly more or less causing problems for our biology.

Space itself is an extremely unforgiving environment requiring a total human habitat, with the main current technological challenges being artificial gravity and radiation protection. But even on other worlds it is extremely unlikely that all of the variables will be within the range of human survival, let alone comfort and thriving. Mars, for example, has too thin an atmosphere with no oxygen, no magnetic field to protect from radiation, it’s too cold, and its surface gravity is too low. It’s better than the cold vacuum of space, but not by much. You still need essentially a total habitat, and we will probably have to go underground for radiation protection. Gravity is 38% that of Earth’s, which is probably not ideal for human biology. In space, with microgravity, at least you can theoretically use rotation to simulate gravity.

In addition to adapting off-Earth environments to humans, is it feasible to adapt humans to other environments? Let me start with some far-future options then finish with what is likely to be the nearest-future options.

Perhaps the optimal way to most fully adapt humans to alien environments is to completely replace the human body with one that is adapted. This could be a robot body, a genetically engineered biological one, or a cyborg combination. How does one replace their body? One option might be taking virtual control of the “brain” of the avatar (yes, like in the movie, Avatar). This could be through a neural link, or even just through virtual reality. This way you can remain safely ensconced in a protective environment, while your Avatar runs around a world that would instantly kill you. We are closer to having robotic avatars than biological ones, and to a limited degree we are already doing this through virtual presence technology.

But this approach has a severe limitation – you have to be relatively close to your Avatar. If, for example, you wanted to explore the Martian surface with an avatar, you would need to be in Mars orbit or on the surface of Mars. You could not be on Earth, because the delay in communication would be too great. So essentially this approach is limited by the speed of light.
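
To put rough numbers on that light-speed limit, here is a minimal sketch of the one-way signal delay to Mars; the distances are approximate orbital figures, not mission-specific values.

```python
# One-way light delay between Earth and Mars, to show why a real-time
# avatar cannot be driven from Earth. Distances are approximate.

SPEED_OF_LIGHT_KM_S = 299_792

distances_km = {
    "Mars at closest approach": 55_000_000,
    "Mars at average distance": 225_000_000,
    "Mars at farthest":         400_000_000,
}

for label, d in distances_km.items():
    delay_min = d / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{delay_min:.0f} minutes one way")

# Roughly 3 to 22 minutes each way - far too long for real-time control.
```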

You could also “upload” your mind into the Avatar, so that real time communication is not required. I put “upload” in quotes, because in reality you would be copying the structure and function of your brain. The avatar would not be you, it would be a mental copy of you operating the avatar (again, whether machine or biological). That copy would feel like it is you, and so that would be a way for “you” to explore a hostile environment, but it would not be the original you. However, it may also be possible, once the exploration has concluded, to copy the acquired memories back to you. It may also be possible to do this as a streaming function. In this case the distance does not matter as much, because you have a local copy with real time interaction, while you are receiving the feed in a constant stream, just delayed by the communication time. Because the avatar is a copy of you, the original you would not need to send instructions, only receive the feed. So you could be safely on Earth while your mental twin avatar is running around on Mars.

A more advanced version of this is similar to the series Altered Carbon. In this hypothetical future people can have their minds transferred (again, copied) to a “stack”, which is essentially a computer. The stack, which is now you, operates your body, which is called your “sleeve”. This means, however, that you can change sleeves by pulling out your stack and plugging it into a different sleeve. Such a sleeve could be genetically engineered for a specific environment, or again it could be a robot. This envisions a future in which humans are really digital information that can inhabit biological, robotic, or virtual entities.

So far these options are pretty far in the future. The closest would be using virtual reality to control a robot, which is currently very limited, but I can see this being fairly robust by the time we could, for example, get to Mars. Another approach, which is also fairly near term (at least compared to the other options), is to use genetic engineering, medical interventions, and cyborg implants to enhance our existing bodies. This does not involve any avatars or neural transfer, just making our existing bodies better able to handle harsh environments.

For existing adults, genetic engineering options are likely limited, but could still be helpful. For example, inserting a gene that produces a protein derived from tardigrades could protect our DNA from radiation damage. We could also adapt our skin to block out more radiation and be resistant to UV damage. We could adapt our bones and muscles to different surface gravities. We may even find ways to adapt to microgravity, allowing our bodies to better handle fluid shifts in the absence of gravity.

For adults, using medical interventions, such as drugs, is another option. Drugs could theoretically compensate for lower oxygen tension, radiation damage, altered cardiac function, and other physiological responses to alien environments, or neutralize toxins. Cyborg implants are yet another option, reinforcing our bones, enhancing cardiac function, shielding light or radiation, or adapting to low pressure.

But we could more profoundly adapt humans to alien environments with germ-line genetic engineering – altering the genes that control development from an embryo. We could then make profound alterations to the anatomy and physiology of humans. This would create, in essence, a subspecies of humans adapted to a specific environment – Homo martianus or Homo lunus. Then we could theoretically include extreme adaptations to temperature, air pressure, oxygen tension, radiation exposure, and surface gravity. These subspecies would not be adapted to Earth, and may find Earth as hostile as we find Mars. They would be an offshoot of humanity.

Even the nearest of these technologies will take a long time to develop. For now we need to carry our Earth environment with us, even if it is within the confines of a spacesuit. But it seems likely we will find ways to adapt ourselves to space to some degree.

The post How Humans Can Adapt to Space first appeared on NeuroLogica Blog.

Categories: Skeptic

DNA Directed Assembly of Nanomaterials

Thu, 01/25/2024 - 4:54am

Arguably the type of advance that has the greatest impact on technology is material science. Technology can advance by doing more with the materials we have, but new materials can change the game entirely. It is no coincidence that we mark different technological ages by the dominant material used, such as the bronze age and iron age. But how do we invent new materials?

Historically new materials were mostly discovered, not invented. Or we discovered techniques that allowed us to use new materials. Metallurgy, for example, was largely about creating a fire hot enough to smelt different metals. Sometimes we literally discovered new elements, like aluminum or tungsten, with desirable properties. We also figured out how to make alloys, combining different elements to create a new material with unique or improved properties. Adding tin to copper made a much stronger and more durable metal, bronze. While the hunt for new usable elements is basically over, there are so many possible combinations that researching new alloys is still a viable way to find new materials. In fact a recent class of materials known as “superalloys” have incredible properties, such as extreme heat resistance.

If there are no new elements (other than really big and therefore unstable artificial elements), and we already have a mature science of making alloys, what’s next? There are also chemically based materials, such as polymers, resins, and composites, that can have excellent properties, including the ability to be manufactured easily. Plastics clearly had a dramatic effect on our technology, and some of the strongest and lightest materials we have are carbon composites. But again it feels like we have already picked the low-hanging fruit here. We still need new better materials.

It seems like the new frontier of material science is nanostructured material. Now it’s not only about the elements a material is made from, but how the atoms of that material are arranged on the nanoscale. We are just at the beginning of this technology. This approach has yielded what we call metamaterials – substances with properties determined by their structure, not just their composition. Some metamaterials can accomplish feats previously thought theoretically impossible, like focusing light beyond the diffraction limit. Another class of structured material is two-dimensional material, such as graphene.

The challenge of nanostructured materials, however, is manufacturing them with high quality and high output. It’s one thing to use a precise technique in the lab as a proof of concept, but unless we can mass produce such material they will benefit only the highest end users. This is still great for institutions like NASA, but we probably won’t be seeing such materials on the desktop or in the home.

This brings us to the topic of today’s post – using DNA to direct the assembly of nanomaterials. This is already in use, and has been for about a decade, but a recent paper highlights some advances in this technique: Three-dimensional nanoscale metal, metal oxide, and semiconductor frameworks through DNA-programmable assembly and templating.

There are a few techniques being used here. DNA is a nanoscale molecule that essentially evolved to direct the assembly of proteins. That same process is not being used here; rather, the programmable structure of DNA means we can exploit it for other purposes. The first step in the process being outlined here is to use DNA to direct the assembly of a lattice out of inorganic material. They make the analogy that the lattice is like the frame of a house. It provides the basic structure, but then you install specific structures (like copper pipes for water, and insulation) to provide specific functionality.

So they then use two different methods to infiltrate the lattice with specific materials to provide the desired properties – semiconductors, insulators, magnetic conduction, etc. One method is vapor-phase infiltration, which introduces the desired elements as a gas that can penetrate deeply into the lattice structure. The other is liquid-phase infiltration, which is better at depositing substances on the surface of the lattice.

These combinations of methods address some of the challenges of DNA-directed assembly. First, the process is highly programmable. This is critical for allowing the production of a variety of 3D nanostructured materials with differing properties. Second, the process takes advantage of self-assembly, which is another concept critical to nanostructured materials. When you get down to the 30 nm scale, you can’t really place individual atoms or molecules in the desired locations. You need a manufacturing method that causes the molecules to automatically go where they are supposed to – to self-assemble. This is what happens with infiltration of the lattice.

The researchers also hope to develop a method that can work with a variety of materials to produce a range of desirable structures in a process that can be scaled up to manufacturing levels. They demonstrate at least the first two properties here, and show the potential for mass production, but of course that has yet to be actually demonstrated. They worked with a variety of materials, including: ” zinc, aluminum, copper, molybdenum, tungsten, indium, tin, and platinum, and composites such as aluminum-doped zinc oxide, indium tin oxide, and platinum/aluminum-doped zinc oxide.”

I don’t know if we are quite there yet, but this seems like a big step toward the ultimate goal of mass producing specific 3D nanostructured inorganic materials that we can program to have a range of desirable properties. One day the computer chips in your smartphone or desktop may come off an assembly line using a process similar to the one outlined in this paper. Or this may allow for new applications that are not even possible today.

The post DNA Directed Assembly of Nanomaterials first appeared on NeuroLogica Blog.

Categories: Skeptic

Microbes Aboard the ISS

Tue, 01/23/2024 - 5:00am

As I have written many times, including in yesterday’s post, people occupying space is hard. The environment of space, or really anywhere not on Earth, is harsh and unforgiving. One of the issues, for example, rarely addressed in science fiction or even discussions of space travel, is radiation. We don’t really have a solution to deal with radiation exposure outside the protective atmosphere and magnetic field of Earth.

There are other challenges, however, that do not involve space itself but just the fact that people living off Earth will have to be in an enclosed environment. Whether this is a space station or habitat on the Moon or Mars, people will be living in a relatively small finite physical space. These spaces will be enclosed environments – no opening a window to let some fresh air in. Our best experience so far with this type of environment is the International Space Station (ISS). By all accounts, the ISS smells terrible. It is a combination of antiseptic, body odor, sweat, and basically 22 years of funk.

Perhaps even worse, the ISS is colonized with numerous pathogenic bacteria and different types of fungus. The bacteria are mainly human-associated – the kinds of critters that live on and in humans. According to NASA:

The researchers found that microbes on the ISS were mostly human-associated. The most prominent bacteria were Staphylococcus (26% of total isolates), Pantoea (23%) and Bacillus (11%). They included organisms that are considered opportunistic pathogens on Earth, such as Staphylococcus aureus (10% of total isolates identified), which is commonly found on the skin and in the nasal passage, and Enterobacter, which is associated with the human gastrointestinal tract.

This is similar to what one might find in a gym or crowded office space, but worse. It is something I have often considered – when establishing a new environment off Earth, what will the microbiota look like? On the one hand, establishing a new base is an opportunity to avoid many infectious organisms. Strict quarantine procedures could create a settlement without flu viruses, COVID, HIV, or many of the other germs that plague humans. I can imagine strict medical examinations and isolation prior to gaining access to such a community. But can such efforts to create an infection-free settlement succeed?

Human-associated organisms, however, are unavoidable. We are colonized with bacteria, most of which are benign, but some of which are opportunistic pathogens. We live with them, but they will infect us if given the chance. There are also viruses that many of us harbor in a dormant state but that can become reactivated, such as chicken pox. It would be nearly impossible to find people free of all such organisms. Also – in such an environment, would the population become vulnerable to infection because their immune systems would weaken in the absence of a regular workout? (The answer is almost certainly yes.) And would this mean they would be set up for potentially catastrophic disease outbreaks when an opportunistic bug strikes?

In the end it is probably impossible to make an infection-free society. The best we can do is keep out the worst bugs, like HIV, but we will likely never be free of the common cold, and we will always be living with bacteria.

There is also another issue – food contamination. There has been a research program aboard the ISS to grow food on board, like lettuce, as a supplemental source of fresh produce. In the long term, however, NASA would like to develop an infrastructure of self-sustaining food production. If we are going to settle Mars, for example, it would be best to be able to produce all necessary food on Mars. But our food crops are not adapted to the microgravity of the ISS, or to the low gravity of the Moon or Mars. A recent study shows that this might produce unforeseen challenges.

First, prior research has shown that the lettuce grown aboard the ISS is colonized with lots of different bacteria, including some groups capable of being pathogens. There have not been any cases of foodborne illness aboard the ISS, which is great – so far the amounts and specific types of bacteria present have not caused disease (and thoroughly washing the lettuce is probably a good idea). But it shows there is the potential for bacterial contamination.

What the new study looks at is the behavior of the stomata of lettuce leaves under simulated microgravity (the plants are slowly rotated so they can never orient to gravity). The stomata of plants are little openings through which they breathe. Plants can open and close their stomata under different conditions, and will generally close them when stressed by bacteria to prevent the bugs from entering and causing infection. However, under simulated microgravity the lettuce leaves opened rather than closed their stomata in response to a bacterial stress. This is not good, and would make them vulnerable to infection. Further, there are friendly bacteria that cause the stomata to close, helping the plants defend against harmful bacteria. But in simulated microgravity these friendly bacteria failed to cause stomata closure.

This is concerning, but again we don't know how practically relevant it is. We have too little experience aboard the ISS with locally grown plants. It suggests, however, that we can choose, or perhaps cultivate or engineer, plants that are better adapted to microgravity. We can test which cultivars retain their defensive stomata closure even in simulated microgravity. Once we do that, we may be able to determine which gene variants confer that adaptation. This is the direction the researchers hope to go next.

So yeah, while space is harsh and the challenges immense, people are clever and we can likely find solutions to whatever space throws at us. Likely we will need to develop crops that are adapted to microgravity, lunar gravity, and Martian gravity. We may need to develop plants that can grow in treated Martian soil, or lunar regolith. Or perhaps off Earth we need to go primarily hydroponic.

I also wonder how solvable the funk problem is. It seems likely that a sufficiently robust air purifier could make a huge impact. Environmental systems will not only need to scrub CO2, add oxygen, and manage humidity and temperature in the air aboard a station, ship, or habitat. They will also need a serious defunking ability.
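
For illustration only, here is a minimal sketch (in Python) of the kind of monitoring loop such an environmental system might run. Every sensor name and threshold in it is my own assumption, added purely to make the list of requirements concrete – it is not a real ISS specification.

# Hypothetical sketch of a habitat environmental-control loop.
# All names and thresholds are illustrative assumptions, not real ISS specs.
LIMITS = {
    "co2_ppm":      (0, 2600),     # scrub CO2 above the upper bound
    "o2_percent":   (19.5, 23.5),  # add O2 below the lower bound
    "humidity_pct": (40, 60),
    "temp_c":       (18, 27),
    "voc_index":    (0, 100),      # "funk" proxy: volatile organic compounds
}

def control_step(readings):
    """Return the corrective actions implied by the current sensor readings."""
    actions = []
    if readings["co2_ppm"] > LIMITS["co2_ppm"][1]:
        actions.append("run CO2 scrubber")
    if readings["o2_percent"] < LIMITS["o2_percent"][0]:
        actions.append("release O2")
    if not LIMITS["humidity_pct"][0] <= readings["humidity_pct"] <= LIMITS["humidity_pct"][1]:
        actions.append("adjust humidity")
    if not LIMITS["temp_c"][0] <= readings["temp_c"] <= LIMITS["temp_c"][1]:
        actions.append("adjust temperature")
    if readings["voc_index"] > LIMITS["voc_index"][1]:
        actions.append("run charcoal/VOC filter")  # the "defunking" step
    return actions

print(control_step({"co2_ppm": 3000, "o2_percent": 20.9,
                    "humidity_pct": 55, "temp_c": 22, "voc_index": 140}))
# -> ['run CO2 scrubber', 'run charcoal/VOC filter']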

 

The post Microbes Aboard the ISS first appeared on NeuroLogica Blog.

Categories: Skeptic

Is Mars the New Frontier?

Mon, 01/22/2024 - 5:08am

In the excellent sci-fi show The Expanse, which takes place a couple hundred years in the future, Mars has been settled and is an independent, self-sustaining society. In fact, Mars is presented as the most scientifically and technologically advanced human society in the solar system. This is attributed to the fact that Martians have had to struggle to survive and build their world, and that struggle led to a culture of innovation and dynamism.

This is a version of the Turner thesis, which has been invoked as one justification for taking on the extreme expense and difficulty of settling locations off Earth. I was recently pointed to this article discussing the Turner thesis in the context of space settlement, which I found interesting. The Turner thesis holds that the frontier mindset of the old West created a culture of individualism, dynamism, and democracy that is a critical part of the success of America in general. This theory was popular in the late 19th and early 20th centuries, but fell out of academic favor in the second half of the 20th century. Recent papers trying to revive some version of it are less than compelling, showing that frontier exposure correlates only weakly with certain political and social features, and that those features are a mixed bag rather than an unalloyed good.

The article is generally critical of the notion that some version of the Turner thesis should be used to justify settling Mars – that humanity would benefit from a new frontier. I basically agree with the article: the Turner thesis is rather weak and complicated, and analogies between the American Western frontier and Mars (or other space locations) are highly problematic. In every material sense, it's a poor analogy. On the frontier there was already air, food, soil, and water, and there were other people living there. None of those things (as far as we know) exists on Mars.

But I do think that something closer to The Expanse hypothesis is not unreasonable. Just as the Apollo program spawned a lot of innovation and technology, solving the problems of getting to and settling Mars would likely have some positive technological fallout. However, I would not put this forward as a major reason to explore and settle Mars. We could likely dream up many other technological projects here on Earth that would be better investments with a much higher ROI.

I do support space exploration, including human space exploration, however. I largely agree with those who argue that robots are much better adapted to space, and sending our robotic avatars into space is much cheaper and safer than trying to keep fragile biological organisms alive in the harsh environment of space. For this reason I think that most of our space exploration and development should be robotic.

I also think we should continue to develop our ability to send people into space. Yes, this is expensive and dangerous, but I think it would be worth it. One reason is that I think humanity should become a multi-world spacefaring species. This will be really hard in the early days (now), but there is every reason to believe that technological advancements will make it easier, cheaper, and safer. This is not just a hedge against extinction; it also opens up new possibilities for humanity. It is also part of the human psyche to be explorers, and this is one activity that can have a unifying effect on shared human culture (depending, of course, on how it's done).

There is still debate about the effectiveness of sending humans into space for scientific activity. Sure, our robots are capable and getting more capable, but for the time being they are no substitute for having people on site actively carrying out scientific exploration. Landers and rovers are great, but imagine if we had a team of scientists stationed on Mars, able to guide scientific investigations, react to findings, and take research in new directions without having to wait 20 years for the next mission to be designed and executed.

There are also romantic reasons which I don’t think can be dismissed. Being a species that explores and lives in space can have a profound effect on our collective psyche. If nothing else it can inspire generations of scientists and engineers, as the Apollo program did. Sometimes we just need to do big and great things. It gives us purpose and perspective and can inspire further greatness.

In terms of cost, the raw numbers are huge, but then anything the government does on that scale comes with huge dollar figures. Comparatively, though, the amount of money we spend on space exploration is tiny next to other activities of dubious or even whimsical value. NASA's annual budget is around $23 billion, while Americans spend over $12 billion on Halloween each year. I'm not throwing shade on Halloween, but it's hard to complain about the cost of NASA when we so blithely spend similar amounts on things of no practical value. NASA is only about 0.48% of the annual federal budget. It's almost a rounding error. I know all spending counts and it all adds up, but this does put things into perspective.

Americans also spent $108 billion on lottery tickets in 2022. Those have, statistically speaking, almost no value. People are essentially buying the extremely unlikely dream of winning, a dream the vast majority will never realize. I would much rather buy the dream of space exploration. In fact, that may be a good way to supplement NASA's funding – sell the equivalent of NASA lottery tickets for a chance to take an orbital flight, go to the ISS, or perhaps name a new feature or base on Mars. People spend more for less.
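
To put those dollar figures side by side, here is a minimal back-of-the-envelope sketch in Python. The roughly $4.8 trillion total federal budget is my own assumption, chosen only because it is consistent with the 0.48% figure above; the other numbers are the ones cited in the post.

# Rough comparison of NASA's budget to the other spending mentioned above.
# The federal budget total is an assumption chosen to match the ~0.48% figure.
nasa_budget = 23e9          # ~$23 billion per year
halloween_spending = 12e9   # ~$12 billion per year
lottery_spending = 108e9    # ~$108 billion in 2022
federal_budget = 4.8e12     # assumed total annual federal budget (~$4.8 trillion)

print(f"NASA share of federal budget: {nasa_budget / federal_budget:.2%}")       # ~0.48%
print(f"Halloween spending vs NASA:   {halloween_spending / nasa_budget:.1f}x")  # ~0.5x
print(f"Lottery spending vs NASA:     {lottery_spending / nasa_budget:.1f}x")    # ~4.7x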

The post Is Mars the New Frontier? first appeared on NeuroLogica Blog.

Categories: Skeptic
