
neurologicablog Feed

Your Daily Fix of Neuroscience, Skepticism, and Critical Thinking

UFOs and SGU on John Oliver

Tue, 04/23/2024 - 5:01am

The most recent episode of John Oliver’s Last Week Tonight featured a discussion of the UFO phenomenon. I’m always interested, and often disappointed, in how the mainstream media portrays skeptical topics. One interesting addition here is that Oliver actually referenced an SGU episode, the one in which we interviewed Jimmy Carter about his UFO sighting. Unfortunately the rest of the episode was a bit of a letdown.

Oliver is the first to acknowledge that he is not a journalist. He’s a comedian. But comedians often give biting satire of our culture and society, and Oliver has developed a specific niche. He essentially picks a topic and makes the point that – we’re doing it wrong, we can do better. Oliver also has a staff of researchers, so he is not just making stuff up or giving superficial observations. In general I find he does an excellent job, giving a decent overview of a topic and making important observations. Many of the topics he covers are too complex to do justice to in a 20-minute comedy routine, but he usually manages to hit the important points.

His piece on UFOs was very typical of his general approach. All storytellers (whether journalists, blog writers, or comedians) have a narrative, some organizing point or argument they are making. Otherwise a piece is just a list of facts or assertions. Oliver’s series has a narrative (as I said, we’re doing it wrong), and each episode has a narrative within that framework. For the UFO piece his main narrative was that the US government is doing UFO research wrong, and this is being driven by the underlying fact that most people interested in UFOs fall into one of two groups, true believers or hardline skeptics. True believers will believe anything, no matter how ridiculous, and hardline skeptics just roll their eyes dismissively, chilling genuine research.

Both of these narratives have problems. Oliver did give a decent summary of the last 70 years of UFO research by the US government, but left out some important bits and focused on those that fit his narrative. He mentioned Project Blue Book, which investigated 12,618 UFO sightings between 1947 and 1969. He did not discuss, however, that this was a high-quality, serious investigation that yielded exactly zero evidence that any part of the UFO phenomenon includes alien spacecraft or activity. He focused more on the Condon report, because that fit his narrative. This was a much smaller study, really a pilot study, to determine whether UFOs deserved further serious research. Condon concluded that they did not, and Oliver’s critique focused on some dismissive statements that Condon had made about UFO believers.

The best part of the piece was his summary of the Roswell incident. I think here he did hit the main points – the government initially said it was a weather balloon, but admitted in the 1990s that it was really a spy balloon designed to detect Soviet atomic weapons research. He mainly used this story to make the point that the government lies to the public, but did seem to justify it to some extent by pointing out that the government is not going to just admit that the real explanation for some UFO sightings is highly sensitive classified military secrets. That point would have been stronger if he had explicitly said that this does not mean they are necessarily lying about aliens. He also did a good job of pointing out that some sightings have mundane explanations, and no one is excited by that.

The weakest part of his report was the clip he showed of the navy pilots behind one of the recent Pentagon videos, taking their statements at face value and calling them “chilling” – mainly the claim that whatever they had witnessed represents technology superior to our own. He made no mention of Mick West’s investigation of these videos, or the fact that the actual evidence is completely compatible with mundane phenomena, like distant airplanes, drones, mylar balloons, or even birds.

He came close when he discussed the “go fast” video, using it as an example of the kind of research we need more of. He referred to a four-hour video doing a careful scientific analysis of the footage, but his conclusion was misleading. He said, correctly, that the analysis shows where and how fast the object was moving, but then added that we don’t know what the object is. That was not wrong, but without context it is highly misleading, because it gives the impression that the object is still mysterious. He did not say that, according to the careful analysis he praised, it is completely possible, and in fact highly likely, that the object was simply a bird (even though we cannot be entirely certain).

He also made no mention of the recent extensive Pentagon report finding zero evidence of alien phenomena or any government coverup or secret operations. You might excuse this given the limited time he had, and that’s fair enough, but given his premise it seems like a critical omission.

The second major narrative of the piece, that the UFO world is divided into true believers and hardened skeptics, is a false dichotomy. The question of how much research a topic like UFOs deserves is a complex one, and Oliver missed a lot of nuance. Again, this is understandable given the format, but the false dichotomy was a bit lazy, in my opinion, and could have been improved by framing it differently or throwing in a few caveats. First, I don’t know what he means by “hardened skeptics”. He seems to be saying, however, that the skeptical community is dismissive and ridiculing when it comes to UFOs, in a way that specifically discourages reporting and research. I would argue that this is an unfair characterization and essentially not true.

There may be individual people who consider themselves skeptical who are dismissive, but that does not describe the skeptical literature. In fact, skeptics often champion careful and transparent technical and scientific investigation into fringe beliefs, specifically so that we can come to conclusions more definitively. It is also a great way to showcase the scientific method to the public over issues of great public interest. Skeptics spend way more time examining fringe topics than they deserve purely on their scientific merits. We are, if anything, the opposite of dismissive. There is an assumption that dismissiveness and cynicism are the opposite of gullibility and belief. But this is false. Critical thinking combined with scientific literacy is the opposite of gullibility.

There are caveats. We would not want to waste resources that could be better spent on more fruitful research. That’s an individual judgement call. We also would always want to make sure the research is truly scientific and does not amount to pseudoscience. Further, we push back against using the fact that research is happening (rather than the results) to promote a belief that has not been established.

With regard to UFOs, that aliens exist is not impossible, and in fact is likely. That they are visiting the Earth is of unknown probability, but currently there is no smoking-gun evidence that they are. And many of the terrestrial phenomena that could be making up parts of the UFO phenomenon are themselves interesting – new scientific phenomena, changes in public behavior (like the increased use of drones, or the release of floating candles) that may pose risks to aviation, and foreign powers intruding into our airspace. If nothing else we will learn in more detail the ways in which pilots can be deceived by what they think they see, and the limitations of radar and other technology.

I feel that Oliver and his staff could have found a narrative that fit his general theme without unnecessarily giving the impression that there is still a genuine mystery here, that something fantastical is going on. I also would have preferred that he not straw-man thoughtful skepticism about the UFO phenomenon.

 


Indigenous Knowledge

Mon, 04/22/2024 - 5:05am

I recently received the following question to the SGU e-mail:

“I have had several conversations with friends/colleagues lately regarding indigenous beliefs/stories. They assert that not believing these based on oral histories alone is morally wrong and ignoring a different cultures method of knowledge sharing. I do not want to be insensitive, and I would never argue this point directly with an indigenous person (my friends asserting these points are all white). But it really rubs me the wrong way to be told to believe things without what I would consider more concrete evidence. I’m really not sure how to comport myself in these situations. I would love to hear any thoughts you have on this topic, as I don’t have many skeptical friends.”

I also frequently encounter this tension between a philosophical dedication to scientific methods and respect for indigenous cultures. Similar tensions come up in other contexts, such as indigenous cultures that hunt endangered species. These tensions are sometimes framed in terms of “decolonization,” defined as “the process of freeing an institution, sphere of activity, etc. from the cultural or social effects of colonization.” Here is a more detailed description:

“Decolonization is about “cultural, psychological, and economic freedom” for Indigenous people with the goal of achieving Indigenous sovereignty — the right and ability of Indigenous people to practice self-determination over their land, cultures, and political and economic systems.”

I completely understand this concept and think the project is legitimate. To “decolonize” an indigenous culture you have to do more than just physically remove foreign settlers. Psychological and cultural colonization is harder to remove. And often cultural colonization was very deliberate, such as missionaries spreading the “correct” religion to “primitive” people.

But like all good ideas, it can be taken too far. People tend to prefer the moral clarity of simplistic dichotomies. What the e-mailer is referring to is when science is considered part of colonization, and something that indigenous people should free themselves of. Further, we need to respect their cultural freedom from science and accept their historical view of reality as being just as legitimate as a science-based one. But I think this approach is completely misguided, even if it is well-intentioned (well intentioned but misguided is often a dangerous combination).

There are a couple of ways to look at this. One is that science is not a cultural belief. Science (and philosophy, for that matter) is something that transcends culture. The purpose of science is to transcend culture, to use a set of methods that are as objective as possible, and to eliminate bias as much as possible. In fact, scientists often have to make a deliberate effort to think outside of the biases of their own culture and world view.

Logic and facts are not cultural. Reality does not care about our own belief systems, whatever their origin, it is what it is regardless. Respecting an indigenous culture does not mean we must surrender respect for facts and logic.

Another important perspective, I think, is that as a species we have some shared culture and knowledge. This is actually a very useful and even beautiful thing – there is human culture and knowledge that we can all share, and I would put science at the top of that list. There are objective methods we can use to come to mutual agreement despite our differing cultures and histories. We can have the commonality of a shared reality, because that reality actually exists (whether we believe in it or not) and because the scientific methods we use to understand that reality are transcultural. Science, therefore, is not one culture colonizing another, but all cultures placing something objective and verifiable above their own history, culture, and parochial perspectives.

We can make similar arguments for certain basic aspects of ethics and morality, although universal objectivity is more difficult to achieve there. But as a species we can conclude that certain things are objectively ethically wrong, such as slavery. If an indigenous culture believed in and actively practiced human slavery, would we be compelled to respect that and look the other way?

Yet another layer to this discussion is consideration of the methods that are used by one society to convince another to adopt its norms. If it is done by force, that is colonization. If it is done by intellectual persuasion and adopted freely, that is just one group sharing its knowledge with another.

And finally, I think we can respect the mythology and beliefs of another culture without accepting those beliefs as objectively true, or abandoning all concept of “truth” and pretending that all knowledge is equal and relative. Pretending the ancient cultural beliefs of a group are “true” is actually infantilizing and racist, in my opinion. It assumes that they are incapable of reconciling what every culture has had to reconcile to some degree – the difference between historical beliefs and objective evidence. Every society has its narratives, its view of history, and facts invariably push up against those narratives.

I know that in practice these principles are very complex and there is a lot of gray zone. Science is an ideal, and people have a tendency to exploit ideals to promote their own agendas. Just labeling something science doesn’t mean we can bulldoze over other considerations, and science is often corrupted by corporate interests and cultural promotion, even to the point of hegemony. This is because the people who execute science are flawed and biased. But that does not change the ideal itself. Science and philosophy (examining arguments for internal logical consistency) are methods we can use to arrive at transcultural human beliefs and institutions.

Let’s take the World Health Organization (WHO), for example, which is an international organization dedicated to promoting health around the world. I would argue, as an international organization, they should be relying on objective science-based methods as much as possible. Also, since their goal is to improve the health of humanity, science is the best way to do that. They should not, in my opinion, bow down before any individual culture’s pre-scientific beliefs about health for the purpose of cultural sensitivity. It is not their mission to promote local cultures or to right the wrongs of past colonizers. They should unapologetically take the position that they will only promote and use interventions that are based on objective scientific evidence. They can still do this in a culturally sensitive way. All physicians need to practice “culturally competent” medicine, which does not have to include endorsing or using treatments that do not work.

So in practice this is all very messy, but I think it’s important to at least follow legitimate guiding principles. Science is something that all of humanity owns, and it strives towards the ideal of being transcultural and objective. This is not incompatible with respecting local cultures and self-determination.

 


New Generation of Electric Robots

Fri, 04/19/2024 - 5:10am

Boston Dynamics (now owned by Hyundai) has revealed its electric version of its Atlas robot. These robot videos always look impressive, but at the very least we know that we are seeing the best take. We don’t know how many times the robot failed to get the one great video. There are also concerns about companies presenting what the full working prototype might look like, rather than what it actually currently does. The state of CGI is such that it’s possible to fake robot videos that are indistinguishable to the viewer from real ones.

So it’s understandable that these robot reveal videos are always looked at with a bit of skepticism. But pushback does seem to have an effect, and there is pressure on robotics companies to be more transparent. The video of the new Atlas robot does seem to be real, at least. Also, these are products for the real world. At some point the latest version of Atlas will be operating on factory floors, and if it didn’t work Boston Dynamics would not be sustainable as a company.

What we are now seeing, not just with Atlas but also Tesla’s Optimus Gen 2 and others, is a conversion to all-electric robots. This makes them smaller, lighter, and quieter than the previous hydraulic versions. They are also not tethered to cables as previous versions were.

My first question was – what is the battery life? Boston Dynamics says they are “targeting” a four hour battery life for the commercial version of the Atlas. I love that corporate speak. I could not find a more direct answer in the time I had to research this piece. But four hours seems reasonable – the prior version from 2015 had about a 90 minute battery life depending on use. Apparently the new Atlas can swap out its own battery.

In addition to being electric, the Atlas is faster and more nimble. It can rotate its joints to give it more flexibility than a human, as demonstrated in the video. The goal is to allow it to flexibly operate in narrow work spaces.

Tesla has also unveiled its Optimus Gen 2 robot, which is a bit more oriented around personal rather than factory use. Tesla hypes that it could theoretically go shopping and then come home and cook you dinner. By way of demonstration, it released a video of Optimus delicately handling eggs. To be clear, Optimus is a prototype, not ready for commercialization. Tesla knows it needs to make continued improvement before this product is ready for prime time. Musk claims he is aiming for a sub $20,000 price tag for the commercial version of Optimus – but of course that does not mean much until they are actually for sale.

There is no question that the latest crop of electric robots is a significant improvement on earlier robots – they are more agile, lighter, and have longer battery life. These robots can also benefit from recent advances in AI technology. Currently there are estimated to be 3.4 million industrial robots at work in the world, and this number is growing. The question is – are we really on the cusp of robots transitioning to non-industrial work and residential spaces? As is often the case – it’s hard to say.

As a general rule it’s good to assume that technology hype tends to be premature, and real-world applications often take longer than we anticipate. But then the technology crosses the finish line and suddenly appears. All the hype about personal digital assistants merging with cell phones and the internet lasted for at least a decade before the iPhone suddenly changed the world. There is a hype phase, a post-hype phase, and then a reality phase for such technologies. Of course, the reality may be that the technology fails. Right now, for example, we appear to be in the post-hype phase of self-driving cars. But we also seem to be rapidly transitioning to self-driving cars as a reality, at least to some extent.

It still feels like we are in the hype phase of residential robots. It’s hard to say how long it will be before all-purpose robots are common in work spaces and the home. The difference, I think, with this technology is that it already does exist, for industrial use. This is more of a transition to a new use, rather than developing the technology itself. But on the other hand, the transition from factory floor to home is a massive one, and does require new technology to some extent.

There is also the issue of cost. Are people going to pay $20k for a robot? What’s the “killer app” that will make the purchase worth it? Where is the price point at which people will feel it is a worthwhile appliance, worth the cost? When will robots become the new microwave oven?

On the encouraging side is the fact that these robots are already very capable, and steady incremental advances will add up quickly (as they already have). On the downside, it’s hard to see how such an appliance will be worth the cost anytime soon. They will need to become either incredibly useful, or much cheaper. Will they really provide $20k worth of convenience, and be more cost-effective than just hiring people to do the jobs you don’t want to do? There is a threshold, but we may still be years away from crossing it.


Evolution and Copy-Paste Errors

Tue, 04/16/2024 - 5:07am

Evolution deniers (I know there is a spectrum, but generally speaking) are terrible scientists and logicians. The obvious reason is that they are committing the primary mortal sin of pseudoscience – working backwards from a desired conclusion rather than following evidence and logic wherever it leads. They therefore latch onto arguments that are fatally flawed because they feel they can use them to support their position. One could literally write a book using bad creationist arguments to demonstrate every type of poor reasoning and pseudoscience (I should know).

A classic example is an argument mainly promoted as part of so-called “intelligent design”, which is just evolution denial desperately seeking academic respectability (and failing). The argument goes that natural selection cannot increase information, only reduce it. It does not explain the origin of complex information. For example:

big obstacle for evolutionary belief is this: What mechanism could possibly have added all the extra information required to transform a one-celled creature progressively into pelicans, palm trees, and people? Natural selection alone can’t do it—selection involves getting rid of information. A group of creatures might become more adapted to the cold, for example, by the elimination of those which don’t carry the genetic information to make thick fur. But that doesn’t explain the origin of the information to make thick fur.

I am an educator, so I can forgive asking a naive question. Asking it in a public forum in order to defend a specific position is more dodgy, but if it were done in good faith, that could still propel public understanding forward. But evolution deniers continue to ask the same questions over and over, even after they have been definitively answered by countless experts. That demonstrates bad faith. They know the answer. They cannot respond to the answer. So they pretend it doesn’t exist, or when confronted directly, respond with the equivalent of, “Hey, look over there.”

The answer is right in the formulation of their position – “Natural selection alone can’t do it…”. I can quibble with the notion that natural selection only removes information, but even if we accept this premise, it doesn’t matter, because natural selection is not acting alone. Evolution is better understood as a two-step process: generating new information, and then selecting the subset of that new information which provides an immediate survival advantage. There are multiple mechanisms for generating new information. These include point mutations, in which one nucleotide is swapped out for another, potentially changing an amino acid in the resulting protein. But they also include “copy-paste” errors, in which entire genes, or sets of genes, or entire chromosomes, and sometimes entire genomes are copied. It is difficult to argue that adding new genes to the total set of genes in a genome is not adding more information.

That is where evolution deniers play a logical game of three-card monte. They say – ah, but mutations are random. They are “mistakes” that can only degrade the information. They are not directed or cumulative. This is the equivalent of arguing that a car cannot work because the engine cannot steer the car, and the steering column cannot propel the car. But of course, each component does what the other cannot, and the combination works. Similarly, mutations are not directed but they do add more information, and selection does not add raw information but it can be directed and cumulative. The combination can add more specific information over time – new genes that make new proteins that have new functions.

The other major unstated assumption in this evolution-denying argument is that there is some essential perfect state of a gene and any mutation is a degradation. But this is not correct. All genes are mutants, and there is no “correct” or preferred state. There are only different states with different functionality. Functionality is also not objectively or essentially better or worse, just different. But some states may provide selective advantages under some conditions. Also, it is better to think of different functional states as having different sets of tradeoffs. The statistically advantageous tradeoffs are more likely to survive and persist.

This is all logically sound, but what does the empirical evidence say? If intelligent design were true, then we would expect to see a pattern in biology that suggests top-down de-novo design. Genes would all be their own entities, made to purpose, without any remnants of a deep past history – at least, if you are willing to admit to a testable version of intelligent design. Proponents usually dodge any such tests by arguing, essentially, that – whatever we find, that’s what the designer intended.

In any case, if evolution were true we would expect to find a pattern in biology that suggests a nested branching relationship among all things, including genes. Genes did not come from nowhere, wholly perfect and complete. Genes must have evolved from ancestral genes, which further suggests that occasionally there are duplications of genes. That is how the total number of genes can potentially increase over evolutionary history.

Guess what we find when we look at the genomes of multicellular creatures. We find evidence of gene duplications and a branching pattern of relationships. A recent study adds to the mountain of evidence for this pattern. Researchers looked at the genomes of 20 bilaterian species – these include vertebrates and insects that have a basically bilaterally symmetrical body plan. What they found is that core genes and sets of genes that are involved with basic body anatomy are preserved across the bilaterian spectrum. Further, many of these core genes were the result of gene duplication, with multiple whole genome duplication events. They further found that when genes are duplicated, different cell lines can have different patterns of gene expression. This can even result in the evolution of new specialized cell types.

Gene expression refers to the fact that not all genes are expressed to the same degree in all cells. Liver cells express liver genes, while brain cells express brain genes (to put it simply). You can therefore have evolutionary change in a gene without mutating the amino acid sequence of the protein the gene codes for, but rather by altering the regulation of gene expression.

Gene duplication also allows for an important process in evolution – experimentation. When genes are duplicated, one copy can continue its original function. This, of course, is critical for genes that have core functions that are necessary for the organism to be alive. One copy continues this core function, while another copy (or more) is free to mutate and alter its function. This could lead to advantages in the core functionality, or to taking on entirely new functions. Any mutations that happen to provide even the slightest advantage will tend to be preserved, allowing for endless evolutionary tweaking and cumulative change that can ultimately lead to entirely new cell lines, tissues, anatomy, and functions. That certainly sounds like adding new information to me.

Not all changes, by the way, have to be immediately directed by natural selection. There is also random genetic drift. A redundant gene, unmoored from selective pressures, can endlessly “drift,” accumulating many genetic changes. If at any point, in any individual of any descendant line, that gene produces a protein that can be exploited for some immediate advantage, it will then gain a toehold on natural selection, and we’re off to the races.
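For the flavor of how duplication plus undirected change plus selection can add up, here is a toy simulation in the spirit of Dawkins’ “weasel” demonstration – not a realistic model of molecular evolution, and every sequence, score, and parameter in it is invented for illustration – in which an essential gene keeps its original sequence while its redundant copy wanders toward a new function:

```python
import random

random.seed(1)
TARGET = "GATTACA"   # toy stand-in for a sequence with a new, useful function
BASES = "ACGT"

def mutate(seq: str) -> str:
    """Change one random position to a random base (undirected variation)."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(BASES) + seq[i + 1:]

def score(seq: str) -> int:
    """Toy fitness: how many positions match the hypothetical useful sequence."""
    return sum(a == b for a, b in zip(seq, TARGET))

original = "GGCCTTA"   # the essential gene keeps its original sequence and function
copy = original        # the duplicated copy is free to change

for generation in range(1, 5001):
    candidate = mutate(copy)
    # Neutral or better changes in the redundant copy can stick around (drift);
    # once a change helps, keeping it is favored (selection).
    if score(candidate) >= score(copy):
        copy = candidate
    if copy == TARGET:
        print(f"Copy reached a new functional sequence at generation {generation}")
        break

print("Original gene unchanged:", original)
```

The point of the toy is only that undirected variation plus a retention rule is cumulative, while the duplicated original is untouched the whole time.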

When we look at the genomes of many different species, it’s pretty clear this is what has actually happened, many times, throughout evolutionary history. We can even map out a branching relationship of these events. Evolutionary lineages that are related have the same history of gene evolution (up to their last common ancestor). The quirky details of their genes line up in a way that can only be explained by a shared history. A shared function by a common designer doesn’t cut it. Many of these quirky details are not related to function, or there would be countless functional options. One would have to propose that the intelligent designer deliberately created life to look exactly as if it has evolved. That is yet another unfalsifiable notion that keeps intelligent design outside the boundaries of science.


Using AI To Create Virtual Environments

Mon, 04/15/2024 - 4:57am

Generative AI applications seem to be on the steep part of the development curve – not only is the technology getting better, but people are finding more and more uses for it. It’s a new powerful tool with broad applicability, and so there are countless startups and researchers exploring its potential. The last time, I think, a new technology had this type of explosion was the smartphone and the rapid introduction of millions of apps.

Generative AI applications have been created to generate text, pictures, video, songs, and imitate specific voices. I have been using most of these apps extensively, and they are continually improving. Now we can add another application to the list – generating virtual environments. This is not a public use app, but was developed by engineers for a specific purpose – to train robots.

The application is called Holodeck, after the Star Trek holodeck. You can use natural language to direct the application to build a specific type of virtual 3D space, such as “build me a three bedroom single floor apartment” or “build me a music studio”. The application uses generative AI technology to build the space, with walls, floor, and ceiling, and then pulls from a database of objects to fill the space with appropriate things. It also has a set of rules for where things go, so it doesn’t put a couch on the ceiling.
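To make that pipeline concrete, here is a minimal sketch of the general text-to-scene idea – prompt in, furnished room out, with a sanity check on object placement. The scene format, object database, rules, and function names are invented stand-ins for this sketch, not the actual Holodeck code.

```python
# Illustrative sketch of a text-to-scene pipeline; everything here is a stand-in.

OBJECT_DB = {
    "bedroom": ["bed", "nightstand", "lamp", "dresser"],
    "music studio": ["piano", "mixing desk", "monitor speakers", "stool"],
}

PLACEMENT_RULES = {
    "bed": "floor",
    "piano": "floor",
    "lamp": "surface",
    "monitor speakers": "surface",
}

def generate_scene(prompt: str) -> dict:
    """Stand-in for the generative step: map a natural-language prompt
    to a room type and furnish it from the object database."""
    room_type = next((k for k in OBJECT_DB if k in prompt.lower()), "bedroom")
    scene = {"room": room_type, "objects": []}
    for name in OBJECT_DB[room_type]:
        scene["objects"].append({"name": name,
                                 "placement": PLACEMENT_RULES.get(name, "floor")})
    return scene

def validate(scene: dict) -> bool:
    """Enforce simple common-sense constraints, e.g. nothing goes on the ceiling."""
    return all(obj["placement"] in ("floor", "surface") for obj in scene["objects"])

if __name__ == "__main__":
    scene = generate_scene("build me a music studio")
    print(scene, "valid:", validate(scene))
```

In the real system the generative model and object library do the heavy lifting; the sketch only shows where the placement rules sit in the flow.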

The purpose of the app is to be able to generate lots of realistic and complex environments in which to train robot navigation AI. Such robotic AIs need to be trained on virtual spaces so they can learn how to navigate out there in the real world. Like any AI training, the more data the better. This means the trainers need millions of virtual environments, and they just don’t exist. In an initial test, Holodeck was compared to an earlier application called ProcTHOR and performed significantly better. For example, when asked to find a piano in a music studio, a ProcTHOR-trained robot succeeded 6% of the time while a Holodeck-trained robot succeeded 30% of the time.

That’s great, but let’s get to the fun stuff – how can we use this technology for entertainment? The ability to generate a 3D virtual space is a nice addition to the list above, all of which is contributing to a specific application that I have in mind – generative video games. Of course there are companies already working on this. It’s a no-brainer. But let’s talk about what this can mean.

In the short run generative AI can be used to improve the currently clunky AI behind most video games. For avid gamers, it is a cliche that video game AI is not very good, although some games are better than others. Responses from NPCs are canned and often nonsensical, missing a lot of context about the evolution of the plot in the game. The reactions of NPCs and creatures in the world are also ultimately simplistic and predictable. This makes it possible for gamers to quickly learn how to hack the limitations of the game’s AI in order to exploit it.

Now let’s imagine our favorite video games powered by generative AI. We could have a more natural conversation with a major NPC in the game. The world can remember the previous actions of the player and adapt accordingly. AI combat can be more adaptive and therefore unpredictable and challenging.

But there is another layer here – generative AI can be used to generate the video game itself, or at least parts of it. This was referenced in the Black Mirror episode “USS Callister.” The world of the game was an infinite generated space. In many ways this is an easier task than real-world applications, at least potentially. Think of a major title, like Fallout. The number of objects in the game, including every item, weapon, monster, and character, is finite. It’s much less than a real-world environment. The same is true for the elements of the environment itself. A generative AI could therefore use the database of objects that already exists for the game and generate new locations. The game could become literally infinite.

Of course, generative AI could also be used to create the game in the first place, decreasing the development time, which is years for major titles. Such games famously use a limited set of recorded voices for the characters, which means you hear the same canned phrases over and over again. Now you don’t have to get actors into studios to record scripts (although you still might want to do this for major characters); you can just generate voices as needed.

This means that video game production can focus on creating the objects, the artistic feel, the backbone plot, and the rules and physics of the world, and then let generative AI create infinite iterations of it. This can be done as part of game development. Or it can be done on a server that is hosting one instance of the game (which is how massively multiplayer games work), or eventually it can be done for one player’s individual instance of the game, just like using ChatGPT on your personal computer.

This could further mean that each player’s experience of a game can be unique, and will depend greatly on the actions of the player. In fact, players may be able to generate their own gaming environments. What I mean is, for example (sticking with Fallout), you could sign into a Bethesda Fallout website, choose the game you want, enter in the variables you want, and generate some additional content to add to your game. There could be lots of variables – how developed the area is, how densely populated, how dangerous are the people, how dangerous are the monsters, how challenging is the environment itself, what is the resource availability, etc. This already exists for the game Minecraft, which generates new unique environments as you go and allows players to tweak lots of variables, but the game is extremely graphically limited.
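As a purely hypothetical illustration of what that player-facing knob-turning might look like under the hood – the parameter names and the generate_region function below are invented for this sketch, not any real Bethesda or Minecraft tooling:

```python
import random

# Hypothetical knobs a player might expose to a generative back end.
params = {
    "development": 0.3,   # how built-up the area is (0-1)
    "population": 0.5,    # settlement density
    "hostility": 0.8,     # how dangerous the inhabitants are
    "resources": 0.2,     # loot/resource availability
    "seed": 42,           # same seed, same region
}

def generate_region(p: dict) -> dict:
    """Invented example generator: turn the player's parameters into a region spec."""
    rng = random.Random(p["seed"])
    return {
        "settlements": int(1 + p["population"] * 9),
        "ruins": int((1 - p["development"]) * 10),
        "hostile_encounters": [rng.choice(["raiders", "mutants", "robots"])
                               for _ in range(int(p["hostility"] * 5))],
        "loot_density": round(p["resources"], 2),
    }

print(generate_region(params))
```

A real generative system would fill in the actual geometry, dialogue, and quests; the sketch only shows how a handful of player-chosen variables could seed unique content.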

Also, so far I have just been thinking of using AI to recreate the current style of video games, only faster, better, and with unlimited content. Game developers, however, may think of ways to leverage generative AI to create new genres of video games – doing new things that are not possible without generative AI.

It seems inevitable that this is where we are headed. I am just curious how long it will take. I think the first crop of generative video games will come in the form of new content for existing games. Then we will see entirely new games developed with and for generative AI. This may also give a boost to VR gaming, with the ability to generate 3D virtual spaces.

And of course gaming is only one of many entertainment possibilities for generative AI. How long will it be before we have fully generated video, with music, voices, and a storyline? All the elements are there, now it’s just a matter of putting them all together with sufficient quality.

I am focusing on the entertainment applications, because it’s fun, but there are many practical applications as well, such as the original purpose of Holodeck, to train navigation AI for robots. But often technology is driven by entertainment applications, because that is where the money is. More serious applications then benefit.


Reconductoring our Electrical Grid

Thu, 04/11/2024 - 5:26am

Over the weekend when I was in Dallas for the eclipse, I ran into a local businessman who works in the energy sector, mainly involved in new solar projects. This is not surprising, as Texas is second only to California in solar installation. I asked him if he is experiencing a backlog in connections to the grid and his reaction was immediate – a huge backlog. This aligns with official reports – there is a huge backlog and it’s growing.

In fact, the various electrical grids may be the primary limiting factor in transitioning to greener energy sources. As I wrote recently, energy demand is increasing, faster than previously projected. Our grid infrastructure is aging, and mainly uses 100-year-old technology. There are also a number of regulatory hurdles to expanding and upgrading the grid. There is good news in this story, however. We have at our disposal the technology to virtually double the capacity of our existing grid, while reducing the risk of sparking fires and weather-induced power outages. This can be done cheaper and faster than building new power lines.

The process is called reconductoring, which just means replacing existing power lines with more advanced ones. I have to say, I falsely assumed that all this talk about upgrading the electrical grid included replacing existing power lines and other infrastructure with more advanced technology, but it really doesn’t. It is mainly about building new grid extensions to accommodate new energy sources and demand. Every resource I have read, including this Forbes article, gives the same primary reason why this is the case: utility companies make more money from expensive expansion projects, for which they can charge their customers. Cheaper reconductoring projects make them less money.

Other reasons are given as well. The utility companies may be unfamiliar with the technology, not want to retrain their workers, see this as “new technology” that should be approached as a pilot project, and may have some misconceptions about the safety of the technology. However, the newer power lines have been in use for over two decades, and Europe is way ahead of the US in installing them. These are hurdles that can all be solved with a little money and regulation.

Traditional power lines have a steel core with surrounding aluminum wires. Newer power lines have a carbon composite core with surrounding annealed aluminum. The newer cables are stronger, sag less, and have up to twice the energy-carrying capacity of the older lines. Upgrading to the newer cables is a no-brainer.

The electrical grids are now the primary limiting factor in getting new clean energy online. But adding new power lines is a slow process. There is no single agency that can do it, so new permits have to go through a maze of local jurisdictions. Utility companies also fight with each other over who has to pay for what. And local residents create a NIMBY problem, pushing back against new power lines.

Reconductoring bypasses all of those issues, because it uses existing power lines and infrastructure. There are no new permits – you just do it.

In a way, we can take advantage of our past negligence. We have essentially been building new power lines to add more capacity, rather than updating lines. This means we have left ourselves an easy way to massively expand our grid capacity. There is already some money in the infrastructure bill and the IRA for grid upgrades, but the consensus seems to be that this is not enough. We likely need a new bill, one that provides the regulation and funding necessary for a massive reconductoring project in the US. And again, the best part about this approach is that it can be done fast. We can get ahead of our increasing energy demand, and make the grid more resilient and safer.

This will not solve all problems. Some new additions to the grid will still need to be made, not only to expand overall capacity, but to bring new locations onto the grid, both sources and users of electricity. Those necessary grid expansions, however, can take priority, as we won’t need to build new towers just to add capacity to existing routes.

Yet again it seems we have the technology we need to successfully make the transition to a much greener energy sector. We just need to get our act together. We need to make some strategic investments and changes to regulations and how we do things. There are about 3,000 electric utility companies in the US who are responsible for grid upgrades. There are also many state and local jurisdictions. This is an impossible patchwork of entities that need to work together to improve, update, and expand the grid, and so the result is a slow bureaucratic mess (which should come as a surprise to no one). There are also some perverse incentives, such as the way utility companies are reimbursed for capital expenditures.

Again I am reminded of my experience with telehealth – we had the technology, and the advantages were all there. But we could not seem to make it happen because of bureaucratic hurdles. Then COVID hit, and literally overnight we made it happen. If we see the threat of climate change with the same urgency, we can similarly remove logistical hurdles and make a green transition happen.


Eclipse 2024

Mon, 04/08/2024 - 5:37am

I am currently in Dallas, Texas, waiting to see, hopefully, the 2024 total solar eclipse. This would be my first total eclipse, and everything I have heard indicates that it is an incredible experience. Unfortunately, the weather calls for some clouds, although forecasts have been getting a little better over the past few days, with the clouds being delayed. Hopefully there will be a break in the clouds during totality.

Actually there is another reason to hope for a good viewing. During totality the temperature will drop rapidly. This can cause changes in pressure that will temporarily disperse some types of clouds.

I am prepared with eclipse glasses, a pair of solar binoculars, and one of my viewing companions has a solar telescope. These are all certified and safe, and I have already used the glasses and binoculars extensively. You can use them to view the sun even when there is not an eclipse. With the binoculars you can see sunspots – it’s pretty amazing.

While in Dallas, we (the SGU crew, including George Hrab and our tech guru, Ian) put on three shows over the weekend, including recording two live episodes of the SGU. These were our biggest crowds ever for a live event, and included mostly people not from Texas. People from all over the world are here to see the eclipse.

I have to add, just because there is so much talk about this in the media, a clarification about the danger of viewing solar eclipses. You can view totality without protection and without danger. Also, during most of the partial eclipse, viewing the eclipse is no different than viewing the sun. It is dangerous to look directly at the sun. You should not do it as it can damage your retina.

But – we all live our lives without fearing accidentally staring at the sun, because it hurts and we naturally don’t do it. The only real danger of an eclipse is when most of the sun is covered, so that only a crescent of sun is visible. In this case the remaining amount of sun is not bright enough to trigger pain and cause us to look away. But that sliver of sun is still bright enough to damage your retina. So don’t look directly at a partial eclipse even if it is not painful. This includes locations out of the path of totality that will have a high degree of sun cover, or just before or after totality. That is when you want to use certified eclipse glasses (that are in good condition). During totality you do not need eclipse glasses, and you would see nothing but black anyway.

I will add updates here, and hopefully some pictures, once the eclipse happens.

Update: Well, despite weeks of bad weather reports and angst, we had clear skies in Dallas, and got to see the entire eclipse, including all of totality. Absolutely amazing. It is one of those wondrous natural phenomena that you have to experience in person.

During totality we were able to see multiple prominences, including one big one. Essentially this was a huge arc of red gas extending from the surface of the sun. Beautiful.

I would definitely recommend planning a trip to a future total solar eclipse. It will be worth it.


AI Designed Drugs

Tue, 04/02/2024 - 5:04am

On a recent SGU live streaming discussion someone in the chat asked – aren’t frivolous AI applications just toys without any useful output? The question was meant to downplay recent advances in generative AI. I pointed out that the question is a bit circular – aren’t frivolous applications frivolous? But what about the non-frivolous applications?

Recent generative AI applications are a powerful tool. They leverage the power and scale of current data centers with the massive training data provided by the internet, using large language model AI tools that are able to find patterns and generate new (although highly derivative) content. Most people are likely familiar with this tech through applications like ChatGPT, which uses this AI process to generate natural-language responses to open ended “prompts”. The result is a really good chat bot, but also a useful interface for searching the web for information.

This same technology can generate output other than text. It can generate images, video, and music. The results are technically impressive (if far from perfect), but in my experience not genuinely creative. I think these are the fun applications the questioner was referring to.

But there are many serious applications of this technology in development as well. An app like ChatGPT can make an excellent expert system, searching through tons of data to produce useful information. This can have many practical applications, from generating lists of potential diagnoses for doctors to consider, to writing first-draft legal contracts. There are still kinks to be worked out, but the potential is clearly amazing.

Perhaps most amazing, however, is the potential for AI in general, including these new generative AI applications, to assist in scientific research. This is already happening. As someone who reads dozens of science press releases a week, it is clear that the number of research studies leveraging AI is growing rapidly. The goal is to use AI to essentially complete months of research in mere hours. A recent such study caught my attention as a particularly powerful example.

The researchers used generative AI (an application called SyntheMol) to design potential antibiotics. Again, AI aided drug development is not new, but this looks like a significant advance. The idea is to use a large language model AI to generate not text but chemical structures. This is feasible because we already have a large library of known drug-like chemicals, their structure, their chemistry, the chemical reactions that make them, and their biological activity. The AI was trained on 130,000 chemical building blocks. This is a type of chemical language, and the AI can be used to generate new iterations with predicted properties.

This is essentially what traditional drug design does, but AI just does it much faster. It is estimated, for example, that there are 10^60 potential drug-like chemical structures that could exist. That is an impossibly large space to explore with conventional methods. The AI used in the current study explored a “chemical space” of 30 billion new compounds. That is still a small slice of all possible drug molecules, but this subset had parameters. They were looking for chemicals that could have potential antibacterial activity against Acinetobacter baumannii, a Gram-negative bacterial pathogen. This also has been done before – looking for antibiotics – but one problem was that many of the resulting chemicals were hard to synthesize. So this time they included another parameter – only make molecules that are easy to synthesize, and include the chemical reaction steps necessary to make them.

In just 9 hours SyntheMol generated 25,000 potential new drugs. The researchers then filtered this list looking for the most novel compounds, to avoid existing resistance to current antibiotics. They chose 70 of the most promising chemicals and handed them off, including the recipe of chemical reactions to synthesize them, to a Ukrainian chemical company. The company was able to synthesize 58 of them. The researchers then tested them as antibiotics and found that six of them represented structurally unique molecules with antibacterial activity against A. baumannii.
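The overall shape of that workflow – generate broadly, then filter for novelty and synthesizability before anything touches a lab bench – can be sketched in a few lines. Everything below is a placeholder (the building blocks, the filters, and the candidate format are invented stand-ins), not SyntheMol’s actual code.

```python
import random

random.seed(0)
BUILDING_BLOCKS = [f"block_{i}" for i in range(20)]   # stand-in for ~130,000 real fragments
KNOWN_ANTIBIOTICS = {"block_0+block_1", "block_2+block_3"}

def generate_candidates(n: int) -> list:
    """Placeholder for the generative model: combine building blocks into candidates."""
    return [f"{random.choice(BUILDING_BLOCKS)}+{random.choice(BUILDING_BLOCKS)}"
            for _ in range(n)]

def is_novel(molecule: str) -> bool:
    """Keep candidates structurally distinct from known antibiotics (placeholder check)."""
    return molecule not in KNOWN_ANTIBIOTICS

def is_synthesizable(molecule: str) -> bool:
    """Placeholder for the 'easy to synthesize' constraint built into generation."""
    return not molecule.endswith("block_19")   # arbitrary stand-in rule

candidates = generate_candidates(25_000)                      # generate broadly
shortlist = [m for m in candidates if is_novel(m) and is_synthesizable(m)]
to_synthesize = shortlist[:70]                                # hand a short list to the chemists
print(len(candidates), "generated ->", len(to_synthesize), "selected for synthesis")
```

In the real pipeline the generative model, novelty scoring, and synthesizability checks are trained cheminformatics components; the sketch only shows where each filter sits relative to the wet-lab step.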

These results would have been impossible in this time frame without the use of generative AI. I would call that a non-frivolous outcome.

Drug candidates resulting from this process still need to be tested clinically, and may fail for a variety of reasons. But chemists who develop drugs know the parameters that make a successful drug. It has to have good bioavailability, a reasonable half-life, and a relative lack of toxicity (among others). These are all features that can be somewhat predicted based upon known chemical structures. These can all become parameters that SyntheMol or a similar application can use when generating potential molecules.

The goal, of course, is to do as much of the selection and filtering as possible digitally, so that when you get to in-vitro testing, animal testing, and eventually human testing, the probability of a successful drug has already been maximized. The potential for saving money, time, and suffering is massive.

This is only one specific example of how this new generative AI technology can supercharge scientific research. This is a quiet revolution that is already happening. In spaces where this kind of technology can be effectively leveraged, the pace of scientific progress may increase by orders of magnitude. Fans of the Singularity might argue that this is the beginning – a time when the pace of scientific and technology progress becomes so rapid that society cannot keep up, and the horizon of future predictability narrows to insignificance. The Singularity refers more to a time when general AI takes over human civilization, technology and research. But even with these narrow generative AI tools we are starting to see the real potential here. It’s both exciting and frightening.


What to Make of Havana Syndrome

Mon, 04/01/2024 - 5:21am

I have not written before about Havana Syndrome, mostly because I have not been able to come to any strong conclusions about it. In 2016 there was a cluster of strange neurological symptoms among people working at the US Embassy in Havana, Cuba. They would suddenly experience headaches, ringing in the ears, vertigo, blurry vision, nausea, and cognitive symptoms. Some reported loud whistles, buzzing or grinding noise, usually at night while they were in bed. Perhaps most significantly, some people who reported these symptoms claim that there was a specific location sensitivity – the symptoms would stop if they left the room they were in and resume if they returned to that room.

These reports led to what is popularly called “Havana Syndrome,” and what the US government calls “anomalous health incidents” (AHIs). Eventually diplomats in other countries also reported similar AHIs. Havana Syndrome, however, remains a mystery. In trying to understand the phenomenon I see two reasonable narratives or hypotheses that can be invoked to make sense of all the data we have. I don’t think we have enough information to definitively reject either narrative, and each has its advocates.

One narrative is that Havana Syndrome is caused by a weapon, thought to be a directed pulsed electromagnetic or acoustic device, used by our adversaries to disrupt American and Canadian diplomats and military personnel.  The other is that Havana Syndrome is nothing more than preexisting conditions or subjective symptoms caused by stress or perhaps environmental factors. All it would take is a cluster of diplomats with new onset migraines, for example, to create the belief in Havana Syndrome, which then takes on a life of its own.

Both hypotheses are at least plausible. Neither can be rejected based on basic science as impossible, and I would be cautious about rejecting either based on our preexisting biases or which narrative feels more satisfying. For a skeptic, the notion that this is all some kind of mass delusion is a very compelling explanation, and it may be true. If this turns out to be the case it would definitely be satisfying; we could add Havana Syndrome to the list of historical mass delusions, and those of us who lecture on skeptical topics could all add a slide to our PowerPoint presentations detailing this incident.

But I am not ready to do that. We need to go through due diligence. It remains possible that our adversaries have developed a device that can beam directed pulsed EM or acoustic energy over a moderate distance (say, 100 meters) and that they have been using such a device to test its effects, or to achieve some perceived goal of disrupting our diplomatic efforts. For those with a more conspiratorial mindset, this narrative is the most compelling.

While I view this story as a skeptic, I also view it as a neurologist. While all the symptoms being presented as Havana Syndrome are non-specific, meaning they can be caused by a lot of things, that does not mean they are not real. A lot of the symptoms are explainable as migraines, but that does not mean they are not triggered exogenously. In fact, that could make the claims a bit more plausible – the pulsed beam is triggering a migraine-like phenomenon in the brains of the targeted individuals. Not everyone would respond to such triggers, not all responses would be identical, and the symptoms induced can become chronic. Migraine-like phenomena would also not necessarily leave behind any objective pathological findings. We cannot see migraines on an MRI scan of the brain or in blood work or EEGs. Migraines are defined mostly by the subjective symptoms of those who suffer from them (with some subsets having mild findings on exam, such as autonomic symptoms).

The presence of neurological findings has been investigated. A 2019 study found some differences in the brains of people with reported AHIs. This was a small study, the findings were not necessarily what one would predict, and at most this was an exploratory study that generated some hypotheses to be further investigated. Now two recent studies have tried to replicate these results with larger sample sizes and some more detailed analysis – and they found no brain differences between those with AHIs and controls. While this is a blow to the Havana Syndrome hypothesis, it does not kill it entirely. As an accompanying editorial by Dr. Relman, who was involved in investigating subjects with AHIs, points out, we would not necessarily see consistent brain imaging findings, for a variety of reasons. He also criticizes the studies for not limiting their analysis to those with what he considers to be the cardinal feature of true Havana Syndrome – the location-dependent aspect of the symptoms. This could have diluted out any real findings.

There are other ways to resolve the question about the true nature of Havana Syndrome. American intelligence agencies have investigated the question as a national security question, and they report finding no evidence of any program by a foreign power to develop or use such a device. Another approach is to study directed pulsed EM or acoustic device to see if we can replicate the symptoms of Havana Syndrome. This has not been done to date.

And here the controversy sits. So far it seems that the objective evidence favors the “mass delusion” hypothesis. This is similar to “sick building syndrome” and other health incidents where a chance cluster of symptoms leads to widespread reporting which is followed by confirmation bias and the background noise of stress and symptoms focusing on the alleged syndrome. This explanation, at least, cannot be ruled out by current evidence.

But I don’t think we can rule out that something physical is going on that has so far eluded detection. Relman focuses much of his argument on the location-dependent symptoms reported by some individuals. That would be a strange and unique feature that favors an external phenomenon. But I don’t personally know how solid these reports are – whether they were contaminated by suggestive history taking, or are perhaps a coincidence magnified by faulty memory and pattern-seeking behavior.

As we like to say – this question needs more study. I don’t know how open the question remains from an intelligence perspective, or whether they have closed the book on it. From a neurological perspective, a follow-up study addressing the criticisms of the current studies could lay the imaging question to rest. But that will not resolve the underlying question, because there do not necessarily have to be documentable brain changes in a migraine-like syndrome. Finally, there is the technology question. Is a directed pulsed EM or acoustic device workable, and would it reproduce the symptoms of Havana Syndrome? That might be the most definitive piece of evidence (short of the CIA catching a foreign agent red-handed with such a device).

I do think that if Havana Syndrome is real, we should be able to demonstrate it, either by reproducing the technology or by uncovering evidence of a foreign program to use it. The longer we go without definitive evidence, the more likely the mass delusion hypothesis becomes. The neurological approach is most useful in the positive – if we identify clear signs of Havana Syndrome in sufferers, that will go a long way toward supporting its reality. But if these studies remain negative, they do not have the potential to falsify Havana Syndrome.

The post What to Make of Havana Syndrome first appeared on NeuroLogica Blog.

Categories: Skeptic

Is Music Getting Simpler

Fri, 03/29/2024 - 5:27am

I don’t think I know anyone personally who doesn’t have strong opinions about music – which genres they like, and how the quality of music may have changed over time. My own sense is that music as a cultural phenomenon is incredibly complex, no one (in my social group) really understands it, and our opinions are overwhelmed by subjectivity. But I am fascinated by it, and often intrigued by scientific studies that try to quantify our collective cultural experience. And I know there are true experts on this topic, musicologists and even ethnomusicologists, but I haven’t found good resources for science communication in this area (please leave any recommendations in the comments).

In any case, here are some random bits of music culture science that I find interesting. A recent study analyzing 12,000 English-language songs from the last 40 years found that songs have been getting simpler and more repetitive over time. They use fewer words with greater repetition. Further, the structure of the lyrics is getting simpler, and they are more readable and easier to understand. Also, the use of emotional words has increased, and lyrics have become overall more negative and more personal. I have to note this is a single study and there are some concerns about the software used in the analysis, but while this is being investigated the authors state that it is unlikely any glitch will alter their basic findings.
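
To make “fewer words with greater repetition” concrete, here is a minimal sketch of two toy metrics one could compute for a lyric – the fraction of unique words, and how well the text compresses (more repetitive text compresses to a smaller fraction of its original size). This is purely illustrative and is not the method used in the study; the example lyrics are invented.

```python
import zlib

def repetition_metrics(lyrics: str) -> dict:
    """Two toy measures of lyrical repetitiveness (illustrative only)."""
    words = lyrics.lower().split()
    unique_word_ratio = len(set(words)) / len(words)          # lower = more repetitive
    raw = lyrics.encode("utf-8")
    compression_ratio = len(zlib.compress(raw)) / len(raw)    # lower = more repetitive
    return {"unique_word_ratio": round(unique_word_ratio, 2),
            "compression_ratio": round(compression_ratio, 2)}

# Invented examples: a repetitive chorus versus a more varied single line.
print(repetition_metrics("na na na hey hey hey goodbye " * 8))
print(repetition_metrics("The quiet river carries every secret we abandoned on the shore"))
```

Run on a large corpus of lyrics sorted by year, metrics like these would let you plot whether repetitiveness is actually trending up – which is the general kind of analysis the study performed.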

But taken at face value, it’s interesting that these findings generally fit with my subjective experience. This doesn’t necessarily make me more confident in the findings, and I do worry that I am just viewing these results through my confirmation bias filter. Still, it fits not only what I have perceived in music but in culture in general, especially with social media. We should be wary of simplistic explanations, but I wonder if this is mainly due to a general competition for attention. Over time there is a selective pressure for media that is more immediate, more emotional, and easier to consume. The authors also speculate that it may reflect our changing habits of consuming media. There is a greater tendency to listen to music, for example, in the background, while doing other things (perhaps several other things).

I’m really trying to avoid any “these kids today” moments in this piece, but I do have children and have been exposed through them (and other contexts) to their generation. It is common for them to be consuming 3-4 types of media at once. They may listen to music while having a YouTube video running in the background, while playing a video game or watching TV. I wonder if it is just comforting for people raised surrounded by so much digital media. This would tend to support the authors’ hypothesis.

Our digital world has given us access to lots of media and information. But I have to wonder if that means there is a trend over time to consume more media more superficially. When I was younger I would listen to a much narrower range of music – I would buy an album of an artist I liked and listen to the entire album dozens or even hundreds of times. Now, when I listen to music, it’s mostly radio or streaming. Even when I listen to my own playlists, there are thousands of songs from hundreds of artists.

Or there may be other factors at play. Another study, for example, looking at film, found that the average shot length in movies from 1945 was 13 seconds, while today it is about 4 seconds. I like to refer to this phenomenon as “short attention span theater”. But in reality I know this is about more than attention span. Directors and editors have become more skilled at communicating to their audience through cinema, and there is an evolving cinematic language that both filmmaker and audience learn. Part of the decreased shot length is that it is now possible to convey an idea, emotion, or character element much more quickly and efficiently. I also think editing has simply become tighter and more efficient.

I watch a lot of movies, and again, having children meant I revisited many classics with them. It is amazing how well a really good classic film can hold up over time, even decades (the word “timeless” is appropriate). Simultaneously, it is amazing how dated and crusty not-so-classic movies become over time. The difference, I think, is between artistic films and popular flicks. Watch popular movies from any past decade, for example, and you will be able to identify their time period very easily. They are the opposite of timeless – they are embedded in their culture and time in a very basic way. You will likely also note that movies from past decades may tend to drag, even becoming unwatchable at times. I am OK with slow movies (2001 is still a favorite) if they are well done and the long shots have an artistic purpose. But older movies can have needlessly long scenes, actors mugging for the camera for endless seconds, pointless action and filler, and a story that just plods.

The point is that shorter, quicker, and punchier media may not be all about short attention-span consumers. There is also a positive aspect to this – greater efficiency and a shared language. There may also be shifting habits of consumption, with the media just adapting to changing use.

But I still can’t help the subjective feeling that with music something is being lost as well. I am keenly aware of the phenomenon known as “neural nostalgia“. What may be happening is that the media we consume between the ages of 12 and 22 gets ingrained in a brain that is rapidly developing and laying down pathways. This then becomes the standard by which we judge anything we consume for the rest of our lives. So everyone thinks that the music of their youth was the best, and that music has only gotten worse since then. This is a bias that we have to account for.

But neural nostalgia does not mean that music has not objectively changed. It’s just difficult to tease apart real change from subjective perception, and also to avoid the bias of thinking of any change as a worsening (rather than just a difference). More emotional and personal song lyrics are not necessarily a bad thing, or a good thing – they’re just a thing. Simpler lyrics may sound annoyingly repetitive and mindless to boomers, but older lyrics may seem convoluted and difficult to understand, let alone follow, to younger generations.

I do think music can be an interesting window onto culture. It reflects the evolving lives of each generation and how cultural norms and technology are affecting every aspect of their experience.


The post Is Music Getting Simpler first appeared on NeuroLogica Blog.

Categories: Skeptic

The Experience Machine Thought Experiment

Tue, 03/26/2024 - 5:05am

In 1974 Robert Nozick published the book, Anarchy, State, and Utopia, in which he posed the following thought experiment: If you could be plugged into an “experience machine” (what we would likely call today a virtual reality or “Matrix”) that could perfectly replicate real-life experiences, but was 100% fake, would you do it? The question was whether you would do this irreversibly for the rest of your life. What if, in this virtual reality, you could live an amazing life – perfect health and fitness, wealth and resources, and unlimited opportunity for adventure and fun?

Nozick hypothesized that people generally would not elect to do this (as summarized in a recent BBC article). He gave three reasons – we want to actually do certain things, not just have the experience of doing them; we want to be a certain kind of person, which can only happen in reality; and we want meaning and purpose in our lives, which is only possible in reality.

A lot has happened in the last 50 years, and it is interesting to revisit Nozick’s thought experiment. I would say I basically disagree with Nozick, but there is a lot of nuance that needs to be explored. For me there are two critical variables, only one of which I believe was explicitly addressed by Nozick. In his thought experiment, once you go into the experience machine you have no memory of doing so, therefore you would believe the virtual reality to be real. I would not want to do this. So in that sense I agree with him – but he did not give this as a major reason people would reject the choice. I would be much more likely to go into a virtual reality if I retained knowledge of the real world and of the fact that I was in a virtual world.

Second – are there other people in this virtual reality with me, or is every other entity an AI? To me the worst case scenario is that I know I am in a virtual reality and that I am alone with nothing but AIs. That is truly a lonely and pointless existence, and no matter how fun and compelling it would be, I think I would find that realization hard to live with indefinitely. But if I didn’t know that I was living in a virtual reality, then it wouldn’t matter that I was alone, at least not to the self in the virtual reality. But would I condemn myself to such an existence, even knowing I would be blissfully unaware? Then there is what I would consider to be the best case scenario – I know I am living in a virtual reality and there are other actual people in here with me. There is actually another variable – does anything that happens in the virtual reality have the potential to affect the real world? If I write a book, could that book be published in the real world?

Nozick’s thought experiment, I think, was pure in that you would not know you are in a virtual reality, there is no one else in there with you, and you are forever cut off from the real world. In that case I think the ultimate pointlessness of such an existence would be too much. I would likely only consider opting for this at the end of my life, especially if I were ill or disabled to a significant degree. This would be a great option in many cases. But let’s consider other permutations, with 50 years of additional experience.

I also think that at the other end of the spectrum – where people know they are in a virtual reality, there are real people in the virtual world with them, and it is connected to the real world – most people would find living large parts of their life in virtual reality acceptable and enjoyable. This is the “Ready Player One” scenario. We know from experience that people already do some version of this, spending lots of time playing immersive video games or engaging in virtual communities on social media. People find meaning in their virtual lives.

What about the AI variable? I think we have to distinguish general AI from narrow AI. Are the AIs sentient? If so, then I think it doesn’t matter that they are AI. If they are just narrow AI algorithms, the knowledge of that would be bothersome. But could people be fooled by narrow AI? I think the answer there is unequivocally yes. People have a tendency to anthropomorphize, and we generally accept and respond to the illusion of human interaction. People are already falling in love with narrow AIs and virtual characters that don’t actually exist.

What about the “Matrix” scenario? This is something else to consider – is all of humanity in the virtual reality? In Nozick’s thought experiment the Matrix was run by benign and well-meaning overlords who just want us to have an enjoyable existence, without malevolent intent. It is one thing for a subset of humanity to be in the Matrix, with other people still advancing technology, art, science, and philosophy and running civilization. It is quite another for humanity in its entirety to check out of reality and just exist in a Matrix. Civilization would essentially be over. Some futurists speculate that this may be the ultimate fate of many civilizations – turning inward and creating a virtual civilization. The advantages may just be too massive to ignore, and some civilizations may decide that they have already achieved the ultimate end and go down the path of becoming a virtual civilization.

In the end I think Nozick’s solution to his own thought experiment was too simplistic and one-sided. I do agree with him that people need a sense of purpose and meaning. But on the other hand, we now know a lot more about how compelling and appealing virtual reality can be, that people will respond emotionally to a sufficiently compelling illusion, and that people will find fulfillment even in a virtual reality.

What I think this means for the future of humanity, at least in the short run, is something close to the Ready Player One scenario. We will build increasingly sophisticated and compelling virtual realities, and as a result people will spend more and more time there. But this virtual reality will be seamlessly integrated into physical reality. Yes, some people will use it as an escape, but it will also be just another aspect of actual reality.

The post The Experience Machine Thought Experiment first appeared on NeuroLogica Blog.

Categories: Skeptic

Man Gets Pig Kidney Transplant

Mon, 03/25/2024 - 4:55am

On March 16 surgeons transplanted a kidney taken from a pig into a human recipient, Rick Slayman. So far the transplant is a success, but of course the real test will be how well the kidney functions and for how long. This is the first time such a transplant has been done in a living recipient – previous experimental pig kidney transplants were done on brain-dead patients.

This approach to essentially “growing organs” for transplant into humans, in my opinion, has the most potential. There are currently over 100 thousand people on the US transplant waiting list, and many of them will die while waiting. There are not enough organs to go around. If we could somehow manufacture organs, especially ones that have a low risk of immune rejection, that would be a huge medical breakthrough. Currently there are several options.

One is to essentially construct a new organ. Attempts are already underway to 3D print organs from stem cells, which can be taken from the intended recipient. This requires a “scaffold” – connective tissue taken from an organ from which the cells have been stripped. So you still need, for example, a donor heart. You then strip that heart of its cells and 3D print new heart cells onto what’s left to create a new heart. This is tricky technology, and I am not confident it will even work.

Another option is to grow the organs ex-vivo – grow them in a tank of some kind from stem cells taken from the intended recipient. The advantage here is that the organ can potentially be a perfect new organ, entirely human, and with the genetics of the recipient, so no issues with rejection. The main limitation is that it takes time. Considering, however, that people often spend years on the transplant wait list, this could still be an option for some. The problem here is that we don’t currently have the technology to do this.

Similar to this approach is growing a human organ inside an animal – essentially using the animal as the “tank” in which to grow the organ. The host animal can then provide nutrition, oxygen, and a suitable environment. This would require that the animal not reject the organ, which would mean treating it with drugs or engineering animal hosts that are humanized or whose immune systems cannot mount a rejection.

The most futuristic and also ethically complex approach would be to clone an entire person in order to use them as an organ donor. This would not have to be like “The Island” movie in which the cloned future donors were living people kept in a controlled environment, unaware of their ultimate fate. Anencephalic humans (without brains) could be cloned and grown, and just kept as meat bags. There are two big disadvantages here. The first is that the clones would likely need to be kept alive for years before the organs would be mature enough to be used. How would that work? Would a recipient need to wait 10 years before they could get their donor organ, or would there be clone banks where clones were kept in case they were needed in the future? These seem like cost-prohibitive options, except for the super wealthy.

One potential solution would be to genetically engineer universal donors, whose organs could potentially be transplanted into any human recipient. Or perhaps there would need to be a finite number of donors, say for each blood type. When someone needs an organ they get the next one off the rack. Still, this seems like an expensive option.

The other main limitation of the clone approach is the ethical considerations. I doubt keeping banks of living donor clones will be morally acceptable to society, at least not anytime soon.

This leaves us with what I think is by far the best option – genetically engineering animals to be human organ donors. Pigs are good candidates because the size and shape of their organs are a good match. We just need to engineer them so that their tissues display human proteins instead of pig proteins, and we can remove any of the proteins that are most likely to trigger rejection. This also means giving the pigs a humanized immune system. The pigs are therefore both humanized and altered so as not to trigger rejection. Slayman will still need to take anti-rejection drugs, but it is easy to imagine that, as this technology incrementally improves, we will eventually get to a population of pigs optimized for human organ donation. The advantages of this approach over all the others are simply massive, which leads me to predict that it is the one that will win out for the foreseeable future.

One potential ethical objection is to raising domestic animals for the purpose of being slaughtered, which some animal rights activists oppose. But of course, we already do this for food. At least for now, this is ethically acceptable to most people. Slaughtering a pig not just for food but to save the lives of potentially 5-7 people is not a hard sell ethically. This approach could also be a huge money saver for the healthcare system.

I am therefore very happy to see this technology proceed, and I wish the best for Slayman, both for him personally and for the potential of this technology to save many lives.

The post Man Gets Pig Kidney Transplant first appeared on NeuroLogica Blog.

Categories: Skeptic

Using CRISPR To Treat HIV

Thu, 03/21/2024 - 4:35am

CRISPR has been big scientific news since it was introduced in 2012. The science actually goes back to 1987, but the CRISPR/Cas9 system was patented in 2012, and its developers won the Nobel Prize in Chemistry in 2020. The system gives researchers the ability to quickly and cheaply make changes to DNA, by seeking out and matching a desired sequence and then making a cut in the DNA at that location. This can be done to inactivate a specific gene or, using the cell’s own repair machinery, to insert a gene at that location. This is a massive boon to genetics research, but it is also a powerful tool of genetic engineering.
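
As a rough illustration of the “seek and match” step (a toy sketch, not the actual biochemistry), the commonly used SpCas9 enzyme is directed by a roughly 20-nucleotide guide sequence that must sit immediately next to a short PAM motif (NGG). The snippet below just scans one strand of an invented DNA string for stretches that could serve as guide targets; real guide design also considers the opposite strand, mismatches, and off-target effects.

```python
import re

def find_cas9_targets(dna: str, guide_len: int = 20):
    """Return (position, protospacer) pairs that could direct SpCas9.

    Toy model: a candidate site is any stretch of guide_len bases followed
    immediately by an NGG PAM on this strand, with no mismatches allowed.
    """
    dna = dna.upper()
    pattern = r"(?=([ACGT]{%d})[ACGT]GG)" % guide_len   # lookahead allows overlaps
    return [(m.start(), m.group(1)) for m in re.finditer(pattern, dna)]

# Hypothetical sequence for illustration only.
sequence = "TTGACCTGAAGCTTACGGATCGATTACGCTAGCTAGGCTAACCGGTTACGATCGTAGCTAGGCCATG"
for pos, protospacer in find_cas9_targets(sequence):
    print(pos, protospacer)
```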

There is also the potential for CRISPR to be used as a direct therapy in medicine. In 2023 the first regulatory approvals for CRISPR as a treatment for a disease were given, for sickle cell disease and thalassemia. These diseases were targeted for a technical reason – you can take bone marrow out of a patient, use CRISPR to alter the genes for hemoglobin, and then put it back in. What’s really tricky about using CRISPR as a medical treatment is not necessarily the genetic change itself, but getting the CRISPR machinery to the correct cells in the body. This requires a vector, and it is the most challenging part of using CRISPR as a medical intervention. But if you can bring the cells to the CRISPR, that eliminates the problem.

The potential, however, is huge. The obvious targets for CRISPR therapy would be any genetic disease. Mutated genes could be silenced, inactive genes could be activated to compensate for the mutation, or new functional genes could be inserted. CRISPR gene therapy could be a literal cure for genetic diseases.

Recently, however, a team from the University of Amsterdam presented an abstract at a medical conference showing a proof of concept for a different therapeutic use of CRISPR – to treat HIV (the human immunodeficiency virus). HIV is a retrovirus, which means it uses reverse transcriptase to insert its genetic code into the DNA of host cells, in this case immune cells. In this way it hijacks the reproductive machinery of the host cells to make more copies of itself. HIV is insidious because at the same time it also preemptively weakens the host’s immune system. We now have very effective treatments for HIV, and, properly treated, those infected can have a near-normal life expectancy. But HIV is notoriously difficult to cure or eradicate.

Part of the challenge is that HIV can go dormant inside host immune cells. It just sits there, in the host DNA, and can reactivate at any time. Some treatments are geared toward activating these dormant viruses so that they can then be targeted by other drugs, and this helps, but it still does not lead to eradication. What the Amsterdam scientists did was use CRISPR/Cas9 to literally cut HIV out of the host cells. They treated three volunteer patients for 48 weeks, showing that the treatment was safe, with no serious side effects. Of course, this is very preliminary evidence, and the researchers are careful to point out that it is a proof of concept only. We are still years away from an effective treatment. But this preliminary data is encouraging.

It’s still too early to see what the effect of this treatment was. Again, the real challenge is the vector – getting CRISPR to enough immune cells to make a difference. It’s easy to see how this could become an effective treatment, further reducing the HIV load in an infected patient. But the real goal, and what sets this apart from other HIV treatments, is that this is designed to be a cure. But that means it has to completely eliminate HIV from infected cells, or at least thoroughly enough that it is undetectable. The real test will be, can treated patients stop their HIV medications without running the risk of a recurrence? Again, this will take years of research to find out.

This is a nice proof of concept, and a brilliant use of the CRISPR/Cas9 system. I have been following CRISPR news closely for the last decade, and it really has advanced quickly. I think that is primarily because the technology facilitates its own research. The first real appeal of CRISPR was as a tool for genetics research, which includes research on CRISPR itself. It is cheap and fast, meaning that even small labs around the world can engage in CRISPR research, and the pace of genetics research has increased dramatically. This is a great technological positive feedback loop.

We can likely expect continued advances in the CRISPR technology itself along with basic science genetics research, and increasing medical applications. We are still just at the beginning of this technology. It is also perhaps the one recent technology that I feel is not overhyped.

The post Using CRISPR To Treat HIV first appeared on NeuroLogica Blog.

Categories: Skeptic

Energy Demand Increasing

Mon, 03/18/2024 - 5:14am

For the last two decades electricity demand in the US has been fairly flat. While it has been increasing overall, the increase has been very slow. This has been largely attributed to the fact that as the use of electrical devices has increased, the efficiency of those devices has also increased. The introduction of LED bulbs, increased building insulation, and more energy-efficient appliances has largely offset increased demand. However, the most recent reports show that US electricity demand is turning up, and there is real fear that this recent spike is not a short-term anomaly but the beginning of a long-term trend. For example, the projected increase in energy demand by 2028 has nearly doubled from the 2022 estimate to the 2023 estimate – “from 2.6% to 4.7% growth over the next five years.”

First, I have to state my usual skeptical caveat – these are projections, and we have to be wary of projecting short term trends indefinitely into the future. The numbers look like a blip on the graph, and it seems weird to take that blip and extrapolate it out. But these forecasts are not just based on looking at such graphs and then extending the line of current trends. These are based on an industry analysis which includes projects that are already under way. So there is some meat behind these forecasts.

What are the factors driving this current and projected increase in electricity demand? They are all the obvious ones you might think of. First, something that I and other technology watchers predicted: the increase in the use of electric vehicles. In the US there are more than 2.4 million registered electric vehicles. While this is only about 1% of the US fleet, EVs represent about 9% of new car sales, and growing. If we are successful in somewhat rapidly (it will still take 20-30 years) changing our fleet of cars from gasoline to electric or hybrid, that represents a lot of demand on the electricity grid. Some have argued that EV charging is mostly at night (off peak), so this will not necessarily require increased electricity production capacity, but that is only partly true. Many people will still need to charge up on the road, or will charge up at work during the day, for example. It’s hard to avoid the fact that EVs represent a potential massive increase in electricity demand (see the rough estimate below). We need to factor this in when planning future electricity production.
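
To get a feel for the scale, here is a back-of-the-envelope sketch. All of the inputs are round assumptions for illustration (roughly 280 million light vehicles, about 12,000 miles per vehicle per year, about 0.3 kWh per mile, and roughly 4,000 TWh of current annual US electricity use) – they are not figures from the report discussed above.

```python
# Back-of-the-envelope estimate of electricity demand from full fleet
# electrification. All inputs are rough assumptions for illustration only.
vehicles        = 280e6    # approximate US light-duty fleet
miles_per_year  = 12_000   # rough annual mileage per vehicle
kwh_per_mile    = 0.3      # typical EV efficiency
us_demand_twh   = 4_000    # rough current annual US electricity use (TWh)

ev_demand_twh = vehicles * miles_per_year * kwh_per_mile / 1e9   # kWh -> TWh
print(f"EV fleet demand: ~{ev_demand_twh:.0f} TWh/year")
print(f"Relative to current demand: ~{100 * ev_demand_twh / us_demand_twh:.0f}%")
```

Even spread over a couple of decades of fleet turnover, something on the order of a quarter of current US generation would have to come from somewhere.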

Another factor is data centers. The world’s demand for computing is increasing, and there are already plans for many new data centers, which are a lot faster to build than the plants to power them. Recent advances in AI only increase this demand. Again, we may mitigate this somewhat by prioritizing advances that make computers more energy efficient, but this will only be a partial offset. We also have to think about applications, and whether they are worth it. The one that gets the most attention is crypto – by one estimate, Bitcoin mining alone used 121 terawatt-hours of electricity in 2023, the same as the Netherlands (with a population of 17 million people).

Other factors increasing US electricity demand include recent investments in industry through the Inflation Reduction Act, the infrastructure bill, and the CHIPS and Science Act. Part of the goal of these bills was to bring manufacturing back to the US, and to the extent that they are working, this comes with an increased demand for electricity. And fourth, another factor that was predicted and that we are now starting to feel: as the Earth warms, the demand for air conditioning increases.

All of these factors are likely to increase going forward. Also, in general there is a move to electrify as many processes as possible, as an approach to decarbonize our civilization – moving from gas stoves and heating to electric, for example. Even in industry, reducing the carbon footprint of steel making involves using a lot more electricity.

What all this means is that as we plan to decarbonize over the next 25 years, we need to expect that electricity demand will dramatically increase. This is true even in a country like the US, and even if our population remains stable over this time. Worldwide the situation is even worse, as many populations are trying to industrialize and world population is projected to grow (probably peaking at around 10 billion). The problem is that the rate at which we are building renewable, low-carbon energy is just treading water – we are essentially building enough to meet the increase in demand, but not enough to replace existing demand. This means that fossil fuel use worldwide is not dropping; in fact it is still increasing. These new demand projections may mean that we fall even further behind.

Most concerning about these recent reports is that we are currently unable to meet this new projected increase in demand with renewables. Keep in mind, this is still far better than relying entirely on fossil fuel. Wind, solar, hydroelectric, geothermal, and nuclear capacity all replace fossil fuel capacity, and all help to mitigate CO2 release and climate change. But it has not been enough so far to actually reduce fossil fuel demand, and it’s going to get more challenging. The problem we are facing is bottlenecks in building new infrastructure. The primary limiting factor is the grid. It takes too long to build new grid projects. They are slowed by the patchwork of regulations and by bickering among states over who is paying for what. New renewable energy projects are therefore delayed by years.

What needs to happen to fix the situation? First, we need more massive investment in electric grid infrastructure. There is some of this in the bills I mentioned, but not enough. We need perhaps a standalone bill investing billions in new grid projects. But also, this legislation should probably include new Federal authority to approve and enact such projects, to reduce local bottlenecks. We need Federal legislation to essentially enact eminent domain to rush through new grid projects. The report estimates that we will need to triple our existing grid capacity by 2035 to meet growing demand.

This analysis also reinforces the belief by many that wind and solar, while great sources of energy, are not going to get us to our goals. The problem is simply that they require a lot of new grid infrastructure and new connections to the grid. We will simply not be able to build them out in time. Residential solar is probably the best option, because it can use existing connections to the grid and is distributed to where it is used. This is especially true if you plan to switch to an electric vehicle – pair that with some solar panels. But still, this is not going to get us to our goals.

What we need is big centralized power plants that can replace coal, oil, and natural gas plants – and this means nuclear, geothermal, and hydroelectric. The latter two are limited geographically, as there is limited potential to expand them, at least for now. Perhaps they may top out at 15% or so (that is, of existing demand). This leaves nuclear. I know I have been beating this drum for a while, but the most compelling and logical analyses I read all indicate that we will not get to our decarbonization goals without nuclear. Nuclear can generate the amount of electricity we need, can be plugged into existing connections to the grid, and can go anywhere. The main limitation with nuclear is that the regulations make building new plants really slow – but this is fixable with the stroke of a pen. We need to streamline the regulatory process for all zero-carbon power plants – a project warp speed for energy. The bottom line really comes down to this – do you want a coal-fired plant or a nuclear plant? That is the real practical choice we face. To some extent the choice is also between nuclear and natural gas, which is a lot better than coal but is still a fossil fuel, with the pollution and CO2 that come with it.

As the report indicates, many states are keeping coal-fired plants open longer to meet the increased demand. Or they are building natural gas fired plants, because the technology is proven, they are the fastest to build, and they are the most profitable. This has to change. It needs to be feasible to build nuclear plants instead. Some of this is happening, but not nearly enough.

We are dealing with hard numbers here, and the numbers are telling a very consistent and compelling story.

The post Energy Demand Increasing first appeared on NeuroLogica Blog.

Categories: Skeptic

What Is a Grand Conspiracy?

Fri, 03/15/2024 - 5:09am

Ah, the categorization question again. This is an endless, but much needed, endeavor within human intellectual activity. We have the need to categorize things, if for no other reason than we need to communicate with each other about them. Often skeptics, like myself, talk about conspiracy theories or grand conspiracies. We also often define exactly what we mean by such terms, although not always exhaustively or definitively. It is too cumbersome to do so every single time we refer to such conspiracy theories. To some extent there is a cumulative aspect to discussions about such topics, either here or, for example, on my podcast. To some extent I expect regular readers or listeners to remember what has come before.

For blog posts I also tend to rely on links to previous articles for background, and I have little patience for those who cannot be bothered to click those links to answer their questions before making accusations about, for example, not having properly defined a term. I don’t expect people to have memorized my entire catalogue, but click the links that are obviously there to provide further background and explanation. Along those lines, I suspect I will be linking to this very article in all my future articles about conspiracy theories.

What is a grand conspiracy theory? First, a bit more background about categorization itself. There are two concepts I find most useful when thinking about categories – operational definitions and defining characteristics. An operational definition is essentially a list of inclusion and exclusion criteria – a formula that, if you follow it, will determine whether something fits within the category or not. It’s not a vague description or general concept – it is a specific list of criteria that can be followed “operationally”. This comes up a lot in medicine when defining a disease. For example, the operational definition of “essential hypertension” is persistent (three readings or more) systolic blood pressure over 130 or diastolic blood pressure over 80.
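
To make “operational” literal, the hypertension criteria above can be written as a simple checklist-style function. This is a toy sketch using only the thresholds quoted in the text; real clinical guidelines have considerably more nuance.

```python
def meets_essential_hypertension_criteria(readings):
    """Toy operational definition: three or more readings, each with
    systolic > 130 or diastolic > 80 (thresholds from the text above)."""
    elevated = [r for r in readings if r[0] > 130 or r[1] > 80]
    return len(elevated) >= 3

# Hypothetical readings as (systolic, diastolic) pairs.
print(meets_essential_hypertension_criteria([(138, 84), (132, 79), (135, 88)]))  # True
print(meets_essential_hypertension_criteria([(128, 78), (131, 81)]))             # False
```

The point is simply that an operational definition leaves no room for judgment calls – any two people applying it to the same data should reach the same answer.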

Operational definitions often rely upon so-called “defining characteristics” – those features that we feel are essential to the category. For example, how do we define “planet”? Well, astronomers had to agree on what the defining characteristics of “planet” should be, and it was not entirely obvious. The one that created the most controversy was the need to gravitationally clear out one’s orbit – the defining characteristic that excluded Pluto from the list of planets.

There is therefore some subjectivity in categories, because we have to choose the defining characteristics. Also, such characteristics may have fuzzy or non-obvious boundaries. This leads to what philosophers call the “demarcation problem” – there may be a fuzzy border between categories. But, and this is critical, this does not mean the categories themselves don’t exist or are not meaningful.

With all that in mind, how do we operationally define a “grand conspiracy”, and what are its defining characteristics? A grand conspiracy has a particular structure, but I think the key defining characteristic is the conspirators themselves. The conspirators are a secret group with way more power than they should have, or than any group realistically could have. Further, they are operating toward their own nefarious goals and deceiving the public about their existence and their true aims. This shadowy group may operate within a government, or represent a shadow government itself, or even a secret world government. They can control the media and other institutions as necessary to control the public narrative. They are often portrayed as diabolically clever, able to orchestrate elaborate deceptions and false flag operations, down to tiny details.

But of course there would be no conspiracy theory if such a group were entirely successful. So there must also be an “army of light” that has somehow penetrated the veil of the conspirators – they see the conspiracy for what it is and try to expose it. Then there is everyone else, the “sheeple”, who are naive and deceived by the conspiracy.

That is the structure of a grand conspiracy. Functionally and psychologically, the grand conspiracy theory operates to insulate the beliefs of the conspiracy theorist. Any evidence that contradicts the conspiracy theory is dismissed as a “false flag” operation meant to cast doubt on the conspiracy. The utter lack of direct evidence for the conspiracy is attributed to the extensive ability of the conspirators to cover up any and all such evidence. So how, then, do conspiracy theorists even know that the conspiracy exists? They rely on pattern recognition, anomaly hunting, and hyperactive agency detection – not consciously or explicitly, but that is what they do. They look for apparent alignments, or for anything unusual. Then they assume a hidden hand operating behind the scenes and give it all a sinister interpretation.

Here is a good recent example – Joe Rogan recently “blew” his audience’s mind by claiming that the day before 9/11, Donald Rumsfeld said in a press conference that the Pentagon had lost 2.3 trillion dollars. Then, the next day, a plane crashed into the part of the Pentagon that was carrying out the very audit of those missing trillions. Boom – a grand conspiracy is born (fitting, of course, into the existing conspiracy theory that 9/11 was an inside job). The coincidence was the press conference the day before 9/11, which is not much of a coincidence, because you can go anomaly hunting in any government activity in the days before 9/11 for anything that can be interpreted in a sinister way.

In this case, Rumsfeld did not say the Pentagon lost $2.3 trillion. He was criticizing the outdated technology in use by the DOD, saying it was not up to the modern standards used by private corporations. An analysis – released to the public one year earlier – concluded that because of the outdated accounting systems, as much as $2.3 trillion in Pentagon transactions could not be accurately tracked and documented. But of course, Rogan is just laying out a sinister-looking coincidence, not telling a coherent story. What is he actually saying? Was Rumsfeld speaking out of school? Was 9/11 orchestrated in a single day to cover up Rumsfeld’s accidental disclosure? Is Rumsfeld a rebel who was trying to expose the coverup? Would crashing a plane into the Pentagon sufficiently destroy any records of DOD expenditures to hide the fact that $2.3 trillion was stolen? Where is the press on this story? How can anyone make $2.3 trillion disappear? How did the DOD operate with so much money missing from its budget?

Such questions should act as a “reality filter” that quickly marks the story as implausible and even silly. But the grand conspiracy theory reacts to such narrative problems by simply expanding the scope, depth, and power of the conspiracy. So now we have to hypothesize the existence of a group within the government, complicit with many people in the government, that can steal $2.3 trillion from the federal budget, keep it from the public and the media, and orchestrate and carry out elaborate distractions like 9/11 when necessary.

This is why, logically speaking, grand conspiracy theories collapse under their own weight. They must, by necessity, grow in order to remain viable, until you have a vast multi-generational conspiracy spanning multiple institutions with secret power over many aspects of the world. And they can keep it all secret by exerting unbelievable control over the thousands and thousands of individuals who would need to be involved. They can bribe, threaten, and kill anyone who would expose them. Except, of course, for the conspiracy theorists themselves, who apparently can work tirelessly to expose them without fear.

This apparent contradiction has even led to a meta conspiracy theory that all conspiracy theories are in fact false flag operations, meant to discredit conspiracy theories and theorists so that the real conspiracies can operate in the shadows.

Being a “grand” conspiracy is not just about size. As I have laid out, it is about how such conspiracies allegedly operate, and about the intellectual approach of the conspiracy theorists who believe in them. This can fairly easily be distinguished from actual conspiracies, in which more than one person or entity agrees to carry out some secret illegal activity. Actual conspiracies can even become fairly extensive, but the bigger they get, the greater the risk that they will be exposed – which they are, all the time. Of course, we can’t know about the conspiracies that were never exposed, by definition, but certainly a vast number of conspiracies do ultimately get exposed. That makes it hard to believe that a conspiracy orders of magnitude larger could operate for decades without similarly being exposed.

Ultimately the grand conspiracy theory is about the cognitive style and behavior of the conspiracy theorists – the subject of a growing body of psychological research.

The post What Is a Grand Conspiracy? first appeared on NeuroLogica Blog.

Categories: Skeptic

Pentagon Report – No UFOs

Tue, 03/12/2024 - 5:06am

In response to a recent surge in interest in alien phenomena and claims that the US government is hiding what it knows about extraterrestrials, the Pentagon established a committee to investigate the question – the All-Domain Anomaly Resolution Office (AARO). They have recently released volume I of their official report – their conclusion:

“To date, AARO has not discovered any empirical evidence that any sighting of a UAP represented off-world technology or the existence of a classified program that had not been properly reported to Congress.”

They reviewed evidence from 1945 to 2023, including interviews, reports, classified and unclassified archives, spanning all “official USG investigatory efforts” regarding possible alien activity. They found nothing – nada, zip, goose egg, zero. They did not find a single credible report or any physical evidence. They followed up on all the fantastic claims by UFO believers (they now use the term UAP for unidentified anomalous phenomena), including individual sightings, claims of secret US government programs, claims of reverse engineering alien technology or possessing alien biological material.

They found that all eyewitness accounts were either misidentified mundane phenomena (military aircraft, drones, etc.) or simply lacked enough evidence to resolve. Eyewitness accounts of secret government programs were all misunderstood conversations or hearsay, often referring to known and legitimate military or intelligence programs. Their findings are familiar to any experienced skeptic – people misinterpret what they see and hear, fitting their misidentified perceptions into an existing narrative. This is what people do. This is why we need objective evidence to know what is real and what isn’t.

I know – this is a government report saying the government is not hiding evidence of aliens. This is likely to convince no hard-core believer. Anyone using conspiracy arguments to prop up their claims of aliens will simply incorporate this into their conspiracy narrative. Grand conspiracy theories are immune to evidence and logic, because the conspiracy can be used to explain away anything – any lack of evidence, or any disconfirming evidence. It is a magic box in which any narrative can be true without the burden of evidence or even internal consistency.

But the report is devastating to those who claim the government has known for a long time that aliens exist and is in possession of alien tech. It also means that in order to maintain such a belief, you have to enlarge the conspiracy, giving it more power and scope. You have to believe the secret program is hidden even from Congress and the executive branch, and that it is either hidden from the defense and intelligence communities or that they are fully involved at every level. At some point, it’s not really even a government program, but a rogue program somehow existing secretly within the government.

This is how grand conspiracy theories fail. In order to be maintained against negative evidence, they have to be enlarged and deepened. They then quickly collapse under their own weight. Imagine what it would take to fund and maintain such a program over decades, across multiple administrations and generations. How total would their control need to be to keep something this huge secret for so long? There have been no leaks to the mainstream press – nothing like the Pentagon Papers, the Snowden leaks, or even the Discord fiasco. And yet, some rando UFO researchers know all about it. There is no way to make this story make sense.

I also don’t buy the alleged motivation. Why would such an agency keep the existence of aliens secret for so long? I can see keeping it a secret for a short time, until they had a chance to wrap their head around what was going on – but half a century? The notion that the public is “not ready” for the revelation is just silly. We’ve been ready for decades. If they want to keep the tech secret, they can do that without keeping the very existence of aliens secret. Besides, wouldn’t the principle of deterrence mean that we would want our enemies to know – hey, we have reverse-engineered alien technology, so don’t mess with us?

Also, the conspiracy theories often ignore the fact that the US is not the only government in the world. Do all countries that might come into possession of alien artifacts have similarly powerful and long-lived secret organizations within their governments? Some conspiracy theorists solve this contradiction by, again, widening the conspiracy. This leads to “secret world government” territory. Perhaps the lizard aliens are really in charge, and they are trying to keep their own existence secret.

I’ll be interested to see what the effect of the report will be (especially in our social-media, post-truth world). Interest in UFOs waxes and wanes over the years. It seems each generation has a flirtation with the idea, then quickly grows bored, leaving the hard-core believers to keep the flame alive until a new generation comes up. This creates a UFO boom-and-bust cycle. The claims, blurry photos, faked evidence, and breathless eyewitness accounts all seem superficially fascinating. I got sucked into this when I was around 10. I remember thinking that something this huge – aliens visiting the Earth – would come out eventually. All the suggestive evidence was interesting, but I knew deep down none of it was conclusive. At some point we would need the big reveal – unequivocal evidence of alien visitation.

As the years rolled by, the suggestive blurry evidence and wild speculation became less and less interesting. You can only maintain such anticipation for so long. Eventually all it took was for me to hear Carl Sagan say that all the UFO evidence was crap, and the entire house of cards collapsed. Now, 40 years later, nothing has changed. We have mostly the same cast of dubious characters making the same tired claims, citing mostly the same incidents with the same conspiracy theories. The only difference is that their audience is a new generation that hasn’t been through it all before.

Perhaps the boom-bust cycle is faster now because of social media and the relatively short attention span of the public. I suspect the Pentagon report will have the effect of forcing those with a more casual interest off the fence – either you have to admit there is simply no evidence for alien visitation, or you have to go the other way and embrace the grand UFO conspiracy theory. Or perhaps the current generation simply does not care about evidence, logic, and internal consistency and will just believe whatever narrative generates the most clicks on TikTok.

The post Pentagon Report – No UFOs first appeared on NeuroLogica Blog.

Categories: Skeptic

Mach Effect Thrusters Fail

Mon, 03/11/2024 - 5:07am

When thinking about potential future technology, one way to divide possible future tech is into the probable and the speculative. Probable future technology involves extrapolating existing technology into the future, such as imagining what advanced computers might be like. This category also includes technology that we know is possible, we just haven’t mastered it yet, like fusion power. For these technologies the question is more when than if.

Speculative technology, however, may or may not even be possible within the laws of physics. Such technology is usually highly disruptive, seems magical in nature, but would be incredibly useful if it existed. Common technologies in this group include faster than light travel or communication, time travel, zero-point energy, cold fusion, anti-gravity, and propellantless thrust. I tend to think of these as science fiction technologies, not just speculative. The big question for these phenomena is how confident are we that they are impossible within the laws of physics. They would all be awesome if they existed (well, maybe not time travel – that one is tricky), but I am not holding my breath for any of them. If I had to bet, I would say none of these exist.

That last one, propellantless thrust, does not usually get as much attention as the other items on the list. The technology is rarely discussed explicitly in science fiction, but it is often portrayed and just taken for granted. Star Trek’s “impulse drive”, for example, seems to lack any propellant. Any ship that zips into orbit like the Millennium Falcon is likely also using some combination of anti-gravity and propellantless thrust. It certainly doesn’t have large fuel tanks or display any exhaust similar to a modern rocket.

In recent years NASA has tested two speculative technologies that claim to be able to produce thrust without propellant – the EM drive and the Mach Effect thruster (MET). For some reason the EM drive received more media attention (including from me), but the MET was actually the more interesting claim. All existing forms of internal thrust involve throwing something out the back end of the ship. The conservation of momentum means that there will be an equal and opposite reaction, and the ship will be thrust in the opposite direction. This is your basic rocket. We can get more efficient by accelerating the propellant to higher and higher velocity, so that you get maximal thrust from each atom of propellant your ship carries, but there is no escape from the basic physics. Ion drives are perhaps the most efficient thrusters we have, because they accelerate charged particles to extremely high exhaust velocities, but they produce very little thrust. So they are good for moving ships around in space but cannot get a ship off the surface of the Earth.

The problem with propellant is the rocket equation – you need to carry enough fuel to accelerate the fuel, and more fuel for that fuel, and so on. It means that in order to go anywhere interesting very fast you need to carry massive amounts of fuel. The rocket equation also sets serious limits on space travel – in terms of how fast and far we can go, how much we can lift into orbit, and even whether it is possible to escape from a strong gravity well (chemical rockets have a limit of about 1.5 g).
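
For reference, the Tsiolkovsky rocket equation makes the fuel problem concrete: the achievable change in velocity is delta-v = v_e * ln(m0/mf), where v_e is the exhaust velocity and m0/mf is the ratio of fully fueled mass to dry mass. Because the mass ratio sits inside a logarithm, each additional increment of delta-v requires exponentially more propellant. A minimal sketch, with rough illustrative exhaust velocities rather than figures for any particular engine:

```python
import math

def mass_ratio_required(delta_v_m_s: float, exhaust_velocity_m_s: float) -> float:
    """Tsiolkovsky rocket equation solved for the mass ratio m0/mf."""
    return math.exp(delta_v_m_s / exhaust_velocity_m_s)

# Rough, order-of-magnitude exhaust velocities for illustration only.
engines = {"chemical": 4_500, "ion": 30_000}   # m/s

for label, v_e in engines.items():
    for dv in (9_400, 30_000):   # ~low Earth orbit, and a more ambitious mission
        ratio = mass_ratio_required(dv, v_e)
        print(f"{label}: delta-v = {dv} m/s -> m0/mf = {ratio:.1f}")
```

The chemical numbers show why most of a rocket on the launch pad is fuel, and why higher exhaust velocity (as with ion drives) buys so much, even though the thrust itself is tiny.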

If it were possible to create thrust directly from energy without the need for propellant, a so-called propellantless or reactionless drive, that would free us from the rocket equation. This would make space travel much easier, and even make interstellar travel possible. We can accomplish a similar result by using external thrust, for example with a light sail. The thrust can come from a powerful stationary laser that pushes against the light sail of a spacecraft. This may, in fact, be our best bet for long distance space travel. But this approach has limits as well, and having an onboard source of thrust is extremely useful.

The problem with propellantless drives is that they probably violate the laws of physics, specifically the conservation of momentum. Again, the real question is – how confident are we that such a drive is impossible? Saying we don’t know how it could work is not the same as saying we know it can’t work. The EM drive is alleged to work by bouncing microwaves around a specially designed cone so that they push slightly more against one side than the other, generating a small amount of net thrust (yes, this is a simplification, but that’s the basic idea). It was never a very compelling idea, but early tests did show some possible net thrust, although very tiny.

The fact that the thrust was extremely tiny, to me, was very telling. The problem with very small effect sizes is that it’s really easy for them to be errors, or to have extraneous sources. This is a pattern we frequently see with speculative technologies, from cold fusion to free-energy machines. The effect is always super tiny, with the claim that the technology just needs to be “scaled up”. Of course, the scaling up never happens, because the tiny effect was a tiny error. So this is always a huge red flag to me, one that has proven extremely predictive.

And in fact, when NASA tested the EM drive under rigorous conditions, they could not detect any anomalous thrust. With new technology there are two basic types of studies we can do. One is to explore the potential underlying physics or phenomena – how could such technology work? The other is to simply test whether or not the technology works, regardless of how. Ideally both types of evidence will align. There is often debate about which type of evidence is more important, with many proponents arguing that the only thing that matters is whether the technology works. But the problem is that often the evidence is low-grade or ambiguous, and we need the mechanistic research to put it into context.

But I do agree, at the end of the day, if you have sufficiently high level rigorous evidence that the phenomenon either exists or doesn’t exist, that would trump whether or not we currently know the mechanism or the underlying physics. That is what NASA was trying to do – a highly rigorous experiment to simply answer the question – is there anomalous thrust. Their answer was no.

The same is true of the MET. The theory behind the MET is different, and is based on some speculative physics. The idea stems from a question in physics for which we do not currently have a good answer – what determines inertial frames of reference? For example, if you have a bucket of water in deep intergalactic space (sealed at the top to contain the water) and you spin it, centrifugal force will cause the water to climb up the sides of the bucket. But how can we prove physically that the bucket is spinning and the universe is not spinning around it? In other words – what is the frame of reference? We might intuitively feel that it makes more sense that the bucket is spinning, but how do we prove that with physics and math? What theory determines the frame of reference?

One speculative theory is that the inertial frame of reference is determined by the total mass-energy of the universe – that inertia derives from an interaction between an object and the rest of the universe. If this is the case, then perhaps you can change that inertia by pushing against the rest of the universe, without expelling propellant. If this is all true, then the MET could theoretically work. This puts it one step above the EM drive, in that the EM drive likely violates the known laws of physics, while the MET is based on unknown ones.

Well, NASA tested the MET also and – no anomalous thrust. Proponents, of course, could always argue that the experimental setup was not sensitive enough. But at some point, teeny tiny becomes practically indistinguishable from zero.

It seems that we do not have a propellantless drive in our future, which is too bad. But the idea is so compelling that I also doubt we have seen the end of such claims, as with perpetual motion machines and free energy. There are already other claims, such as the quantum drive. There are likely to be more. What I typically say to proponents is this – scale it up first, then come talk to me. Since “scaling up” tends to be the death of all of these claims, that’s a good filter.

The post Mach Effect Thrusters Fail first appeared on NeuroLogica Blog.

Categories: Skeptic

Is the AI Singularity Coming?

Thu, 03/07/2024 - 4:49am

Like it or not, we are living in the age of artificial intelligence (AI). Recent advances in large language models, like ChatGPT, have helped put advanced AI in the hands of the average person, who now has a much better sense of how powerful these AI applications can be (and perhaps also their limitations). Even though they are narrow AI, not sentient in a human way, they can be highly disruptive. We are about to go through the first US presidential election where AI may play a significant role. AI has revolutionized research in many areas, performing months or even years of research in mere days.

Such rapid advances legitimately make one wonder where we will be in 5, 10, or 20 years. Computer scientist Ben Goertzel, who popularized the term AGI (artificial general intelligence), recently stated during a presentation that he believes we will achieve not only AGI but an AGI singularity involving a superintelligent AGI within 3-8 years. He thinks it is likely to happen by 2030, but could happen as early as 2027.

My reaction to such claims, as a non-expert who follows this field closely, is that this seems way too optimistic. But Goertzel is an expert, so perhaps he has some insight into research and development that’s happening in the background that I am not aware of. So I was very interested to see his line of reasoning. Would he hint at research that is on the cusp of something new?

Goertzel laid out three lines of reasoning to support his claim. The first is simply extrapolating from the recent exponential growth of narrow AI. He admits that LLM systems and other narrow AI are not themselves on a path to AGI, but they show the rapid advance of the technology. He aligns himself here with Ray Kurzweil, who apparently has a new book coming out, The Singularity is Nearer. Kurzweil has a reputation for making overly optimistic predictions about advances in computer technology, so that is not surprising.

I find this particular argument not very compelling. Exponential growth in one area of technology at one particular time does not mean that this is a general rule about technology for all time. I know that is explicitly what Kurzweil argues, but I disagree with it. Some technologies hit roadblocks, or experience diminishing returns, or simply peak. Declaring exponential advance to be a general rule did not deliver the hydrogen economy that was supposedly coming 20 years ago, and it has not made commercial airline travel any faster over the last 50 years. Rather, history is pretty clear that we need to do a detailed analysis of individual technologies to see how they are advancing and what their potential is. Even then, this only gives us a roadmap for a certain amount of time, and it is not useful for predicting disruptive technologies or advances.

So that is strike one, in my opinion. Recent rapid advances in narrow AI do not predict, in and of themselves, that AGI is right around the corner. It’s also strike two, actually, because he argues that one line of evidence to support his thesis is Kurzweil’s general rule of exponential advance, and the other is the recent rapid advances in LLM narrow AIs. So what is his third line of evidence?

This one I find the most compelling, because at least it deals with specific developments in the field. Goertzel here is referring to his own work: “OpenCog Hyperon,” as well as associated software systems and a forthcoming AGI programming language, dubbed “MeTTa”. The idea here is that you can create an AGI by stitching together many narrow AI systems. I think this is a viable approach. It’s basically how our brains work. If you had 20 or so narrow AI systems that handled specific parts of cognition and were all able to communicate with each other, so that the output of one algorithm becomes the input of another, then you are getting close to a human brain type of cognition.
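
To illustrate the basic idea – and this is a toy sketch, not OpenCog Hyperon or MeTTa, with module names I made up – stitching narrow systems together just means the output of one module becomes the input of the next:

```python
from typing import Callable, List

# A toy pipeline of "narrow" modules. Each module is just a function that
# reads and extends a shared state dictionary; the module names and outputs
# are invented stand-ins for real perception, language, and planning systems.
Module = Callable[[dict], dict]

def perception(state: dict) -> dict:
    state["objects"] = ["cup", "table"]        # stand-in for a vision system
    return state

def language(state: dict) -> dict:
    state["description"] = f"I see a {state['objects'][0]} on a {state['objects'][1]}."
    return state

def planner(state: dict) -> dict:
    state["plan"] = ["reach for", state["objects"][0]]
    return state

def run_pipeline(modules: List[Module], state: dict) -> dict:
    for module in modules:
        state = module(state)                  # one module's output feeds the next
    return state

print(run_pipeline([perception, language, planner], {}))
```

The plumbing is the easy part; whether anything resembling general intelligence emerges from the modules and their interactions is the open question.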

But saying this approach will achieve AGI in a few years is a huge leap. There is still a lot we don’t know about how such a system would work, and there is much we don’t know about how sentience emerges from the activity of our brains. We don’t know if linking many narrow AI systems together will cause AGI to emerge, or if it will just be a bunch of narrow AIs working in parallel. I am not saying there is something unique about biological cognition, and I do think we can achieve AGI in silicon, but we don’t know all the elements that go into AGI.

If I had to predict, I would say that AGI is likely to happen both slower and faster than we expect. I highly doubt it will happen in 3-8 years. I suspect it is more like 20-30 years. But when it does happen, like with the LLMs, it will probably happen fast and take us by surprise. Goertzel, to his credit, admits he may be wrong. He says we may need a “quantum computer with a million qubits or something.” To me that is a pretty damning admission – that all his extrapolations actually mean very little.

Another aspect of his predictions is what happens after we achieve AGI. He, as many others have also predicted, said that if we give the AGI the ability to write its own code then it could rapidly become superintelligent – like a single entity with the cognitive ability of all human civilization. Theoretically, sure. But having an AGI that powerful is about more than writing better code, right? It’s also limited by the hardware, the availability of training data, and perhaps other variables as well. But yes, such an AGI would be a powerful tool of science and technology that could be turned toward making the AGI itself more advanced.

Will this create a Kurzweil-style “singularity”? Ultimately I think that idea is a bit subjective, and we won’t really know until we get there.

The post Is the AI Singularity Coming? first appeared on NeuroLogica Blog.

Categories: Skeptic

Climate Sensitivity and Confirmation Bias

Mon, 03/04/2024 - 6:02am

I love to follow kerfuffles between different experts and deep thinkers. It’s great for revealing the subtleties of logic, science, and evidence. Recently there has been an interesting online exchange between a physicist and science communicator (Sabine Hossenfelder) and some climate scientists (Zeke Hausfather and Andrew Dessler). The dispute is over equilibrium climate sensitivity (ECS) and the recent “hot model problem”.

First let me review the relevant background. ECS is a measure of how much climate warming will occur as the CO2 concentration in the atmosphere increases – specifically, the temperature rise in degrees Celsius with a doubling of CO2 (from pre-industrial levels). This number is of keen significance to the climate change problem, as it essentially tells us how much and how fast the climate will warm as we continue to pump CO2 into the atmosphere. There are other variables as well, such as other greenhouse gases and multiple feedback mechanisms, making climate models very complex, but the ECS is certainly a very important variable in these models.
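
As a back-of-the-envelope illustration (not a climate model – the ECS value and CO2 levels plugged in below are just examples), the equilibrium warming implied by a given ECS scales with the number of CO2 doublings above the pre-industrial level of roughly 280 ppm:

```python
import math

def equilibrium_warming(ecs_c, co2_ppm, co2_preindustrial_ppm=280.0):
    # Equilibrium warming scales with the number of CO2 doublings relative
    # to the pre-industrial level (~280 ppm). The ECS value passed in is an
    # illustrative assumption, not the output of any particular model.
    doublings = math.log2(co2_ppm / co2_preindustrial_ppm)
    return ecs_c * doublings

# With an ECS of 3 C, a full doubling to 560 ppm implies about 3 C of eventual
# warming, and today's roughly 420 ppm implies about 1.8 C once the climate
# fully equilibrates (the actual warming at any moment lags behind this).
print(equilibrium_warming(3.0, 560.0))
print(equilibrium_warming(3.0, 420.0))
```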

There are multiple lines of evidence for deriving ECS, such as modeling the climate with all variables and seeing what the ECS would have to be in order for the model to match reality – the actual warming we have been experiencing. Our estimate of ECS therefore depends heavily on how good our climate models are. Climate scientists use a statistical method to determine the likely range of climate sensitivity. They take all the studies estimating ECS, creating a range of results, and then determine the 90% confidence range – it is 90% likely, given all the results, that ECS is between 2 and 5 C.
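
Here is a minimal sketch of that kind of pooling, using invented placeholder numbers rather than actual study results (real assessments like the IPCC’s combine multiple lines of evidence far more carefully than this):

```python
import numpy as np

# Hypothetical ECS estimates (in C per CO2 doubling) from a set of studies;
# these numbers are made up for illustration, not taken from the literature.
ecs_estimates = np.array([2.3, 2.8, 3.0, 3.1, 3.4, 3.7, 4.1, 4.5, 5.2, 5.6])

# One simple way to summarize the spread: the 5th-95th percentile band,
# i.e. a 90% range over the pooled estimates.
low, high = np.percentile(ecs_estimates, [5, 95])
print(f"90% range: {low:.1f} C to {high:.1f} C")
```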

Hossenfelder did a recent video discussing the hot model problem. This refers to the fact that some of the recent climate models, ones that are ostensibly improved over older models by incorporating better physics and cloud modeling, produced estimates for ECS outside the 90% confidence interval, with ECSs above 5.0. Hossenfelder expressed grave concern that if these models are closer to the truth on ECS, we are in big trouble. There is likely to be more warming sooner, which means we have even less time than we thought to decarbonize our economy if we want to avoid the worst climate change has in store for us. Some climate scientists responded to her video, and then Hossenfelder responded back (links above). This is where it gets interesting.

To frame my take on this debate a bit, when thinking about any scientific debate we often have to consider two broad levels of issues. One type of issue is generic principles of logic and proper scientific procedure. These generic principles can apply to any scientific field – P-hacking is P-hacking, whether you are a geologist or chiropractor. This is the realm I generally deal with, basic principles of statistics, methodological rigor, and avoiding common pitfalls in how to gather and interpret evidence.

The second relevant level, however, is topic-specific expertise. Here I do my best to understand the relevant science, defer to experts, and essentially try to understand the consensus of expert opinion as best I can. There is often a complex interaction between these two levels. But if researchers are making egregious mistakes on the level of basic logic and statistics, the topic-specific details do not matter very much to that fact.

What I have tried to do over my science communication career is to derive a deep understanding of the logic and methods of good science vs bad science from my own field of expertise, medicine. This allows me to better apply those general principles to other areas. At the same time I have tried to develop expertise in the philosophy of science, and understanding the difference between science and pseudoscience.

In her response video Hossenfelder is partly trying to do the same thing – take generic lessons from her field and apply them to climate science (while acknowledging that she is not a climate scientist). Her main point is that, in the past, physicists had grossly underestimated the uncertainty of certain measurements they were making (such as the half-life of neutrons outside a nucleus). The true value ended up being outside the earlier uncertainty range – how did that happen? Her conclusion was that it was likely confirmation bias – once a value was determined (even if just preliminary), confirmation bias kicks in. You tend to accept later evidence that supports the earlier preliminary evidence while investigating more robustly any results that fall outside this range.

Here is what makes confirmation bias so tricky and often hard to detect. The logic and methods used to question unwanted or unexpected results may be legitimate. But there is often some subjective judgement involved in which methods are best or most appropriate, and there can be bias in how they are applied. It’s like P-hacking – the statistical methods used may be individually reasonable, but if you are applying them after looking at the data, their application will be biased. Hossenfelder correctly, in my opinion, recommends deciding on all research methods before looking at any data. The same recommendation now exists in medicine, with pre-registration of methods before collecting data, and reviewers checking how well this process was followed.
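
A quick toy simulation makes the general point – the data “cuts” below are arbitrary and not anyone’s actual analysis – that if you get to pick the most impressive-looking of several individually reasonable analyses after seeing the data, the false positive rate climbs above the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(42)

def z_score(sample):
    # z-score of the sample mean against a true mean of zero
    return sample.mean() / (sample.std(ddof=1) / np.sqrt(len(sample)))

n_sims, n, hits = 5000, 40, 0
for _ in range(n_sims):
    data = rng.normal(size=n)                    # pure noise, no real effect
    # Several individually "reasonable" ways to slice the same data post hoc
    cuts = [data, data[:n // 2], data[n // 2:], data[np.abs(data) < 2]]
    if max(abs(z_score(c)) for c in cuts) > 1.96:
        hits += 1

# Any single pre-specified cut would give roughly a 5% false positive rate;
# taking the best of several post-hoc cuts pushes it noticeably higher.
print(f"false positive rate: {hits / n_sims:.1%}")
```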

So Hausfather and Dessler make valid points in their response to Hossenfelder, but interestingly this does not negate her point. Their points can be legitimate in and of themselves, but biased in their application. The climate scientists point out (as others have) that the newer hot models do a relatively poor job of predicting historic temperatures and also do a poor job of modeling the most recent glacial maximum. That sounds like a valid point. Some climate scientists have therefore recommended that, when all the climate models are averaged together to produce a probability curve of ECS, models which are better at predicting historic temperatures be weighted more heavily than models that do a poor job. Again, sounds reasonable.
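
To see what such weighting does in practice, here is a minimal sketch with made-up model ECS values and made-up hindcast skill scores (not real climate model results):

```python
import numpy as np

# Hypothetical ECS values from a set of climate models, alongside an invented
# "hindcast skill" score for each (how well the model reproduces historical
# temperatures). Both arrays are purely illustrative.
model_ecs      = np.array([2.7, 3.0, 3.2, 3.6, 4.8, 5.4])
hindcast_skill = np.array([0.9, 0.8, 0.85, 0.7, 0.3, 0.2])

unweighted = model_ecs.mean()
weighted = np.average(model_ecs, weights=hindcast_skill)

# Down-weighting the models with poor hindcast skill (the "hot" ones here)
# pulls the central ECS estimate lower.
print(f"unweighted mean ECS: {unweighted:.2f} C")
print(f"skill-weighted mean ECS: {weighted:.2f} C")
```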

But – this does not negate Hossenfelder’s point. They decided to weight climate models after some of the recent models were creating a problem by running hot. They were “fixing” the “problem” of hot models. Would they have decided to weight models if there weren’t a problem with hot models? Is this just confirmation bias?

None of this means that their fix is wrong, or that the hot models are right. But what it means is that climate scientists should acknowledge exactly what they are doing. This opens the door to controlling for any potential confirmation bias. The way this works (again, a generic scientific principle that could apply to any field) is to look at fresh data. Climate scientists need to agree on a consensus method – which models to look at, how to weight their results – and then do a fresh analysis including new data. Any time you make any change to your methods after looking at the data, you cannot really depend on the results. At best you have created a hypothesis – maybe this new method will give more accurate results – but then you have to confirm that method by applying it to fresh data.

Perhaps climate scientists are doing this (I suspect they will eventually), although Hausfather and Dessler did not explicitly address this in their response.

It’s all a great conversation to have. Every scientific field, no matter how legitimate, could benefit from this kind of scrutiny and questioning. Science is hard, and there are many ways bias can slip in. It’s good for scientists in every field to have a deep and subtle understanding of statistical pitfalls, how to minimize confirmation bias and p-hacking, and the nature of pseudoscience.

The post Climate Sensitivity and Confirmation Bias first appeared on NeuroLogica Blog.

Categories: Skeptic

Virtual Walking

Fri, 03/01/2024 - 5:07am

When I use my virtual reality gear I do practically zero virtual walking – meaning that I don’t have my avatar walk while I am not walking. I generally play standing up, which means I can move around the space in my office mapped by my VR software – so I am physically walking to move in the game. If I need to move beyond the limits of my physical space, I teleport – point to where I want to go and instantly move there. The reason for this is that virtual walking creates severe motion sickness for me, especially if there is even the slightest up and down movement.

But researchers are working on ways to make virtual walking a more compelling, realistic, and less nausea-inducing experience. A team from the Toyohashi University of Technology and the University of Tokyo studied virtual walking and introduced two new variables – they added a shadow to the avatar, and they added vibration sensation to the feet. An avatar is a virtual representation of the user in the virtual space. Most applications allow some level of user control over how the avatar is viewed, typically either first person (you are looking through the avatar’s eyes) or third person (your perspective floats above and behind the avatar). In this study they used only the first person perspective, which makes sense since they were trying to see how realistic an experience they could create.

The shadow was always placed in front of the avatar and moved with the avatar. This may seem like a little thing, but it provides visual feedback connecting the desired movements of the user with the movements of the avatar. As weird as this sounds, this is often all it takes for users to feel not only that they control the avatar but that they are embodied within it. (More on this below.) They also added four pads to the bottom of the feet, two on each foot, on the toe-pad and the heel. These vibrated in coordination with the virtual avatar’s foot strikes. How did these two types of sensory feedback affect user perception?

They found:

“Our findings indicate that the synchronized foot vibrations enhanced telepresence as well as self-motion, walking, and leg-action sensations, while also reducing instances of nausea and disorientation sickness. The avatar’s cast shadow was found to improve telepresence and leg-action sensation, but had no impact on self-motion and walking sensation. These results suggest that observation of the self-body cast shadow does not directly improve walking sensation, but is effective in enhancing telepresence and leg-action sensation, while foot vibrations are effective in improving telepresence and walking experience and reducing instances of cybersickness.”

So the shadow made people feel more like they were in the virtual world (telepresence) and that they were moving their legs, even when they weren’t. But the shadow did not seem to enhance the sensation of walking. Meanwhile the foot vibrations improved the sense of telepresence and leg movement, but also the sense that the user was actually walking. Further (and this is of keen interest to me) the foot vibrations also reduced motion sickness and nausea. Keep in mind, the entire time the user is sitting in a chair.

I do not find the telepresence or sense of movement surprising. It is now well established that this is how the brain usually works to create the sensation that we occupy our bodies and own and control the parts of our bodies. These sensations do not flow automatically from the fact that we are our bodies and do control them. There are specific circuits in the brain that create these sensations, and if those circuits are disrupted people can have out-of-body sensations or even feel disconnected from parts of their body. These circuits depend on sensory feedback.

What is happening is that our brains are comparing various information streams in real time – what movements do we intend to make, visual feedback regarding whether or not our body is moving in the way we intend, combined with physical sensation such as proprioception (feeling where your body is in three dimensional space) and tactile sensation. When everything lines up, we feel as if we occupy and control our bodies. When they don’t line up, weird stuff happens.

The same is true for motion sickness. Our brains compare several streams of information at once – visual information, proprioception, and vestibular information (sensing gravity and acceleration). When these sensory streams do not match up, we feel vertigo (spinning sensation) or motion sickness. Sometimes people have just a vague sense of “dizziness” without overt spinning – they are just off.

In VR there can be a complete mismatch between visual input and vestibular input. My eyes are telling me that I am running over a landscape, while my vestibular system is telling me I am not moving. The main way this is currently addressed is by not having virtual movement, hence the teleporting (which does not count as movement visually). Another potential way to deal with this is to have physical movement match the virtual movement, but this requires a large and expensive rig, which is currently not ready for consumer use. This is the Ready Player One scenario – a harness and an omnidirectional treadmill. This would probably be the best solution, and I suspect you would need only a little bit of movement to significantly reduce motion sickness, as long as it was properly synchronized.

There has also been speculation that perhaps motion sickness can be reduced by leveraging other sensory inputs, such as haptic feedback. There has also been research into using brain stimulation to reduce the effect. A 2023 study looked at “transcranial alternating current stimulation (tACS) at 10 Hz, biophysically modelled to reach the vestibular cortex bilaterally.” I look at this as a proof of concept, not a likely practical solution. But perhaps some lower tech stimulation might be effective.

I am a little surprised, although pleased, that in the current study a little haptic feedback of the feet lowered motion sickness. My hope is that as the virtual experience gets more multi-modal, with several sensory streams all synchronized, the motion sickness problem will be mostly resolved. In the current study, if the provided picture (see above) is any indication, the users were walking through virtual streets. This would not provide a lot of up and down movement, which is the killer. So perhaps haptic feedback might work for situations that would create mild motion sickness, but I doubt it would be enough for me to survive a virtual roller coaster.

All of this bodes well for a Ready Player One future – with mature VR including haptic feedback with some physical motion. I do wonder if the brain hacking (brain stimulation) component will be necessary or practical in the near future.

One last aside – the other solution to the motion sickness problem is AR – augmented reality. With AR you can see the physical world around you through the goggles, which overlay virtual information. This way you are moving through the physical world, which can be skinned to look very different or have virtual objects added. This does not work for every VR application, however, and is limited because you need the physical space to move around in. But applications and games built around what AR can do have the added benefit of no motion sickness.

The post Virtual Walking first appeared on NeuroLogica Blog.

Categories: Skeptic
