A quick reminder, to those in the northwest’s big cities, that I will be giving two talks about my book in the next 48 hours:
Hope to see some of you there! (You can keep track of my speaking events at my events page.)
The idea that a field could be responsible for the masses of particles (specifically the masses of photon-like [“spin-one”] particles) was proposed in several papers in 1964. They included one by Peter Higgs, one by Robert Brout and Francois Englert, and one, slightly later but independent, by Gerald Guralnik, C. Richard Hagen, and Tom Kibble. This general idea was then incorporated into a specific theory of the real world’s particles; this was accomplished in 1967-1968 in two papers, one written by Steven Weinberg and one by Abdus Salam. The bare bones of this “Standard Model of Particle Physics” was finally confirmed experimentally in 2012.
How precisely can mass come from a field? There’s a short answer to this question, invented a couple of decades ago. It’s the kind of answer that serves if time is short and attention spans are limited; it is intended to sound plausible, even though the person delivering the “explanation” knows that it is wrong. In my recent book, I called this type of little lie, a compromise that physicists sometimes have to make between giving no answer and giving a correct but long answer, a “phib” — a physics fib. Phibs are usually harmless, as long as people don’t take them seriously. But the Higgs field’s phib is particularly problematic.
The Higgs Phib

The Higgs phib comes in various forms. Here’s a particularly short one:
There’s this substance, like a soup, that fills the universe; that’s the Higgs field. As objects move through it, the soup slows them down, and that’s how they get mass.
Some variants replace the soup with other thick substances, or even imagine the field as though it were a crowd of people.
How bad is this phib, really? Well, here’s the problem with it. This phib violates several basic laws of physics. These include foundational laws that have had a profound impact on human culture and are the first ones taught in any physics class. It also badly misrepresents what a field is and what it can do. As a result, taking the phib seriously makes it literally impossible to understand the universe, or even daily human experience, in a coherent way. It’s a pedagogical step backwards, not forwards.
What’s Wrong With The Higgs Phib

So here are my seven favorite reasons to put a flashing red warning sign next to any presentation of the Higgs phib.
1. Against The Principle of Relativity

The phib brazenly violates the principle of relativity — both Galileo’s original version and Einstein’s updates to it. That principle, the oldest law of physics that has never been revised, says that if your motion is steady and you are in a closed room, no experiment can tell you your speed, your direction of motion, or even whether you are in motion at all. The phib directly contradicts this principle. It claims that the Higgs “soup” slows objects down as they move through it.
But if that were true, the action of the Higgs field could easily allow you to distinguish steady motion from being stationary, and the principle of relativity would be false.
2. Against Newton’s First Law of Motion

The phib violates Newton’s first law of motion — that an object in motion not acted on by any force will remain in steady motion. If the Higgs field slowed things down, it could only do so, according to this law, by exerting a force.
But Newton, in predicting the motions of the planets, assumed that the only force acting on the planets was that of gravity. If the Higgs field exerted an additional force on the planets simply because they have mass (or because it was giving them mass), Newton’s methods for predicting planetary motions would have failed.
Worse, the slowing from the Higgs field would have acted like friction over billions of years, and would by now have caused the Earth to slow down and spiral into the Sun.
3. Against Newton’s Second Law of Motion

The phib also violates Newton’s second law of motion, by completely misrepresenting what mass is. It makes it seem as though mass makes motion difficult, or at least has something to do with inhibiting motion. But this is wrong.
As Newton’s second law states, mass is something that inhibits changes in motion. It does not inhibit motion, or cause things to slow down, or arise from things being slowed down. Mass is the property that makes it hard both to speed something up and to slow it down. It makes it harder to throw a lead ball compared to a plastic one, and it also makes the lead ball harder to catch bare-handed than a plastic one. It also makes it difficult to change something’s direction.
To say this another way, Newton’s second law F=ma says that to make a change in an object’s motion (an acceleration a) requires a force (F); the larger the object’s mass (m), the larger the required force must be. Notice that it does not have anything to say about an object’s motion (its velocity v).
To suggest that mass has to do with motion, and not with change in motion, is to suggest that Newton’s law should be F=mv — which, in fact, many pre-Newtonian physicists once believed. Let’s not let a phib throw us back to the misguided science of the Middle Ages!
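The contrast between F=ma and F=mv is easy to make concrete numerically. Here is a minimal sketch (the masses, forces, and step sizes are my own toy numbers, invented for illustration): under F=ma, a given force changes velocity in inverse proportion to mass, and zero force leaves the velocity untouched, whatever the mass.

```python
# Toy sketch: integrate Newton's second law F = m*a with simple Euler steps.

def integrate(mass, force, v0=0.0, dt=0.01, steps=100):
    """Return the velocity after a constant force acts for steps*dt seconds."""
    v = v0
    for _ in range(steps):
        a = force / mass        # F = m*a  =>  a = F/m
        v += a * dt             # force changes the velocity...
    return v

# The same 10-unit push applied for 1 second:
light = integrate(mass=1.0, force=10.0)    # the plastic ball
heavy = integrate(mass=10.0, force=10.0)   # the lead ball
# The ten-times-heavier ball ends up moving ten times slower:
# mass resists *changes* in motion.

# With no force at all, the velocity never changes, whatever the mass:
coasting = integrate(mass=5.0, force=0.0, v0=3.0)   # stays at 3.0
```

If mass instead resisted motion itself (the medieval F=mv picture the phib implies), the coasting ball would grind to a halt without any force acting — which is exactly what Newton’s first and second laws rule out.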
4. Not a Universal Mass-Giver

The phib implies that the Higgs field gives mass to all objects with mass, causing all of them to slow down. After all, if there were a universal “soup” found everywhere, then every object would encounter it. If it were true that the Higgs field acted on all objects in the same way — “universally”, similar to gravity, which pulls on all objects — then every object in our world would get its mass from the Higgs field.
But in fact, the Higgs field only generates the masses of the known elementary particles. More complex particles such as protons and neutrons — and therefore the atoms, molecules, humans and planets that contain them — get most of their mass in another way. The phib, therefore, can’t be right about how the Higgs field does its job.
5. Not Like a Substance

Like all fields, the Higgs field is not a substance, in contrast to soup, molasses, or a crowd. It has no density or materiality, as soup would have. Instead, the Higgs field (like any field!) is more like a property of a substance.
As an analogue, consider air pressure (which is itself an example of an ordinary field). Air is a substance; it is made of molecules, and has density and weight. But air’s pressure is not a thing; it is a property of air, not itself a substance. Pressure has no density or weight, and is not made from anything. It just tells you what the molecules of air are doing.
The Higgs field is much more like air pressure than it is like air itself. It simply is not a substance, despite what the phib suggests.
6. Not Filling the Universe

The Higgs field does not “fill” the universe any more than pressure fills the atmosphere. Pressure is found throughout the atmosphere, yes, but it is not what makes the atmosphere full. Air is what constitutes the atmosphere, and is the only thing that can be said, in any sense, to fill it.
While a substance could indeed make the universe more full than it would otherwise be, a field of the universe is not a substance. Like the magnetic field or any other cosmic field, the Higgs field exists everywhere — but the universe would be just as empty (and just as full) if the Higgs field did not exist.
7. Not Merely By Its Presence

Finally, the phib doesn’t mention the thing that makes the Higgs field special, and that actually allows it to affect the masses of particles. This is not merely that it is present everywhere across the universe, but that it is, in a sense, “on.” To give you a sense of what this might mean, consider the wind.
On a day with a steady breeze, we can all feel the wind. But even when the wind is calm, physicists would say that the wind exists, though it is inactive. In the language I’m using here, I would say that the wind is something that can always be measured — it always exists — but it may be either “off” (calm) or “on” (blowing).
In other words, the wind is always present, whether it is calm or steady; it can always be measured.
In rough analogy, the Higgs field, though switched on in our universe, might in principle have been off. A switched-off Higgs field would not give mass to anything. The Higgs field affects the masses of elementary particles in our universe only because, in addition to being present, it is on. (Physicists would say it has a “non-zero average value” or a “non-zero vacuum expectation value.”)
Why is it on? Great question. From the theoretical point of view, it could have been either on or off, and we don’t know why the universe arranged for the former.
Beyond the Higgs Phib

I don’t think we can really view a phib with so many issues as an acceptable pseudo-explanation. It causes more problems and confusions than it resolves.
But I wish it were as easy to replace the Higgs phib as it is to criticize it. No equally short story can do the job. If such a brief tale were easy to imagine, someone would have invented it by now.
Some years ago, I found a way to explain how the Higgs field works that is non-technical and yet correct — one that I would be happy to present to my professional physics colleagues without apology or embarrassment. (In fact, I did just that in my recent talks at the physics departments at Vanderbilt and Irvine.) When I tried delivering it to non-experts in an hour-long talk, however, I found that it just doesn’t fit. But it did fit quite well in a course for non-experts, in which I had several hours to lay out the basics of particle physics before addressing the Higgs field’s role.
That experience motivated me to write a book that contains this explanation. It isn’t brief, and it’s not a light read — the universe is subtle, and I didn’t want to water the explanation down. But it does deliver what it promises. It first carefully explains what “elementary particles” and fields really are [here’s more about fields] and what it means for such a “particle” to have mass. Then it gives the explanation of the Higgs field’s effects — to the extent we understand them. (Readers of the book are welcome to ask me questions about its content; I am collecting Q&A and providing additional resources for readers on this part of the website.)
A somewhat more technical explanation of how the Higgs field works is given elsewhere on this website: check out this series of pages followed by this second series, with additional technical information available in this third series. These pages do not constitute a light read either! But if you are comfortable with first-year university math and physics, you should be able to follow them. Ask questions as need be.
Between the book, the above-mentioned series of webpages, and my answers to your questions, I hope that most readers who want to know more about the Higgs field can find the explanation that best fits their interests and background.
Although I’ve been slowly revising the Higgs FAQ 2.0, this seemed an appropriate time to bring the Higgs FAQ on this website fully into the 2020s. You will find the Higgs FAQ 3.0 here; it explains the basics of the Higgs boson and Higgs field, along with some of the wider context.
For deeper explanations of the Higgs field:
Events: this week I am speaking Tuesday in Berkeley, CA; Wednesday in Seattle, WA (at Town Hall); and Thursday outside of Portland, OR (at the Powell’s bookstore in Cedar Hills). Click here for more details.
The particle physics community is mourning the passing of Peter Higgs, the influential theoretical physicist and 2013 Nobel Prize laureate. Higgs actually wrote very few papers in his career, but he made them count.
It’s widely known that Higgs deeply disapproved of the term “God Particle”. That’s the nickname that has been given to the type of particle (the “Higgs boson”) whose existence he proposed. But what’s not as widely appreciated is why he disliked it, a distaste shared by most other scientists I know.
It’s true that Higgs himself was an atheist. Still, no matter what your views on such subjects, it might bother you that the notion of a “God Particle” emerged neither from science nor from religion, and could easily be viewed as disrespectful to both of them. Instead, it arose out of marketing and advertising in the publishing industry, and it survives due to another industry: the news media.
But there’s something else more profound — something quite sad, really. The nickname puts the emphasis entirely in the wrong place. It largely obscures what Higgs (and his colleagues/competitors) actually accomplished, and why they are famous among scientists.
Let me ask you this. Imagine a type of particle that
This doesn’t seem very God-like to me. What do you think?
Perhaps this does seem spiritual or divine to you, and in that case, by all means call the “Higgs boson” the “God Particle”. But otherwise, you might want to consider alternatives.
For most humans, and even for most professional physicists, the only importance of the Higgs boson is this: it gives us insight into the Higgs field. This field
It may even be capable of bringing about the universe’s destruction, someday in the distant future. So if you’re going to assign some divinity to Higgs’ insights, this is really where it belongs.
In short, what’s truly consequential in Higgs’ work (and that of others who had the same basic idea: Robert Brout and Francois Englert, and Gerald Guralnik, C. Richard Hagen and Tom Kibble) is the Higgs field. Your life depends upon the existence and stability of this field. The discovery in 2012 of the Higgs boson was important because it proved that the Higgs field really exists in nature. Study of this type of particle continues at the Large Hadron Collider, not because we are fascinated by the particle per se, but because measuring its properties is the most effective way for us to learn more about the all-important Higgs field.
Professor Higgs helped reveal one of the universe’s great secrets, and we owe him a great deal. I personally feel that we would honor his legacy, in a way that would have pleased him, through better explanations of what he achieved — ones that clarify how he earned a place in scientists’ Hall of Fame for eternity.
A quick note today, as I am flying to Los Angeles in preparation for
and other events next week.
I hope many of you were able, as I was, to witness the total solar eclipse yesterday. This was the third I’ve seen, and each one is different; the corona, prominences, stars, planets, and sky color all vary greatly, as do the sounds of animals. (I have written about my adventures going to my first one back in 1999; yesterday was a lot easier.)
Finally, of course, the physics world is mourning the loss of Peter Higgs. Back in 1964, Higgs proposed the particle known as the Higgs boson, as a consequence of what we often call the Higgs field. (Note that the field was also proposed, at the same time, by Robert Brout and Francois Englert.) Much is being written about Higgs today, and I’ll leave that to the professional journalists. But if you want to know what Higgs actually did (rather than the pseudo-descriptions that you’ll find in the press) then you have come to the right place. More on that later in the week.
It’s always fun and interesting when a measurement of an important quantity shows a hint of something unexpected. If yesterday’s results from DESI (the Dark Energy Spectroscopic Instrument) were to hold up to scrutiny, it would be very big news. We may well find out within a year or two, when DESI’s data set triples in size.
The phrase “Dark Energy” is shorthand for “the energy-density and negative pressure of empty space”. This quantity was found to be non-zero back in 1998. But there’s been insufficient data to determine whether it has been constant over the history of the universe. Yesterday’s results from DESI suggest it might not be constant; perhaps the amount of dark energy has varied over time.
If so, this would invalidate scientists’ simplest viable model for the universe, the benchmark known as the “Standard Model of Cosmology” (not to be confused with the “Standard Model of Particle Physics,” which is something else altogether). In cosmology’s standard model, nicknamed ΛCDM, there is a constant amount of dark energy [Λ], along with a certain amount of relatively slow-moving (i.e. “cold”) dark matter [CDM] (meaning some kind of stuff that gravitates, but doesn’t shine in any way). All this is accompanied by a little bit of ordinary stuff, out of which familiar objects like planets and bloggers are made.
While ΛCDM agrees with most existing data, it’s crude, and may well be too simple. Perhaps it requires a small tweak. Or perhaps it requires a larger adjustment. We have already had a puzzle for several years, called the “Hubble tension”, concerning the so-called Hubble constant, which is a measure of how quickly the universe has been expanding over time. Measurements of the Hubble constant can be made by studying the nearby universe today; others can be made using views of the universe’s more distant past; and the two classes of measurements disagree by a few percent. This disagreement suggests that maybe there’s an important detail missing from the standard picture of the cosmos’s history.
Now, perhaps DESI has seen a sign of something else in ΛCDM breaking down… specifically the idea of a constant Λ. (At the moment, I know of no obvious relation between DESI’s results and the Hubble tension.) But it’s too early to say for sure; in fact, even if DESI’s results hold up over time, it might be that there are multiple interpretations of their results.
An aside, in answer to a common question: Is the whole concept of the Big Bang at risk? I doubt that very much. The discrepancies are only a few percent. They seem enough to potentially challenge important details, but nowhere near enough to undermine the whole story.
There are a number of murky elements here; in particular, DESI’s data is still limited enough that one’s interpretation of their results depends on one’s assumptions. (There’s also a story here involving the masses of neutrinos, which also depends upon one’s assumptions.) I don’t understand all the issues yet, but I’ll try to wrap my head around them soon, and report back to you. It happens that I’m in the middle of some travel for talks about my book (in CA, WA and OR; you can check my event page) so it may take me a little while, I’m afraid. In the meantime, you might want to read about the Baryon Acoustic Oscillations [BAO] that lie at the heart of DESI’s measurements.
[Postscript: I guess Nature liked my title and decided to shake up New Jersey today… as felt today even in Massachusetts.]
A reminder: tonight (April 3) at 6pm I’ll be giving a public lecture about my book, along with a Q&A in conversation with Greg Kestin, at Harvard University’s Science Center. It’s free, though they request an RSVP. More details are here. Please spread the word! (Next event in Pasadena, CA on April 10th.)
I hope that a number of you will be able to see the total solar eclipse next Monday, April 8th. I have written about my adventures taking in a similar eclipse in 1999, an event which had a profound impact on me. Perhaps my experience might give you some things to think about that go well beyond the mere scientific.
Meanwhile, for those who can only see a partial solar eclipse that day, there’s still something really cool (and very poorly appreciated!) that you can do that cannot be done on an ordinary day! Namely, you can easily estimate the size of the Moon, and then quickly go on to estimate how far away it is. This is daylight astronomy at its best!
Side note for those in the Boston area: I’m speaking about my new book at Harvard on Wednesday April 3rd.
I’m beginning a period of travel and public speaking, so new posts may be a bit limited for a time. (Meanwhile, explore this site’s other offerings!) Tomorrow, Thursday March 28th, I’ll be in Nashville, at Vanderbilt University’s department of physics and astronomy, giving a talk (at 4 pm) about the subjects covered in my recent book. Next, on Wednesday April 3rd (6 pm) I’ll be in Cambridge, Massachusetts, giving a public talk about the book at the Harvard Science Center, as organized by the Harvard Bookstore (the wonderful independent book store located right in Harvard Square.) [Free, but RSVP.]
Then I’ll be on the west coast for a couple of weeks; if you live out there, check out this site’s upcoming-events page. And if you can’t attend any of these events, you can always listen to the recent podcasts that I’ve been on, with Sean Carroll (here) and with Daniel Whiteson (part 1 and part 2.)
More physics coming soon!
I recently pointed out that there are unfamiliar types of standing waves that violate the rules of the standing waves that we most often encounter in life (typically through musical instruments, or when playing with ropes and Slinkys) and in school (typically in a first-year physics class.) I’ve given you some animations here and here, along with some verbal explanation, that show how the two types of standing waves behave.
Today I’ll show you what lies “under the hood” — and how you yourself could make these unfamiliar standing waves with a perfectly ordinary physical system. (Another example, along with the relevance of this whole subject to the Higgs field, is discussed in chapter 20 of the book.)
Strings, Balls and Springs

It’s a famous fact that an ordinary string bears a close resemblance to a set of balls connected by springs — their waves are the same, as long as the wave’s shape varies slowly compared to the distance between the balls.
Figure 1: (Top) A length of string. (Bottom) A set of balls connected by springs. The vertical waves of the ball-spring system are similar to the vertical waves on the string; see Figure 2.

The string remains continuous, rather than fragmenting into pieces, because of its internal atomic forces. Similarly, in the ball-spring system, continuity is assured by the springs, which prevent neighboring balls from moving too far apart.
Both systems have familiar standing waves like those on a guitar string, but only if their ends are attached and fixed to something. The most familiar standing wave for each of the two systems is displayed below.
Figure 2: The classic standing wave for the string and for the ball-spring system.

A Different Set of Balls and Springs

Figure 3 shows a different system of balls and springs, unlike a guitar string. Here, the two sets of springs have distinct roles to play.
Figure 3: A set of balls connected to each other by horizontal springs and to the ground by vertical springs. The waves of this system are less familiar.

It’s the restoring effect of the vertical springs that gives this system unfamiliar standing waves.
Compare The Waves

These systems can exhibit many types of waves, depending on whether their ends are fixed or allowed to float (“boundary conditions”). We can have some fun with all the different options at another time. But today I just want to convince you of the most important thing: that the first system of balls and springs requires walls for its standing waves, while the second one does not.
I’ll make waves analogous to the ones I made in last week’s post on this subject. In the animations below, horizontal springs are drawn as orange lines, while vertical ones are drawn as black lines.
First, let’s take the system with only horizontal springs, distort it upward only in the middle, and let go. No simple standing wave results; we get two traveling waves moving in opposite directions and reflecting off the walls (shown as red, fixed dots.)
Now let’s take the system that has vertical springs as well. In particular, let’s make the vertical springs strong, so that the restoring effect is powerful. Again, let’s distort the system upward at the center, and let go. Now the restoring force of the vertical springs creates a standing wave. That wave is nowhere near the walls, and doesn’t care that there are walls at all. It gradually spreads out, but maintains its coherence for many vibration cycles.
The stronger the vertical springs compared to the horizontal springs, the faster the vibration will be, and the slower the spreading of the wave — and thus the longer the standing wave will maintain its integrity.
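For readers who want to try this themselves, here is a minimal simulation sketch of the two chains. The discretization, spring constants, and durations are my own choices for illustration, not the code behind the animations above: neighbor-to-neighbor springs supply the continuity effect, and optional springs to the ground supply the restoring effect.

```python
import math

def simulate(n=201, k_h=1.0, k_v=0.0, dt=0.05, steps=600):
    """Evolve a chain of unit-mass balls; the two end balls are held fixed (the walls).

    k_h: strength of the horizontal (neighbor) springs -- the continuity effect.
    k_v: strength of the vertical (ground) springs -- the restoring effect.
    """
    mid = n // 2
    y = [math.exp(-((i - mid) ** 2) / 20.0) for i in range(n)]  # central bump, at rest
    v = [0.0] * n
    for _ in range(steps):
        # Acceleration: neighbor springs pull toward the neighbors' heights,
        # ground springs pull the ball back toward zero.
        a = [k_h * (y[i - 1] - 2 * y[i] + y[i + 1]) - k_v * y[i]
             for i in range(1, n - 1)]
        for i in range(1, n - 1):
            v[i] += a[i - 1] * dt
            y[i] += v[i] * dt
    return y

def fraction_near_center(y, half_width=20):
    """How much of the squared displacement remains near the middle of the chain."""
    mid = len(y) // 2
    total = sum(u * u for u in y)
    near = sum(y[i] * y[i] for i in range(mid - half_width, mid + half_width + 1))
    return near / total

no_restoring = simulate(k_v=0.0)   # bump splits into two traveling waves
restoring = simulate(k_v=4.0)      # bump oscillates in place, far from any wall
```

With k_v = 0 the bump splits and races toward the walls, while with a strong k_v nearly all of the displacement stays near the center, vibrating as a standing wave that never notices the walls. Increasing k_v relative to k_h also raises the vibration frequency and slows the spreading, just as described above.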
The Profound Importance of the Restoring Effect

The key difference, then, between the two systems is the existence of the restoring effect of the vertical springs. More specifically, the two types of springs battle it out, the restoring effect fighting the continuity effect. Whether the former wins or the latter wins is what determines whether the system has long-lasting unfamiliar standing waves that require no walls.
In school and in music, we only encounter systems where the restoring effect is absent, and the continuity effect is dominant. But our very lives depend on the existence of a restoring effect for many of nature’s fields. That effect provides the key difference between photons and electrons (see chapters 17 and 20) — the electromagnetic field, whose ripples are photons, experiences no restoring effect, while the electron field, whose ripples are electrons, is subject to a significant restoring effect.
As described in chapter 20 of the book (which gives other examples of systems with unfamiliar standing waves), this restoring effect is intimately tied to the workings of the Higgs field.
[Note Added: 9:30pm Eastern] Unfortunately this storm has consisted of a very bright spike of high activity and a very quick turnoff. It might restart, but it might not. Data below shows recorded activity in three-hour intervals — and the red or very high orange is where you’d want things to be for mid-latitude auroras.
The current solar storm has so far only had a high but brief spike, and might be over already.

Quick note: a powerful double solar flare from two groups of sunspots occurred on Friday. This in turn produced a significant blast of subatomic particles and magnetic field, called a Coronal Mass Ejection [CME], which headed in the direction of Earth. This CME arrived at Earth earlier than expected — a few hours ago — which means it was probably stronger than expected, too. For those currently in darkness and close enough to the poles, it is probably generating strong auroras, also known as the Northern and Southern Lights.
No one knows how long this storm will continue, but check your skies tonight if you are in Europe especially, and possibly North America as well. The higher your latitude and the earlier your nightfall compared to now, the better your chances.
The ACE satellite, located between the Earth and Sun at a distance from Earth approximately 1% of the Sun-Earth distance, recorded the arrival of the CME a few hours ago as a jump in a number of its readings.

One of the most challenging aspects of writing a book or blog about the universe (as physicists currently understand it) is that both writer and reader must confront the concept of fields. The problem isn’t that fields are intrinsically that complicated. It’s that they are an unfamiliar abstraction — and novel abstractions of any sort are always difficult both for a writer to describe and for a reader to grasp.
What I’ll do today is give an explanation of fields that is complementary to the one that appears in the book’s chapters 13 and 14. The book’s approach is slow, methodical, and detailed, but today’s will be more of an overview, brief and relatively shallow, and presented in a different order. You will likely come away with many unanswered questions, but the book should help with that. And if the book and today’s post combined are still not enough, you can ask a question in the comments below, or on the book question page.
Negotiating the Abstract and the Concrete

To approach an abstract concept, it’s always good to have concrete examples. The first example of a field of the cosmos that comes to mind, one that most people have heard of and encountered, is the magnetic field. Unfortunately, it’s not all that concrete. For most of us, it’s near-magic: we can see and feel that it somehow makes little metal blocks cluster together or stick to refrigerator doors, but the field itself remains remote from human experience, as it can’t be seen or felt directly.
There are fields, however, that are less magic-seeming and are known to everyone. The most obvious, though it often goes unrecognized, is the “wind field” of the atmosphere. Since we all experience it, and since weather maps often depict it, that’s the field I focused on in the book’s chapter 13. I hoped that by using it as an initial example, it would make the concept of “field” more intuitive for many readers.
But I knew that inevitably, no matter what approach I chose, it wouldn’t work for all readers. (My own father, for instance, has had more trouble making sense of that part of the book than any other.) Knowing this would happen, I’ve planned from the beginning to give alternate explanations here, to offer readers multiple roads into this unfamiliar concept.
Ordinary Fields

In general, I find that the fields of the universe — I’ll call them “cosmic fields”, for short — are not the best starting point. That’s because they are mostly unfamiliar, and are intrinsically confusing and obscure even to physicists.
Instead, I’ll start with fields of ordinary materials, like water, air, rock and iron. We will see there is an interesting analogy between the fields of materials and the fields of the cosmos, one which will give us a great deal of useful intuition.
However, this analogy comes with a strongly-worded caution, warning and disclaimer, because the cosmos has properties that no ordinary material could possibly have. (See chapter 14 for a detailed discussion.) For this reason, please be careful not to take the analogy too firmly to heart. It has many merits, but we will definitely have to let some of it go — and perhaps all of it.
Air and its Fields

So let’s start with a familiar material and its properties: the air that makes up the Earth’s atmosphere, and some of the properties of the air that are recorded by weather stations. As I write this, the weather station at Boston’s Logan Airport is reporting on conditions there, stating that it measures
There are similar weather stations scattered about the world that give us information about wind, pressure and humidity at various locations. But we don’t have complete information; obviously we don’t have weather stations at every point in the atmosphere!
Nevertheless, at all times, every point in the atmosphere does in fact have its own wind, pressure, and humidity, even if there’s no weather station there to measure it. Each of these properties of the air is meaningful throughout the atmosphere, varies from place to place, and changes over time.
Now we make our first step into abstraction. We can define the air’s property of pressure, viewed all across the atmosphere, as a field. When we do this, we view the pressure not as something measured at a particular place and time but as if it were measured everywhere in space and time. This makes it into a function of space and time — a function that tells us the pressure at all points in the atmosphere and at all times in Earth’s history. If we define x,y,z to be three coordinates that specify where we are in the three-dimensional atmosphere, and t to be a coordinate that specifies what time it is, then the pressure at that particular place and time can be written as P(x,y,z,t) — a function that takes in a position and a time and outputs the pressure at that position and time.
For instance, consider the point (xB,yB,zB) corresponding to Logan airport, and the time t0 when I was writing this article. According to the weather station whose measurements I reported above, the value of the pressure field at that position and moment, P(xB, yB, zB, t0), was equal to 29.71 inches of mercury (or, equivalently, 1005.8 millibars).
Any one weather station’s report tells us only what the pressure is at a particular location and moment. But if we knew the pressure field perfectly at a moment t — if we had complete knowledge of the function P(x,y,z,t) — we’d know how strong the pressure is everywhere in the atmosphere at that moment.
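To make the abstraction concrete in code: a field is just a function of position and time. Here is a toy sketch in Python; the formula is invented for illustration, not real atmospheric data.

```python
import math

def pressure(x, y, z, t):
    """Toy pressure field P(x, y, z, t), in millibars.

    The formula is made up for illustration: a sea-level baseline,
    a rough exponential drop with altitude z (in km), and a small
    traveling disturbance. It is not real meteorology.
    """
    baseline = 1013.25 * math.exp(-z / 8.5)           # rough barometric falloff
    disturbance = 5.0 * math.sin(0.01 * x - 0.2 * t)  # a passing pressure wave
    return baseline + disturbance

# Knowing the function means knowing the pressure everywhere, at any time:
p_here_now = pressure(120.0, 45.0, 0.0, 0.0)    # at ground level now
p_aloft_later = pressure(120.0, 45.0, 10.0, 3.0)  # 10 km up, a bit later
```

A weather station reports one value of this function; the field itself is the whole function.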
In a similar way, we can define the “wind field” and the “humidity field” (or “water-vapor density field”) to capture what the wind and humidity are doing across the atmosphere’s entire expanse. Each field’s value at a particular location and time tells us what a measurement of the corresponding property would show at that location and time.
Maps and images illustrating three atmospheric fields: (top to bottom) air pressure, average wind patterns, and water vapor (humidity). Credits: NOAA.

These three fields interact with each other, with other fields, and with external effects (such as sunlight) to create weather. Detailed weather forecasting is only possible because scientists have largely understood how these fields behave and how they affect one another, and have expressed their understanding through math equations that have been programmed into weather forecasting computers.
Air as a Medium

Abstracting even further, we may think of the air of the atmosphere as an example of what one would call an ordinary medium — a largely uniform substance that occupies a wide area for an extended period of time. The water of the oceans is another example of an ordinary medium. Others include the rock of the Earth, the plasma that makes up the Sun, the gas of Jupiter’s atmosphere, a large block of iron or copper or lead, the pure neutron material of a neutron star, and so on.
Each medium has a number of properties, just as air does. Its properties that vary from place to place and change predictably with time can be viewed as fields, in the same way that air pressure and wind can be viewed as fields.
And so we reach a highly abstract level: an ordinary field is a property of an ordinary medium, one that can in principle be measured at every location in the medium, that may vary from place to place, and that may change over time.
Let’s look at a few examples to make this more concrete.
This manner of thinking is a commonplace, and a powerful one, for physicists who spend their careers studying ordinary materials, such as metals, superconductors, fluids, and so on. Each type of ordinary medium has ordinary fields that characterize it, and these fields interact with each other in ways that are specific to that medium. In some cases, even if we knew nothing about the medium, knowing all its fields and all their interactions with one another might allow us to guess what the medium is.
Cosmic Fields

We can now turn to the cosmos itself. Over the last two centuries, physicists have found that there are quite a few quantities that can be measured everywhere and at all times, that vary from place to place and from moment to moment, and that affect one another. These quantities have also been called “fields”. Just to be clear, I’ll call them “cosmic fields” to distinguish them from the “ordinary fields” that we have just been discussing.
In many ways, cosmic fields resemble ordinary fields. They act in many ways as though the cosmos were a medium, and as though the fields represent some of the properties of that medium.
Empty Space as a Medium

Einstein himself gave us a good reason to think along these lines. In his approach to gravity, known as general relativity, the empty space that pervades the universe should be viewed as a sort of medium. (That includes the space inside of objects, such as our own bodies.) Much as pressure represents a property of air, Einstein’s gravitational field (which generates all gravitational effects) represents a property of space — the degree to which space is bent. We often call that property the “curvature” or “warping” of space.
The list of cosmic fields is extensive, and includes the electromagnetic field and the Higgs field among others. Should we think of each of these fields as representing an additional property of empty space?
Maybe that’s the right way to think about these other cosmic fields. But we must be wary. We don’t yet have any evidence that this is the right viewpoint.
The Fields of Empty Space?

This brings us to the greatest abstraction of all, the one that physicists live with every day.
This tremendous gap in our understanding might seem to leave us completely at sea. But fortunately, physicists have learned how to use measurement and math to make predictions about how the cosmic fields behave, despite having no understanding of what properties of empty space these fields might represent. Even though we don’t know what the cosmic fields are, we have extensive knowledge and understanding of what they do.
An Analogy Both Useful And Problematic

It may seem comforting, if a bit counterintuitive, to imagine that the universe’s empty space might in fact be a something — a sort of substance, much as air and rock are substances — and that this cosmic substance, like air and rock, has properties that can be viewed as fields. From this perspective, a central goal of modern theoretical physicists who study particles and fields is the following: to figure out what the cosmos is made from and what properties the various fields correspond to.
Imagine that it was your job to start from weather reports that look like this:
and then try to deduce, from a huge number of these reports, what the atmosphere is made from and what properties the fields called “A”, “B” and “C” correspond to. This is akin to what today’s physicists have to do. We have discovered various fields that we can measure and study, and to which we’ve given arbitrary names; and we’d like to infer from their behavior what empty space really is and what its fields actually represent.
This is an interesting way to think about what particle physicists are doing nowadays. But we should be careful not to take it too seriously.
Nevertheless, the notion that ordinary media made from ordinary materials have ordinary fields, and that empty space has cosmic fields that bear some rough resemblance to what we see in ordinary media, is useful. The analogy helps us gain some basic intuition for how fields work and for what they might be, even though we have to remain cautious about its flaws, known and unknown. This manner of thinking was useful to Einstein in the research of his later years (even though it led to a dead end), and it also arises naturally in string theory (which may or may not be a dead end.)
Whether, in the long run, this analogy proves more beneficial than misleading is something that only future research will reveal. But for now, I think it can serve experts and non-experts alike, as long as we keep in mind that it cannot be the full story.
After my post last week about familiar and unfamiliar standing waves — the former famous from musical instruments, the latter almost unknown except to physicists (see Chapter 17 of the book) — I got a number of questions. Quite a few took the form, “Surely you’re joking, Mr. Strassler! Obviously, if you have a standing wave in a box, and you remove the box, it will quickly disintegrate into traveling waves that move in opposite directions! There is no standing wave without a container.”
Well, I’m not joking. These waves really are unfamiliar, to the point that they contradict what some readers may have learned elsewhere about standing waves. Today I’ll show you animations to prove it.
When a Standing Wave Loses Its Box

The animations below show familiar and unfamiliar standing waves inside small boxes (indicated in orange). The boxes are then removed, leaving the waves to expand into larger boxes. What happens next is determined by straightforward math; if you’re interested in the math, see the end of this post.
Though the waves start out with the same shape, they have different vibrational frequencies; the unfamiliar wave vibrates ten times faster. Each wave vibrates in place until the small box is taken away. Then the familiar wave instantly turns into two traveling waves that move in opposite directions at considerable speed, quickly reaching and reflecting off the walls of the new box. Nothing of the original standing wave survives, except that its ghost is recreated for a moment when the two traveling waves intersect.
The unfamiliar wave, however, has other plans. It continues to vibrate at the center of the box for quite a while, maintaining its coherence and only slowly spreading out. As the traveling waves from the familiar standing wave are hitting the walls of the outer box, the unfamiliar wave is still just barely tickling those walls. Only at the very end of the animation is this wave even responding to the presence of the box.
A familiar standing wave vibrates within a small box. When the small box is removed, the wave decomposes into traveling waves that reflect off the walls of the larger box. Animation made using Mathematica. Same as at left, but for an unfamiliar standing wave. For the same shape, it initially has a higher frequency, and it spreads much more slowly when the smaller box is removed. Animation made using Mathematica.

To fully appreciate this effect, imagine if I’d made the ratio between the two waves’ frequencies one thousand instead of ten. Then the unfamiliar wave would have taken a thousand times longer than the familiar wave to completely spread across its box. However, I didn’t think you’d want to watch such boring animations, so I chose a relatively small frequency ratio.
Now let’s put in some actual numbers, to appreciate how impressive this becomes when applied to real particles.
Photons and Electrons in Boxes

Let’s take an empty box (having removed the air inside it) whose sides are a tenth of a meter (about three inches) long. If I put a standing-wave photon (a particle of light) into it, that wave will have a frequency of 3 billion cycles per second. That puts it in the microwave range.
If I then release the photon into a box a full meter across, the photon’s wave will turn into traveling pulses, as my first animation showed. Moving at the speed of light, the pulses will reach the walls of the larger box in about 1.5 billionths of a second (1.5 nanoseconds.) This is what we are taught to expect: without the walls, the standing wave can’t survive.
But if I put a standing-wave electron in a box a tenth of a meter across, it will have a frequency of 800 billion billion cycles per second. That’s not a typo — I really do mean 800 billion billion, which is an enormously faster vibration than that of a microwave photon.
Correspondingly, when the electron is released from its original box to a larger one a meter across, it will simply remain vibrating at the center of the box, in an extreme version of the second animation. The edges of the electron’s wave will expand, but no faster than a few millimeters per second. The amount of time it will take for its vibrating edges to reach out to the edges of the new box will be well over a minute.
From the electron’s perspective, vibrating once every billionth of a trillionth of a second, this spreading takes almost forever. It’s a long time even for a human physicist. Most experiments on freely floating electrons, including those that measure an electron’s rest mass, take much less than a second. For many such measurements, the fact that an unconstrained electron is gradually spreading is of little practical importance.
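For readers who want to check the arithmetic, here is a rough back-of-the-envelope version of the numbers above. The spreading-speed line uses the uncertainty-principle estimate v ≈ ħ/(m·Δx), which is only an order-of-magnitude guide:

```python
# Rough check of the photon and electron figures quoted above.
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
c    = 2.99792458e8      # speed of light (m/s)
m_e  = 9.1093837e-31     # electron rest mass (kg)

box = 0.1       # side of the small box, in meters
big_box = 1.0   # side of the larger box, in meters

# Photon: a standing wave spanning the small box has frequency ~ c / box
photon_freq = c / box                        # about 3e9 Hz: microwaves
photon_crossing = (big_box - box) / 2 / c    # pulses reach the walls in ~1.5 ns

# Electron: vibration rate set by its rest mass, ~ m c^2 / hbar
electron_freq = m_e * c**2 / hbar            # about 8e20 per second

# Spreading speed from the uncertainty principle: v ~ hbar / (m * width)
spread_speed = hbar / (m_e * box)            # about a millimeter per second
spread_time = (big_box - box) / 2 / spread_speed  # hundreds of seconds
```

These reproduce the rough figures in the text: 3 billion cycles per second and 1.5 nanoseconds for the photon, roughly 8×10²⁰ per second for the electron, and a spreading time well over a minute.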
Atoms are Boxes Too

Thus standing waves can exist without walls for quite a while, if they are sufficiently broad to start with. The word broad is important here. From smaller boxes, or from atoms, the spreading is more rapid; an electron liberated from a tiny hydrogen atom can grow to the size of a room in the blink of an eye. The larger the electron’s initial container, the wider the electron’s initial standing wave will be, and the more slowly it will spread.
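This scaling can be made quantitative with a standard result of quantum mechanics: a free Gaussian wavepacket of initial width w0 grows as w(t) = w0·√(1 + (ħt/(m·w0²))²), so its long-time spreading rate is ħ/(m·w0), slower for broader packets. A minimal sketch (the specific widths below are my own illustrative choices):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
m_e  = 9.1093837e-31     # electron rest mass (kg)

def spread_time(w0, w):
    """Time for a free Gaussian electron wavepacket to grow from initial
    width w0 to width w (meters -> seconds), inverted from
    w(t) = w0 * sqrt(1 + (hbar*t / (m*w0**2))**2)."""
    return (m_e * w0 / hbar) * math.sqrt(w**2 - w0**2)

room = 5.0  # a room-sized width, in meters

t_atom = spread_time(5e-11, room)  # from atomic size: a few microseconds
t_box  = spread_time(0.1, room)    # from a 10-cm box: over an hour
```

An electron starting at atomic size reaches room size in microseconds, while one prepared a tenth of a meter wide takes over an hour: the broader the start, the slower the spread.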
This pattern might remind you of the famous and infamous uncertainty principle. And well it should.
For the math behind this, read this article (the fourth of this series); the familiar waves satisfy what I called Class 0 wave equations, while the unfamiliar ones satisfy Class 1 wave equations. If you read to the end of the series, you’ll see the direct connection of these two classes of waves with photons and electrons, and more generally with particles of zero and non-zero rest mass.
A quick note today about developments here at the website. The Reader Resources section of the site is slowly coming into being. These resources will supplement the book Waves in an Impossible Sea, providing answers to questions, opportunities to explore topics more deeply, access to endnotes (convenient for the upcoming audiobook and for readers who hate flipping back and forth between main text and endnotes), and access to figures (also convenient for the audiobook.)
First and foremost, though: readers’ questions!
I’ll be collecting questions and answering the most common in the reader resource materials. Those materials will be organized by book chapter. As an example, the post from last week on standing waves, which focuses on a central ingredient in the book, is already linked from the Chapter 17 section of the Reader Resources.
It’s going to take the better part of a year to fill out this new section of the website. I’ll be posting about it here on the blog as stuff comes available, so that you can check it out when it arrives.
Recently, the first completed search for what is nowadays known as SUEP — a Soft-Unclustered-Energy Pattern, in which large numbers of low-energy particles explode outward from one of the proton-proton collisions at the Large Hadron Collider [LHC] — was made public by physicists working at the CMS experiment. As a theoretical idea, SUEP has its origin in 2006-2008, but it was this paper from 2016 that finally brought the possibility to widespread attention. (However, the name given in that paper was unfortunate; the acronym “SUEP” was invented to replace it.)
How can SUEP arise? If a proton-proton collision produces currently-unknown types of particles that
they can cause that collision to turn into SUEP.
While the familiar strong nuclear force mainly produces large numbers of particles in narrow sprays, known as jets, a new ultra-strong force could produce even larger numbers of particles, with relatively less energy-per-particle, arranged in near-spherical blasts. I gave a somewhat detailed description of SUEP in this post. (In fact, SUEP is a prediction of string theory — though, I hasten to add, one that has nothing to do with whether string theory describes quantum gravity in our universe.)
Below are shown two events, one with SUEP and one without, simulated back in 2007. Can you see the difference? (In these crude images, the darker lines represent higher-energy particles, and energy deposits are drawn in orange. You can see that the picture at right has particles that are lower in energy, more numerous, and more symmetrically distributed than those in the picture at left.)
(Left) A busy non-SUEP event, with many jets of high-energy particles. (Right) A SUEP-like event, with particles that have lower energy, are more numerous, and are more broadly spread around. Simulation by the author in 2007, using a modified form of PYTHIA 6; originally presented here and here.

Now here’s some real data. Below is a typical (though still quite active) proton-proton collision at CMS. The yellow tracks show where the particles went. You can see that not all the tracks are straight (in contrast to my simulated events above). That’s because inside CMS is a magnetic field, which bends the paths of charged particles. The less energy a particle has, the more it curves. In this event, a substantial fraction of the tracks are straight and are clustered into narrow sprays (with orange cones drawn around them to guide the eye). These are the typical jets of mostly-high-energy particles created by the Standard Model’s strong nuclear force.
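The qualitative contrast between a “jetty” event and a SUEP-like event can be illustrated with a toy statistic. This is not the method CMS actually used; it is just an invented measure of how strongly particles cluster along an axis in azimuth, applied to made-up events:

```python
import cmath
import math
import random

random.seed(1)  # fixed seed so the toy events are reproducible

def axial_concentration(phis):
    """Crude isotropy statistic: magnitude of the mean of exp(2i*phi).
    Near 1 when particles cluster along an axis (back-to-back jets),
    near 0 when they are sprayed uniformly in azimuth."""
    total = sum(cmath.exp(2j * phi) for phi in phis)
    return abs(total) / len(phis)

# Toy "jetty" event: two narrow back-to-back sprays of 20 particles each
jet_axis = 0.7
jetty = ([random.gauss(jet_axis, 0.1) for _ in range(20)]
         + [random.gauss(jet_axis + math.pi, 0.1) for _ in range(20)])

# Toy SUEP-like event: 200 particles spread uniformly in azimuth
suep = [random.uniform(0.0, 2.0 * math.pi) for _ in range(200)]

print(axial_concentration(jetty))  # close to 1
print(axial_concentration(suep))   # close to 0
```

Real searches use far more sophisticated variables (track multiplicity, sphericity, and so on), but the underlying idea is the same: SUEP events are many-particle and isotropic, while strong-force events are clustered into jets.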
Now, here’s another real event observed at CMS, a truly amazing proton-proton collision that created an exceptional number of particles. Although there is a chance that it is SUEP, it’s probably just an extraordinary, rare process created by the strong nuclear force. Notice that almost all the tracks curve — these particles each have relatively low energy — and there are hardly any clusters of tracks similar to the ones above.
As was the case for the Higgs boson, a single suggestive picture is not enough. Discovery of SUEP would require that many such SUEP-y proton-proton collisions be observed, so that they can be distinguished, statistically, from known phenomena. (To be fair, there are some types of SUEP where just two or three events would suffice. But that’s a story for another day.)
No Discovery… But Still, Congratulations

Had this search actually found some evidence of SUEP, you would have seen it in news headlines. But it came up empty, as is the case for most scientific quests for new things. Nevertheless, despite a lack of a discovery, congratulations to CMS are due. This was a first-of-its-kind search, employing novel methods. Here’s CMS’s own description of their search.
Meanwhile, the story of SUEP is not over. CMS only looked for certain kinds of SUEP, and there are many more. A variety of hunting strategies will be needed in future, in order to cover all the possibilities.
The Current Status of the LHC Program

More generally, I want to highlight the significance and role of novel search strategies at the LHC experiments. This issue is often underestimated or misunderstood.
At the moment, and for the last few years, the central question facing LHC experimenters and their theoretical-physicist colleagues is this:
Does the Standard Model (the math that describes all the known elementary particles) correctly predict all observed data at the LHC?
In 2012-2016, the discovery and initial examination of the Higgs boson completed the Standard Model. Since that time, nothing outside the Standard Model has been observed at the LHC. But it’s crucial to remember that although finding something proves that it exists, not finding it does not prove it does not exist.
It’s similar to trying to find a set of keys that might be in your house. If you find them right away, your search is over. But if you don’t find them right away, you can’t conclude they’re not in the house. You need to keep looking; maybe you haven’t looked in the right place yet. You have to search as carefully as you can, covering all locations and considering all possibilities, before you conclude that they simply must be elsewhere.
A single failure of the Standard Model would bring its reign to an end, and answer the central question in the negative. But to answer it with a reasonably confident “Yes” will require a thorough plan of searches for a wide variety of possible phenomena. If our search strategy leaves loopholes, we simply won’t be able to answer the central question with either a “No” or a “Yes”! And then we’ll be left in limbo.
Importantly, making the LHC search programs more thorough isn’t expensive. In fact, it’s more expensive not to make them thorough.
Each experiment’s data is collected as a giant pile, and each search for new phenomena involves examining one of the already-assembled giant piles through a particular lens. If we don’t hunt for everything reasonable in those data sets, then we’re partly wasting the time, effort and money that we spent to obtain them!
And there is no reason for undue pessimism, no reason to assume that none of these searches will find anything. Even a dramatic new phenomenon like SUEP can lie hidden in a vast data set, undetected until the moment that someone searches the data in just the right way.
That’s why this first SUEP search is important: it’s a novel way of exploring the LHC’s data. It pushes the boundary of what we know in a previously unexplored direction, and sets a new frontier for future investigation.
On April 8th, 2024, a small strip of North America will witness a total solar eclipse. Total solar eclipses are amazing, life-changing experiences; I hope you have a chance to experience one, as I did.
Everyone else from Central America to northern Canada will see a partial solar eclipse. What good is a partial solar eclipse?
Astonishingly, the best thing about a partial solar eclipse seems to have essentially disappeared from public knowledge. We’ve got three weeks to change that, and I hope you’ll help me — especially the science journalists and science teachers among you.
The best thing about a partial solar eclipse is that you can measure the Moon’s true diameter, armed with only a map, a straight-edge, and a leafy tree or a card with a pinhole in it. No telescope, no eclipse glasses, no equipment that costs real money or is hard to obtain. You’ll get an answer that’s good within 10-20 percent, with no effort at all. And in the process you can teach kids about shadows, about simple geometry (no trigonometry needed), about the Earth, Moon, and Sun, and about how scientists figure things out.
About a week later, you can measure the Moon’s distance from Earth, too. All you need for that is a coin, a ruler, and drawings of similar triangles.
In this post, I’ll explain
Then the diameter of the Moon D is roughly equal to L divided by F:

D ≈ L / F
That’s all there is to it!
This estimate should be accurate to within 10-20% (as long as you’re not too close to the Earth’s poles or too close to the totality strip; we can talk about why later.) Any child over ten can carry this out with a little help.
An example of how this works: from Boston, here’s how I can measure L, using Google Maps and an eclipse map showing the location of the totality strip:
From Boston to the totality strip is about 145 miles.

Meanwhile timeanddate.com predicts that in Boston at mid-eclipse the Sun will appear as shown below: a fraction of around 6 to 8% of the Sun will be unblocked. (We’re so close to the totality strip that measuring F is quite difficult to do accurately, so our estimate of the Moon’s size will be more uncertain than for people further away.)
The fraction of the Sun’s diameter that will be visible at mid-eclipse in Boston is less than 10%; the full diameter of the Sun is shown in orange. Prediction from timeanddate.com.

That will give us (if we take F=7% as a best guess) an estimate of the Moon’s size

D ≈ L / F ≈ (145 miles) / 0.07 ≈ 2100 miles.
(If we let F range from 6% to 8%, this estimate really ranges between 1900 and 2200 miles.) The Moon’s true diameter is about 2160 miles, so this is wonderfully successful, given its ease.
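In code, the Boston estimate is one line of arithmetic:

```python
# Estimate of the Moon's diameter from the Boston numbers quoted above.
L = 145.0   # miles from Boston to the edge of the totality strip
F = 0.07    # best-guess unblocked fraction of the Sun's diameter

D = L / F   # estimated diameter of the Moon, in miles
print(round(D))  # prints 2071; the Moon's true diameter is about 2160 miles
```

Not bad for a map, a straight-edge, and a pinhole.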
Is this too good to be true? Let’s see.
Some Basic Evidence

Here’s a map from NASA (via Wikipedia) showing where the eclipse is total and partial. Let’s call the region of totality the “totality strip”, and the regions where the eclipse is partial the “northern partiality zone” and the “southern partiality zone.”
Figure 1: As with all total eclipses, the one on April 8th will have a narrow totality strip (dark blue) where the eclipse is total, surrounded by two large “partiality zones” (light blue) in which the eclipse will be partial. Eclipse Predictions by Fred Espenak, NASA’s GSFC

Now here’s the same map with the Moon, correctly sized, superposed on the two partiality zones. You see that indeed the width of each partiality zone is roughly the size of the Moon — slightly larger, but much less than twice as large. [Because the eclipse is north of the equator, the northern partiality zone is closer to the north pole, and the Earth’s curvature causes it to be somewhat larger than the southern partiality zone. This is discussed in an aside later in this post.]
Figure 2: Each partiality strip is a little wider than the diameter of the Moon, for reasons explained later.

Let’s say you’re in one of the partiality zones, and let’s call its width W. If you’re right near the outer edge of that zone, then L is approximately W. That’s also where the eclipse barely happens — the Moon just clips the edge of the Sun — and so, since the Sun’s diameter is hardly blocked at all, F is close to 1. Your estimate will then be

D ≈ L / F ≈ W / 1 = W,
which is in accord with Figure 2.
Suppose instead that you are halfway between the totality strip and the outer edge of your zone (measured on a line perpendicular to the totality strip.) Then L = W/2. But also the Moon will block half the Sun’s diameter, leaving half of it unblocked: F = 1/2. That means that you will estimate, again,

D ≈ L / F ≈ (W/2) / (1/2) = W,
More generally, if your distance from the totality strip is L, and L is a fraction P of the width W of the partiality zone, i.e.

L = P × W,
then the fraction F of the Sun’s diameter that will be unblocked at mid-eclipse will also be approximately P. Therefore, no matter where you are in the zone,

D ≈ L / F ≈ (P × W) / P = W,
so your estimate of the Moon’s diameter will always come out more or less right.
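The cancellation is easy to verify for any position in the zone: in the idealized setup, L/F always returns W, whatever the fraction P. A quick check (taking W equal to the Moon's diameter in miles, purely for illustration):

```python
# Check that the estimate D = L/F comes out the same anywhere in the zone.
W = 2160.0   # width of the partiality zone, set to the Moon's diameter
             # in miles for this idealized check

for P in (0.1, 0.25, 0.5, 0.75, 0.99):
    L = P * W   # distance from the totality strip
    F = P       # unblocked fraction of the Sun's diameter
    assert abs(L / F - W) < 1e-9   # L/F = W, no matter where you stand
```

The position P cancels out, which is exactly why the method works anywhere in the partiality zone.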
(That said, if you’re very close to the totality strip, F will be hard to measure precisely, so your estimate may be very uncertain; and if you’re close to the poles, W will be significantly larger than D, so your estimate will be poor. Fortunately, that won’t be true for most of us.)
What’s behind this clever trick? Here’s the reasoning.
Why Does This Work?

What’s great about this trick is that it’s not hard to understand, though it does take a few steps. [I’m not sure I yet have the best pedagogical strategy for laying out those steps; suggestions welcome.]
Why the Partiality Zones are as Wide as the Moon

First, we need to understand why the width of each partiality zone is roughly the same as the width of the Moon — why D and W are almost the same, as long as we are far from the Earth’s poles. It all has to do with shadows — moon shadows, of both types.
You may have noticed (kids often do) that there are often two types of shadows visible when you’re at home and lit by a single central light. If a thumbtack lit by a small light bulb is close to a wall, it casts a shadow that is crisp and about the same size as the tack. But as you move the tack away from the wall, the shadow becomes fuzzier. If you look closely, you’ll see that the inner dark part of the shadow (the “umbra”) is shrinking, while there’s an outer part (the “penumbra”), quite hard to see, that is growing.
Eventually, when the tack is far enough away, the inner dark part will become almost a dot. At that point, the outer dim shadow — you may only barely see it — has a diameter about twice the diameter of the tack. See Figure 3.
Figure 3: How a shadow of a tack changes as it moves away from the wall. (Left) The shadow is crisp and the same size as the tack. (Center) The dark umbra is narrower than the tack, while the dim penumbra is wider than the tack. (Right) As the dark umbra becomes very narrow, the penumbra becomes twice the width of the tack (as indicated by the two tack-sized disks). Credit: the author.

For the solar eclipse, it’s the same idea, except that instead of bulb, tack and wall, we have Sun, Moon and Earth.
There’s simple geometry behind this shadow-play; I’ve drawn it vertically in the figure below, so that you can see it no matter how narrow your screen. Let’s assume that the Sun is much further away from Earth than the Moon is (a fact that you can also verify during daytime, either a week after the eclipse or a week before.) I’ve drawn four lines, two of them red, two of them black; watch where they go.
Figure 4: The Sun is totally eclipsed in the little gap between the black lines; it is partially eclipsed between the red and black lines. Not to scale.

Inside the black lines, the Moon totally blocks the Sun; both the left edge and right edge of the Sun in the figure are blocked. Inside the red lines, the Moon partially blocks the Sun. And so, at the moment shown, the little space between the two black lines is where the eclipse is total; that’s a location within the totality strip. Meanwhile, the distance between each red line and the nearest black line is the width W of one of the partiality zones.
For maximum simplicity, I’ve drawn this where the Sun, Moon and Earth are perfectly lined up, so that the total eclipse is occurring where the Earth’s surface is nearest the Sun and Moon. That makes both partiality zones the same size. In the aside below, I’ll show you what happens if this isn’t the case. But let’s not get distracted by that yet.
The important thing is that because the Sun is so much further than the Moon, the red and black lines from the left edge of the Sun are almost parallel. Where they meet the Moon, they are separated by the Moon’s diameter — that is, by the distance D. But the distance between two parallel lines is constant, so the distance between two nearly parallel lines changes very slowly. This means they are still a distance D apart when they reach the Earth.
On top of this, the two black lines almost meet; the totality strip is very narrow. Taken together, these facts imply that W, the width of the region on Earth’s surface between a red line and the closest black line, is roughly the same size as D!
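You can check this near-equality numerically with the actual (average) sizes and distances of the Sun and Moon, ignoring the Earth's curvature. From the crossing tangent lines in Figure 4, the width of each partiality zone on a flat screen at the Earth's distance works out to 2·R_sun·(Moon-to-Earth distance)/(Sun-to-Moon distance):

```python
# Penumbra geometry with real average numbers (flat-screen approximation;
# actual eclipse-day distances vary by a few percent).
R_sun  = 696_000.0               # km, Sun's radius
D_moon = 3_474.0                 # km, Moon's diameter
d_me   = 384_400.0               # km, average Moon-to-Earth distance
d_sm   = 149_600_000.0 - d_me    # km, approximate Sun-to-Moon distance

# Penumbra edge: r_p = R_moon + (R_sun + R_moon) * x / d
# Umbra edge:    r_u = R_moon - (R_sun - R_moon) * x / d
# Their difference, the partiality-zone width, is 2 * R_sun * x / d:
W = 2 * R_sun * d_me / d_sm      # about 3590 km

print(W / D_moon)  # about 1.03: only slightly wider than the Moon
```

So with real numbers, the partiality zone comes out just a few percent wider than the Moon's diameter, exactly as the nearly-parallel-lines argument predicts.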
As noted in an aside below, the Earth’s curvature tends to make the partiality zones a bit larger than the Moon’s diameter, while for an eclipse nearing a pole of the Earth, the partiality zone closest to the pole will be larger than the other partiality zone. But these are details; they don’t change the basic story.
Why W and D aren’t quite the same

As noted, W isn’t quite D. Even with the eclipse centered on the Earth as in Figure 4, W is larger than D because the Earth’s surface is a sphere. (This is somewhat compensated for by measuring L to the edge of the totality zone rather than its center or opposite edge.)
The width W of the partiality zone is wider than the diameter D of the Moon because of the Earth’s shape.

Second, if the eclipse is far from the equator, the partiality zone nearer to a pole of the Earth will be larger than the other.
If the location of totality is offset from the line connecting Moon and Sun, then the partiality zone on the side of the offset will be larger than the other, and larger than D, due to the Earth’s curvature.

In short, this is not a method designed to get a precise or accurate measurement of the Moon’s diameter. But it’s perfectly fine if one’s aim is merely to get a rough idea of how nature works, which is often more than enough for scientists, as well as for everyone else. There is an opportunity here to talk to students about the nature of approximations, when and why it’s okay to use them, and how to improve upon them.
Why the Unblocked Fraction of the Sun’s Diameter Plays the Key Role

Second, we have to understand why the unblocked fraction F of the Sun’s diameter is roughly the same as what we called P (the distance L to the totality strip divided by the width of the partiality zone W.) This is illustrated in Figure 5, where again I’ve drawn four lines on each image, but now all four begin from the location at which we are observing the eclipse, and they indicate where the Moon and Sun appear to us.
Figure 5: What one sees in the partiality zone between the red and black dots; the orange and grey lines show the location and width on the sky of the Sun and Moon. (Far Left) In the totality zone, F=P=0. (Far Right) Near the edge of the partiality zone, F and P are close to 1. (Near Left and Right) Throughout the partiality zone, F=P, as shown for P=1/6 and 1/2. Not to scale.

In each image in Figure 5, the totality strip is indicated by the black dot, and the edge of a partiality zone by the red dot. The Sun’s disk in the sky is spanned by the green lines, and its apparent width in the sky is shown by the orange line; the Moon’s disk is spanned by the blue lines, and its apparent width is shown by the grey line.
In the far left image, the observer is in the totality strip, so P=0; the Moon blocks the Sun completely, so the grey line aligns with the orange line and F=0. At far right, the observer is at the outer edge of the partiality zone, so P is almost 1; the grey line almost misses the orange line completely, so almost all the Sun’s diameter remains unblocked and F is almost 1. In between are shown intermediate situations where F and P are both 1/6 or both 1/2. The closer the observer gets to the totality strip, making P smaller, the less of the Sun is unblocked, making F smaller by the same amount.
Bonus: Measure the Distance to the Moon in Daylight

I’ve explained how this works in this older post. For today, here’s a quick summary.
Within a week after the eclipse, the Moon will reach first quarter, which means that the Moon will rise around noon. In the afternoon, then, you can see it in the East. Then you can have one person hold a penny, and another person move until the penny perfectly eclipses the Moon; a third person can measure the distance s between the person’s eyes and the penny. Then, measuring the diameter d of the penny, we can use similar triangles to convince ourselves that the distance S to the Moon, divided by the diameter of the Moon D, is the same as the distance from observer to penny divided by the diameter of the penny:
S / D = s / d

and so

S = D × (s / d).
Since we measured D during the eclipse, we now know S also.
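As a quick numerical sketch of the similar-triangles formula (with made-up illustrative numbers, not real measurements):

```python
# Similar triangles: S / D = s / d, hence S = D * (s / d).
# Illustrative numbers only: a 1.9 cm penny held 2.1 m from the eye just
# covers the Moon, whose diameter D we take as roughly 3500 km.

def distance_to_moon(D_km, s_m, d_m):
    """Distance S to the Moon, from S = D * s / d."""
    return D_km * (s_m / d_m)

S = distance_to_moon(D_km=3500, s_m=2.1, d_m=0.019)
print(f"S is about {S:,.0f} km")  # about 387,000 km
```

With numbers in this ballpark, the method lands within a few percent of the Moon’s true average distance of about 384,000 km.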
By the way, in another old post, I showed that we can also confirm that our original assumption, that the Sun is much, much further than the Moon, is correct. At first quarter — i.e. one quarter of the way through the Moon’s monthly cycle — one can verify two things at once, by eye: the Moon appears exactly half lit, and the Moon and Sun are nearly 90 degrees apart in the sky.
These two things can only both be true if the Sun is much further than the Moon.
Spread the Word

So you see, it’s not only easy to measure the Moon’s size during an eclipse, it’s relatively easy to explain how and why the method works. To do so requires a range of logical reasoning tools, some drawings of lines and triangles, and a little experimentation with shadows, but no actual math. (I’m sure it can be done better than I did it here.) On top of that, the measurement can be carried out in daylight, while most children are in school. I think it’s a great opportunity for science education — a chance for a meaningful fraction of the 600 million people in North America to experience scientific reasoning for themselves, and to observe how it leads to consistent, reliable knowledge.
Already I’ve had a few people ask me for clarification of a key point in the book, having to do with a certain type of unusual “standing wave.” It’s so central to the story that I’ve decided to address it right away.
The point is that there are two quite different types of standing waves: the familiar ones you may know from musical instruments or from physics class, and less familiar ones that play a key role in the book. You can jump right to my new webpage comparing these two types of standing waves, or you can read the post below, which provides more context.
Note: Going forward, you’ll see a lot of posts and new webpages like this one. One of the great things about 21st century books is that they aren’t contained within their covers. I’ve always planned to continue the book into this website, allowing me to expand here upon key issues that I knew would raise questions from readers. So even though the book in printed form is done and published, it will continue to live and grow on this website.
The Stationary Electron as a Standing Wave

In the book’s chapter 17, I suggest a sort of mental image of a stationary electron. In particular, electrons should be visualized as waves, not as little dots, and a stationary electron is a standing wave — a wave that vibrates in place. [I focus on a stationary electron because it’s the best context in which to understand an electron’s “rest mass” (see chapters 5 and 8.)]
If you know something about standing waves already, perhaps from music classes or from a first-year physics class, this statement is potentially confusing. An electron could be stationary out in the middle of nowhere, light-years from the nearest star. But the standing waves of music and physics classes are never found in the middle of nowhere; they are always found inside or upon objects of finite size, perhaps upon a guitar string, inside a room or in the Sun. So how could a free-floating, isolated electron out in the open be a standing wave?
This mismatch is naturally puzzling, and indeed it has already raised questions among listeners of my recent podcast appearances. [Here’s the conversation on Sean Carroll’s podcast, and here are the first half and second half of the conversation with Daniel Whiteson on his podcast.]
The point, which I didn’t have time to address in the podcasts and which is discussed in the book only in examples (see chapter 20.2), is that there is a type of standing wave that is not covered in first-year physics classes, and that appears in no human musical instruments that I’m acquainted with. Unlike familiar standing waves, it needs no walls.
Familiar Standing Waves

A string or other extended vibrating object may have many types of standing waves. For a string, the four simplest standing waves are shown below (Figure 25 from the book chapter 11; illustration by C. Cesarotti).
But we can just focus our attention on the simplest of all such waves, the one at upper left, which has a single crest that over time becomes a single trough and back again, over and over.
Classic examples of these simplest standing waves are found on the strings of guitars and pianos; somewhat similar waves are found in organ pipes and flutes. As budding musicians quickly learn, it’s generally the case that the longer the string or pipe or bell, the lower the frequency of the wave and the lower the musical note. An organ’s lowest notes come from its longest pipes, and shortening a guitar string with your finger causes the instrument to create a higher note.
[It’s not quite as simple as that because, as covered in chapter 10, there are other ways to change frequency; tightening a string raises it, while replacing air with another gas can raise or lower it. But for a fixed material with fixed properties, what I’ve said is true.]
A simple version of this basic idea is illustrated by taking a box whose sides are of length L, filling it with some sort of material, and considering that material’s simplest standing wave. For most familiar materials, the frequency of the standing wave decreases as the length of the box increases; specifically, if you double the length of the box, the frequency drops in half. Thus frequency is inversely proportional to length, as it is on many musical instruments, and as the box’s size becomes infinite, no standing wave remains — its frequency becomes zero, meaning that it no longer vibrates at all.
[In math, we would write

f = v / (2L)

where v is the speed with which traveling waves can move across the substance.]
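The inverse relation between frequency and box size can be checked numerically. This sketch assumes the textbook formula f = v/(2L) for the simplest mode; the wave speed chosen is illustrative.

```python
# Familiar standing wave: the simplest mode of a box of length L has
# frequency f = v / (2L). Doubling L halves f; as L grows without bound,
# f drops toward zero.

def familiar_frequency(v, L):
    """Frequency of the simplest familiar standing wave in a box of length L."""
    return v / (2 * L)

v = 340.0  # wave speed in m/s (roughly the speed of sound in air)
print(familiar_frequency(v, 1.0))  # 170.0
print(familiar_frequency(v, 2.0))  # 85.0 -- twice the length, half the frequency
```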
Unfamiliar Standing Waves

However, there are other standing waves whose frequency of vibration does not decrease in this way. For standing waves of this unfamiliar sort, doubling the length of the box does not cause the frequency to drop in half. In fact, if the box is big enough, doubling its size barely has any effect on the frequency at all! (This can happen in unfamiliar materials, or in familiar materials treated in unusual ways; see chapter 20.2 for a couple of examples.)
If you put one of these unfamiliar standing waves in a box and make the box larger and larger, its frequency won’t drop all the way to zero. Instead it will settle down to a steady frequency, which I’ll label f0 and refer to as the “resonance frequency”. No matter how big the box, the smallest the wave’s frequency can possibly be is the resonance frequency f0. Said another way, if L is sufficiently large, the difference between f and f0 will be too small to notice, or even to measure.
[In math, the wave’s frequency f is related to the resonance frequency f0 and to the length L of the box by an equation similar to

f = √( f0² + (v/2L)² )

The precise form of the expression depends on details of the box and the vibrating material; but the details are not important here.]
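The plateau behavior is easy to see numerically. This sketch uses the illustrative formula f = sqrt(f0² + (v/2L)²) with made-up values of f0 and v:

```python
import math

# Unfamiliar standing wave: f = sqrt(f0**2 + (v / (2L))**2).
# The values of f0 and v below are made up for illustration.

def unfamiliar_frequency(f0, v, L):
    """Frequency of the unfamiliar standing wave in a box of length L."""
    return math.sqrt(f0**2 + (v / (2 * L))**2)

f0, v = 100.0, 340.0
for L in (1.0, 10.0, 1000.0):
    print(L, round(unfamiliar_frequency(f0, v, L), 2))
# As L grows, f settles toward f0 = 100.0 instead of dropping to zero.
```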
Summarizing the Two Waves

The size and shape of a large box thus affects these two types of standing waves differently.
This difference is illustrated in animations found here, and in the graph below, which shows how the frequencies of familiar and unfamiliar standing waves depend on the length of the box.
The frequencies of familiar standing waves (blue) decrease down to zero as L goes to infinity, but those of unfamiliar standing waves (orange) never go below a non-zero minimum frequency f0.

A Wave that Stands Without Support

Suppose, then, that we take an unfamiliar standing wave and dismantle its box, or let its box grow to infinite size. Do we then have a standing wave that stretches all across the entire universe?! Well, mathematically speaking, this would indeed be true. But in realistic situations, the standing wave will always run into some obstructions around its edges. Perhaps your hand is close by, as is the ground and a nearby wall; trees and mountains block the wave in other directions, and so forth. Even out in deep space, the space is not completely empty; there are always stray particles moving by.
These objects will affect the shape of the standing wave. But if they are far enough away from the core of the standing wave, they do not affect its frequency — at least, not by an amount that anyone could readily notice or measure.
Thus even a realistic standing wave, with a diameter far smaller than the size of the universe, will vibrate at its resonance frequency if it is sufficiently isolated. That’s more than enough for the purposes of the universe.
The Importance of These Waves

And so an isolated, stationary electron corresponds to an unfamiliar standing wave whose shape is determined by its local environment, but whose frequency is not. It will vibrate at the same rate whether it is in deep space far from any stars, or whether it is in an empty, airless box the size of a sugar cube.
Why should we care? Because there is a direct connection between the frequency of this standing wave and the electron’s rest mass. Explaining that connection is the most important goal of the first two-thirds of the book (through chapter 18). In addition, there’s a link between this issue and the Higgs field — a topic of the book’s remainder (especially chapter 20.)
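For readers who like numbers: assuming the standard quantum relations E = hf and E = mc² (a back-of-envelope sketch, not a quotation from the book), the electron’s resonance frequency works out as follows.

```python
# Back-of-envelope: combining E = h * f with E = m * c**2 gives
# f = m * c**2 / h for the resonance frequency associated with the
# electron's rest mass.

h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s
m_e = 9.1093837e-31   # electron rest mass, kg

f = m_e * c**2 / h
print(f"electron resonance frequency: {f:.2e} Hz")  # about 1.24e+20 Hz
```

That is an extraordinarily rapid vibration, far beyond anything we can observe directly, which is part of why this picture of the electron is so unfamiliar.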
I hope I’ve managed to write this post in a way which is useful even if you aren’t reading the book. For those reading it, it may well be helpful in clarifying chapters 17-20, where these issues take center stage. If you have found this post confusing, please leave comments, or ask a question on my new Book Questions page. As always, your questions and suggestions will help me improve this website.
About a month ago, there was a lot of noise, discussion and controversy concerning CERN’s proposal to build a giant new tunnel and put powerful new particle accelerators in it. This proposal is collectively called the Future Circular Collider (“FCC”). (The BBC reported on it here.)
Some scientists made arguments that FCC is a great idea, based on reasoning that I somewhat disagree with. Others said it would be a waste of money, based on reasoning that I again disagree with. But any decision on whether to actually fund the building of the FCC’s tunnel is still some years off, so I was reluctant to get involved in the debate, especially since my nuanced opinion seemed likely to be drowned out amid the polemics.
But I did eventually write something in response to a reporter’s questions, and looking back on it, I think it may be of interest to some readers. So here it is.
My starting point is the timeline for the machine. Quoting from the FCC website, we are looking at “start of construction after the middle 2030s, with the first step of an electron-positron collider (FCC-ee) beginning operations around 2045. A second machine, colliding protons (FCC-hh) in the same tunnel, would extend the research programme from the 2070s to the end of the century.”
Now, although they use the same tunnel, these are two utterly different colliders, with complementary goals. It’s extremely useful to compare these plans with the journey of CERN from 1985 to 2035, which the FCC is designed to repeat.
The Previous Five Decades: One Tunnel, Two Machines

In the late 1980s, a 27 km (17 mile) near-circular tunnel was built under the French and Swiss countryside around CERN. Over the decades, two entirely different machines were built in that tunnel: first LEP (the Large Electron-Positron collider), which collided electrons and positrons from 1989 to 2000, and then LHC (the Large Hadron Collider), which has collided protons since 2010.
Again, although LEP and LHC occupied the same tunnel and were operated by many of the same people, they were completely different machines that shared little else.
The Next Five Decades: One Tunnel, Two Machines

In largely the same way, the proposal is for the FCC tunnel to be used for two completely different machines analogous to LEP and LHC.
Let’s work backwards from Phase 2. What are the goals for the FCC-hh?
Asking physicists of today to state a precise goal for such a distant future is somewhat like asking Oppenheimer, all the way back in the 1950s, to predict the main aims of the LHC! It’s too early to expect a reliable answer. I suspect that current speculation about what the main motivation for FCC-hh will actually be, four or five decades from now, is likely to be wrong. (That said, there are some specific questions about the Higgs boson and Higgs field that only FCC-hh can address — most notably, how the Higgs field interacts with itself.)
Of course, I do understand why people are talking about FCC-hh right now. If CERN is going to build such a large tunnel, they ought to have some sensible ideas as to how it could be used not only soon but well into the distant future.
But nevertheless, a final decision on FCC-hh — whether and when to build it, and what its goals should be — lies decades away. It’s Phase 1 that really matters now. And Phase 1 has a clear purpose and a clear motivation.
What good is FCC-ee? You might well wonder! Since LHC has already run proton-proton collisions at 14000 GeV, making it capable of creating certain types of particles whose masses are several times larger than the Higgs boson or top quark, what hope do we have that FCC-ee, at a much, much lower collision energy, could make any new discoveries?
The answers are analogous to the ones that were appropriate for LEP-1, whose collision energies were already below those of the Tevatron, a predecessor to the LHC.
The rationale for building FCC-ee is very clear: to take full advantage of what the Higgs boson and its field can teach us. Only recently discovered, the Higgs boson is unique among the known elementary particles as the only particle that has no “spin” — no intrinsic angular momentum — and as the particle whose interactions with other particles are most diverse in strength. The corresponding Higgs field, which gives electrons their mass and makes atoms possible, is even more important: it’s crucial for planets and for life.
Our knowledge of this field and its particle will still be very limited even when the LHC shuts down for good around 2035. Few of the LHC’s measurements of the Higgs boson’s properties will be precise, and some properties simply will not be measurable. For all we will know in 2035, it could still be the case that one in every twenty Higgs bosons decays to particles that are currently unknown; the LHC experiments will be unable to rule out this possibility. The FCC-ee will change this, making far more measurements, bringing much higher precision to many of them, and allowing searches for decays to particles that LHC has no hope of observing.
Thus FCC-ee will give us a better handle on the properties of the Higgs boson and Higgs field than the LHC can achieve, and allow us access to rare and/or obscure phenomena that LHC experiments cannot discover. The potential significance of these scientific advances should not be underestimated.
The FCC-hh of Phase 2 is for the distant future. It will depend crucially on whether Phase 1 finds something, and on what it finds. It also depends on the results of many other smaller-scale experiments which will be running over the next 35 years. Any particle physics discovery before 2060 will influence the way we think about the goals of FCC-hh, and so I view it as far too early to wax poetic about what Phase 2 could do, or to criticize it as a waste of money. We can have that debate over the next generation.
By contrast, the goals of FCC-ee are clear, and the cost and benefits easier to identify. That’s why, in my opinion, Phase 1 is the only topic worthy of serious discussion and debate right now.
I’m aiming to get the blog back to science as soon as possible, but I need to answer some questions that I’ve been receiving about the book and website.
In fact, there will soon be a whole wing of this website devoted to the book, which will have
And more! [Some parts of this are almost ready, but we’re delayed by an array of minor technical issues with the newly upgraded website. Hopefully in a week or two.]
Speaking more broadly, the new book is just a part of a much larger project: to convey the worldview of contemporary physics to as many people as possible, making it accessible without watering it down. I hope to make it as clear as it can be made, and as meaningful. But the book, no matter how hard I have worked at it or how successful I may or may not have been at writing it, cannot possibly do that alone. Hence the commitment to expand the website, answer your questions, give public talks and courses, and much more to come.
By the way, the second half of the conversation with Daniel Whiteson on his podcast is posted. (Here’s the first half; and here’s the conversation on Sean Carroll’s podcast.)