BICEP2 Takes a Peek at Cosmic Inflation

Before I dive into trying to explain Monday’s big announcement, I would like you to take a moment to watch this video showing Chao-Lin Kuo of Stanford University, the designer of the BICEP2 detector, revealing to Andrei Linde, one of the architects of inflationary cosmology, the big news. Seriously, take a look.

You can see Linde’s knees weaken when the news sinks in that his life’s work has been validated, bringing to mind the image of Peter Higgs wiping tears from his eyes on July 4, 2012 as the five-sigma discovery of his boson was announced at CERN. Linde’s wife, Renata Kallosh, herself a physicist, was also visibly moved. She knew quite well how much this moment meant to him.

The Announcement

So what was this breakthrough that had such an impact on Linde and his wife, not to mention making cosmologists and astrophysicists around the world giddy with delight?

On Monday, March 17, 2014, the BICEP2 collaboration (an NSF-funded experiment jointly operated by Stanford/SLAC, the Harvard-Smithsonian Center for Astrophysics, Caltech/JPL, UMN, and other institutions) announced the detection of primordial B-mode polarization in the Cosmic Microwave Background, consistent with the theoretically predicted imprint of primordial gravitational waves amplified in scale by cosmological inflation.

“What?” I hear you cry.

Okay, time for me to back up a bit and try to put that into plain English.

But What Does It Mean?

But before I take a stab at explaining, take a look at this very quick and dirty explanation by Henry Reich over at MinutePhysics. When I try to explain this, I tend to gesture a lot. That doesn’t exactly come through in a blog post, so the animations in this video will have to serve in the place of such gestures.

In a similar vein, you might wish to take a look at this comic explainer from PhD Comics.

So that’s the 30,000-foot view. Let’s start to zoom in a little bit. To do so, I’ll need to fill in a bit of background, making sure that we are all on the same page with definitions and concepts. (Bear with me for a bit, since there is a LOT of ground to cover.) So let’s start at the beginning. The VERY beginning.

The Big Bang (A Ridiculously Brief History)

A mere century ago, the accepted view of cosmology was pretty simple. As far as any astronomer knew, our Milky Way Galaxy was the only galaxy there was, occupying a simple, static, relatively unchanging universe. Oh, sure, astronomers had long known of “spiral nebulae,” and the 18th century philosopher Immanuel Kant had even speculated that they might be “island universes,” but astronomers had no good evidence that they actually lay beyond the bounds of our own Milky Way.

Then, in 1915, Einstein developed his general theory of relativity, which described gravity in terms of the curvature of space-time. However, it didn’t take long to become clear that, applied on a cosmic scale, his gravitational field equations allowed only two kinds of solution: an expanding universe or a contracting one. To make his equations consistent with the prevailing idea of a static universe, he tweaked them by incorporating a “cosmological constant” term which allowed for a static solution.

Now, I have previously discussed how Henrietta Leavitt had, in 1908, discovered a correlation between the intrinsic brightness and the periodicity of a certain class of variable stars known as Cepheid variables. By the 1920s, several astronomers, including Vesto Slipher, Gustaf Strömberg, and Edwin Hubble, had been using this relationship as a “yardstick” (along with other techniques) for measuring the distances to spiral nebulae. Not only had these measurements firmly established the fact that the spiral nebulae were, in fact, distinct galaxies a great distance beyond the Milky Way, but it was clear from comparing the distances with their redshifts that they were, on average, moving away from us, and that the greater their distance, the faster they were receding. The universe was expanding, and Einstein’s cosmological constant was not needed!

As I’ve also previously written, Monsignor Georges Lemaître, a Belgian priest and astronomer, had studied these results, along with Einstein’s general relativity. By 1931, he had taken the facts at hand to their logical conclusion: further back in time, the universe was denser and hotter. This was the beginning of the Big Bang Theory of cosmology. That label was derisively coined in 1949 by the noted astronomer Sir Fred Hoyle, who stubbornly clung to his steady-state model of the universe in spite of the data.

Remember, boys and girls, evidence is the final arbiter of reality, not the opinions of an authority figure, no matter how intelligent, highly regarded, or accomplished they are. And all knowledge is provisional, subject to updates based upon new evidence. Even legendary intellects such as Einstein and Hoyle were wrong from time to time.

The Cosmic Microwave Background (CMB)

That conclusion from the previous section (further back in time, the universe was denser and hotter) is the core of the Big Bang Theory. Filling in the details to form fully-fledged cosmological models has, of course, been the challenging part, requiring the construction of ever more detailed mathematical models and their comparison with observational data.

One of the major predictions of big bang cosmology is that the very early universe was so hot and dense that it would have been permeated by plasma (ionized gas). This plasma would have been so dense that photons would not have been able to travel very far through it before being scattered or absorbed by electrons or nuclei, rendering this early universe opaque. However, as the universe expanded and cooled, a point would eventually be reached at which electrons could finally bind to nuclei, allowing the material permeating space to transition to a neutral gas that is largely transparent to light. Cosmologists refer to this event as “recombination,” currently estimated to have taken place when the universe was a mere 378,000 years old. Once this transition took place, photons were free to travel along their merry way, filling the universe with light. (Cosmologists often refer to the last vestiges of this cooling plasma, as these photons bounced off of it for the last time, as the “last scattering surface.” This concept figures prominently in the experimental results that we are building up to, so keep it in mind.)

The last scattering surface (also referred to as the cosmic photosphere) would have had a temperature in the neighborhood of 3000 K. Now, the radiation from an idealized thermal absorber and emitter (called a “black body”) has a spectrum described by Planck’s law (approximated at long wavelengths by the classical Rayleigh-Jeans law). While the peak of the spectrum from a 3000 K gas would be in the infrared, plenty of it would still be in the visible range. So why don’t we see the light from the last scattering surface filling the night sky? Simply put, as the universe continued expanding, the wavelengths of this once-bright light permeating the cosmos were stretched out, dragging this primordial light into the long microwave portion of the spectrum.

For the benefit of those of you whose eyes just glazed over from that explanation, let me put it in terms that are a bit easier to grasp. Imagine a blacksmith pulling a chunk of iron from his forge after he’s gotten it seriously hot. It starts off glowing white hot, since the light emitted from it has components from across the entire visible spectrum. However, as the iron cools, it starts to glow red. The peak of the spectrum has shifted to the longer wavelengths. Eventually, it stops glowing in visible light, but is still hot to the touch. At this point, the light it is emitting is in the infrared spectrum.  In other words, color correlates with temperature, and the light from recombination has, over the eons, shifted further into the infrared and into the microwave regions of the spectrum as the universe has expanded and cooled.

That is precisely where it was found in 1964 by Arno Penzias and Robert Wilson. The cosmic microwave background (CMB) they found filling the sky has, according to the best current measurements, a spectrum corresponding to a black body with a temperature of 2.7 K. The universe has cooled quite a bit in the 13.8 billion years which have elapsed since the recombination epoch.
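If you want to check the numbers yourself, here is a quick back-of-the-envelope sketch (my own, not part of the BICEP2 announcement) using Wien’s displacement law, which gives the peak wavelength of a black body at a given temperature:

```python
# Back-of-the-envelope check (my own illustration): Wien's displacement law
# gives the peak wavelength of black-body radiation, and the ratio of
# temperatures tells us how much expansion has stretched that light.

WIEN_B = 2.898e-3          # Wien's displacement constant, in meter-kelvins
T_RECOMBINATION = 3000.0   # K, rough temperature of the last scattering surface
T_CMB_TODAY = 2.725        # K, measured temperature of the CMB today

def peak_wavelength_m(temperature_k):
    """Peak wavelength (meters) of a black body at the given temperature."""
    return WIEN_B / temperature_k

print(f"Peak at recombination: {peak_wavelength_m(T_RECOMBINATION) * 1e9:.0f} nm (near-infrared)")
print(f"Peak today:            {peak_wavelength_m(T_CMB_TODAY) * 1e3:.2f} mm (microwave)")

# The wavelengths have been stretched by the same factor the universe has
# expanded since recombination: roughly 1100.
print(f"Stretch factor: {T_RECOMBINATION / T_CMB_TODAY:.0f}")
```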

Below is a graph of the CMB spectrum measured by the FIRAS instrument aboard the COBE satellite in the early 90s, overlaid upon the black body curve predicted by theory. The error bars are so tiny and the data points so closely match theory that the data points are indistinguishable from the theoretical curve in this graph (thus serving as the inspiration for this xkcd comic).

FIRAS/COBE CMB spectrum overlaid upon the black body spectrum predicted by theory.
Image courtesy of NASA.

Inflation

One thing that quickly became apparent in the early days of studying the CMB was just how incredibly uniform it is. No matter where one looks in the sky, the CMB has the same temperature. (That said, there are TINY fluctuations in the CMB, on the order of microkelvins, and those are critical to this entire discussion, but we’ll get to them shortly.) This had cosmologists somewhat baffled. After all, how could portions of the CMB on opposite sides of the sky, which had never been in proximity with one another long enough to reach thermal equilibrium, be the same temperature?

In the mid- to late 1970s, Alan Guth was trying to figure out why magnetic monopoles had never been observed, despite being predicted by several attempts at unifying the fundamental forces of nature. He was working with some ideas about “cosmic phase transitions” pioneered earlier in that decade by Andrei Linde (the gentleman from the video earlier), and, on December 7, 1979, had what he described in his notebook as “a spectacular realization.” He realized that a period of exceedingly fast expansion of spacetime in the early universe (inflation), many orders of magnitude more rapid than the currently-observed expansion, would not only fix the monopole problem, but would also address the overall uniformity of the CMB, as well as taking care of an old cosmological problem described by Robert Dicke in 1968 as the “flatness problem.” The math worked out beautifully for addressing all of these issues. Guth’s model was not without its own problems, however. In the early 1980s, Linde refined Guth’s model, molding it into the modern model of cosmic inflation.

Assuming that inflation proves to be correct, the scalar inflaton field which would have driven it remains one of the outstanding gaps in the standard model of particle physics, along with a quantum description of gravity (more on that in a moment, as the results we are building up to have bearing on that subject), grand unification of the strong and electroweak gauge forces, neutrino mass, the hierarchy problem, and an explanation for dark energy and dark matter (not to be confused with one another). Those are the major blank spots on our map of knowledge of physics. Here there be dragons.

The History of the Universe (Image courtesy of BICEP/KECK)

Precision Measurements of the CMB

Although inflationary cosmology was originally inspired by the observed uniformity of the CMB, inflation theory actually predicts that there should be a certain degree of non-uniformity in the CMB, in the form of extremely tiny fluctuations in temperature. By tiny, I mean millionths of a kelvin. These irregularities would be caused by rapid inflation magnifying tiny quantum fluctuations to macroscopic scales, resulting in slight variations in the density of the last scattering surface. (Remember the last scattering surface? We’ll be returning to it again.)

Over the years, we’ve managed to build more and more sensitive instruments for detecting such fluctuations, and they are in fact there:

COsmic Background Explorer (COBE) 1992

COBE map of the CMB
Image courtesy of NASA

Wilkinson Microwave Anisotropy Probe (WMAP) 2003

WMAP map of the CMB
Image courtesy of NASA

Planck 2013

Planck map of the CMB
Image courtesy of NASA

As it turns out, the Planck results represent the highest resolution image we can ever get of the CMB. Here, we are not bumping into the limits of the technology, but rather an inherent fuzziness in the data caused by the long wavelengths being observed. But mapping these temperature fluctuations is not the only thing Planck was designed to do. It was also designed to collect information about the polarization of the CMB, which is at the heart of Monday’s announcement. But the BICEP2 team beat the Planck team to the punch. The Planck data is still being analyzed, and it will be very interesting to see the results of that analysis in comparison with the BICEP2 results. (Remember, reproduction of results is a critical part of the scientific method.)

In any case, the data collected by COBE, WMAP, and Planck have been a treasure trove for cosmologists and astrophysicists. This data has helped refine estimates of the age of the universe, the proportions of normal baryonic matter, dark matter, and dark energy, and Hubble’s constant, as well as estimates of the overall curvature of spacetime on cosmic scales. (The visible universe appears to be roughly flat, which is consistent with inflationary theory.) This data has even been used to place constraints on neutrino mass and the number of neutrino flavors (there are three, and Planck backs that up), and to constrain various proposed models for dark matter.

One would not naively expect all of those speckles in the images above to yield so much useful information, but scientists have plenty of ways of slicing and dicing the data in useful ways. For example, just as an audio engineer can decompose a sound signal into a spectrum showing the strength of each component frequency (by taking a Fourier transform), astrophysicists can decompose the CMB data in terms of something called spherical harmonics (the same mathematical constructs used to describe the quantum probability wave of an electron in an atom) to produce an angular power spectrum. Such a power spectrum shows the intensity of the fluctuations as a function of their angular size. And in that analysis lies a wealth of useful information.
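To make the audio analogy concrete, here is a toy sketch (my own example, not actual CMB pipeline code) that decomposes a fake one-second audio signal into a power spectrum with NumPy; the CMB analysis applies the same idea with spherical harmonics on the sky instead of sines and cosines in time:

```python
import numpy as np

# Toy version of the audio-engineer analogy: decompose a signal into its
# component frequencies and look at the strength of each one.

rate = 1000                                     # samples per second
t = np.arange(0, 1.0, 1.0 / rate)               # one second of "audio"
signal = (1.0 * np.sin(2 * np.pi * 50 * t)      # a strong 50 Hz tone
          + 0.3 * np.sin(2 * np.pi * 120 * t)   # a weaker 120 Hz tone
          + 0.1 * np.random.randn(t.size))      # plus a little noise

freqs = np.fft.rfftfreq(t.size, d=1.0 / rate)
power = np.abs(np.fft.rfft(signal)) ** 2        # the power spectrum

# The two injected tones show up as peaks, just as the CMB's characteristic
# angular scales show up as peaks in its angular power spectrum.
for f in (50, 120):
    idx = np.argmin(np.abs(freqs - f))
    print(f"power near {f} Hz: {power[idx]:.0f}")
```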

One consequence of these density fluctuations in the early universe is that the matter that would go on to form the galaxies would be expected to be distributed through the universe in web of filaments.  Computer simulations of the evolution of the large scale structure of the universe based upon this assumption are consistent with what is actually observed. Compare, for example, the Millennium Simulation with the results of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.

Polarization

Stay with me. We are almost there.  We have just a few more fundamental concepts to go over before we get to the big show.

A Tale of Two Modes

The two polarization modes expected in the CMB are derived by applying Helmholtz decomposition (a.k.a., the fundamental theorem of vector calculus) to the vector field describing the polarizations. Using this decomposition, such a vector field can be described as the sum of two components, a curl-free component and a divergence-free component.

If \kappa is the density field producing the polarization, and \vec{u} is the resulting polarization vector at any given point in the vector field, we may observe that \nabla\kappa = \vec{u}. Separating the density field into the aforementioned components, we obtain:

\nabla^2\kappa^E = \nabla\cdot\vec{u}, \qquad \nabla^2\kappa^B = \nabla\times\vec{u}

Here, we are using the label E to denote the scalar potential component, by analogy with an electric field, and B to denote the vector potential component, by analogy with a magnetic field.
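For the curious, here is a toy numerical version of that decomposition (my own illustration, done with FFTs on a small flat periodic patch rather than the spin-2 spherical harmonics used in real CMB analyses). It builds a vector field from a known curl-free piece plus a known divergence-free piece and checks that projecting each Fourier mode onto its wavevector separates them again:

```python
import numpy as np

def helmholtz_decompose(ux, uy):
    """Split a 2D vector field on a periodic grid into a curl-free ("E")
    part and a divergence-free ("B") part by projecting each Fourier mode
    onto its wavevector. A toy flat-sky sketch of the idea in the sidebar."""
    ny, nx = ux.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                            # avoid 0/0; the k=0 mode has no gradient part

    ux_k, uy_k = np.fft.fft2(ux), np.fft.fft2(uy)
    along_k = (KX * ux_k + KY * uy_k) / k2    # component of each mode along its wavevector
    ex = np.fft.ifft2(KX * along_k).real      # curl-free ("E") part
    ey = np.fft.ifft2(KY * along_k).real
    bx, by = ux - ex, uy - ey                 # the remainder is divergence-free ("B")
    return (ex, ey), (bx, by)

# Build a test field: a pure gradient (E-like) plus a pure "swirl" (B-like).
n = 64
y, x = np.mgrid[0:n, 0:n] * (2 * np.pi / n)
grad_x, grad_y = 2 * np.cos(2 * x) * np.cos(3 * y), -3 * np.sin(2 * x) * np.sin(3 * y)
swirl_x, swirl_y = np.cos(x) * np.cos(y), np.sin(x) * np.sin(y)

(ex, ey), (bx, by) = helmholtz_decompose(grad_x + swirl_x, grad_y + swirl_y)
print("E part recovers the gradient field:", np.allclose(ex, grad_x) and np.allclose(ey, grad_y))
print("B part recovers the swirl field:   ", np.allclose(bx, swirl_x) and np.allclose(by, swirl_y))
```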

This is as good a time as any to discuss polarization. The main contexts in which you’ve likely been exposed to this concept are polarizing sunglasses and 3D movies. Light, as described by classical electrodynamics, consists of an electric field oscillating at right angles to a magnetic field, with both fields oscillating at right angles to the direction in which the light is traveling. Most thermal (heat-based) light sources emit light that has those fields pointing at a mix of random angles (but still perpendicular to the direction the light is traveling). However, if the light passes through a polarizing filter, which blocks all light except that which has the electric field oscillating in a certain direction, we have what is called linearly polarized light. (There is also another mode of polarization called circular polarization, where the direction that the electric field is pointing follows a corkscrew path as the light travels. That is what is used with 3D movie glasses.)

But another way to get linear polarization of the light is for it to bounce, or “scatter” off of something. Light that has been scattered through the atmosphere or bounced off of water or glass is generally horizontally polarized, which is why polarizing sunglasses are able to cut glare. This process of light getting polarized by bouncing off of something is particularly strong when the matter scattering the light is charged, via a process called Thomson scattering (first explained by J. J. Thomson, the discoverer of the electron).

Recall that I earlier mentioned that the plasma cooling to neutral gas during the recombination epoch is sometimes referred to as the “last scattering surface.” Because of Thomson scattering from this last bit of plasma from the earlier universe, the microwaves from the CMB are polarized, and encoded in the patterns of that polarization (with respect to the fluctuations in the CMB) is information about how the last scattering surface was moving at the time that the light last bounced off of it.

Those patterns of polarization can be mathematically analyzed and broken down into two independent modes: E-mode polarization and B-mode polarization. (The names come from an analogy with the behavior of electric (E) and magnetic (B) fields in classical electrodynamics; for a slightly more technical treatment, see the sidebar “A Tale of Two Modes” above.) E-mode polarization, which is the more straightforward type, should be either directly parallel to or perpendicular to the boundaries of temperature fluctuations in the CMB. B-mode polarization, which would manifest as being at angles across those boundaries, can be caused by one of two things: gravitational lensing of E-mode polarization, or the presence of gravitational waves in the last scattering surface.

Gravitational Waves

That’s right, gravitational waves, subtle fluctuations in the fabric of spacetime. They are really the last remaining unverified major prediction of general relativity. We are pretty sure from indirect evidence that they exist, based upon studies of the orbital decay of the Hulse-Taylor binary pulsar (a pair of neutron stars orbiting one another), but they have yet to be directly detected. There are efforts underway, or under construction, to detect gravitational waves directly, but it is an amazingly difficult task. Imagine using a 2-mile-long interferometer to try to detect a brief change in the length of that apparatus of less than the diameter of a proton. Yeah, that difficult.

But if indications of primordial gravitational waves (which had been amplified to tremendous scales by inflation) can be detected, it would be a tremendous feather in the cap for both general relativity and for inflation theory. After all, Linde predicted decades ago that such indications would be there. What’s more, such indications could also be taken as a hint of quantum gravity, since the initial fluctuations (prior to being amplified by inflation) would have been at the quantum scale. Proving the quantum nature of gravity (or that it even IS quantum in nature, as opposed to an emergent pseudo-force as described by general relativity) has long been a challenge. Thus far, efforts at constructing a quantum theory of gravitation have tended to result in mathematical divergences which have stymied generations of physicists.

The challenge here isn’t in detecting E-mode polarization or B-mode polarization, or in sorting them out from each other. Those things have been done before: E-mode polarization of the CMB was detected by DASI in 2002, and B-mode polarization was detected by the South Pole Telescope in 2013. The trick here is in sorting out the two possible sources of B-mode polarization. That means mapping out sources of gravitational lensing and using that information to filter out the type of B-mode polarization that is not of interest, leaving just the primordial B-mode polarization due to gravitational waves.

The Breakthrough

Whew! Okay, we’re there.  With that crash course in cosmology out of the way (without even bothering to take a detour into the strange topics of dark energy and dark matter), we are finally ready to talk about the big announcement.

The Dark Sector Lab (DSL), located 3/4 of a mile from the Geographic South Pole, houses the BICEP2 telescope (left) and the South Pole Telescope (right). (Steffen Richter, Harvard University, via BICEP/KECK.)

Officially, BICEP doesn’t stand for anything; but, unofficially, it stands for “Background Imaging of Cosmic Extragalactic Polarization,” and this is the team that beat Planck to the release of B-mode polarization data. The BICEP2 telescope, located at the Dark Sector Lab at the Amundsen-Scott South Pole Station, collected polarization data that was used to generate the following graph showing primordial B-mode polarization for a small patch of sky:

B-mode pattern observed with the BICEP2 telescope, with the line segments showing the polarization from different spots on the sky. The red and blue shading shows the degree of clockwise and anti-clockwise twisting of this B-mode pattern. (Image courtesy of BICEP/KECK.)

Of course, that graph doesn’t mean a great deal without a little interpretation. It is the twisty bits that are of interest. What it means is that primordial B-mode polarization has definitely been found.

For the more technically inclined, the value of r, the ratio of tensor to scalar amplitudes, comes out to 0.2, right in line with predictions from Linde, yet strangely at odds with constraints placed on r by the Planck results released thus far. (This means that it will be even more interesting to see what the final analysis of the Planck data reveals when it finally comes out.) The data also indicate that the energy scale for the inflation process is about 2×10^16 GeV (close to the Planck scale of 2×10^18 GeV).
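To see where a number like that comes from, here is a one-liner (my own addition, using a relation commonly quoted in the inflation literature rather than anything taken from the BICEP2 paper) connecting r to the energy scale of inflation:

```python
# A commonly quoted relation from the inflation literature (assumed here):
#   V**(1/4) ~ 1.06e16 GeV * (r / 0.01)**(1/4)
r = 0.2
energy_scale_gev = 1.06e16 * (r / 0.01) ** 0.25
print(f"Inflation energy scale for r = {r}: about {energy_scale_gev:.1e} GeV")
# prints roughly 2.2e16 GeV, in line with the ~2 x 10^16 GeV figure above
```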

What It Means

Of course, ultimately, these results will have to be validated by independent data, such as from the Planck Collaboration or elsewhere. As cosmologist Neil Turok points out, we shouldn’t jump to too many conclusions based upon this initial announcement.

But, if further data comes out from other teams backing up the BICEP2 results, here is a summary of the ramifications:

  • Strong, although still indirect, evidence supporting cosmological inflation
  • New bounds placed upon proposed inflaton models
  • New bounds on the energy scale of the inflationary epoch
  • Strong, although still indirect, evidence for gravitational waves
  • Hints at quantum gravity
  • A strong prospect for an eventual Nobel Prize for Guth and Linde

Of course, what it will really mean is that Stephen Hawking will have won a bet with Neil Turok.

For More Information


No, Hawking Isn’t Saying There Are No Black Holes

Update (Feb. 15): And even more commentary, in the form of two articles by Matthew Francis.

Update (Feb. 7): There has been a LOT of fascinating commentary posted about this topic. Here are some of the better bits.

So, I’m under the weather for a few days, and what happens? A prominent scientist puts forth a bold new idea, and the media promptly get it wrong.  Horribly, horribly wrong.

Stephen Hawking, former holder of the Lucasian Chair of Mathematics at Cambridge, author of the best-seller A Brief History of Time, discoverer of Hawking radiation, and general pop-science superstar, put out a paper on the arXiv preprint server (a paper, mind you, which has not yet been peer-reviewed) proposing a possible resolution of the AMPS firewall paradox. (“Huh?” I hear you cry. More on that in a moment.)

What happened next was quite predictable. The press (and much of the blogosphere) promptly picked up on the story (this involving the legendary Stephen Hawking, after all) and, quite unsurprisingly, botched it. Headlines breathlessly proclaimed “Stephen Hawking: ‘There are no black holes’.” (That headline, by the way, was lifted directly from Nature, of all places.) Bloggers blogged. FaceBookers FaceBooked. Twitter was atwitter. The pronouncement even inspired a bit of political satire on the Borowitz Report that, Poe’s Law being what it is, many took to be an actual story.

But Hawking wasn’t claiming that black holes don’t exist. The supermassive black hole designated Sagittarius A*, known to be lurking at the center of our Milky Way galaxy, didn’t evaporate overnight in a puff of Hawking radiation to comport with what Hawking had allegedly proposed. What Hawking was instead proposing was a mathematically subtle redefinition of one of the key attributes of a black hole, the event horizon, replacing it with a concept he calls the “apparent horizon.”

To understand what prompted this, it is necessary to go back to the 70’s, to the start of the Black Hole War. (No, that isn’t the title of a cheesy sci-fi flick, but it is the title of an excellent and quite accessible book by Leonard Susskind about the subject.) At a private meeting of scientific luminaries, Hawking presented an argument which seemed to show unequivocally that an inescapable consequence of general relativity is that information is destroyed by black holes. This did not sit well with Leonard Susskind (one of the pioneers of string theory) and other specialists in the field of quantum theory, since this would violate conservation of information.

Well, there isn’t REALLY a Law of Conservation of Information, but there is a quantum mechanical requirement that something called “unitarity” be preserved, which basically means that the probabilities of all possible outcomes of an event have to add up to one. Hawking’s result runs afoul of this. Not for the first time, and certainly not for the last time, quantum theory and general relativity were butting heads, giving contradictory results.
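Here is a tiny toy illustration (my own, and nothing black-hole-specific) of what unitarity buys you: a unitary transformation shuffles quantum amplitudes around without ever changing the total probability.

```python
import numpy as np

# A unitary matrix evolves a quantum state without changing total probability.
U = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                 [1.0, -1.0]])   # a simple 2x2 unitary
state = np.array([0.6, 0.8])                     # amplitudes; probabilities 0.36 + 0.64 = 1

evolved = U @ state
print("total probability before:", np.sum(np.abs(state) ** 2))    # 1.0
print("total probability after: ", np.sum(np.abs(evolved) ** 2))  # still 1.0

# "Information loss" in Hawking's original argument amounts to an evolution
# that cannot be written as any such unitary transformation.
```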

This problem resulted in the blossoming of a particularly specialized sub-discipline of theoretical physics known as black hole thermodynamics. (When one is working on the bleeding edge of theoretical physics, “thermodynamics” and “information theory” are pretty much synonymous, for reasons I won’t even begin to try delving into here. That is a topic worthy of a whole series of posts in and of itself.)

Fast forward to 2012, when Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully put out a controversial paper, “Black Holes: Complementarity or Firewalls?” This paper (generally referred to as “AMPS” for short, after the authors’ initials) posited that event horizons might not be as usually described in general relativity (where an inbound observer would notice only that they couldn’t get out), but rather are sheathed in a “firewall” which consumes incoming matter via the Hawking radiation mechanism before it even crosses the event horizon.

Rather than rehashing the details of the Great Firewall Debate and attempts to resolve it, I shall here refer you to some relevant articles:

Which brings us to Hawking’s proposal. He replaces the concept of the event horizon with something a bit “mushier,” something not as sharply defined, so that the need for a firewall is sidestepped. What’s more, Hawking’s “apparent horizon” is temporary, eventually disappearing later in the life cycle of the black hole, offering a way around the information paradox that prompted all of this.


The Story of Quarks – Part I

“Three Quarks for Muster Mark!” – James Joyce, Finnegans Wake

Fifty years ago today, the journal Physics Letters received a paper from Murray Gell-Mann entitled “A Schematic Model of Baryons and Mesons”.1  The brief two page paper introduced the concept of “quarks” as the constituent particles of hadrons. As is generally the case with new discoveries, this one did not emerge all at once overnight.

The Particle Zoo and Strangeness

[Note: Much of this section summarizes portions of the first chapter of David Griffiths’ Introduction to Elementary Particles.]

In the decades following World War II, spurred by the Cold War’s nuclear arms race, particle physicists had been building more and more powerful particle accelerators, along with more sensitive particle detection technology to use in conjunction with those accelerators, as well as for the study of cosmic rays and the radiation from nuclear reactors. In this booming period of research, something unexpected happened. New particles were being discovered at a rapid pace. The simple pre-war world of just electrons, protons, and neutrons had exploded into a particle zoo of exotic, short-lived particles that existing physical models simply could not account for.

This proliferation began in 1947 with the first detection of mesons, specifically charged pions, in cosmic rays by Cecil Powell, César Lattes, Giuseppe Occhialini, et al.2  Later that same year, Rochester and Butler observed the production by cosmic rays of neutral mesons which then decayed into two oppositely-charged pions via the interaction K^{0}\rightarrow\pi^{+} + \pi^{-}. This became known as the neutral kaon.3 In 1949, Powell identified a charged kaon, decaying as K^{+}\rightarrow\pi^{+} + \pi^{+} + \pi^{-}.4

In 1950, another odd particle joined the particle zoo. Discovered by Hopper and Biswas, the \Lambda particle (decaying as \Lambda \rightarrow p^{+} + \pi^{-}) appeared to be heavier than the proton, and was thus classified as a baryon rather than a meson.5 This classification was also necessary to preserve the conservation of baryon number that had been proposed by Stueckelberg in 1938 to explain the stability of protons.

And so it went, with even more mesons (\eta, \phi, \omega, \rho) and heavy baryons (the \Sigma‘s, the \Xi‘s, the \Delta‘s, and so forth) being discovered in the ensuing years, with the rate of discovery being ramped up by Brookhaven’s Cosmotron accelerator coming online in 1952, followed by others.

One interesting property exhibited by these “strange” new particles (as they were referred to by particle physicists) is that, while they appear to be formed via rather quick interactions (on the scale of 10^-23 seconds), their decays are relatively slow (on the order of 10^-10 seconds). Pais6 and others proposed that this could be due to the creation and decay of these particles being mediated by different mechanisms, and that strange particles have to be produced in pairs. In modern parlance, the creation of these particles is mediated by the strong nuclear force, while their decay is mediated by the weak nuclear force. In 1953, Gell-Mann7,8 and Nishijima9,10 expanded upon this idea, assigning to each particle a new quantum number which Gell-Mann dubbed “strangeness.”  In this new scheme, strangeness is a conserved quantity in strong interactions, but not in weak interactions.

The Eightfold Way

The next step on the road to the quark model involved corralling this particle zoo into some semblance of order. In 1961, Gell-Mann11 and Yuval Ne’eman12,13 concurrently applied the principles of group theory to analyze the relationships between these particles in terms of the SU(3) special unitary symmetry group. Gell-Mann famously referred to his scheme as the “Eightfold Way,” in reference to the Noble Eightfold Path of Buddhism.

Rather than diving into the rather esoteric mathematics of group theory, let us take a look at the symmetries involved using a graphical format, plotting the particles in various groupings (by particle family and spin) in terms of strangeness vs. charge. (Actually, we are graphing strangeness vs. a projection of the third component of isospin, with the charges lining up on diagonals.)  This is the most common method of presenting the Eightfold Way, although these graphs never appeared in Gell-Mann’s original paper. Instead, he represented these relationships in a matrix format.
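For those who want to see the “charges on the diagonals” arithmetic explicitly, here is a quick check (my own illustration, using standard textbook quantum numbers rather than anything from Gell-Mann’s paper) of the Gell-Mann–Nishijima relation Q = I3 + (B + S)/2 for a few members of these multiplets:

```python
# The charges lining up on diagonals in the plots below follow the
# Gell-Mann-Nishijima relation Q = I3 + (B + S) / 2.
# Quantum numbers here are standard textbook values (my addition).
particles = {
    # name      (I3,   B,  S, observed Q)
    "proton":   (+0.5, 1,  0, +1),
    "neutron":  (-0.5, 1,  0,  0),
    "Lambda":   ( 0.0, 1, -1,  0),
    "Sigma-":   (-1.0, 1, -1, -1),
    "Xi-":      (-0.5, 1, -2, -1),
    "Omega-":   ( 0.0, 1, -3, -1),
    "K+":       (+0.5, 0, +1, +1),
    "pi-":      (-1.0, 0,  0, -1),
}

for name, (i3, b, s, q_observed) in particles.items():
    q_predicted = i3 + (b + s) / 2
    print(f"{name:8s} predicted Q = {q_predicted:+.0f}, observed Q = {q_observed:+d}")
```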

The spin 0 pseudoscalar meson nonet. (Illustration by the author.)

The spin 0 pseudoscalar meson nonet. (Illustration by the author.)

The spin 1/2 baryon octet. (Illustration by the author.)

The spin 1/2 baryon octet. (Illustration by the author.)

The spin 3/2 baryon decuplet. (Illustration by the author.)

The spin 3/2 baryon decuplet. (Illustration by the author.)

Note the particle at the bottom of the baryon decuplet. At the time that Gell-Mann wrote his paper on the Eightfold Way, that particle had not yet been detected. At a 1962 meeting at CERN where the discovery of the cascade or \Xi baryons was announced, Gell-Mann predicted that a baryon with a strangeness of -3 and charge of -1 would be discovered, as well as predicting what its mass would be.   Sure enough, the \Omega^- particle was subsequently observed in 1964 at Brookhaven, with precisely the properties predicted by Gell-Mann.14

I feel somewhat obligated to point out here that papers from this time period frequently make reference to a value called “hypercharge,” which originally referred to the sum of strangeness and baryon number. This value is generally considered obsolete.

Quarks, Aces, & Partons

With a framework in place for organizing and mathematically analyzing the particle zoo, it did not take much longer for Gell-Mann to construct his quark model, which brings us to his 1964 paper. He postulated the existence of three new fundamental particles, the up quark (S=0, Q=2/3), the down quark (S=0, Q=-1/3), and the strange quark (S=-1, Q=-1/3), as well as corresponding anti-quarks with opposite strangeness and charge values. (The word “quark” was taken from the James Joyce line quoted at the top of this article.) He further postulated that baryons all consist of triplets of quarks or anti-quarks, and that mesons all consist of quark/anti-quark pairs. (In modern quark theory, it is understood that these particles contain a sea of virtual quark/anti-quark pairs constantly being created and annihilated, but always with a net valence content of three quarks for a baryon, or a quark/anti-quark pair for a meson.)
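Here is a quick arithmetic check (my own illustration; the valence-quark compositions are the standard assignments, not something spelled out in the post) that these fractional charge assignments reproduce the observed hadron charges and strangeness:

```python
from fractions import Fraction

# Gell-Mann's charge and strangeness assignments for the three quarks.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}
strangeness = {"u": 0, "d": 0, "s": -1}

baryons = {
    "proton (uud)":  ["u", "u", "d"],
    "neutron (udd)": ["u", "d", "d"],
    "Lambda (uds)":  ["u", "d", "s"],
    "Omega- (sss)":  ["s", "s", "s"],
}
for name, quarks in baryons.items():
    q = sum(charge[x] for x in quarks)
    s = sum(strangeness[x] for x in quarks)
    print(f"{name:15s} charge = {q}, strangeness = {s}")

# A meson is a quark plus an anti-quark, which carries opposite quantum numbers.
# For example, the K+ is a u quark bound to an anti-s quark:
print("K+ (u, anti-s)  charge =", charge["u"] - charge["s"],
      ", strangeness =", strangeness["u"] - strangeness["s"])
```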

Let us take another look at our Eightfold Way diagrams with the constituent quarks for each particle labelled (u=up, d=down, s=strange). Note that the degenerate states at the center of the meson nonet consist of superpositions of quark states.

The spin 0 pseudoscalar meson nonet, with quark composition. (Illustration by the author.)

The spin 0 pseudoscalar meson nonet, with quark composition. (Illustration by the author.)

The spin 1/2 baryon octet, with quark composition. (Illustration by the author.)

The spin 1/2 baryon octet, with quark composition. (Illustration by the author.)

The spin 3/2 baryon decuplet, with quark composition. (Illustration by the author.)

The spin 3/2 baryon decuplet, with quark composition. (Illustration by the author.)

At about the same time, George Zweig, a researcher at CERN who had previously been a student of Richard Feynman, had independently constructed an almost identical model for the composition of hadrons. However, due to CERN policies in place at the time regarding the approval of papers submitted for publication, he was unable to get his paper published in a timely manner. Fortunately, his work survives in the form of internal CERN preprints.15 In Zweig’s model, the constituent particles of hadrons were called “aces.”

In the late 60’s, in an effort to explain experimental data related to deep inelastic scattering experiments, Richard Feynman developed what he called his “parton” model of hadrons. Eventually, it came to be realized that Feynman’s partons were simply quarks travelling at relativistic velocities. What Gell-Mann had arrived at through studying symmetry, Feynman had arrived at by studying hadron cross sections.

It is worth noting that Gell-Mann didn’t consider quarks to be actual particles. For him, they were a convenient mathematical abstraction. However, Feynman’s parton work made it quite clear that quarks were actual “things.”

Next Time

In Part II, we’ll dig into the experimental confirmation of the existence of quarks, gluons, color charge, the charm, top, and bottom quarks, and the development of quantum chromodynamics. Stay tuned.

References


1. M. Gell-Mann, “A Schematic Model of Baryons and Mesons”, Phys. Lett. 8, 214 (1964).


2. Occhialini, G.P.S. and Powell, C.F., “Nuclear Disintegrations Produced by Slow Charged Particles of Small Mass“, Nature 159, 186-190 (1947)


3. Rochester, G.D. and Butler, C.C., “Evidence for the existence of new unstable elementary particles“, Nature, 160, 855 (1947)


4. F. Powell et al., “Observations with Electron-Sensitive Plates Exposed to Cosmic Radiation“, Nature 163, 82-87 (1949)


5. Hopper, V.D. and Biswas, S., “Evidence Concerning the Existence of the New Unstable Elementary Neutral Particle“. Phys. Rev. 80: 1099. (1950)


6. Pais, A., “Some Remarks on the V-Particles“, Phys. Rev.86, 663-672 (1952). DOI: 10.1103/PhysRev.86.663


7. Gell-Mann, M., “Isotopic Spin and New Unstable Particles“, Phys. Rev. 92, 833 (1953).  Bibcode:1953PhRv…92..833G. doi: 10.1103/PhysRev.92.833


8. Gell-Mann, M., “The Interpretation of the New Particles as Displaced Charged Multiplets“, Il Nuovo Cimento, 4 (Supplement 2), 848 (1956). DOI: 10.1007/BF02748000


9. Nakano, T. and Nishijima, K., “Charge Independence for V-particles“. Progress of Theoretical Physics 10 (5): 581 (1953). Bibcode:1953PThPh..10..581N. doi: 10.1143/PTP.10.581.


10. Nishijima, K., “Charge Independence Theory of V Particles“. Progress of Theoretical Physics 13 (3): 285 (1955). Bibcode:1955PThPh..13..285N. doi:10.1143/PTP.13.285


11. Gell-Mann, M. , “The Eightfold Way: A Theory of Strong Interaction Symmetry“, DOE Technical Report, March 15, 1961


12. Ne’eman, Y., “Derivation of Strong Interactions from a Gauge Invariance,” Nucl Phys, 26, 222-229 (1961). DOI: 10.1016/0029-5582(61)90134-1


13. Ne’eman, Y., “Gauges, Groups And An Invariant Theory Of The Strong Interactions” Tel-Aviv : Israel At. Energy Comm. (Aug. 1961) 213 pages


14. Barnes, V. E. et al., “Observation of a Hyperon with Strangeness Minus Three”, Physical Review Letters 12 (8): 204 (1964). Bibcode: 1964PhRvL..12..204B. doi:10.1103/PhysRevLett.12.204.


15. Zweig, G., “An SU3 model for strong interaction symmetry and its breaking“, internal CERN pre-prints, 1964.

For More Information

The history of QCD – CERN Courier, Harald Fritzsch, Sep. 27, 2012

A watershed: the emergence of QCD – CERN Courier, David Gross and Frank Wilczek, Jan. 28, 2013

Murray Gell-Mann, the Eightfold Way, Quarks, and Quantum Chromodynamics

M. Gell-Mann; Y. Ne’eman, eds. (1964). The Eightfold Way. W. A. Benjamin. LCCN 65013009  (Google Books preview)

David Griffiths. (2008) Introduction to Elementary Particles. John Wiley & Sons. (Google Books preview)

Frank Wilczek, “QCD Made Simple“, Physics Today, 53N8 22-28, (2000).  doi: http://dx.doi.org/10.1063/1.1310117

Kaons and other strange mesons

Hadrons, baryons, mesons

Constructing the Universe: the Particle Explosion

“Quarks: Yeah, They Exist”|Quantum Diaries

“Meet the quarks”|Quantum Diaries

“World of Glue”|Quantum Diaries

“QCD and Confinement”|Quantum Diaries


“Right! What’s a Parsec?”

For those who do not recognize the reference being made in the title, it is to an old Bill Cosby comedy routine depicting a conversation between God and Noah. During the course of the conversation, God provides the dimensions of the ark He wants built in cubits, prompting a somewhat incredulous Noah to say, “Right! What’s a cubit?”

Understanding the units used in a given measurement is pretty important business, and one of the most commonly-used units in astronomy and astrophysics is the parsec. A parsec is roughly 3.26 light-years, which translates to about 30.9 trillion kilometers or 19.2 trillion miles. While it is fairly common knowledge that a light-year is defined as the distance light traverses in one year, the origins of the parsec are considerably less well-known. Let’s address that here.

The term “parsec” is thought to have been introduced by British astronomer Herbert Hall Turner in 1913. The parsec is defined as the distance which would induce an annual heliocentric parallax shift of one arc second.

“Right!”

Okay, perhaps I should back up a bit and define some parts of that definition.

Parallax is a key component of the cosmic distance ladder, the chain of techniques used to measure distances on astronomical scales. No one technique in the distance ladder is effective over all distance scales, but overlap between the ranges over which each technique is valid provides continuity. Parallax provides effective measurements of distances out to a range of about 1,600 light-years. Beyond that, the shift becomes too minuscule to be accurately measured, although the ESA’s Gaia mission (just launched on December 19) should extend that effective range ten-fold. Measurements of even greater distances are largely dependent upon fixed relationships between the luminosity and periodicity of variable stars. (For more about the discovery of such techniques, see my earlier article “Henrietta Leavitt and the Cepheid Variables”.)

Conceptually, parallax is pretty straightforward to understand, since it can be related to everyday experience. To illustrate, let’s do a simple experiment. Close your right eye and hold your thumb out in front of you. Note the position of your thumb relative to some distant landmark. Now, leaving your thumb unmoved, open your right eye and close your left eye. Your thumb will appear to have made a jump to the left relative to the “fixed” background. Of course, you know that your thumb hasn’t budged, but is merely being viewed from a slightly different perspective as seen from each eye. If we wanted to get down to brass tacks, we could measure the angle that the thumb appears to have shifted, measure the distance between our eyes, and use a simple bit of trigonometry to measure the distance to our thumb.

Image from Wikipedia. Created by Srain at English Wikipedia.

Astronomers can use the same idea to measure the distances to stars, using even more distant stars as a relatively fixed background against which they can measure the parallax shift. Of course, to get a measurable shift in angle, they need as long a baseline (corresponding to the distance between your eyes in the example given earlier) as they can get. The best they can do is to use the diameter of the Earth’s orbit around the sun. Make a measurement. (In other words, snap a photo of the star in question through a telescope.) Wait six months. Make another measurement. Apply some trigonometry, knowing the diameter of the Earth’s orbit (about 300 million kilometers), and voilà! We have the distance to the star under investigation. (I won’t bother going into how those calculations are done here. That can be seen in detail in the Wikipedia article on the parsec.)

Nicolaus Copernicus first came up with this idea, but he didn’t have access to the technology to make it happen.  The first astronomical parallax measurement was made in 1838 by Friedrich Wilhelm Bessel (of Bessel function fame).

Well, then.  Now that we have the definition of parallax out of the way, let’s get on to arc second. This is merely a unit of angular measurement.  A circle can be divided into 360 degrees.  Each degree can then be subdivided into 60 arc minutes. An arc second is simply 1/60th of an arc minute, or 1/3600th of a degree.

Now we have all of the pieces in place to understand what a parsec is. Imagine a hypothetical star out there whose distance is just right to cause a parallax displacement of one arc second. The distance to that imaginary star would be one parsec.
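To put numbers on all of this, here is a short sketch (my own, using standard values for the astronomical unit and the light-year) that does the trigonometry explicitly and then applies the resulting rule of thumb, distance in parsecs = 1 / parallax in arc seconds:

```python
import math

AU_KM = 1.496e8      # one astronomical unit (the Earth-Sun distance), in km
LY_KM = 9.461e12     # one light-year, in km

def parsec_in_km():
    """The distance at which a 1 AU baseline subtends an angle of one arc second."""
    one_arcsec_rad = math.radians(1.0 / 3600.0)
    return AU_KM / math.tan(one_arcsec_rad)

def distance_parsecs(parallax_arcsec):
    """The definition boiled down to a rule of thumb: d [pc] = 1 / p [arcsec]."""
    return 1.0 / parallax_arcsec

pc_km = parsec_in_km()
print(f"1 parsec is about {pc_km:.3e} km, or {pc_km / LY_KM:.2f} light-years")

# Proxima Centauri, the nearest star to the sun, shows a parallax of about
# 0.77 arc seconds:
d_pc = distance_parsecs(0.77)
print(f"Proxima Centauri: about {d_pc:.2f} pc ({d_pc * pc_km / LY_KM:.1f} light-years)")
```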

Right? Right.


Dr. David P. Anderson Lecture: “Do You Have What It Takes to Be a Citizen Scientist?”

Last night, I had the pleasure of attending an Austin Forum on Science and Technology lecture delivered by Dr. David P. Anderson of the University of California, Berkeley. The subject of the talk was technology-enabled citizen science, and Dr. Anderson is well positioned to address this topic. He was one of the co-founders of the SETI@home project, and went on from there to establish the Berkeley Open Infrastructure for Network Computing (BOINC) project.

It was pointed out during the introductions that the primary sponsor/partner for this lecture series, the Texas Advanced Computing Center (TACC), has contributed about 1000 years of CPU time to distributed BOINC projects via IBM’s World Community Grid. (In the interest of full disclosure, I should note that TACC is one of my customers for my day job as an Exchange administrator. In fact, another sponsor, the University of Texas at Austin’s Information Technology Services, or ITS, is my employer.)

The timing of this lecture could not have possibly been better. I’ve actually been preparing a posting about citizen science (which I hope to put out in the next few days), and I gleaned several useful tidbits from the lecture. Here are the broad brush strokes of the talk.



Bits & Pieces

There have been quite a few tidbits of physics news recently. Here is a quick recap:


Even more Susskind Lectures

My backlog of Leonard Susskind video lectures to watch continues to grow. The following courses have been added:

There are also a handful of one-off lectures listed in a new “Miscellaneous Lectures” section:

That last lecture deals with a recent proposal by Susskind and his fellow string-theory pioneer Juan Maldacena (discoverer of the AdS/CFT correspondence) of a relationship between quantum entanglement and wormholes. This hypothesis provides a possible resolution of the “AMPS” black hole firewall paradox.

  • Juan Maldacena, Leonard Susskind, “Cool horizons for entangled black holes”. arXiv:1306.0533v2 [hep-th]