BICEP2 Redux: How the Sausage is Made

An ongoing problem in communicating science to the general public is the prevalence of widely-held misconceptions about how science actually works. A case in point is the March 17 announcement by the BICEP2 Collaboration regarding the detection of B-mode polarization in the Cosmic Microwave Background, and the events which have unfolded since then.

All too often, news stories and blog posts will trumpet some announcement with sensational headlines like “Scientists Say Cheap, Efficient Solar Cells Just Around the Corner”, or “Scientists Close in on Cure for Cancer.” Many people take such announcements at face value and consider the case closed. The work has been done.  The reality of the situation, however, is that the initial announcement of a discovery or breakthrough is just the beginning of the hard work, breathlessly hyped headlines notwithstanding.

How Science Actually Works (or at least how it is supposed to work)

Once a researcher or a team of researchers completes their initial work, they write up the results, including any experimental details needed to reproduce the work, in a paper and submit it for publication in a scientific journal. It is at this point that the “peer review” process kicks in. Peer review really is the “special sauce” of the scientific method, and one of the least understood aspects of science among the general public. It is one of the key self-correction mechanisms of science, a way to keep errors from creeping in.

When a journal receives a submission for publication, they pass along copies of the proposed article to subject matter experts in the specific area of research covered by the paper. These are the peer reviewers. (Ostensibly, the reviewers are anonymous. In practice, however, the number of specialists in a given area of research tends to be so small that they all know each other, so it isn’t hard to tell who is doing what in terms of peer review.) The reviewers are then expected to go over the proposed paper with a fine-toothed comb, looking for errors or inconsistencies. They will either recommend that the paper not be published, recommend that it be published as-is, or offer recommendations for changes or corrections to make the paper suitable for publication.

So, once an article passes muster in the peer-review process, it gets published. In the case at hand, the actual paper describing the BICEP2 results announced in March has now been published in Physical Review Letters:

P. A. R. Ade et al. (BICEP2 Collaboration), “Detection of B-Mode Polarization at Degree Angular Scales by BICEP2”, Phys. Rev. Lett. 112, 241101 – Published 19 June 2014. DOI: http://dx.doi.org/10.1103/PhysRevLett.112.241101

This is where the next step in the process kicks in: replication of results. It isn’t enough for a paper to say, “Hey, we’ve observed Whatsits decaying to a new particle, which we’ve dubbed Plaitons.” Someone else has to try to reproduce the results (putting their own papers through the peer review process). Another team might give it a try and say, “Nah, we’re not seeing it. Did you calibrate your framistasis properly?” Or, perhaps, “Yeah, we see something like that as well. Plaitons may be a real thing.” And just one or two follow-up papers like this don’t necessarily settle the issue. It can take years, or even decades, for the evidence to be strong enough or unambiguous enough for a consensus to build among the community of subject matter experts. (It took decades for particle physicists to accept the idea that the deficit of solar electron neutrinos detected in the late sixties at the Homestake Gold Mine was due to neutrinos having a tiny amount of mass, allowing them to transform into different neutrino types. But that is a story for another day.) Again, like peer review, this is part and parcel of the self-correcting mechanism of the scientific method. It is all about keeping us from fooling ourselves.

For replication or refutation of the BICEP2 results, the scientific world eagerly awaits results from either the Planck Collaboration (which may come this fall), or from SPTpol (an instrument at the other end of the building that houses the BICEP2 instrument). It is only through validation from independent teams that the BICEP2 results can go from “Ooh, that is interesting” to physicists nodding sagely and saying “Yeah, we’ve got this” and placing serious wagers on Nobel Prizes.

I should note here that it is not essential that the replication phase come sequentially after publication of initial results. Sometimes, independent teams working in parallel come to the same conclusions at the same time. This was the case with the Higgs boson discovery, in which the ATLAS and CMS teams were working independently with separate detectors, not sharing data with one another until after the fact (in order to keep from contaminating each other’s work with errors or mistaken assumptions). Ditto for the discovery of the acceleration of cosmological expansion due to dark energy.

I should also point out that press conferences and press releases aren’t really part of this process.  They happen, but they aren’t part of the process.  When a researcher or team makes a big breakthrough, it isn’t uncommon for the associated institution to hold a press conference or issue a press release (with the latter often botching the story as badly as the press tends to mangle science stories, but that is a rant for another time). However, these things have little to do with advancing the process, and tend to be done for some combination of the following purposes:

• Promoting the prestige of the institution or institutions under which the research is performed. (“Hey, see what cool stuff our faculty are doing. Send your kids and money to our university, and they can do cool stuff like this too.”) Sometimes, this is tied to announcing a result before a competing team announces similar results.
• Public outreach.  Research institutions have a legitimate interest in communicating what they do to the public, even if the message gets mangled by the media.  (Sorry, I should still save that rant.)
• Funding/politics. (“Hey, Congress, we are getting results. Please don’t cut our funding.”)  BICEP2, by the way, is funded by a grant from the National Science Foundation.
• Rapid communication to the scientific community. Remember the big LHC meeting where the Higgs announcement was made on July 4, 2012?  It was pretty technical in content and aimed at scientific peers.  They had embargoed their results for about as long as they could and didn’t want to wait for the next big science conference or for their papers to get published, and wanted to get the actual results out before the rumor mill took over. However, due to the hype that had been built up around it in the press and the blogosphere, LOTS of lay-people watched the live-stream, even going so far as to endure a PowerPoint presentation using Comic Sans. (Really, Fabiola? Oh, well.)

Of course, sometimes science by press conference backfires, as in the case of the claimed discovery by Fleischmann and Pons of “cold fusion” back in 1989. That “breakthrough” turned out to be a bunch of hot air. Well, hot water, anyway, not to mention poor calorimetry. I recall eagerly following the ensuing debates on USENET, hoping that there was something to it, but, alas, it was not meant to be. Failure to replicate put a nail in that coffin. The self-correcting mechanism of science asserted itself. Evidence is the final arbiter of reality, trumping any degree of wishful thinking we might hold.

Crowd-Sourcing Peer Review

While the scientific process generally follows the form outlined above, the BICEP2 case has included a somewhat novel added element. With a preprint available on Cornell’s arXiv preprint server since the initial announcement of the results, the preliminary peer review process has effectively extended well beyond the official reviewers. Analysis and criticism of the work has come in from numerous quarters, to such an extent that the final published form of the paper incorporates feedback beyond that provided by the official review process. In this case, the peer review process was essentially crowd-sourced.

It isn’t as if such a phenomenon couldn’t have happened before. The arXiv preprint service has been available in one form or another since the early nineties. However, a tipping point seems to have been reached in this instance, perhaps due to the hype and media attention which accompanied the original announcement. Researchers well beyond the circle of scientists who would typically be involved in the formal process have taken an interest, and they have dug in, looking for, and potentially finding, flaws in the original work. The possible flaws that have been identified (which I’ll go into shortly) have been insufficient to invalidate the conclusions, but they do somewhat weaken confidence in the results, making the outcome of replication efforts all the more eagerly awaited. The important point, though, is that more eyeballs than ever have been scrutinizing the results, something which can only bode well for the process.

Piling On

It did not take long after the initial announcement for critiques to start appearing. One of the earliest concerned the possibility that some of the observed B-mode polarization might be a foreground effect due to something called galactic radio loops, a phenomenon caused by magnetized dust interacting with our own galaxy’s magnetic field.

Then on May 12, physicist and blogger Adam Falkowski (a.k.a. “Jester”) revealed on his Résonaances blog a rumor of an even bigger potential problem with the BICEP2 analysis.  One of the biggest challenges for the BICEP2 team has been to filter out B-mode polarization effects due to foreground dust. One component of the procedure that was employed was to use data on foreground contributions collected by the Planck team. However, Planck hasn’t yet released the raw data for this, and the BICEP2 team had to effectively “scrape” the data from a graph on page 6 of this presentation PDF. (This is not an ideal scenario, and efforts are reportedly underway to make the raw Planck data available to the BICEP2 team to improve their analysis.)

The rumor revealed by Jester is that the BICEP2 team had misinterpreted the content of this graph, thinking that it represents polarization contribution only from dust, when in fact it represents ALL foreground contributions. (Note where the slide says “Not CIB subtracted.”) If this is the case, some or all of the effect reported might not be present.

The net result of these (and other) criticisms can best be summed up by the following lines added to the abstract of the final paper:

However, these models are not sufficiently constrained to exclude the possibility of dust emission bright enough to explain the entire excess signal…  Accounting for the contribution of foreground dust will shift this value downward by an amount which will be better constrained with upcoming data sets.

Ultimately, more data and observations by independent teams will be needed to settle this once and for all. And if the critiques sometimes seem harsh, don’t worry about it.  That is how the process works. Science is to some degree an adversarial process, although the slings and arrows aren’t really directed at the researchers themselves, but their results. And it is the results that survive such harsh scrutiny that move us forward.

As an aside, I note that this entire kerfuffle has been transpiring amidst the 50th anniversary of the initial discovery of the Cosmic Microwave Background.

For some useful commentary on these latest results and where we go from here, have a look at these articles:


BICEP2 Takes a Peek at Cosmic Inflation

Before I dive into trying to explain Monday’s big announcement, I would like you to take a moment to watch this video showing Chao-Lin Kuo of Stanford University, the designer of the BICEP2 detector, revealing to Andrei Linde, one of the architects of inflationary cosmology, the big news. Seriously, take a look.

You can see Linde’s knees weaken when the news sinks in that his life’s work has been validated, bringing to mind the image of Peter Higgs wiping tears from his eyes on July 4, 2012 as the five-sigma discovery of his boson was announced at CERN. Linde’s wife, Renata Kallosh, also a physicist, was also visibly moved. She knew quite well how much this moment meant to him.

The Announcement

So what was this breakthrough that had such an impact on Linde and his wife, not to mention making cosmologists and astrophysicists around the world giddy with delight?

On Monday, March 17, 2014, the BICEP2 collaboration (an NSF-funded experiment jointly operated by Stanford/SLAC, the Harvard-Smithsonian Center for Astrophysics, Caltech/JPL, UMN, and other institutions) announced the detection of primordial B-mode polarization in the Cosmic Microwave Background, consistent with the theoretically-predicted imprint of primordial gravitational waves amplified in scale by cosmological inflation.

“What?” I hear you cry.

Okay, time for me to back up a bit and try to put that into plain English.

But What Does It Mean?

But before I take a stab at explaining, take a look at this very quick and dirty explanation by Henry Reich over at MinutePhysics. When I try to explain this, I tend to gesture a lot. That doesn’t exactly come through in a blog post, so the animations in this video will have to serve in the place of such gestures.

In a similar vein, you might wish to take a look at this comic explainer from PhD Comics.

So that’s the 30,000 foot view.  Let’s start to zoom in a little bit. To do so, I’ll need to fill in a bit of background, making sure that we are all on the same page with definitions and concepts. (Bear with me for a bit, since there is a LOT of ground to cover.) So let’s start at the beginning. The VERY beginning.

The Big Bang (A Ridiculously Brief History)

A mere century ago, the accepted view of cosmology was pretty simple. As far as any astronomer knew, our Milky Way Galaxy was the only galaxy there was, occupying a simple, static, relatively unchanging universe. Oh, sure, astronomers had long known of “spiral nebulae,” and the 18th century philosopher Immanuel Kant had even speculated that they might be “island universes,” but astronomers had no good evidence that they actually lay beyond the bounds of our own Milky Way.

Then, in 1915, Einstein developed his general theory of relativity, which described gravity in terms of the curvature of space-time. However, it didn’t take long for it to become clear that his gravitational field equations on a cosmic scale could lead to only two solutions: an expanding universe or a contracting universe. To make his equations consistent with the prevailing idea of a static universe, he tweaked them by incorporating a “cosmological constant” term which allowed for a static solution.

Now, I have previously discussed how Henrietta Leavitt had, in 1908, discovered a correlation between the intrinsic brightness and the periodicity of a certain class of variable stars known as Cepheid variables. By the 1920s, several astronomers, including Vesto Slipher, Gustaf Strömberg, and Edwin Hubble, had been using this relationship as a “yardstick” (along with other techniques) for measuring the distances to spiral nebulae. Not only had these measurements firmly established the fact that the spiral nebulae were, in fact, distinct galaxies a great distance beyond the Milky Way, but it was clear from comparing the distances with their redshifts that they were, on average, moving away from us, and that the greater their distance, the faster they were moving. The universe was expanding, and Einstein’s cosmological constant was not needed!
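That distance-velocity relationship is what we now call Hubble’s law, v = H₀d. A minimal sketch in Python, using a roughly modern value of H₀ (about 70 km/s per megaparsec; this figure is my illustrative assumption, not from the text, and Hubble’s own 1929 estimate was several times larger due to calibration errors):

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
H0 = 70.0  # km/s per megaparsec (approximate modern value; illustrative)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at roughly 7000 km/s.
print(recession_velocity(100))  # 7000.0
```

The farther away a galaxy is, the faster it recedes, which is exactly the pattern Slipher, Strömberg, and Hubble saw in the redshift data.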

As I’ve also previously written, Monsignor Georges Lemaître, a Belgian priest and astronomer, had studied these results, along with Einstein’s general relativity. By 1931, he had taken the facts at hand to their logical conclusion: further back in time, the universe was denser and hotter. This was the beginning of the Big Bang Theory of cosmology. That label was derisively coined in 1949 by the noted astronomer Sir Fred Hoyle, who stubbornly clung to the notion of a steady-state universe in spite of the data.

Remember, boys and girls, evidence is the final arbiter of reality, not the opinions of an authority figure, no matter how intelligent, highly-regarded, or accomplished they are. And all knowledge is provisional, subject to updates based upon new evidence. Even legendary intellects such as Einstein and Hoyle were wrong from time to time.

The Cosmic Microwave Background (CMB)

The statement emphasized in the previous section — that further back in time, the universe was denser and hotter — is the core of the Big Bang Theory. Filling in the details to form fully-fledged cosmological models has of course been the challenging part, with the construction of more and more detailed mathematical models and the comparison of them with observational data. One of the major predictions of big bang cosmology is that the very early universe was so hot and dense that it would have been permeated by plasma (ionized gas). This plasma would have been so dense that photons would not be able to travel through it very far before being scattered or absorbed by electrons or nuclei, thus rendering this early universe opaque. However, as the universe expanded and cooled, a point would eventually be reached at which electrons could finally bind to nuclei, allowing the material permeating space to transition to a neutral gas that is largely transparent to light. Cosmologists refer to this event as “recombination,” currently estimated to have taken place when the universe was a mere 378,000 years old. Once this transition took place, photons were free to travel along their merry way, filling the universe with light. (Cosmologists often refer to the last vestiges of this cooling plasma, as these photons bounced off of it for the last time, as the “last scattering surface.” This concept figures prominently in the experimental results that we are building up to, so keep it in mind.)

The last scattering surface (also referred to as the cosmic photosphere) would have had a temperature in the neighborhood of 3000 K. Now, the radiation from an idealized thermal absorber and emitter (called a “black body“) has a spectrum described by Planck’s law (approximated at long wavelengths by the classical Rayleigh-Jeans law). While the peak of the spectrum from a 3000 K gas would be in the infrared, plenty of it would still fall in the visible range. So why don’t we see the light from the last scattering surface filling the night sky? Simply put, as the universe continued expanding, the wavelengths of this once-bright light permeating the cosmos were stretched out, dragging this primordial light into the microwave portion of the spectrum.
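The size of that stretch can be sketched with Wien’s displacement law, which relates a black body’s temperature to its peak wavelength. The 3000 K and 2.7 K temperatures come from the text; the Wien constant is the standard CODATA value:

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.897771955e-3  # m*K (CODATA value)

def peak_wavelength(temp_kelvin):
    """Peak wavelength (in meters) of a black body at the given temperature."""
    return WIEN_B / temp_kelvin

lam_then = peak_wavelength(3000.0)  # ~9.7e-7 m: near-infrared, just past visible red
lam_now = peak_wavelength(2.7)      # ~1.1e-3 m: about a millimeter, in the microwave band

# The stretch factor equals the ratio of temperatures, roughly 1100 --
# how much space has expanded since the last scattering surface.
print(lam_now / lam_then)
```

Because wavelength scales inversely with temperature, the factor-of-1100 stretch in wavelength is the same as the factor by which the radiation has cooled from 3000 K to 2.7 K.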

For the benefit of those of you whose eyes just glazed over from that explanation, let me put it in terms that are a bit easier to grasp. Imagine a blacksmith pulling a chunk of iron from his forge after he’s gotten it seriously hot. It starts off glowing white hot, since the light emitted from it has components from across the entire visible spectrum. However, as the iron cools, it starts to glow red. The peak of the spectrum has shifted to the longer wavelengths. Eventually, it stops glowing in visible light, but is still hot to the touch. At this point, the light it is emitting is in the infrared spectrum.  In other words, color correlates with temperature, and the light from recombination has, over the eons, shifted further into the infrared and into the microwave regions of the spectrum as the universe has expanded and cooled.

That is precisely where it was found in 1964 by Arno Penzias and Robert Wilson. The best current measurements of this cosmic microwave background (CMB), which they found filling the sky, show a spectrum corresponding to a black body with a temperature of 2.7 K. The universe has cooled quite a bit in the 13.8 billion years that have elapsed since the recombination epoch.

Below is a graph of the CMB spectrum measured by the FIRAS instrument aboard the COBE satellite in the early 90s, overlaid upon the black body curve predicted by theory. The error bars are so tiny and the data points match theory so closely that the data points are indistinguishable from the theoretical curve in this graph (thus serving as the inspiration for this xkcd comic).

FIRAS/COBE CMB spectrum overlaid upon the black body spectrum predicted by theory.
Image courtesy of NASA.

Inflation

One thing that quickly became apparent in the early days of studying the CMB was just how incredibly uniform it is.  No matter where one looks in the sky, the CMB has the same temperature. (That said, there are TINY fluctuations in the CMB, on the order of micro-Kelvins, and those are critical to this entire discussion, but we’ll get to them shortly.) This had cosmologists somewhat baffled. After all, how could portions of the CMB on opposite sides of the sky, which had never been in proximity with one another long enough to reach thermal equilibrium, be the same temperature?

In the mid- to late-1970s, Alan Guth was trying to figure out why magnetic monopoles had never been observed, despite being predicted by several attempts at unifying the fundamental forces of nature. He was working with some ideas about “cosmic phase transitions” pioneered earlier in that decade by Andrei Linde (the gentleman from the video earlier), and, on December 7, 1979, had what he described in his notebook as “a spectacular realization.” He realized that a period of exceedingly fast expansion of spacetime in the early universe (inflation), many orders of magnitude more rapid than the currently-observed expansion, would not only fix the monopole problem, but would also address the overall uniformity of the CMB, as well as taking care of an old cosmological problem described by Robert Dicke in 1968 as the “flatness problem.” The math worked out beautifully for addressing all of these issues. Guth’s model was not without its own problems, however. In the early 1980s, Linde refined Guth’s model, molding it into the modern model of cosmic inflation.

Assuming that inflation proves to be correct, the underlying scalar inflaton field that would have driven it remains one of the major gaps in the standard model of particle physics, along with a quantum description of gravity (more on that in a moment, as the results we are building up to have bearing on that subject), grand unification of the strong and electroweak gauge forces, neutrino mass, the hierarchy problem, and an explanation for dark energy and dark matter (not to be confused with one another). Those are the major blank spots on our map of knowledge of physics. Here there be dragons.

The History of the Universe (Image courtesy of BICEP/KECK)

Precision Measurements of the CMB

Although inflationary cosmology was originally inspired by the observed uniformity of the CMB, inflation theory actually predicts that there should be a certain degree of non-uniformity in the CMB, in the form of extremely tiny fluctuations in temperature. By tiny, I mean millionths of a Kelvin. These irregularities would be caused by rapid inflation magnifying tiny quantum fluctuations to macroscopic scales, resulting in slight variations in the density of the last scattering surface. (Remember the last scattering surface? We’ll be returning to it again.)

Over the years, we’ve managed to build more and more sensitive instruments for detecting such fluctuations, and they are in fact there:

COBE map of the CMB
Image courtesy of NASA

WMAP map of the CMB
Image courtesy of NASA

Planck 2013 map of the CMB
Image courtesy of NASA

As it turns out, the Planck results represent the highest resolution image we can ever get of the CMB. Here, we are not bumping into the limits of the technology, but rather an inherent fuzziness in the data caused by the long wavelengths being observed. But mapping these temperature fluctuations is not the only thing Planck was designed to do. It was also designed to collect information about the polarization of the CMB, which is at the heart of Monday’s announcement. But the BICEP2 team beat the Planck team to the punch. The Planck data is still being analyzed, and it will be very interesting to see the results of that analysis in comparison with the BICEP2 results. (Remember, reproduction of results is a critical part of the scientific method.)

In any case, the data collected by COBE, WMAP, and Planck have been a treasure trove for cosmologists and astrophysicists. This data has helped refine estimates of the age of the universe, the proportions of normal baryonic matter, dark matter, and dark energy, and Hubble’s constant, as well as the overall curvature of spacetime on cosmic scales. (The visible universe appears to be roughly flat, which is consistent with inflationary theory.) This data has even been used to place constraints on neutrino mass and the number of neutrino flavors (there are three, and Planck backs that up), and to constrain various proposed models for dark matter.

One would not naively expect all of those speckles in the images above to yield so much useful information, but scientists have plenty of ways of slicing and dicing the data in useful ways. For example, just as an audio engineer can decompose a sound signal into a spectrum showing the strength of each component frequency (by taking a Fourier transform), astrophysicists can decompose the CMB data in terms of something called spherical harmonics (the same mathematical constructs used to describe the quantum probability wave of an electron in an atom) to produce an angular power spectrum. Such a power spectrum shows the intensity of the fluctuations as a function of their angular size. And in that analysis lies a wealth of useful information.
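The audio-engineer analogy can be made concrete with a one-dimensional toy version. Here a 1-D Fourier transform stands in for the spherical-harmonic decomposition used on the sky, and the 50 Hz and 120 Hz tones are arbitrary choices for illustration:

```python
import numpy as np

# Decompose a signal into component frequencies and look at the power in each --
# the 1-D analogue of turning a CMB map into an angular power spectrum.
rate = 1000                      # samples per second
t = np.arange(0, 1.0, 1 / rate)  # one second of samples
# A signal with two hidden tones, 50 Hz and 120 Hz:
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

power = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest components recover the hidden tones.
top_two = freqs[np.argsort(power)[-2:]]
print(sorted(top_two))  # [50.0, 120.0]
```

The CMB analysis does the same sort of thing, except the “frequencies” are angular scales on the sphere, and the peaks in the resulting power spectrum encode the physics of the early universe.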

One consequence of these density fluctuations in the early universe is that the matter that would go on to form the galaxies would be expected to be distributed through the universe in a web of filaments. Computer simulations of the evolution of the large scale structure of the universe based upon this assumption are consistent with what is actually observed. Compare, for example, the Millennium Simulation with the results of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.

Polarization

Stay with me. We are almost there.  We have just a few more fundamental concepts to go over before we get to the big show.

A Tale of Two Modes

The two polarization modes expected in the CMB are derived by applying Helmholtz decomposition (a.k.a., the fundamental theorem of vector calculus) to the vector field describing the polarizations. Using this decomposition, such a vector field can be described as the sum of two components, a curl-free component and a divergence-free component.

If $\displaystyle \kappa$ is the density field producing the polarization, and $\displaystyle \vec{u}$ is the resulting polarization vector at any given point in the vector field, we may observe that $\displaystyle \nabla\kappa = \vec{u}$. Separating the density field into the aforementioned components, we obtain:

$\displaystyle \nabla^2\kappa^E = \nabla\cdot\vec{u}$, $\displaystyle \nabla^2\kappa^B = \nabla\times\vec{u}$

Here, we are using the label E to denote the scalar potential component by analogy with an electric field, and B to denote the vector potential component by analogy with a magnetic field.
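The decomposition above can be sketched numerically (this is purely illustrative, not from the paper): a field built from a scalar potential carries no curl (E-like), while a purely rotational field carries no divergence (B-like).

```python
import numpy as np

# E/B split in miniature: a pure-gradient ("E-like") field has zero curl,
# and a pure-rotational ("B-like") field has zero divergence.
n = 64
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
dx = x[0, 1] - x[0, 0]  # uniform grid spacing

# E-like: gradient of a scalar potential kappa = x^2 + y^2
ux_e, uy_e = 2 * x, 2 * y
# B-like: rotational field (-y, x)
ux_b, uy_b = -y, x

def divergence(ux, uy):
    return np.gradient(ux, dx, axis=1) + np.gradient(uy, dx, axis=0)

def curl_z(ux, uy):
    return np.gradient(uy, dx, axis=1) - np.gradient(ux, dx, axis=0)

# Both quantities are essentially zero (up to floating-point noise):
print(np.abs(curl_z(ux_e, uy_e)[1:-1, 1:-1]).max())     # E-mode carries no curl
print(np.abs(divergence(ux_b, uy_b)[1:-1, 1:-1]).max())  # B-mode carries no divergence
```

A general field is a sum of these two pieces, which is what lets the CMB polarization map be split cleanly into E-mode and B-mode parts.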

This is as good a time as any to discuss polarization. The main contexts in which you’ve likely encountered this concept are polarizing sunglasses and 3D movies. Light, as described by classical electrodynamics, consists of an electric field oscillating at right angles to a magnetic field, with both fields oscillating at right angles to the direction in which the light is traveling. Most thermal (heat-based) light sources emit light that has those fields pointing at a mix of random angles (but still perpendicular to the direction the light is traveling). However, if the light passes through a polarizing filter, which blocks all light except that which has the electric field oscillating in a certain direction, we have what is called linearly polarized light. (There is also another mode of polarization called circular polarization, where the direction that the electric field is pointing follows a corkscrew path as the light travels. That is what is used with 3D movie glasses.)
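The behavior of such a filter is quantified by Malus’s law, a standard optics result (not stated in the text): an ideal linear polarizer transmits a fraction cos²θ of already-polarized light, where θ is the angle between the light’s polarization and the filter’s axis.

```python
import math

# Malus's law: I = I0 * cos^2(theta) for an ideal linear polarizer,
# where theta is the angle between the incoming polarization direction
# and the filter's transmission axis.
def transmitted_intensity(i0, theta_deg):
    """Intensity transmitted through an ideal polarizer at angle theta_deg."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_intensity(1.0, 0))   # 1.0: aligned, all light passes
print(transmitted_intensity(1.0, 45))  # 0.5: half the intensity
print(transmitted_intensity(1.0, 90))  # ~0: crossed polarizers block the light
```

This is why rotating polarizing sunglasses against glare (or against a second polarizer) makes the transmitted light fade smoothly to black.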

But another way to get linear polarization of the light is for it to bounce, or “scatter” off of something. Light that has been scattered through the atmosphere or bounced off of water or glass is generally horizontally polarized, which is why polarizing sunglasses are able to cut glare. This process of light getting polarized by bouncing off of something is particularly strong when the matter scattering the light is charged, via a process called Thomson scattering (first explained by J. J. Thomson, the discoverer of the electron).

Recall that I earlier mentioned that the plasma cooling to neutral gas during the recombination epoch is sometimes referred to as the “last scattering surface.” Because of Thomson scattering from this last bit of plasma from the earlier universe, the microwaves from the CMB are polarized, and encoded in the patterns of that polarization (with respect to the fluctuations in the CMB) is information about how the last scattering surface was moving at the time that the light last bounced off of it.

Those patterns of polarization can be mathematically analyzed and broken down into two independent modes: E-mode polarization and B-mode polarization. (The names come from analogy with the vector behavior of electric (E) and magnetic (B) fields in classical electrodynamics. For a slightly more technical treatment, see the sidebar “A Tale of Two Modes.”) E-mode polarization, which is the more straightforward type, should be either directly parallel to or perpendicular to the boundaries of temperature fluctuations in the CMB. B-mode polarization, which would manifest as being at angles across those boundaries, can be caused by one of two things: gravitational lensing of E-mode polarization, or the presence of gravitational waves in the last scattering surface.

Gravitational Waves

That’s right, gravitational waves, subtle fluctuations in the fabric of spacetime. They are really the last remaining unverified major prediction of general relativity. We are pretty sure from indirect evidence that they exist, based upon studies of the orbital decay of the Hulse-Taylor binary pulsar (a pair of neutron stars orbiting one another), but they have yet to be directly detected. There are efforts to detect gravitational waves underway or under construction, but it is an amazingly difficult task. Imagine using an interferometer with 4-kilometer (2.5-mile) arms to try to detect a brief change in the length of that apparatus of less than the diameter of a proton. Yeah, that difficult.
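The arithmetic behind that comparison is simple: a gravitational wave of strain h changes an arm of length L by ΔL = h·L. The strain value below (~10⁻²¹) is a representative order-of-magnitude figure I’m assuming for illustration, not a number from the text:

```python
# Scale of the detection challenge: delta_L = strain * arm_length.
strain = 1e-21              # representative gravitational-wave strain (illustrative)
arm_length_m = 4000.0       # a 4 km interferometer arm
proton_diameter_m = 1.7e-15 # approximate proton diameter

delta_l = strain * arm_length_m
print(delta_l)                      # ~4e-18 m
print(delta_l / proton_diameter_m)  # thousandths of a proton's width
```

A length change thousands of times smaller than a proton, sustained for a fraction of a second: that is the needle these instruments are built to find.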

But if indications of primordial gravitational waves (which had been amplified to tremendous scales by inflation) can be detected, it would be a tremendous feather in the cap for both general relativity and for inflation theory.  After all, Linde actually predicted that such indications would be there decades ago. What’s more, such indications could also be taken as a hint of quantum gravity, since the initial fluctuations (prior to being amplified by inflation) would have been at the quantum scale. Proving the quantum nature of gravity (or that it even IS quantum in nature, as opposed to an emergent pseudo-force as described by general relativity) has long been a challenge.  Thus far, efforts at constructing a quantum theory of gravitation have tended to result in mathematical divergences which have stymied generations of physicists.

The challenge here isn’t in detecting E-mode or B-mode polarization, or in sorting them out from each other. Those things have been done before: E-mode polarization of the CMB was detected by DASI in 2002, and B-mode polarization was detected by the South Pole Telescope in 2013. The trick is in sorting out the two possible sources of B-mode polarization. That means mapping out sources of gravitational lensing and using that information to filter out the type of B-mode polarization that is not of interest, leaving just the primordial B-mode polarization due to gravitational waves.

The Breakthrough

Whew! Okay, we’re there.  With that crash course in cosmology out of the way (without even bothering to take a detour into the strange topics of dark energy and dark matter), we are finally ready to talk about the big announcement.

The Dark Sector Lab (DSL), located 3/4 of a mile from the Geographic South Pole, houses the BICEP2 telescope (left) and the South Pole Telescope (right). (Steffen Richter, Harvard University, via BICEP/KECK.)

Officially, BICEP doesn’t stand for anything; but, unofficially, it stands for “Background Imaging of Cosmic Extragalactic Polarization,” and this is the team that beat Planck to the release of B-mode polarization data. The BICEP2 telescope, located at the Dark Sector Lab at the Amundsen-Scott South Pole Station, collected polarization data that was used to generate the following graph showing primordial B-mode polarization for a small patch of sky:

B-mode pattern observed with the BICEP2 telescope, with the line segments showing the polarization from different spots on the sky. The red and blue shading shows the degree of clockwise and anti-clockwise twisting of this B-mode pattern. (Image courtesy of BICEP/KECK.)

Of course, that graph doesn’t mean a great deal without a little interpretation. It is the twisty bits that are of interest. What it means is that primordial B-mode polarization has definitely been found.

For the more technically inclined, the value of r, the ratio of tensor to scalar amplitudes, comes out to 0.2, right in line with predictions from Linde, yet strangely at odds with constraints placed on r by the Planck results released thus far. (This means that it will be even more interesting to see what the final analysis of the Planck data reveals when it finally comes out.) The data also indicate that the energy scale of the inflation process is about 2×10^16 GeV (within a couple of orders of magnitude of the Planck scale of 2×10^18 GeV).
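As a back-of-the-envelope check (my own, not a calculation from the BICEP2 paper), the standard single-field slow-roll relation ties the inflationary energy scale to r; plugging in the reported value reproduces the figure quoted above:

```python
# In single-field slow-roll inflation, the potential energy scale V^(1/4)
# relates to the tensor-to-scalar ratio r roughly as:
#   V^(1/4) ~ 1.06e16 GeV * (r / 0.01)^(1/4)
r = 0.2  # tensor-to-scalar ratio reported by BICEP2

energy_scale_gev = 1.06e16 * (r / 0.01) ** 0.25
print(f"{energy_scale_gev:.2e} GeV")  # ~2.2e16 GeV, matching the quoted figure
```

The quarter-power dependence is why even a rough value of r pins down the energy scale fairly tightly.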

What It Means

Of course, ultimately, these results will have to be validated by independent data, such as from the Planck Collaboration or elsewhere. As cosmologist Neil Turok points out, we shouldn’t jump to too many conclusions based upon this initial announcement.

But, if further data comes out from other teams backing up the BICEP2 results, here is a summary of the ramifications:

• Strong, although still indirect, evidence supporting cosmological inflation
• New bounds placed upon proposed inflaton models
• New bounds on the energy scale of the inflationary epoch
• Strong, although still indirect, evidence for gravitational waves
• Hints at quantum gravity
• A strong prospect for an eventual Nobel Prize for Guth and Linde

Of course, what it will really mean is that Stephen Hawking will have won a bet with Neil Turok.


No, Hawking Isn’t Saying There Are No Black Holes

Update (Feb. 15): And even more commentary, in the form of two articles by Matthew Francis.

Update (Feb. 7): There has been a LOT of fascinating commentary posted about this topic. Here are some of the better bits.

So, I’m under the weather for a few days, and what happens? A prominent scientist puts forth a bold new idea, and the media promptly get it wrong.  Horribly, horribly wrong.

Stephen Hawking, former holder of the Lucasian Chair of Mathematics at Cambridge, author of the best-seller A Brief History of Time, discoverer of Hawking radiation, and general pop-science superstar, put out a paper on the Cornell University-hosted arXiv preprint server (a paper, mind you, which has not yet been peer-reviewed) proposing a possible resolution of the AMPS firewall paradox. (“Huh?” I hear you cry. More on that in a moment.)

What happened next was quite predictable. The press (and much of the blogosphere) promptly picked up on the story (this involved the legendary Stephen Hawking, after all) and, quite unsurprisingly, botched it. Headlines breathlessly proclaimed “Stephen Hawking: ‘There are no black holes’.” (That headline, by the way, was lifted directly from Nature, of all places.) Bloggers blogged. FaceBookers FaceBooked. Twitter was atwitter. The pronouncement even inspired a bit of political satire on the Borowitz Report that, Poe’s Law being what it is, many took to be an actual story.

But Hawking wasn’t claiming that black holes don’t exist.  The supermassive black hole designated Sagittarius A* known to be lurking at the center of our Milky Way galaxy didn’t evaporate overnight in a puff of Hawking radiation to comport with what Hawking had allegedly proposed. What Hawking instead was proposing was a mathematically subtle redefinition of one of the key attributes of a black hole, the event horizon, replacing it with a concept he calls the “apparent horizon.”

To understand what prompted this, it is necessary to go back to the 70’s, to the start of the Black Hole War. (No, that isn’t the title of a cheesy sci-fi flick, but it is the title of an excellent and quite accessible book by Leonard Susskind about the subject.) At a private meeting of scientific luminaries, Hawking presented an argument which seemed to show unequivocally that an inescapable consequence of general relativity is that information is destroyed by black holes. This did not sit well with Leonard Susskind (one of the pioneers of string theory) and other specialists in the field of quantum theory, since this would violate conservation of information.

Well, there isn’t REALLY a Law of Conservation of Information, but there is a quantum mechanical requirement that something called “unitarity” be preserved, which basically means that the probabilities of all possible outcomes of an event have to add up to one. Hawking’s result runs afoul of this. Not for the first time, and certainly not for the last, quantum theory and general relativity were butting heads, giving contradictory results.

This problem resulted in the blossoming of a particularly specialized sub-discipline of theoretical physics known as black hole thermodynamics. (When one is working on the bleeding edge of theoretical physics, “thermodynamics” and “information theory” are pretty much synonymous, for reasons I won’t even begin to try delving into here. That is a topic worthy of a whole series of posts in and of itself.)

Fast forward to 2012, when Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully put out a controversial paper, “Black Holes: Complementarity or Firewalls?” This paper (generically referred to as “AMPS,” after the authors’ initials) posited that event horizons might not be as usually described in general relativity (where an inbound observer would notice only that they couldn’t get out), but rather are sheathed in a “firewall” which consumes incoming matter via the Hawking radiation mechanism before it even crosses the event horizon.

Rather than rehashing the details of the Great Firewall Debate and attempts to resolve it, I shall here refer you to some relevant articles:

Which brings us to Hawking’s proposal. He replaces the concept of the event horizon with something a bit mushier, something not as clearly defined, such that the need for the firewall is sidestepped. What’s more, Hawking’s “apparent horizon” is temporary, eventually disappearing later in the lifecycle of the black hole, allowing an opportunity to sidestep the information paradox that prompted all of this.

The Story of Quarks – Part I

“Three Quarks for Muster Mark!” – James Joyce, Finnegans Wake

Fifty years ago today, the journal Physics Letters received a paper from Murray Gell-Mann entitled “A Schematic Model of Baryons and Mesons”.1  The brief two page paper introduced the concept of “quarks” as the constituent particles of hadrons. As is generally the case with new discoveries, this one did not emerge all at once overnight.

The Particle Zoo and Strangeness

[Note: Much of this section summarizes portions of the first chapter of David Griffiths’ Introduction to Elementary Particles.]

In the decades following World War II, spurred by the Cold War’s nuclear arms race, particle physicists had been building more and more powerful particle accelerators, along with more sensitive particle detection technology for use with those accelerators and in the study of cosmic rays and the radiation from nuclear reactors. In this booming period of research, something unexpected happened. New particles were being discovered at a rapid pace. The simple pre-war world of just electrons, protons, and neutrons had exploded into a particle zoo of exotic, short-lived particles that existing physical models simply could not account for.

This proliferation began in 1947 with the first detection of mesons, specifically charged pions, in cosmic rays by Cecil Powell, César Lattes, Giuseppe Occhialini, et al.2  Later that same year, Rochester and Butler observed the production by cosmic rays of neutral mesons which then decayed into two oppositely-charged pions via the interaction $K^{0}\rightarrow\pi^{+} + \pi^{-}$. This became known as the neutral kaon.3 In 1949, Powell identified a charged kaon, decaying as $K^{+}\rightarrow\pi^{+} + \pi^{+} + \pi^{-}$.4

In 1950, another odd particle joined the particle zoo. Discovered by Hopper and Biswas, the $\Lambda$ particle (decaying as $\Lambda \rightarrow p^{+} + \pi^{-}$) appeared to be heavier than the proton, and was thus classified as a baryon rather than a meson.5 This classification was also necessary to preserve the conservation of baryon number that had been proposed by Stueckelberg in 1938 to explain the stability of protons.

And so it went, with even more mesons ($\eta$, $\phi$, $\omega$, $\rho$) and heavy baryons (the $\Sigma$s, the $\Xi$s, the $\Delta$s, and so forth) being discovered in the ensuing years, with the rate of discovery ramping up when Brookhaven’s Cosmotron accelerator came online in 1952, followed by others.

One interesting property exhibited by these “strange” new particles (as particle physicists referred to them) is that, while they appear to be formed via rather quick interactions (on the scale of $10^{-23}$ seconds), their decays are relatively slow (on the order of $10^{-10}$ seconds). Pais6 and others proposed that this could be due to the creation and decay of these particles being mediated by different mechanisms, and that strange particles have to be produced in pairs. In modern parlance, the creation of these particles is mediated by the strong nuclear force, while their decay is mediated by the weak nuclear force. In 1953, Gell-Mann7,8 and Nishijima9,10 expanded upon this idea, assigning to each particle a new quantum number which Gell-Mann dubbed “strangeness.” In this new scheme, strangeness is a conserved quantity in strong interactions, but not in weak interactions.
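The strangeness bookkeeping just described can be sketched in a few lines of Python (my own toy illustration; the strangeness assignments are the standard ones):

```python
# Toy strangeness (S) bookkeeping: strange particles are produced in pairs
# via the strong force (which conserves S) but decay via the weak force
# (which can change S by one unit).
strangeness = {"p": 0, "pi-": 0, "K0": +1, "Lambda": -1}

def delta_s(initial, final):
    """Net change in strangeness between initial and final states."""
    return sum(strangeness[x] for x in final) - sum(strangeness[x] for x in initial)

# Associated production (strong): pi- + p -> K0 + Lambda conserves S
print(delta_s(["pi-", "p"], ["K0", "Lambda"]))  # 0 (allowed, fast: ~1e-23 s)

# Decay of the Lambda (weak): Lambda -> p + pi- changes S by one unit,
# so the strong force cannot mediate it, hence the slow ~1e-10 s lifetime
print(delta_s(["Lambda"], ["p", "pi-"]))        # 1
```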

The Eightfold Way

The next step on the road to the quark model involved corralling this particle zoo into some semblance of order. In 1961, Gell-Mann11 and Yuval Ne’eman12,13 concurrently applied the principles of group theory to analyze the relationships between these particles in terms of the SU(3) special unitary symmetry group. Gell-Mann famously referred to his scheme as the “Eightfold Way,” in reference to the Noble Eightfold Path of Buddhism.

Rather than diving into the rather esoteric mathematics of group theory, let us take a look at the symmetries involved using a graphical format, plotting the particles in various groupings (by particle family and spin) in terms of strangeness vs. charge. (Strictly speaking, we are graphing strangeness vs. the third component of isospin, with the charges lining up along diagonals.) This is the most common way of presenting the Eightfold Way, although these graphs never appeared in Gell-Mann’s original paper. Instead, he represented these relationships in matrix format.

The spin 0 pseudoscalar meson nonet. (Illustration by the author.)

The spin 1/2 baryon octet. (Illustration by the author.)

The spin 3/2 baryon decuplet. (Illustration by the author.)

Note the particle at the bottom of the baryon decuplet. At the time Gell-Mann wrote his paper on the Eightfold Way, that particle had not yet been detected. At a 1962 meeting at CERN where the discovery of the cascade ($\Xi$) baryons was announced, Gell-Mann predicted that a baryon with a strangeness of -3 and a charge of -1 would be discovered, and predicted what its mass would be. Sure enough, the $\Omega^-$ particle was observed in 1964 at Brookhaven, with precisely the properties Gell-Mann had predicted.14

I feel somewhat obligated to point out here that papers from this time period frequently make reference to a value called “hypercharge,” which originally referred to the sum of strangeness and baryon number. This value is generally considered obsolete.

Quarks, Aces, & Partons

With a framework in place for organizing and mathematically analyzing the particle zoo, it did not take much longer for Gell-Mann to construct his quark model, which brings us to his 1964 paper. He postulated the existence of three new fundamental particles, the up quark (S=0, Q=2/3), the down quark (S=0, Q=-1/3), and the strange quark (S=-1, Q=-1/3), as well as corresponding anti-quarks with opposite strangeness and charge values. (The word “quark” was taken from the James Joyce line quoted at the top of this article.) He further postulated that baryons all consist of triplets of quarks or anti-quarks, and that mesons all consist of quark/anti-quark pairs. (In modern quark theory, it is understood that these particles all consist of a large number of quarks, with virtual quark/anti-quark pairs constantly being created and annihilated, but always with an excess of either two or three “valence quarks.”)
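The quark quantum numbers listed above can be verified with a short sketch (my own illustrative bookkeeping, not code from any paper; the assignments are the standard ones):

```python
# Gell-Mann's three original quarks and their quantum numbers:
# strangeness S, charge Q (in units of e), baryon number B.
from fractions import Fraction as F

QUARKS = {
    "u": {"S": 0,  "Q": F(2, 3),  "B": F(1, 3)},
    "d": {"S": 0,  "Q": F(-1, 3), "B": F(1, 3)},
    "s": {"S": -1, "Q": F(-1, 3), "B": F(1, 3)},
}

def hadron_numbers(constituents):
    """Sum quark quantum numbers; antiquarks (prefixed '~') flip sign."""
    totals = {"S": 0, "Q": F(0), "B": F(0)}
    for c in constituents:
        sign = -1 if c.startswith("~") else 1
        for key, value in QUARKS[c.lstrip("~")].items():
            totals[key] += sign * value
    return totals

print(hadron_numbers(["u", "u", "d"]))  # proton:  S = 0,  Q = +1, B = +1
print(hadron_numbers(["s", "s", "s"]))  # Omega-:  S = -3, Q = -1, B = +1
print(hadron_numbers(["d", "~s"]))      # K0:      S = +1, Q = 0,  B = 0
```

Note how the fractional quark charges always sum to whole-number charges for the baryons (three quarks) and mesons (quark plus antiquark).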

Let us take another look at our Eightfold Way diagrams with the constituent quarks for each particle labelled (u=up, d=down, s=strange). Note that the degenerate states at the center of the meson nonet consist of superpositions of quark states.

The spin 0 pseudoscalar meson nonet, with quark composition. (Illustration by the author.)

The spin 1/2 baryon octet, with quark composition. (Illustration by the author.)

The spin 3/2 baryon decuplet, with quark composition. (Illustration by the author.)

At about the same time, George Zweig, a researcher at CERN who had previously been a student of Richard Feynman, had independently constructed an almost identical model for the composition of hadrons. However, due to CERN policies in place at the time regarding the approval of papers submitted for publication, he was unable to get his paper published in a timely manner. Fortunately, his work survives in the form of internal CERN preprints.15 In Zweig’s model, the constituent particles of hadrons were called “aces.”

In the late 60’s, in an effort to explain experimental data related to deep inelastic scattering experiments, Richard Feynman developed what he called his “parton” model of hadrons. Eventually, it came to be realized that Feynman’s partons were simply quarks travelling at relativistic velocities. What Gell-Mann had arrived at through studying symmetry, Feynman had arrived at by studying hadron cross sections.

It is worth noting that Gell-Mann didn’t consider quarks to be actual particles. For him, they were a convenient mathematical abstraction. However, Feynman’s parton work made it quite clear that quarks were actual “things.”

Next Time

In Part II, we’ll dig into the experimental confirmation of the existence of quarks, gluons, color charge, the charm, top, and bottom quarks, and the development of quantum chromodynamics. Stay tuned.

References

1. M. Gell-Mann, “A Schematic Model of Baryons and Mesons”, Phys. Lett.8:214 (1964).

2. Occhialini, G.P.S. and Powell, C.F., “Nuclear Disintegrations Produced by Slow Charged Particles of Small Mass“, Nature 159, 186-190 (1947)

3. Rochester, G.D. and Butler, C.C., “Evidence for the existence of new unstable elementary particles“, Nature, 160, 855 (1947)

4. Powell, C.F. et al., “Observations with Electron-Sensitive Plates Exposed to Cosmic Radiation“, Nature 163, 82-87 (1949)

5. Hopper, V.D. and Biswas, S., “Evidence Concerning the Existence of the New Unstable Elementary Neutral Particle“. Phys. Rev. 80: 1099. (1950)

6. Pais, A., “Some Remarks on the V-Particles“, Phys. Rev.86, 663-672 (1952). DOI: 10.1103/PhysRev.86.663

7. Gell-Mann, M., “Isotopic Spin and New Unstable Particles“, Phys. Rev. 92, 833 (1953).  Bibcode:1953PhRv…92..833G. doi: 10.1103/PhysRev.92.833

8. Gell-Mann, M., “The Interpretation of the New Particles as Displaced Charged Multiplets“, Il Nuovo Cimento, 4 (Supplement 2), 848 (1956). DOI: 10.1007/BF02748000

9. Nakano, T. and Nishijima, K., “Charge Independence for V-particles“. Progress of Theoretical Physics 10 (5): 581 (1953). Bibcode:1953PThPh..10..581N. doi: 10.1143/PTP.10.581.

10. Nishijima, K., “Charge Independence Theory of V Particles“. Progress of Theoretical Physics 13 (3): 285 (1955). Bibcode:1955PThPh..13..285N.doi:10.1143/PTP.13.285

11. Gell-Mann, M. , “The Eightfold Way: A Theory of Strong Interaction Symmetry“, DOE Technical Report, March 15, 1961

12. Ne’eman, Y., “Derivation of Strong Interactions from a Gauge Invariance,” Nucl Phys, 26, 222-229 (1961). DOI: 10.1016/0029-5582(61)90134-1

13. Ne’eman, Y., “Gauges, Groups And An Invariant Theory Of The Strong Interactions” Tel-Aviv : Israel At. Energy Comm. (Aug. 1961) 213 pages

14. Barnes, V. E. et al., “Observation of a Hyperon with Strangeness Minus Three”Physical Review Letters 12 (8): 204 (1964). Bibcode: 1964PhRvL..12..204B. doi:10.1103/PhysRevLett.12.204.

15. Zweig, G., “An SU3 model for strong interaction symmetry and its breaking“, internal CERN pre-prints, 1964.

The history of QCD – CERN Courier, Harald Fritzsch, Sep. 27, 2012

A watershed: the emergence of QCD – CERN Courier, David Gross and Frank Wilczek, Jan. 28, 2013

Murray Gell-Mann, the Eightfold Way, Quarks, and Quantum Chromodynamics

M. Gell-Mann; Y. Ne’eman, eds. (1964). The Eightfold Way. W. A. Benjamin. LCCN 65013009  (Google Books preview)

David Griffiths. (2008) Introduction to Elementary Particles. John Wiley & Sons. (Google Books preview)

Frank Wilczek, “QCD Made Simple“, Physics Today, 53N8 22-28, (2000).  doi: http://dx.doi.org/10.1063/1.1310117

Kaons and other strange mesons

Constructing the Universe: the Particle Explosion

“Quarks: Yeah, They Exist”|Quantum Diaries

“Meet the quarks”|Quantum Diaries

“World of Glue”|Quantum Diaries

“QCD and Confinement”|Quantum Diaries

“Right! What’s a Parsec?”

For those who do not recognize the reference being made in the title, it is to an old Bill Cosby comedy routine depicting a conversation between God and Noah. During the course of the conversation, God provides the dimensions of the ark He wants built in cubits, prompting a somewhat incredulous Noah to say, “Right! What’s a cubit?”

Understanding the units used in a given measurement is pretty important business, and one of the most commonly-used units in astronomy and astrophysics is the parsec. A parsec is roughly 3.26 light-years, which translates to about 30.9 trillion kilometers or 19.2 trillion miles. While it is fairly common knowledge that a light-year is defined as the distance light traverses in one year, the origins of the parsec are considerably less well-known. Let’s address that here.

The term “parsec” is thought to have been introduced by British astronomer Herbert Hall Turner in 1913. The parsec is defined as the distance which would induce an annual heliocentric parallax shift of one arc second.

“Right!”

Okay, perhaps I should back up a bit and define some parts of that definition.

Parallax is a key component of the cosmic distance ladder, the chain of techniques used to measure distances on astronomical scales. No one technique in the distance ladder is effective over all distance scales, but overlap between the ranges over which each technique is valid provides continuity. Parallax provides effective measurements of distances out to a range of about 1,600 light-years. Beyond that, the shift becomes too minuscule to be accurately measured, although the ESA’s Gaia mission (just launched on December 19) should extend that effective range ten-fold. Measurements of even greater distances are largely dependent upon fixed relationships between the luminosity and periodicity of variable stars. (For more about the discovery of such techniques, see my earlier article “Henrietta Leavitt and the Cepheid Variables”.)

Image from Wikipedia. Created by Srain at English Wikipedia.

Astronomers can use the same idea to measure the distances to stars, using even more distant stars as a relatively fixed background against which they can measure the parallax shift. Of course, to get a measurable shift in angle, they need as long a baseline (corresponding to the distance between your eyes in the example given earlier) as they can get. The best they can do is to use the diameter of the Earth’s orbit around the sun. Make a measurement. (In other words, snap a photo of the star in question through a telescope.) Wait six months. Make another measurement. Apply some trigonometry, knowing the diameter of the Earth’s orbit (about 300 million kilometers), and voilà! We have the distance to the star under investigation. (I won’t bother going into how those calculations are done here. That can be seen in detail in the Wikipedia article on the parsec.)

Nicolaus Copernicus first came up with this idea, but he didn’t have access to the technology to make it happen.  The first astronomical parallax measurement was made in 1838 by Friedrich Wilhelm Bessel (of Bessel function fame).

Well, then.  Now that we have the definition of parallax out of the way, let’s get on to arc second. This is merely a unit of angular measurement.  A circle can be divided into 360 degrees.  Each degree can then be subdivided into 60 arc minutes. An arc second is simply 1/60th of an arc minute, or 1/3600th of a degree.

Now we have all of the pieces in place to understand what a parsec is. Imagine a hypothetical star out there whose distance is just right to cause a parallax displacement of one arc second. The distance to that imaginary star would be one parsec.
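The arithmetic behind that definition is simple enough to check in a few lines (a sketch of my own; the constants are standard values):

```python
import math

# Compute one parsec directly from its definition: the distance at which
# a baseline of 1 AU (the Earth-Sun distance) subtends an angle of one
# arc second.
AU_KM = 1.495978707e8                 # one astronomical unit, in km
ARCSEC_RAD = math.radians(1 / 3600)   # one arc second, in radians

parsec_km = AU_KM / math.tan(ARCSEC_RAD)
light_year_km = 9.4607e12             # one light-year, in km

print(f"1 parsec = {parsec_km:.3e} km")                  # ~3.086e13 km
print(f"1 parsec = {parsec_km / light_year_km:.2f} ly")  # ~3.26
```

That recovers the roughly 30.9 trillion kilometers, or 3.26 light-years, quoted earlier.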

Right? Right.

Dr. David P. Anderson Lecture: “Do You Have What It Takes to Be a Citizen Scientist?”

Last night, I had the pleasure of attending an Austin Forum on Science and Technology lecture delivered by Dr. David P. Anderson of the University of California, Berkeley. The subject of the talk was technology-enabled citizen science, and Dr. Anderson is well positioned to address this topic. He was one of the co-founders of the SETI@home project, and went on from there to establish the Berkeley Open Infrastructure for Network Computing (BOINC) project.

It was pointed out during the introductions that the primary sponsor/partner for this lecture series, the Texas Advanced Computing Center (TACC), has contributed about 1000 years of CPU time to distributed BOINC projects via IBM’s World Community Grid. (In the interest of full disclosure, I should note that TACC is one of my customers for my day job as an Exchange administrator. In fact, another sponsor, the University of Texas at Austin’s Information Technology Services, or ITS, is my employer.)

The timing of this lecture could not have possibly been better. I’ve actually been preparing a posting about citizen science (which I hope to put out in the next few days), and I gleaned several useful tidbits from the lecture. Here are the broad brush strokes of the talk.