Category Archives: Climate

Nate Silver falls off

In 2012, Nate Silver faced a conservative and media-led backlash for bringing rigor to election forecasting. His newly launched journalism project is now facing a backlash for failing to live up to its promise.

I am probably the ideal audience for Nate Silver’s new journalism project, FiveThirtyEight.com. I am someone who values data in a frequently substance-free world of reporting. Although I am an unabashed Sam Wang PEC partisan, I certainly appreciated Silver mainstreaming election forecasting based on factors other than wishful thinking and journalism’s biases towards “the horse race” and “momentum”. When Silver was attacked by know-nothings in the media and the conservative blogosphere, I cheered him on and savored his election day vindication, anticlimactic as it was.

Rather than topping my “must read” list, however, the new FiveThirtyEight is something I won’t be reading. Here’s why:

I became aware of Silver’s imminent launch through his public Twitter announcement of two hires to cover science for his new venture: Emily Oster, a University of Chicago economist famous for counter-intuitive revelations (sound familiar?); and Roger Pielke Jr.

Image courtesy of Flickr user “ferdicam”, used under Creative Commons.

Now, I am not going to get into Roger’s pathological attacks on climate scientists. I am not going to get into his sweaty delusions of persecution. I am not going to get into Roger’s complete misunderstanding of elementary aspects of climate science. I am going to focus on just two things: what Nate Silver is known for, and what Roger Pielke Jr. is known for.

Nate Silver’s reputation is based on being a stats whiz. This is what his blogging was devoted to, what his best-selling book is about, and the one thing he has that his competition/peers like Ezra Klein or Matt Yglesias don’t. And one of Nate Silver’s very first, very public hires (Roger Pielke Jr.) sucks at statistics. Not “published something in need of minor correction once or twice” sucks. “Doesn’t understand how a t-test works” sucks. “Doesn’t understand basic probability” sucks. Sucks out loud. Sucks on ice.

Roger’s very first article for Silver’s new site is, unsurprisingly, about Roger’s hobbyhorse: the claim that disaster losses are not increasing due to climate change.

Let’s be clear about some things. Climate change is real. Humans are not just “contributing” to it; we are responsible for essentially all of it over the past several decades. Our perturbation of the climate through our emissions of greenhouse gases is fundamentally changing the Earth system. The biosphere and human systems are going to have to adapt to a rate of change as of now unseen anywhere else in the paleoclimatic record. In the absence of emissions stabilization, a difficult but decidedly achievable outcome, the threat to the biosphere and society is daunting. The amount of climate change we’ve already experienced, while extremely serious, is tiny compared to the impacts we will see in a world of unchecked fossil fuel exploitation. In addition to changes in the average or mean state of the system, we have already begun to see changes in some types of extreme weather events, and changes to the drivers of yet other extreme events.

Ostensibly, Roger Pielke Jr. accepts all of the above. He just doesn’t want you to focus on this big picture. Instead Pielke wants you to believe and to focus on the claim that we’ve seen no increase in “normalized” damages due to climate change. The fundamental conceit of this claim is that even though disaster losses are unquestionably on the rise, once you account for changes in the value of infrastructure being built in areas affected by disaster (due to population growth, inflation, etc.), there is no “statistically significant increase”.

This claim rests on accounting for factors that might spuriously inflate the damages caused by disasters, while entirely ignoring factors that have allowed us to avoid even greater losses.
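To make concrete what “normalizing” means here, consider a minimal sketch in Python. The adjustment factors and every number below are hypothetical placeholders, not the actual datasets or methodology of any published loss study; the point is only the shape of the calculation.

```python
# Minimal sketch of disaster-loss "normalization": restate a past year's nominal
# loss in base-year terms by adjusting for inflation, population, and wealth.
# All factors and figures below are hypothetical placeholders, not the actual
# data or method used in published normalization studies.

def normalize_loss(nominal_loss, year, base_year, cpi, population, wealth_per_capita):
    inflation_adj = cpi[base_year] / cpi[year]
    population_adj = population[base_year] / population[year]
    wealth_adj = wealth_per_capita[base_year] / wealth_per_capita[year]
    return nominal_loss * inflation_adj * population_adj * wealth_adj

# Example with made-up numbers: a $10 billion loss in 1990, restated in 2012 terms.
cpi = {1990: 130.7, 2012: 229.6}
population = {1990: 16.0e6, 2012: 19.5e6}          # population of the exposed region
wealth_per_capita = {1990: 35_000, 2012: 52_000}   # real wealth per capita

print(f"${normalize_loss(10e9, 1990, 2012, cpi, population, wealth_per_capita):,.0f}")
```

Every term on the right-hand side adjusts the loss upward or downward for exposure. Nothing in a calculation like this can credit the losses that never happened.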

The case of 2012’s Superstorm Sandy is illustrative. While Roger spent the first few days of the disaster trying to play down the magnitude of the mounting carnage, Sandy ultimately ranked among the most costly storms on record, even using normalized losses. Preliminary estimates range from $50 billion to $65 billion.

And yet it could have been so much worse.

Hurricane Sandy uncharacteristically failed to recurve out to sea, and barreled back towards the East Coast of the US. Due to the amazing advances we’ve seen in our ability to model the global weather system, we knew well in advance that this unexpected turn by Sandy was a real possibility.

This possibility was taken into consideration by those trying to game out the impact of Sandy’s landfall. The impact of rising sea levels on the frequency and severity of storm surge flooding was as well. Looking to a future of warming-boosted surges, researchers identified huge vulnerabilities in the New York transportation infrastructure to previously rare events. Such considerations ultimately led the MTA to shut down the subway system in order to avoid the corrosive impact of salt water if the tunnels were flooded. This decision, informed by modeling and meteorological sophistication unimaginable in the early 1900s, saved the subway system, spared New York City weeks of paralysis, and kept the economic damages from nearly doubling.

Image via Twitter

Roger Pielke Jr.’s “normalized” disaster loss fixation takes none of this into account. Nor does it account for the benefits of building code improvements, or of other disaster prevention measures like dikes.

Paul Krugman is among a growing list of knowledgeable folks who were hopeful about Silver’s new enterprise but are less than impressed. Krugman writes:

… data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)

I feel bad about picking on a young staffer [Note: not Pielke Jr.], but I think this piece on corporate cash hoards — which is the site’s inaugural economic analysis — is a good example. The post tells us that the much-cited $2 trillion corporate cash hoard has been revised down by half a trillion dollars…

… what does this downward revision tell us? We’re told that the “whole narrative” is gone; which narrative? Is the notion that profits are high, but investment remains low, no longer borne out by the data? (I’m pretty sure it’s still true.) What is the model that has been refuted?

“Neener neener, people have been citing a number that was wrong” is just not helpful. Tell me something meaningful! Tell me why the data matter!

Though Krugman is referring to a different 538 article, he could easily be making the same criticism of Pielke’s. Why do Pielke’s data matter? Are disaster losses not increasing? They are. Does “normalizing” the loss data tell the whole, unbiased, story? No, it doesn’t. Are extreme events, and drivers of yet more extreme events, changing in response to GHG emissions? They are.

If Nate Silver’s mission is to bring statistical cachet to good journalism, he’s off to a terrible start. One of his first big hires is terrible at statistics. If Silver wants to tell us something meaningful instead of peddling freakonomics-lite contrarianism, he’s similarly off to a poor start. Pielke’s personal hobbyhorse obscures far more than it enlightens. It offers a cocktail party morsel of contra-conventional wisdom instead of intellectual nourishment.

There are probably a lot of people who would like to see Silver fail. I’m not one of them. I just won’t be one of his readers, either, unless he makes some big changes to his current model.

Rapidly warming satellite data sends “skeptics” scurrying to models

Most people remotely familiar with climate “skeptics” know that if you can count on them for anything, it’s the following:

  1. “Skeptics” love satellite temperature data.
  2. “Skeptics” hate computer models. 

“Skeptics” claim to reject the surface instrumental temperature record because of alleged biases in the data, supposedly fraudulent “adjustments”, etc. These objections are not based in reality, as multiple analyses of the surface data have shown. In reality, “skeptics” reject the surface instrumental record for the same reason they reject so much of modern science: it doesn’t show what they want it to.

“Skeptics” claim that satellite temperature data, derived from microwave brightness soundings of the lower troposphere, are superior. The reality is that the satellite data cover a shorter record (and thus capture less of the warming), use a more recent baseline (and thus have cooler “anomalies” relative to the surface record), and are more sensitive to natural climatic variability like ENSO (and thus make the human signal harder to pick out visually). In other words, they like the satellite data because they show them more of what they want to see, and less of what they don’t. That one of the groups producing a satellite record is composed of Roy Spencer and John Christy is icing on the cake.
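To see how the baseline choice alone makes the satellite numbers look cooler, here is a toy illustration in Python; the temperature series and the two baseline windows are invented for the sketch and are not any actual product’s climatology.

```python
# Toy illustration of the baseline effect: the same temperature series yields
# numerically smaller "anomalies" when referenced to a more recent (warmer)
# baseline period. The series and baseline windows are invented for this sketch.
import numpy as np

years = np.arange(1950, 2014)
rng = np.random.default_rng(0)
temps = 14.0 + 0.015 * (years - 1950) + rng.normal(0, 0.1, years.size)

def anomalies(temps, years, base_start, base_end):
    """Anomalies relative to the mean over [base_start, base_end]."""
    in_base = (years >= base_start) & (years <= base_end)
    return temps - temps[in_base].mean()

surface_style = anomalies(temps, years, 1951, 1980)    # older, cooler baseline
satellite_style = anomalies(temps, years, 1981, 2010)  # more recent, warmer baseline

# Same trend either way, but the recent-baseline anomalies sit uniformly lower.
print(surface_style[-1], satellite_style[-1])
```

The warming trend is identical in both series; only the zero point moves.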

And if there’s one thing “skeptics” disdain more than the surface instrumental record, it’s computer models. The ostensible justifications are legion, but the underlying cause is simple: they show things that “skeptics” don’t want to see.

So it was with great amusement that I took note of the “skeptic” reaction to the UAH satellite record’s rapid January warming, which reached temperatures exceeded only during the strong El Niño years of 1998 and 2010:

Rather than accept their beloved satellite data at face value, “skeptics” cast about for any alternative data set that didn’t show the inconvenient warming. Over at the wretched hive of scum and villainy known as WUWT, the innumerate and oft-beclowned Anthony Watts seized upon NCEP data showing much less January warming:

Of course NCEP isn’t actually an observational data set. It’s a reanalysis product created by those evil and untrustworthy models. You know, the ones “skeptics” demonize regularly in outlets like WUWT:

When the satellites don’t show what they want to see, “skeptics” waste no time in fleeing to the models they otherwise disdain.

Because climate “skeptics” are anything but skeptical.

And just for the record, the RSS satellite record showed a similarly large (+0.341°C) increase in January 2013.

Climate Change Communication – The Up Goer Five Edition

Image courtesy of Flickr user “njtrout_2000”, used under Creative Commons.

Reddit has a subreddit called Explain Like I’m Five, where people attempt to explain often-complex topics in simple, easy-to-understand language. The “like I’m five” part is often unsuccessful, but the idea is great.

Via Chris Rowan, people in the geosciences on Twitter have been talking about xkcd doing something similar, with the Up Goer Five. Basically, the challenge was to explain a spacecraft in relatively good detail using only the thousand most commonly used words in the English language.

This is a site that lets you try the concept out. Here is my first attempt at an explanation of the climate and energy challenge:

What is Going On?
Our home is changing because of some things we do, like burning stuff from the ground for power. One of the big changes is that we are warming up. What happens as we warm up is important! Some changes might be good: time for food growing may be longer. But lots of changes will be hard: where and how much or little rain we get, how hot and cold it gets, how much stuff is under water, how bad the air is to breathe, and a lot more things will all change. How we live, how we eat, and how we plan for things very much are tied to how things are around us. Big changes are hard to go through.

How Can We Know What Will Happen?
We can try to figure out what stuff might happen as we keep burning more and more stuff for power, and warm up. We can use computers to look forward. We can look at small changes from now and over the past couple hundred years, and think forward in time. We can even look way back into the very long ago past, at times when things warmed up or cooled down a lot, and learn from that!

Is It Too Late?
The important thing to know is this: what we do going forward matters very much to how things will change. Making very big changes to our home, or changing it less, is something we can decide. We need to think very carefully about how much we want to make our home change, and think about all of things that might happen as we change it. There is a lot we don’t know about what will change, and that makes it hard to plan for change. It may be safer to make little change, especially as we learn more about the bad stuff that happens with big changes.

What Can We Do?
There are a lot of new great things we can use for clean power that changes our home less. We can use the sun. We can use the wind. We can use water in many different ways. We can even use the same power the sun uses for its own power! All of these new ways of making clean power will keep our home more like it is now, and make it change a lot less than burning stuff for power. We can also use less stuff and power, and use the stuff and power we do use for doing more things. That will let us use less power to do the same stuff we do now.

How Do I Help?
What kind of home do you want? What kind of home do you want for your kids and their kids? Keep that in mind as you decide what to do. If you want to help, you can use less stuff and power, and tell people you want to use more of the things that use new clean power. You can also ask people who decide things to think about our home when they decide stuff, and to help us move to new ways of making clean power.

————————————————————–
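For the curious, the mechanics of such a checker are trivial to sketch: flag every word that is not on the thousand-word list. The file name top1000.txt below is a hypothetical stand-in for whatever list the actual editor uses.

```python
# Minimal sketch of an "Up Goer Five" style checker: flag words outside the
# thousand most common English words. Assumes a local word list file
# ("top1000.txt", one word per line) -- a hypothetical stand-in for the list
# the actual online editor uses.
import re

def load_allowed(path="top1000.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_uncommon(text, allowed):
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

if __name__ == "__main__":
    allowed = load_allowed()
    sample = "Our home is changing because of some things we do."
    print(flag_uncommon(sample, allowed))  # ideally prints []
```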

This is an interesting exercise, and it might help if your intent is to de-technobabble a talk, or use wording that is friendly to translation software. As a communications tool, though, I don’t think it’s all that great. What makes ideas and concepts meme-like depends somewhat on simplicity, sure. But all of the other things that make ideas sticky are hamstrung by this. It’s really hard to use imagery, analogy, expectation-confounding, and all of the tools that make memorable ideas with such a limited vocabulary.

What do you think?

Matt Ridley and the Wall Street Journal misrepresent paper cited in Ridley column

Equilibrium climate sensitivity (ECS) evaluated from paleoclimatic data (PALAEOSENS group, Rohling et al., 2012).

There will be more to say later about the latest attempt by Matt Ridley (remember him?) at the Wall Street Journal to deny the mainstream estimate of equilibrium climate sensitivity (e.g. NRC, 1979; Annan and Hargreaves, 2006; Knutti and Hegerl, 2008; Rohling et al., 2012). But I just wanted to point out something rather troubling about one of the citations by Ridley and Nic Lewis (the source of Ridley’s claims).

Ridley claimed:

Some of the best recent observationally based research also points to climate sensitivity being about 1.6°C for a doubling of CO2. An impressive study published this year by Magne Aldrin of the Norwegian Computing Center and colleagues gives a most-likely estimate of 1.6°C.

I recalled the Aldrin et al. paper from the last time it made the rounds in the “skeptic” blogosphere, when Chip Knappenberger cited it as finding a “low” climate sensitivity.

The funny thing about the Aldrin et al. paper is that it really doesn’t find a “low” ECS at all. Their main result is an ECS of 2.0°C, which is completely consistent with the IPCC AR4 range. Moreover, they caution that their main result is incomplete, because it explicitly does not account for the effect of clouds:

When cloud behavior is included as another term, the ECS increases significantly, from ~2.5°C to 3.3°C depending on the values used:

Surely this wasn’t the Aldrin et al. paper Ridley and Lewis were citing as finding an ECS of 1.6°C.

The 1.6°C value literally never appears in the text of the paper.

Of course, it was entirely possible that Aldrin had published another paper on ECS this year finding 1.6°C that I was simply unable to find. I reached out to Bishop Hill and Matt Ridley for some clarification:

  1. thingsbreak
    @aDissentient Which Aldrin 2012 paper was Lewis citing on your blog?
  2. thingsbreak
    @mattwridley Can you provide either the title or the DOI for the Aldrin paper you cited in your WSJ piece? Thanks!
  3. aDissentient
    @thingsbreak Environmetrics 2012; 23: 253–271 Panel A of Fig 6.
  4. thingsbreak
    @aDissentient The one that finds an ECS of 2.5-3.3K when it bothers to account for clouds (4.8)? LOL.
  5. mattwridley
    @thingsbreak wattsupwiththat.com/2012/12… Aldrin, M., et al., 2012. Bayesian estimation of climate sensitiv… Environmetrics, doi:10.1002/env.2140.
  6. thingsbreak
    @mattwridley Did you personally read the paper? Where does the 1.6 number come from? Did you read section 4.8?
  7. aDissentient
    @thingsbreak Most likely values still only 2 ish. If we are to include cloud lifetime effect shld we include other highly uncertain effects?
  8. thingsbreak
    @aDissentient If you’re making a comparison to IPCC values, should use most apples-to-apples comparison, which Aldrin et al. discuss in 4.8.
  9. thingsbreak
    @aDissentient Where does the 1.6 value come from anyway? Literally doesn’t exist in paper.
  10. aDissentient
    @thingsbreak He got it by measuring the graph (It’s actually slightly lower I believe).
  11. mattwridley
    @thingsbreak lewis calculated it from aldrin’s paper’s data/charts and aldrin agreed it is correct
  12. thingsbreak
    @mattwridley Aldrin agreed that apples to apples comparison with IPCC ECS estimates is 1.6K? Doubtful. Directly contradicts paper itself.

I posted the following to Nic Lewis at Bishop Hill’s blog:

I think that some readers, and probably the authors of a paper themselves, might find it at least slightly misleading for you to claim findings on their behalf that the paper itself does not actually state.

The main result from Aldrin et al., as reported by Aldrin et al., is an ECS of 2.0°C. The authors caution that this result probably isn’t an apples to apples comparison to other ECS estimates due to the unaccounted for cloud term, and find that the value increases to ~2.5-3.3°C with clouds.

Rather than report either of these values, you simply claim that Aldrin et al., “an impressively thorough study, gives a most likely estimate for ECS of 1.6°C…”.

Ridley likewise claims, “An impressive study published this year by Magne Aldrin of the Norwegian Computing Center and colleagues gives a most-likely estimate of 1.6°C.”

It would be easy for me to lob accusations of bad faith, as we don’t know each other and this is just the internet. Instead, I would encourage you, if your goal is to reach as wide an audience as possible, and try to make an impact beyond the “skeptic” and conservative blogospheres, to be more upfront about the scientific literature about ECS.

Ignoring the two main findings of a paper for values that you’re either estimating from a curve or are creating yourself based on data not used by the paper will be seen by at least some people to be misleading. Claiming that ECS cannot be estimated by paleo data is absurd, especially when so many are aware of efforts like the PALAEOSENS project and various paleoclimatic intercomparison groups.

I won’t attempt to read minds or divine motivations. I will simply suggest that what you have been doing thus far will cause some people to dismiss what you’re trying to say due to perceived dishonesty.

I hope you take this criticism in the constructive context in which it is being offered. There will be plenty of time for name-calling and insults later.

References:

  • Aldrin, M., M. Holden, P. Guttorp, R. B. Skeie, G. Myhre, and T. K. Berntsen (2012), Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content, Environmetrics, 23(3), 253–271, doi:10.1002/env.2140.
  • Annan, J. D., and J. C. Hargreaves (2006), Using multiple observationally-based constraints to estimate climate sensitivity, Geophys. Res. Lett., 33, 4 pp., doi:10.1029/2005GL025259.
  • Knutti, R., and G. C. Hegerl (2008), The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience, 1(11), 735–743, doi:10.1038/ngeo337.
  • National Research Council (1979),  Carbon Dioxide and Climate: A Scientific Assessment. Washington, DC: The National Academies Press.
  • Rohling, E.J., et al. (2012), Making sense of palaeoclimate sensitivity, Nature, 491(7426), 683–691, doi:10.1038/nature11574.

Hurricane Sandy and the Climate Hens

Image courtesy of NASA, used under Creative Commons

Hurricane Sandy is one for the record books in a number of senses, and as New York and the world struggle to grapple with its enormity, some discussion has turned to climate change, a topic that has been damningly absent from the U.S. Presidential election.

It is inevitable that when anyone anywhere tries to talk about climate change in relation to things in the here and now rather than some murky, distant future, a particular group descends to cluck their tongues and admonish everyone that climate change can’t be tied to any individual event (a proposition that is not true, and grows increasingly less defensible as the field of fractional attribution matures). This group includes many who also fall into the camp of those who style themselves as non-partisans or above the “tribal” nature of climate debates. The parallels with Jay Rosen’s larger media critique of the View from Nowhere have been noted by Michael Tobis among others.

Dave Roberts has a thoughtful piece about this phenomenon. He refers to this group as climate “scolds” in contrast to climate hawks (and yes, I do have my own problems with the latter moniker). And while I do think that “scold” captures a lot of the flavor of the group Roberts is describing, I think the hawk vs. “___” setup favors a different term for the group: climate hens.

Image courtesy of Flickr user “Ann Blair”, used under Creative Commons

Climate hens by and large acknowledge the human perturbation of the climate system. But they are very, very hesitant to highlight (or are even downright resistant to) the idea that humans are shaping the present climate in ways that are affecting the public now. This may be because it doesn’t jibe with what they learned about climate years ago. It may be because they view erring on the side of making climate change seem more serious than it is to be as bad as, or worse than, denying that it’s a problem. It may be because they don’t really understand climate science very well- Eric Berger and Roger Pielke Jr., for instance, are two climate hens who have displayed a remarkable ignorance about basic aspects of climate science pertaining to natural variability in a warming world. (Pielke Jr. is also infamous for playing bait and switch by turning conversations about human contribution to extreme events into discussions about an economic signal in normalized disaster losses.) Whatever the reason, climate hens are just plain uncomfortable with people attempting to tie extreme events to our increasing influence on the planet’s climate.

Roberts points out, correctly and convincingly, that the climate hens are clucking about a problem that doesn’t really exist- at least not the one that they’re ostensibly worried about. When the general public sees something like the record US heat, the summer drought, or a hurricane like Sandy, and they start asking about global warming, they don’t really want a belabored lecture on fractional attribution or paleoclimatic precedents that the climate hens think should determine the answer. What the public is looking for is some way to connect this thing- that scientists are telling them is real and a real problem- to their own experiences of the world. That’s what we humans do. Climate hens are, by mistake or by design, frustrating one of the best avenues of facilitating public recognition of climate change as a problem they need to take seriously. Roberts frames it this way:

That’s the key missing ingredient on climate change: not a technical understanding of stochastic modeling, forensic attribution, and degrees of probability, but a visceral, more-than-intellectual sense of what climate change means. Most people simply lack a social and ethical context for it, so they end up jamming it into other, more familiar contexts (“big government,” “environmental problem,” “liberal special interest group”).

A storm like Sandy provides an opportunity for those who understand climate change to help construct that context. It provides a set of experiences — a set of images, sounds, smells, feelings, experiences — that can inscribe climate change with the cultural resonance it lacks. That’s what persuades and motivates people: not the clinical language of science, but experiences and emotions and associations. Of course communicating scientific facts is important too, but it’s not the primary need, nor the standard by which other communications should be judged. What scolds often do is interpret the language of emotion and association through the filter of science. That’s neither helpful nor admirable.

And this perspective has supporters amongst those studying climate communication. Elke Weber (2010) makes this point:

Behavioral research over the past 30 years strongly suggests that attention-catching and emotionally engaging informational interventions may be required to engender the public concern necessary for individual or collective action in response to climate change… To the extent that time-delayed consequences of our actions do not attract the attention or generate the concern ex-ante that they would seem to warrant ex-post, behavioral research provides some corrective actions. The concretization of future events and moving them closer in time and space seem to hold promise as interventions that will raise visceral concern.

The science of tropical cyclogenesis in a warming world is undoubtedly complex and uncertain- a point I’ve been making for years. But when the public starts asking questions about climate after an event like Hurricane Sandy, they aren’t looking for navel-gazing about ensembles of modeling runs, wind shear, and overwash sediment coring. They are asking for a way to connect something they keep hearing they are supposed to care about to things they already do. The proper response to such questions is not, as the climate hens would have it, to shut them down and turn them away. Nor, it should go without saying, is it a reason to overstate the connections between our increasingly heavy influence on the climate and extreme events like Hurricane Sandy. Rather, the appropriate response is to treat the questions for what they are: an invitation to talk about climate change in a way that is meaningful to a curious but decidedly lay public. Climate change means sea levels rising, it means storm surge increases, it means heavier precipitation events (Schaeffer et al., 2012; Sriver et al., 2012; Shepard et al., 2012; Min et al., 2011). If Hurricane Sandy makes these threats more concrete, if it moves them closer in time and space, if- in Roberts’ words- it provides “a set of images, sounds, smells, feelings, experiences”, we should absolutely be talking about it. And perhaps something good will come of this disaster. Clucking from the climate hens be damned.

References

  • Min, S.-K., X. Zhang, F. W. Zwiers, and G. C. Hegerl (2011), Human contribution to more-intense precipitation extremes, Nature, 470(7334), 378–381, doi:10.1038/nature09763.
  • Schaeffer, M., W. Hare, S. Rahmstorf, and M. Vermeer (2012), Long-term sea-level rise implied by 1.5 °C and 2 °C warming levels, Nature Climate Change, doi:10.1038/nclimate1584.
  • Shepard, C., V. Agostini, B. Gilmer, T. Allen, J. Stone, W. Brooks, and M. Beck (2012), Assessing future risk: quantifying the effects of sea level rise on storm surge risk for the southern shores of Long Island, New York, Natural Hazards, 60(2), 727–745, doi:10.1007/s11069-011-0046-8.
  • Sriver, R., N. Urban, R. Olson, and K. Keller (2012), Toward a physically plausible upper bound of sea-level rise projections, Climatic Change, 1–10, doi:10.1007/s10584-012-0610-6.
  • Weber, E. U. (2010), What shapes perceptions of climate change?, Wiley Interdisciplinary Reviews: Climate Change, 1(3), 332–342, doi:10.1002/wcc.41.

A new LGM reconstruction, with implications for climate sensitivity

LGM Ice Sheet Extent from Clark et al., 2009

First off, it’s important to note that the paper has only appeared in CPD; it still has to pass review. However, I’m going to comment on the results for two reasons. Mundanely, I have a sliver of free time now, and I don’t know that the same will be true after the paper’s (presumed) eventual publication. More importantly, however, I think it’s safe to say that its results will be misinterpreted to the same or even a greater extent than Schmittner et al., 2011 (hereafter S11) was. The mainstream press largely ignored some potential reasons to be skeptical of that paper’s results (discussed by RealClimate and Skeptical Science among others, as well as by one of the paper’s authors in an interview with me at Planet 3.0). And of course the denialist echo chamber distorted the results ludicrously, going so far as to erase an entire portion demonstrating them to be consistent with the larger body of evidence on climate sensitivity (e.g. Knutti and Hegerl, 2008) and inconvenient to dismissals of the danger posed by unchecked GHG emissions.

With the throat-clearing out of the way, here’s how things stand. Fyke and Eby (2012) offered some criticisms of S11. They objected to some of the proxy data used and, more importantly, pointed out that the model used (a version of the UVic model, an EMIC that is far simpler than a full GCM) simply couldn’t produce realistic behaviors of key atmospheric processes, which caused it to underestimate ECS:

[T]o explore the potentially large dependence of Schmittner et al.’s results on the choice of climate model, we carried out a new model simulation with the most recent version of the UVic ESCM in which the atmospheric latitudinal profile of heat diffusion varies in response to the global average atmospheric temperature anomaly (the “Mod” simulation in Fig. 2). This functionality gives a new model with much improved fit to both Antarctic and Arctic LGM temperatures as recorded by ice cores, yet still retains an excellent fit to low-latitude temperatures. Notably, and most importantly, we found that this model ranks very well with respect to the relative RMSE test, but with a much higher ECS (3.6°C) than similarly ranked models in (1). As suggested in (1), the lack of dust forcing in our LGM model may lower the equivalent ECS by ~0.3°C, but this is still well above the median ECS estimate of 2.3°C in (1).

Fyke and Eby’s revised LGM-derived ECS was quite similar to other LGM-based studies, such as Holden, et al. (2010). Criticism that the UVic model used had an atmospheric component that was perhaps insufficient to fully capture the climate state at the LGM was echoed in the RealClimate discussion as well as by coauthor Nate Urban in our interview.

Schmittner, et al. (2012) responded to Fyke and Eby by largely disagreeing with their discarding of some proxy records, but conceding that their model choice may well have led to underestimating ECS and uncertainty in their reconstruction:

This tentatively supports the conclusion in (1) that structural model uncertainties (in particular, formulations of atmospheric heat transport) may have led to systematic underestimation of ECS2xC in (2). Further study with new ensemble model experiments, including the modified heat flux formulation and LGM dust forcing, are necessary to quantify the effect of heat flux uncertainties on the best ECS2xC estimate.

Schmittner, et al. go on to suggest that further modeling be done to try to better test the effects of using more realistic models with their approach.

Several groups are doing that, or something very similar. One is Tamsin Edwards, who has teased her experiment but not revealed its results (yet). Another is Jules Hargreaves and James Annan, who discussed S11 and also teased their experiment some months back but likewise did not discuss their results.

Which brings us to today (or, technically, Wednesday). Annan and Hargreaves, 2012 (hereafter AH12) has been submitted to Climate of the Past – Discussion, and their results are now available. They used almost exactly the same proxy data as S11, but used a different model (in fact, an ensemble of the GCMs used in the PMIP2 project) and methodology to constrain the difference in climate between the present and the LGM. Their results share some similarities to S11 but also contain some differences.

AH12 use pseudo-proxy data to validate their reconstruction. Their fit to the proxy data is improved relative to S11 (correlation of 0.73 vs S11’s 0.53).

Figure 5: a) Validation with GCM-Generated Pseudo-Proxy Data and b) Fit to Proxy Data

One of the criticisms of S11 was that it found an LGM globally-averaged surface temperature that seemed awfully warm (areas where proxy data were available averaged a mere ~2°C colder than more modern temperatures) relative to other estimates, which show an LGM nearly three times that cold (e.g. von Deimling et al., 2006). This warmer LGM was necessarily responsible for much of the difference in their ECS value vs. the “canonical” estimate of 3°C. The authors attributed much of this difference to the use of warmer MARGO SST data vs. older (and cooler) data, but that explanation might appear somewhat insufficient, as the PMIP2 models that best fit the MARGO data themselves had ECS estimates closer to 3°C (Otto-Bliesner et al., 2009). Another odd result of S11 was the large discrepancy between their land only and ocean only results.

AH12 find an overall cooling at the LGM of ~4°C. Their land only and ocean only data are somewhat different, but are much closer than S11’s and are consistent within their uncertainties:

Figure 1: LGM Surface Air Temperature Reconstruction

Figure 2: LGM SST Reconstruction

In some ways, this represents a validation of S11: it’s certainly warmer than previous estimates, and the warm SSTs do arise from the MARGO data rather than some problem with S11. In other ways, however, it’s a contradiction of S11 and a validation of consensus estimates: the IPCC AR4’s estimate for LGM cooling was 4-7°C, consistent with AH12 but not S11.

AH12’s LGM-derived ECS is where I anticipate the greatest amount of well-meaning misunderstanding as well as outright misrepresentation. Why? Because it’s low: 1.7°C (1.2-2.4°C).

But!

One of the criticisms of S11 I raised with Nate Urban in our interview was the problem of the asymmetry of climate sensitivity during different climatic states- i.e. climate sensitivity itself may be smaller at colder times than it is during warmer times. So hypothetically a perfect estimate of equilibrium sensitivity derived from data from the LGM might be significantly lower than a perfect estimate of ECS in a doubled-CO2 future due to the non-linearity of certain feedbacks. While this asymmetry is by no means an unquestionably real phenomenon, there are some very good reasons to suspect it to be true (e.g. Crucifix, 2006; Hargreaves et al., 2007; Yoshimori et al., 2011). In fact, the authors of the MARGO SST data used by S11 themselves go out of their way to warn against mistaking an LGM-derived ECS as being comparable to 2xCO2 ECS for precisely this reason (Waelbroeck et al., 2009).

AH12 note this explicitly:

However, such a simplistic estimate is far from robust, as it ignores any asymmetry or nonlinearity which is thought to exist in the response to different forcings… The ratio between temperature anomalies obtained under LGM and doubled CO2 conditions found in previous modelling studies varies from 1.3… to over 2…

Therefore, a more apples-to-apples comparison (taking into consideration the asymmetry issue) of their findings to a doubling of CO2 might look more like 2.8°C, with a range of 1.56-4.8°C.

[All I’ve done is apply the average of asymmetry values (1.3-2) cited by AH12 to their central value of 1.7°C, while applying the low and high end asymmetry values to their lower and upper 95% CI values respectively. This is obviously meant to be illustrative of the difference taking asymmetry into account makes for 2xCO2 vs. LGM values rather than a rigorous quantitative exploration.]
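Spelled out, that rescaling is just the following; the only inputs are AH12’s LGM-derived ECS values and the asymmetry ratios they cite.

```python
# Illustrative arithmetic only: rescale AH12's LGM-derived ECS by the
# LGM-vs-2xCO2 asymmetry ratios (1.3 to 2) that AH12 cite from earlier
# modelling studies. Not a rigorous recalculation of their results.
lgm_central, lgm_lower, lgm_upper = 1.7, 1.2, 2.4   # deg C, AH12 LGM-derived ECS
ratio_low, ratio_high = 1.3, 2.0                    # asymmetry range cited by AH12

central_2xco2 = lgm_central * (ratio_low + ratio_high) / 2  # ~2.8 deg C
lower_2xco2 = lgm_lower * ratio_low                         # ~1.56 deg C
upper_2xco2 = lgm_upper * ratio_high                        # ~4.8 deg C

print(round(central_2xco2, 2), round(lower_2xco2, 2), round(upper_2xco2, 2))
```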

This puts the 2xCO2 ECS in line with consensus estimates such as the IPCC AR4 GCM-only estimate of 3±1.5°C. Interestingly, some of the S11 authors, using the same UVic model but with instrumental rather than LGM paleo data, found broadly similar results for ECS (Olson et al., 2012).

I’m not claiming to show what AH12 “really” says about ECS, but rather making a general point that often gets overlooked in discussions of ECS estimates derived from colder climates. And it’s certainly possible that my not-even-back-of-the-envelope extrapolation of their LGM ECS into a 2xCO2 ECS is horribly misguided for some reason of which I am as yet unaware- but I’ve inquired, and will dutifully revise this post if so.

More than anything, this is a place-marker in the event that the typical denialist spin cranks up as it has over papers in the past.

References:

  • Annan, J. D., and J. C. Hargreaves (2012), A new global reconstruction of temperature changes at the Last Glacial Maximum, Climate of the Past Discussions, 8(5), 5029–5051, doi:10.5194/cpd-8-5029-2012.
  • Clark, P. U., A. S. Dyke, J. D. Shakun, A. E. Carlson, J. Clark, B. Wohlfarth, J. X. Mitrovica, S. W. Hostetler, and A. M. McCabe (2009), The Last Glacial Maximum, Science, 325(5941), 710–714, doi:10.1126/science.1172873.
  • Crucifix, M. (2006), Does the Last Glacial Maximum constrain climate sensitivity?, Geophys. Res. Lett., 33(18), L18701, doi:10.1029/2006GL027137.
  • Fyke, J., and M. Eby (2012), Comment on “Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum,” Science, 337(6100), 1294–1294, doi:10.1126/science.1221371.
  • Hargreaves, J. C., A. Abe-Ouchi, and J. D. Annan (2007), Linking glacial and future climates through an ensemble of GCM simulations, Clim. Past, 3(1), 77–87, doi:10.5194/cp-3-77-2007.
  • Holden, P., N. Edwards, K. Oliver, T. Lenton, and R. Wilkinson (2010), A probabilistic calibration of climate sensitivity and terrestrial carbon change in GENIE-1, Climate Dynamics, 35(5), 785–806, doi:10.1007/s00382-009-0630-8.
  • Knutti, R., and G. C. Hegerl (2008), The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience, 1(11), 735–743, doi:10.1038/ngeo337.
  • Olson, R., R. Sriver, M. Goes, N. M. Urban, H. D. Matthews, M. Haran, and K. Keller (2012), A climate sensitivity estimate using Bayesian fusion of instrumental observations and an Earth System model, J. Geophys. Res., 117(D4), D04103, doi:10.1029/2011JD016620.
  • Otto-Bliesner, B. et al. (2009), A comparison of PMIP2 model simulations and the MARGO proxy reconstruction for tropical sea surface temperatures at last glacial maximum, Climate Dynamics, 32(6), 799–815, doi:10.1007/s00382-008-0509-0.
  • Schmittner, A., N. M. Urban, J. D. Shakun, N. M. Mahowald, P. U. Clark, P. J. Bartlein, A. C. Mix, and A. Rosell-Melé (2011), Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum, Science, 334(6061), 1385–1388, doi:10.1126/science.1203513.
  • Schmittner, A., N. M. Urban, J. D. Shakun, N. M. Mahowald, P. U. Clark, P. J. Bartlein, A. C. Mix, and A. Rosell-Melé (2012), Response to Comment on “Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum,” Science, 337(6100), 1294–1294, doi:10.1126/science.1221634.
  • von Deimling, T. S., A. Ganopolski, H. Held, and S. Rahmstorf (2006), How cold was the Last Glacial Maximum?, Geophys. Res. Lett., 33(14), L14709, doi:10.1029/2006GL026484.
  • Waelbroeck, C. et al. (2009), Constraints on the magnitude and patterns of ocean cooling at the Last Glacial Maximum, Nature Geoscience, 2(2), 127–132, doi:10.1038/ngeo411.
  • Yoshimori, M., J. C. Hargreaves, J. D. Annan, T. Yokohata, and A. Abe-Ouchi (2011), Dependency of Feedbacks on Forcing and Climate State in Physics Parameter Ensembles, Journal of Climate, 24(24), 6440–6455, doi:10.1175/2011JCLI3954.1.

Natural Gas Doesn’t Mean the End of Global Warming

Recently, a number of articles have been published gushing about the drop in US carbon emissions due to cheap natural gas, a glut that has largely come from fracking. You can see examples at The Atlantic here, or in Foreign Policy here.

These articles appeal to people who might vaguely understand there’s something to this whole climate change problem after all, but really don’t care for hippies at Greenpeace and the EPA telling coal companies to rein in their emissions, and can’t be arsed to get out and support market-based solutions to the GHG externality problem like a carbon tax or cap and trade. They’re not anti-science, really. They’re just anti-proactively doing anything meaningful about the problem.

So you can imagine how news that US emissions have declined has been a godsend. Their inner monologue probably goes something like, “Look! No pesky regulations, no inconvenient carbon pricing. The magic of the market at work!” And of course, “Suck it, commies Europe!”

Such articles fail to recognize two fundamental problems with the “Fracking Will Save Us All” meme.

1. Burning natural gas domestically doesn’t keep coal in the ground. As US coal consumption decreased, coal exports increased.

It’s great that US emissions dropped. But the atmosphere and ocean don’t care where the coal gets burned.

2. These low natural gas prices are unsustainable. The US Energy Information Administration (EIA) forecasts:

Because of the projected increase in natural gas prices relative to coal, EIA expects the recent trend of substituting coal-fired electricity generation with natural gas generation to slow and likely reverse over the next year. From April through August 2012, average monthly natural gas prices to electric generators increased by 34 percent, while coal prices fell slightly. EIA expects that coal-fired electricity generation will increase by 9 percent in 2013, while natural gas generation will fall by about 10 percent.

EIA expects carbon dioxide emissions from fossil fuels, which fell by 2.3 percent in 2011, to further decline by 2.4 percent in 2012. However, projected emissions increase by 2.8 percent in 2013, as coal regains some of its electric-power-generation market share.

If the authors of such nonsense find the EIA analyses too difficult to read, perhaps webcomics might be less intimidating: