Clouds represent one of the largest areas of uncertainty in climate modeling and sensitivity, and thus one of the last refuges of the deniers. This is especially true of those who still manage to garner respect and attention from serious journalists (e.g. Dick Lindzen, Freeman Dyson, and, to a lesser extent in terms of credibility if not overall obsession with clouds, Roy Spencer), but the claims are also rampant in the fever swamps home to common internet denialists. They vary but inevitably reduce to the idea that positive forcings will engage one or more negative feedbacks relating to cloud cover. Although the effect is always assumed to be global, the “evidence” offered, such as it is, comes almost exclusively from the lower latitudes.
Of course, there are several obvious, immediate problems with the existence of such negative feedback(s). As with any claim of low climate sensitivity, it ignores empirical evidence that our climate is fully capable of 5-6°C global changes in both directions (warming and cooling). Additionally, even if the hypothesized negative feedback existed only in the tropics and somehow functioned as a purely regional rather than global phenomenon (yes, I know, it’s best not to even try to make sense of it), the paleoclimatic evidence from the tropics still refuses to play ball. Likewise, incarnations like Lindzen’s “iris” hypothesis have the pesky shortcoming of not being supported, and in some cases being outright contradicted, by observational data.
Clement et al. present modeled and observational evidence for the existence of at least regional (NE Pacific) positive cloud feedback in their paper Observational and Model Evidence for Positive Low-Level Cloud Feedback appearing in the latest issue of Science (or here). By combining surface- and satellite-based observations of clouds over the NE Pacific (which offers a suitably long record), they conclude that “changes in subtropical stratocumulus [clouds] act as a positive feedback on climate in the region.” They also note that although observational sampling is at present too sparse to conclusively confirm it, their findings indicate that this behavior is likely “part of a dominant mode of global cloud variability.”
So how do the AR4/CMIP3 models fare in reproducing the observed behavior? Of the suite of models employed in the CMIP3 that can be tested, they find:
Models are grouped according to whether they have the wrong sign correlation relative to observations. By eliminating models successively on this basis, we are left with only two that simulate the correct sign correlations for all variables, the INMCM3.0 and the HadGEM1.
Only two? So then it’s probably safe to assume that the two are relatively close in terms of their overall behavior, right? Actually, it’s even more complicated. The two models that accurately reproduce the phenomena Clement et al. describe turn out to have vastly differing sensitivities, falling at opposite ends of the sensitivity spectrum: HadGEM1 has the highest sensitivity and INMCM3.0 the lowest. And, unsettlingly, the INMCM3.0 model deviates significantly from both the multimodel mean behavior and 20th-century observations in terms of capturing a weakening tropical atmospheric circulation. Which is to say that, at least for the tests Clement et al. evaluate, the model with the highest sensitivity behaves the most realistically.
Now, this does not necessarily mean that the canonical ~3°C sensitivity estimate is too low, as the individual models that most skillfully capture specific behaviors or feedbacks aren’t necessarily the most skillful overall. But it certainly seems to be another solid blow against the “low-cloud, negative feedback” crowd.