It is also not true that the second pillar – the UN science report known as the IPCC report – proves a consensus. The flagship study on which the IPCC report relies, known as the hockey stick, which shows an unprecedented rise in 20th century temperatures, has been thoroughly discredited by scientists on both sides of the debate. Moreover, the UN report relies on projections of explosive increases in greenhouse gas emissions by poor countries over the next century, based on the political decision by the report’s authors that countries such as Algeria will be as wealthy as, or wealthier than, the United States.
The third pillar supposedly proving that the science is settled – that the Arctic is melting – is based not so much on hard science as on political science. Arctic temperatures are no warmer than they were in the 1930s. Similarly, the thickness of Arctic glaciers and sea ice appears to vary naturally by as much as 16 percent annually. These and other facts that alarmists find inconvenient would seem to indicate that projections of an Arctic climate catastrophe are speculative at best.
Today I would like to conclude my series on the Four Pillars of Climate Alarmism by discussing the problems associated with global climate models. Let me begin by briefly explaining what climate models are and how they function. Climate models help scientists describe changes in the climate system. They are not models in the conventional sense; that is, they are not physical replicas. Rather, they are mathematical representations of the physical laws and processes that govern the earth’s climate. According to Dr. David Legates of the University of Delaware, climate models “are designed to be descriptions of the full three-dimensional structure of the earth's climate.” Dr. Legates explained that models are used “in a variety of applications, including the investigation of the possible role of various climate forcing mechanisms and the simulation of past and future climates.”
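To make “mathematical representation” concrete, consider the simplest ancestor of the three-dimensional models Dr. Legates describes: a zero-dimensional energy balance model that treats the whole planet as a single point. The sketch below is a minimal illustration using rounded textbook values for every parameter; it is not drawn from any production climate model.

```python
# Minimal zero-dimensional energy balance model: the planet's temperature
# adjusts until absorbed sunlight balances outgoing infrared radiation.
# All parameter values are rounded textbook figures, for illustration only.

SOLAR_CONSTANT = 1361.0  # W/m^2, incoming solar radiation
ALBEDO = 0.30            # fraction of sunlight reflected back to space
EMISSIVITY = 0.61        # effective emissivity; crudely stands in for the greenhouse effect
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
HEAT_CAPACITY = 4.0e8    # J/(m^2 K), roughly an ocean mixed layer

def step(temp_k, dt_seconds):
    """Advance the temperature one time step by simple explicit integration."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4.0  # averaged over the sphere
    emitted = EMISSIVITY * SIGMA * temp_k ** 4
    return temp_k + dt_seconds * (absorbed - emitted) / HEAT_CAPACITY

temp = 255.0                 # start from an arbitrary cold state, in kelvins
for _ in range(20000):       # integrate forward in daily steps to equilibrium
    temp = step(temp, 86400.0)
print(f"Equilibrium temperature: {temp:.1f} K")  # about 288 K with these values
```

Even this toy captures the basic logic: physical laws, expressed as equations, stepped forward in time. A real model does the same thing at millions of grid points with dozens of interacting processes.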
Thousands of climate change studies rely on computer models. The Arctic Council, whose work I addressed last week, stated that Arctic warming and the impacts stemming from that warming are firmly established by computer models. “While the models differ in their projections of some of the features of climate change,” the Arctic Council wrote, “they are all in agreement that the world will warm significantly as a result of human activities and that the Arctic is likely to experience noticeable warming particularly early and intensely.”
Similarly, the IPCC, which I also discussed in an earlier speech, relied on such models to project a long-term temperature increase ranging from 2.5 to 10.4 degrees Fahrenheit, along with assorted and potentially dangerous climate changes, over the next century. According to Dr. Kenneth Green, Dr. Tim Ball, and Dr. Steven Schroeder, “politicians clearly do not realize that the major conclusions of the IPCC’s reports are not based on hard evidence and observation but rather largely upon the output of assumption-driven climate models.”
PUTTING MODELS IN CONTEXT
Alarmists cite the results of climate models as proof of the catastrophic warming hypothesis. Consider one alarmist scribe, who wrote recently, “Drawing on highly sophisticated computer models, climate scientists can project – not predict – how much temperatures may rise by, say, 2100 if we carry on with business as usual.” He continued: “Although scenarios vary, some get pretty severe. So do the projected impacts of climate change: rising sea levels, species extinctions, glacial melting, and so forth.”
Sounds pretty scary, but the statement is completely vacuous: It sheds no light on the likelihood or reliability of such projections. If, for example, a model shows a significant temperature increase over the next 50 years, how much confidence do we have in that projection?
Attaching probabilities to model results is extremely difficult and rife with uncertainties. Writing in Nature in 2000, four climate modelers noted that “[a] basic problem with all such predictions to date has been the difficulty of providing any systematic estimate of uncertainty.” This problem stems from the fact that “these [climate] models do not necessarily span the full range of known climate system behavior.” According to the National Academy of Sciences, “…without an understanding of the sources and degree of uncertainty, decision-makers could fail to define the best ways to deal with the serious issue of global warming.” This fact should temper the enthusiasm of those who support Kyoto-style regulations that will harm the American economy.
Note too the distinction between “project” and “predict.” The alarmist writer quoted earlier creates the misimpression that a projection is somehow more solid than a prediction. But a projection is simply the output of a model calculation. Put another way, it is only as good as the model’s equations and inputs. As we will see later in this speech, such inputs, or assumptions about the future, can be extremely flawed, if not totally divorced from reality. And this, to be sure, is only one of many technical shortcomings that limit the scientific validity of climate modeling.
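A hedged sketch makes the point about inputs vivid: run one trivial warming formula many times while varying two assumed inputs, and the “projection” spreads across a wide range. The sensitivity and emissions ranges below are invented for the illustration; they are not drawn from the IPCC or any published model.

```python
import math
import random

# Illustrative only: one toy "projection" run 10,000 times while two assumed
# inputs vary. The spread in the output is inherited entirely from the
# inputs; none of these numbers comes from a real model.

random.seed(42)

def toy_projection(sensitivity_c, co2_multiple):
    """Warming if CO2 reaches the given multiple of today's level, assuming
    a logarithmic response (a standard simplification, used here as a toy)."""
    return sensitivity_c * math.log2(co2_multiple)

samples = sorted(
    toy_projection(
        random.uniform(1.5, 4.5),  # assumed sensitivity, C per CO2 doubling
        random.uniform(1.5, 3.0),  # assumed CO2 multiple reached by 2100
    )
    for _ in range(10_000)
)
print(f"median projection: {samples[5000]:.1f} C")
print(f"5th to 95th percentile: {samples[500]:.1f} to {samples[9500]:.1f} C")
```

The point is not the particular numbers but the shape of the exercise: the model did no work that the input assumptions had not already done.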
CLIMATE MODELING ‘IN ITS INFANCY’
Unfortunately, rarely does any scrutiny accompany model simulations. But based on what we know about the physics of climate models, as well as the questionable assumptions built into the models themselves, we should be very skeptical of their results. This is exactly the view of the National Academy of Sciences. According to NAS, “Climate models are imperfect. Their simulation skill is limited by uncertainties in their formulation, the limited size of their calculations, and the difficulty of interpreting their answers that exhibit as much complexity as in nature.”
At this point, climate modeling is still a very rudimentary science. As Richard Kerr wrote in Science magazine, “Climate forecasting, after all, is still in its infancy.” Models, while helpful for scientists in understanding the climate system, are far from perfect. According to climatologist Gerald North of Texas A&M University, “It's extremely hard to tell whether the models have improved; the uncertainties are large.” Or as climate modeler Peter Stone of the Massachusetts Institute of Technology put it, “The major [climate prediction] uncertainties have not been reduced at all.” Based on these uncertainties, cloud physicist Robert Charlson, professor emeritus at the University of Washington, Seattle, has concluded: “To make it sound like we understand climate is not right.”
This is not to deny that climate modeling has improved over the last three decades. Indeed, scientists have constructed models that more accurately reflect the real world. In the 1970s, models could describe only the atmosphere; today’s models can describe – albeit inadequately – the atmosphere, land surface, oceans, sea ice, and other variables.
But greater complexity does not mean more accurate results. In fact, the more variables scientists incorporate, the more uncertainties arise. Dr. Syukuro Manabe, who helped create the first climate model that coupled the atmosphere and oceans, has observed, “Models that incorporate everything from dust to vegetation may look like the real world, but the error range associated with the addition of each new variable could result in near total uncertainty. This would represent a paradox: The more complex the models, the less we know.” We are often reminded that the IPCC used sophisticated modeling techniques in projecting temperature increases for the coming century. But as William O’Keefe and Jeff Kueter of the George C. Marshall Institute pointed out in a recent paper, “The complex models envisioned by the IPCC have many more than twenty inputs, and many of those inputs will be known with much less than 90 percent confidence.”
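The arithmetic behind O’Keefe and Kueter’s observation is easy to check. If a calculation depends on twenty inputs, each independently known with 90 percent confidence, the chance that all twenty are right is 0.9 raised to the twentieth power, about 12 percent. A back-of-the-envelope sketch, treating the inputs as independent, which is itself a simplifying assumption:

```python
# Joint confidence when a result depends on n inputs, each independently
# right with probability p. Independence is assumed purely for illustration.
for p in (0.95, 0.90, 0.80):
    for n in (10, 20, 30):
        print(f"p = {p:.2f}, n = {n:2d}: joint confidence = {p ** n:6.1%}")
```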
Also, tinkering with climate variables is a delicate business – getting one variable wrong can greatly skew model results. Dr. David Legates has noted that “anything you do wrong in a climate model will adversely affect the simulation of every other variable.” Take precipitation, for example. As Dr. Legates noted, “Precipitation requires moisture in the atmosphere and a mechanism to cause it to condense (causing the air to rise over mountains, by surface heating, as a result of weather fronts, or by cyclonic rotation). Any errors in representing the atmospheric moisture content or precipitation-causing mechanisms will result in errors in the simulation of precipitation.” “Clearly,” Dr. Legates concluded, “the interrelationships among the various components that comprise the climate system make climate modeling difficult.”
The IPCC, in its Third Assessment Report, noted this problem, and many others, with climate modeling, including:
• “Discrepancies between the vertical profile of temperature change in the troposphere seen in observations and models.”
• “Large uncertainties in estimates of internal climate variability (also referred to as natural climate variability) from models and observations.”
• “Considerable uncertainty in the reconstructions of solar and volcanic forcing which are based on limited observational data for all but the last two decades.”
• “Large uncertainties in anthropogenic forcings associated with the effects of aerosols.”
• “Large differences in the response of different models to the same forcing.”
THE SURFACE AND THE TROPOSPHERE
I want to delve a little deeper into the first point, concerning discrepancies between temperature observations in the troposphere and those at the surface. This discrepancy is very important, because it tends to undermine a key assumption supporting the warming hypothesis – that more rapid warming should occur in the troposphere than at the surface, creating the so-called greenhouse “fingerprint.” But the National Research Council (NRC) believes real-world temperature observations tell a different story.
In January 2000, an NRC panel examined the output from several climate models to assess how well they mimicked the observed surface and lower atmospheric temperature trends. They found that, “Although climate models indicate that changes in greenhouse gases and aerosols play a significant role in defining the vertical structure of the observed atmosphere, model–observation discrepancies indicate that the definitive model experiments have not been done.” John Wallace, the panel chairman and Professor of Atmospheric Sciences at the University of Washington, put it more bluntly: “There really is a difference between temperatures at the two levels that we don't fully understand.”
More recently, researchers at the University of Colorado, Colorado State University, and the University of Arizona examined the differences between real-world temperature observations and the results of four widely used climate models. They probed the following question: Do the differences stem from uncertainties in how greenhouse gases and other variables affect the climate system, or from chance model fluctuations – that is, the variability caused by the model’s flawed representation of the climate system?
As it turned out, neither of these factors was to blame. According to the researchers, “Significant errors in the simulation of globally averaged tropospheric temperature structure indicate likely errors in tropospheric water-vapor content and therefore total greenhouse-gas forcing, precipitable water, and convectively forced large-scale circulation.” Moreover, based on the “significant errors of simulation,” the researchers called for “extreme caution in applying simulation results to future climate-change assessment activities and to attribution studies.” They also questioned “the predictive ability of recent generation model simulations, the most rigorous test of any hypothesis.” There doesn’t seem to be much wiggle room here: Climate models are useful tools, but they are unable in important respects to simulate the climate system, which undermines their “predictive ability.” Based on this hard fact, let me bring you back to the alarmist writer I referenced earlier, who wrote, “Drawing on highly sophisticated computer models, climate scientists can project – not predict – how much temperatures may rise by, say, 2100 if we carry on with business as usual.” Based on what I have just recounted, this is disingenuous at best. I think a fair-minded person would find it horribly misleading and inaccurate.
CLOUDS AND WATER VAPOR
Another serious model limitation concerns the interaction of clouds and water vapor with the climate system. Dr. Richard S. Lindzen, professor of meteorology at MIT, reports “terrible errors about clouds in all the models.” He noted that these errors “make it impossible to predict the climate sensitivity because the sensitivity of the models depends primarily on water vapor and clouds. Moreover, if clouds are wrong,” Dr. Lindzen said, “there’s no way you can get water vapor right. They’re both intimately tied to each other.”
In fact, water vapor and clouds are the main absorbers of infrared radiation in the atmosphere. Even if all other greenhouse gases, including carbon dioxide, were to disappear, we would still be left with over 98 percent of the current greenhouse effect. But according to Dr. Lindzen, “the way current models handle factors such as clouds and water vapor is disturbingly arbitrary. In many instances the underlying physics is simply not known.”
Dr. Lindzen notes that this is a significant flaw, because “a small change in cloud cover can strongly affect the response to carbon dioxide.” He further notes, “Current models all predict that warmer climates will be accompanied by increasing humidity at all levels.” Such behavior “is an artifact of the models since they have neither the physics nor the numerical accuracy to deal with water vapor.”
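Dr. Lindzen’s point that a small change in clouds can swing the answer can be illustrated with the standard textbook feedback relation: if the no-feedback warming from a CO2 doubling is dT0, feedbacks amplify it to dT0 / (1 − f), where f lumps together water vapor, clouds, and the rest. The values below are common textbook illustrations, not figures from this speech or from any particular model.

```python
# Textbook feedback arithmetic: realized warming dT = dT0 / (1 - f), where
# dT0 is the no-feedback warming for doubled CO2 (about 1.2 C is a common
# textbook figure) and f is the combined feedback fraction from water vapor,
# clouds, and other processes. Small shifts in f, the very quantity the
# models treat arbitrarily, swing the answer widely.
DT0 = 1.2  # C, assumed no-feedback warming for a CO2 doubling
for f in (0.3, 0.5, 0.6, 0.7):
    print(f"feedback fraction f = {f:.1f}: projected warming = {DT0 / (1 - f):.1f} C")
```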
AEROSOLS
Along with water vapor and clouds, aerosols – particles from sources such as dust storms, forest fires, fossil fuel combustion, and volcanic eruptions – represent another major uncertainty in climate modeling. Simply put, scientists have only a limited understanding of how aerosols influence the climate system. This, said the National Academy of Sciences, represents “a large source of uncertainty about future climate change.”
Further, the Strategic Plan of the U.S. Climate Change Science Program (CCSP), which was reviewed and endorsed by the National Research Council, concluded that the “poorly understood impact of aerosols on the formation of both water droplets and ice crystals in clouds also results in large uncertainties in the ability to project climate changes.”
Climate researcher and IPCC reviewer Dr. Vincent Gray reached an even stronger conclusion, stating that “the effects of aerosols, and their uncertainties, are such as to nullify completely the reliability of any climate models.”
DATA GAPS
Another issue affecting model reliability is the relative lack of available climate data, something the National Research Council addressed in 2001. According to the NRC, “[a] major limitation of these model forecasts for use around the world is the paucity of data available to evaluate the ability of coupled models to simulate important aspects of past climate.”
There is plenty of evidence to support this conclusion. Consider, for example, that most of the surface temperature record covers less than 50 years, and only a few stations are as much as 100 years old. The only reliable data come from earth-orbiting satellites that survey the entire atmosphere. Notably, while these temperature measurements agree with those taken by weather balloons, they disagree considerably with the surface record. There is also concern about an upward bias in the surface temperature record caused by the “urban heat island effect.” Most meteorological stations in Western Europe and eastern North America are located at airports on the edges of cities, where they have been enveloped by urban expansion. In the May 30, 2003 issue of Remote Sensing of Environment, David Streutker, a Rice University researcher, found an increase in the Houston urban heat island effect of nearly a full degree Celsius between 1987 and 1999. This study confirmed research published in the March 2001 issue of Australian Meteorological Magazine, which documented a significant heat island effect even in small towns. Although climate modelers have made adjustments to compensate for the urban heat island effect, other researchers have shown that such adjustments are inadequate. University of Maryland researchers Eugenia Kalnay and Ming Cai, writing in Nature, concluded that the effect of urbanization and land-use changes on U.S. average temperatures is at least twice as large as previously estimated.
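To see how an urban bias can masquerade as warming, consider a hedged sketch with entirely synthetic data: a “rural” station records a small true background trend, while its “urban” neighbor adds a slowly growing local warm bias as the city expands. Fitting a simple trend to each series shows the inflation. Every magnitude below is invented for the illustration.

```python
import random

# Synthetic illustration of urban heat island bias. Both stations share the
# same true background trend; the urban station adds a creeping local warm
# bias. All magnitudes here are invented for this example.
random.seed(1)
YEARS = 50
TRUE_TREND = 0.005   # C/year of genuine background change (assumed)
UHI_GROWTH = 0.02    # C/year of growing urban warm bias (assumed)

def linear_trend(series):
    """Ordinary least-squares slope of a yearly series, in C per year."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

rural = [TRUE_TREND * yr + random.gauss(0, 0.2) for yr in range(YEARS)]
urban = [r + UHI_GROWTH * yr for yr, r in enumerate(rural)]

print(f"rural trend: {linear_trend(rural):+.3f} C/yr")  # near the true 0.005
print(f"urban trend: {linear_trend(urban):+.3f} C/yr")  # inflated by about 0.02
```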
MODEL SCENARIOS
Finally, to expand on a point I raised earlier, climate models are used to create so-called “climate scenarios.” These scenarios help scientists describe how the climate system might evolve. To arrive at a particular scenario, scientists feed the models assumptions about future levels of economic growth, population growth, greenhouse gas emissions, and other factors. However, as the IPCC’s report shows, these assumptions can produce wildly exaggerated scenarios that, to put it mildly, have little scientific merit. In 2003, scientists with the federal Climate Change Science Program agreed that potential environmental, economic, and technological developments “are unpredictable over the long time-scales relevant for climate research.” William O’Keefe and Jeff Kueter of the George C. Marshall Institute reiterated this point recently. As they wrote, “The inputs needed to project climate for the next 100 years, as is typically attempted, are unknowable. Human emissions of greenhouse gases and aerosols will be determined by the rates of population and economic growth and technological change. Neither of these is predictable for more than a short period into the future.” Put simply, computer model simulations cannot prove that greenhouse gas emissions will cause catastrophic global warming. Again, here’s the National Academy of Sciences: “The fact that the magnitude of the observed warming is large in comparison to natural variability as simulated in climate models is suggestive of such a linkage, but it does not constitute proof of one because – [and this is a point I want to emphasize] – the model simulations could be deficient in natural variability on the decadal to century time scale.”
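Part of the Marshall Institute’s point is simple compound arithmetic: modest differences in an assumed annual growth rate compound into enormous differences over a century. The growth rates below are arbitrary illustrations, not actual IPCC scenario values.

```python
# Compound-growth arithmetic behind scenario divergence: small differences in
# an assumed annual emissions growth rate compound dramatically over 100
# years. These rates are arbitrary illustrations, not IPCC scenario inputs.
for annual_growth in (0.005, 0.01, 0.02, 0.03):
    multiple = (1 + annual_growth) ** 100
    print(f"{annual_growth:.1%}/yr for 100 years -> emissions grow {multiple:.1f}-fold")
```

A half-point difference in the assumed rate, well inside any honest margin of error for a century-long economic forecast, changes the answer severalfold; the scenario, not the physics, does most of the work.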
CONCLUSION
It’s clear that climate models, even with increasing levels of sophistication, still contain a number of critical shortcomings. With that in mind, policymakers should reject ridiculous statements that essentially equate climate model runs with scientific truth.
As I discussed today, climate modeling is in its infancy. It cannot predict future temperatures with any reasonable degree of certainty. The physical world is exceedingly complex, and the more complex the models become, the more potential errors they contain. We understand little about how to accurately model the troposphere, or about the roles of aerosols, clouds, and water vapor. Moreover, there are enormous data gaps in the very short temperature records we do have. And surface data often conflict with more accurate balloon and satellite data.
Models can enhance scientists’ understanding of the climate system, but, at least at this point, cannot possibly serve as a rational basis for policymaking. It seems foolish in the extreme to undermine America’s economic competitiveness with policies based on computer projections about what the world will look like in 100 years. In short, we have no idea what the world will look like in 20 years, or even 10 years.
This concludes my series on the Four Pillars of Climate Alarmism. I hope these speeches will prod my colleagues to examine the science of climate change. In my view, if they examine the facts and evidence closely and dispassionately, they will find no “consensus” that catastrophic global warming is occurring or will occur – and further, they will recognize that Kyoto-style policies are scientifically unjustified, environmentally useless, and economically harmful.
It is clear that the cost of ignoring this science is enormous. Wharton Econometrics Forecasting Associates estimates that implementing Kyoto would cost an American family of four $2,700 annually. Inducing the United States to adopt policies that erode its economic power in world markets appears to be the goal of some economic rivals, as two international leaders have candidly admitted. [chart] Margot Wallstrom, the EU’s Environment Commissioner, said that Kyoto is “about leveling the playing field for big businesses worldwide.” [chart] French President Jacques Chirac, in a November 2000 speech at the Hague, said that Kyoto represents “the first component of an authentic global governance.”
Let us hope that America’s leadership has the wisdom not to fall prey to their openly admitted agenda.