Greeting 

Thanks for visiting this website describing my career accomplishments. Overviews of how I have arrived at this point in my career, what skills I have developed along the way, and what I have accomplished in the research community are available by following the links in the left menu. Also in the left menu is a link to a page where you may download essays (informal writings that are not peer reviewed) on topics related to climate and weather research. Below, I post occasional entries that provide some background on news events relating to climate science and the impacts of climate change on day-to-day life.

Current Events of Interest to Me

February 14, 2009

Anyone looking for a job?  You may have never considered a government position, but Obama really needs a Secretary of Commerce. It's not a glamorous position, according to this AP story, but I'm sure it pays well.

February 1, 2009

Senator Inhofe has released the most recent version of the Senate Minority report on climate change. It represents the antithesis of Al Gore's movie in that it is an advocacy piece for the view that greenhouse gases have caused zero warming.

An advocacy document is a political document. Its only function is to argue, from as many perspectives as possible, for the enactment of a particular policy. This is the way government works. Groups voice their support for the government to take a role in some activity, in this case climate change. The opposition has the right to voice why the government shouldn't support the activity. Both sides try to claim that they represent either the majority opinion or the authoritative view. This goal has nothing to do with producing a coherent analysis marked by integrity. Thus, both of these advocacy documents are internally inconsistent and inaccurate in some ways.

December 31, 2008

There is a difference between a prediction, a forecast, and a projection. Herein lies the fundamental misconception of the global warming denier's blog entry on cold temperatures in 2008 that I referenced in my December 11, 2008 post.

Here is an example outside of the context of scientific research: your daily commute.  Imagine it is one of THOSE days.  You wake up late on a day in which an important client is visiting, and you don't have the time to turn on the TV news for commute reports.  You run to the car, flip on the radio -- it's busted.  Thoughts about how much THAT'S gonna cost run through your head, along with the idea that you should call your buddy who might be on the road.  Your cell phone is dead, because you forgot to charge it the night before.  How are you going to know if you'll make it to work without irritating your client?

You know that if traffic is heavy on your primary route, your commute takes 45 minutes to an hour; in light traffic, 25 to 40 minutes; and an alternative route with light traffic gets you there in 30 to 40 minutes. This is akin to what is called a prediction in scientific work. The commute times are based on your past experience, but they do not necessarily help you determine, in this particular instance, whether you will arrive at work before your client leaves, never to return.

You need a forecast, but the information you would use to create a forecast -- the current traffic report -- is not available to you.  You can't create a forecast.

As you approach the major road in your primary route, you see that traffic is heavy. This allows you to create scenarios from which you can estimate an arrival time. If the traffic is heavy here, would you expect it to be heavier or lighter elsewhere along the primary route? Does heavy traffic here mean the alternative route is likely to have light traffic? Scientists would say that you have made a projection of your arrival time. Your projection contains some information about the current traffic conditions and relies on the information from your predictions to give a range of arrival times. But you still can't warn your client, because your phone is dead, even though you've had it plugged into the lighter socket for the past 10 minutes. It turns out that you didn't forget to charge the phone. Instead, the battery is kaput. It's one of THOSE days.
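
To make the distinction concrete, here is a minimal Python sketch of the commute example. The commute-time ranges are the illustrative values from the story above, and the scenario labels are my own, nothing standard:

    # Prediction vs. projection, using the commute analogy. The commute-time
    # ranges (in minutes) are the illustrative values from the story above.
    predictions = {
        "primary, heavy traffic": (45, 60),
        "primary, light traffic": (25, 40),
        "alternate, light traffic": (30, 40),
    }

    def arrival_range(plausible_scenarios):
        """Combine the per-scenario ranges into one range of arrival times."""
        lows, highs = zip(*(predictions[s] for s in plausible_scenarios))
        return min(lows), max(highs)

    # A prediction uses past experience alone: every scenario is still possible.
    print("prediction:", arrival_range(predictions))                   # (25, 60)

    # A projection folds in one current observation: the primary route is
    # heavy, which rules out the light-traffic primary scenario.
    print("projection:", arrival_range(["primary, heavy traffic",
                                        "alternate, light traffic"]))  # (30, 60)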

Here is how your commute relates to climate change information. The IPCC has examined climate model simulations of future changes under a number of plausible scenarios for increasing greenhouse gases and refers to them as "projections" rather than forecasts. The reason is that a lot of the ancillary information needed to produce a forecast is unknowable: volcanic eruptions, greenhouse gas emissions, some solar cycles, deforestation. Since the climate simulations had to be completed shortly after the year 2000, many of these factors were specified from observations only through the year 2000. The representation of these factors after the year 2000 is simplified in what are considered to be plausible, but incomplete, future scenarios. Disagreements between the projected and observed temperature after the year 2000, say for 2008, may point more to the limitations of the projection due to the lack of exact ancillary data than to a failure of the predictions of global warming theory.

The next round of assessments will have ancillary data specified through 2010 and will permit a more precise analysis of the factors responsible for the relatively cool years of 2007 and 2008, and what might be a cool 2009. Before then, more opportunities should arise to comment on the limitations of climate projections and whether they have any utility as forecasts.

December 11, 2008

A week after I posted that anthropogenic climate change deniers would have a heyday with the relatively cool temperatures of 2008, a less thoughtful and a very thoughtful discussion have been posted by one member of said group. Thoughtful though it may be (and I encourage all to read it), the latter discussion contains many misconceptions about scientific inference, which gives me an opportunity to provide some education on the matter.

The heart of this blogger's argument lies in two questions: what does it mean for observations to be consistent with a theory, and how is probability used to accept or reject a theory? A perfect scientific design would allow an experiment in which a process is isolated from all other processes in order to gauge the magnitude of its response to controlled changes. I know that is such a generalized statement that it makes little sense on its own, so consider an example. Physics lab classes are filled with illustrations, such as a soil physics experiment in which the soil's capacity to transport water is tested for sensitivity to soil type by using a horizontal soil column (to remove the effect of gravity and isolate the soil transport process) with a constant water pressure at one end. If a theory were posited beforehand that predicted the water flow would not be affected by soil type, the measurements would determine whether any situation existed in which the theory's prediction held. I can say from performing that experiment that such a theory would be rejected: the only tests that would produce results consistent with its prediction are those in which the makeup of the soil was changed by a trivial amount that wouldn't really constitute a different soil type.
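
To put numbers on that example: under Darcy's law, the flow through a horizontal column is proportional to the soil's hydraulic conductivity, which spans many orders of magnitude across soil types. Here is a hedged sketch using textbook-representative conductivities (my assumed values, not measurements from that lab):

    # Darcy's law for a horizontal soil column with a constant head difference:
    # specific discharge q = K * dh / L. Conductivities K are representative
    # textbook values (assumptions), in cm/s.
    K = {"sand": 1e-2, "silt": 1e-5, "clay": 1e-8}
    dh = 10.0   # head difference across the column, cm (assumed)
    L = 100.0   # column length, cm (assumed)

    for soil, k in K.items():
        print(f"{soil}: q = {k * dh / L:.1e} cm/s")

    # Flow differs by orders of magnitude across soil types, so a theory that
    # predicts no dependence on soil type is flatly rejected by measurement.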

The test for human-induced global warming is complicated by two factors: (1) it is impossible to build an alternative earth in a laboratory in order to isolate processes, and (2) future conditions are completely unknown, and past conditions, while better quantified, are insufficient to provide the certainty of the soil physics example.

There are two main techniques that can be used to examine the question of whether an increase in greenhouse gases and natural variations (solar, glacial cycles, volcanoes, etc.) can alter the climate to similar magnitudes. One approach is to examine the correlation between an estimate of temperature and these factors. The other approach is to use a numerical model of the ocean, atmosphere, land, and ice processes in which the climate forcing is altered systematically.

The trouble with the first approach is that there are few historical instances, if any, in which greenhouse gases increased without solar increases. That is because the primary mechanism for greenhouse gas release in the past is biotic abundance. It is difficult to quantify separately the statistical effects of solar variations and greenhouse gases when they are intertwined in the past record. The best that can be done is to quantify rates of change and anti-correlations. For example, temperatures over the past decade have been well above normal even though solar energy was at a minimum, as suggested by the sunspot cycle.
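
A toy regression shows why this matters. In the synthetic example below (all coefficients and noise levels are my assumptions), the solar and greenhouse series are nearly parallel, and while their combined effect on "temperature" is well determined, the split between the two is not:

    import numpy as np

    # Synthetic illustration of collinearity: solar and greenhouse "forcings"
    # rise nearly in parallel, as in the past record described above.
    rng = np.random.default_rng(0)
    n = 200
    ghg = np.linspace(0.0, 1.0, n)                 # rising greenhouse forcing
    solar = ghg + 0.02 * rng.standard_normal(n)    # nearly the same trajectory
    temp = 0.8 * ghg + 0.2 * solar + 0.1 * rng.standard_normal(n)

    X = np.column_stack([ghg, solar, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    print("corr(ghg, solar) =", np.corrcoef(ghg, solar)[0, 1])  # ~0.998
    print("fitted ghg, solar coefficients:", coef[:2])
    # The sum of the two coefficients is pinned down near 1.0, but the
    # individual values are poorly constrained: the record alone cannot
    # attribute the warming to one forcing or the other.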

The trouble with the second approach is that the numerical models are inexact representations of real-world processes. However, it is possible to perform cleaner experiments than what can be accomplished with the statistical approach, in that specific climate forcings can be turned off in the numerical models. Thus, a number of models can be run with the same set of climate forcings in order to quantify the range of plausible outcomes.

Finally, I have covered just enough background to talk about consistency with observations. Among the claims of the IPCC are that (1) warming in the first half of the 20th century was very likely due to solar variations, (2) warming since the 1950s was very likely due to greenhouse gases, and (3) warming in the next two decades is expected to be about 0.2 C per decade so long as unforeseeable events, such as volcanic eruptions, do not occur. These claims rely heavily on model simulations, as described in the second approach above, in which greenhouse gas forcing was either turned on or off while all sources of well-quantified natural variability, such as some solar variations and volcanic eruptions, were retained (natural variability that is poorly quantified, or not quantified at all, cannot be used).

Consistency with observations for claims (1) and (2) is checked by overlaying results in the 20th century from the two sets of model simulations (one with and the other without greenhouse gas increases) on top of estimates of observed temperature. The simulations with greenhouse gas increases predict a different range of plausible outcomes in the latter half of the 20th century compared to simulations without greenhouse gas increases. The observations lie within the range of plausible outcomes of the simulations with greenhouse gas increases.
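
The logic of that overlay can be sketched with a toy model. Everything below is an assumption for illustration (a zero-dimensional stand-in, made-up forcings, and synthetic "observations"), not any model the IPCC actually uses:

    import numpy as np

    # Toy version of the attribution experiment: run an ensemble with the
    # greenhouse forcing switched on or off, then ask whether "observations"
    # fall inside the 5-95% envelope of each ensemble.
    rng = np.random.default_rng(1)
    years = np.arange(1900, 2001)
    natural = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11.0)  # crude solar cycle
    ghg = 0.5 * ((years - 1900) / 100.0) ** 2                  # accelerating forcing

    def ensemble(ghg_on, members=20):
        forcing = natural + (ghg if ghg_on else 0.0)
        return forcing + 0.08 * rng.standard_normal((members, years.size))

    with_ghg, without_ghg = ensemble(True), ensemble(False)
    obs = ensemble(True, members=1)[0]  # synthetic stand-in for the observed record

    def inside_envelope(ens):
        low, high = np.percentile(ens, [5, 95], axis=0)
        return (obs >= low) & (obs <= high)

    late = years >= 1970
    print("late-century years inside GHG-on envelope: ", inside_envelope(with_ghg)[late].mean())
    print("late-century years inside GHG-off envelope:", inside_envelope(without_ghg)[late].mean())
    # The observed series stays (mostly) inside the envelope that includes
    # greenhouse forcing and falls outside the one that omits it; that is the
    # sense in which observations are "consistent" with one and not the other.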

From this evidence, the claim is that the observed temperature is consistent with model predictions that include greenhouse gas increases and inconsistent with predictions that do not. Thus, the inference is that it is very likely that much of the warming in the last half of the 20th century was caused by greenhouse gas increases. The reason the inference cannot be as exact as in the soil physics example is that the IPCC recognizes some factors are not included, due to limitations of the numerical models and lack of precision in the measurements of solar variability, volcanic discharge, and changes in land use, like deforestation and agricultural production.

Claim (3) requires another somewhat long post that I should have up in a few days to clarify the difference between a forecast and a prediction.

December 5, 2008

The landscape of federal politics as it relates to climate and energy policy is changing rapidly. Congressman Waxman has won the young-lion/old-lion fight with Congressman Dingell. Waxman is now the chairman of the House Energy and Commerce Committee, though the webpage has yet to be updated to reflect this change. Furthermore, President-elect Obama is on record with a statement that the United States will be an international leader in the effort to mitigate greenhouse gas induced climate change, and he will nominate New Mexico governor Bill Richardson to fill the cabinet position of Commerce Secretary. There is a lot of ground to cover here to learn the specifics of what these three men might want to do. One thing is certain: they all favor very aggressive changes to the way the United States produces electricity and regulates emissions from vehicles. My initial thought is that their proposals are heavy on mitigation, and I would like to see more projections of the economic benefits that are claimed to be far greater than the investment needed now to implement their ideas. I hope they have put some thought into the cost of adaptation, and that they invest increasingly in research to better quantify the impacts of climate changes over the next 40-50 years that will occur regardless of their mitigation tactics.

December 3, 2008

It is amusing to those in denial about human-induced climate change when temperatures are unseasonably cold, as they have been recently. While denialists are quick to point out to alarmists that a few hot years cannot prove that climate change is caused by humans, they use the same faulty logic to state that a cold year, or a few cold years, is evidence against it. I see this as an educational opportunity, since both claims are incorrect.

October 2008 is not the warmest October on record, and a data SNAFU that first indicated it was has awakened the conspiratorial imagination of denialists. Furthermore, denialists will probably soon point out that the temperature averaged over January through October is cooler than in recent years. Never mind that this cold year is still warmer than the 30-year average.

The reality is that the natural and human influences on climate are interactive. It is incorrect to say that one or the other is the sole cause; natural variability modifies the human influence and vice versa. The difficulty with quantifying mechanisms of climate variability is that the precision with which this can be accomplished is much less than what is customarily reported in traditional laboratory sciences or engineering design experiments, because we cannot build a physical replica of the earth system and systematically alter the natural and human influences on its climate. The past does not contain exact analogues of the current situation, and our record of the past is much less refined than current observations, so it is difficult to assess whether the details of current changes resemble those of approximate analogues in the past. Regardless, the observational record is insufficient to isolate climate influences, since it is impossible to answer with high precision the question of what would have happened had the climate influence of interest been absent. At best, the observational record can be used to identify factors that are correlated. If it isn't possible to isolate a climate factor, it is impossible to say whether the variability is due solely to that factor, which is the error both alarmists and denialists have made.

The NOAA Climate Attribution Program is designing new techniques to ask questions like: "What were the causes of the relatively cold 2008 temperatures?" A preliminary analysis of climate model simulations that contain only climate changes from increasing greenhouse gases suggests the likelihood of a cool year like 2008 is about 10% in the early 2000s. So, this cold year, or even a string of cold years of this magnitude, is not inconsistent with climate change due to increased greenhouse gas concentrations. A more complete estimate of this likelihood may be somewhat different when the interaction between greenhouse gas increases and solar fluctuations is included.
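
The arithmetic behind a "10% likelihood" statement of this kind can be sketched by counting ensemble years at least as cool as the observed anomaly. The trend, variability, and observed value below are assumptions for illustration, not numbers from the NOAA analysis:

    import numpy as np

    # Estimate P(a year as cool as 2008) from an ensemble of simulations that
    # include greenhouse warming. All parameters are assumed for illustration.
    rng = np.random.default_rng(2)
    trend = 0.02                  # assumed warming, deg C per year
    sigma = 0.12                  # assumed year-to-year natural variability, deg C
    members, n_years = 50, 10     # ensemble members x years in the early 2000s
    sims = trend * np.arange(n_years) + sigma * rng.standard_normal((members, n_years))

    obs_cool = -0.05              # assumed anomaly of the "cool" observed year
    p = (sims <= obs_cool).mean() # fraction of simulated years at least that cool
    print(f"P(year as cool as observed) ~ {p:.0%}")
    # A probability near 10% means a cool year like 2008 is uncommon under
    # greenhouse warming, but not inconsistent with it.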

November 12, 2008

A young-lion/old-lion battle is brewing in the House of Representatives over climate change legislation. There are three drafts of climate legislation that have political heft behind them: Dingell-Boucher (which I link to below), Markey, and Boxer-Lieberman-Warner. The fight will be over how climate legislation will impose caps on greenhouse gas emissions, thereby forcing companies to incur the cost of reducing emissions, and whether the auto industry will have stricter emissions limits imposed on its cars. All of that isn't my area of expertise, though it is fascinating to read the proposals and cost estimates. What is of interest to me is the way in which climate science contributes to policy. All three draft bills implement a periodic report of national vulnerabilities to climate change. The Boxer-Lieberman-Warner bill assigns the task to the National Academy of Sciences, seemingly placing it in the role of the current US Climate Change Science Program. A more expansive Adaptation Program is proposed in the Dingell-Boucher bill, which I described in my November 6, 2008 posting. Markey builds on the Dingell-Boucher approach by adding funding for research through competitive grants from NSF and the establishment of Climate Change Centers for Excellence. I think the Markey bill is the way to go for the following reasons. (1) Research and development will need to be extensive if the United States is going to protect its natural resources and use them wisely to sustain a position of global economic leadership. (2) Centers for Excellence provide a cost-effective interface between developmental, unprofitable work and industry implementation, which greatly reduces the cost to industry since companies do not need to shell out the full cost of assembling, housing, and providing infrastructure for the development teams. (3) Competitive NSF grants will ensure research is focused on vulnerabilities in the United States. This is a natural complement to the proposed National Climate Change Adaptation Program in both the Dingell-Boucher and Markey bills, which depends on relevant, targeted scientific assessments. In contrast, the Boxer-Lieberman-Warner bill provides a mechanism only to summarize research done elsewhere that may or may not directly assess the key vulnerabilities in the United States. I can't say much about cap-and-trade approaches and markets, but I can say the draft legislation under which climate science has the best chance to make significant and meaningful contributions to policy is by far the Markey bill.

November 6, 2008

I stand corrected. Upon a more thorough reading of the discussion draft of the climate bill (big pdf file), I have found substantial plans to increase climate studies and coordinate among federal agencies to adapt to climate change. The legislation under discussion would create regional assessments of climate vulnerabilities for climate-sensitive components of nearly every program in government. Similar to the IPCC, the assessments would be periodic, occurring every four years, with the first possibly due as early as January 1, 2012. You might be able to guess from yesterday's post that I think this is a great idea, and one necessary for adapting intelligently if the United States is to remain a superpower.

November 5, 2008

Barack Obama is the President-elect. What does this mean for climate science? Besides the obvious point that a slew of conspiracy theories will emerge claiming the Democrats are cooking the climate data, I expect little of what happens in the short term will be attributable to Obama's presidency, though much may be made of it. Congress has already begun a draft of a climate bill. The details will now be guided by the new balance of power in Congress and the willingness of Obama to align with Congress. If Obama sticks to his campaign rhetoric, electric and hybrid vehicles stand to be a major component, as opposition in Congress to those technologies appears reduced. We can expect a battle fronted by McCain to emphasize nuclear power. It isn't clear to what extent an increase in offshore drilling will be permitted, since the Democratic majority in the Senate and House may block it even though Obama could support it. And I have a hard time believing there is sufficient support for a cap-and-trade system for greenhouse gas emissions, unless a "filibuster-proof" Congress is elected. As you can see, however, none of the big issues relate directly to scientific analysis or the transition of scientific results into practical use. With the next IPCC assessment due in 2013, I believe it is an opportune moment to include a chunk of money for scientific work that, over the next 4-5 years, would be directed at generating data and analysis techniques to relate climate variability and change to engineering, financial markets (think insurance for hurricane and flood damage, and commodities markets), and energy technologies, so that the United States can maintain its position as a strong and attractive economy.


November 1, 2008

NASA has a large number of climate and societally relevant images from their satellites, and I think they could provide useful information for evaluating an oft-used and recently reiterated argument of "global warming skeptics" that centers on sunspots. The argument riffs on the fact that the sun's magnetic field, which shields the earth from charged particles arriving from space, is less expansive when sunspots are few. The idea is that these particles may penetrate the atmosphere when the magnetic field is thin and may cause atmospheric particles to coagulate so that water condenses and forms clouds. The cloud field then regulates the global temperature. There are two ways in which this mechanism needs verification or clarification. First, the sun provides all but a small fraction of the energy intercepted by the earth, and that energy fluctuates by about 0.1% over sunspot cycles. It is necessary to identify the relative importance for cloud formation of cosmic particles versus fluctuations in solar energy, since both are associated with sunspot cycles. Second, it is unusual for cloud formation to be inhibited by a lack of condensation nuclei. Usually, the culprit is a lack of water vapor. This is why the condensation trails produced by airplanes have garnered so much attention: those clouds would not exist if airplanes hadn't pumped water vapor into the upper atmosphere. This is where I think NASA satellites could help evaluate the cosmic particle mechanism. If cosmic particle proponents could determine whether unique cloud features would result from this mechanism (such as a unique cloud particle size, reflectance property, or electrical charge), NASA satellites could be used to look for these characteristics and determine whether clouds with the characteristics have formed in regions where they otherwise would not be expected to form.