Climategate: Justify Your Method of Homogenization

Science fiction author and multiple-PhD holder Jerry Pournelle has some questions for the CRU:

Data and Climate Science

First, if you have any interest in the climate debate, you must read the careful analysis of the data from Darwin, Australia that we referred to last evening. I have studied this in some detail since Joanne recommended it, and it is important: not because it is a “smoking gun” demonstrating evil on the part of the climate analyzers, but because it raises questions that must be answered before the world spends trillions on remedies to climate change.

The analysis shows that the raw data show one trend; the adjusted, “homogenized” data that were fed into the models used to predict climate change show quite another. Now this may be a very reasonable adjustment, but the adjustment has to be open, aboveboard, and justified. So far we have not seen any such explanation, and it has all been done in-house, not openly.
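The comparison Pournelle describes is straightforward to sketch. Below is a minimal Python example that fits an ordinary least-squares trend to a raw series and to the corresponding adjusted series and reports the difference; the file names and CSV layout (one year and one annual mean temperature per row) are hypothetical placeholders, not any archive’s actual format.

```python
# Minimal sketch: compare the linear trend in a raw temperature series
# against the adjusted ("homogenized") series for the same station.
# File names and layout below are hypothetical placeholders.
import csv

def load_series(path):
    """Read year,temperature rows into parallel lists of floats."""
    years, temps = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            years.append(float(row[0]))
            temps.append(float(row[1]))
    return years, temps

def linear_trend(years, temps):
    """Ordinary least-squares slope, in degrees per year."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

if __name__ == "__main__":
    raw_years, raw_temps = load_series("darwin_raw.csv")        # hypothetical file
    adj_years, adj_temps = load_series("darwin_adjusted.csv")   # hypothetical file
    raw_slope = linear_trend(raw_years, raw_temps)
    adj_slope = linear_trend(adj_years, adj_temps)
    print(f"raw trend:      {raw_slope * 100:+.2f} C per century")
    print(f"adjusted trend: {adj_slope * 100:+.2f} C per century")
    print(f"difference:     {(adj_slope - raw_slope) * 100:+.2f} C per century")
```

If the two slopes differ substantially, that difference is exactly the adjustment that needs to be documented and justified.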

So: the temperature record for about 5% of the Earth’s land surface rests on 50 thermometers (actually it’s more like 30, but call it 50). Since .05X = 50, we have about 1,000 thermometers determining the Earth’s land temperature. Since the land is 30% of the Earth’s surface, .30X = 1,000, and we have about 3,333 thermometers determining the temperature of the entire Earth. (I doubt we have that many, but it will do for this.)

If each reports hourly, that means 3,333 data points every hour, or about 29,200,000 data points a year. At 8 bytes per data point we’re talking about a quarter of a gigabyte of data per year, meaning that everyone reading this has the capacity to store that much data, and probably the computing power to do daily averages and print out trend curves.

It’s too late to do that for past years, but I propose that, given the enormous economic importance of climate trends, the IPCC should publish all the raw data: uncorrected, not homogenized, just the numbers you’d get if you went out on the porch and read the thermometer (or dropped your thermocouple over the side of a boat, or whatever it is they do to get the numbers); and also publish the corresponding “corrected” or “homogenized” number that is fed into the models. That’s publishing well under a gigabyte of data per year, a megabyte or two a day. Let everyone on Earth look at the data and do things like calculate the differences between the raw and corrected values. We can all look at the trends and the differences.
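For anyone who wants to run the arithmetic above, here is a minimal Python sketch that reproduces the estimate; the station counts, hourly reporting, and 8-byte readings are the rough assumptions stated in the post, not figures from any real observing network.

```python
# Back-of-envelope check of the data-volume arithmetic above.
# All inputs are the post's rough assumptions, not real network figures.

LAND_FRACTION = 0.30          # land share of the Earth's surface
SAMPLE_SHARE = 0.05           # fraction of the land covered by the ~50 stations
STATIONS_IN_SAMPLE = 50
READINGS_PER_DAY = 24         # assume hourly reports
BYTES_PER_READING = 8

land_stations = STATIONS_IN_SAMPLE / SAMPLE_SHARE             # ~1,000
global_stations = land_stations / LAND_FRACTION               # ~3,333
readings_per_year = global_stations * READINGS_PER_DAY * 365  # ~29.2 million
raw_bytes_per_year = readings_per_year * BYTES_PER_READING

print(f"stations worldwide:      {global_stations:,.0f}")
print(f"readings per year:       {readings_per_year:,.0f}")
print(f"raw data per year:       {raw_bytes_per_year / 1e6:,.0f} MB")
# Publishing raw plus corrected values roughly doubles the volume,
# still well under a gigabyte per year.
print(f"raw + corrected per day: {2 * raw_bytes_per_year / 365 / 1e6:,.1f} MB")
```

Even doubling or tripling the assumed record size leaves the whole dataset small enough to fit on a consumer hard drive, which is the point of the exercise.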

Given the trillions at stake, the cost of doing this is trivial. I doubt that it will be done, but shouldn’t it be?

Indeed it should. And it should be done very openly.

Hat Tip: Glenn “Instapundit” Reynolds
