Climate Change 101: Dataset Reanalysis

Reanalysis mathematically blends the products of multiple observing systems by assimilating them into a single, consistent record. The process includes algorithms for quality control of the raw satellite and in-situ data, interpolation schemes in space and time, and a global forecasting model. Crucially, the assimilation system is "frozen" for the entire record, which avoids the artificial trends that routine system upgrades introduce into operational analyses.
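
To make the blending step concrete, here is a minimal sketch (not any center's actual code) of the scalar analysis equation that underlies most assimilation schemes. Real systems solve this in millions of dimensions with full error covariance matrices; the function name and numbers below are invented purely for illustration.

```python
def analysis_update(x_b: float, y: float, sigma_b: float, sigma_o: float) -> float:
    """Blend a model forecast (background) with an observation,
    weighting each by its error variance."""
    # Gain in [0, 1]: how much to trust the observation relative to the model.
    k = sigma_b**2 / (sigma_b**2 + sigma_o**2)
    return x_b + k * (y - x_b)

# Example: the model forecasts 15.0 C, a radiosonde reports 16.2 C.
# The observation error (0.5 C) is smaller than the model error (1.0 C),
# so the analysis lands closer to the observation.
x_a = analysis_update(x_b=15.0, y=16.2, sigma_b=1.0, sigma_o=0.5)
print(f"analysis temperature: {x_a:.2f} C")  # 15.96 C
```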

After all, more is better, right? We cannot rely on any one observing system alone.

A reanalysis should be consistent both with the observations assimilated into it and with the forecasting model used to produce it.

A reanalysis dataset has many strengths:

  1. Many have global coverage
  2. Consistent spatial and temporal resolution
  3. A large suite of meteorological variables, drawn from many observing platforms (radiosonde, satellite, buoy, etc.)
  4. Millions of incorporated observations, far more than any single observing system could record on its own
  5. Output that can serve as boundary conditions for global and regional climate models

As with all data, there are also limitations:

  1. Errors in the input observations are inherited by the reanalysis
  2. Changes in the number of observations over time can introduce spurious trends; in 1979, for example, the arrival of satellite data greatly increased the volume of available observations (see the sketch below)
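
To see how a changing observing system can manufacture a trend, here is a toy, purely synthetic illustration: the "true" temperature series is flat, but the network's bias shifts when satellite data arrive in 1979, and a naive linear fit over the whole record reports warming that is not real. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1958, 2019)

# Synthetic "truth": no trend at all, just interannual noise.
truth = 14.0 + rng.normal(0.0, 0.1, size=years.size)

# Pretend the pre-satellite network read slightly cold; after 1979 the
# bias disappears. (Values are invented purely for illustration.)
bias = np.where(years < 1979, -0.3, 0.0)
observed = truth + bias

# A naive linear fit over the whole record sees a warming "trend"
# that is entirely an artifact of the observing-system change.
slope_per_decade = np.polyfit(years, observed, 1)[0] * 10
print(f"apparent trend: {slope_per_decade:+.3f} C/decade (truth has none)")
```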

The first reanalysis dataset came out of the US National Centers for Environmental Prediction and the National Center for Atmospheric Research (NCEP/NCAR). Its data dates back to 1948, with measurements taken four times daily at multiple levels of the atmosphere.
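
In practice, reanalysis output is distributed as gridded netCDF files that tools like xarray can read directly. Here is a minimal sketch; the filename is a placeholder for whichever NCEP/NCAR file you download, and the variable and coordinate names (air, lat, lon) follow that dataset's conventions but should be checked against your own copy.

```python
import numpy as np
import xarray as xr

# Placeholder filename for a local copy of an NCEP/NCAR reanalysis file;
# "air" is that dataset's name for air temperature, but verify against
# the file you actually have.
ds = xr.open_dataset("air.mon.mean.nc")

# Area-weighted global mean: grid cells shrink toward the poles, so
# weight each latitude band by the cosine of its latitude.
weights = np.cos(np.deg2rad(ds["lat"]))
global_mean = ds["air"].weighted(weights).mean(dim=["lat", "lon"])

print(global_mean)
```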

Since then, many more have followed, including:

  • 20th Century Reanalysis, covering 1815-2014
  • ERA-Interim, covering 1979 until its discontinuation in 2019
  • North American Regional Reanalysis (NARR), covering 1979 to present
