Reanalysis mathematically blends the products of multiple observing systems and assimilates them into a single, coherent dataset. This process includes algorithms for quality control of the raw satellite data, space and time interpolation schemes, and a global operational forecasting model. The assimilation system then works to remove artificial trends introduced by the updates.
After all, more is better, right? We cannot rely on a single observing system alone.
The resulting reanalysis should be consistent both with the available observations and with the forecasting model used to produce it.
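The core of that blending step is a weighted update: the model's first guess is nudged toward each observation, with weights set by how much we trust the model versus the instrument. The sketch below is a toy one-dimensional version of that idea (an optimal-interpolation/Kalman-style update); the grid, covariances, and temperature values are all made-up illustration, not any operational system's code.

```python
import numpy as np

# Hypothetical 1-D "temperature field": the model's first guess at 4 grid points (K)
background = np.array([280.0, 281.5, 283.0, 284.5])
B = np.eye(4) * 1.0       # assumed background-error covariance (model uncertainty)
obs = np.array([282.0])   # a single observation (K)
R = np.array([[0.25]])    # assumed observation-error covariance (instrument uncertainty)
H = np.array([[0.0, 0.0, 1.0, 0.0]])  # observation operator: the obs sits at grid point 2

# Gain matrix K = B H^T (H B H^T + R)^-1 balances model vs. observation trust
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Analysis = background + gain * (observation - background projected to obs space)
analysis = background + (K @ (obs - H @ background)).ravel()
```

Because the observation error (0.25) is smaller than the background error (1.0), the analysis at grid point 2 moves most of the way from the model's 283.0 toward the observed 282.0; with a diagonal B, the unobserved points are left unchanged. Real systems use flow-dependent covariances so a single observation also corrects nearby points.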
A reanalysis data set has many strengths:
- Many have global coverage
- Consistent spatial and temporal resolution
- A large suite of meteorological variables, drawn from many observation types (radiosonde, satellite, buoy, etc.)
- Incorporates millions of observations that would be impossible for any single observing system to collect
- Can be used as boundary conditions for global and regional climate models
As with all data, there are also limitations:
- Errors in the data are inherited from the original observations
- Changes in the number of observations over time can introduce artificial trends. In 1979, for example, the arrival of satellite data greatly increased the number of available observations.
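To see how a changing observing network can manufacture a trend, consider a toy experiment: the true global-mean temperature is held constant, but the pre-1979 network is sparse and slightly cold-biased, while the satellite era adds many unbiased observations. All numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 15.0  # true, constant global-mean temperature (deg C): no real trend

years = np.arange(1960, 2000)
analysis = np.empty(years.size)
for i, yr in enumerate(years):
    if yr < 1979:
        # sparse pre-satellite network: 20 stations with an assumed 0.5 deg cold bias
        obs = truth - 0.5 + rng.normal(0.0, 0.3, size=20)
    else:
        # satellite era: far more observations, assumed unbiased
        obs = truth + rng.normal(0.0, 0.3, size=2000)
    analysis[i] = obs.mean()

# Fitting a linear trend to the "analysis" yields spurious warming,
# even though the underlying truth never changed
trend = np.polyfit(years, analysis, 1)[0]  # deg C per year
```

The fitted slope is positive purely because the observing system changed in 1979, which is exactly why reanalysis trends spanning major network transitions must be interpreted with care.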
The first reanalysis dataset came out of the US National Centers for Environmental Prediction and the National Center for Atmospheric Research (NCEP/NCAR). Its data date back to 1948, with measurements taken four times daily at multiple levels in the atmosphere.
Since then, many more have followed including:
- 20th Century Reanalysis from 1815-2014
- ERA-Interim from 1979-~2018
- North American Regional Reanalysis (NARR) from 1979 to present