Search Results
1–10 of 34 items for Author or Editor: Russell S. Vose
Abstract
Set cover models are used to develop two reference station networks that can serve as near-term substitutes (as well as long-term backups) for the recently established Climate Reference Network (CRN) in the United States. The first network contains 135 stations distributed in a relatively uniform fashion in order to match the recommended spatial density for CRN. The second network contains 157 well-distributed stations that are generally not in urban areas in order to minimize the impact of future changes in land use. Both networks accurately reproduce the historical temperature and precipitation variations of the twentieth century.
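The station-selection problem described above can be illustrated with a minimal greedy set cover sketch. The stations, grid cells, and coverage sets below are invented for illustration; the paper's actual optimization formulation and solver may differ.

```python
# Greedy set cover sketch: choose a small set of stations whose coverage
# "cells" together span a study region. Stations and cells here are
# hypothetical, not the paper's data.

def greedy_set_cover(universe, coverage):
    """universe: set of grid cells to cover.
    coverage: dict mapping station id -> set of cells it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the station covering the most still-uncovered cells.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:  # remaining cells cannot be covered
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

cells = {1, 2, 3, 4, 5}
stations = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}, "D": {1, 5}}
picked, leftover = greedy_set_cover(cells, stations)
```

The greedy heuristic is a standard approximation for set cover; a network design study would add constraints such as spatial uniformity and avoidance of urban land use.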
Abstract
The Global Historical Climatology Network version 2 temperature database was released in May 1997. This century-scale dataset consists of monthly surface observations from ~7000 stations from around the world. This archive breaks considerable new ground in the field of global climate databases. The enhancements include 1) data for additional stations to improve regional-scale analyses, particularly in previously data-sparse areas; 2) the addition of maximum–minimum temperature data to provide climate information not available in mean temperature data alone; 3) detailed assessments of data quality to increase the confidence in research results; 4) rigorous and objective homogeneity adjustments to decrease the effect of nonclimatic factors on the time series; 5) detailed metadata (e.g., population, vegetation, topography) that allow more detailed analyses to be conducted; and 6) an infrastructure for updating the archive at regular intervals so that current climatic conditions can constantly be put into historical perspective. This paper describes these enhancements in detail.
Abstract
A procedure is described that provides guidance in determining the number of stations required in a climate observing system deployed to capture temporal variability in the spatial mean of a climate parameter. The method entails reducing the density of an existing station network in a step-by-step fashion and quantifying subnetwork performance at each iteration. Under the assumption that the full network for the study area provides a reasonable estimate of the true spatial mean, this degradation process can be used to quantify the relationship between station density and network performance. The result is a systematic “cost–benefit” relationship that can be used in conjunction with practical constraints to determine the number of stations to deploy.
The approach is demonstrated using temperature and precipitation anomaly data from 4012 stations in the conterminous United States over the period 1971–2000. Results indicate that a U.S. climate observing system should consist of at least 25 quasi-uniformly distributed stations in order to reproduce interannual variability in temperature and precipitation because gains in the calculated performance measures begin to level off with higher station numbers. If trend detection is a high priority, then a higher density network of 135 evenly spaced stations is recommended. Through an analysis of long-term observations from the U.S. Historical Climatology Network, the 135-station solution is shown to exceed the climate monitoring goals of the U.S. Climate Reference Network.
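The degradation procedure described above can be sketched in a few lines: repeatedly thin a synthetic station network and measure how well each subnetwork's mean anomaly series tracks the full-network mean. The anomaly data below are synthetic and the single correlation metric is a simplification of the paper's performance measures.

```python
# Network-degradation sketch: thin a station network step by step and
# score each subnetwork against the full-network mean anomaly series.
# All data are synthetic; real studies use observed anomalies and
# additional metrics (e.g., trend error).
import random
import statistics

random.seed(1)
n_stations, n_years = 100, 30
# Synthetic anomalies: a shared regional signal plus station-level noise.
signal = [random.gauss(0, 1) for _ in range(n_years)]
data = [[s + random.gauss(0, 0.5) for s in signal] for _ in range(n_stations)]

def network_mean(rows):
    return [statistics.mean(col) for col in zip(*rows)]

def correlation(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

full_mean = network_mean(data)
results = {}
for density in (50, 25, 10, 5):
    sub = random.sample(data, density)
    results[density] = correlation(network_mean(sub), full_mean)
```

Plotting `results` against station count yields the "cost–benefit" curve: performance typically levels off well before the full network size, which is the basis for the 25- and 135-station recommendations.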
No Abstract available.
Abstract
Studies that use long-term temperature records to assess the possibility of global warming have led to conflicting results. We suggest that a time-series evaluation of mean annual temperatures is not sufficiently robust to determine the existence of long-term warming. We propose an air mass-based synoptic climatological approach instead, as local changes within particular air masses may have been obscured by the gross scale of the temperature time-series evaluations used in previous studies of this type. An automated synoptic index was constructed for the winter months at four western North American Arctic locations to determine whether the frequency of occurrence of the coldest and mildest air masses has changed and whether the physical character of these air masses has shown signs of modification over the past 40 years. It appears that the frequencies of the majority of the coldest air masses have tended to decrease, while those of the warmest air masses have increased. In addition, the very coldest air masses at each site have warmed by 1°C to almost 4°C over the same time interval. A technique is suggested to determine whether these changes are possibly attributable to anthropogenic influences.
Abstract
The evaluation strategies outlined in this paper constitute a set of tools beneficial to the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure’s effectiveness at detecting true errors in the data. Rather, as illustrated by way of an “extremes check” for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but, when applied repeatedly throughout the development process, it also aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approach described here constitutes a mechanism for reassessing system performance whenever revisions are made following initial development.
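The workflow above can be sketched as an "extremes check" followed by a random sample of flags for manual review. The precipitation values and threshold below are made up for illustration; real thresholds would be station- and season-specific climatological limits.

```python
# "Extremes check" sketch: flag daily precipitation totals above a
# climatological threshold, then draw a random sample of flags for
# manual inspection. Data and threshold are illustrative only.
import random

def extremes_check(values, threshold):
    """Return indices of values exceeding the threshold."""
    return [i for i, v in enumerate(values) if v > threshold]

def review_sample(flags, k, seed=0):
    """Draw up to k flagged indices for manual review."""
    rng = random.Random(seed)
    return rng.sample(flags, min(k, len(flags)))

precip = [0.0, 2.5, 0.0, 310.0, 12.1, 0.0, 999.9, 4.4]  # mm/day
flags = extremes_check(precip, threshold=250.0)
sample = review_sample(flags, k=2)
```

Manually classifying each sampled flag as a true error or a valid extreme, and repeating this as thresholds are tuned, is the evaluation loop the paper advocates in place of target flag rates alone.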
Abstract
This paper presents a description of the fully automated quality-assurance (QA) procedures that are being applied to temperatures in the Integrated Global Radiosonde Archive (IGRA). Because these data are routinely used for monitoring variations in tropospheric temperature, it is of critical importance that the system be able to detect as many errors as possible without falsely identifying true meteorological events as erroneous. Three steps were taken to achieve such robust performance. First, 14 tests for excessive persistence, climatological outliers, and vertical and temporal inconsistencies were developed and arranged into a deliberate sequence so as to render the system capable of detecting a variety of data errors. Second, manual review of random samples of flagged values was used to set the “thresholds” for each individual check so as to minimize the number of valid values that are mistakenly identified as errors. The performance of the system as a whole was also assessed through manual inspection of random samples of the quality-assured data. As a result of these efforts, the IGRA temperature QA procedures effectively remove the grossest errors while maintaining a false-positive rate of approximately 10%.
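The deliberate sequencing of checks can be illustrated with a toy pipeline in which values flagged by an earlier test are not re-examined by later ones. The two checks and their thresholds below are invented stand-ins for IGRA's 14 tests.

```python
# Sequenced QA pipeline sketch: checks run in a fixed order, and a value
# flagged by an earlier test is skipped by later tests. The toy rules
# and thresholds below are illustrative, not IGRA's actual tests.

def check_persistence(series):
    """Flag runs of 4+ identical consecutive values (toy rule)."""
    flags = set()
    for i in range(len(series) - 3):
        if len({series[i + j] for j in range(4)}) == 1:
            flags.update(range(i, i + 4))
    return flags

def check_outlier(series, lo=-90.0, hi=60.0):
    """Flag temperatures outside a plausible range (toy limits, deg C)."""
    return {i for i, v in enumerate(series) if not lo <= v <= hi}

def run_pipeline(series, checks):
    flagged = set()
    for check in checks:
        # Later checks only act on values not already flagged.
        new = {i for i in check(series) if i not in flagged}
        flagged |= new
    return flagged

temps = [15.0, 15.0, 15.0, 15.0, 120.0, 14.2, -95.0]
flagged = run_pipeline(temps, [check_persistence, check_outlier])
```

Ordering matters because an early, high-precision check can remove gross errors that would otherwise distort the statistics later checks depend on, which is why the paper pairs the sequence design with manual review of flagged samples.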
Abstract
The Integrated Global Radiosonde Archive (IGRA) is a collection of historical and near-real-time radiosonde and pilot balloon observations from around the globe. Consisting of a foundational dataset of individual soundings, a set of sounding-derived parameters, and monthly means, the collection is maintained and distributed by the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information (NCEI). It has been used in a variety of applications, including reanalysis projects, assessments of tropospheric and stratospheric temperature and moisture trends, a wide range of studies of atmospheric processes and structures, and as validation of observations from other observing platforms. In 2016, NCEI released version 2 of the dataset, IGRA 2, which incorporates data from a considerably greater number of data sources, thus increasing the data volume by 30%, extending the data back in time to as early as 1905, and improving the spatial coverage. To create IGRA 2, 40 data sources were converted into a common data format and merged into one coherent dataset using a newly designed suite of algorithms. Then, an overhauled version of the IGRA 1 quality-assurance system was applied to the integrated data. Last, monthly means and sounding-by-sounding moisture and stability parameters were derived from the new dataset. All of these components are updated on a regular basis and made available for download free of charge on the NCEI website.
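The merge step described above can be sketched as follows: records from several sources, already converted to a common format, are reduced to one sounding per station, date, and hour, with a source priority order breaking ties. The source names, priority order, and record layout are invented for illustration; IGRA 2's actual merge algorithms are more elaborate.

```python
# Merge sketch: keep one sounding per (station, date, hour), preferring
# sources earlier in a priority list. Source names, the priority order,
# and the record layout are hypothetical.

PRIORITY = ["source_a", "source_b", "source_c"]

def merge_sources(records):
    """records: list of dicts with keys source, station, date, hour, data."""
    rank = {name: i for i, name in enumerate(PRIORITY)}
    merged = {}
    for rec in records:
        key = (rec["station"], rec["date"], rec["hour"])
        if key not in merged or rank[rec["source"]] < rank[merged[key]["source"]]:
            merged[key] = rec
    return sorted(merged.values(),
                  key=lambda r: (r["station"], r["date"], r["hour"]))

records = [
    {"source": "source_b", "station": "ST001", "date": "2016-01-01", "hour": 0, "data": None},
    {"source": "source_a", "station": "ST001", "date": "2016-01-01", "hour": 0, "data": None},
    {"source": "source_c", "station": "ST001", "date": "2016-01-01", "hour": 12, "data": None},
]
merged = merge_sources(records)
```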
Abstract
This paper provides a general description of the Integrated Global Radiosonde Archive (IGRA), a new radiosonde dataset from the National Climatic Data Center (NCDC). IGRA consists of radiosonde and pilot balloon observations at more than 1500 globally distributed stations with varying periods of record, many of which extend from the 1960s to present. Observations include pressure, temperature, geopotential height, dewpoint depression, wind direction, and wind speed at standard, surface, tropopause, and significant levels.
IGRA contains quality-assured data from 11 different sources. Rigorous procedures are employed to ensure proper station identification, eliminate duplicate levels within soundings, and select one sounding for every station, date, and time. The quality assurance algorithms check for format problems, physically implausible values, internal inconsistencies among variables, runs of values across soundings and levels, climatological outliers, and temporal and vertical inconsistencies in temperature. The performance of the various checks was evaluated by careful inspection of selected soundings and time series.
In its final form, IGRA is the largest and most comprehensive dataset of quality-assured radiosonde observations freely available. Its temporal and spatial coverage is most complete over the United States, western Europe, Russia, and Australia. The vertical resolution and extent of soundings improve significantly over time, with nearly three-quarters of all soundings reaching up to at least 100 hPa by 2003. IGRA data are updated on a daily basis and are available online from NCDC as both individual soundings and monthly means.
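One of the rigorous procedures mentioned above, duplicate-level elimination, can be sketched as keeping a single record per pressure level within a sounding, preferring the more complete one. The record layout, missing-value sentinel, and selection rule are illustrative assumptions, not IGRA's documented algorithm.

```python
# Duplicate-level elimination sketch: within one sounding, keep a single
# record per pressure level, preferring the record with fewer missing
# values. Layout and the -9999 sentinel are hypothetical.

MISSING = -9999

def completeness(record):
    """Count non-missing fields after pressure: (temp, height, wind)."""
    return sum(1 for v in record[1:] if v != MISSING)

def dedupe_levels(sounding):
    """sounding: list of (pressure_hPa, temp, height, wind) tuples."""
    best = {}
    for rec in sounding:
        p = rec[0]
        if p not in best or completeness(rec) > completeness(best[p]):
            best[p] = rec
    # Return levels ordered from the surface (high pressure) upward.
    return sorted(best.values(), key=lambda r: -r[0])

raw = [
    (1000, 25.1, 110, MISSING),
    (1000, 25.1, 110, 5),        # duplicate level, more complete
    (850, 12.0, 1457, 8),
    (500, -20.3, 5720, MISSING),
]
levels = dedupe_levels(raw)
```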