Search Results
Showing 1–10 of 22 items for Author or Editor: Xungang Yin
Abstract
This paper presents a water balance model for Lake Victoria that can be inverted to estimate annual rainfall over the lake. The model is calibrated using a fixed value of evaporation and regression expressions for inflow, discharge, and rainfall. Rainfall totals at stations in the catchment are used to estimate over-lake rainfall, applying a regression between catchment and over-lake rainfall derived from satellite data. The inflow regression is validated using a cross-validation technique applied to inflow estimates for the years 1956–78, and the discharge regression is validated using discharge data for the years 1901–55. The model is first written in autoregressive (AR) form for the lake-level term. Model predictions of lake level are verified by comparing them with measured lake levels for the period 1931–94; the model is initialized with the end-of-year lake level for 1930 and then driven only by over-lake rainfall as external input. The predicted lake levels agree with the measured levels, with a correlation of 0.98, confirming that fluctuations of Lake Victoria are driven predominantly by rainfall. The model is then “inverted” so that the current year's over-lake rainfall is expressed as a function of the two lake-level terms: if the beginning and ending lake levels in a year are known, the over-lake rainfall in that year follows directly. Applying the inverse model to the measured lake levels for 1899–1994, over-lake annual rainfall is estimated for 1900–94. A comparison with over-lake rainfall for the period 1931–94 gives a root-mean-square error of 98 mm yr−1, corresponding to 6% of the over-lake annual mean rainfall. The model is also compared with the previous water balance model, which can be employed only for multiyear mean rainfall estimates. The two models complement each other: the current model can calculate annual rainfall, while the previous model still provides better estimates of multiyear means.
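The core of the approach can be pictured with a minimal sketch: the AR form predicts this year's end-of-year lake level from last year's level and this year's over-lake rainfall, and solving the same equation for rainfall gives the inverse model. The coefficients below are illustrative placeholders, not the paper's calibrated values.

```python
# Minimal sketch of the forward AR form and its inversion.
# Coefficients a, b, c are hypothetical placeholders, not the
# published calibration for Lake Victoria.

def predict_level(prev_level, rainfall, a=0.9, b=0.001, c=0.05):
    """Forward form: level_t = a*level_{t-1} + b*rain_t + c."""
    return a * prev_level + b * rainfall + c

def invert_rainfall(prev_level, level, a=0.9, b=0.001, c=0.05):
    """Inverse form: solve the same equation for this year's rainfall."""
    return (level - a * prev_level - c) / b
```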
Abstract
The two monthly precipitation products of the Global Precipitation Climatology Project (GPCP) and the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) are compared over a 23-yr period, January 1979–December 2001. For the long-term mean, major precipitation patterns are clearly demonstrated by both products, but there are differences in the pattern magnitudes. In the tropical ocean the CMAP is higher than the GPCP, but this is reversed in the high-latitude ocean. The GPCP–CMAP spatial correlation is generally higher over land than over the ocean. The correlation between the global-mean oceanic GPCP and CMAP is notably low, very likely because the input data of the two products have much less in common over the ocean; in particular, the use of atoll data by the CMAP is disputable. The decreasing trend in the CMAP oceanic precipitation is found to be an artifact of input data changes and atoll sampling error. In general, overocean precipitation represented by the GPCP is more reasonable; over land the two products are close, but the different merging algorithms of the GPCP and the CMAP can sometimes produce substantial discrepancies in sensitive areas such as equatorial West Africa. EOF analysis shows that the GPCP and the CMAP are similar in 6 of the first 10 modes, and the first 2 leading modes (ENSO patterns) of the GPCP are nearly identical to their counterparts in the CMAP. Input data changes [e.g., January 1986 for the Geostationary Operational Environmental Satellite (GOES) precipitation index (GPI), July 1987 for the Special Sensor Microwave Imager (SSM/I), May 1994 for the Microwave Sounding Unit (MSU), and January 1996 for atolls] have implications for the behavior of the two datasets. Several abrupt changes identified in the statistics of the two datasets, including changes in overocean precipitation, the spatial correlation time series, and some of the EOF principal components, can be related to one or more input data changes.
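As a rough illustration of the diagnostics described above (not the authors' exact code), the EOF modes can be obtained from an SVD of the anomaly matrix, and the agreement between the two products' maps can be summarized with a pattern correlation. The array shapes are assumptions.

```python
import numpy as np

def leading_eofs(anom, n_modes=10):
    """EOFs of a (time x gridpoint) anomaly matrix via SVD."""
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]   # principal component time series
    eofs = vt[:n_modes]                  # spatial patterns
    return pcs, eofs

def spatial_correlation(map_a, map_b):
    """Pattern correlation between two precipitation maps."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
```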
Abstract
The Integrated Global Radiosonde Archive (IGRA) is a collection of historical and near-real-time radiosonde and pilot balloon observations from around the globe. Consisting of a foundational dataset of individual soundings, a set of sounding-derived parameters, and monthly means, the collection is maintained and distributed by the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information (NCEI). It has been used in a variety of applications, including reanalysis projects, assessments of tropospheric and stratospheric temperature and moisture trends, a wide range of studies of atmospheric processes and structures, and the validation of observations from other observing platforms. In 2016, NCEI released version 2 of the dataset, IGRA 2, which incorporates data from a considerably greater number of data sources, thus increasing the data volume by 30%, extending the data back in time to as early as 1905, and improving the spatial coverage. To create IGRA 2, 40 data sources were converted into a common data format and merged into one coherent dataset using a newly designed suite of algorithms. Then, an overhauled version of the IGRA 1 quality-assurance system was applied to the integrated data. Last, monthly means and sounding-by-sounding moisture and stability parameters were derived from the new dataset. All of these components are updated on a regular basis and made available for download free of charge on the NCEI website.
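The merge step can be pictured with a deliberately simplified sketch; IGRA 2's actual suite of algorithms is far more involved. The idea: records from prioritized sources are pooled, and one sounding is kept per station and launch time, preferring the higher-priority source.

```python
# Simplified illustration only, not IGRA 2's actual merge algorithm.
# Each source maps (station_id, launch_time) -> sounding record, and the
# list is ordered from highest to lowest priority.

def merge_soundings(sources):
    merged = {}
    for source in sources:
        for key, sounding in source.items():
            merged.setdefault(key, sounding)  # keep highest-priority copy
    return merged
```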
Abstract
Previous research has shown that the 1877/78 El Niño resulted in great famine events around the world. However, the strength and statistical significance of this El Niño event have not been fully addressed, largely because of the lack of data. We take a closer look at the data using an ensemble analysis of the Extended Reconstructed Sea Surface Temperature version 5 (ERSSTv5). The ERSSTv5 standard run indicates a strong El Niño event with a peak monthly value of the Niño-3 index of 3.5°C during 1877/78, stronger than those during 1982/83, 1997/98, and 2015/16. However, an analysis of the ERSSTv5 ensemble runs indicates that the strength and significance (uncertainty estimates) depend on the construction of the ensembles. A 1000-member ensemble analysis shows that the ensemble-mean Niño-3 index has a much weaker peak of 1.8°C, and its uncertainty is much larger during 1877/78 (2.8°C) than during 1982/83 (0.3°C), 1997/98 (0.2°C), and 2015/16 (0.1°C). Further, the large uncertainty during 1877/78 is associated with the selection of a short (1-month) raw-data filtering period and a large (20%) acceptance criterion for empirical orthogonal teleconnection modes in the ERSSTv5 reconstruction. By adjusting these two parameters, the uncertainty during 1877/78 decreases to 0.5°C, while the peak monthly value of the Niño-3 index in the ensemble mean increases to 2.8°C, suggesting a strong and statistically significant 1877/78 El Niño event. The adjustment of the two parameters is validated by masking the modern observations of 1981–2017 to the sparse sampling of 1861–97. Based on the estimated uncertainties, the differences among the strengths of these four major El Niño events are not statistically significant.
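The ensemble statistics quoted above reduce to a simple computation, sketched below under assumed array names: event strength is the ensemble-mean Niño-3 index, and uncertainty is the spread across members.

```python
import numpy as np

def ensemble_summary(nino3):
    """nino3: (n_members, n_months) Nino-3 index values from ensemble runs."""
    mean = nino3.mean(axis=0)            # ensemble-mean index
    spread = nino3.std(axis=0, ddof=1)   # member-to-member uncertainty
    return mean, spread

# Peak strength and uncertainty over an event window, e.g. 1877/78:
# mean, spread = ensemble_summary(nino3)
# peak, sigma = mean[window].max(), spread[window].max()
```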
Abstract
NOAA global surface temperature (NOAAGlobalTemp) is NOAA’s operational global surface temperature product, which has been widely used in Earth’s climate assessment and monitoring. To improve the spatial interpolation of monthly land surface air temperatures (LSATs) in NOAAGlobalTemp from 1850 to 2020, a three-layer artificial neural network (ANN) system was designed. The ANN system was trained by repeatedly selecting a random 90% of the LSATs from ERA5 (1950–2019) and validating with the remaining 10%. Validations show clear improvements of the ANN over the original empirical orthogonal teleconnection (EOT) method: the global spatial correlation coefficient (SCC) increases from 65% to 80%, and the global root-mean-square difference (RMSD) decreases from 0.99° to 0.57°C during 1850–2020. The improvements in SCCs and RMSDs are larger in the Southern Hemisphere than in the Northern Hemisphere, and are larger before the 1950s and where observations are sparse. The ANN system was finally fed observed LSATs, and its output over the global land surface was compared with that from the EOT method. The comparisons demonstrate similar improvements of the ANN over the EOT method: the global SCC increases from 78% to 89%, the global RMSD decreases from 0.93° to 0.68°C, and the LSAT variability quantified by the monthly standard deviation (STD) increases from 1.16° to 1.41°C during 1850–2020. While the SCC, RMSD, and STD at the monthly time scale have been improved, long-term trends remain largely unchanged because the low-frequency component of LSAT in the ANN is identical to that in the EOT approach.
Significance Statement
A spatial interpolation method based on an artificial neural network greatly improves the accuracy of land surface air temperature reconstruction, reducing the root-mean-square error and increasing spatial coherence and variability over the global land surface from 1850 to 2020.
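A minimal analogue of the training protocol described in the abstract, using synthetic stand-in data and scikit-learn's generic multilayer perceptron rather than the operational three-layer system, might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for (predictor, LSAT-anomaly) samples; in the paper
# these come from ERA5 (1950-2019).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=2000)

# Random 90% for training, 10% held out for validation, as described above.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.1, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
rmsd = np.sqrt(np.mean((net.predict(X_va) - y_va) ** 2))  # validation RMSD
```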
Abstract
The 1981–2010 “U.S. Climate Normals” released by the National Oceanic and Atmospheric Administration’s (NOAA) National Climatic Data Center include a suite of monthly, seasonal, and annual statistics that are based on precipitation, snowfall, and snow-depth measurements. This paper describes the procedures used to calculate the average totals, frequencies of occurrence, and percentiles that constitute these normals. All parameters were calculated from a single, state-of-the-art dataset of daily observations, taking care to produce normals that were as representative as possible of the full 1981–2010 period, even when the underlying data records were incomplete. In the resulting product, average precipitation totals are available at approximately 9300 stations across the United States and parts of the Caribbean Sea and Pacific Ocean islands. Snowfall and snow-depth statistics are provided for approximately 5300 of those stations, as compared with several hundred stations in the 1971–2000 normals. The 1981–2010 statistics exhibit the familiar climatological patterns across the contiguous United States. When compared with the same calculations for 1971–2000, the later period is characterized by a smaller number of days with snow on the ground and less total annual snowfall across much of the contiguous United States; wetter conditions over much of the Great Plains, Midwest, and northern California; and drier conditions over much of the Southeast and Pacific Northwest. These differences are a reflection of the removal of the 1970s and the addition of the 2000s to the 30-yr-normals period as part of this latest revision of the normals.
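As an illustration of the completeness handling (a simplified stand-in, not NCEI's exact procedure), a monthly precipitation normal can be taken as the 30-yr average of monthly totals, with a month contributing only when enough of its daily reports are present; the 90% threshold below is an assumption.

```python
import numpy as np

def monthly_normal(totals, days_reported, days_in_month, min_frac=0.9):
    """Average the monthly totals over 1981-2010, keeping only months
    whose daily record is at least min_frac complete."""
    complete = days_reported / days_in_month >= min_frac
    return totals[complete].mean() if complete.any() else np.nan
```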
The 1981–2010 U.S. Climate Normals released by the National Oceanic and Atmospheric Administration's (NOAA) National Climatic Data Center (NCDC) include a suite of descriptive statistics based on hourly observations. For each hour and day of the year, statistics of temperature, dew point, mean sea level pressure, wind, clouds, heat index, wind chill, and heating and cooling degree hours are provided as 30-year averages, frequencies of occurrence, and percentiles. These hourly normals are available for 262 locations, primarily major airports, across the United States and its Pacific territories. We encourage use of these products specifically for examining the diurnal cycle of a particular variable and how that cycle shifts over the course of the year.
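Computationally, the hourly normals amount to hour-by-day averaging over the 30 years, as in this sketch with synthetic stand-in temperatures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 30 years of hourly temperatures, shaped (year, day, hour).
temps = (15 + 10 * np.sin(2 * np.pi * np.arange(24) / 24)
         + rng.normal(size=(30, 365, 24)))

hourly_normals = temps.mean(axis=0)   # (365, 24) 30-yr hour-by-day averages
july1_diurnal = hourly_normals[181]   # diurnal cycle for 1 July (day 182)
```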
Abstract
Our study shows that intercomparisons among sea surface temperature (SST) products are influenced by the choice of SST reference and by the interpolation of the SST products. The influence of the reference SST depends on whether the reference SSTs are averaged onto a grid or evaluated at pointwise in situ locations, including buoy or Argo observations, and on whether they are filtered by first-guess or climatology quality control (QC) algorithms. The influence of the interpolation depends on whether the SST products are compared on their original grids or preprocessed onto common coarse grids. The impacts of these factors are demonstrated in our assessments of eight widely used SST products (DOISST, MUR25, MGDSST, GAMSSA, OSTIA, GPB, CCI, CMC) relative to buoy observations: (i) when the reference SSTs are averaged onto 0.25° × 0.25° grid boxes, the magnitude of the biases is lowest in DOISST and MGDSST (<0.03°C), and the magnitude of the root-mean-square differences (RMSDs) is lowest in DOISST (0.38°C) and OSTIA (0.43°C); (ii) when the same reference SSTs are evaluated at pointwise in situ locations, the standard deviations (SDs) are smaller in DOISST (0.38°C) and OSTIA (0.39°C) on 0.25° × 0.25° grids, but the SDs become smaller in OSTIA (0.34°C) and CMC (0.37°C) on the products’ original grids, showing the advantage of those high-resolution analyses in resolving finer-scale SSTs; (iii) when a loose QC algorithm is applied to the reference buoy observations, SDs increase, and vice versa, but the relative performance of the products remains the same; and (iv) when drifting-buoy or Argo observations are used as the reference, the magnitudes of the RMSDs and SDs become smaller, potentially because of differences in observing intervals. These results suggest that high-resolution SST analyses may have an advantage in such intercomparisons.
Significance Statement
Intercomparisons of gridded SST products can be affected by how the products are compared with in situ observations: whether the products are on coarse (0.25°) or original (0.05°–0.10°) grids, whether the in situ SSTs are used at their reported locations or gridded and how they are quality controlled, and whether the biases of satellite SSTs are corrected by localized matchups or large-scale patterns. Taking all of these factors into account, our analyses indicate that the NOAA DOISST is among the best SST products for the long period (1981–present) and relatively coarse (0.25°) resolution that it was designed for.
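The matchup statistics behind these comparisons can be sketched as follows, assuming two collocated 1D arrays of product and buoy SSTs:

```python
import numpy as np

def matchup_stats(product_sst, buoy_sst):
    """Bias, RMSD, and bias-removed standard deviation of
    product-minus-buoy differences for collocated matchups."""
    diff = product_sst - buoy_sst
    bias = diff.mean()
    rmsd = np.sqrt((diff ** 2).mean())
    sd = diff.std(ddof=1)   # spread about the mean bias
    return bias, rmsd, sd
```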
Abstract
The National Oceanic and Atmospheric Administration Global Surface Temperature (NOAAGlobalTemp) dataset is widely used for scientific research, operational monitoring, and climate assessment activities. Aligning with NOAA’s mission values, NOAAGlobalTemp has been updated to version 6 (i.e., NGTv6), which includes two enhancements over its predecessor (NGTv5). The first enhancement is the expansion of the spatial coverage to encompass the entire globe and the extension of temporal coverage back to 1850 (an interim version of NOAAGlobalTemp with these features was released in February 2023). The expansion of spatial coverage is accomplished by utilizing surface air temperatures over the Arctic Ocean and by eliminating the data reconstruction mask used in NGTv5 that had suppressed interpolation in data-sparse regions. This change has important implications for global temperature trends since the Arctic region has been warming at a much faster pace, more than 4 times the global average, in the twenty-first century to date. The second enhancement is the implementation of a methodology based on artificial intelligence (AI) for reconstructing surface air temperature over the global land surface and the Arctic Ocean. The AI model employs an artificial neural network to fill data gaps and is demonstrated to be more robust, stable, and accurate than the previous gap-filling method, particularly in observation-sparse areas such as the polar regions. The model outperforms the previous approach across all evaluated statistical metrics, and the output reaches a stable state more quickly as observations are received, which facilitates climate monitoring. NGTv6 was released in February 2024.