This is the registration and abstract submission site for MOW24.
The online agenda is also available here. Presentations are available until 30 November.
Mathematics of the Weather is a forum for the discussion of new numerical approaches for use in numerical forecasting, climate modelling and research into numerical modelling of the atmosphere. This year, more theoretical aspects and climate will be a particular focus.
Homepage: MOW24 – TPChange
Coffee and a lunch snack will be served
Parametrization of surface turbulent exchange in almost all Earth System Models relies on statistical representations valid only in highly restrictive conditions that are not often encountered in the atmosphere (flat and horizontally homogeneous terrain, stationarity, moderate stratification). Under strong stratification over flat and homogeneous terrain, and over complex terrain, the parametrizations of surface turbulent exchange fail, with no alternative to replace them. This failure is a cause of large uncertainties in the estimation of the surface fluxes coupling the atmosphere to the underlying surface. Recent research has shown that this failure of parametrizations can be ameliorated by adding information on turbulence anisotropy. Anisotropy quantifies the directionality of turbulent exchange and is associated with differences in the transport efficiency of turbulence that are not accounted for in classic parametrizations.
Here we explore the characteristics and drivers of turbulence anisotropy in the unstable atmospheric surface layer, over terrain ranging from flat to highly complex, using reduced variance budgets and machine learning, with the aim of developing a new generation of turbulence parametrizations valid over both flat and complex terrain. The results highlight systematic differences in turbulence over complex terrain compared to flat terrain, but also show that anisotropy can be described by the same non-dimensional ratios for flat and complex terrain. Given the increasingly recognized importance of turbulent exchange for weather and climate at all scales, improved parametrizations are necessary for reducing the uncertainty in weather, climate, and air-pollution models, particularly in polar regions and over complex terrain.
Atmospheric processes cover a wide range of spatial and temporal scales, where turbulence occurs at the lowest range of this spectrum of motions. The scale determines whether a process may be directly solved in a weather and climate model, or needs to be represented by a simplified empirical formulation due to computational limits.
Existing formulations of near-surface turbulence were developed for flat and horizontally homogeneous terrain, which is not representative of the majority of Earth's land surface. There is evidence that including information on the directionality of the turbulent exchange of momentum (anisotropy), represented by the eigenvalues of the stress tensor, may improve the empirical formulations used in models and allow their extension to complex orography.
The present contribution explores how specific sets of eigenvalues (determining the shape of anisotropy) and eigenvector directions are related to specific components of momentum exchange in the coordinate system widely applied in turbulence studies. The approach investigates the uniqueness of the relation between eigenvalues and momentum exchange, and whether the eigenvectors' directions are physically constrained (e.g. by atmospheric stability or height above ground), as this could give insights for turbulence models. To answer these questions, two datasets from a relatively flat site and a glacier site are used. It is shown that the eigenvectors' directions depend on the examined site and on other factors that are not straightforward to isolate.
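For readers unfamiliar with the eigenvalue-based description of anisotropy used in the two abstracts above, the following minimal sketch (illustrative only, with a synthetic stress tensor, not the authors' code) computes the normalized anisotropy tensor, its eigenvalues, and the barycentric invariants commonly used to characterise the "shape" of turbulence:

```python
# Minimal sketch: barycentric anisotropy invariants from a Reynolds-stress tensor.
import numpy as np

def anisotropy_invariants(R):
    """R: 3x3 Reynolds stress tensor <u_i' u_j'> (m^2 s^-2)."""
    k = 0.5 * np.trace(R)                       # turbulence kinetic energy
    b = R / (2.0 * k) - np.eye(3) / 3.0         # normalized anisotropy tensor (trace-free)
    lam = np.sort(np.linalg.eigvalsh(b))[::-1]  # eigenvalues, descending
    # Barycentric weights of the one-component, two-component and isotropic limits
    x1c = lam[0] - lam[1]
    x2c = 2.0 * (lam[1] - lam[2])
    x3c = 3.0 * lam[2] + 1.0
    return lam, (x1c, x2c, x3c)

# Example with a synthetic, shear-dominated stress tensor
R = np.array([[ 1.0, -0.3, 0.0],
              [-0.3,  0.6, 0.0],
              [ 0.0,  0.0, 0.4]])
print(anisotropy_invariants(R))
```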
The atmospheric kinetic energy is affected by two distinct sources at widely separated horizontal length-scales: baroclinic instability at wavelengths around 3000 km (synoptic scales), and convection at wavelengths between hundreds and a few thousand meters (mesoscales). Nastrom and Gage (1985) analyzed observations of the atmospheric energy spectrum as a function of horizontal wavenumber showing two power-law ranges: a steep -3 slope for the synoptic scale range and a shallow -5/3 slope for the mesoscale range. The -5/3 spectral slope was initially puzzling, as this was only anticipated for isotropic 3D turbulence, which does not apply to these scales.
High-resolution mesoscale model simulations demonstrate that convective systems are able to generate a background kinetic energy spectrum with a slope close to -5/3, as in the atmospheric mesoscales. This was attributed to upscale transfer by Gage (1979) and Lilly (1983). We compare such meteorological models to idealized non-hydrostatic dry Boussinesq turbulence simulations in triply periodic geometry, where such results are generally not obtained. We aim to study turbulence in a setting as close as possible to the atmospheric context. To make the setup analogous to a typical meteorological model, we configure our simulations with similar resolutions and an anisotropic grid. We also add a stable mean vertical shear flow profile and free-slip boundaries at the top and bottom. A first set of simulations used initial conditions given by random buoyancy perturbations centered on an intermediate horizontal wavenumber. This caused a downscale cascade of energy, while producing a decaying peak that remained at the injection wavenumber. We were later able to produce the -5/3 spectral slope by instead specifying the initial buoyancy field as a series of aligned, localized structures, as used in previous meteorological studies (e.g., Sun et al., 2019). These structures produced a short-lived upscale transfer of energy. It is also worth noting that the -5/3 spectral slope was reproduced using the dry Boussinesq equations, suggesting that moist processes may not be central to these dynamics.
Lastly, we increase the horizontal resolution of our model configuration while reducing horizontal diffusion, keeping the vertical resolution and vertical diffusion the same. This changes the grid aspect ratio of our simulations, moving them closer to the isotropic grids typically employed in turbulence studies. We find that the -5/3 spectral slope is maintained in all these simulations, while the horizontal wavenumber extent of the slope and the upscale energy flux vary with the horizontal resolution and numerical model diffusion. This also suggests that the similar dynamics observed in meteorological models are somewhat robust to the grid aspect ratio.
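As a point of reference for the spectral slopes discussed above, the following sketch (synthetic white-noise fields and an assumed wavenumber band, not the study's diagnostic code) shows one simple way to estimate a horizontal kinetic-energy spectrum and fit a power-law slope:

```python
# Illustrative sketch: shell-averaged KE spectrum from a doubly periodic 2D field,
# followed by a log-log slope fit over a chosen wavenumber band.
import numpy as np

def ke_spectrum(u, v, dx):
    n = u.shape[0]
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    E2d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2) / n**4
    kx = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kh = np.sqrt(KX**2 + KY**2)
    kbins = np.arange(1, n // 2) * (2 * np.pi / (n * dx))
    E = np.array([E2d[(kh >= k0) & (kh < k1)].sum()
                  for k0, k1 in zip(kbins[:-1], kbins[1:])])
    return 0.5 * (kbins[:-1] + kbins[1:]), E

rng = np.random.default_rng(0)
u = rng.standard_normal((256, 256))
v = rng.standard_normal((256, 256))
k, E = ke_spectrum(u, v, dx=1.0e3)               # 1 km grid spacing (assumed)
band = (k > 1e-5) & (k < 1e-4) & (E > 0)         # hypothetical "mesoscale" band
slope = np.polyfit(np.log(k[band]), np.log(E[band]), 1)[0]
print("spectral slope estimate:", slope)
```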
A Lagrangian gravity-wave parameterization (MS-GWaM, Multi-Scale Gravity-Wave Model) that allows for fully transient wave-mean-flow interaction and horizontal propagation is applied to orographic gravity waves for the first time. Both linear and nonlinear mountain waves are modeled in idealized simulations within the pseudo-incompressible flow solver PincFlow. Two-dimensional flows over monochromatic orographies are considered, using MS-GWaM either in its fully transient implementation or in a steady-state implementation that represents classic mountain-wave parameterizations. Comparisons of wave-resolving simulations (not using MS-GWaM) and coarse-resolution simulations (using MS-GWaM) show that allowing for transience leads to a significantly more accurate forcing of the resolved mean flow. The model is able to reproduce the transient forcing of linearly generated mountain waves that slowly propagate upwards, in contrast to the instantaneous distribution of wave energy in classic parameterizations. At high altitudes, wave breaking induces a wind reversal that is captured by the transient model but inhibited in steady-state simulations, due to the assumption of critical level formation. This shows that transience can have a substantial impact on the interaction between mountain waves and mean flow.
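For orientation, Lagrangian ray tracers of this kind integrate the standard ray equations together with a phase-space wave-action equation; a generic schematic form (not the specific MS-GWaM formulation) is

\[
\frac{d\mathbf{x}}{dt} = \frac{\partial\Omega}{\partial\mathbf{k}} = \mathbf{c}_g,\qquad
\frac{d\mathbf{k}}{dt} = -\frac{\partial\Omega}{\partial\mathbf{x}},\qquad
\frac{\partial\mathcal{N}}{\partial t}
+\nabla_{\mathbf{x}}\cdot\left(\dot{\mathbf{x}}\,\mathcal{N}\right)
+\nabla_{\mathbf{k}}\cdot\left(\dot{\mathbf{k}}\,\mathcal{N}\right) = \mathcal{S},
\]

where $\Omega(\mathbf{k},\mathbf{x},t)$ is the local dispersion relation, $\mathcal{N}$ the phase-space wave-action density, and $\mathcal{S}$ represents sources and dissipation (e.g. wave breaking); the momentum-flux convergence computed from the rays is what forces the resolved flow.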
The large-scale transport of tracers such as ozone and water vapor is primarily governed by the Brewer-Dobson circulation. However, this transport is modified by small-scale gravity waves (GWs) and turbulence from GW breaking. Since these dynamics are not fully resolved in weather and climate models, parameterization is necessary. Tracers significantly influence the Earth's energy budget and surface climate, making accurate modeling crucial. Current GW parameterization schemes account for the indirect effects on mean meridional circulation but not the direct effects of GW tracer transport or the enhanced mixing due to GW breaking.
To address this gap, we use wave-resolving simulations to investigate how GWs affect tracer distribution. We also extend a GW parameterization scheme, a Lagrangian ray tracer, to include GW-induced tracer transport from inertial and propagating GWs. Our findings highlight the significant direct impact of GWs on tracer transport. Additionally, we may discuss the influence of turbulent diffusive mixing on tracers. Our goal is to provide a comprehensive understanding of the complex processes influencing large-scale tracer transport in the atmosphere.
The microphysical properties of cirrus clouds and their interactions with local dynamics, such as gravity waves (GW), are not well understood (Gasparini et al. [2018], Joos et al. [2014]), leading to significant uncertainties in climate effect estimations. Accurate representation of ice formation processes and ice number concentration prediction is crucial for understanding the cirrus lifecycle (Krämer et al. [2020]), the structure of the tropopause layer, and the radiative budget (Matus and L’Ecuyer [2017]).
The present work complements Dolaptchiev et al. [2023] and is designed to further refine and generalize their approach, incorporating variability in the ice mean mass during cirrus evolution. Extending the results of Baumgartner and Spichtinger [2019], which are dedicated to homogeneous nucleation due to constant updraft velocities, we incorporate GW dynamics into the asymptotic approach. Previous studies (Gierens et al. [2003]) have shown that the deposition coefficient significantly impacts the nucleation process and depends on the mass of ice crystals. Numerical parcel model simulations in Dolaptchiev et al. [2023] also demonstrate that mean mass variability significantly affects cloud ice number concentration predictions, especially under conditions with a higher number of pre-existing ice particles. In order to generalize the parameterisation introduced by Dolaptchiev et al. [2023], a correction to the parameterisation that accounts for changes in mean mass is proposed.
The suggested correction to the deposition term is based on underlying physical differences in the deposition process. The proposed approach is validated through ensemble calculations and tested for robustness with larger time steps. Recommendations for optimal time steps for the representation of individual nucleation events, combined with sufficiently well captured statistics of post-nucleation ice number concentration occurrences, are provided. Incorporating mean mass variation leads to a better representation of predicted ice number concentrations, enhancing our understanding and modeling of cirrus clouds.
References
Manuel Baumgartner and Peter Spichtinger. Homogeneous nucleation from an asymptotic point of view. Theoretical and Computational Fluid Dynamics, 33 (1):83–106, 2019. doi: https://doi.org/10.1007/s00162-019-00484-0.
Stamen I. Dolaptchiev, Peter Spichtinger, Manuel Baumgartner, and Ulrich Achatz. Interactions between gravity waves and cirrus clouds: Asymptotic modeling of wave-induced ice nucleation. Journal of the Atmospheric Sciences, 80(12):2861 – 2879, 2023. doi: https://doi.org/10.1175/JAS-D-22-0234.1.
Blaž Gasparini, Angela Meyer, David Neubauer, Steffen Münch, and Ulrike Lohmann. Cirrus cloud properties as seen by the CALIPSO satellite and ECHAM-HAM global climate model. Journal of Climate, 31(5):1983–2003, 2018. doi: https://doi.org/10.1175/JCLI-D-16-0608.1.
Klaus M. Gierens, Marie Monier, and Jean-Francois Gayet. The deposition coefficient and its role for cirrus clouds. Journal of Geophysical Research: Atmospheres, 108(D2), 2003. doi: https://doi.org/10.1029/2001JD001558.
Hanna Joos, Peter Spichtinger, Philipp Reutter, and Fabian Fusina. Influence of heterogeneous freezing on the microphysical and radiative properties of orographic cirrus clouds. Atmospheric Chemistry and Physics, 14(13):6835–6852, 2014. doi: https://doi.org/10.5194/acp-14-6835-2014.
Martina Krämer, Christian Rolf, Nicole Spelten, Armin Afchine, David Fahey, Eric Jensen, Sergey Khaykin, Thomas Kuhn, Paul Lawson, Alexey Lykov, et al. A microphysics guide to cirrus–part 2: Climatologies of clouds and humidity from observations. Atmospheric Chemistry and Physics, 20(21): 12569–12608, 2020. doi: https://doi.org/10.5194/acp-20-12569-2020.
Alexander V. Matus and Tristan S. L’Ecuyer. The role of cloud phase in Earth’s radiation budget. Journal of Geophysical Research: Atmospheres, 122(5):2559–2578, 2017. doi: https://doi.org/10.1002/2016JD025951.
To improve our understanding of the mechanisms that drive the intensification of tropical cyclones (TCs), researchers are pushing the boundaries of simulation resolution, leaning towards scales as fine as a few meters. Increasing the model resolution and numerical accuracy are the primary means to reduce both the model error from parameterization and the error from excessive diffusion. Employing effective grid spacing of approx. 100 m and smaller allows for the explicit calculation of more dynamics, but at a possibly unbearable computational cost if we are not clever about it.
In pursuit of alleviating the computational burden of very high resolution large eddy simulation (LES) of TCs using high-order spectral elements, we are harnessing adaptive mesh refinement (AMR) to dynamically boost resolution where needed, guided by indicators such as potential vorticity and shear intensity, to mention only a few. In this talk we will present results obtained using the spectral element Nonhydrostatic Unified Model of the Atmosphere (NUMA), the dynamical core of NEPTUNE developed at the U.S. Naval Research Laboratory (NRL), with adaptive mesh refinement. Furthermore, we will present our most recent work on non-column-based microphysics with rain and link it to the use of AMR to simulate TCs.
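To make the refinement-indicator idea concrete, here is a minimal, hypothetical sketch (not NUMA/NEPTUNE code; thresholds and fields are invented for illustration) of a cell-tagging rule of the kind that drives AMR:

```python
# Illustrative AMR tagging: flag cells where vertical vorticity or vertical
# wind shear exceeds (hypothetical) thresholds.
import numpy as np

def tag_cells_for_refinement(zeta, shear, zeta_thresh=5e-3, shear_thresh=2e-2):
    """zeta: vertical vorticity (s^-1); shear: |du/dz| (s^-1), per cell."""
    return (np.abs(zeta) > zeta_thresh) | (shear > shear_thresh)

rng = np.random.default_rng(1)
zeta = rng.normal(0.0, 2e-3, size=(64, 64))
shear = np.abs(rng.normal(0.0, 1e-2, size=(64, 64)))
flags = tag_cells_for_refinement(zeta, shear)
print("cells flagged for refinement:", int(flags.sum()))
```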
Adaptive meshes are pivotal in numerical modeling and simulation, offering a means to efficiently, precisely, and flexibly represent intricate physical phenomena across varying scales. However, traditional adaptive mesh generation and optimisation algorithms tend to incur a large computational cost, leading to a significant reduction in computational efficiency. To address this challenge, we turn to artificial intelligence and neural networks. In our study, we harness a long short-term memory (LSTM) neural network as a model for predicting the density of adaptive unstructured grids. The LSTM network predicts the mesh density of adaptive unstructured meshes in different sub-regions together with the corresponding position coordinates, and Delaunay triangulation is then used to guide the generation of the meshes. To demonstrate the practical applicability of our approach, we integrate the LSTM mesh density prediction model into the adaptive atmospheric model Fluidity-Atmosphere (Fluidity-Atmos), enabling real-time mesh adaptation during numerical simulations. We evaluate the effectiveness of the method in terms of simulation accuracy and computational efficiency through a series of 2D experiments. The results show that the mesh generated by the LSTM mesh density prediction model is highly similar to the mesh pattern generated by the Fluidity-Atmos model. It is worth noting that the number of mesh elements generated by the LSTM mesh density prediction model is not fixed; it is adjusted dynamically in real time to meet the simulation requirements.
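The following minimal sketch (hypothetical architecture and synthetic inputs, not the authors' implementation) illustrates the workflow described above: an LSTM maps a short history of per-sub-region flow features to a target mesh density, nodes are sampled accordingly, and SciPy's Delaunay triangulation builds the mesh:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial import Delaunay

class DensityLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                        # x: (regions, time, features)
        h, _ = self.lstm(x)
        return torch.relu(self.head(h[:, -1]))   # non-negative density per region

model = DensityLSTM()
features = torch.randn(16, 10, 4)                # 16 sub-regions, 10 time levels (synthetic)
density = model(features).detach().numpy().ravel() + 1e-3

# Sample node positions per sub-region in proportion to predicted density,
# then triangulate (sub-region centres are placeholders).
rng = np.random.default_rng(0)
centres = rng.uniform(0, 1, size=(16, 2))
pts = np.vstack([c + 0.05 * rng.standard_normal((max(3, int(50 * d)), 2))
                 for c, d in zip(centres, density)])
mesh = Delaunay(pts)
print("triangles:", mesh.simplices.shape[0])
```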
Data-driven multivariate deterministic and stochastic subgrid modelling schemes for atmosphere and ocean models are discussed. A pattern-based approach is taken where pairs of patterns in the space of resolved variables (or functions of these) and in the space of the subgrid forcing are identified and linked in a predictive manner. On top of this deterministic part of the subgrid scheme, the subgrid patterns may be forced stochastically with a fitted vector-autoregressive process. Both the deterministic and the stochastic scheme can be constrained by physically motivated conservation laws, such as momentum conservation or (kinetic) energy conservation combined with enstrophy dissipation. The method can also be extended by combining it with a clustering algorithm to arrive at a set of local subgrid models. The schemes are machine-learning-style but not based on deep learning. Unlike black-box approaches such as neural networks, the present methodology still allows one to understand and interpret the subgrid model.
The subgrid modelling schemes are explored in the multiscale Lorenz 1996 model and then implemented in a spectral quasi-geostrophic three-level atmospheric model with realistic mean state and variability. The atmospheric model at a horizontal resolution of T30 is regarded as the reference against which coarser-resolution versions at T21 and T15, equipped with the subgrid modelling schemes, are compared. In long-term simulations, the novel subgrid schemes greatly improve on a standard hyperviscosity scheme, as evidenced by the mean state, the variability patterns, and the kinetic and potential energy spectra. They also show marked skill improvements in an ensemble prediction setting.
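The stochastic component described above can be illustrated with a short sketch (synthetic pattern coefficients, not the study's data or code) that fits a first-order vector-autoregressive (VAR(1)) process by least squares and draws one stochastic update:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 5))           # placeholder time series of 5 subgrid-pattern coefficients

X, Y = x[:-1], x[1:]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # VAR(1) propagator: Y ≈ X @ A
resid = Y - X @ A
Sigma = np.cov(resid.T)                      # noise covariance for the stochastic forcing

# One stochastic update of the subgrid-pattern coefficients
noise = rng.multivariate_normal(np.zeros(5), Sigma)
x_next = x[-1] @ A + noise
print(x_next)
```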
AI weather models have shown remarkable success in short- and medium-range forecasting. However, major questions remain about their ability to predict the most extreme events, particularly those that are so rare they were absent from the training set (so-called gray swans). There are also already documented challenges for these models in learning multi-scale dynamics. We will show in-depth analyses of these models' shortcomings for learning gray swans and multi-scale dynamics. We will discuss the implications and present a number of solutions for addressing both.
A deep-learning model using convolutional neural nets is shown to produce physically realistic simulations of atmospheric and ocean circulations for the current climate state over 100-year autoregressive rollouts. The model employs 10 prognostic variables above each cell on a 110-km resolution HEALPix mesh. The atmosphere and ocean are coupled asynchronously, with 6-hour and 2-day time resolution in the atmosphere and ocean, respectively. The model is trained on both ERA5 reanalysis data and observations from the International Satellite Cloud Climatology Project (ISCCP).
The model’s climatology from the 100-year rollout is compared to ERA5 reanalysis data to assess its low-frequency variability, with particular emphasis on the northern and southern annular modes, blocking, extra-tropical and tropical cyclones, and the south Asian monsoon. The performance of our DLESM on these measures equals or exceeds that of much more computationally intensive Earth-system models from the 6th Climate Model Intercomparison Project (CMIP6).
It is sometimes erroneously assumed that auto-regressively generated ML forecasts inevitably smooth with time. Our model maintains sharp representations of the atmospheric structure indefinitely. This is demonstrated by a figure (not included in this abstract) showing an intense winter-time mid-latitude cyclone roughly 100 years (73,000 steps) after the start of the simulation, together with an observed event.
Techniques of machine learning (ML) and what is called “artificial intelligence” (AI) today find a rapidly increasing range of applications touching upon social, economic, and technological aspects of everyday life. They are also being used increasingly and with great enthusiasm to fill in gaps in our scientific knowledge by data-based modeling approaches. I have followed these developments over the past almost 20 years with interest and concern, and with mounting disappointment. This leaves me sufficiently worried to raise here a couple of pointed remarks.
Developing a hybrid model that integrates physics-based principles with advanced artificial intelligence (AI) techniques is a promising strategy for achieving accurate and efficient environmental prediction. This hybrid approach harnesses the strengths of both disciplines to enhance the precision and versatility of predictive modelling in the environmental sciences. In this framework, the physics-based component incorporates established principles from fluid dynamics, thermodynamics, and other relevant physical disciplines. In this talk, I will first demonstrate the capability of multi-scale adaptive mesh physical modelling for urban environmental problems, where the details of buildings and the impact of green infrastructure (trees, parks) are considered. Furthermore, I will introduce recent developments in machine learning techniques and data assimilation for improved predictive accuracy, uncertainty optimisation, and rapid-response modelling. Complementing the physics-based foundation, AI algorithms are employed to dynamically adapt and refine the model based on real-time data. The capability of deep learning combined with data assimilation is demonstrated through hourly/daily PM2.5/ozone forecasting globally and regionally (in China). Finally, the presentation underscores the significance of digital twin tools in the context of smart city management, drawing connections to a recently funded EPSRC project. This holistic approach not only showcases the potential of a hybrid physics-AI model in environmental prediction but also emphasizes its practical implications for advancing smart city initiatives.
Finite element methods have conventionally focused on running on central processing units (CPUs). However, hardware is advancing rapidly, partly driven by machine learning applications. Representing numerical solvers with neural networks and implementing them with machine learning packages can bring advantages such as hardware agnosticism, automatic differentiation, and easy integration with data-driven models. This work implements unstructured finite element solvers using graph neural networks for the first time and demonstrates their architecture agnosticism with tests on graphics processing units (GPUs). Specifically, high-order discontinuous Galerkin methods with an interior penalty scheme are adopted. The approach is first demonstrated on diffusion problems to illustrate the graph representation of an unstructured mesh, matrix-free residual evaluation inside neural networks, and a multigrid method consisting of p-multigrid levels and algebraic multigrid levels, which is shown to be scalable and robust. The approach is then extended to hyper-elasticity, incompressible flow, and finally FSI problems. Overall, the approach shows promising speed in diffusion and hyper-elasticity compared to some highly optimised implementations in the literature, while maintaining high accuracy, i.e. $(p+1)$-order convergence for $p$-th order elements, in the three types of problems.
The parameterization of gravity wave momentum transport remains an active area of research in atmospheric model development. Although small relative to the synoptic flow, un- and under-resolved gravity waves can systematically modify the propagation and breaking of Rossby waves, thereby playing a significant role in the planetary-scale circulation. Parameterizations seek to faithfully represent these waves and their effects at minimal computational cost. Ray tracing is a promising method for modeling momentum transport associated with atmospheric gravity waves, allowing one to capture the propagation and dissipation of gravity wave packets. However, ray tracing parameterizations, such as the Multiscale Gravity Wave Model (MS-GWaM), have seen only limited adoption, in large part due to their computational intensity. Operational use of such parameterizations will remain challenging unless their efficiency can be increased by an order of magnitude or more.
Both the convergence and the runtime of ray tracing models are governed by the maximum number of Lagrangian rays permitted to exist at once and their resolution in wavenumber-height phase space. We investigate the use of coarse graining, intermittency, and machine learning to achieve comparable accuracy with dramatically fewer rays. We implement an idealized, one-dimensional version of the MS-GWaM ray tracer that models the interactions between internal gravity waves and a height- and time-varying mean flow. This model allows us to carefully analyze the errors present in resource-constrained integrations. Near the wave source, error is attributable to decreased phase-space resolution; we improve performance in this regime with a neural network trained to respect conservation properties of the system. Further aloft, we find that the error is driven by premature pruning of ray volumes and can be mitigated by introducing intermittency to the source. We explore these techniques and their marriage across a variety of test cases ranging from idealistic to GCM-informed.
Ocean models constitute a fundamental component of any Earth system model. Our goal is to capture the effects of submesoscale eddies, which requires a resolution below one kilometer. Global simulations over several decades at this resolution are not yet feasible, even on state-of-the-art high-performance computers, due to excessive runtimes.
In this presentation, we explore two strategies to enhance the performance of the ICON-O ocean model: super-resolution to reduce the required spatial resolution and parallel-in-time integration to accelerate numerical time stepping.
First, a super-resolution approach motivated by machine learning is applied to incorporate fine-scale information into coarse-scale solutions. To this end, a deep neural network is trained using pairs of high- and low-resolution simulation snapshots. Subsequently, the network is used to periodically correct a low-resolution solution towards a restriction of a high-resolution simulation. Our results demonstrate that this correction enables computations on coarser grids, achieving discretization errors that are significantly lower than what would be possible with an uncorrected coarse solution.
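Schematically, the correction step described above can be sketched as follows (assumed function and parameter names, not the ICON-O implementation): a trained network maps a coarse snapshot towards the restriction of a high-resolution snapshot, and the coarse integration is nudged by that correction every few steps.

```python
import torch

def run_corrected(step_coarse, correct_net, u0, n_steps, n_corr=10, alpha=1.0):
    """step_coarse: one coarse time step; correct_net: trained correction network."""
    u = u0
    for n in range(n_steps):
        u = step_coarse(u)
        if (n + 1) % n_corr == 0:                 # periodic correction
            with torch.no_grad():
                u = (1 - alpha) * u + alpha * correct_net(u)
    return u

# Dummy stand-ins so the sketch runs:
net = torch.nn.Identity()
u_final = run_corrected(lambda v: 0.99 * v, net, torch.randn(1, 1, 32, 32), n_steps=50)
```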
Second, we propose to use spectral deferred corrections (SDC), a time integration scheme that allows for higher order and larger time steps by iteratively applying a low order integrator. A parallel version of SDC has recently been proposed, facilitating an even more efficient use of the available resources. This is achieved through optimized parameters and small-scale parallelism across the method, i.e., in each iteration within one time step.
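Schematically (a generic form with an implicit-Euler base integrator, not necessarily the exact variant used here), one SDC sweep over quadrature nodes $\tau_m$ in $[t_n, t_{n+1}]$ updates

\[
u^{k+1}_{m+1} = u^{k+1}_{m}
+ \Delta\tau_m\left[f\!\left(u^{k+1}_{m+1}\right) - f\!\left(u^{k}_{m+1}\right)\right]
+ \sum_{j} q_{mj}\, f\!\left(u^{k}_{j}\right),
\]

where the $q_{mj}$ are quadrature weights approximating $\int_{\tau_m}^{\tau_{m+1}} f(u(s))\,ds$. Each sweep typically gains one order of accuracy up to the order of the underlying quadrature, and the sweeps and nodes offer the small-scale parallelism mentioned above.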
We present our approach to implementing the aforementioned ideas in the ICON-O code. Furthermore, we show recent results obtained on the JUWELS cluster at the Jülich Supercomputing Centre.
Numerical simulations of large scale geophysical flows typically require unphysically strong dissipation for numerical stability, and the coarse resolution requires subgrid parameterisation. A popular scheme toward restoring energetic balance is horizontal kinetic energy backscatter. We consider a continuum formulation where momentum equations are augmented by a backscatter operator, e.g. in rotating Boussinesq. Consistent with numerical observations, it turns out that the injected energy can accumulate in certain scales. We discuss the occurrence of this phenomenon in specific plane waves and related bifurcations in the presence of bottom drag.
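One common continuum form of such a closure (a schematic choice for illustration, not necessarily the exact operator studied here) augments the momentum equation with an energy-injecting negative-viscosity term stabilised by hyperviscosity,

\[
\partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \ldots
= \ldots \; - \nu_b\,\Delta\mathbf{u} \; - \nu_h\,\Delta^2\mathbf{u},\qquad \nu_b,\ \nu_h > 0,
\]

which injects kinetic energy at wavenumbers below $\sqrt{\nu_b/\nu_h}$ and removes it above, so that injected energy can indeed accumulate in particular scales if it is not otherwise removed, e.g. by bottom drag.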
The area of fluid dynamics is based not on a single but on a multitude of dynamical equations that obey a hierarchical relationship. Among the constituents of this hierarchy are Euler and Navier-Stokes equations with their compressible and incompressible versions, as well as hydrostatic and geostrophic equations. The goal of this work is to formulate a computational approach for computing solutions for a whole spectrum of equations such that:
1) the solutions to these equations satisfy the respective conservation laws of the specific dynamical equation
2) the solutions to different equations respect the singular limits that relate these equations.
3) the numerics is to a certain extent mesh-unaware, i.e. it works for a spectrum of grids.
Discontinuous Galerkin (DG) / Flux Reconstruction (FR) methods are high-order explicit finite element methods for solving advection-dominated equations. They are very successful in providing reliable, small-scale-capturing methods that are arithmetically intense and thus suitable for modern memory-bound HPC hardware. These methods have been used in various applications, including those involving numerical modelling of the atmosphere.
We will discuss a Lax-Wendroff Flux Reconstruction (LWFR) variant of the method which is arbitrarily high order and advances in time in a single stage, unlike the multi-stage Runge-Kutta (RK) methods. The single-stage nature of the LWFR scheme minimizes inter-element communication, making the method more efficient than the standard RK methods. However, it loses the admissibility preservation properties that are present in the standard Strong Stability Preserving (SSP) RK methods. Admissibility properties like positivity of density and pressure, and entropy stability, are crucial for the robustness and reliability of a numerical method. To that end, we present a novel subcell-based shock capturing method that suppresses spurious oscillations for discontinuous solutions while also giving admissibility preservation. The idea for enforcing admissibility is to use a flux limiter which is interlaced within the face loops. The framework is also extended to handle source terms while maintaining high-order accuracy and admissibility.
Several applications of the method like astrophysical jet flow and Kelvin-Helmholtz instability will be shown to demonstrate the accuracy and robustness of the method.
The currently available computing power limits the resolution in chemistry climate models, even on upcoming exascale machines. This is, for example, due to the much larger number of prognostic variables, including chemical tracers. However, to further enhance the reliability and accuracy of climate projections, smaller scales have to be taken into account.
Adaptive methods offer a solution here, as they allow computational power to be focused dynamically on specific areas in time and space. This makes it possible to drastically increase the level of detail while keeping the time to solution and resource consumption low. However, adaptivity also requires a sophisticated selection of adaptation criteria, algorithms, memory layouts, and communication patterns to fully utilize modern HPC infrastructures.
Within the project ADAPTEX we develop a new framework for Earth-system modeling simulations on climate scales based on adaptive meshes. The overall aim is to make a variety of applications exascale-ready. The centerpiece is the Modular Earth Submodel System (MESSy), which is a flexible chemistry climate model. It is used e.g. for global chemistry climate applications and air quality studies but can also be applied in idealized setups. To enable adaptive mesh refinement within MESSy, we employ the parallel mesh management library t8code and the flow solver Trixi.jl, both of which have already demonstrated excellent scaling properties. Trixi.jl implements a high order discontinuous Galerkin scheme, which allows for consistent accuracy on coarser meshes and for efficient algorithms due to mainly local computations. Robustness is ensured by utilizing state-of-the-art entropy stable flux functions.
Furthermore, Trixi.jl is written in Julia, a modern high-level programming language, which delivers performance comparable to classical languages but is also convenient to use. In our framework it allows domain scientists to quickly explore alternative dynamical cores while leveraging the underlying computing power and vendor-agnostic GPU support.
In this conference contribution we provide an introduction to using adaptive mesh refinement for Earth-system modeling applications, using the chemistry climate model MESSy as an example, and present the current status and the envisioned final setup.
-all posters will be visible during the whole conference, authors will be available during the whole poster session and on demand during the conference-
In the Model Uncertainty-MIP (MUMIP) we run single column models (SCMs) from different modelling centres over the same period and domain with a series of 6-hour simulations. By constructing the SCM initial and boundary conditions so that they are derived from a common 3D simulation, a common prescription of the dynamics is enforced. Consequently, the combined dataset of the array of SCM simulations will mimic a series of fully 3D NWP runs. Such a model intercomparison with objective procedures is highly important for understanding and quantifying uncertainty in physical parameterisations and parameterisation packages under the same large-scale state. It can further help constrain stochasticity when coarse simulations are compared to storm-resolving simulations (Christensen, 2020). A dataset covering the Indian Ocean is currently under construction for the SCMs of ECMWF, the UK Met Office, Météo France, and the NCAR/NOAA Developmental Testbed Center.
Here, we present a first test case for MUMIP data. Recent work has demonstrated differences in (non-)stationarity across different reanalysis datasets (notably ERA5 and the Japanese reanalysis) in particular regarding the climatology of tropical explicit/convective precipitation and CAPE (Buschow 2024). A proposed hypothesis is that data assimilation and spin-up from non-native model states could be responsible for the non-stationary reanalysis climate.
We analyse MUMIP data to investigate similar transient behavior as a function of forecast time and the diurnal cycle. We further investigate the potential of a link between transience in short-term forecasts of (convective) precipitation and CAPE during the spin-up of the SCMs. Furthermore, MUMIP datasets allow us to intercompare different model physics packages as a function of lead time (0-6 hours), to quantify their divergence and to broadly illustrate model physics uncertainty.
The Multi-Scale Gravity-Wave Model (MS-GWaM) in the weather and climate code ICON is the first gravity-wave parameterization that takes wave transience and horizontal propagation into account (Achatz et al 2023, JMP; Voelker et al 2024, JAS). It predicts the development of the spectral gravity-wave field by a Lagrangian approach following gravity-wave rays parallel to the wave group velocity, while the predicted momentum and entropy fluxes couple back to the flow resolved by ICON. Using MS-GWaM in ICON, one can demonstrate that wave transience and horizontal wave propagation significantly modulate or even cause the momentum-flux intermittency observed in stratosphere and mesosphere, and that they modify the distribution of the wave fluxes. Moreover, horizontal gravity-wave propagation has a leading-order effect on the period and structure of the quasi-biennial oscillation (Kim et al 2024, ACP), and it makes a significant difference in the simulated middle-atmosphere residual circulation as well as zonal-mean zonal winds and temperature. While being costlier than conventional gravity-wave parameterizations, MS-GWaM outperforms them in realism and thereby provides an efficient alternative to capturing gravity-wave effects by explicitly resolving those waves in high-resolution codes.
The second law of thermodynamics requires positive internal entropy production rates. All subgrid-scale processes in our models have to be described as irreversible. A naive analysis of the heat flux parameterization under stable stratification reveals, however, that the second law is violated by our usually applied methods. It will be explained that, when counting the TKE flux as a sort of heat flux in addition to the classical sensible heat flux, their sum must be directed down the temperature gradient. In fact, this flux sum is then nearly zero, since the subgrid-scale processes describe nearly adiabatic motions in which TKE and TPE are continuously transformed into each other. Hence, at stable stratification the shear production is the main dissipation rate. Parameterization developers have to check their code with respect to the second-law constraint. Unfortunately, the usually applied Boussinesq-approximated equations hide the problem. Those equations may not be formulated in accordance with the Gibbs fundamental equation, from which the entropy budget follows.
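In the notation suggested by the abstract, the constraint can be written schematically as a non-negative entropy production of the combined flux:

\[
\sigma \;=\; -\frac{1}{T^{2}}\left(\mathbf{F}_{\mathrm{sens}}+\mathbf{F}_{\mathrm{TKE}}\right)\cdot\nabla T \;\ge\; 0
\quad\Longleftrightarrow\quad
\left(\mathbf{F}_{\mathrm{sens}}+\mathbf{F}_{\mathrm{TKE}}\right)\cdot\nabla T \;\le\; 0 ,
\]

i.e. it is the sum of the sensible heat flux and the TKE flux, not the sensible heat flux alone, that must be directed down the temperature gradient.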
There are multiple versions of Ertel's potential vorticity (EPV) to be found in the literature. The scalar $\psi$ is either taken as the virtual potential temperature or as the equivalent potential temperature. This poster argues that neither is suitable. Rather, a potential temperature derived from the linear combination of the entropy potential and the total water content times a reference value is an appropriate choice. Both the total water content and the entropy are Lagrangian invariants under ideal (reversible, adiabatic) conditions, so that particles can be relabelled on a $\psi$ surface. The derivation of the EPV equation with the new 'modified entropy' (or modified entropy potential temperature) contains further non-convective fluxes which are due to the mixing of constituents and have not yet been described in the literature.
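For reference, the Ertel potential vorticity discussed here has the standard form, with the poster's choice of $\psi$ written only schematically as

\[
\Pi = \frac{1}{\rho}\left(\nabla\times\mathbf{v} + 2\boldsymbol{\Omega}\right)\cdot\nabla\psi,
\qquad \psi = \psi\!\left(s + \mu_0\, q_t\right),
\]

where $s$ is the specific entropy, $q_t$ the total water content, and $\mu_0$ a constant reference value; the precise definition of the modified entropy potential temperature is given on the poster.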
The do-nothing flux of EPV motivates the definition of the inactive wind, which is the most general wind balance in the atmosphere. A version of transformed atmospheric equations is given which makes the do-nothing flux of EPV 'invisible'. In the horizontal momentum equation, one can thus distinguish baroclinic and barotropic forcing terms. The active wind (the deviation from the inactive wind) and the entropy source term (see other poster) together describe how isentropes are moved. In the time mean, the vertical active wind resembles the vertical residual wind of the TEM framework.
The often-used virtual potential temperature is not a Lagrangian invariant and must be ruled out as a choice of $\psi$. The often-investigated effect of latent heating on the evolution of the EPV is meaningless under the strict notion of a potential vorticity.
Clouds are one of the most important components of the Earth-atmosphere system. They influence the hydrological cycle and the energy budget of the system via interaction with solar and infrared radiation. For clouds at lower levels consisting of water droplets, these effects are quite well understood, but for clouds containing ice particles (i.e. at lower temperatures) there are still open issues.
The description of clouds containing ice particles is still quite uncertain. Some processes (such as ice nucleation) are not well known; for others the parameters are not precisely determined. Since it is not always feasible to use the most complex formulation of all processes, we have to find meaningful approximations and reduced-order models, which should have properties comparable to those of the complex models.
In this contribution, a hierarchy of ice cloud models formulated as ODE systems is presented, including mathematical analyses of their qualitative properties using the theory of dynamical systems.
Modern computer systems are increasingly diverse, composed of nodes with shared memory and different types of processors, mainly CPUs and GPUs. GPU vendors usually come with their own language specification, such as CUDA (Nvidia), ROCm (AMD), or Metal (Apple).
The Julia language is designed for scientific computation and comes with special packages for writing individual kernels for different backends. KernelAbstractions.jl allows one to write typical stencil operations in a native way without detailed knowledge of the GPU backend. Combined with domain decomposition and backend-to-backend data transfer via MPI-aware implementations, this results in efficient code for high-performance computing.
I will present results for two dycore implementations, a spectral element implementation following the HOMME code (Sandia National Laboratories) and a finite volume implementation following the numerics of the ICON model, in the same Julia infrastructure.
The formation of ice clouds (cirrus clouds) in the tropopause region requires moderate or even high vertical velocities of up to several m/s when homogeneous freezing is involved. Such vertical velocities result from convective updrafts, turbulence or gravity waves. However, all those processes are only poorly represented in the tropopause region of climate models. This in turn leads to misrepresentation of the temporal and spatial variability of cirrus clouds in climate models and consequently to increased uncertainties in the radiative effect of the clouds.
In a recent study, the asymptotic analysis of the interactions between gravity waves and cirrus clouds revealed simplified equations for the description of ice formation and ice dynamics forced by gravity waves. Based on this study, here we present an approach for the self-consistent coupling of an existing transient GW parameterization to a cirrus parameterization. Idealized experiments of wave packet propagation within an ice-supersaturated region show close agreement in the cirrus evolution between the wave-resolving and wave-parameterized simulations. Implications of the above results for the cirrus parameterization in climate models will be discussed.
The well-posedness of the dynamic framework in earth-system models (ESMs for short) is a common issue in earth sciences and mathematics. In this presentation, the authors will introduce the research history and fundamental role of the well-posedness of the dynamic framework in the ESM, emphasizing the three core components of the ESM, i.e., the atmospheric general circulation model (AGCM for short), the land-surface model (LSM for short), and the oceanic general circulation model (OGCM for short), and their couplings. In fact, this system strictly obeys the conservation of energy and is used to make better climate predictions. Then, some research advances made by their own research group are outlined. Finally, future research prospects are discussed.
Gravity waves are an important component of atmospheric dynamics, causing the transport of momentum and energy to the stratosphere and mesosphere. To make their parameterizations in atmospheric models more accurate, we need to improve our understanding of gravity waves. We address this problem by studying data from a global ICON simulation with a horizontal resolution of approximately 2.5 km. The data are divided into triangular subdomains defined by a low-resolution ICON model grid with a horizontal resolution of approximately 160 km, and 3D spatiotemporal spectra are evaluated within these subdomains. Finally, the spectra are filtered using the linear theory of gravity waves, yielding the global distribution of gravity wave spectra. Thanks to the spatial dependence and high number of subdomains, the results can be used to link the spectrum to flow properties or gravity wave sources.
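The linear-theory filter mentioned above typically relies on the dispersion relation of non-hydrostatic inertia-gravity waves; one standard Boussinesq form, quoted here only for orientation, is

\[
\hat{\omega}^{2} = \frac{N^{2}\left(k^{2}+l^{2}\right) + f^{2} m^{2}}{k^{2}+l^{2}+m^{2}},
\]

with intrinsic frequency $\hat{\omega}$, horizontal wavenumbers $k$, $l$, vertical wavenumber $m$, buoyancy frequency $N$, and Coriolis parameter $f$.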
pyBELLA+ is an innovative atmospheric flow solver and data assimilation engine designed to play a unique role in the landscape of research numerical weather prediction (NWP) models. This project focuses on developing a compact, modular software package that adheres to modern scientific software development principles, enabling researchers to focus on addressing NWP modelling questions with analytical precision and dynamic consistency.
Recently, pyBELLA+ has been successfully applied to advance scientific understanding in several key areas.
Building on these achievements, pyBELLA+ offers potential for further exploration of the dynamical coupling of NWP model components, particularly in evaluating the effectiveness of emerging machine-learning emulators within a unified Python framework. Its ability to maintain near-machine-level accuracy in specific solution fields makes it ideal for characterising novel numerical methods. The solver's unique capability for seamless dynamics switching also presents opportunities to explore such applications to NWP.
In this presentation, we will also delve into specific applications and ongoing investigations that underscore the transformative potential of pyBELLA+ in advancing NWP research.
The representation of subgrid-scale orography is a challenge in the physical parameterisation of orographic gravity-wave sources in weather forecasting. A significant hurdle is encoding maximum physical information with a simple spectral representation on unstructured geodesic grids with non-quadrilateral cells, such as those used in the German Weather Service's Icosahedral Nonhydrostatic Model. Additionally, the orographic representation must adapt to the grid cell size (scale awareness). This work introduces a novel spectral analysis method to approximate a scale-aware spectrum of subgrid-scale orography on unstructured geodesic grids. Our method reduces the dimension of physical orographic data by over two orders of magnitude in its spectral representation while maintaining the power of the approximated spectrum close to the physical value. Based on well-known least-squares spectral analyses, our approach is robust in the choice of free parameters, generally eliminating the need for tuning. Numerical experiments with an idealised setup show that this novel spectral analysis significantly outperforms straightforward least-squares spectral analysis in representing the physical energy of a spectrum, given the compression of spectral data and the irregular grid. Real-world topographic data studies show competitive error scores within 10% relative to the maximum physical quantity of interest across different grid sizes and background wind speeds. The deterministic behaviour of the method is investigated along with its principal capabilities and potential biases, showing that error scores can be iteratively improved if an optimisation target is known. This robust, physically sound method has broader potential applications in generic spectral analyses.
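The core idea of a least-squares spectral representation can be sketched as follows (synthetic orography and a hypothetical truncation, not the operational implementation): a small set of 2D Fourier modes is fitted to scattered sub-grid orography heights within one grid cell, yielding a compact set of spectral coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)   # scattered points within the cell
h = 300 * np.sin(2 * np.pi * 3 * x) + 150 * np.cos(2 * np.pi * 5 * y)  # synthetic orography (m)

# Design matrix of low-order cosine/sine modes (hypothetical truncation K=4)
K = 4
cols = [np.ones_like(x)]
for kx in range(K + 1):
    for ky in range(K + 1):
        if kx == ky == 0:
            continue
        phase = 2 * np.pi * (kx * x + ky * y)
        cols += [np.cos(phase), np.sin(phase)]
A = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)           # least-squares spectral fit

print("explained variance:", 1 - np.var(h - A @ coeffs) / np.var(h))
```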
Recently, there has been a huge effort focused on developing highly efficient open-source libraries designed for Artificial Intelligence (AI) related computations on different computer architectures (for example, CPUs, GPUs and new AI processors). These advancements have not only made the algorithms based on these libraries highly efficient and portable between different architectures, but have also substantially lowered the entry barrier to developing methods using AI. Here, we present a novel methodology to leverage the power of both AI software and hardware in the field of numerical modelling by repurposing AI methods, such as Convolutional Neural Networks (CNNs), for the standard operations required in the numerical solution of Partial Differential Equations (PDEs). The CNNs are formed with the most popular AI library, PyTorch, in order to solve the incompressible flow equations on structured meshes through a finite element discretisation and a rapid multi-grid solution method. The proposed methodology is applied to develop a model of the airflow around many buildings and to demonstrate high-fidelity simulation of urban flows using multiple GPUs, relevant to air pollution and flood modelling. We also show how the Finite Element Method (FEM) can be modified for quadratic and higher-order elements, in which case we can simplify the implementation of the discrete FEM equations using CNN filters. The results are validated against previous studies and indicate that the methodology can solve such problems using AI libraries in an efficient way, presenting a new avenue to explore in the development of numerical methods for large-scale simulations.
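A minimal sketch of the underlying idea (a toy Poisson problem with a Jacobi iteration, not the authors' multigrid solver): a fixed conv2d kernel plays the role of the discrete stencil, so the whole iteration runs through standard AI primitives and therefore on any hardware PyTorch supports.

```python
import torch
import torch.nn.functional as F

n, h = 64, 1.0 / 64
nbr = torch.tensor([[[[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]]]])            # neighbour-sum stencil as a conv kernel

f = torch.ones(1, 1, n, n)                        # right-hand side of -Laplace(u) = f
u = torch.zeros(1, 1, n, n)                       # zero Dirichlet boundary via zero padding
for _ in range(2000):                             # Jacobi iterations
    u = (F.conv2d(F.pad(u, (1, 1, 1, 1)), nbr) + h * h * f) / 4.0

residual = (4 * u - F.conv2d(F.pad(u, (1, 1, 1, 1)), nbr)) / (h * h) - f
print("max residual:", residual.abs().max().item())
```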
Jexpresso is a new multi-physics general solver designed to solve, by numerical means, arbitrary systems of PDEs while making it easy for users to set up their own specific physical problem. While the version V2.0 presented in this poster relies on spectral elements and finite differences, the code is structured so that a user can add other grid-based numerical methods of choice without altering the structure of the existing code. Jexpresso is parallel and runs on both CPUs and GPUs. It is built to solve systems of PDEs provided by the user through the definition of the vector of unknowns, the fluxes, sources and, when needed, a constitutive law (e.g. an equation of state for ideal gases). If the equations are formulated in flux form, then nothing else is necessary from the user's perspective. Otherwise, if the equations are expressed in non-flux form (i.e. advective form), the user is also required to provide the matrix of non-constant coefficients. Independently of the equation set, initial and boundary conditions are also required. The user provides a simple boundary condition file that defines them as they are written on paper; nonetheless, the underlying code that handles them is hidden from the user.
Standard benchmarks and performance results are shown in the poster. The code is freely available on GitHub at https://github.com/smarras79/Jexpresso.
MU-MIP is an international project which seeks to characterise the systematic and random components of model error across many different atmospheric models. An initiative of the WCRP Working Group for Numerical Experimentation and the WWRP Predictability, Dynamics and Ensemble Forecasting Working Group, MU-MIP includes representatives of 12+ institutes spanning three continents.
MU-MIP is the first coordinated intercomparison of model error. This poster will introduce the initiative. We will describe the proposed protocol, highlight key questions, and outline progress to date.
More participants are welcome! Please see our website for details of how to get involved.
https://mumip.web.ox.ac.uk
A comprehensive investigation of the predictability properties in a three-level quasi-geostrophic atmospheric model with realistic mean state and variability is performed. The full spectrum of covariant Lyapunov vectors and associated finite-time Lyapunov exponents (FTLEs) is calculated. The statistical properties of the fluctuations of the FTLEs as well as the spatial localisation and entanglement properties of the covariant Lyapunov vectors are studied. We look at correlations between the FTLEs by means of a principal component analysis, identifying modes of collective excitation across the Lyapunov spectrum. We also investigate FTLEs conditional on underlying weather regimes. An advanced clustering algorithm is employed to decompose the state space into weather regimes associated with specific predictability properties as given by the FTLEs. Finally, the extreme value properties of the FTLEs are studied using generalised Pareto models for exceedances above a high and below a low threshold. Return levels as well as upper and lower bounds on the FTLEs are determined and extremely unstable or stable atmospheric states are identified.
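The peaks-over-threshold step described above can be illustrated with a short sketch (synthetic numbers, not the study's data): fit a generalised Pareto distribution to FTLE exceedances above a high threshold and estimate a return level with the standard POT formula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ftle = rng.gumbel(loc=0.2, scale=0.05, size=20000)   # placeholder FTLE sample

u = np.quantile(ftle, 0.98)                          # high threshold
exc = ftle[ftle > u] - u
shape, loc, scale = stats.genpareto.fit(exc, floc=0.0)

# m-observation return level (standard peaks-over-threshold formula)
m, zeta_u = 50000, np.mean(ftle > u)
return_level = u + scale / shape * ((m * zeta_u) ** shape - 1.0)
print("GPD shape:", shape, "return level:", return_level)
```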
Currently, a new dynamical core for the weather and climate forecast model ICON, based on the Discontinuous Galerkin (DG) method, is under development at the Deutscher Wetterdienst (DWD). The DG method combines conservation of the prognostic variables via the finite volume approach with higher-order accuracy via the finite element approach. Additionally, it allows the use of explicit time integration schemes and is applicable on massively parallel computers due to its very compact discretization stencils.
Some further steps in this development will be presented. Optimizations have been achieved in the horizontally explicit-vertically implicit (HEVI) treatment by use of the collocation method (DG-SEM), and implications of using HEVI together with the boundary conditions are discussed. Two different versions of the Euler equations with regard to the thermodynamic variable are compared and discussed. Diffusion can now be treated in the HEVI solver as well, and first results with a one-equation TKE turbulence model will be presented.
This presentation will cover some modern developments of discontinuous Galerkin (DG) methods, such as structure-preserving schemes equipped with properties like entropy stability, kinetic energy and pressure equilibrium preservation, and their impact on the robustness and efficiency of numerical simulations.
Fluidity-Atmos, a three-dimensional (3D) non-hydrostatic Galerkin compressible atmospheric model, has been developed to resolve large-scale and small-scale atmospheric phenomena simultaneously. This is facilitated by the use of non-hydrostatic equations and the adoption of a flexible 3D dynamically adaptive mesh, where the mesh is denser in areas with higher gradients of the solution variables and relatively sparser in the rest of the domain, while maintaining promising accuracy and reducing computational resource requirements. The dynamical core is formulated based on a linear finite-element method and anisotropic tetrahedral meshes in both the horizontal and vertical directions, and it incorporates a semi-implicit time-integration scheme. With the design of a virtual structured mesh index system, we achieve the coupling of the adaptive dynamic framework of Fluidity-Atmos with physical parameterisations and an urban-scale shadow computing module.
The performance of the adaptive mesh techniques in Fluidity-Atmos is evaluated by simulating (1) classic advection, (2) the formation and propagation of a non-hydrostatic mountain wave, (3) the formation and separation of a supercell system, and (4) a comparison of radiative fluxes at the top of the atmosphere and the Earth's surface, and of heating and cooling rates, between Fluidity-Atmos and high-accuracy reference results.
The results of the idealised test cases (advection and mountain wave) using the tetrahedral adaptive mesh are as accurate as those obtained with cut-cell / terrain-following fixed meshes and indicate a promising reduction of computing time. Preliminary physics forecast results indicate that the solver formulation is robust and that the shortwave radiative fluxes at the Earth's surface are more accurate than the reference results.
At the preceding MoW conference, an experiment reported in Mesinger and Veljovic (JMSJ 2020) was presented, showing an advantage of the Eta ensemble over its driver ECMWF members in placing the 250 hPa jet stream winds east of the Rockies. However, that Eta ensemble, when switched to use sigma, also achieved 250 hPa wind speed scores better than those of its driver members, although to a lesser extent. Thus, the Eta must include feature(s) additional to the eta coordinate responsible for this advantage.
An experiment we have done suggests that the van Leer type finite-volume vertical advection of the Eta may be a significant contributor. Having replaced a centered finite-difference Lorenz-Arakawa scheme, this finite-volume scheme visibly improved the simulation of a downslope windstorm in the lee of the Andes.
Another likely feature contributing to that advantage is the sophisticated representation of topography, designed to arrive at the most realistic grid-cell values with no smoothing (Mesinger and Veljovic, MAAP 2017).
While apparently a widespread opinion is that it is a disadvantage of terrain intersecting coordinates that “vertical resolution in the boundary layer becomes reduced at mountain tops as model grids are typically vertically stretched at higher altitudes (Thuburn, 10.1007/978-3-642-11640-7 2011),” a comprehensive 2006 NCEP parallel test gave the opposite result (Mesinger, BLM 2023).
Many thousands of the Eta forecasts demonstrate that the relaxation lateral boundary condition, almost universally used in regional climate models, is unnecessary. Similarly, so-called large scale or spectral nudging, also based on an ill-founded belief, should be detrimental if numerical issues of the limited area model used are addressed. Note that this is confirmed by the Eta vs ECMWF results referred to above.
Even so, to have the large scales of a nested model's ensemble members mostly more accurate than those of their driver members surely requires a lateral boundary condition scheme that does not induce major errors. The scheme of the Eta at the outflow points of the boundary prescribes one less condition than at the inflow points (e.g., Mesinger and Veljovic, MAAP 2013), and has for that reason been referred to by McDonald (MWR 2003) as one of the "fairly well-posed" schemes.
Some or all of these features might have made the Eta do so well in a recent performance comparison of REMO, WRF, RegCM4, RCA, and Eta over the topographically challenging Andes-western Amazon region (Gutierrez et al., JGR Atmos 2024). Among the various combinations with global driver models, three combinations using the Eta ranked best.
While with terrain-following coordinates the orography follows a coordinate line, with the cut-cell discretisation the points on the lower boundary are positioned in an irregular way. This means that the fields near such mountains are also expected to be irregular. Idealised test situations, however, use a smooth and often well-resolved boundary. When the mountain is well resolved we expect smooth meteorological solutions, and these are obtained when using terrain-following coordinates. When using the rather accurate cut-cell discretisation, however, rather noisy and inaccurate solutions are encountered even for such smooth mountains. This phenomenon is called “noise-generating smooth surfaces” with cut cells, and it means that the particular cut-cell scheme is less accurate than expected. Examples and test calculations are presented. The mathematical reasons for this phenomenon are analysed, the most important being that for the advection process it is inappropriate to pose boundary values for the fluxes at the mountain surface. Examples of a corrected boundary scheme with cut cells are presented. The example of a cloud transported with the cut-cell discretisation is shown to be noise-free. This means that the smooth-surface noise-generation problem has a solution within the high-order L-Galerkin formalism.
TIGAR is a general circulation model aimed at studying Rossby and gravity wave dynamics, based on the hydrostatic primitive equations. It is a spectral model that employs Hough harmonics, the eigensolutions of the linearized rotating shallow-water equations on the sphere, as the basis set for the horizontal representation of dynamical variables. This leads to a description of the dynamics in terms of physically identifiable structures naturally associated with Rossby and gravity waves, which are fully separated at the level of linearization. In the vertical, the model employs a spectral representation in terms of vertical structure functions (VSFs), which are eigensolutions of a Sturm-Liouville equation. The additional structure provided by the Hough framework, compared to traditional spectral models, can be leveraged on the analytical, modelling, and computational sides. For instance, TIGAR allows wave-wave interactions and energy fluxes to be studied directly in the model.
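As a schematic illustration only (not the TIGAR vertical discretisation), vertical structure functions can be pictured as eigenpairs of a simple Sturm-Liouville problem; the sketch below discretises -d²G/dz² = λG on a uniform grid with either Dirichlet or Neumann surface-type conditions and solves the resulting matrix eigenproblem. Constants, boundary treatment, and the omission of stratification are simplifying assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def vertical_modes(nz=200, H=1.0, bc="dirichlet"):
    """Schematic vertical-structure eigenproblem  -d2G/dz2 = lam * G  on (0, H).

    bc = "dirichlet": G = 0 at both boundaries (cf. the Dirichlet core)
    bc = "neumann"  : dG/dz = 0 at both boundaries (cf. the Neumann core)
    Returns eigenvalues lam (related to inverse equivalent depths) and modes G.
    """
    dz = H / (nz + 1)
    main = 2.0 * np.ones(nz)
    off = -1.0 * np.ones(nz - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dz**2
    if bc == "neumann":
        # one-sided modification so that dG/dz = 0 at the boundaries
        A[0, 0] = A[-1, -1] = 1.0 / dz**2
    lam, G = eigh(A)                 # ascending eigenvalues, columns are modes
    return lam, G

lam_d, G_d = vertical_modes(bc="dirichlet")
lam_n, G_n = vertical_modes(bc="neumann")
```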
Several dynamical cores differing in the vertical representation were developed for TIGAR. We present two of them:
1) VSFs with Neumann boundary conditions at the surface, represented in terms of Laguerre polynomials and coupled with a vertical advection scheme inspired by Simmons and Burridge (1981) (the Neumann hybrid core);
2) VSFs with homogeneous Dirichlet boundary conditions at the surface, represented in terms of Legendre polynomials and coupled with a fully spectral vertical scheme (the Dirichlet core). Dirichlet VSFs, which have not previously been used in atmospheric modelling and data analysis, improve the rate of convergence of spectrally expanded data.
We present TIGAR solutions of some classical tests for dynamical cores, including the baroclinic instability test, and compare them to solutions of a dynamical core based on spherical harmonics and finite differences in the vertical (PUMA). The TIGAR solutions represent the dynamics of a baroclinic life cycle well even at low vertical resolution. In comparison with PUMA, both dynamical cores exhibit a much stronger tendency towards shock formation, which needs to be balanced by a carefully chosen dissipation scheme. Dirichlet VSFs facilitate the use of vertical spectral truncation in TIGAR, thus paving the way for the development of high-resolution dynamical cores.
It has long been known that the excitation of fast motion in certain two-scale dynamical systems is linked to the singularity structure in complex time of the slow variables. We demonstrate, in the context of a fast harmonic oscillator forced by one component of the Lorenz 1963 model, that this principle can be used to construct time-discrete surrogate models by numerically extracting approximate locations and residues of complex poles via Adaptive Antoulas–Anderson (AAA) rational interpolation and feeding this information into the known "connection formula" to compute the resulting fast amplitude. Despite small but nonnegligible local errors, the surrogate model maintains excellent accuracy over very long times. In addition, we observe that the long-time behavior of the fast energy offers a continuous-time analog of Gottwald and Melbourne's 2004 "0-1 test for chaos": the asymptotic growth rate of the energy in the oscillator can discern whether or not the forcing function is chaotic.
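For context, a minimal sketch of the original discrete-time 0-1 test of Gottwald and Melbourne (2004) is given below; the contribution itself uses a continuous-time analogue based on the fast oscillator energy, not this exact algorithm, and the single-angle version shown here is a simplification of the usual median over many angles.

```python
import numpy as np

def zero_one_test(x, c=1.7, n_cut=None):
    """0-1 test for chaos (Gottwald & Melbourne, 2004), single angle c.

    Returns K close to 1 for diffusive/chaotic signals and close to 0 for regular ones.
    """
    N = len(x)
    n_cut = n_cut or N // 10
    j = np.arange(N)
    p = np.cumsum(x * np.cos(c * j))        # translation variables
    q = np.cumsum(x * np.sin(c * j))
    n = np.arange(1, n_cut + 1)
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2) for k in n])
    # K: correlation of the mean-square displacement M(n) with the lag n
    return np.corrcoef(n, M)[0, 1]

# usage: a regular signal versus an irregular one
t = np.arange(5000)
print(zero_one_test(np.sin(0.3 * t)))                                  # ~0 (regular)
print(zero_one_test(np.random.default_rng(0).standard_normal(5000)))   # ~1 (diffusive)
```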
Physical imbalances introduced by local sequential Bayesian data assimilation pose a significant challenge for numerical weather prediction. Fast-mode acoustic imbalances, for instance, can severely degrade solution quality. We present a novel dynamics-driven method to dynamically suppress these imbalances. Our approach employs a blended numerical model that seamlessly integrates compressible, soundproof, and hydrostatic dynamics. Through careful numerical and asymptotic analysis, we develop a one-step blending strategy that switches between model regimes during a simulation. Specifically, upon assimilation of data, the model configuration switches for one timestep to either the soundproof pseudo-incompressible or hydrostatic regime, then reverts to the compressible regime for the remainder of the assimilation window. This regime-switching is repeated for each subsequent assimilation window. Idealised experiments with travelling vortices, buoyancy-driven rising thermals, and internal gravity wave pulses demonstrate that our method effectively eliminates imbalances from data assimilation, achieving up to two orders of magnitude improvements in analysis fields. While our studies focused on eliminating acoustic and hydrostatic imbalances, the underlying principle of this dynamics-driven method can be applied to address other imbalances, with significant potential for real-world weather prediction applications.
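A schematic of the one-step blending strategy described above may help fix ideas; the function and regime names are illustrative placeholders, not the actual model interface.

```python
def assimilation_window(state, observations, n_steps, model_step, assimilate):
    """Schematic blended-model assimilation window.

    model_step(state, regime) advances one timestep under the given regime;
    assimilate(state, observations) returns the analysis state.
    """
    state = assimilate(state, observations)
    # one timestep in the soundproof (pseudo-incompressible) regime to damp
    # fast-mode imbalances introduced by the analysis increment
    state = model_step(state, regime="pseudo_incompressible")
    # remainder of the window in the fully compressible regime
    for _ in range(n_steps - 1):
        state = model_step(state, regime="compressible")
    return state
```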
We present a phase-averaging framework for the rotating shallow-water equations and a time-integration methodology for it. Phase averaging consists of averaging the nonlinearity over phase shifts in the exponential of the linear wave operator. Phase averaging aims to capture the slow dynamics in a solution that is smoother in time (in transformed variables), so that larger timesteps may be taken. In our numerical implementation, the averaging integral is replaced by a Riemann sum, where each term can be evaluated in parallel. This creates an opportunity for parallelism in the timestepping method.
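A minimal sketch of the averaged nonlinearity for du/dt = Lu + N(u), assuming a small matrix L and a window of length T discretised by M quadrature points (the Riemann sum mentioned above), is shown below; the toy operator and nonlinearity in the usage example are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import expm

def phase_averaged_nonlinearity(u, L, N, T, M=16):
    """Riemann-sum phase average of the nonlinearity N over the window [-T/2, T/2].

    Each term exp(-s L) N(exp(s L) u) is independent of the others, so the
    sum can in principle be evaluated in parallel over the M phase shifts.
    """
    shifts = np.linspace(-T / 2, T / 2, M)
    acc = np.zeros_like(u)
    for s in shifts:
        eL = expm(s * L)                       # phase shift forward
        acc += expm(-s * L) @ N(eL @ u)        # ... and shift back after the nonlinearity
    return acc / M

# usage with a toy fast rotation operator and quadratic nonlinearity
L = np.array([[0.0, -1.0], [1.0, 0.0]])
N = lambda v: np.array([v[0] * v[1], -v[0] ** 2])
u = np.array([1.0, 0.5])
print(phase_averaged_nonlinearity(u, L, N, T=2 * np.pi))
```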
In this talk, we will show proof-of-concept results and analyse their errors in order to examine the impact of the phase averaging on the rotating shallow-water solution. We will also examine how the averaging allows us to use larger timesteps and what the optimal averaging window is for a chosen timestep size.
In the atmosphere, fast oscillations such as gravity waves coexist with slow features such as geostrophic vortices. Numerical modelling of the fast and slow dynamics requires a small time step and long simulation time, which is computationally costly. Phase averaging filters out the fast oscillations whilst capturing their effect on the slow features, allowing for larger time steps.
We propose a modification to the phase averaging method called Phase Averaged Deferred Correction (PADC). PADC iteratively predicts and corrects an initial phase averaged solution. In contrast to classical deferred correction methods, we use a decreasing time averaging window to capture faster oscillations and increase solution accuracy. Furthermore, predictions and corrections are stacked and computed in parallel, reducing computational cost. We demonstrate the efficacy of PADC applied to a rotating shallow water system, comparing our results to those of phase averaging and direct simulation.
Blocking events are an important cause of extreme weather, especially long-lasting blocking events that trap weather systems in place. The duration of blocking events is, however, underestimated in climate models. Explainable Artificial Intelligence (XAI) methods are a class of data analysis tools that can help identify physical causes of prolonged blocking events and diagnose model deficiencies. We demonstrate this approach on an idealized quasigeostrophic model developed by Marshall and Molteni (1993). We train a convolutional neural network (CNN) and subsequently build a sparse predictive model for the persistence of Atlantic blocking, conditioned on an initial high-pressure anomaly. SHapley Additive exPlanations (SHAP) analysis reveals that high-pressure anomalies in the American Southeast and North Atlantic, separated by a trough over Atlantic Canada, contribute significantly to the prediction of sustained blocking events in the Atlantic region. This agrees with previous work that identified precursors in the same regions via wave-train analysis. When we apply the same CNN to blocking in the ERA5 atmospheric reanalysis, there are insufficient data to predict persistent blocks accurately. We partially overcome this limitation by pre-training the CNN on the plentiful data of the Marshall-Molteni model and then using transfer learning, achieving better predictions than direct training. SHAP analysis before and after transfer learning allows a comparison between the predictive features in the reanalysis and in the quasigeostrophic model, quantifying dynamical biases in the idealized model. This work demonstrates the potential of machine learning methods to extract meaningful precursors of extreme weather events and to achieve better prediction using limited observational data.
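The pre-train/fine-tune step can be sketched as follows; the CNN architecture, layer names, and training choices are hypothetical stand-ins, not the network used in the study.

```python
import torch
import torch.nn as nn

class BlockingCNN(nn.Module):
    """Hypothetical CNN predicting block persistence from an anomaly map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = BlockingCNN()
# 1) pre-train on plentiful quasigeostrophic-model data (training loop omitted) ...
# 2) transfer learning: freeze the convolutional features, re-fit the head on ERA5
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()    # persistent block vs. not
```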
Multistability is a frequent feature of the climate system and poses key challenges for our ability to predict how the system will respond to transient perturbations of the dynamics. As a particular example, the stably stratified atmospheric boundary layer is known to exhibit distinct flow regimes that are believed to be metastable. Numerical weather prediction and climate models encounter challenges in accurately representing these flow regimes and the transitions between them, leading to an inadequate depiction of regime occupation statistics. To improve theoretical understanding, stochastic conceptual models are used as a tool to systematically investigate which types of unsteady flow features may trigger abrupt transitions in the mean boundary layer state. The findings show that simulating intermittent turbulent mixing may be key in some cases, where transitions in the mean state follow from initial transient bursts of mixing.
Turbulent mixing is a parameterized process in atmospheric models, and the theory underpinning the parameterization schemes was developed for homogeneous and flat terrain under stationary conditions. The parameterized turbulent mixing therefore lacks key spatio-temporal variability that induces transient perturbations of the mean dynamics. This variability could be effectively included via stochastic parameterisation schemes, provided one knows how to define the strength or memory characteristics of the random perturbations. Towards that goal, we use a systematic data-driven approach to quantify the uncertainty of parameterisations and to inform how and when to incorporate uncertainty using stochastic models. To enable such an approach, methods from entropy-based learning and uncertainty quantification are combined in a model-based clustering framework, where the model is a stochastic differential equation with piecewise-constant parameters. As a result, a stochastic parameterisation can be learned from observations. The method is able to retrieve a hidden functional relationship between the parameters of the stochastic model and the resolved variables. A reduced model is obtained, in which the unresolved scales are expressed as stochastic differential equations whose parameters are continuous functions of the resolved variables. Using field measurements of turbulence, the stochastic modelling framework uncovers a stochastic parameterisation that represents unsteady mixing in difficult conditions. This methodology will be explored for the further derivation of stochastic parameterisations and should help quantify uncertainties in climate projections related to uncertainties in the dynamics of the unresolved scales.
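A generic sketch of such a reduced model is given below: an SDE for the unresolved variable whose drift and diffusion parameters are continuous functions of a resolved variable, integrated with Euler-Maruyama. The functional forms used here are placeholders, not the learned relationships.

```python
import numpy as np

def simulate_reduced_model(resolved, x0, dt, theta, sigma, rng=None):
    """Euler-Maruyama integration of  dx = -theta(u) * x dt + sigma(u) dW,
    where u is the resolved variable and x the unresolved (parameterised) one.

    theta, sigma : callables mapping the resolved variable to SDE parameters
    """
    rng = rng or np.random.default_rng()
    x = np.empty(len(resolved))
    x[0] = x0
    for n in range(len(resolved) - 1):
        u = resolved[n]
        dW = rng.normal(0.0, np.sqrt(dt))
        x[n + 1] = x[n] - theta(u) * x[n] * dt + sigma(u) * dW
    return x

# usage with placeholder dependencies on a resolved stability-like parameter
u_series = np.linspace(0.0, 1.0, 1000)
x = simulate_reduced_model(u_series, x0=0.0, dt=0.01,
                           theta=lambda u: 1.0 + 5.0 * u,
                           sigma=lambda u: 0.5 * (1.0 - 0.5 * u))
```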
We present a spatial Bayesian hierarchical model for postprocessing surface maximum wind gusts in COSMO-REA6. Our approach uses a non-stationary generalized extreme value (GEV) distribution at the top level, with parameters that vary according to linear regressions on predictor variables from the COSMO-REA6 reanalysis. To capture spatial patterns in extreme surface wind gust behavior, the regression coefficients are modeled as 2D Gaussian random fields with a constant mean and an isotropic covariance function that depends only on the distance between locations. Additionally, we incorporate an altitude factor into the distance calculation, allowing us to include data from mountain-top stations in the training process and utilize all available information. We evaluate the predictive performance using the Brier score and the quantile score, comparing our model against climatological forecasts and a non-hierarchical, spatially constant baseline model. Our spatial model demonstrates up to 5% more skill in predicting quantile levels and shows high skill for more extreme wind gusts compared to the baseline model. Furthermore, the model improves the prediction of threshold levels at about 60-80% of the 109 locations investigated, depending on the threshold level. While a spatially constant approach already provides high skill, our model further enhances forecasts and improves spatial consistency. Additionally, by using Gaussian random fields, our model can easily be interpolated to unobserved locations, accounting for local characteristics.
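As a simplified illustration of the top level of such a hierarchy (ignoring the spatial Gaussian-random-field layer), the GEV parameters can be written as linear regressions on a reanalysis predictor; the negative log-likelihood below could be handed to an optimiser or embedded in a Bayesian sampler. The predictor, link functions, and parameterisation are assumptions for illustration.

```python
import numpy as np
from scipy.stats import genextreme

def neg_log_likelihood(beta, gusts, predictors):
    """Non-stationary GEV: location and scale are linear in the predictor.

    beta       : [b_mu0, b_mu1, b_sig0, b_sig1, shape]
    gusts      : observed maximum wind gusts
    predictors : one predictor value per observation (e.g. a reanalysis gust proxy)
    """
    mu = beta[0] + beta[1] * predictors
    sigma = np.exp(beta[2] + beta[3] * predictors)   # log link keeps the scale positive
    shape = beta[4]
    # note: scipy's genextreme uses c = -shape relative to the usual GEV convention
    return -np.sum(genextreme.logpdf(gusts, c=-shape, loc=mu, scale=sigma))
```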
We introduce the numerical techniques and physics implementation used to develop a global large-eddy simulation model based on the Nonhydrostatic Icosahedral Atmospheric Model (NICAM). NICAM has been used for global kilometer-scale simulations since the first global 3.5-km mesh aqua-planet simulation by Tomita et al. (2005). Miyamoto et al. (2013) conducted a global 870-m mesh simulation using the K computer in Japan. Using the supercomputer Fugaku, we conducted a global 220-m mesh simulation. A model with this horizontal mesh scale is in the range of large-eddy simulation (LES) for specific types of flow fields, including deep convection (Bryan et al. 2003), although a higher resolution of O(10 m) is ideal for more general LES purposes.
To enable NICAM to function as a global large-eddy simulation model, we introduced a Smagorinsky-type LES turbulence scheme. We also applied a smoother grid-modification method based on a transfer function following Iga and Tomita (2014), instead of the spring dynamics method, and introduced several numerical stabilizations and less computationally demanding methods, such as single-precision arithmetic. We conducted NICAM simulations with horizontal meshes ranging from 220 m to 3.5 km to analyze the dependency on resolution and on the turbulence scheme, comparing the eddy-diffusivity (Mellor-Yamada) type scheme with the Smagorinsky scheme. Additionally, we applied a satellite simulator, the Joint Simulator for Satellite Sensors, to the simulation results for comparison with the recently launched EarthCARE satellite (Roh et al. 2023).
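The core of a Smagorinsky-type closure can be summarised as nu_t = (C_s * Delta)^2 * |S|, with |S| the resolved strain-rate magnitude. A minimal sketch on a uniform Cartesian grid is given below; the constant, grid layout, and array shapes are illustrative and unrelated to the NICAM icosahedral implementation.

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.2):
    """Smagorinsky eddy viscosity nu_t = (cs * dx)**2 * |S| on a uniform grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor.
    """
    du = np.gradient(u, dx)                    # [du/dx, du/dy, du/dz]
    dv = np.gradient(v, dx)
    dw = np.gradient(w, dx)
    grad = np.array([du, dv, dw])              # grad[i][j] = d(vel_i)/d(x_j)
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grad[i][j] + grad[j][i])
            S2 += 2.0 * s_ij ** 2
    return (cs * dx) ** 2 * np.sqrt(S2)

# usage on a toy velocity field with a 220-m grid spacing
u = v = w = np.random.default_rng(0).standard_normal((16, 16, 16))
nu_t = smagorinsky_viscosity(u, v, w, dx=220.0)
```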
References:
Bryan, G. H., J. C. Wyngaard, and J. M. Fritsch, 2003: Resolution Requirements for the Simulation of Deep Moist Convection. Mon. Wea. Rev., 131, 2394–2416.
Iga, S., and H. Tomita, 2014: Improved smoothness and homogeneity of icosahedral grids using the spring dynamics method. J. Comp. Phys., 258, 208-226.
Miyamoto, Y., Y. Kajikawa, R. Yoshida, T. Yamaura, H. Yashiro, and H. Tomita, 2013: Deep moist atmospheric convection in a subkilometer global simulation, Geophys. Res. Lett., 40, 4922-4926.
Roh, W., Satoh, M., Hashino, T., Matsugishi, S., Nasuno, T., and Kubota, T., 2023: Introduction to EarthCARE synthetic data using a global storm-resolving simulation. Atmos. Meas. Tech., 16, 3331–3344.
Tomita, H., Miura, H., Iga, S., Nasuno, T., and Satoh, M., 2005: A global cloud-resolving simulation: preliminary results from an aqua planet experiment. Geophys. Res. Lett., 32, L08805.
Despite the steady progress in the resolution and skill of weather and Earth-system models over the last decades, their physical fidelity and computational efficiency need to be, and can be, significantly improved. Existing model infrastructures and software appear suboptimal for taking advantage of advances in computing technology and of the potential of machine-learning emulation alongside numerical techniques. In this presentation, we will give an overview of our activities on the development of the Portable Model for Multi-Scale Atmospheric Prediction (PMAP), which builds on methods of the IFS and FVM models at ECMWF. PMAP is an end-to-end Python implementation built on the high-performance domain-specific framework GT4Py, and it will be equipped with efficient semi-implicit finite-volume integration schemes that are highly effective for weather prediction beyond kilometer resolution.
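As a generic illustration of a semi-implicit step (not the PMAP or GT4Py implementation), stiff linear terms L are treated implicitly while the nonlinear remainder N is explicit, (I - dt*theta*L) u^{n+1} = u^n + dt*N(u^n) + dt*(1-theta)*L u^n; the sketch below assumes a small dense matrix L purely for demonstration.

```python
import numpy as np

def semi_implicit_step(u, L, N, dt, theta=0.5):
    """Generic semi-implicit (theta-method) step for du/dt = L u + N(u).

    The stiff linear operator L (e.g. fast acoustic/gravity-wave terms) is
    treated implicitly and the nonlinear terms explicitly, allowing timesteps
    much larger than the fastest linear wave would otherwise permit.
    """
    I = np.eye(len(u))
    rhs = u + dt * N(u) + dt * (1.0 - theta) * (L @ u)
    return np.linalg.solve(I - dt * theta * L, rhs)
```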
Final discussion of results and future work