name: layout-general
layout: true
class: left, top

<style>
.remark-slide-number {
  position: inherit;
}
.remark-slide-number .progress-bar-container {
  position: absolute;
  bottom: 0;
  height: 4px;
  display: block;
  left: 0;
  right: 0;
}
.remark-slide-number .progress-bar {
  height: 100%;
  background-color: rgba(2, 70, 79, 0.874);
}
.remark-slide-number .numeric {
  position: absolute;
  bottom: 4%;
  height: 4px;
  display: block;
  right: 2.5%;
  font-weight: bold;
}
.remark-slide-number .numeric-out {
  color: rgba(2, 70, 79, 0.874);
}
</style>

---
count: false
class: top, center, title-slide

<img src="img/mainz/X.png" align="right" height="110" width="100">
<img src="img/mainz/cnrs.png" align="middle" height="95">
<img src="img/mainz/UniversiteParisCite_logo.jpg" align="left" height="90">

<hr class="L1">

## Estimating Balloon-observed Gravity Wave Momentum Fluxes<br> using Machine Learning and Input from Reanalysis

<hr class="L2">
<h0br>

<!-- #### SCALES conference -->
<!-- <img src="img/RUPP_logo.png" align="middle" height="100"> -->

.pull-left-60[
<!-- ### SCALES conference, June 26-30<h0br> -->
.stress[Sothea Has<sup>1</sup>, Riwal Plougonven<sup>2</sup>, Aurélie Fischer<sup>1</sup>, ...]<hbr>
.center[
.ref[<sup>1</sup> CNRS, Laboratoire de Probabilités, Statistique et Modélisation]<h0br>
.ref[<sup>2</sup> Laboratoire de Météorologie Dynamique, École Polytechnique]
]
]
.pull-right-40[
<img src="img/mainz/scale.png" align="left" width="350" height="75">
]

---
count: false
### What are gravity wave momentum fluxes (GWMFs)?

.center[
<img src="./img/mainz/gravitational_wave.png" align="center" width="750px" height="400px"/>
]

---
count: false
### What are gravity wave momentum fluxes (GWMFs)?

.center[
<img src="./img/mainz/gravitational_wave2.png" align="center" width="750px" height="400px"/>
]

---
count: false
### What are gravity wave momentum fluxes (GWMFs)?
.center[
<img src="./img/mainz/gravitational_wave3.png" align="center" width="750px" height="400px"/>
]

---
count: false
### What are gravity wave momentum fluxes (GWMFs)?<hbr>

- .stress[Gravity waves]: oscillations in the atmosphere due to the restoring force of gravity.
- .stress[Momentum flux]: momentum exchange between waves `\(\leftrightarrow\)` mean flow.
- Sources: topographic features, convective processes, flow over mountains, ...

<br>
.center[
<img src="./img/mainz/gw.jpg" align="center" height="300px"/>
]

---
count: false
### What are gravity wave momentum fluxes (GWMFs)?<hbr>

- .stress[Gravity waves]: oscillations in the atmosphere due to the restoring force of gravity.
- .stress[Momentum flux]: momentum exchange between waves `\(\leftrightarrow\)` mean flow.
- Sources: topographic features, convective processes, flow over mountains, ...

### Why does it matter?<hbr>

.pull-left[
- It influences the atmospheric circulation.
- Knowledge of its impact is crucial for:
  - weather forecasting
  - climate modeling
  - studies of atmospheric dynamics ...
]
.pull-right[
<img src="./img/mainz/gw.jpg" align="center" height="250px" width="340px"/>
]

.stress[Fritts and Alexander (2003)] and .stress[Alexander et al. (2010)].

---
template: inter-slide
class: left, middle
## .bold-blue[Outline]

.hhead[I. Strateole 2 campaign]
<br>
.hhead[II. Reanalysis (ERA5) and Machine Learning methods (ML)]
<br>
.hhead[III. Numerical results]
<br>
.hhead[IV. Conclusion and perspectives]

---
template: inter-slide
class: left, middle
count: false
## .bold-blue[Outline]

.section[I. Strateole 2 campaign]
<br>
.hhead[II. Reanalysis (ERA5) and Machine Learning methods (ML)]
<br>
.hhead[III. Numerical results]
<br>
.hhead[IV. Conclusion and perspectives]

---
### I. Strateole 2 campaign <img src="./img/mainz/logo_strateole2_4x4cm_300dpi.png" align="right" width="100px"/><hbr>
.subtitle[Haase et al. (2018)] <hbr>

- First campaign (Nov 2019 to Feb 2020):
  - 8 superpressure balloons drifting along the tropics
  - altitude `\(\sim 18\)`km - `\(20\)`km.
- Data collected: 30s observations of in-situ **wind, pressure** and **temperature**.
- GWMFs can be derived using wavelet analysis [.stress[Hertzog et al. (2012)]].

.center[
<img src="./img/mainz/vd_2020_01_1_25_bal1.gif" align="center" width="370px"/>
<img src="./img/mainz/vd_2020_01_1_25_bal3.gif" align="center" width="370px"/>
]

<!-- - Objective: reconstruct this GWMF using machine learning. -->

---
count: false
### I. Strateole 2 campaign <img src="./img/mainz/logo_strateole2_4x4cm_300dpi.png" align="right" width="100px"/><hbr>
.subtitle[Haase et al. (2018)] <hbr>

- First campaign (Nov 2019 to Feb 2020):
  - 8 superpressure balloons drifting along the tropics
  - altitude `\(\sim 18\)`km - `\(20\)`km.
- Data collected: 30s observations of in-situ **wind, pressure** and **temperature**.
- GWMFs can be derived using wavelet analysis [.stress[Hertzog et al. (2012)]].

.center[
<img src="./img/mainz/balloon_trajectory.png" align="center" height="220px" width="770px"/>
.caption[The balloons' trajectories during the first campaign.]
]

---
### I. Strateole 2 campaign <img src="./img/mainz/logo_strateole2_4x4cm_300dpi.png" align="right" width="100px"/><hbr>
.subtitle[Related works]<hbr>

.right[.stress[Corcos et al. (2021)] 👉 <img src="./img/mainz/gwmf_corcos.png" align="center" width="450px"/>]
.left[<img src="./img/mainz/cloud.png" align="center" width="400px"/> 👈 .stress[Bramberger et al. (2022)]]
.right[.stress[Wilson et al. (2023)] 👉 <img src="./img/mainz/turbulence.png" align="center" width="400px"/>]
.left[<img src="./img/mainz/parametrization.png" align="center" width="420px"/> 👈 .stress[Lott et al. (2023)]]

---
count: false
### I. Strateole 2 campaign <img src="./img/mainz/logo_strateole2_4x4cm_300dpi.png" align="right" width="100px"/><hbr>
.subtitle[Related works]<hbr>

.right[.stress[Corcos et al. (2021)] 👉 <img src="./img/mainz/gwmf_corcos.png" align="center" width="450px"/>]
.left[<img src="./img/mainz/cloud.png" align="center" width="400px"/> 👈 .stress[Bramberger et al. (2022)]]
.right[.stress[Wilson et al. (2023)] 👉 <img src="./img/mainz/turbulence.png" align="center" width="400px"/>]
.left[<img src="./img/mainz/parametrization1.png" align="center" width="420px"/> 👈 .stress[Lott et al. (2023)]]

---
template: inter-slide
class: left, middle
count: false
## .bold-blue[Outline]

.hhead[I. Strateole 2 campaign]
<br>
.section[II. Reanalysis (ERA5) and Machine Learning methods (ML)]
<br>
.hhead[III. Numerical results]
<br>
.hhead[IV. Conclusion and perspectives]

---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[5th Generation of the European Reanalysis (ERA5)]<hbr>

- Reanalysis: historical observations `\(+\)` numerical models `\(\rightarrow\)` datasets.
<!-- of the Earth's weather and climate variables over a specified period. -->
- ERA5: state-of-the-art global atmospheric dataset ( model levels 1 to 137 ).
- Extracted variables: .stress[precipitation], .stress[pressure], .stress[wind] and .stress[temperature] profiles ( `\(67\)` vertical levels ) at `\(5\times 5\)` horizontal grid points of `\(1^\circ\times1^\circ\)` ( `\(\sim 100\)`km ) resolution.

.center[
<img src="./img/mainz/bal_grid.png" align="center" width="800px"/><br>
.caption[Including ERA5 grid points.]
]

---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine Learning Methods (ML)]<hbr>

- ML methods: tree-based ensemble machine learning, <h0br>
  - random forest (RF)
  - extremely randomized trees (Extra-trees)
  - adaptive boosting (Adaboost).
- <del>.stress[DNN], .stress[CNN], .stress[LSTM], ...</del> `\(\leftarrow\)` lack of observations (16k)!

--

.pull-left-60[
- Inputs:
  - Temperature: **temp** (*K*)
  - Zonal and meridional wind: **u** and **v** (*m/s*)
  - Log surface pressure: **lnsp**
  - Solar zenith angle: **sza** (*°*)
  - Precipitation: **tp**, **tp_mean** & **tp_sd** (*m*).
<br><br>
]
.pull-right-40[
<img src="./img/mainz/ERA52.png" align="center" width="220px"/>
]

<!-- where `\(t,u\)` and `\(v\)` are the grid centers at 4 levels `\(\sim 19, 9, 2\)`km and at the surface ( `\(0\)`km ). -->

--

<br>
- Targets: .stress[absolute], .stress[eastward] & .stress[westward] GWMFs:<h0br>
  - High-frequency waves (**HF**): period 15 min `\(\rightarrow\)` 1 h.
  - Wide-frequency waves (**WF**): period 15 min `\(\rightarrow\)` 1 day.
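---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine Learning Methods (ML): a code sketch]<hbr>

A minimal sketch of fitting the three tree ensembles with scikit-learn. This is not the study's actual pipeline: `X` and `y` here are synthetic stand-ins for the ERA5 inputs and GWMF targets, and the 800/200 split is illustrative only.

```python
# Sketch only: synthetic stand-ins for the ERA5 inputs and GWMF targets.
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, ExtraTreesRegressor,
                              RandomForestRegressor)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # e.g. wind, temperature, precipitation
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "Extra-trees": ExtraTreesRegressor(n_estimators=100, random_state=0),
    "Adaboost": AdaBoostRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X[:800], y[:800])         # train on one part of the data ...
    corr = np.corrcoef(y[800:], model.predict(X[800:]))[0, 1]
    print(f"{name}: corr = {corr:.2f}")  # ... evaluate on the held-out part
```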
---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

.center[
<img src="./img/mainz/tr0.png" align="center" width="450px"/><br>
.caption[Input data with two variables:] `\(X_1\)` .caption[and] `\(X_2\)`.
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.
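As a concrete illustration of the split criterion, a toy search for the best threshold of a single feature by TWSS (illustrative only; `best_split` is a hypothetical helper, not the library implementation):

```python
# Toy sketch: exhaustive search of one feature's best split by TWSS.
import numpy as np

def best_split(x, y):
    """Return the threshold minimising the total within sum of squares."""
    best_t, best_twss = None, np.inf
    for t in np.unique(x)[:-1]:                 # candidate thresholds
        left, right = y[x <= t], y[x > t]
        twss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if twss < best_twss:
            best_t, best_twss = t, twss
    return best_t, best_twss

rng = np.random.default_rng(1)
x = rng.uniform(size=200)
y = np.where(x < 0.4, 1.0, 3.0) + rng.normal(scale=0.1, size=200)
t, _ = best_split(x, y)                         # recovers the step near 0.4
```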
.center[
<img src="./img/mainz/tr1.png" align="center" width="450px"/><br>
.caption[First split is performed at] `\(X_1=t_1\)`.
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

.center[
<img src="./img/mainz/tr2.png" align="center" width="450px"/><br>
.caption[Split the left child at] `\(X_2=t_2\)`.
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

.center[
<img src="./img/mainz/tr3.png" align="center" width="450px"/><br>
.caption[Split the right child at] `\(X_1=t_3\)`.
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

.center[
<img src="./img/mainz/tr4.png" align="center" width="450px"/><br>
.caption[Final split at] `\(X_2=t_4\)`.
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Decision trees]<hbr>

- .stress[Decision trees]: the building block of the ML methods above. <h0br>
  - *Recursively partitioning* the input space at some *values* of *input features*.
  - Each split is performed so that each cell is as *pure* as possible.
  - *Impurity*: total within sum of squares (TWSS):
`$$\sum_{\text{input}_i\in R_\ell}(\text{target}_i-\mu_\ell)^2+\sum_{\text{input}_i\in R_r}(\text{target}_i-\mu_r)^2.$$`
  - Prediction `\(=\)` averaging the targets in the same region.

.center[
<img src="./img/mainz/tr5.png" align="center" width="450px"/><br>
.caption[Prediction by the tree.]
]

---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Random forest]<hbr>

- .stress[Random forest]: an average of decorrelated decision trees.<h0br>
  - Several **bootstrap samples** `\(\rightarrow\)` several trees.
  - Each tree uses only a **small random subset** of input features.
  - Prediction `\(=\)` averaging trees' predictions.

<br>
.center[
<img src="./img/mainz/random-forest-diagram0.png" align="center" width="480px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Random forest]<hbr>

- .stress[Random forest]: an average of decorrelated decision trees.<h0br>
  - Several **bootstrap samples** `\(\rightarrow\)` several trees.
  - Each tree uses only a **small random subset** of input features.
  - Prediction `\(=\)` averaging trees' predictions.
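The three ingredients above can be sketched by hand (a toy version for illustration only; names like `fit_forest` are hypothetical, and the subset sizes are arbitrary choices, not the study's configuration):

```python
# Toy random forest: bootstrap rows + random feature subset per tree, then average.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_forest(X, y, n_trees=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, n, size=n)                        # bootstrap sample
        cols = rng.choice(p, size=max(1, p // 3), replace=False)  # feature subset
        tree = DecisionTreeRegressor(random_state=0).fit(X[rows][:, cols], y[rows])
        forest.append((cols, tree))
    return forest

def predict_forest(forest, X):
    return np.mean([tree.predict(X[:, cols]) for cols, tree in forest], axis=0)

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 6))
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=600)
forest = fit_forest(X[:500], y[:500])
corr = np.corrcoef(y[500:], predict_forest(forest, X[500:]))[0, 1]
```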
<br>
.center[
<img src="./img/mainz/random-forest-diagram1.png" align="center" width="480px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Random forest]<hbr>

- .stress[Random forest]: an average of decorrelated decision trees.<h0br>
  - Several **bootstrap samples** `\(\rightarrow\)` several trees.
  - Each tree uses only a **small random subset** of input features.
  - Prediction `\(=\)` averaging trees' predictions.

<br>
.center[
<img src="./img/mainz/random-forest-diagram2.png" align="center" width="480px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Random forest]<hbr>

- .stress[Random forest]: an average of decorrelated decision trees.<h0br>
  - Several **bootstrap samples** `\(\rightarrow\)` several trees.
  - Each tree uses only a **small random subset** of input features.
  - Prediction `\(=\)` averaging trees' predictions.

<br>
.center[
<img src="./img/mainz/random-forest-diagram3.png" align="center" width="480px"/><br>
.caption[A random forest with 3 decision trees.]
]

---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Extremely randomized trees (Extra-trees)]<hbr>

- .stress[Extra-trees] are similar to random forests except for a few points.<h0br>
  - **All** trees are built on the *full training data*.
  - Each split is performed at **random values** of **random features**.
  - Prediction `\(=\)` averaging the trees' predictions.

<br>
.center[
<img src="./img/mainz/extra-tree-diagram.png" align="center" width="480px"/><br>
.caption[An extra-trees ensemble with 3 decision trees.]
]

---
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Adaptive boosting (Adaboost)]<hbr>

- .stress[Adaboost] combines weak learners to build a strong one.<h0br>
  - Iteratively aggregates weak learners (trees with a few splits, called *stumps*).
  - Initialization: **same** sample weights.
  - At each iteration:
    - a stump is built on the **weighted** data points
    - compute residuals `\(\rightarrow\)` **update sample weights**: mispredicted points `\(\leftrightarrow\)` large weights
    - compute a **weight** for the stump, then add it to the sequence
  - Prediction `\(=\)` the weighted sum of all the stumps' predictions.

.center[
<img src="./img/mainz/boosting0.png" align="center" width="350px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Adaptive boosting (Adaboost)]<hbr>

- .stress[Adaboost] combines weak learners to build a strong one.<h0br>
  - Iteratively aggregates weak learners (trees with a few splits, called *stumps*).
  - Initialization: **same** sample weights.
  - At each iteration:
    - a stump is built on the **weighted** data points
    - compute residuals `\(\rightarrow\)` **update sample weights**: mispredicted points `\(\leftrightarrow\)` large weights
    - compute a **weight** for the stump, then add it to the sequence
  - Prediction `\(=\)` the weighted sum of all the stumps' predictions.

.center[
<img src="./img/mainz/boosting1.png" align="center" width="350px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Adaptive boosting (Adaboost)]<hbr>

- .stress[Adaboost] combines weak learners to build a strong one.<h0br>
  - Iteratively aggregates weak learners (trees with a few splits, called *stumps*).
  - Initialization: **same** sample weights.
  - At each iteration:
    - a stump is built on the **weighted** data points
    - compute residuals `\(\rightarrow\)` **update sample weights**: mispredicted points `\(\leftrightarrow\)` large weights
    - compute a **weight** for the stump, then add it to the sequence
  - Prediction `\(=\)` the weighted sum of all the stumps' predictions.

.center[
<img src="./img/mainz/boosting2.png" align="center" width="350px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Adaptive boosting (Adaboost)]<hbr>

- .stress[Adaboost] combines weak learners to build a strong one.<h0br>
  - Iteratively aggregates weak learners (trees with a few splits, called *stumps*).
  - Initialization: **same** sample weights.
  - At each iteration:
    - a stump is built on the **weighted** data points
    - compute residuals `\(\rightarrow\)` **update sample weights**: mispredicted points `\(\leftrightarrow\)` large weights
    - compute a **weight** for the stump, then add it to the sequence
  - Prediction `\(=\)` the weighted sum of all the stumps' predictions.

.center[
<img src="./img/mainz/boosting3.png" align="center" width="350px"/><br>
.caption[]
]

---
count: false
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Adaptive boosting (Adaboost)]<hbr>

- .stress[Adaboost] combines weak learners to build a strong one.<h0br>
  - Iteratively aggregates weak learners (trees with a few splits, called *stumps*).
  - Initialization: **same** sample weights.
  - At each iteration:
    - a stump is built on the **weighted** data points
    - compute residuals `\(\rightarrow\)` **update sample weights**: mispredicted points `\(\leftrightarrow\)` large weights
    - compute a **weight** for the stump, then add it to the sequence
  - Prediction `\(=\)` the weighted sum of all the stumps' predictions.

.center[
<img src="./img/mainz/boosting4.png" align="center" width="350px"/><br>
.caption[An Adaboost ensemble with 2 stumps.]
]

---
exclude: true
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: Gini importance (Gini)]<hbr>

- Trees can record the reduction of impurity at each split for all features.
- Importance of a feature `\(=\)` its total reduction / overall reduction.
- .stress[Gini importance] of an ensemble method `\(=\)` (weighted) average of the trees' *importances*.
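In scikit-learn this quantity is exposed as `feature_importances_`; a small sketch on synthetic data (illustrative only):

```python
# Sketch: Gini (impurity-based) importances from a fitted ensemble.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)   # only feature 0 matters

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
imp = rf.feature_importances_   # impurity reductions, normalised to sum to 1
```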
.center[
<img src="./img/mainz/feature_importance.png" align="center" width="600px"/><br>
.caption[Gini importance of ensemble learning methods.]
]

---
exclude: true
### II. Reanalysis (ERA5) & Machine Learning (ML)<hbr>
.subtitle[Machine learning method: validation]<hbr>

- Performance measurement:<h0br>
  - Pearson's correlation coefficient:
`$$\text{cor}(y_{\text{true}},y_{\text{pred}})=\frac{\sum_{i=1}^n(y_{\text{true}_i}-\overline{y}_{\text{true}})(y_{\text{pred}_i}-\overline{y}_{\text{pred}})}{\sqrt{\sum_{i=1}^n(y_{\text{true}_i}-\overline{y}_{\text{true}})^2 \sum_{i=1}^n(y_{\text{pred}_i}-\overline{y}_{\text{pred}})^2}}$$`
  - Relative Mean Absolute Error (RMAE):
`$$\text{RMAE}(y_{\text{true}},y_{\text{pred}})=\frac{\frac{1}{n}\sum_{i=1}^n|y_{\text{true}_i}-y_{\text{pred}_i}|}{\overline{y}_{\text{true}}}$$`
- Data splitting: each model is trained on `\(7\)` balloons and tested on the remaining one.<h0br>

.center[
<img src="./img/mainz/kfolds.png" align="center" width="500px"/><br>
.caption[Train-test data splitting.]
]

- Hyperparameters are tuned using `\(K\)`-fold cross-validation.

---
template: inter-slide
class: left, middle
count: false
## .bold-blue[Outline]

.hhead[I. Strateole 2 campaign]
<br>
.hhead[II. Reanalysis (ERA5) and Machine Learning methods (ML)]
<br>
.section[III. Numerical results]
<br>
.hhead[IV. Conclusion and perspectives]

---
### III. Numerical results <hbr>
.subtitle[On absolute & westward GWMF of balloon 2 (HF)] <h0br>

.center[
<img src="./img/mainz/fig_u1h_bal2_3h_24h_abs_1.png" align="center" width="750px"/><br>
<img src="./img/mainz/fig_u1h_bal2_3h_24h_west.png" align="center" width="790px"/><br>
.caption[Predicted and actual HF absolute (top) and westward (bottom) GWMF of balloon 2 at 24h resolution.]
]

---
exclude: true
### III. Numerical results <hbr>
.subtitle[On absolute GWMF of balloon 7] <h0br>

.center[
<img src="./img/mainz/fig_u1h_bal7_3h_24h_abs_1.png" align="center" width="790px"/><br>
<img src="./img/mainz/fig_u1d_bal7_3h_24h_abs_1.png" align="center" width="790px"/><br>
.caption[Predicted and actual absolute GWMF of HF (top) and WF (bottom) waves at 24h resolution.]
]

---
exclude: true
### III. Numerical results <hbr>
.subtitle[On eastward GWMF of balloon 8] <h0br>

.center[
<img src="./img/mainz/fig_u1h_bal8_3h_24h_east_1.png" align="center" width="790px"/><br>
<img src="./img/mainz/fig_u1d_bal8_3h_24h_east_1.png" align="center" width="790px"/><br>
.caption[Predicted and actual eastward GWMF of HF (top) and WF (bottom) waves at 24h resolution.]
]

---
### III. Numerical results <hbr>
.subtitle[Correlations over 50 independent runs of HF waves] <h0br>

.center[
<img src="./img/mainz/fig_corr_all_u1h_3h_24h.PNG" align="center" width="600px"/><br>
.caption[Boxplots of correlations.]
]

???
<!--.pull-right-30[
- The differences in variation among ML methods are not substantial on the same balloon.
- Balloons 2, 6 and 8 are well predicted by the models (corr `\(> 0.7\)`).
- Westward cases are more challenging than the other two cases.
] -->

---
exclude: true
### III. Numerical results <hbr>
.subtitle[Correlations over 50 independent runs of WF waves] <h0br>

.center[
<img src="./img/mainz/fig_corr_all_u1d_3h_24h.PNG" align="center" width="600px"/><br>
.caption[Boxplots over 50 independent runs of the three ML models on all balloons.]
]

.pull-right-30[
- Balloons 2 and 8 are still well predicted by the models (corr `\(> 0.7\)`), but balloon 6 is not.
- Performance on WF is often lower than on HF, since WF includes both HF waves and longer-period waves.
]

---
exclude: true
### III. Numerical results <hbr>
.subtitle[Gini importance: High frequency range (HF) with period 15 min - 1 h] <h0br>

.pull-left-70[
<img src="./img/mainz/fig_feature_importance_all_u1h_3h_24h.PNG" align="center" width="600px"/><br>
.caption[Gini importance of ML methods (by column) on all targets (by row) for HF.]
]
.pull-right-30[
- In general, precipitation and zonal wind are the most important features.
- Wind at balloon level `\(u19\)` stood out in the eastward case for all models.
- Surface wind is also very informative in many cases.
]

---
### III. Numerical results <hbr>
.subtitle[Gini importance: Wide frequency range (WF) with period 15 min - 1 day] <h0br>

.center[
<img src="./img/mainz/fig_feature_importance_all_u1d_3h_24h.PNG" align="center" width="600px"/><br>
.caption[Gini importance of ML methods (by column) on all targets (by row) for WF.]
]

???
<!--.pull-right-30[
- Zonal wind stood out for absolute GWMF in the WF case, except for the random forest method.
- A clear importance of wind at balloon level is present in the eastward case.
- Precipitations are most informative in the westward case.
] -->

---
### III. Numerical results <hbr>
.subtitle[GWMF vs important inputs (balloons 2 & 3)] <h0br>

<br>
.center[
<img src="./img/mainz/qdm_with_inputs2.png" align="center" width="370px"/>
<img src="./img/mainz/qdm_with_inputs3.png" align="center" width="370px"/><br>
.caption[Target vs informative inputs of balloon 2 (left) and balloon 3 (right).]
]

---
exclude: true
### III. Numerical results <hbr>
.subtitle[Westward GWMF vs important inputs (balloon 8)] <h0br>

.pull-left-60[
<img src="./img/mainz/qdm_with_inputs8.png" align="center" width="480px"/><br>
.caption[Target vs informative inputs.]
]
.pull-right-40[
.middle[
- For balloon `\(8\)` (corr `\(> 0.7\)`), precipitation and winds seem more informative than in the previous case.
]
]

---
template: inter-slide
class: left, middle
count: false
## .bold-blue[Outline]

.hhead[I. Strateole 2 campaign]
<br>
.hhead[II. Reanalysis (ERA5) and Machine Learning methods (ML)]
<br>
.hhead[III. Numerical results]
<br>
.section[IV. Conclusion and perspectives]

---
### IV. Conclusion & Perspectives <hbr>
.subtitle[Conclusion] <hbr>

- Obtained encouraging results (corr > `\(0.7\)`).
- No single leading method; performance is sensitive to the choice of testing data.
- Informative inputs: precipitation & zonal wind at the balloon's level and below.
- Westward GWMFs are more challenging than the other targets.

<h0br>
### .subtitle[Perspectives] <hbr>

- Away from the tropics `\(\leftrightarrow\)` well described by ERA5 `\(\leftrightarrow\)` easier to predict.
- **HF** `\(\subset\)` **WF**; WF waves can propagate horizontally and persist for several days, and are not well described by ERA5.
- This result can provide an upper bound on the stochastic component of GWMF that cannot be represented by ERA5.

---
count: false
template: inter-slide
class: left, top

.left[# References ]<h0br>

📚 [.bib[Haase, J. S., Alexander, M. J., Hertzog, A., Kalnajs, L., Deshler, T., Davis, S. M., Plougonven, R., Cocquerez, P., and Venel, S. (2018). Around the world in 84 days. Eos, 99.]](https://doi.org/10.1029/2018EO091907)

📚 [.bib[Alexander, M. J., Geller, M., McLandress, C., Polavarapu, S., Preusse, P., Sassi, F., Sato, K., Eckermann, S., Ern, M., Hertzog, A., Kawatani, Y., Pulido, M., Shaw, T. A., Sigmond, M., Vincent, R., and Watanabe, S. (2010). Recent developments in gravity-wave effects in climate models and the global distribution of gravity-wave momentum flux from observations and models. Q.J.R. Meteorol. Soc., 136: 1103-1124.]](https://doi.org/10.1002/qj.637)

📚 [.bib[Bramberger, M., Alexander, M. J., Davis, S., Podglajen, A., Hertzog, A., Kalnajs, L., et al. (2022). First super-pressure balloon-borne fine-vertical-scale profiles in the upper TTL: Impacts of atmospheric waves on cirrus clouds and the QBO. Geophysical Research Letters, 49, e2021GL097596.]](https://doi.org/10.1029/2021GL097596)

📚 [.bib[Corcos, M., Hertzog, A., Plougonven, R., & Podglajen, A. (2021). Observation of gravity waves at the tropical tropopause using superpressure balloons. Journal of Geophysical Research: Atmospheres, 126, e2021JD035165.]](https://doi.org/10.1029/2021JD035165)

📚 [.bib[Ern, M., & Preusse, P. (2012). Gravity wave momentum flux spectra observed from satellite in the summertime subtropics: Implications for global modeling. Geophysical Research Letters, 39(15).]](https://doi.org/10.1029/2012gl052659)

📚 [.bib[Lott, F., Rani, R., Podglajen, A., Codron, F., Guez, L., Hertzog, A., & Plougonven, R. (2023). Direct comparison between a non-orographic gravity wave drag scheme and constant level balloons. Journal of Geophysical Research: Atmospheres, 128, e2022JD037585.]](https://doi.org/10.1029/2022JD037585)

---
template: inter-slide
class: left, top
count: false

.left[# References ]<h0br>

📚 [.bib[Wilson, R., Pitois, C., Podglajen, A., Hertzog, A., Corcos, M., and Plougonven, R. (2023). Detection of turbulence occurrences from temperature, pressure, and position measurements under superpressure balloons.]](https://amt.copernicus.org/articles/16/311/2023/)

📚 [.bib[Breiman, L. (2001). Random Forests. Machine Learning. DOI: 10.1023/A:1010933404324]](https://link.springer.com/article/10.1023/a:1010933404324)

📚 [.bib[Friedman, J. H. (2001). Greedy Function Approximation: A Gradient Boosting Machine. The Annals of Statistics. DOI: 10.1214/aos/1013203451]](https://projecteuclid.org/journals/annals-of-statistics/volume-29/issue-5/Greedy-function-approximation-A-gradient-boosting-machine/10.1214/aos/1013203451.full)

<br>
.center[.large[Thank you 🤓]]