Prominence of the training data preparation in geomagnetic storm prediction using deep neural networks

Dataset

The data used for the present analysis are: the solar wind (SW) plasma parameters; the interplanetary magnetic field (IMF); the Dst index. The entire dataset has been obtained from the National Space Science Data Center of NASA, namely, from the OMNI database (ref. 30). In particular, we used hourly averages of the three components (\(B_x\), \(B_y\), \(B_z\)) of the IMF in the GSM (Geocentric Solar Magnetospheric) reference frame (i.e. the x-axis of the GSM coordinate system is defined along the line connecting the center of the Earth to the center of the Sun, with the origin at the center of the Earth and the positive direction towards the Sun; the y-axis is defined as the cross product of the GSM x-axis and the magnetic dipole axis and is positive towards dusk; the z-axis is defined as the cross product of the x- and y-axes; the magnetic dipole axis lies within the xz plane), the SW plasma temperature (T), density (D), total speed (V), pressure (P), and the east–west component of the electric field (\(E_y\), derived from \(B_z\) and \(V_x\)).
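
The east–west electric field can be recomputed from the bulk speed and the IMF; a minimal sketch, assuming the convention used in the OMNI database, \(E_y = -V B_z \times 10^{-3}\), with \(V\) in km/s, \(B_z\) in nT and \(E_y\) in mV/m (function and variable names are illustrative):

```python
import numpy as np

def electric_field_ey(v_kms: np.ndarray, bz_nT: np.ndarray) -> np.ndarray:
    """East-west electric field in mV/m from SW speed (km/s) and IMF Bz (nT),
    assuming the sign and unit convention of the OMNI database."""
    return -v_kms * bz_nT * 1e-3

# Southward IMF (Bz < 0) with a 450 km/s wind gives a positive Ey,
# the geoeffective configuration driving magnetospheric convection.
print(electric_field_ey(np.array([450.0]), np.array([-10.0])))  # ~[4.5] mV/m
```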

The dataset covers the period January 1990 – November 2019, and includes half of the 22nd solar cycle, all of the 23rd, and almost all of the 24th. To produce a robust forecast of the Dst index, it is crucial to determine how the dataset is split and processed for the training and evaluation of the model. Adopting a correct methodology for treating the data is essential to avoid bias, especially when a machine learning approach is used to develop predictive models and the data are time series.

If data are periodic, it is safe to train the model on at least one complete period and test it on different periods. In fact, since the arrow of time is fixed and the future is unknown, training on points that follow the data used in the test can introduce bias. Therefore, the validation and test datasets must be built from points of the time series that follow those used for training. In the present case, since we have data from only two complete solar cycles, the best option is to use one cycle for training and the other for both validation and test. However, such a choice forces the validation set to contain data relative to the first half of a solar cycle, with a distribution of Dst values and storms different from that of the test set. Therefore, in our opinion, the most efficient choice for the validation and test process is to select the points of the two datasets randomly.

Training a Deep Learning (DL) model in a supervised fashion requires both a balanced sampling of data referring to quiet and storm periods, and a proper evaluation of the metrics used to measure the performance. Otherwise, the model will learn to predict only the most frequent case represented in the training set. Moreover, the standard performance metrics, computed on the full validation and test datasets, would reward a prediction that is correct most of the time but wrong in the most relevant cases.

Taking care of these two aspects, we split the dataset using all the data before 1/1/2009 for training, and the remaining part for validation and test. In this way, we have at least one solar cycle for the training and one for the evaluation of the model. As previously said, for the validation and test we can either choose datasets subsequent in time (i.e. ordered) or draw an equal number of points randomly from those available after 1/1/2009. The difference between random and ordered selection is displayed in Fig. 1. In panel (a) the validation data include the points in the first half of the cycle, while the test data are the other half. It is evident that the tails of the two distributions differ: in the validation dataset, events with very low Dst, which are particularly important being connected with storms, are missing. The situation completely changes when the points are picked randomly. In this case, the two distributions are quite similar, and also similar to the training dataset, representing the best starting point for the development of a data-driven predictive model. The last problem, directly connected to the data distribution, is that there are only a few events associated with storms. In the framework used in this paper, where the algorithm learns by looking at the data, if the distribution is highly peaked around some value of the target variable, the algorithm will learn to predict only such values. To avoid this issue, we apply a re-weighting function to the sampling of the data that feed the algorithm's training, so that every value of Dst becomes almost equally probable. The difference between the nominal distribution and the flattened (weighted) distribution is presented in Fig. 1c.

Figure 1: Normalized distributions of Dst in the datasets used for training, validation and test. (a) Validation is the first half of the solar-cycle period, test the second half. (b) Points for validation and test are randomly extracted. The training dataset includes all the available points before 1/1/2009. (c) Training dataset without and with re-weighting of the low-Dst events.
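
In code, the splitting and re-weighting strategy described above could look as follows; a minimal sketch assuming the hourly data sit in a pandas DataFrame with a datetime index and a Dst column (all names are illustrative), with per-sample weights taken as the inverse occupancy of the Dst histogram bin each point falls into:

```python
import numpy as np
import pandas as pd

def split_and_weight(df: pd.DataFrame, n_bins: int = 50, seed: int = 0):
    """Chronological training split, random validation/test split, and
    inverse-frequency weights that flatten the Dst distribution."""
    rng = np.random.default_rng(seed)

    train = df[df.index < "2009-01-01"]   # at least one full solar cycle
    rest = df[df.index >= "2009-01-01"]

    # Random (not chronological) assignment of the remaining points,
    # so that validation and test share the same Dst distribution.
    idx = rng.permutation(len(rest))
    valid = rest.iloc[idx[: len(rest) // 2]]
    test = rest.iloc[idx[len(rest) // 2 :]]

    # Re-weighting: points in sparsely populated Dst bins (storms) are
    # sampled more often, making every Dst value almost equally probable.
    counts, edges = np.histogram(train["Dst"], bins=n_bins)
    bins = np.clip(np.digitize(train["Dst"], edges[1:-1]), 0, n_bins - 1)
    weights = 1.0 / np.maximum(counts[bins], 1)
    return train, valid, test, weights / weights.sum()
```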

The points discussed above also limit the applicability of the standard cross-validation methods usually recommended in machine learning applications to test the robustness of the models. While specific cross-validation schemes have been developed for time series (e.g., the TimeSeriesSplit function available in the scikit-learn Python library), we prefer not to adopt this type of check because this kind of split progressively increases the size of the training dataset: in the first iterations there are far fewer storms than in the last ones. This automatically favors the last iterations of the procedure in predicting storms, introducing an indirect bias in the interpretation of the results.
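
The growth of the training fold that motivates this choice is easy to visualize with scikit-learn (a sketch on dummy data):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(1000).reshape(-1, 1)  # stand-in for the hourly time series
for i, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    # The training fold grows at every iteration, so later folds see far
    # more storms than the first ones and are favored in the evaluation.
    print(f"fold {i}: train size = {len(train_idx)}, test size = {len(test_idx)}")
```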

All the features are scaled linearly to a compact range as an additional pre-processing step. The scaling is fitted on the training dataset, mapping its minimum and maximum values to 0.1 and 0.9, respectively. This choice leaves some room to accommodate values smaller or larger than those available in the training dataset that may emerge in future measurements of the variables.
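
This corresponds to a standard min–max transform with the feature range set to (0.1, 0.9), fitted on the training set only; a sketch with scikit-learn, where the feature matrices are stand-ins:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Stand-ins for the feature matrices built from the splits above.
X_train, X_valid, X_test = (np.random.randn(100, 9) for _ in range(3))

scaler = MinMaxScaler(feature_range=(0.1, 0.9))
X_train_scaled = scaler.fit_transform(X_train)  # min/max learned here only
X_valid_scaled = scaler.transform(X_valid)      # may fall slightly outside [0.1, 0.9]
X_test_scaled = scaler.transform(X_test)
```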

The architecture of the Neural Network considered in this study is close to the one used in ref. 26, where a Long Short-Term Memory (LSTM) module is combined with a Fully Connected Neural Network (FCNN). LSTM is a recurrent layer composed of cells designed to process long time series. The input of the proposed network is a time series containing the variables described in the Dataset section for the 12 points in the time window \([t-11, t]\). Each cell of the LSTM layer (Fig. 2) receives as input one element \(x_{t_i}\) of this time series together with the outputs of the previous cell: the hidden state, \(h_{t_{i-1}}\), and the memory state, \(c_{t_{i-1}}\). As schematically depicted in the figure, these three sources of information are processed through fully connected layers and element-wise operations, all internal to the cells. In standard applications of LSTM, the hidden state from the last cell represents the network's prediction, and the hidden states of all the other cells are not considered. In our approach, we collect and concatenate all the hidden states \([h_{t-11}, \ldots, h_t]\) in a multidimensional vector. This vector is then fed as input to a fully connected module. The output of this FCNN is the forecast of the Dst index for the hours \([t+1, t+12]\).

Figure 2: Neural Network architecture used to forecast the Dst index, as described in the text. In the LSTM cell, the square blocks are fully connected layers with activation function, while the circles are element-wise operations.

In optimizing DL networks, two types of parameters need to be fixed: the layers' weights and the hyper-parameters specifying the architecture. During training, the back-propagation procedure takes care of the former, which can number in the millions or even billions (in our case 25,244). The latter, typically limited in number (in our case 7), are usually determined manually by testing different solutions and considering only the training and validation datasets in the evaluation, to avoid bias.

We found that better predictions are obtained using the following values for the hyper-parameters:

LSTM, number of hidden layers: 2,

LSTM, size of the hidden layers: 8,

FCNN, number of layers: 4,

FCNN, number of output features for each layer: 96, 96, 48, 12.

Batch normalization is applied to the input vector of the FCNN; a ReLU activation function and a dropout layer with a drop factor of 0.2 follow every fully connected layer except the last one.
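
For concreteness, a minimal PyTorch sketch of an architecture with these hyper-parameters follows; the authors' actual implementation is in the repository linked below. The number of input channels is an assumption: nine (e.g. the eight SW/IMF variables plus Dst itself) reproduces the 25,244 trainable weights quoted above.

```python
import torch
import torch.nn as nn

class DstNet(nn.Module):
    """LSTM + FCNN sketch: the hidden states of all 12 LSTM cells are
    concatenated and mapped to the Dst forecast for hours t+1 .. t+12."""

    def __init__(self, n_features: int = 9, hidden: int = 8, window: int = 12):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        flat = window * hidden                 # 12 x 8 = 96 concatenated states
        sizes = [flat, 96, 96, 48, 12]         # the four FC layers
        layers = [nn.BatchNorm1d(flat)]        # BN on the FCNN input vector
        for i in range(4):
            layers.append(nn.Linear(sizes[i], sizes[i + 1]))
            if i < 3:                          # every layer except the last
                layers += [nn.ReLU(), nn.Dropout(0.2)]
        self.fcnn = nn.Sequential(*layers)

    def forward(self, x):                      # x: (batch, 12, n_features)
        h, _ = self.lstm(x)                    # hidden states of all 12 cells
        return self.fcnn(h.flatten(1))         # -> (batch, 12)

model = DstNet()
print(sum(p.numel() for p in model.parameters()))  # 25244 trainable weights
```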

The loss function minimized during the training of the network is the Mean Absolute Error (MAE):

$$\begin{aligned} \text{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| y_{pred} - y_{true}\right|_i \end{aligned} \tag{1}$$

We use the Adam optimizer with a learning rate of \(10^{-5}\). During the training, back-propagation is applied after computing the loss on samples extracted from the dataset in batches. The procedure is repeated an arbitrary number of times; statistics are collected after iterating back-propagation over as many samples as the number of elements in the training dataset: this is called an epoch. The training ends once the loss function stops decreasing on the validation dataset. We used batches of size 256 and stopped the training after 10,000 epochs. Examples of the loss function behavior are presented in Fig. 3.

Figure 3: History of the loss function over the 10,000 epochs of the training.
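
A sketch of the training procedure just described, reusing the DstNet model from the sketch above; the tensors and the per-sample weights are illustrative stand-ins:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Illustrative stand-ins: (N, 12, 9) input windows, (N, 12) Dst targets,
# and per-sample weights that flatten the Dst distribution (see above).
X, y, w = torch.randn(1000, 12, 9), torch.randn(1000, 12), torch.ones(1000)

sampler = WeightedRandomSampler(w, num_samples=len(w), replacement=True)
loader = DataLoader(TensorDataset(X, y), batch_size=256, sampler=sampler)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = torch.nn.L1Loss()          # the MAE of Eq. (1)

for epoch in range(10_000):          # in practice, watch the validation loss
    for xb, yb in loader:
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
```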

The code with the implementation of the network architecture and the procedure to generate the training, validation, and test datasets is available as a Python notebook in the public GitLab repository gitlab.fbk.eu/dsip/dsip_physics/dsip_ph_space/Dstw.

A typical baseline forecast method for time series is the persistence model. The assumption at the base of this approach is that nothing changes between the last known value and all the future points:

$$\begin{aligned} Dst(t + n) = Dst(t), \quad n \in \mathbb{N}. \end{aligned} \tag{2}$$

It is expected that the predictive power of this model decreases as the forecast horizon increases; conversely, in the short term, assuming persistence is often a good approximation of the actual trend.
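
For a 12-hour horizon, the persistence baseline of Eq. (2) amounts to repeating the last observed value (a numpy sketch):

```python
import numpy as np

def persistence_forecast(dst: np.ndarray, horizon: int = 12) -> np.ndarray:
    """Forecast matrix of shape (len(dst), horizon): every hour t+1 .. t+horizon
    is predicted to equal the last observed value Dst(t)."""
    return np.repeat(dst[:, None], horizon, axis=1)
```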

Different metrics can be considered to highlight and study the models' features and compare their predictive power. However, the focus of this work is the importance of how the training data are selected and used. This is appreciable even considering only the most common of these metrics, the Root Mean Squared Error (RMSE), defined as:

$$\begin{aligned} \text{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left( y_{pred_i} - y_{true_i}\right)^2}{N}}. \end{aligned} \tag{3}$$
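
Computing Eq. (3) separately for each forecast hour makes the comparison with the persistence baseline explicit (a numpy sketch; arrays of shape (N, 12) are assumed):

```python
import numpy as np

def rmse_per_horizon(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """RMSE for each of the 12 forecast hours; inputs have shape (N, 12)."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2, axis=0))

# Example: evaluate the network and the persistence baseline side by side.
# rmse_model = rmse_per_horizon(model_forecast, dst_true)
# rmse_persistence = rmse_per_horizon(persistence_forecast(dst_obs), dst_true)
```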
