VMD-CEEMD decomposition algorithm

Variational Modal Decomposition (VMD) is an adaptive signal decomposition algorithm. It can decompose a signal into multiple components; its core idea is the construction and solution of a variational problem. VMD is commonly used to process non-linear signals and can decompose complex raw data into a series of modal components16.

It can effectively extract the features of runoff data and reduce the influence of their nonlinearity and non-stationarity on the prediction results. The main steps of the VMD algorithm are: (1) pass the original signal through the Hilbert transform to obtain a series of modal functions \(u_{k}\) and compute their unilateral spectra; (2) shift each spectrum to its estimated fundamental frequency band and construct the corresponding constrained variational problem by estimating the bandwidth; (3) convert the constrained variational problem into an unconstrained variational problem17.

The corresponding augmented Lagrangian is:

$$L\left(\left\{u_{k}\right\},\left\{\omega_{k}\right\},\lambda\right)=\alpha\sum_{k=1}^{K}\left\|\partial_{t}\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_{k}(t)\right]e^{-j\omega_{k}t}\right\|_{2}^{2}+\left\|f(t)-\sum_{k=1}^{K}u_{k}(t)\right\|_{2}^{2}+\left\langle\lambda(t),\,f(t)-\sum_{k=1}^{K}u_{k}(t)\right\rangle,$$

(1)

where \(u_{k}(t)\) and \(\omega_{k}\) are the modal components and their corresponding centre frequencies, respectively, \(\alpha\) is the penalty factor and \(\lambda\) is the Lagrange multiplier. The results of several experiments show that the decomposition results are better when \(\alpha\) is taken as 2000, so in this paper \(\alpha\) is set to 2000. The K modal components of the VMD are solved by using the alternating direction method of multipliers (ADMM) to find the saddle point of the unconstrained variational problem.
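To make the decomposition step concrete, the following is a minimal Python sketch using the third-party vmdpy package, with \(\alpha\) = 2000 as above; the mode count K, the input file name, and the remaining settings are illustrative assumptions rather than values reported in the paper.

```python
# A minimal VMD sketch, assuming the third-party "vmdpy" package.
import numpy as np
from vmdpy import VMD

runoff = np.loadtxt("runoff.csv")  # hypothetical single-column runoff series

alpha = 2000   # penalty factor, as used in this paper
tau = 0.0      # noise tolerance
K = 6          # number of modes (assumed; the paper's K is data-dependent)
DC = 0         # no DC component imposed
init = 1       # initialise centre frequencies uniformly
tol = 1e-7     # convergence tolerance

# u: the K modal components; omega: their centre frequencies per iteration
u, u_hat, omega = VMD(runoff, alpha, tau, K, DC, init, tol)

# vmdpy may trim the signal to an even length, hence the slice
residual = runoff[: u.shape[1]] - u.sum(axis=0)  # left for CEEMD below
```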

The runoff residual sequence left after VMD decomposition still contains latent features. The CEEMD decomposition method is a newer adaptive signal processing method; compared with the commonly used EEMD method, it offers higher decomposition efficiency and reconstruction accuracy, and better exploits the latent features of residual sequences.

The EMD method, proposed by Huang et al., performs signal decomposition in the time domain and is particularly suitable for the analysis of nonlinear and non-stationary time series18. In order to cope with the mode-mixing problem of the EMD method, Wu et al.19 proposed ensemble empirical mode decomposition (EEMD). The EEMD method effectively suppresses the mode mixing caused by EMD by adding white noise to the original signal several times, applying EMD to each noisy copy, and averaging the resulting IMFs as the final IMFs20.

CEEMD works by adding a pair of Gaussian white noise signals with opposite signs to the original signal and then performing separate EMD decompositions on the two noisy copies. While ensuring a decomposition effect comparable to that of EEMD, CEEMD reduces the reconstruction error induced by the EEMD method. After the original signal x(t) is decomposed by CEEMD, the reconstructed signal can be represented as

$$x(t)=\sum_{i=1}^{n}IMF_{i}(t)+r_{n}(t)$$

(2)

In Eq. (2), \(IMF_{i}(t)\) is the i-th intrinsic mode function component; \(r_{n}(t)\) is the residual term; and n is the number of intrinsic mode components when \(r_{n}(t)\) becomes a monotonic function. The original sequence is finally decomposed into a finite number of IMFs.
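PyEMD ships EEMD and CEEMDAN rather than plain CEEMD, so the sketch below reimplements the paired-noise CEEMD loop on top of PyEMD's EMD; the trial count and noise amplitude are assumed values, not the paper's settings.

```python
# An illustrative paired-noise CEEMD loop built on PyEMD's EMD
# (from the "EMD-signal" package), not a library CEEMD call.
import numpy as np
from PyEMD import EMD

def ceemd(x, trials=50, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    emd = EMD()
    all_imfs = []
    for _ in range(trials):
        noise = eps * x.std() * rng.standard_normal(len(x))
        # decompose the signal with the noise added and subtracted (paired)
        for s in (x + noise, x - noise):
            all_imfs.append(emd(s))
    n = min(imfs.shape[0] for imfs in all_imfs)   # align IMF counts
    stacked = np.stack([imfs[:n] for imfs in all_imfs])
    return stacked.mean(axis=0)                   # averaged IMFs

imfs = ceemd(residual)             # residual from the VMD step above
reconstruction = imfs.sum(axis=0)  # should approximate the input, cf. Eq. (2)
```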

In order to accurately predict the runoff sequence, this paper establishes a kernel extreme learning machine (KELM) prediction model whose kernel function parameters are optimised by the nature-inspired butterfly optimisation algorithm (BOA).

In Fig. 1, the ELM input weights \(\omega \in R^{X\times Y}\) (where X and Y are the numbers of input and hidden layer neurons, respectively) and the biases are randomly generated21. Compared with BP neural networks, extreme learning machines require less manual parameter tuning and can be trained on sample data in a shorter time, with a fast learning rate and strong generalisation ability.

Fig. 1. Structure of the KELM model.
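As a minimal illustration of the ELM idea, the NumPy sketch below draws random input weights and biases and solves only the output weights by least squares; the hidden layer size and the tanh activation are assumptions, not the paper's settings.

```python
# A minimal ELM sketch in NumPy: only the output weights are trained.
import numpy as np

def elm_train(X, T, hidden=64, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # random input weights (never trained)
    b = rng.standard_normal(hidden)                # random biases
    H = np.tanh(X @ W + b)                         # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                   # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```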

Its regression function and output layer weights are:

$$\left\{\begin{array}{l}f(x)=h(x)\beta=H\beta\\ \beta=H^{T}\left(\frac{1}{C}+HH^{T}\right)^{-1}T\end{array}\right.$$

(3)

where \(f(x)\) is the model output; \(x\) is the sample input; \(h(x)\) and \(H\) are the hidden layer feature mapping and mapping matrix; \(\beta\) is the output layer weight vector; C is the regularisation parameter; and T is the sample output vector.

Conventional ELM prediction models (solved by least squares) tend to produce unstable outputs when there is potential collinearity among the sample parameters. Therefore, Huang et al.22 proposed the Kernel Extreme Learning Machine (KELM), which augments ELM with a kernel function. Based on the kernel function principle, KELM can project collinear input samples into a high-dimensional space, which significantly improves the fitting and generalisation ability of the model. In addition, the model does not need the number of hidden layer nodes to be set manually, reducing the training footprint and training time. The model output equation is:

$$f(x)={\left[\begin{array}{c}K(x,x_{1})\\ \vdots\\ K(x,x_{N})\end{array}\right]}^{T}\left(\frac{1}{C}+{\Omega}_{ELM}\right)^{-1}T$$

(4)

where \(K(x_{i},x_{j})\) is the kernel function and \({\Omega}_{ELM}\) is the kernel matrix, which is calculated as:

$$\left\{\begin{array}{l}{\Omega}_{ELM}=HH^{T}\\ {\Omega}_{ELM,\,i,j}=h(x_{i})\cdot h(x_{j})=K(x_{i},x_{j})\end{array}\right.$$

(5)

where \(x_{i}\) and \(x_{j}\) are sample input vectors, with i and j positive integers in [1, N], and \(K(x_{i},x_{j})\) is the kernel function.

By introducing a kernel function, KELM expresses the hidden layer mapping in the form of an inner product, so the number of hidden layer nodes does not need to be set. The result is faster model learning and an effective improvement in the generalisation ability and stability of the KELM-based runoff prediction model.
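Putting Eqs. (3)-(5) together, a compact NumPy sketch of KELM with an RBF kernel might look as follows; the kernel width gamma (standing in for the paper's kernel function parameter K) and the regularisation parameter C are placeholders that the BOA step below is meant to tune.

```python
# A KELM sketch with an RBF kernel, following Eqs. (4)-(5);
# C and gamma are assumed values to be tuned.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, T, C=100.0, gamma=1.0):
    omega = rbf_kernel(X, X, gamma)                    # kernel matrix, Eq. (5)
    n = len(X)
    alpha = np.linalg.solve(np.eye(n) / C + omega, T)  # (I/C + Omega)^(-1) T
    return X, alpha, gamma

def kelm_predict(Xnew, model):
    Xtrain, alpha, gamma = model
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha     # Eq. (4)
```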

The butterfly optimisation algorithm (BOA) is an intelligent optimisation algorithm derived by simulating the foraging and mating behaviour of butterflies23. In the BOA algorithm, each butterfly emits its own unique scent. Butterflies can sense the source of food in the air, and likewise sense the scent emitted by other butterflies, moving towards the butterfly that emits the stronger scent. The scent concentration equation is:

$$f=cI^{a}$$

where \(f\) is the concentration of scent emitted by the butterfly, \(c\) is the sensory modality, \(I\) is the stimulus intensity, and \(a\) is the power exponent, taken in [0, 1]. When a = 1, the butterfly does not absorb any scent, meaning the scent emitted by a specific butterfly is perceived at full strength by the other butterflies. This case is equivalent to a scent spreading in an ideal environment, where the butterfly emitting the scent can be sensed everywhere in the domain, and thus the single global optimum can be reached easily.

In order to apply the above in a search algorithm, the following idealised assumptions are made about butterfly behaviour: (i) all butterflies give off some scent, by which they attract and exchange information with each other; (ii) butterflies move either randomly or directionally towards the butterfly with the stronger scent concentration.

By defining different fitness functions for different problems, the BOA algorithm can be divided into the following 3 steps:

Step 1: Initialisation phase. Randomly generate butterfly locations in the search space, calculate and store each butterfly location and fitness value.

Step 2: Iteration phase. The algorithm performs multiple iterations; in each iteration, the butterflies move to new positions in the search space and their fitness values are recalculated. The fitness values of the butterfly population are sorted to find the best position in the search space.

Step 3: End phase. After the butterflies have moved in the previous phase, each uses the scent formula to produce a scent at its new location.
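A compact sketch of this loop for minimisation is given below; the sensory modality c, the power exponent a, and the switch probability p use typical literature values, not values reported in this paper.

```python
# A BOA sketch for minimisation: fragrance f = c * I**a, with the usual
# global move toward the best butterfly and a local random move otherwise.
import numpy as np

def boa_minimise(obj, bounds, n_butterflies=20, iters=100,
                 c=0.01, a=0.1, p=0.8, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n_butterflies, len(lo)))   # Step 1: initialise
    fit = np.array([obj(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for _ in range(iters):                              # Step 2: iterate
        I = 1.0 / (1.0 + fit - fit.min())               # stimulus intensity from fitness
        frag = c * I ** a                               # fragrance f = c * I**a
        for i in range(n_butterflies):
            r = rng.random()
            if rng.random() < p:                        # global move toward best
                X[i] += (r * r * best - X[i]) * frag[i]
            else:                                       # local random move
                j, k = rng.integers(n_butterflies, size=2)
                X[i] += (r * r * X[j] - X[k]) * frag[i]
            X[i] = np.clip(X[i], lo, hi)
            fit[i] = obj(X[i])
        if fit.min() < best_fit:                        # Step 3: keep global best
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit
```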

The penalty parameter C and the kernel function parameter K of the kernel extreme learning machine are chosen as the search individuals of the butterfly population, and the BOA-KELM model is constructed to iteratively optimise C and K. The specific steps are as follows:

Step 1: Collect runoff data and produce training and prediction sample sets.

Step 2: Initialise the search individuals of the butterfly population, i.e. the penalty parameter C and the kernel function parameter K.

Step 3: Initialise the algorithm parameters, including the butterfly population size M and the maximum number of iterations.

Step 4: Calculate the fitness value of the individual butterfly population and calculate the scent concentration f. Based on the fitness value, the optimal butterfly location is derived.

Step 5: After the search individuals update their positions, check whether their fitness values are better than before the update, and update the global optimal butterfly position and fitness value accordingly.

Step 6: Judge whether the termination condition is satisfied. If it is, exit the loop and output the result; otherwise, return to Step 4 and continue the iteration.

Step 7: Input the test set into the optimised KELM and output the predictions.

According to the above steps, the corresponding flowchart is shown in Fig. 2.

Fig. 2. Flowchart of the BOA-optimised KELM model.
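The sketch below wires the boa_minimise routine to the KELM defined earlier: the search vector is (C, gamma), standing in for the penalty parameter C and kernel function parameter K, and the fitness is the validation RMSE. The bounds and the X_train/y_train/X_val/y_val/X_test arrays are hypothetical placeholders.

```python
# Wiring BOA to KELM: tune (C, gamma) against validation RMSE.
import numpy as np

def fitness(params):
    C, gamma = params
    model = kelm_train(X_train, y_train, C=C, gamma=gamma)
    pred = kelm_predict(X_val, model)
    return np.sqrt(np.mean((y_val - pred) ** 2))  # RMSE as fitness

bounds = np.array([[1e-2, 1e4],    # assumed range for the penalty parameter C
                   [1e-4, 1e2]])   # assumed range for the kernel parameter
best, _ = boa_minimise(fitness, bounds)
best_C, best_gamma = best

model = kelm_train(X_train, y_train, C=best_C, gamma=best_gamma)
predictions = kelm_predict(X_test, model)     # Step 7: predict the test set
```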

In order to improve the accuracy of runoff prediction, this paper designs a runoff prediction framework based on the idea of "decomposition, modelling prediction, reconstruction", as shown in Fig. 3. The specific prediction steps are as follows:

Fig. 3. Framework of the VMD-CEEMD-BOA-KELM prediction model.

Step 1: Data pre-processing. Anomalies in the original runoff series were processed using the Lajda (3σ) criterion.

Step 2: VMD-CEEMD decomposition. The raw runoff series was decomposed using the VMD algorithm, and the residual was then decomposed a second time using the CEEMD algorithm, giving k components in total.

Step 3: Data preparation. Each component is normalised and divided into a training data set and a test data set.

Step 4: Modelling and prediction. A BOA-optimised KELM model is built on the training dataset of each component and used to predict the test dataset.

Step 5: Reconstruction. The predictions of all components are accumulated to obtain the prediction of the original runoff sequence, as sketched below.
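Under the same assumptions as the sketches above (reusing the runoff series, the VMD modes u, the CEEMD imfs, and the KELM helpers), the five steps can be strung together roughly as follows; the lag width, the 80/20 split, and the min-max scaling are illustrative choices.

```python
# An end-to-end sketch of decomposition, per-component prediction,
# and reconstruction; BOA tuning per component is omitted for brevity.
import numpy as np

def lag_features(series, lags=4):
    # predict series[t] from the previous `lags` values (assumed lag width)
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

# Step 1: 3-sigma (Lajda) screening of the raw series
mu, sigma = runoff.mean(), runoff.std()
clean = np.clip(runoff, mu - 3 * sigma, mu + 3 * sigma)

# Step 2: components from VMD plus CEEMD of the VMD residual (see above)
components = list(u) + list(imfs)

# Steps 3-5: normalise, train a KELM per component, predict, then accumulate
total = 0.0
for comp in components:
    cmin, cmax = comp.min(), comp.max()
    scaled = (comp - cmin) / (cmax - cmin)          # min-max normalisation
    X, y = lag_features(scaled)
    split = int(0.8 * len(y))                       # assumed 80/20 split
    model = kelm_train(X[:split], y[:split])        # BOA-tuned C, gamma in practice
    pred = kelm_predict(X[split:], model)
    total = total + (pred * (cmax - cmin) + cmin)   # de-normalise and accumulate
```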

In order to reflect the error and prediction accuracy of the model results more clearly, four indicators are used for the analysis: RMSE, MAE, R2, and NSE, calculated as follows:

$$RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-y_{c}\right)^{2}}$$

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\left|y_{i}-y_{c}\right|$$

$$R^{2}=\left[\frac{\sum\left(y_{i}-\overline{y_{i}}\right)\left(y_{c}-\overline{y_{c}}\right)}{\sqrt{\sum\left(y_{i}-\overline{y_{i}}\right)^{2}\sum\left(y_{c}-\overline{y_{c}}\right)^{2}}}\right]^{2}$$

$$NSE=1-\frac{\sum_{t=1}^{T}\left(y_{i}-y_{c}\right)^{2}}{\sum_{t=1}^{T}\left(y_{i}-\overline{y_{i}}\right)^{2}}$$

where \(y_{i}\) is the observed value, \(y_{c}\) is the predicted value, and N (respectively T) is the number of samples.
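For reference, the four indicators translate directly into NumPy, assuming y_true is the observed series \(y_{i}\) and y_pred the prediction \(y_{c}\):

```python
# NumPy versions of the four evaluation indicators.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    num = np.sum((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
    den = np.sqrt(np.sum((y_true - y_true.mean()) ** 2)
                  * np.sum((y_pred - y_pred.mean()) ** 2))
    return (num / den) ** 2

def nse(y_true, y_pred):
    return 1 - (np.sum((y_true - y_pred) ** 2)
                / np.sum((y_true - y_true.mean()) ** 2))
```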

This paper does not contain any studies with human participants or animals performed by any of the authors.
