Regression prediction | MATLAB implements BO-CNN-LSTM Bayesian optimized convolutional neural network-long short-term memory network multi-input single-output regression prediction

Results overview

(Prediction result figures omitted.)
Basic introduction

MATLAB implements BO-CNN-LSTM, a Bayesian-optimized convolutional neural network-long short-term memory network for multi-input single-output regression prediction. The model applies Bayesian (bayes) optimization to a CNN-LSTM regression network, yielding a BO-CNN-LSTM (also written Bayes-CNN-LSTM) multi-input single-output model.
1. The optimized hyperparameters are the learning rate, the number of hidden layer nodes, and the regularization coefficient.
2. Evaluation indicators include R2, MAE, MSE, RMSE, and MAPE (a computation sketch follows below).
3. The required environment is MATLAB R2020b or above.
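
For reference, these indicators can be computed in MATLAB as follows. This is a minimal sketch, not part of the original code; Ytrue and Ypred are placeholder names for the observed and predicted vectors:

%% Evaluation metrics (sketch; Ytrue/Ypred are assumed column vectors)
err  = Ytrue - Ypred;
MSE  = mean(err.^2);                                    % mean squared error
RMSE = sqrt(MSE);                                       % root mean squared error
MAE  = mean(abs(err));                                  % mean absolute error
MAPE = mean(abs(err ./ Ytrue)) * 100;                   % mean absolute percentage error (%); assumes no zero targets
R2   = 1 - sum(err.^2) / sum((Ytrue - mean(Ytrue)).^2); % coefficient of determination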

Model building

  • The CNN-LSTM model combines the advantages of CNN and LSTM; the network structure is shown in Figure 1. The first part of the model used here is the CNN part, consisting of a convolutional layer and a max pooling layer. The preprocessed data are fed into the convolutional layer, where the convolution kernels adaptively extract the vital features: the layer traverses the input and convolves the kernel weights with each local sequence, producing a preliminary feature matrix that is more expressive than the raw sequence data (matrix).
  • The pooling layer used here is a max pooling layer. The pooling operation reduces the dimensionality of the extracted features, which helps avoid over-fitting of the model while retaining the main features. The max pooling layer takes the feature matrix produced by the preceding convolutional layer as input, slides a pooling window over this matrix, takes the maximum value inside the window at each step, and outputs a more expressive feature matrix.
  • After pooling, an LSTM layer is connected: the feature vectors extracted by the CNN are arranged into a long time series that serves as the input data of the LSTM. A Flatten layer is added to the model to flatten the convolutional output, compressing the (height, width, channel) data into a one-dimensional array, after which a dense (fully connected) layer can be attached directly.
  • For the feature-compression operations of convolution and pooling, features extracted by multiple convolutional feature-extraction branches can be fused, or the fusion can take place at the output layer; a fully connected layer aggregates the learned features, with ReLU as the activation function.
  • Usually, hyperparameters need to be optimized during model training: selecting an optimal set of hyperparameters improves the performance and predictive effectiveness of the model. Setting hyperparameters purely by experience does not guarantee an optimal combination, which in turn affects how well the network fits the data and how well it generalizes to the test set.

  • Pseudocode (figure not shown)

  • The model parameters are adjusted through the tuning procedure: the learning rate and the other hyperparameters listed above are tuned by Bayesian optimization, as sketched below.
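
The fitness handle passed to bayesopt in the programming section is not shown in the excerpt; the following is a minimal sketch of what such an objective function could look like. The helper buildLayers and the validation arrays XrValid/YrValid are assumptions for illustration, not part of the original code:

%% Sketch of the objective function minimized by bayesopt (assumed structure)
function valError = fitness(optVars)
    % optVars is a one-row table holding the candidate hyperparameters.
    % buildLayers is a hypothetical helper that assembles the CNN-LSTM
    % layer graph (shown in the programming section) with the given
    % number of LSTM units.
    layers  = buildLayers(optVars.NumOfUnits);
    options = trainingOptions('adam', ...
        'InitialLearnRate', optVars.InitialLearnRate, ...
        'L2Regularization', optVars.L2Regularization, ...
        'MaxEpochs', 100, 'Verbose', 0);
    % XrTrain/YrTrain/XrValid/YrValid are assumed to be available,
    % e.g. captured by a nested function or loaded inside fitness.
    net      = trainNetwork(XrTrain, YrTrain, layers, options);
    YrPred   = predict(net, XrValid);
    valError = sqrt(mean((YrValid - YrPred).^2));   % validation RMSE
end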

Programming

%%  Optimization algorithm parameter settings
% Parameter search ranges (learning rate, hidden layer nodes, regularization coefficient)
%%  Bayesian optimization parameter ranges
optimVars = [
    optimizableVariable('NumOfUnits', [10, 50], 'Type', 'integer')
    optimizableVariable('InitialLearnRate', [1e-3, 1], 'Transform', 'log')
    optimizableVariable('L2Regularization', [1e-10, 1e-2], 'Transform', 'log')];

%%  Bayesian optimization of network parameters
BayesObject = bayesopt(fitness, optimVars, ...   % objective function and parameter ranges
        'MaxTime', Inf, ...                      % optimization time (unlimited)
        'IsObjectiveDeterministic', false, ...
        'MaxObjectiveEvaluations', 10, ...       % maximum number of objective evaluations
        'Verbose', 1, ...                        % show the optimization process
        'UseParallel', false);

%%  get the optimal parameters
NumOfUnits       = BayesObject.XAtMinEstimatedObjective.NumOfUnits;       % Optimal Number of Hidden Layer Nodes
InitialLearnRate = BayesObject.XAtMinEstimatedObjective.InitialLearnRate; % best initial learning rate
L2Regularization = BayesObject.XAtMinEstimatedObjective.L2Regularization; % optimal L2 regularization coefficient
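
Note: XAtMinEstimatedObjective is the point that minimizes the posterior mean of the surrogate model; XAtMinObjective would instead return the best point actually observed during the search.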
%% Create the hybrid CNN-LSTM network architecture
% input feature dimension
numFeatures  = f_;
% output feature dimension
numResponses = 1;
FiltZise = 10;
%  Create the "CNN-LSTM" model
    layers = [...
        % input features
        sequenceInputLayer([numFeatures 1 1],'Name','input')
        sequenceFoldingLayer('Name','fold')
        % CNN feature extraction
        convolution2dLayer([FiltZise 1],32,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
        batchNormalizationLayer('Name','bn')
        eluLayer('Name','elu')
        averagePooling2dLayer(1,'Stride',FiltZise,'Name','pool1')
        % Expand layer
        sequenceUnfoldingLayer('Name','unfold')
        % smooth layer
        flattenLayer('Name','flatten')
        % LSTM feature learning
        lstmLayer(50,'Name','lstm1','RecurrentWeightsInitializer','He','InputWeightsInitializer','He')
        % LSTM output
        lstmLayer(NumOfUnits,'OutputMode',"last",'Name','bil4','RecurrentWeightsInitializer','He','InputWeightsInitializer','He')
        dropoutLayer(0.25,'Name','drop3')
        % fully connected layer
        fullyConnectedLayer(numResponses,'Name','fc')
        regressionLayer('Name','output')    ];

    layers = layerGraph(layers);
    layers = connectLayers(layers,'fold/miniBatchSize','unfold/miniBatchSize');
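
Optionally, the assembled graph can be checked before training; analyzeNetwork opens an interactive summary of the layers and verifies the fold/unfold connection:

% Optional: inspect the layer graph and check for wiring errors
analyzeNetwork(layers);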

%% CNN-LSTM training options (example values, except the optimized hyperparameters)
options = trainingOptions('adam', ...
    'MaxEpochs', 500, ...                      % maximum number of training epochs (example)
    'MiniBatchSize', 64, ...                   % batch size (example)
    'InitialLearnRate', InitialLearnRate, ...  % optimized initial learning rate
    'L2Regularization', L2Regularization, ...  % optimized L2 regularization coefficient
    'Verbose', 0);
%% Train the hybrid network
net = trainNetwork(XrTrain, YrTrain, layers, options);
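
After training, predictions are obtained with predict. A short usage sketch, assuming a test set XrTest preprocessed the same way as the training data:

%% Predict on the test set (sketch; XrTest is an assumed variable)
YrPred = predict(net, XrTest);
% If the targets were normalized, map YrPred back to the original scale
% before computing R2, MAE, MSE, RMSE and MAPE.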

