[CNN Regression Prediction] Data regression prediction with a convolutional neural network (CNN) in MATLAB [MATLAB source code included, No. 2003]
I. Introduction to CNN
1 Definition of the Convolutional Neural Network (CNN)
A convolutional neural network (CNN) is a neural network specialized for processing data with a grid-like topology. A convolutional network is any neural network that uses the convolution operation in place of general matrix multiplication in at least one of its layers.
2 CNN Network Diagram
A CNN is a feed-forward neural network built on convolution operations, inspired by the receptive-field mechanism found in biology. It is translation-invariant and uses convolution kernels, which make full use of local information while preserving the spatial structure of the input.
3 The Five Structural Components of a CNN
3.1 Input Layer
In a CNN that processes images, the input layer typically represents the pixel matrix of an image, which can be expressed as a three-dimensional matrix. The length and width of this matrix give the image's size, and its depth gives the number of color channels: a grayscale image has depth 1, while an image in RGB color mode has depth 3.
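In MATLAB's Deep Learning Toolbox, this three-dimensional input matrix is declared with imageInputLayer. The sizes below are illustrative assumptions, not taken from the source code later in this article:

```matlab
% A 28x28 grayscale image has depth 1; an RGB image of the same size has depth 3.
grayLayer = imageInputLayer([28 28 1]);  % [height, width, channels]
rgbLayer  = imageInputLayer([28 28 3]);
```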
3.2 Convolutional Layer
The convolutional layer is the most important part of a CNN. Unlike a traditional fully connected layer, each node in a convolutional layer takes as input only a small patch of the previous layer. The computational unit of a convolutional layer is called a filter or kernel; TensorFlow's official documentation uses the term filter.
[Note] Within a convolutional layer, the length and width of the node matrix that a filter processes are specified manually, and this size is also called the filter size. Common sizes are 3x3 and 5x5. The depth of the matrix the filter processes must match the depth of the node matrix of the layer currently being processed.
The figure below shows the structure of a convolutional-layer filter.
The figure below shows the convolution process.
The detailed process is as follows: the Input matrix is the pixel matrix, and the Kernel matrix is the filter.
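The Input/Kernel computation described above can be reproduced numerically. The matrices below are illustrative placeholders, not the exact values from the figure:

```matlab
% Illustrative 4x4 input and 3x3 kernel.
Input  = [1 2 3 0; 0 1 2 3; 3 0 1 2; 2 3 0 1];
Kernel = [2 0 1; 0 1 2; 1 0 2];
% 'valid' slides the kernel only over positions fully inside the input,
% producing a (4-3+1) x (4-3+1) = 2x2 feature map.
% Note: conv2 flips the kernel (true convolution); CNN layers compute
% cross-correlation, so pre-rotating the kernel by 180 degrees matches them.
FeatureMap = conv2(Input, rot90(Kernel,2), 'valid');
```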
3.3 Pooling Layer
A pooling layer does not change the depth of the three-dimensional matrix, but it can shrink its length and width. By using pooling layers, the number of nodes in the final fully connected layers can be further reduced, which in turn reduces the total number of parameters in the network. Pooling both speeds up computation and helps prevent overfitting. A pooling filter does not compute a weighted sum of nodes; instead it takes either the maximum or the average. A pooling layer that takes the maximum is called a max pooling layer (the most commonly used pooling structure); one that takes the average is called a mean (average) pooling layer.
The figures below show max pooling and mean pooling applied to four non-overlapping 2x2 regions.
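The 2x2-region pooling shown in the figure can be computed by hand. The feature map below is an illustrative example (not the figure's values), written with nested max/mean calls so it also runs on older releases such as R2014a:

```matlab
% Illustrative 4x4 feature map split into four non-overlapping 2x2 regions.
A = [1 3 2 9; 5 6 2 1; 3 4 8 0; 1 2 3 4];
% Max pooling: keep the largest value in each 2x2 region.
maxPooled  = [max(max(A(1:2,1:2))),  max(max(A(1:2,3:4)));
              max(max(A(3:4,1:2))),  max(max(A(3:4,3:4)))];
% Mean pooling: keep the average of each 2x2 region.
meanPooled = [mean(mean(A(1:2,1:2))), mean(mean(A(1:2,3:4)));
              mean(mean(A(3:4,1:2))), mean(mean(A(3:4,3:4)))];
% maxPooled  = [6 9; 4 8]
% meanPooled = [3.75 3.5; 2.5 3.75]
```

Either way, the 4x4 input shrinks to 2x2 while its depth (number of channels) is unchanged.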
3.4 Fully Connected Layer
After several rounds of convolution and pooling, a CNN usually ends with one or two fully connected layers that produce the final classification result. By that point, the information in the image has been abstracted into features with higher information content; the convolution and pooling stages can be viewed as automatic image feature extraction. Once feature extraction is complete, fully connected layers are still needed to carry out the classification task.
3.5 Softmax Layer
The Softmax layer yields the probability distribution of the current sample over the different classes.
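The softmax computation itself is a one-liner; the scores below are illustrative:

```matlab
% Softmax converts raw class scores (logits) into a probability distribution.
z = [2.0; 1.0; 0.1];                          % illustrative scores for 3 classes
p = exp(z - max(z)) ./ sum(exp(z - max(z)));  % subtracting max(z) avoids overflow
% p sums to 1, and the largest score receives the largest probability.
```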
II. Source Code (Excerpt)
load(['test_input_xdata.mat'])
load(['test_input_ydata.mat'])
save_directory = '';
summary_file_name = 'Crossval_results_summary_table';

%Network settings:
augment_training_data = 1;            %Randomly augment training dataset
normalize_training_input = 1;         %Normalize training data based on predictor distribution
overwrite_crossval_results_table = 1; %1: create new cross validation results table, 0: add lines to existing table

%Input data pre-processing:
xdata = permute(xdata,[1,2,4,3]); %input needs to be [X, Y, nchannel, nslice]
n_images = size(ydata,1);
%For random splitting of input into training/testing groups
img_number = 1:n_images;
rng(20211013);
idx = randperm(n_images);

%Training/testing data split:
n_test_images = 28;   %Number of images to keep for independent test set
n_crossval_folds = 8; %Number of training cross-validation folds
n_train_images = n_images - n_test_images;
idx_test = find(ismember(img_number, idx(end-(n_test_images-1):end)));
XTest = xdata(:,:,:,idx_test);
YTest = ydata(idx_test);
xdata(:,:,:,idx_test) = [];
ydata(idx_test) = [];
img_number(idx_test) = [];
idx(end-(n_test_images-1):end) = [];

%Network parameters (can iterate over by moving into for loop):
%params.optimizer = {'sgdm','adam'}; %'sgdm' | 'adam'
params.batch_size = 8;
%params.max_epochs = [4,6,8];
params.learn_rate = 0.001;
params.learn_rate_drop_factor = 0.1;
params.learn_rate_drop_period = 20;
params.learn_rate_schedule = 'none'; %'none', 'piecewise'
params.shuffle = 'every-epoch';
params.momentum = 0.9; %for sgdm optimizer
params.L2_reg = 0.01;
params.conv_features = [16, 16, 32]; %Number of feature channels in convolutional layers
params.conv_filter_size = 3;
params.conv_padding = 'same';
params.pooling_size = 2;
params.pooling_stride = 1;
params.dropout_factor = 0.2;
params.duplication_factor = 3; %Duplicate training set by N times
show_plots = 0; %1: show plots of training progress

tic
iter = 1;
disp('Performing cross validation evaluation over all network iterations:')
for var1 = {'sgdm','adam'}
    params.optimizer = var1;
    for var2 = [4,6,8]
        params.max_epochs = var2;
        %for var3 = X:Y
        %params.example = var3;
        %etc.
        %Splitting training data into k-folds
        for k = 1:n_crossval_folds
            images_per_fold = floor(n_train_images/n_crossval_folds);
            idx_val = find(ismember(img_number, idx(1+(k-1)*images_per_fold : images_per_fold+(k-1)*images_per_fold)));
            YValidation = ydata(idx_val);
            XValidation = xdata(:,:,:,idx_val);
            XTrain = xdata;
            XTrain(:,:,:,idx_val) = [];
            YTrain = ydata;
            YTrain(idx_val) = [];
            %ROS input normalization:
            if normalize_training_input == 1
                [XTrain, YTrain] = ROS(XTrain, YTrain, params.duplication_factor);
            else
                XTrain = repmat(XTrain,1,1,1,params.duplication_factor);
                YTrain = repmat(YTrain,params.duplication_factor,1);
            end
            %Random geometric image augmentation:
            %Augmentation parameters
            aug_params.rot = [-90,90];           %Image rotation range
            aug_params.trans_x = [-5 5];         %Image translation in X direction range
            aug_params.trans_y = [-5 5];         %Image translation in Y direction range
            aug_params.refl_x = 1;               %Image reflection across X axis
            aug_params.refl_y = 1;               %Image reflection across Y axis
            aug_params.scale = [0.7,1.3];        %Image scaling range
            aug_params.shear_x = [-30,50];       %Image shearing in X direction range
            aug_params.shear_y = [-30,50];       %Image shearing in Y direction range
            aug_params.add_gauss_noise = 0;      %Add Gaussian noise
            aug_params.gauss_noise_var = 0.0005; %Gaussian noise variance
            if augment_training_data == 1
                XTrain = image_augmentation(XTrain,aug_params);
            else
                aug_params = structfun(@(x) [], aug_params, 'UniformOutput', false);
            end
            %Network structure:
            layers = [
                imageInputLayer([size(XTrain,1),size(XTrain,2),size(XTrain,3)])
                convolution2dLayer(params.conv_filter_size,params.conv_features(1),'Padding',params.conv_padding)
                %batchNormalizationLayer
                reluLayer
                averagePooling2dLayer(params.pooling_size,'Stride',params.pooling_stride)
                convolution2dLayer(params.conv_filter_size,params.conv_features(2),'Padding',params.conv_padding)
                %batchNormalizationLayer
                reluLayer
                averagePooling2dLayer(params.pooling_size,'Stride',params.pooling_stride)
                convolution2dLayer(params.conv_filter_size,params.conv_features(3),'Padding',params.conv_padding)
                %batchNormalizationLayer
                reluLayer
                dropoutLayer(params.dropout_factor)
                fullyConnectedLayer(1)
                regressionLayer];
            params.validationFrequency = floor(numel(YTrain)/params.batch_size);
            options = network_options(params,XValidation,YValidation,show_plots);
            net = trainNetwork(XTrain,YTrain,layers,options);
            %Network results:
            accuracy_threshold = 0.1; %Predictions within 10% will be considered 'accurate'
            predicted_train = predict(net,XTrain);
            predictionError_train = YTrain - predicted_train;
            numCorrect_train = sum(abs(predictionError_train) < accuracy_threshold);
            accuracy_train(k) = numCorrect_train/numel(YTrain);
            error_abs_train(k) = mean(abs(predictionError_train));
            rmse_train(k) = sqrt(mean(predictionError_train.^2));
            predicted_val = predict(net,XValidation);
            predictionError_val = YValidation - predicted_val;
            numCorrect_val = sum(abs(predictionError_val) < accuracy_threshold);
            accuracy_val(k) = numCorrect_val/numel(YValidation);
            error_abs_val(k) = mean(abs(predictionError_val));
            rmse_val(k) = sqrt(mean(predictionError_val.^2));
            if k == 1
                predicted_val_table(1:numel(YValidation),1) = predict(net,XValidation);
                if iter == 1
                    YValidation_table(1:numel(YValidation),1) = YValidation;
                end
III. Results
IV. MATLAB Version and References
1 MATLAB Version
2014a
2 References
[1] Bao Ziyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithms and Their MATLAB Examples (2nd ed.) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.
[3] Zhou Pin. MATLAB Neural Network Design and Application [M]. Tsinghua University Press, 2013.
[4] Chen Ming. MATLAB Neural Network Principles and Examples in Detail [M]. Tsinghua University Press, 2013.
[5] Fang Qingcheng. MATLAB R2016a Neural Network Design and Application: 28 Case Studies [M]. Tsinghua University Press, 2018.
3 Note
The introduction above is excerpted from the Internet and is for reference only. If it infringes any rights, please contact us for removal.