Neural Network Prediction

The MATLAB script below trains a BP (back-propagation) neural network on data loaded from Excel: it randomly splits the samples into training and test sets, normalizes them, trains the network, and then predicts the test output, de-normalizes it, and plots the prediction against the expected values along with the prediction error.
clc; clear; % clear the command window and workspace
INPUT = xlsread('input.xls');
input = INPUT; % note: this shadows MATLAB's built-in input function
output = xlsread('output.xls');
k = rand(1,2000); % 1x2000 row vector of random numbers
[m,n] = sort(k); % n holds the sort indices, used to select samples at random
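% Note: the rand/sort trick above simply builds a random permutation of 1:2000;
% randperm would do the same in one call (alternative sketch, not in the original script):
% n = randperm(2000);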
% Training data
input_train = input(n(1:1900),:)';
output_train = output(n(1:1900),:)';
% Test data
input_test = input(n(1901:2000),:)';
output_test = output(n(1901:2000),:)';
% Normalize the training data to [-1,1]
[inputn, inputps] = mapminmax(input_train); % inputn is the normalized data
[outputn, outputps] = mapminmax(output_train);
inputn_test = mapminmax('apply', input_test, inputps); % apply the same mapping (inputps) used on the training data
% Build the BP neural network
net = newff(inputn, outputn, 5); % one hidden layer with 5 neurons
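% Note: newff is marked obsolete in newer releases of the Neural Network Toolbox;
% feedforwardnet is the suggested replacement (alternative sketch only; unlike this
% script, feedforwardnet applies its own mapminmax preprocessing by default):
% net = feedforwardnet(5);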
% Network training parameters (epochs, learning rate, goal)
net.trainParam.epochs = 100;
net.trainParam.lr = 0.1;
net.trainParam.goal = 0.00004;
% Train the network
net = train(net, inputn, outputn);
% Trained weights and biases
InputWeights = net.iw{1,1}; % weights from the input layer to the hidden layer
LayerWeights = net.lw{2,1}; % weights from the hidden layer to the output layer
bias1 = net.b{1}; % hidden layer biases
bias2 = net.b{2}; % output layer biases
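% With a 5-neuron hidden layer, InputWeights is 5 x (number of input features),
% LayerWeights is (number of output variables) x 5, bias1 is 5x1, and bias2
% matches the output dimension.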
% Prediction
an = sim(net, inputn_test);
% De-normalize the network output back to the original scale
BPoutput = mapminmax('reverse', an, outputps);
% Plot predicted vs. expected output on the test set
figure(1)
plot(BPoutput, ':og');
hold on;
plot(output_test, '-*');
legend('Predicted output','Expected output');
title('BP network prediction output','fontsize',12);
ylabel('Function output','fontsize',12);
xlabel('Sample','fontsize',12);
figure(2);
ms = output_test - BPoutput; % prediction error on the test set
plot(ms, '-*');
title('Prediction error','fontsize',12);
ylabel('Error','fontsize',12);
xlabel('Sample','fontsize',12);
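Beyond the error plot, a simple numeric check of prediction quality can be appended at the end of the script. The following is a minimal sketch using the variables already produced above (output_test and BPoutput); the names mse_test and max_err are just illustrative.

% Quantitative test-set error (sketch; append after the plotting code above)
err = output_test - BPoutput;
mse_test = mean(err(:).^2);  % overall mean squared error
max_err = max(abs(err(:)));  % worst-case absolute error
fprintf('Test MSE: %g, max absolute error: %g\n', mse_test, max_err);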
Summary

The script loads sample data from Excel, randomly splits it into 1900 training and 100 test samples, normalizes both with mapminmax, trains a single-hidden-layer BP network, and evaluates it by de-normalizing the predictions and plotting them against the expected test output together with the prediction error.