Deep learning: 22 (Linear Decoder exercise)
Preface:
This post is an exercise in applying the linear decoder. For background on linear decoders, see Deep learning: 17 (Linear Decoders, Convolution and Pooling); the experimental procedure follows Exercise: Learning color features with Sparse Autoencoders. In this experiment we use a sparse autoencoder with a linear decoder to learn patch features from the STL-10 dataset, and this time the weights are trained on RGB image patches.
Background:
PCA whitening ensures that every dimension of the data has variance 1, whereas ZCA whitening only requires the variances of all dimensions to be equal (not necessarily 1). The two are also typically used for different purposes: PCA whitening is mainly used for dimensionality reduction plus decorrelation, while ZCA whitening is mainly used for decorrelation while keeping the data as close to the original as possible.
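A minimal sketch of the two transforms (toy data here; epsilon regularizes small eigenvalues, exactly as in the exercise code further down):

% Toy comparison of PCA and ZCA whitening; X holds one sample per column.
X = rand(8, 1000);                      % toy data: 8 dimensions, 1000 samples
epsilon = 0.1;                          % regularizer for small eigenvalues
X = bsxfun(@minus, X, mean(X, 2));      % zero the mean of each dimension
sigma = X * X' / size(X, 2);            % covariance matrix
[u, s, v] = svd(sigma);
xPCAwhite = diag(1 ./ sqrt(diag(s) + epsilon)) * u' * X;     % each dimension gets (roughly) unit variance
xZCAwhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u' * X; % decorrelated, but stays close to the original X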
Some Matlab notes:
The benefit of a function handle is that a function can be passed as an argument into another function, which can then call it internally to carry out whatever computations it needs (say, numerical differentiation or integration). If you instead passed in values already computed from that function, you would need a fair amount of code before every call, which is cumbersome; with a function handle, that logic lives inside the called function, and nothing extra has to be implemented outside it on each call.
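As a toy sketch of this pattern (numericDeriv is a made-up helper, not part of the exercise code), the caller hands over the function itself and the helper evaluates it wherever it needs to; this is exactly how computeNumericalGradient receives the cost function in the script below:

% numericDeriv.m: a hypothetical helper that receives a function handle f
% and evaluates it wherever it needs to (central difference here).
function d = numericDeriv(f, x)
    h = 1e-4;
    d = (f(x + h) - f(x - h)) / (2 * h);
end

% The caller passes the function itself, not precomputed values:
% d = numericDeriv(@(x) x.^2 + 3*x, 2); % returns approximately 7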
Matlab can save variables with the save function in .mat format; in Matlab's Current Folder pane they appear as .mat files. If you look at the same folder in Windows Explorer, however, the extension is hidden and the file type reads "Microsoft Access Table Shortcut", simply because Windows associates the .mat extension with Access; the file is still an ordinary Matlab data file.
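For example (the file and variable names here are arbitrary):

% Save selected variables into a .mat file, then restore them later.
theta = rand(10, 1);
ZCAWhite = eye(10);
save('myFeatures.mat', 'theta', 'ZCAWhite'); % writes myFeatures.mat to the current folder
clear theta ZCAWhite
load('myFeatures.mat');                      % restores both variables into the workspace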
Notes on the experiment:
In Ng's tutorial and exercises, the input sample matrix stores one sample per column, so the number of columns equals the total number of samples.
A 64 x 100,000 matrix is certainly no problem in Matlab.
In this experiment, ZCA whitening is applied to the patches, and the mean subtraction is done per dimension (this kind of normalization seems more sound to me; an earlier post instead subtracted a single mean per patch, which seems less sound, although that was done on natural grayscale images, where every dimension has the same statistics, so per-patch normalization can work there; it still feels shaky to me). Since ZCA whitening is used, the new vectors are not reduced in dimension; they are only decorrelated, with every dimension brought to equal variance. This also shows that when whitening the data you do not need to whiten the original large images: whatever data you actually feed into the network for training is what you whiten. Here the training is done on small patches, so it is the small patches that should be whitened.
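A short sketch of the two mean-subtraction conventions just mentioned (toy sizes; patches holds one patch per column):

% Per-dimension mean, as used in this exercise: one mean value per row.
patches = rand(192, 1000);                       % toy patch matrix
meanPerDim = mean(patches, 2);                   % 192x1
patchesA = bsxfun(@minus, patches, meanPerDim);

% Per-patch mean, as in some natural-image exercises: one mean per column.
meanPerPatch = mean(patches, 1);                 % 1x1000
patchesB = bsxfun(@minus, patches, meanPerPatch);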
The data and variable sizes for this experiment are as follows:
The full training sample matrix is 192 x 100,000. Each input patch is 8*8, so the network's input layer has 192 nodes (= 8*8*3, since the patches have 3 color channels and each column is stacked in RGB order). The hidden layer has 400 units this time, the weight decay coefficient is 0.003, and the sparsity penalty weight is 5, with the sparsity target being that on average 3.5% of the hidden units are active. For ZCA whitening, 0.1 is added in the denominator (the epsilon term) to prevent excessively large values.
Since a linear decoder is used, the activation function of the output layer is the identity, i.e., the output equals its weighted input (a3 = z3) and its derivative is 1. This also reduces the computation inside the problem somewhat.
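Concretely, only the output-layer error term of backpropagation changes; a toy comparison (all variable sizes made up for illustration):

% Output-layer delta for a sigmoid decoder vs. a linear decoder.
data = rand(4, 3);                     % 3 toy targets, 4 dimensions each
z3 = randn(4, 3);                      % toy pre-activations of the output layer
a3_sig = 1 ./ (1 + exp(-z3));          % sigmoid decoder output
delta3_sig = -(data - a3_sig) .* a3_sig .* (1 - a3_sig); % fprime(z3) = a3.*(1-a3)
a3_lin = z3;                           % linear decoder output
delta3_lin = -(data - a3_lin);         % fprime(z3) = 1, so the factor drops out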
At the end, the program displays the learned network weights; note that what is displayed already incorporates the whitening step, so it is the composition of whitening and the sparse autoencoder. The display call used in the program is displayColorNetwork( (W*ZCAWhite)');
Why (W*ZCAWhite)'? First, W*ZCAWhite is used because for every sample x fed into the network, the hidden layer's input is equivalent to W*ZCAWhite*x. Second, each row of W*ZCAWhite is the transform of one hidden unit, while displayColorNetwork displays one small image patch per column, so the matrix has to be transposed.
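A toy check of that equivalence (the sizes and the stand-in whitening matrix are made up for illustration):

% Folding the whitening matrix into the first-layer weights.
n = 12; hidden = 5;
x = rand(n, 1);                        % one (mean-subtracted) input patch
ZCAWhite = orth(rand(n));              % stand-in whitening matrix
W = randn(hidden, n);                  % first-layer weights
h1 = W * (ZCAWhite * x);               % whiten first, then apply W
h2 = (W * ZCAWhite) * x;               % fold whitening into the weights
disp(norm(h1 - h2));                   % ~0 up to floating-point error
% Each row of W*ZCAWhite is one hidden unit's filter in raw pixel space;
% displayColorNetwork shows one patch per column, hence the transpose.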
Experiment results:
Screenshot of the original patches: [image]
Screenshot after ZCA whitening: [image]
The 400 learned features are shown below: [image]
Main code of the experiment:
%% CS294A/CS294W Linear Decoder Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  linear decoder exercise. For this exercise, you will only need to modify
%  the code in sparseAutoencoderLinearCost.m. You will not need to modify
%  any code in this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageChannels = 3;   % number of channels (rgb, so 3)
patchDim = 8;        % patch dimension
numPatches = 100000; % number of patches

visibleSize = patchDim * patchDim * imageChannels; % number of input units
outputSize = visibleSize; % number of output units
hiddenSize = 400;         % number of hidden units (larger than the input layer this time)

sparsityParam = 0.035; % desired average activation of the hidden units
lambda = 3e-3;         % weight decay parameter
beta = 5;              % weight of sparsity penalty term
epsilon = 0.1;         % epsilon for ZCA whitening

%%======================================================================
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear
%          decoder, and check gradients
%  You should copy sparseAutoencoderCost.m from your earlier exercise
%  and rename it to sparseAutoencoderLinearCost.m.
%  Then you need to rename the function from sparseAutoencoderCost to
%  sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
%  uses a linear decoder instead. Once that is done, you should check
%  your gradients to verify that they are correct.

% NOTE: Modify sparseAutoencoderCost first!

% To speed up gradient checking, we will use a reduced network and some
% dummy patches
debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]); % 10 random samples, each an 8-dimensional column vector with entries in [0,1]
theta = initializeParameters(debugHiddenSize, debugvisibleSize);

[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, ...
                                           patches);

% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
                                                                     lambda, sparsityParam, beta, ...
                                                                     patches), theta);

% Use this to visually compare the gradients side by side
disp([numGrad grad]);

diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff);

assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');

% NOTE: Once your gradients check out, you should run step 0 again to
%       reinitialize the parameters

%%======================================================================
%% STEP 2: Learn features on small patches
%  In this step, you will use your sparse autoencoder (which now uses a
%  linear decoder) to learn features on small patches sampled from related
%  images.

%% STEP 2a: Load patches
%  In this step, we load 100k patches sampled from the STL10 dataset and
%  visualize them. Note that these patches have been scaled to [0,1]

load stlSampledPatches.mat

displayColorNetwork(patches(:, 1:100));

%% STEP 2b: Apply preprocessing
%  In this sub-step, we preprocess the sampled patches, in particular,
%  ZCA whitening them.
%
%  In a later exercise on convolution and pooling, you will need to replicate
%  exactly the preprocessing steps you apply to these patches before
%  using the autoencoder to learn features on them. Hence, we will save the
%  ZCA whitening and mean image matrices together with the learned features
%  later on.

% Subtract mean patch (hence zeroing the mean of the patches)
meanPatch = mean(patches, 2); % note: this is the mean of each dimension, not of each patch
patches = bsxfun(@minus, patches, meanPatch); % zero-mean every dimension

% Apply ZCA whitening
sigma = patches * patches' / numPatches;
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u'; % the ZCA whitening matrix
patches = ZCAWhite * patches;
figure
displayColorNetwork(patches(:, 1:100));

%% STEP 2c: Learn features
%  You will now use your sparse autoencoder (with linear decoder) to learn
%  features on the preprocessed patches. This should take around 45 minutes.

theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/

options = struct;
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';

[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
                                 visibleSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, patches), ...
                            theta, options); % note the argument order

% Save the learned features and the preprocessing matrices for use in
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');

%% STEP 2d: Visualize learned features

W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
figure;
% Why (W*ZCAWhite)'? First, W*ZCAWhite is used because for every sample x fed
% into the network, the hidden layer's input is equivalent to W*ZCAWhite*x.
% Second, each row of W*ZCAWhite is one hidden unit's transform, while
% displayColorNetwork displays one small image patch per column, so the
% matrix must be transposed.
displayColorNetwork( (W*ZCAWhite)');
sparseAutoencoderLinearCost.m:
function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                   lambda, sparsityParam, beta, data)

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------

% The input theta is a vector because minFunc only deals with vectors. In
% this step, we will convert theta to matrix format such that they follow
% the notation in the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Loss and gradient variables (your code needs to compute these values)
m = size(data, 2); % number of training examples

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the loss for the Sparse Autoencoder and gradients
%                W1grad, W2grad, b1grad, b2grad
%
%  Hint: 1) data(:,i) is the i-th example
%        2) your computation of loss and gradients should match the size
%           above for loss, W1grad, W2grad, b1grad, b2grad

% z2 = W1 * x + b1
% a2 = f(z2)
% z3 = W2 * a2 + b2
% h_Wb = a3 = f(z3)

z2 = W1 * data + repmat(b1, [1, m]);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, [1, m]);
a3 = z3; % linear decoder: the output activation is the identity

rhohats = mean(a2,2);
rho = sparsityParam;
KLsum = sum(rho * log(rho ./ rhohats) + (1-rho) * log((1-rho) ./ (1-rhohats)));

squares = (a3 - data).^2;
squared_err_J = (1/2) * (1/m) * sum(squares(:));
weight_decay_J = (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2));
sparsity_J = beta * KLsum;

cost = squared_err_J + weight_decay_J + sparsity_J; % total cost

% delta3 = -(data - a3) .* fprime(z3);
% but for the linear decoder a3 = z3, so fprime(z3) = 1 and the factor drops out
delta3 = -(data - a3);
beta_term = beta * (- rho ./ rhohats + (1-rho) ./ (1-rhohats));
delta2 = ((W2' * delta3) + repmat(beta_term, [1,m])) .* a2 .* (1-a2);

W2grad = (1/m) * delta3 * a2' + lambda * W2;
b2grad = (1/m) * sum(delta3, 2);
W1grad = (1/m) * delta2 * data' + lambda * W1;
b1grad = (1/m) * sum(delta2, 2);

%-------------------------------------------------------------------
% Convert weights and bias gradients to a compressed form
% This step will concatenate and flatten all your gradients to a vector
% which can be used in the optimization method.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% We are giving you the sigmoid function, you may find this function
% useful in your computation of the loss and the gradients.
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
References:
Deep learning: 17 (Linear Decoders, Convolution and Pooling)
Exercise: Learning color features with Sparse Autoencoders
Reposted from: https://www.cnblogs.com/tornadomeet/archive/2013/04/08/3007435.html