LeNet Network Configuration File: lenet_train_test.prototxt
The .prototxt file defines the structure of a network: by reading it we can understand how a network is designed, and we can also use it to build our own networks. The format comes from Google's Protocol Buffers, which was later open-sourced and is mainly used for large-scale data storage, transport protocol formats, and similar scenarios. https://blog.csdn.net/liuyuzhu111/article/details/52253491
Compared with XML, Protocol Buffers reduces the time and space overhead of parsing, supports multiple languages, is forward and backward compatible, and comes with a code-generation mechanism.
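Because the .proto schema generates parser code, a prototxt file can be read from any supported language. Below is a minimal sketch in Python, assuming a Caffe installation whose Python package ships the classes compiled from caffe.proto as caffe_pb2:

    from google.protobuf import text_format
    from caffe.proto import caffe_pb2

    # NetParameter is the message type generated from caffe.proto
    net = caffe_pb2.NetParameter()
    with open('lenet_train_test.prototxt') as f:
        text_format.Parse(f.read(), net)  # parse the human-readable text format

    print(net.name)  # "LeNet"
    for layer in net.layer:
        print(layer.name, layer.type)  # e.g. conv1 Convolution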
lenet_train_test.prototxt 是經(jīng)典的lenet網(wǎng)絡(luò)的文件,使用在MNIST手寫(xiě)字符分類中。下面是對(duì)它的一些簡(jiǎn)單的注釋
name: "LeNet" layer {name: "mnist"type: "Data"top: "data" //輸出datatop: "label" //輸出標(biāo)簽include {phase: TRAIN //在訓(xùn)練階段才在網(wǎng)絡(luò)中加入這一層}transform_param {#mean_file: "mean.binaryproto" //均值文件scale: 0.00390625 //對(duì)所有的圖片歸一化到0~1之間,也就是對(duì)輸入數(shù)據(jù)全部乘以scale,0.0039= 1/255}data_param {source: "examples/mnist/mnist_train_lmdb" // 從指定路徑讀取文件 所以沒(méi)有bottombatch_size: 64 //batch大小是64 太大不行 太小也不行 等于1時(shí)是online learningbackend: LMDB //數(shù)據(jù)類型是lmdb} } layer {name: "mnist"type: "Data"top: "data"top: "label"include {phase: TEST //在測(cè)試階段才在網(wǎng)絡(luò)中加入這一層}transform_param {#mean_file: "mean.binaryproto"scale: 0.00390625}data_param {source: "examples/mnist/mnist_test_lmdb"batch_size: 100backend: LMDB} } layer {name: "conv1"type: "Convolution"bottom: "data"top: "conv1"param {lr_mult: 1 //學(xué)習(xí)率 在這種情況下,我們將設(shè)置權(quán)重學(xué)習(xí)率與求解器在運(yùn)行時(shí)給出的學(xué)習(xí)率相同,//并且偏差學(xué)習(xí)率為此的兩倍 - 這通常會(huì)導(dǎo)致更好的收斂率。}param {lr_mult: 2}convolution_param {num_output: 20 //產(chǎn)生20個(gè)通道kernel_size: 5 //卷積核大小stride: 1 //卷積核滑動(dòng)步長(zhǎng)weight_filler { //權(quán)重初始化type: "xavier"//該算法將根據(jù)輸入和輸出的神經(jīng)元數(shù)目自動(dòng)確定初始化的規(guī)模}bias_filler { //偏置填充初始化type: "constant"}} } layer {name: "pool1"type: "Pooling"bottom: "conv1"top: "pool1"pooling_param {pool: MAX //最大池化kernel_size: 2 //每個(gè)pool大小是2x2 stride: 2 //步長(zhǎng)2,大小2x2,所以沒(méi)有重疊} } layer {name: "conv2"type: "Convolution"bottom: "pool1" //連接在pool1層之后 top: "conv2"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 50kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant" }} } layer {name: "pool2"type: "Pooling"bottom: "conv2"top: "pool2"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "ip1"type: "InnerProduct" //內(nèi)積?bottom: "pool2"top: "ip1"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 500weight_filler {type: "xavier"}bias_filler {type: "constant"}} } layer { ?//同址計(jì)算(in-place computation,返回值覆蓋原值而占用新的內(nèi)存)//neuron_layers.hpp中,其派生類主要是元素級(jí)別的運(yùn)算(比如Dropout運(yùn)算,激活函數(shù)ReLu,Sigmoid等name: "relu1"//ReLU是按元素操作的,in-place操作,節(jié)省內(nèi)存type: "ReLU"//通過(guò)讓輸入輸出同名,即新的變量代替之前的bottom: "ip1"top: "ip1" } layer {name: "ip2"type: "InnerProduct"bottom: "ip1"top: "ip2"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 10weight_filler {type: "xavier"}bias_filler {type: "constant"}} } layer {name: "accuracy"type: "Accuracy"bottom: "ip2"bottom: "label"top: "accuracy"include { //決定了這一層什么時(shí)候被包含在網(wǎng)絡(luò)中phase: TEST // accuracy只存在于測(cè)試階段} } layer {name: "loss"type: "SoftmaxWithLoss"bottom: "ip2"bottom: "label"top: "loss" }