【Kaggle-MNIST之路】Improved CNN structure + an improved loss function (Part 5)
Overview

Building on the framework from the previous post, this installment modifies the CNN structure.

Previous post: 【Kaggle-MNIST之路】CNN + improved loss function + multiple epochs (Part 4)
Code

- As before, the script produces the trained model object, which is saved at the end for later work such as generating predictions.
- You can adapt the prediction code from the earlier posts, but you must swap in the matching version of the data-generating Dataset class.
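As a sketch of that later use (my own illustration, not code from this post), loading the pickled model that the training script saves and running inference on a batch of images might look like this; the filename matches the `torch.save` call in the training code:

```python
import torch

# Hypothetical helper (not from the original post): load the pickled model
# and predict digit labels for a batch of 28x28 images.
def predict(model_path, batch):
    # weights_only=False is required on PyTorch >= 2.6 to unpickle a full
    # nn.Module; older versions accept plain torch.load(model_path).
    net = torch.load(model_path, weights_only=False)
    net.eval()                       # freeze Dropout / BatchNorm behavior
    with torch.no_grad():
        logits = net(batch)          # batch shape: (N, 1, 28, 28)
    return logits.argmax(dim=1)      # predicted class per image
```

Note that unpickling a whole model requires the `CNN` class definition to be importable at load time, which is exactly why the matching Dataset/model code has to be kept around.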
```python
import pandas as pd
import torch
import torch.nn as nn
import torch.utils.data as data

file = './all/train.csv'
LR = 0.01


class MNISTCSVDataset(data.Dataset):
    def __init__(self, csv_file, Train=True):
        # Read the CSV lazily in chunks rather than loading it all at once.
        self.dataframe = pd.read_csv(csv_file, iterator=True)
        self.Train = Train

    def __len__(self):
        # Each __getitem__ consumes a 100-row chunk, so the number of
        # retrievable items is the row count divided by the chunk size.
        if self.Train:
            return 42000 // 100
        else:
            return 28000 // 100

    def __getitem__(self, idx):
        # idx is ignored; chunks are read sequentially from the iterator.
        chunk = self.dataframe.get_chunk(100)
        # .as_matrix() and .ix were removed in newer pandas; use .values / .iloc.
        ylabel = chunk['label'].values.astype('float')
        xdata = chunk.iloc[:, 1:].values.astype('float')
        return ylabel, xdata
```
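The chunked-reading pattern the Dataset relies on can be seen in isolation with a tiny in-memory CSV (my own demo, not from the post); each `get_chunk` call advances the reader:

```python
import io
import pandas as pd

# Toy CSV in the Kaggle train.csv shape: a label column, then pixel columns.
csv = io.StringIO("label,p0,p1\n7,0,255\n2,10,20\n1,30,40\n")
reader = pd.read_csv(csv, iterator=True)

chunk = reader.get_chunk(2)      # first two rows
print(chunk['label'].values)     # [7 2]

chunk = reader.get_chunk(2)      # only one row is left; get_chunk returns it
print(len(chunk))                # 1
```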
```python
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Block 1: two 3x3 convs, then a strided 5x5 conv in place of pooling.
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.BatchNorm2d(32),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.BatchNorm2d(32),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(32),
            nn.Dropout(0.4),
        )
        # Block 2: the same pattern with 64 channels.
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(64),
            nn.Dropout(0.4),
        )
        # A 28x28 input is reduced to 4x4 by the two blocks above.
        self.layer3 = nn.Linear(64 * 4 * 4, 10)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = x.view(x.size(0), -1)   # flatten to (batch, 64 * 4 * 4)
        x = self.layer3(x)
        return x


net = CNN()
```
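The `64 * 4 * 4` input size of `layer3` can be verified with the standard Conv2d output-size formula; this small check (my own addition, not from the post) traces a 28x28 input through the six convolutions:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Conv2d output size: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

side = 28
# (kernel, stride, padding) for the six convolutions in layer1 and layer2
for k, s, p in [(3, 1, 0), (3, 1, 0), (5, 2, 2),
                (3, 1, 0), (3, 1, 0), (5, 2, 2)]:
    side = conv_out(side, k, s, p)
print(side)  # 4, so the flattened feature vector has 64 * 4 * 4 entries
```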
```python
loss_function = nn.MultiMarginLoss()    # the "improved" multi-class hinge loss
optimizer = torch.optim.Adam(net.parameters(), lr=LR)
```
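`MultiMarginLoss` is the multi-class hinge loss: for each sample, every wrong class whose score comes within the margin of the true class's score is penalized. A toy example on hand-written scores (my own illustration, not from the post):

```python
import torch
import torch.nn as nn

loss_fn = nn.MultiMarginLoss()            # defaults: p=1, margin=1.0
scores = torch.tensor([[0.1, 0.9, 0.2]])  # one sample, three class scores
target = torch.tensor([1])                # the true class is index 1
# For each class j != 1: max(0, margin - scores[1] + scores[j]),
# averaged over the 3 classes: (max(0, 0.2) + max(0, 0.3)) / 3 = 0.5 / 3
print(loss_fn(scores, target))
```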
```python
EPOCH = 10
for epoch in range(EPOCH):
    # The chunk iterator is exhausted after one pass, so a fresh dataset
    # and loader are built at the start of every epoch.
    mydataset = MNISTCSVDataset(file)
    train_loader = torch.utils.data.DataLoader(mydataset, batch_size=1, shuffle=True)
    print('epoch %d' % epoch)
    for step, (yl, xd) in enumerate(train_loader):
        xd = xd.reshape(100, 1, 28, 28).float()   # 100-image chunk -> NCHW
        output = net(xd)
        yl = yl.long()
        loss = loss_function(output, yl.squeeze())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 40 == 0:
            print('step %d' % step, loss)

torch.save(net, 'divided-net.pkl')   # pickle the whole model for later use
```