PyTorch notes: torchsummary
Purpose: print the structure of a neural network.
As an example, we use the CNN built in the post "PyTorch notes: building a simple CNN" (UQI-LIUWJ's blog on CSDN).
```python
import torch
import torch.nn as nn
from torchsummary import summary

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(
                in_channels=1,    # input shape (1, 28, 28)
                out_channels=16,  # output shape (16, 28, 28); 16 is also the number of kernels
                kernel_size=5,
                stride=1,
                padding=2),       # to keep H/W unchanged with stride=1, set padding=(kernel_size-1)/2
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2))  # downsample over 2x2 windows -> output shape (16, 14, 14)
        self.conv2 = nn.Sequential(
            nn.Conv2d(
                in_channels=16,   # input shape (16, 14, 14)
                out_channels=32,  # output shape (32, 14, 14)
                kernel_size=5,
                stride=1,
                padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2))  # output shape (32, 7, 7)
        self.fc = nn.Linear(32 * 7 * 7, 10)  # 10 outputs: one score per digit class

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.shape[0], -1)  # flatten to (batch, 32*7*7)
        x = self.fc(x)
        return x

cnn = CNN()
summary(cnn, (1, 28, 28))
```

The output looks like this:
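The shapes in the comments above follow the standard convolution/pooling output-size formula, out = floor((in + 2·padding − kernel_size) / stride) + 1. A quick pure-Python check (using a small hypothetical helper `conv_out`, not part of torchsummary):

```python
def conv_out(size, kernel_size, stride=1, padding=0):
    # standard conv/pool output-size formula
    return (size + 2 * padding - kernel_size) // stride + 1

# conv1: 28x28 input, 5x5 kernel, stride 1, padding 2 -> spatial size preserved
print(conv_out(28, kernel_size=5, stride=1, padding=2))  # 28
# 2x2 max-pool (in PyTorch, stride defaults to the kernel size): 28 -> 14
print(conv_out(28, kernel_size=2, stride=2))             # 14
# conv2 + pool on the 14x14 maps: 14 -> 14 -> 7
print(conv_out(conv_out(14, 5, 1, 2), 2, 2))             # 7
```

This matches the (16, 14, 14) and (32, 7, 7) shapes noted in the comments, and confirms why padding=(kernel_size-1)/2 keeps the spatial size fixed when stride=1.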
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 16, 28, 28]             416
              ReLU-2           [-1, 16, 28, 28]               0
         MaxPool2d-3           [-1, 16, 14, 14]               0
            Conv2d-4           [-1, 32, 14, 14]          12,832
              ReLU-5           [-1, 32, 14, 14]               0
         MaxPool2d-6             [-1, 32, 7, 7]               0
            Linear-7                   [-1, 10]          15,690
================================================================
Total params: 28,938
Trainable params: 28,938
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.32
Params size (MB): 0.11
Estimated Total Size (MB): 0.44
----------------------------------------------------------------
```
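The Param # column can be reproduced by hand: a Conv2d layer has out_channels × (in_channels × kH × kW + 1) parameters (the +1 is the per-channel bias), and a Linear layer has out_features × (in_features + 1). A quick arithmetic check against the table above:

```python
# Conv2d params = out_channels * (in_channels * kH * kW + 1)   (+1 for the bias)
conv1 = 16 * (1 * 5 * 5 + 1)    # 416
conv2 = 32 * (16 * 5 * 5 + 1)   # 12832
# Linear params = out_features * (in_features + 1)
fc = 10 * (32 * 7 * 7 + 1)      # 15690
total = conv1 + conv2 + fc
print(conv1, conv2, fc, total)  # 416 12832 15690 28938

# "Params size (MB)": each float32 parameter takes 4 bytes
print(round(total * 4 / 1024 ** 2, 2))  # 0.11
```

These match the table's per-layer counts, the total of 28,938, and the 0.11 MB parameter size (ReLU and MaxPool2d have no parameters, hence their 0 entries).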