PyTorch Basics (Part 3)
學(xué)習(xí)筆記,僅供參考,有錯(cuò)必糾
Contents
- PyTorch Basics
  - Linear Regression
    - Common Code
    - Imports
    - Generating the Data
    - Building the Neural Network Model
  - Nonlinear Regression
    - Generating the Data
    - Building the Neural Network Model
PyTorch Basics
Linear Regression
Common Code
```python
# Allow multiple outputs per notebook cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'  # the default is 'last'
```
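For illustration (hypothetical notebook cell, my addition): with `'all'`, every bare expression in a cell is echoed, whereas the default `'last'` echoes only the final one.

```python
# Contents of a notebook cell: with ast_node_interactivity = 'all' both
# results (2 and 6) are shown; with 'last' only 6 would be.
1 + 1
2 * 3
```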
# 導(dǎo)入常用的包 import torch from torch import nn,optim import numpy as np import matplotlib.pyplot as plt from torch.autograd import Variable生成數(shù)據(jù)
Generating the Data

```python
x_data = np.random.rand(100)
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = x_data * 0.1 + 0.2 + noise

plt.scatter(x_data, y_data)
plt.show()
```

[Scatter plot of the generated data]

```python
# Reshape to 2-D, because PyTorch expects a batch of samples during training
x_data = x_data.reshape(-1, 1)  # -1 infers the row count, 1 means one column, giving shape (100, 1)
y_data = y_data.reshape(-1, 1)
# Convert the numpy arrays to tensors
x_data = torch.FloatTensor(x_data)
y_data = torch.FloatTensor(y_data)
# Define the model inputs
inputs = Variable(x_data)
# Define the target labels
targets = Variable(y_data)
```
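Side note: since PyTorch 0.4, Variable has been merged into Tensor, so the Variable wrappers above are optional. A minimal sketch of the same conversion with plain tensors (the names x_np, inputs_t, etc. are my own, not from the original):

```python
# A minimal modern equivalent (PyTorch >= 0.4): Variable has been merged
# into Tensor, so plain tensors can be fed to a model directly.
import numpy as np
import torch

x_np = np.random.rand(100).reshape(-1, 1)
y_np = x_np * 0.1 + 0.2 + np.random.normal(0, 0.01, x_np.shape)

inputs_t = torch.from_numpy(x_np).float()   # same values as torch.FloatTensor(x_np)
targets_t = torch.from_numpy(y_np).float()
```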
# 定義一個(gè)自己的類,這個(gè)類需要繼承pytorch中的一個(gè)類 class MyLR(nn.Module):# 初始化方法,定義網(wǎng)絡(luò)的結(jié)構(gòu)(一般把網(wǎng)絡(luò)中具有可學(xué)習(xí)參數(shù)的層放在__init__()中)def __init__(self):# 初始化父類nn.Modulesuper(MyLR, self).__init__()self.fc = nn.Linear(1, 1) # (1, 1)表示輸入1個(gè)神經(jīng)元,輸出一個(gè)神經(jīng)元pass# 前向傳遞,定義網(wǎng)絡(luò)的計(jì)算def forward(self, x):out = self.fc(x)return out # 定義模型 model = MyLR() # 定義損失函數(shù) mse_loss = nn.MSELoss() # 定義優(yōu)化器 optimizer = optim.SGD(model.parameters(), lr = 0.1) # 查看模型參數(shù)(未訓(xùn)練的初始化參數(shù)) for name, parameters in model.named_parameters():print('name:{}, para:{}'.format(name, parameters)) name:fc.weight, para:Parameter containing: tensor([[0.4478]], requires_grad=True) name:fc.bias, para:Parameter containing: tensor([-0.3437], requires_grad=True) for i in range(1000):out = model(inputs)# 計(jì)算lossloss = mse_loss(out, targets)# 梯度清0,否則之前計(jì)算的梯度會(huì)疊加optimizer.zero_grad()# 計(jì)算梯度loss.backward()# 修改權(quán)值optimizer.step()if i % 100 == 0:print(i, loss.item()) 0 0.1456504464149475 100 0.0016374412225559354 200 0.00021086522610858083 300 0.00010340119479224086 400 9.530589886708185e-05 500 9.469607175560668e-05 600 9.465014591114596e-05 700 9.464671165915206e-05 800 9.464642789680511e-05 900 9.464641334488988e-05 # 計(jì)算預(yù)測(cè)值 y_pred = model(inputs) plt.scatter(x_data,y_data) plt.plot(x_data,y_pred.data.numpy(),'r-',lw=3) plt.show() <matplotlib.collections.PathCollection at 0x1dbfa5dc978>[<matplotlib.lines.Line2D at 0x1dbfa5c2c88>]非線性回歸
生成數(shù)據(jù)
```python
# Generate 200 evenly spaced values between -2 and 2 and reshape them to 2-D
x_data = np.linspace(-2, 2, 200)[:, np.newaxis]
noise = np.random.normal(0, 0.2, x_data.shape)
y_data = np.square(x_data) + noise

plt.scatter(x_data, y_data)
plt.show()
```

[Scatter plot of the noisy quadratic data]

```python
# Convert the numpy arrays to tensors
x_data = torch.FloatTensor(x_data)
y_data = torch.FloatTensor(y_data)
# Define the model inputs
inputs = Variable(x_data)
# Define the target labels
targets = Variable(y_data)
```
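As an aside, `[:, np.newaxis]` turns the 1-D output of np.linspace into a column vector, which is exactly what `.reshape(-1, 1)` did in the linear-regression section. A quick standalone illustration (my addition):

```python
import numpy as np

a = np.linspace(-2, 2, 200)
print(a.shape)                 # (200,)
print(a[:, np.newaxis].shape)  # (200, 1)
print(a.reshape(-1, 1).shape)  # (200, 1), the equivalent reshape form
```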
# 定義一個(gè)自己的類,這個(gè)類需要繼承pytorch中的一個(gè)類 class MyLR(nn.Module):# 初始化方法,定義網(wǎng)絡(luò)的結(jié)構(gòu)(一般把網(wǎng)絡(luò)中具有可學(xué)習(xí)參數(shù)的層放在__init__()中)def __init__(self):# 初始化父類nn.Modulesuper(MyLR, self).__init__()# 定義1-10-1的網(wǎng)絡(luò)結(jié)構(gòu)self.fc1 = nn.Linear(1, 10)# 設(shè)置激活函數(shù)self.tanh = nn.Tanh()self.fc2 = nn.Linear(10, 1)pass# 前向傳遞,定義網(wǎng)絡(luò)的計(jì)算def forward(self, x):x = self.fc1(x)x = self.tanh(x)out = self.fc2(x)return out # 定義模型 model = MyLR() # 定義損失函數(shù) mse_loss = nn.MSELoss() # 定義優(yōu)化器 optimizer = optim.SGD(model.parameters(), lr = 0.3) # 查看模型參數(shù)(未訓(xùn)練的初始化參數(shù)) for name, parameters in model.named_parameters():print('name:{}, para:{}'.format(name, parameters)) name:fc1.weight, para:Parameter containing: tensor([[-0.8362],[ 0.2621],[ 0.9536],[ 0.9033],[-0.5467],[ 0.1015],[-0.4320],[-0.4793],[ 0.2248],[ 0.6971]], requires_grad=True) name:fc1.bias, para:Parameter containing: tensor([ 0.6691, 0.4091, -0.4945, -0.9603, 0.7015, -0.7920, 0.9798, 0.6661,0.5042, 0.3837], requires_grad=True) name:fc2.weight, para:Parameter containing: tensor([[ 0.1721, 0.2141, -0.1088, -0.2049, -0.1316, -0.0525, 0.0242, 0.1614,0.2276, 0.0702]], requires_grad=True) name:fc2.bias, para:Parameter containing: tensor([0.2010], requires_grad=True) for i in range(2000):out = model(inputs)# 計(jì)算lossloss = mse_loss(out, targets)# 梯度清0,否則之前計(jì)算的梯度會(huì)疊加optimizer.zero_grad()# 計(jì)算梯度loss.backward()# 修改權(quán)值optimizer.step()if i % 100 == 0:print(i, loss.item()) 0 2.2675838470458984 100 0.07629517465829849 200 0.11858276277780533 300 0.08543253690004349 400 0.08349892497062683 500 0.08922693133354187 600 0.09121432155370712 700 0.08348247408866882 800 0.10780174285173416 900 0.08897148072719574 1000 0.08086799830198288 1100 0.07277610898017883 1200 0.06655674427747726 1300 0.061655644327402115 1400 0.05747130513191223 1500 0.054821696132421494 1600 0.052742671221494675 1700 0.051381915807724 1800 0.050302788615226746 1900 0.04957994446158409 # 計(jì)算預(yù)測(cè)值 y_pred = model(inputs) plt.scatter(x_data,y_data) plt.plot(x_data,y_pred.data.numpy(),'r-',lw=3) plt.show() <matplotlib.collections.PathCollection at 0x1dbfa7ce630>[<matplotlib.lines.Line2D at 0x1dbfa3e26d8>]總結(jié)