[Data Platform] A First Look at the PyTorch Library
PyTorch is a deep learning tensor library optimized for both GPUs and CPUs.
1. Installation. See the official site: http://pytorch.org/
conda install pytorch torchvision -c pytorch

(A quick check that the installation works is sketched below, after the reference links.)

2. Getting to know PyTorch. References:
https://github.com/yunjey/pytorch-tutorial
https://github.com/jcjohnson/pytorch-examples
http://pytorch-cn.readthedocs.io/zh/latest/
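After installing and before working through the tutorials, a quick sanity check is worthwhile. A minimal sketch (the CUDA line only reports whether a GPU build and device are usable):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True only if a usable CUDA device is found

# Tiny smoke test: multiply two random matrices on the CPU
a = torch.randn(3, 4)
b = torch.randn(4, 5)
print(a.mm(b).shape)              # torch.Size([3, 5])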
3. Demo:
# Code in file tensor/two_layer_net_tensor.py
import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

The results (the loss printed at each iteration) are as follows:
The results from the same network implemented in NumPy are:
The code is as follows:
# Code in file tensor/two_layer_net_numpy.py
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
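Both versions above implement the backward pass by hand. In practice the same two-layer network is usually written with PyTorch's autograd, which derives the gradients from loss.backward(). A minimal sketch, using the newer requires_grad style rather than the FloatTensor dtype shown in the demo:

# Two-layer net with autograd (sketch): same shapes as above
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# requires_grad=True asks autograd to track gradients for the weights
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: same computation, no manual backprop needed
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    # Autograd fills w1.grad and w2.grad
    loss.backward()

    # Update weights in-place, outside of gradient tracking
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()

The forward pass is unchanged; only the hand-written gradient code disappears.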
Summary
Depending on the actual application scenario, this can be studied in more depth later; the key focus going forward is the GPU.
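As a starting point for that, moving the tensor demo onto the GPU mostly means placing the tensors on a CUDA device. A minimal sketch, assuming a CUDA-capable GPU and a CUDA build of PyTorch (with the older API in the demo this is the torch.cuda.FloatTensor dtype; with the newer API it looks like this):

import torch

# Use the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

N, D_in, H, D_out = 64, 1000, 100, 10

# Creating tensors directly on the device keeps the whole
# training loop from the demo running on the GPU
x = torch.randn(N, D_in, device=device)
y = torch.randn(N, D_out, device=device)
w1 = torch.randn(D_in, H, device=device)
w2 = torch.randn(H, D_out, device=device)

# The forward/backward/update code is unchanged; for example:
h_relu = x.mm(w1).clamp(min=0)
y_pred = h_relu.mm(w2)
print(y_pred.device)  # prints cuda:0 when a GPU is in use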