PyTorch Training Visualization Tutorial: visdom
Quick start with visdom
Default address: http://localhost:8097/
Introduction to Visdom
Visdom is a real-time visualization toolkit developed by Facebook that works particularly well with PyTorch, playing much the same role as TensorBoard does for TensorFlow: it is flexible, efficient, and has a clean web interface. Let's walk through how to use it; for more details, refer to the official documentation.
First, have a look at the visualization interface shown in the official Visdom examples.
Installing Visdom
- Installation is very simple: open a terminal (cmd on Windows) and run the following command.

pip install visdom
Using Visdom
Similar to TensorBoard for TensorFlow, Visdom requires you to first start a listening server in a terminal and then open the displayed address, http://localhost:8097, in a browser. If an error is reported at this point, don't worry: the issue is well documented online, and recent versions of visdom have fixed it, so updating with pip install --upgrade visdom is usually enough.

Start the server:

python -m visdom.server
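As a quick sanity check you can also point the Python client at the server explicitly and verify the connection; if port 8097 is already taken, the server can usually be started on another port (e.g. python -m visdom.server -port 8098). A minimal sketch, where the server, port, and env values are just examples:

from visdom import Visdom

# Connect to the running visdom server (adjust host/port/env as needed)
viz = Visdom(server='http://localhost', port=8097, env='main')
assert viz.check_connection(), 'Could not connect -- is `python -m visdom.server` running?'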
Example 1: plotting a live curve
- The method is to create the window with a starting point and then push updates with new data points.
from visdom import Visdom

# Set up a window with a single tracked curve
viz = Visdom()
viz.line([0.], [0.], win="train loss", opts=dict(title='train_loss'))

# Append a new data point (in practice, produced by the model during training)
viz.line([1.], [1.], win="train loss", update='append')
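In a real script the append call usually sits inside the training loop, pushing one point per step. A minimal sketch (the loss values below are made up purely for illustration):

import time
from visdom import Visdom

viz = Visdom()
viz.line([0.], [0.], win="train loss", opts=dict(title='train_loss'))

for step in range(1, 11):
    fake_loss = 1.0 / step                 # stand-in for a real loss value
    viz.line([fake_loss], [step], win="train loss", update='append')
    time.sleep(0.1)                        # slow down so the curve visibly grows in the browser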
Example 2: plotting multiple curves
from visdom import Visdom

# Plotting multiple curves simply means passing a vector of y values for each x
viz = Visdom(env='my_wind')
viz.line([[0.0, 0.0]], [0.], win="test loss", opts=dict(title='test_loss'))

# Append one new data point per curve
viz.line([[1.1, 1.5]], [1.], win="test loss", update='append')
- Note that you have to switch the environment dropdown in the web UI to my_wind before the chart becomes visible, as shown in the figure.
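To tell the two curves apart, legend labels can be set through opts. A small sketch reusing the window above (the label names train/val are arbitrary):

from visdom import Visdom

viz = Visdom(env='my_wind')
# 'legend' is a standard opts key for line plots; the labels are only examples
viz.line([[0.0, 0.0]], [0.], win="test loss",
         opts=dict(title='test_loss', legend=['train', 'val']))
viz.line([[1.1, 1.5]], [1.], win="test loss", update='append')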
Example 3: displaying images
from visdom import Visdom
import numpy as np

# A batch of 6 random "images", each with 3 channels and 200x300 pixels
image = np.random.randn(6, 3, 200, 300)

viz = Visdom(env='my_image')
viz.images(image, win='x')
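For a single image there is also viz.image, which expects a C×H×W (or H×W) array. A minimal sketch with random data (window and title names are arbitrary):

from visdom import Visdom
import numpy as np

viz = Visdom(env='my_image')
single = np.random.rand(3, 200, 300)       # one RGB-like image with values in [0, 1]
viz.image(single, win='single_image', opts=dict(title='single image'))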
Example 4: visualizing a dataset
from visdom import Visdom
import numpy as np
import torch
from torchvision import datasets, transforms

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(r'D:\data', train=True, download=False,
                   transform=transforms.Compose([transforms.ToTensor()])),
    batch_size=128, shuffle=True)

# Take one batch and display it as a grid with 16 images per row
sample = next(iter(train_loader))
viz = Visdom(env='my_visual')
viz.images(sample[0], nrow=16, win='mnist', opts=dict(title='mnist'))
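Continuing directly from the snippet above, it can be handy to also show the labels of the displayed batch; a minimal sketch using viz.text (the window name mnist_labels is just an example):

# 'sample' and 'viz' come from the snippet above: sample = (images, labels)
labels = sample[1]
viz.text(str(labels.tolist()), win='mnist_labels', opts=dict(title='mnist labels'))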
Example 5: training
- Below, a full training run is visualized with visdom.
- To keep the demonstration simple, the built-in MNIST dataset is used.
# Import libraries
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from visdom import Visdom
import numpy as np

# Build a simple model: a multilayer perceptron of linear layers + ReLU
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)
        return x

batch_size = 128
learning_rate = 0.01
epochs = 10

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST(r'D:\Users\Administrator\Desktop\PythonDLbasedonPytorch\data',
                   train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST(r'D:\Users\Administrator\Desktop\PythonDLbasedonPytorch\data',
                   train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=batch_size, shuffle=True)

# Create the visdom window before training starts
viz = Visdom()
viz.line([0.], [0.], win="train loss", opts=dict(title='train_loss'))

device = torch.device('cuda:0')
net = MLP().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    # Training
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.cuda()

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    # Evaluation
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.cuda()
        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.argmax(dim=1)
        correct += pred.eq(target).float().sum().item()

    test_loss /= len(test_loader.dataset)
    # Append this epoch's average test loss to the visdom curve (the window is
    # named "train loss" in the original code, although it plots the test loss)
    viz.line([test_loss], [epoch], win="train loss", update='append')
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
The resulting loss curve is shown in the figure.
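The same pattern extends to tracking several test metrics in one window, using the multi-curve form from Example 2. Below is a sketch of only the relevant lines, reusing viz, test_loss, correct, and epoch from the training script above; the window name and legend labels are arbitrary:

# Create the window once, before the training loop.
viz.line([[0.0, 0.0]], [0.], win='test metrics',
         opts=dict(title='test loss & accuracy', legend=['loss', 'accuracy']))

# After each epoch's evaluation, append both values.
acc = correct / len(test_loader.dataset)
viz.line([[test_loss, acc]], [epoch], win='test metrics', update='append')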
Basic visdom visualization functions
- Detailed usage for each function can still be found in the documentation mentioned above; for reasons of space, only the most commonly used functions (mainly line and image) and their options are listed here.
The basic visualization functions are:
- vis.image: a single image
- vis.line: line plots / curves
- vis.images: a grid (list) of images
- vis.text: arbitrary HTML / text output
- vis.properties: a properties grid
- vis.audio: audio
- vis.video: video
- vis.svg: an SVG object
- vis.matplot: a matplotlib figure
- vis.save: serialize the server state to disk
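As an illustration of two of these, vis.text and vis.matplot can be used roughly as follows (a minimal sketch; the window names are arbitrary):

import matplotlib
matplotlib.use('Agg')                      # headless backend; visdom renders the figure itself
import matplotlib.pyplot as plt
from visdom import Visdom

viz = Visdom()
viz.text('Hello <b>Visdom</b>!', win='demo_text')   # vis.text accepts plain text or HTML

plt.plot([1, 2, 3], [1, 4, 9])
viz.matplot(plt, win='demo_matplot')       # send the current matplotlib figure to the browser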
Options for the functions above
- Note that all of the opts parameters can be passed in as a Python dict; see the example after this list.
- opts.title: figure title
- opts.width: figure width
- opts.height: figure height
- opts.showlegend: show the legend (true or false)
- opts.xtype: type of the x-axis ('linear' or 'log')
- opts.xlabel: label of the x-axis
- opts.xtick: show ticks on the x-axis (boolean)
- opts.xtickmin: first tick on the x-axis (number)
- opts.xtickmax: last tick on the x-axis (number)
- opts.xtickvals: locations of ticks on the x-axis (table of numbers)
- opts.xticklabels: tick labels on the x-axis (table of strings)
- opts.xtickstep: distance between ticks on the x-axis (number)
- opts.xtickfont: font for x-axis labels (dict of font information)
- opts.ytype: type of the y-axis ('linear' or 'log')
- opts.ylabel: label of the y-axis
- opts.ytick: show ticks on the y-axis (boolean)
- opts.ytickmin: first tick on the y-axis (number)
- opts.ytickmax: last tick on the y-axis (number)
- opts.ytickvals: locations of ticks on the y-axis (table of numbers)
- opts.yticklabels: tick labels on the y-axis (table of strings)
- opts.ytickstep: distance between ticks on the y-axis (number)
- opts.ytickfont: font for y-axis labels (dict of font information)
- opts.marginleft: left margin (in pixels)
- opts.marginright: right margin (in pixels)
- opts.margintop: top margin (in pixels)
- opts.marginbottom: bottom margin (in pixels)
- opts.legend: legend labels for the curves (list of strings)
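As mentioned above, all of these can be passed as a plain Python dict. A small sketch (names and values are only illustrative):

from visdom import Visdom

viz = Visdom()
opts = {
    'title': 'loss curves',
    'xlabel': 'epoch',
    'ylabel': 'loss',
    'width': 450,
    'height': 320,
    'showlegend': True,
    'legend': ['train', 'val'],
}
viz.line([[1.0, 1.2]], [0.], win='opts_demo', opts=opts)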
References
- 輕松學 Pytorch – Visdom 可視化
- Visdom GitHub repository
- PyTorch的遠程可視化神器visdom