TensorFlow 2.0 - Saving Variables with Checkpoint, Visualizing Training with TensorBoard
Table of Contents
- 1. Saving Variables with Checkpoint
- 2. Visualizing the Training Process with TensorBoard
Study notes from: 简单粗暴 TensorFlow 2 (A Concise Handbook of TensorFlow 2)
1. Saving Variables with Checkpoint
- tf.train.Checkpoint can save and restore tf.keras.optimizers, tf.Variable, tf.keras.layers.Layer, and tf.keras.Model objects, as the example below shows.
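The save/restore example that follows uses `mymodel` and `LinearModel` without defining them; the post does not include their definitions, so here is a minimal hypothetical stand-in (the single-unit Dense layer is an assumption) so the snippet can run end to end:

```python
import tensorflow as tf

# Hypothetical model definition (not in the original post) so the
# save / restore snippet below has something to checkpoint.
class LinearModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # a single dense unit: y = w * x + b
        self.dense = tf.keras.layers.Dense(units=1)

    def call(self, inputs):
        return self.dense(inputs)

mymodel = LinearModel()
mymodel(tf.constant([[1.0]]))  # call once so the variables are created before saving
```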
```python
# save the model's variables under the prefix "./checkp.ckpt"
path = "./checkp.ckpt"
mycheckpoint = tf.train.Checkpoint(mybestmodel=mymodel)
mycheckpoint.save(path)

# restore into a freshly created model
restored_model = LinearModel()
mycheckpoint = tf.train.Checkpoint(mybestmodel=restored_model)
path = "./checkp.ckpt-1"
mycheckpoint.restore(path)

X_test = tf.constant([[5.1], [6.1]])
res = restored_model.predict(X_test)
print(res)
```
- Restore the most recent model: automatically pick the latest checkpoint in the directory (the one with the largest suffix number)
```python
mycheckpoint.restore(tf.train.latest_checkpoint("./"))
```
- Manage saved checkpoints: often you don't need to keep every save, since they take up disk space
```python
mycheckpoint = tf.train.Checkpoint(mybestmodel=mymodel)
manager = tf.train.CheckpointManager(mycheckpoint, directory="./",
                                     checkpoint_name='checkp.ckpt', max_to_keep=2)

# inside the training loop
for idx in range(num_batches):
    # ... one training step ...
    manager.save()                             # auto-numbered save
    # or give the save an explicit number:
    # manager.save(checkpoint_number=idx)
```
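Continuing with the same manager (a usage sketch, not in the original post): with `max_to_keep=2` only the two most recent saves stay on disk, and the manager keeps track of them for restoring:

```python
# the retained checkpoint prefixes (at most two here)
print(manager.checkpoints)
# restore the newest retained checkpoint
mycheckpoint.restore(manager.latest_checkpoint)
```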
2. Visualizing the Training Process with TensorBoard
- summary_writer = tf.summary.create_file_writer(logdir=log_dir)
- tf.summary.scalar(name='loss', data=loss, step=idx)
- tf.summary.trace_on(profiler=True)
```python
# inside the training loop: record the scalar loss at every step
for idx in range(num_batches):
    # ... compute loss ...
    with summary_writer.as_default():
        tf.summary.scalar(name='loss', data=loss, step=idx)

# after training: export the trace / profiler data collected by trace_on
with summary_writer.as_default():
    tf.summary.trace_export(name='model_trace', step=0,
                            profiler_outdir=log_dir)
```
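Note that `tf.summary.trace_export` only has something to write if tracing was switched on beforehand, and a graph is only recorded when the traced computation runs inside a `tf.function`. A minimal standalone sketch under that assumption (the writer path and function name here are illustrative, not from the post):

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("./log")

@tf.function
def square(x):
    return x * x

tf.summary.trace_on(graph=True, profiler=False)
square(tf.constant(2.0))                 # the first call builds and traces the graph
with writer.as_default():
    tf.summary.trace_export(name="demo_trace", step=0)
```

The complete training script below combines the per-step scalar logging with the final trace export: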
```python
import tensorflow as tf
import numpy as np


class MNistLoader():
    def __init__(self):
        # load MNIST, scale pixels to [0, 1], and add a channel dimension
        data = tf.keras.datasets.mnist
        (self.train_data, self.train_label), (self.test_data, self.test_label) = data.load_data()
        self.train_data = np.expand_dims(self.train_data.astype(np.float32) / 255.0, axis=-1)
        self.test_data = np.expand_dims(self.test_data.astype(np.float32) / 255.0, axis=-1)
        self.train_label = self.train_label.astype(np.int32)
        self.test_label = self.test_label.astype(np.int32)
        self.num_train_data, self.num_test_data = self.train_data.shape[0], self.test_data.shape[0]

    def get_batch(self, batch_size):
        # sample a random batch of training examples
        idx = np.random.randint(0, self.num_train_data, batch_size)
        return self.train_data[idx, :], self.train_label[idx]


class MLPmodel(tf.keras.Model):
    # two-layer MLP with a softmax output
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(units=100, activation='relu')
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, input):
        x = self.flatten(input)
        x = self.dense1(x)
        x = self.dense2(x)
        output = tf.nn.softmax(x)
        return output


num_epochs = 5
batch_size = 50
learning_rate = 1e-4
log_dir = './log'

mymodel = MLPmodel()
data_loader = MNistLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size * num_epochs)

# writer for TensorBoard logs; turn on tracing before training starts
summary_writer = tf.summary.create_file_writer(logdir=log_dir)
tf.summary.trace_on(profiler=True)

for idx in range(num_batches):
    X, y = data_loader.get_batch(batch_size)
    with tf.GradientTape() as tape:
        y_pred = mymodel(X)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
        print("batch {}, loss {}".format(idx, loss.numpy()))
        # log the loss for this step
        with summary_writer.as_default():
            tf.summary.scalar(name='loss', data=loss, step=idx)
    grads = tape.gradient(loss, mymodel.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, mymodel.variables))

# export the collected trace / profiler information
with summary_writer.as_default():
    tf.summary.trace_export(name='model_trace', step=0, profiler_outdir=log_dir)
```
- Start training, then launch the visualization UI from the command line: tensorboard --logdir=./log
- Click the link printed in the terminal to open it in a browser and view the training curves
- To retrain from scratch, delete the log files (or point log_dir somewhere else), then restart TensorBoard from the command line and reopen the browser