tf.train.Saver
Saving the parameters of a trained model so that it can be validated or tested later is something we do all the time. TensorFlow provides the tf.train.Saver() module for saving models.
To save a model, first create a Saver object, e.g.:
saver = tf.train.Saver()

When creating this Saver object there is one argument we use often: max_to_keep, which sets how many models to retain. It defaults to 5 (max_to_keep=5), i.e. the 5 most recent checkpoints are kept. If you want to keep a model from every training epoch, set max_to_keep to None or 0, e.g.:
saver = tf.train.Saver(max_to_keep=0)

Beyond using extra disk space, though, this has little practical benefit, so it is not recommended.
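If disk usage is the concern on long runs, a middle ground worth knowing about is the Saver's keep_checkpoint_every_n_hours argument, which permanently keeps one checkpoint every N hours in addition to the max_to_keep most recent ones. A sketch:

# Keep the 5 most recent checkpoints, plus one permanent copy every 2 hours.
saver = tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)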
Of course, if you only want to keep the model from the final epoch, just set max_to_keep to 1:
saver = tf.train.Saver(max_to_keep=1)

Once the Saver object is created, you can save the trained model, e.g.:
saver.save(sess, 'ckpt/mnist.ckpt', global_step=step)

The first argument is the session, which needs no explanation. The second sets the save path and filename, and the third appends the training step count as a suffix to the model name:
saver.save(sess, 'my-model', global_step=0)    ==>  filename: 'my-model-0'
...
saver.save(sess, 'my-model', global_step=1000) ==>  filename: 'my-model-1000'
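Note that each save actually writes several files rather than a single one: a .meta file holding the graph, .index and .data-* files holding the variable values, and one 'checkpoint' bookkeeping file per directory recording which saves are retained. A small sketch for inspecting them, assuming checkpoints were written under ckpt/ as above (the step suffix shown is illustrative):

# Read the 'checkpoint' bookkeeping file that Saver maintains in the directory.
ckpt = tf.train.get_checkpoint_state('ckpt/')
print(ckpt.model_checkpoint_path)       # most recent save, e.g. 'ckpt/mnist.ckpt-100'
print(ckpt.all_model_checkpoint_paths)  # all retained saves (up to max_to_keep)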
Let's look at an MNIST example:
# -*- coding:utf-8 -*-
"""
Created on Sun Jun  4 10:29:48 2017

@author: Administrator
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)

# Placeholders for flattened 28x28 images and integer class labels.
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.int32, [None])

# A simple fully connected network with two hidden layers.
dense1 = tf.layers.dense(inputs=x,
                         units=1024,
                         activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)
dense2 = tf.layers.dense(inputs=dense1,
                         units=512,
                         activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)
logits = tf.layers.dense(inputs=dense2,
                         units=10,
                         activation=None,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)

loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

saver = tf.train.Saver(max_to_keep=1)
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
    val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('epoch:%d, val_loss:%f, val_acc:%f' % (i, val_loss, val_acc))
    saver.save(sess, 'ckpt/mnist.ckpt', global_step=i+1)
sess.close()
The saver.save call at the end of the loop body is the model-saving code. Although a checkpoint is written after every epoch, each save overwrites the previous one (max_to_keep=1), so only the last epoch's model survives. We can therefore save some time by moving the save call outside the loop (this only applies to max_to_keep=1; otherwise the save still needs to stay inside the loop).
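For reference, a minimal sketch of that variant, reusing the graph, session, and data defined above:

saver = tf.train.Saver(max_to_keep=1)
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
# One save after training: only the final model is written.
saver.save(sess, 'ckpt/mnist.ckpt', global_step=100)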
In practice, the last epoch is often not the one with the highest validation accuracy, so rather than defaulting to the final model we may want to save the checkpoint with the best validation accuracy. An extra variable and an if statement are all that is needed:
saver = tf.train.Saver(max_to_keep=1)
max_acc = 0
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
    val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('epoch:%d, val_loss:%f, val_acc:%f' % (i, val_loss, val_acc))
    # Only save when validation accuracy improves on the best seen so far.
    if val_acc > max_acc:
        max_acc = val_acc
        saver.save(sess, 'ckpt/mnist.ckpt', global_step=i+1)
sess.close()
If we want to keep the three checkpoints with the highest validation accuracy, and also record each epoch's validation accuracy as we go, we can write the accuracies out to a txt file:
saver = tf.train.Saver(max_to_keep=3)
max_acc = 0
f = open('ckpt/acc.txt', 'w')
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
    val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('epoch:%d, val_loss:%f, val_acc:%f' % (i, val_loss, val_acc))
    # Log every epoch's validation accuracy to the txt file.
    f.write(str(i+1) + ', val_acc: ' + str(val_acc) + '\n')
    if val_acc > max_acc:
        max_acc = val_acc
        saver.save(sess, 'ckpt/mnist.ckpt', global_step=i+1)
f.close()
sess.close()
Restoring a model is done with the restore() function. It takes two arguments, restore(sess, save_path), where save_path is the path of the saved model. We can use tf.train.latest_checkpoint() to fetch the most recently saved model automatically, e.g.:
model_file = tf.train.latest_checkpoint('ckpt/')
saver.restore(sess, model_file)
The latter half of the program can then be rewritten as:
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

is_train = False
saver = tf.train.Saver(max_to_keep=3)

# Training phase
if is_train:
    max_acc = 0
    f = open('ckpt/acc.txt', 'w')
    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
        val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
        print('epoch:%d, val_loss:%f, val_acc:%f' % (i, val_loss, val_acc))
        f.write(str(i+1) + ', val_acc: ' + str(val_acc) + '\n')
        if val_acc > max_acc:
            max_acc = val_acc
            saver.save(sess, 'ckpt/mnist.ckpt', global_step=i+1)
    f.close()

# Validation phase
else:
    model_file = tf.train.latest_checkpoint('ckpt/')
    saver.restore(sess, model_file)
    val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('val_loss:%f, val_acc:%f' % (val_loss, val_acc))
sess.close()
The Saver-related lines are the parts that save and restore the model; a single boolean variable, is_train, switches between the training and validation phases.
The full program:
# -*- coding:utf-8 -*-
"""
Created on Sun Jun  4 10:29:48 2017

@author: Administrator
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.int32, [None])

dense1 = tf.layers.dense(inputs=x,
                         units=1024,
                         activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)
dense2 = tf.layers.dense(inputs=dense1,
                         units=512,
                         activation=tf.nn.relu,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)
logits = tf.layers.dense(inputs=dense2,
                         units=10,
                         activation=None,
                         kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                         kernel_regularizer=tf.nn.l2_loss)

loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

is_train = True
saver = tf.train.Saver(max_to_keep=3)

# Training phase
if is_train:
    max_acc = 0
    f = open('ckpt/acc.txt', 'w')
    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_op, feed_dict={x: batch_xs, y_: batch_ys})
        val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
        print('epoch:%d, val_loss:%f, val_acc:%f' % (i, val_loss, val_acc))
        f.write(str(i+1) + ', val_acc: ' + str(val_acc) + '\n')
        if val_acc > max_acc:
            max_acc = val_acc
            saver.save(sess, 'ckpt/mnist.ckpt', global_step=i+1)
    f.close()

# Validation phase
else:
    model_file = tf.train.latest_checkpoint('ckpt/')
    saver.restore(sess, model_file)
    val_loss, val_acc = sess.run([loss, acc], feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print('val_loss:%f, val_acc:%f' % (val_loss, val_acc))
sess.close()
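As a closing note: a separate validation script does not even have to rebuild the graph in code. TF 1.x can reload the graph definition from a saved .meta file and then restore the weights into it. A minimal sketch, assuming a checkpoint such as ckpt/mnist.ckpt-100 exists (the step suffix here is illustrative):

import tensorflow as tf

sess = tf.InteractiveSession()
# Rebuild the graph from the saved meta file instead of redefining it in code;
# import_meta_graph returns a Saver for the imported graph.
saver = tf.train.import_meta_graph('ckpt/mnist.ckpt-100.meta')
# Then restore the variable values from the latest checkpoint.
saver.restore(sess, tf.train.latest_checkpoint('ckpt/'))
sess.close()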