TensorFlow Notes 5 -- Model Reuse
Reference: https://github.com/ageron/handson-ml/blob/master/11_deep_learning.ipynb
一、模型復(fù)用
Method 1: copy the code that builds the model
```python
# Just restore the model parameters inside the session (replace the path with your own)
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")
```

Method 2: restore the model's graph
1、恢復(fù)模型的圖
# 路徑應(yīng)該為自己保存模型的路徑 saver = tf.train.import_meta_graph("./my_model_final.ckpt.meta")2、獲得模型的op
Method 1: list the graph's ops, then fetch the ones you need by name.

```python
# List the operations
for op in tf.get_default_graph().get_operations():
    print(op.name)

# List the variables
for var in tf.global_variables():
    print(var.name)

# Fetch tensors/ops by name -- the names must match exactly
X = tf.get_default_graph().get_tensor_by_name("X:0")
y = tf.get_default_graph().get_tensor_by_name("y:0")
accuracy = tf.get_default_graph().get_tensor_by_name("eval/accuracy:0")
training_op = tf.get_default_graph().get_operation_by_name("GradientDescent")
```

Method 2: in the original model, store the needed ops in a collection; when reusing the model, fetch them from that collection.

```python
# In the original model, add the needed ops to a collection
for op in (X, y, accuracy, training_op):
    tf.add_to_collection("my_important_ops", op)

# When reusing the model, fetch the ops from the collection
X, y, accuracy, training_op = tf.get_collection("my_important_ops")
```
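The collection used in method 2 is essentially a named, ordered registry attached to the graph. As a mental model only (a plain-Python sketch, not TensorFlow's actual implementation), it behaves like this:

```python
from collections import defaultdict

# A name -> ordered-list registry, standing in for a graph's collections
_collections = defaultdict(list)

def add_to_collection(name, op):
    # Append the op handle; insertion order is preserved
    _collections[name].append(op)

def get_collection(name):
    # Return a copy so callers cannot mutate the registry
    return list(_collections[name])

# Strings stand in for the real tensor/op handles
for op in ("X:0", "y:0", "eval/accuracy:0", "GradientDescent"):
    add_to_collection("my_important_ops", op)

X, y, accuracy, training_op = get_collection("my_important_ops")
print(X, training_op)  # X:0 GradientDescent
```

Because retrieval order equals insertion order, the tuple unpacking is safe, which is why the TensorFlow snippet above can unpack `tf.get_collection` directly.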
3) Restore the model parameters in the session

```python
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")
```

Reusing variables:
# 預(yù)測(cè)時(shí)重復(fù)恢復(fù)模型 tf.get_variable_scope().reuse_variables()二、只使用模型的一部分
法一:重新構(gòu)建圖,恢復(fù)部分模型數(shù)據(jù)
1、重新構(gòu)建圖
```python
# The layers to be reused must be defined exactly as in the original model
# Model parameters
n_inputs = 28 * 28
n_hidden1 = 300
n_hidden2 = 50
n_hidden3 = 50
n_hidden4 = 20
n_outputs = 10

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")

# Build the graph, adding the layers
with tf.name_scope("dnn"):
    hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")  # reused
    hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2")  # reused
    hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3")  # reused
    hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="hidden4")  # new!
    logits = tf.layers.dense(hidden4, n_outputs, name="outputs")  # new!

# The rest of the graph is identical to the original model
```
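A layer can only be restored if its weight shapes match the checkpoint, which is why the reused layers must keep the original sizes. A quick framework-free sanity check: each dense layer holds `n_in * n_out` kernel weights plus `n_out` biases, so you can compare parameter counts per layer (sizes taken from the snippet above):

```python
# Layer sizes of the rebuilt graph: input, hidden1-4, output
sizes = [28 * 28, 300, 50, 50, 20, 10]
names = ["hidden1", "hidden2", "hidden3", "hidden4", "outputs"]

params = {}
for name, n_in, n_out in zip(names, sizes, sizes[1:]):
    # kernel is (n_in, n_out), bias is (n_out,)
    params[name] = n_in * n_out + n_out

print(params["hidden1"])  # 235500 -- restorable only if the original hidden1 was also 784 -> 300
```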
# 創(chuàng)建saver指定要恢復(fù)的數(shù)據(jù),注意這個(gè)saver的名稱和保存現(xiàn)在的模型的saver不能一樣 reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,scope="hidden[123]") # regular expression restore_saver = tf.train.Saver(reuse_vars) # to restore layers 1-3# 保存現(xiàn)在的模型的saver saver = tf.train.Saver()# 恢復(fù)模型部分?jǐn)?shù)據(jù) with tf.Session() as sess:restore_saver.restore(sess, "./my_model_final.ckpt")法二:恢復(fù)圖,修改圖,恢復(fù)模型數(shù)據(jù)
1、恢復(fù)圖并獲得所需的op
2) Modify the graph by adding new layers
```python
# Parameters of the layers to add
n_hidden4 = 20
n_outputs = 10

# Add the new layers
new_hidden4 = tf.layers.dense(hidden3, n_hidden4, activation=tf.nn.relu, name="new_hidden4")
new_logits = tf.layers.dense(new_hidden4, n_outputs, name="new_outputs")
```

3) Use part of the model parameters
```python
# You can restore all of the parameters (and use only some of them),
# or restore only the parameters of the chosen layers
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")
```
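Conceptually, "restore everything but use part" versus "restore only part" is the difference between copying a full checkpoint into the new model and copying only selected keys. A framework-free sketch with plain dicts of NumPy arrays (all names and shapes here are hypothetical, chosen only to illustrate the idea):

```python
import numpy as np

# Hypothetical checkpoint: variable name -> saved value
checkpoint = {
    "hidden1/kernel": np.ones((4, 3)),
    "hidden2/kernel": np.ones((3, 2)),
    "outputs/kernel": np.ones((2, 2)),
}

# Freshly initialized new model (zeros stand in for random initialization)
model = {name: np.zeros_like(val) for name, val in checkpoint.items()}
model["new_outputs/kernel"] = np.zeros((2, 5))  # new layer, absent from the checkpoint

# Restore only the layers we want to reuse
for name in ("hidden1/kernel", "hidden2/kernel"):
    model[name] = checkpoint[name]

print(model["hidden1/kernel"].sum())  # 12.0 -> restored from the checkpoint
print(model["outputs/kernel"].sum())  # 0.0  -> left at its fresh initialization
```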
如果只有模型的權(quán)重?cái)?shù)據(jù)沒有圖,可以自己構(gòu)建圖,然后加載權(quán)重到相應(yīng)的層。
Method 1:
```python
# Model parameters
n_inputs = 2
n_hidden1 = 3

# Weight values
original_w = [[1., 2., 3.], [4., 5., 6.]]  # Load the weights from the other framework
original_b = [7., 8., 9.]                  # Load the biases from the other framework

# Build the graph
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")
# [...] Build the rest of the model

# Get handles on the layer's initializers so the weights can be fed in
graph = tf.get_default_graph()
assign_kernel = graph.get_operation_by_name("hidden1/kernel/Assign")
assign_bias = graph.get_operation_by_name("hidden1/bias/Assign")
init_kernel = assign_kernel.inputs[1]
init_bias = assign_bias.inputs[1]

init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Feed the weights to the initializers so they are assigned to the variables
    sess.run(init, feed_dict={init_kernel: original_w, init_bias: original_b})
    # [...] Train the model on your new task
    print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]}))
```

Method 2:
```python
n_inputs = 2
n_hidden1 = 3

original_w = [[1., 2., 3.], [4., 5., 6.]]  # Load the weights from the other framework
original_b = [7., 8., 9.]                  # Load the biases from the other framework

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")
# [...] Build the rest of the model

# Get a handle on the variables of layer hidden1
with tf.variable_scope("", default_name="", reuse=True):  # root scope
    hidden1_weights = tf.get_variable("hidden1/kernel")
    hidden1_biases = tf.get_variable("hidden1/bias")

# Create dedicated placeholders and assignment nodes
original_weights = tf.placeholder(tf.float32, shape=(n_inputs, n_hidden1))
original_biases = tf.placeholder(tf.float32, shape=n_hidden1)
assign_hidden1_weights = tf.assign(hidden1_weights, original_weights)
assign_hidden1_biases = tf.assign(hidden1_biases, original_biases)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    sess.run(assign_hidden1_weights, feed_dict={original_weights: original_w})
    sess.run(assign_hidden1_biases, feed_dict={original_biases: original_b})
    # [...] Train the model on your new task
    print(hidden1.eval(feed_dict={X: [[10.0, 11.0]]}))
```
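Both variants should print the same activations for the test input, and you can check the expected value by hand: a dense ReLU layer computes `relu(X @ W + b)`. A NumPy sketch with the same weights and input:

```python
import numpy as np

original_w = np.array([[1., 2., 3.], [4., 5., 6.]])
original_b = np.array([7., 8., 9.])
X = np.array([[10.0, 11.0]])

# Dense layer with ReLU activation: relu(X @ W + b)
hidden1 = np.maximum(X @ original_w + original_b, 0.0)
print(hidden1)  # [[ 61.  83. 105.]]
```

For example, the first output is 10*1 + 11*4 + 7 = 61; all pre-activations are positive, so the ReLU leaves them unchanged.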