TensorFlow: A Complete Introductory Tutorial
1: To learn TensorFlow you first need to install it. Before you begin, it helps to know:
    a: how to program in Python;
    b: a little about arrays;
    c: ideally, something about machine learning; but you can also start from scratch and pick it up as you go.
2: TensorFlow provides multiple APIs. The lowest-level API, TensorFlow Core, gives you complete programming control. Higher-level APIs are built on top of TensorFlow Core; they are easier to learn and use, they make repetitive training tasks simpler, and they keep usage consistent between different users. A high-level API such as tf.estimator helps you manage data sets, estimators, training, and inference.
3: Tensors. The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor's rank is its number of dimensions.
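For example, here are values of each rank (illustrative literals, in the style of the official getting-started guide):
    3  # a rank 0 tensor; a scalar with shape []
    [1., 2., 3.]  # a rank 1 tensor; a vector with shape [3]
    [[1., 2., 3.], [4., 5., 6.]]  # a rank 2 tensor; a matrix with shape [2, 3]
    [[[1., 2., 3.]], [[7., 8., 9.]]]  # a rank 3 tensor with shape [2, 1, 3]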
4: import tensorflow as tf
This is the canonical import statement for TensorFlow programs; it gives Python access to all of TensorFlow's classes, methods, and symbols.
5: The Computational Graph. A TensorFlow Core program consists of two discrete parts:
    a: building the computational graph
    b: running the computational graph
A computational graph is a series of TensorFlow operations arranged into a graph of nodes.
    node1 = tf.constant(3.0, dtype=tf.float32)
    node2 = tf.constant(4.0)  # also tf.float32 implicitly
    print(node1, node2)
This prints:
    Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
Notice that this does not print the values 3.0 and 4.0. To get the actual values we must evaluate the nodes within a session: a session encapsulates the control and state of the TensorFlow runtime.
    sess = tf.Session()
    print(sess.run([node1, node2]))
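Running this prints the concrete values of the two nodes: [3.0, 4.0].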
We can build more complicated computations by combining Tensor nodes with operations (an operation is itself a node):
    node3 = tf.add(node1, node2)
    print("node3:", node3)
    print("sess.run(node3):", sess.run(node3))
This prints:
    node3: Tensor("Add:0", shape=(), dtype=float32)
    sess.run(node3): 7.0
6: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph (the screenshot from the original post is omitted here).
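If you want to produce such a picture yourself, one minimal approach (a sketch; the log directory name "logs" is an arbitrary choice) is to serialize the graph with tf.summary.FileWriter and point TensorBoard at it:
    # Write the graph definition to an event file that TensorBoard can read.
    writer = tf.summary.FileWriter("logs", sess.graph)
    writer.close()
    # Then, from a shell: tensorboard --logdir logs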
7: A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later.
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    adder_node = a + b  # + provides a shortcut for tf.add(a, b)
This is a bit like a function or a lambda expression in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict argument of the run method to feed concrete values to the placeholders:
    print(sess.run(adder_node, {a: 3, b: 4.5}))
    print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
The output is:
    7.5
    [3.  7.]
In TensorBoard, the computational graph for this looks like the screenshot in the original post (omitted here).
8: We can make the computational graph more complex by adding more operations, for example:
    add_and_triple = adder_node * 3.
    print(sess.run(add_and_triple, {a: 3, b: 4.5}))
The output is:
    22.5
Again, TensorBoard renders this graph as in the original post's screenshot (omitted here).
9: In machine learning we typically want a model that can take arbitrary inputs, such as the one above. To make the model trainable, we need to be able to modify the graph to get new outputs with the same input. Variables allow us to add trainable parameters to a graph; they are constructed with a type and an initial value:
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W*x + b
10: Constants are initialized when you call tf.constant, and their value can never change. Variables, by contrast, are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows:
    init = tf.global_variables_initializer()
    sess.run(init)
11: It is important to realize that init is a handle to the TensorFlow sub-graph that initializes all the global variables; until we call sess.run, the variables are uninitialized. Since x is a placeholder, we can evaluate linear_model for several values of x simultaneously, for example:
    print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
The output is:
    [0.  0.30000001  0.60000002  0.90000004]
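Each of these outputs is simply W*x + b computed elementwise: for x = 1, 0.3*1 - 0.3 = 0; for x = 2, 0.3*2 - 0.3 = 0.3; and so on. The extra trailing digits are float32 rounding artifacts.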
12: We've created a model, but we don't yet know how good it is. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function. A loss function measures how far the current model is from the provided data. We'll use a standard loss model for linear regression, which sums the squares of the deltas between the current model and the provided data: linear_model - y creates a vector where each element is the corresponding example's error delta, tf.square squares each error, and tf.reduce_sum sums the squared errors into a single scalar that abstracts the error of all examples.
    y = tf.placeholder(tf.float32)
    squared_deltas = tf.square(linear_model - y)
    loss = tf.reduce_sum(squared_deltas)
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
The output is:
    23.66
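As a quick sanity check of 23.66 (the complete program at the end repeats this in plain Python): the model predicts [0, 0.3, 0.6, 0.9], so the squared deltas against y = [0, -1, -2, -3] are 0^2 + 1.3^2 + 2.6^2 + 3.9^2 = 0 + 1.69 + 6.76 + 15.21 = 23.66.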
13: We could improve this manually by reassigning W and b the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable, but it can be changed with operations like tf.assign. For example:
    fixW = tf.assign(W, [-1.])
    fixb = tf.assign(b, [1.])
    sess.run([fixW, fixb])
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
The final output is:
    0.0
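These values are perfect because with W = -1 and b = 1 the model computes -1*x + 1, which is exactly [0, -1, -2, -3] for x = [1, 2, 3, 4]; every delta is zero, so the loss is 0.0.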
14: tf.train API. TensorFlow provides optimizers that slowly change each variable so as to minimize the loss function. The simplest optimizer is gradient descent: it modifies each variable according to the magnitude of the derivative of the loss with respect to that variable. Computing symbolic derivatives by hand is tedious and error-prone, so TensorFlow can produce the derivatives automatically from a description of the model, using the function tf.gradients; for simplicity, optimizers typically do this for you. For example:
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    sess.run(init)  # reset values to incorrect defaults.
    for i in range(1000):
        sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

    print(sess.run([W, b]))
The output is:
    [array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
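Under the hood, minimize combines the gradients produced by tf.gradients with update operations on the variables. If you are curious about the raw derivatives, here is a small sketch (reusing the loss, W, and b defined above):
    # Graph nodes for d(loss)/dW and d(loss)/db.
    grad_W, grad_b = tf.gradients(loss, [W, b])
    print(sess.run([grad_W, grad_b], {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))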
Now you have done actual machine learning! Although this simple linear regression model does not require much TensorFlow Core code, more complicated models and methods of feeding data into a model require more code. TensorFlow therefore provides higher-level abstractions for common patterns, structures, and functionality; we will learn how to use some of these abstractions in the next sections.
15: tf.estimator. tf.estimator is a high-level TensorFlow library that simplifies the mechanics of machine learning, including:
    running training loops
    running evaluation loops
    managing data sets
tf.estimator also defines many common models, as sketched below.
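For instance, beyond the LinearRegressor used in the complete code below, a few of the predefined estimators look like this (a sketch; the single numeric feature column "x" mirrors the one used later):
    feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]
    regressor = tf.estimator.LinearRegressor(feature_columns=feature_columns)
    classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns)
    dnn = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                     hidden_units=[10, 10], n_classes=3)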
16: A custom model. tf.estimator does not lock you into its predefined models. Suppose we want to create a custom model that is not built into TensorFlow; we can still retain tf.estimator's high-level abstractions for data sets, feeding, training, and so on. The custom-model section of the complete code below shows how.
17: You now have a basic working knowledge of TensorFlow, and there are more tutorials to learn from. If you are new to machine learning, continue with MNIST for beginners; otherwise, see Deep MNIST for experts.
The complete code:
    import tensorflow as tf

    node1 = tf.constant(3.0, dtype=tf.float32)
    node2 = tf.constant(4.0)  # also tf.float32 implicitly
    print(node1, node2)

    sess = tf.Session()
    print(sess.run([node1, node2]))

    # from __future__ import print_function
    node3 = tf.add(node1, node2)
    print("node3:", node3)
    print("sess.run(node3):", sess.run(node3))


    # placeholders
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    adder_node = a + b  # + provides a shortcut for tf.add(a, b)

    print(sess.run(adder_node, {a: 3, b: 4.5}))
    print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))

    add_and_triple = adder_node * 3.
    print(sess.run(add_and_triple, {a: 3, b: 4.5}))

    # a linear model with trainable variables
    W = tf.Variable([.3], dtype=tf.float32)
    b = tf.Variable([-.3], dtype=tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W*x + b

    # variable initialization
    init = tf.global_variables_initializer()
    sess.run(init)

    print(sess.run(linear_model, {x: [1, 2, 3, 4]}))

    # loss function
    y = tf.placeholder(tf.float32)
    squared_deltas = tf.square(linear_model - y)
    loss = tf.reduce_sum(squared_deltas)
    print("loss function", sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

    # the same sum of squared deltas in plain Python
    ss = (0-0)*(0-0) + (0.3+1)*(0.3+1) + (0.6+2)*(0.6+2) + (0.9+3)*(0.9+3)
    print("manual check ss", ss)

    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, 0.3, 0.6, 0.9]}))  # loss is ~0 when y equals the model output

    # tf.assign: reassigning variables
    fixW = tf.assign(W, [-1.])
    fixb = tf.assign(b, [1.])
    sess.run([fixW, fixb])
    print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

    # tf.train API
    optimizer = tf.train.GradientDescentOptimizer(0.01)  # gradient descent optimizer
    train = optimizer.minimize(loss)  # minimize the loss function
    sess.run(init)  # reset values to incorrect defaults.
    for i in range(1000):
        sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

    print(sess.run([W, b]))

??? print("------------------------------------1")
??? ?
??? # Complete program:The completed trainable linear regression model is shown here:完整的訓(xùn)練線性回歸模型代碼
??? # Model parameters
??? W = tf.Variable([.3], dtype=tf.float32)
??? b = tf.Variable([-.3], dtype=tf.float32)
??? # Model input and output
??? x = tf.placeholder(tf.float32)
??? linear_model = W*x + b
??? y = tf.placeholder(tf.float32)
??? ?
??? # loss
??? loss = tf.reduce_sum(tf.square(linear_model - y))? # sum of the squares
??? # optimizer
??? optimizer = tf.train.GradientDescentOptimizer(0.01)
??? train = optimizer.minimize(loss)
??? ?
??? # training data
??? x_train = [1, 2, 3, 4]
??? y_train = [0, -1, -2, -3]
??? # training loop
??? init = tf.global_variables_initializer()
??? sess = tf.Session()
??? sess.run(init) # reset values to wrong
??? for i in range(1000):
????? sess.run(train, {x: x_train, y: y_train})
??? ?
??? # evaluate training accuracy
??? curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
??? print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
??? ?
??? ?
??? print("------------------------------------2")
??? ?
    # tf.estimator: the same training implemented with tf.estimator
    # Notice how much simpler the linear regression program becomes with tf.estimator:
    # NumPy is often used to load, manipulate and preprocess data.
    import numpy as np
    import tensorflow as tf

    # Declare list of features. We only have one numeric feature. There are many
    # other types of columns that are more complicated and useful.
    feature_columns = [tf.feature_column.numeric_column("x", shape=[1])]

    # An estimator is the front end to invoke training (fitting) and evaluation
    # (inference). There are many predefined types like linear regression,
    # linear classification, and many neural network classifiers and regressors.
    # The following code provides an estimator that does linear regression.
    estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)

    # TensorFlow provides many helper methods to read and set up data sets.
    # Here we use two data sets: one for training and one for evaluation.
    # We have to tell the function how many batches
    # of data (num_epochs) we want and how big each batch should be.
    x_train = np.array([1., 2., 3., 4.])
    y_train = np.array([0., -1., -2., -3.])
    x_eval = np.array([2., 5., 8., 1.])
    y_eval = np.array([-1.01, -4.1, -7., 0.])
    input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)

    # We can invoke 1000 training steps by invoking the train method and passing
    # the training data set.
    estimator.train(input_fn=input_fn, steps=1000)

    # Here we evaluate how well our model did.
    train_metrics = estimator.evaluate(input_fn=train_input_fn)
    eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
    print("train metrics: %r" % train_metrics)
    print("eval metrics: %r" % eval_metrics)


    print("------------------------------------3")

    # A custom model: the same training with a user-defined model_fn
    # Declare list of features, we only have one real-valued feature
    def model_fn(features, labels, mode):
      # Build a linear model and predict values
      W = tf.get_variable("W", [1], dtype=tf.float64)
      b = tf.get_variable("b", [1], dtype=tf.float64)
      y = W*features['x'] + b
      # Loss sub-graph
      loss = tf.reduce_sum(tf.square(y - labels))
      # Training sub-graph
      global_step = tf.train.get_global_step()
      optimizer = tf.train.GradientDescentOptimizer(0.01)
      train = tf.group(optimizer.minimize(loss),
                       tf.assign_add(global_step, 1))
      # EstimatorSpec connects the subgraphs we built to the
      # appropriate functionality.
      return tf.estimator.EstimatorSpec(
          mode=mode,
          predictions=y,
          loss=loss,
          train_op=train)

    estimator = tf.estimator.Estimator(model_fn=model_fn)
    # define our data sets
    x_train = np.array([1., 2., 3., 4.])
    y_train = np.array([0., -1., -2., -3.])
    x_eval = np.array([2., 5., 8., 1.])
    y_eval = np.array([-1.01, -4.1, -7., 0.])
    input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_train}, y_train, batch_size=4, num_epochs=None, shuffle=True)
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_train}, y_train, batch_size=4, num_epochs=1000, shuffle=False)
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000, shuffle=False)

    # train
    estimator.train(input_fn=input_fn, steps=1000)
    # Here we evaluate how well our model did.
    train_metrics = estimator.evaluate(input_fn=train_input_fn)
    eval_metrics = estimator.evaluate(input_fn=eval_input_fn)
    print("train metrics: %r" % train_metrics)
    print("eval metrics: %r" % eval_metrics)
---------------------
Source: https://blog.csdn.net/lengguoxing/article/details/78456279