TensorFlow Gradient Descent Examples
Suppose we want to minimize the function $y=x^2$, starting from the initial point $x_0=5$.

#### 1. With a learning rate of 1, x oscillates between 5 and -5

```python
# coding: utf-8
import tensorflow as tf  # TensorFlow 1.x API

TRAINING_STEPS = 10
LEARNING_RATE = 1

x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        x_value = sess.run(x)
        print("After %s iteration(s): x%s is %f." % (i + 1, i + 1, x_value))
```

Expected output (x oscillates between 5 and -5):

```
After 1 iteration(s): x1 is -5.000000.
After 2 iteration(s): x2 is 5.000000.
After 3 iteration(s): x3 is -5.000000.
After 4 iteration(s): x4 is 5.000000.
After 5 iteration(s): x5 is -5.000000.
After 6 iteration(s): x6 is 5.000000.
After 7 iteration(s): x7 is -5.000000.
After 8 iteration(s): x8 is 5.000000.
After 9 iteration(s): x9 is -5.000000.
After 10 iteration(s): x10 is 5.000000.
```
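The oscillation follows directly from the update rule. For $y=x^2$ the gradient is $2x$, so one gradient descent step with learning rate $\eta$ (the `LEARNING_RATE` above) computes

$$x_{t+1} = x_t - 2\eta x_t = (1 - 2\eta)\,x_t$$

With $\eta=1$ the factor is $-1$, so every step simply flips the sign of $x$ and the iterates bounce between 5 and -5 forever. The iteration contracts toward 0 only when $|1-2\eta|<1$, i.e. $0<\eta<1$; any rate above 1 would make $|x|$ grow without bound.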
#### 2. With a learning rate of 0.001, the descent is very slow: after 900 iterations x has only reached 0.823355

```python
TRAINING_STEPS = 1000
LEARNING_RATE = 0.001

x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        if i % 100 == 0:
            x_value = sess.run(x)
            print("After %s iteration(s): x%s is %f." % (i + 1, i + 1, x_value))
```

Expected output:

```
After 1 iteration(s): x1 is 4.990000.
After 101 iteration(s): x101 is 4.084646.
After 201 iteration(s): x201 is 3.343555.
After 301 iteration(s): x301 is 2.736923.
After 401 iteration(s): x401 is 2.240355.
After 501 iteration(s): x501 is 1.833880.
After 601 iteration(s): x601 is 1.501153.
After 701 iteration(s): x701 is 1.228794.
After 801 iteration(s): x801 is 1.005850.
After 901 iteration(s): x901 is 0.823355.
```
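Because each step only rescales $x$, the trajectory for a constant rate has the closed form $x_n = 5(1-2\eta)^n$, which makes the slowness easy to quantify without running TensorFlow. A quick plain-Python check (not part of the original notebook; the step indices mirror the printout above):

```python
# Gradient descent on y = x**2 multiplies x by (1 - 2*eta) each step,
# so after n steps x_n = x0 * (1 - 2*eta)**n.
eta = 0.001
x0 = 5.0
for n in (1, 101, 901):
    print("After %d step(s): x is %f" % (n, x0 * (1 - 2 * eta) ** n))
# After 1 step(s): x is 4.990000
# After 101 step(s): x is 4.0846...  (matches the float32 TensorFlow run)
# After 901 step(s): x is 0.8233...
```

At this rate, getting $|x|$ below 0.001 would take roughly $\ln(0.0002)/\ln(0.998) \approx 4250$ steps.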
#### 3. With an exponentially decaying learning rate, the early iterations take large steps, so good convergence is reached within relatively few training steps

```python
TRAINING_STEPS = 100

# global_step counts optimizer updates; trainable=False keeps it out of the gradient updates
global_step = tf.Variable(0, trainable=False)

# Start at 0.1 and multiply the rate by 0.96 after every step (decay_steps=1, staircase=True)
LEARNING_RATE = tf.train.exponential_decay(0.1, global_step, 1, 0.96, staircase=True)

x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)
# Passing global_step makes minimize() increment it after each update
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        if i % 10 == 0:
            LEARNING_RATE_value = sess.run(LEARNING_RATE)
            x_value = sess.run(x)
            print("After %s iteration(s): x%s is %f, learning rate is %f."
                  % (i + 1, i + 1, x_value, LEARNING_RATE_value))
```

Expected output:

```
After 1 iteration(s): x1 is 4.000000, learning rate is 0.096000.
After 11 iteration(s): x11 is 0.690561, learning rate is 0.063824.
After 21 iteration(s): x21 is 0.222583, learning rate is 0.042432.
After 31 iteration(s): x31 is 0.106405, learning rate is 0.028210.
After 41 iteration(s): x41 is 0.065548, learning rate is 0.018755.
After 51 iteration(s): x51 is 0.047625, learning rate is 0.012469.
After 61 iteration(s): x61 is 0.038558, learning rate is 0.008290.
After 71 iteration(s): x71 is 0.033523, learning rate is 0.005511.
After 81 iteration(s): x81 is 0.030553, learning rate is 0.003664.
After 91 iteration(s): x91 is 0.028727, learning rate is 0.002436.
```
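With `staircase=True` and `decay_steps=1`, the schedule is simply $0.1 \times 0.96^{\text{global\_step}}$, so the whole run can be replayed in a few lines of plain Python. A minimal sketch of the same recurrence (not part of the original notebook):

```python
# Replays experiment 3 without TensorFlow: take a gradient step, then decay the rate.
x, lr = 5.0, 0.1
for i in range(100):
    x -= lr * 2 * x   # gradient descent step on y = x**2
    lr *= 0.96        # exponential decay with decay_steps=1, staircase=True
    if i % 10 == 0:
        print("After %d iteration(s): x is %f, learning rate is %f." % (i + 1, x, lr))
```

Note that `tf.train.GradientDescentOptimizer`, `tf.train.exponential_decay`, and `tf.Session` are TensorFlow 1.x APIs. On TensorFlow 2.x the same experiment would look roughly like the sketch below (an adaptation not from the original article, assuming a TF 2.x install; it uses `tf.keras.optimizers.SGD` with a `tf.keras.optimizers.schedules.ExponentialDecay` schedule):

```python
import tensorflow as tf  # TensorFlow 2.x, eager execution

# Same schedule as experiment 3: start at 0.1, multiply by 0.96 every step.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1, decay_rate=0.96, staircase=True)
opt = tf.keras.optimizers.SGD(learning_rate=schedule)

x = tf.Variable(5.0, name="x")
for i in range(100):
    # In TF 2.x, minimize() takes the loss as a callable and advances the schedule.
    opt.minimize(lambda: tf.square(x), var_list=[x])
    if i % 10 == 0:
        print("After %d iteration(s): x is %f." % (i + 1, x.numpy()))
```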
轉(zhuǎn)載于:https://www.cnblogs.com/adong7639/p/7554114.html