【Deep Learning】Implementing Logistic Regression with TensorFlow
This post walks through implementing logistic regression (softmax regression) on the MNIST dataset with TensorFlow; the complete example code is shared below for reference.
# -*- coding: utf-8 -*-
'''
Created on 2018-04-20
@author: user
'''
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])  # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model: softmax regression
pred = tf.nn.softmax(tf.matmul(x, W) + b)

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables (tf.initialize_all_variables is deprecated)
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost],
                            feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
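The listing above targets the TensorFlow 1.x graph API; `tensorflow.examples.tutorials.mnist`, `tf.placeholder`, and `tf.Session` are no longer available in TensorFlow 2.x. As a minimal sketch (assuming a TensorFlow 2.x environment, not part of the original post), the same softmax-regression model with the same learning rate, batch size, and epoch count could be expressed with tf.keras roughly like this:

import tensorflow as tf

# Load MNIST from tf.keras.datasets and flatten the 28x28 images to 784-vectors
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A single Dense layer with softmax is the same W*x + b model as above
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",  # labels here are integers, not one-hot
              metrics=["accuracy"])

# Same hyperparameters as the graph version: batch_size=100, 25 epochs
model.fit(x_train, y_train, batch_size=100, epochs=25, verbose=2)
print("Accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])

Only the data loading and the training loop change; the model and hyperparameters are the same (apart from Keras' default weight initialization, which is not all-zeros).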
Summary
The above is the complete example of implementing logistic regression with TensorFlow; hopefully it helps you solve the problems you run into.