TF: Improving MNIST Handwritten Digit Recognition Accuracy to 99% with a CNN (2 conv layers + 1 fully connected layer)
Overview

Compared with the Softmax regression model, the neural network with two convolutional layers leverages the power of convolution and achieves a substantial improvement in accuracy.
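For reference, the Softmax regression baseline mentioned above maps each flattened 784-pixel image directly to 10 class logits through a single linear layer. The following is only a minimal sketch in the style of the standard TensorFlow 1.x tutorial, not the exact code of the previous post; the learning rate (0.5) and step count (1000) are illustrative assumptions. Such a model typically reaches accuracy in the low 90% range, which is the figure the CNN below improves on.

# Softmax regression baseline for MNIST (TF 1.x style); a minimal sketch for comparison,
# not the exact code of the previous post.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])     # flattened 28x28 images
y_ = tf.placeholder(tf.float32, [None, 10])     # one-hot labels

# A single linear layer: no convolution, no hidden units.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print("test accuracy %g" % sess.run(
        accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))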
Contents

Output
Code
Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
step 0, training accuracy 0.1
step 1000, training accuracy 0.98
step 2000, training accuracy 0.96
step 3000, training accuracy 1
step 4000, training accuracy 1
step 5000, training accuracy 0.98
step 6000, training accuracy 0.98
step 7000, training accuracy 1
step 8000, training accuracy 1
step 9000, training accuracy 1
step 10000, training accuracy 1
step 11000, training accuracy 1
step 12000, training accuracy 1
step 13000, training accuracy 0.98
step 14000, training accuracy 1
step 15000, training accuracy 1
step 16000, training accuracy 1
step 17000, training accuracy 1
step 18000, training accuracy 1
step 19000, training accuracy 1
Code
# TF: Improving MNIST handwritten digit recognition accuracy to 99% with a CNN
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# …… (definitions of weight_variable, bias_variable, conv2d, max_pool_2x2
#     omitted in the original post; see the sketch after this listing)

if __name__ == '__main__':
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])
    x_image = tf.reshape(x, [-1, 28, 28, 1])    # x_image is the input training image

    # First convolutional layer: perform the actual convolution, then apply ReLU as the activation
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)             # call max_pool_2x2 to perform one pooling step

    # Second convolutional layer
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    # Fully connected layer on the 7x7x64 feature maps
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    # Dropout to reduce overfitting
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    # Output layer: 10 class logits
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    sess = tf.InteractiveSession()
    sess.run(tf.global_variables_initializer())
    for i in range(20000):                      # train for 20000 steps
        batch = mnist.train.next_batch(50)
        # every 100 steps, report accuracy on the current training batch
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print("step %d, training accuracy %g" % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

    print("test accuracy %g" % accuracy.eval(
        feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
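The helper functions elided with "……" above are not shown in the original post. Below is a minimal sketch of what they typically look like in the standard TensorFlow 1.x MNIST deep tutorial; the exact definitions used by the author may differ.

# Sketch of the elided helpers, following the standard TF 1.x MNIST deep tutorial (assumed, not from the original post).
import tensorflow as tf

def weight_variable(shape):
    # Small Gaussian noise breaks symmetry between units.
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # A slightly positive bias helps avoid "dead" ReLU units.
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # Stride-1 convolution with SAME padding keeps the spatial size unchanged.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves the width and height (28 -> 14 -> 7).
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')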