Implementing an MNIST Autoencoder in TensorFlow (1)
自編碼網(wǎng)絡(luò)能夠自學(xué)習(xí)樣本特征的網(wǎng)絡(luò),屬于無(wú)監(jiān)督學(xué)習(xí)模型的網(wǎng)絡(luò),可以從無(wú)標(biāo)注的數(shù)據(jù)中學(xué)習(xí)特征,它可以給出比原始數(shù)據(jù)更好的特征描述,具有較強(qiáng)的特征學(xué)習(xí)能力。
主要的網(wǎng)絡(luò)結(jié)構(gòu)就是高維特征樣本---》編碼成---》低維特征---》解碼回---》高維特征,下面以MNIST數(shù)據(jù)集為示例進(jìn)行演示:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('/data/', one_hot=True)

# Hyperparameters and network shape
learning_rate = 0.01
n_hidden_1 = 256   # first (outer) hidden layer
n_hidden_2 = 128   # second (bottleneck) hidden layer
n_input = 784      # MNIST images are 28x28 = 784 pixels

x = tf.placeholder('float', [None, n_input])
y = x  # the autoencoder's target is its own input

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.zeros([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.zeros([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.zeros([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.zeros([n_input])),
}

def encoder(x):
    # 784 -> 256 -> 128: compress the input into a low-dimensional code
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2']))
    return layer_2

def decoder(x):
    # 128 -> 256 -> 784: expand the code back to the input dimension
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']), biases['decoder_b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']), biases['decoder_b2']))
    return layer_2

pred = decoder(encoder(x))                  # reconstruction of the input
cost = tf.reduce_mean(tf.pow(y - pred, 2))  # mean squared reconstruction error
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

training_epochs = 20
batch_size = 256
display_step = 5

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    total_batch = int(mnist.train.num_examples / batch_size)
    for epoch in range(training_epochs):
        for i in range(total_batch):
            # The labels are never fed in: training is unsupervised
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs})
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), 'cost=', "{:.9f}".format(c))
    print('Training Finished!')

    # Rough sanity check: compare the position of the brightest pixel in each
    # reconstruction with that in the input, and print the mismatch rate
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
    print('Error:', 1 - accuracy.eval({x: mnist.test.images, y: mnist.test.images}))
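The cost numbers alone do not show whether the reconstructions actually look like digits. Below is a minimal visualization sketch, meant to be placed inside the same `with tf.Session() as sess:` block after training; it assumes matplotlib is available, and `show_num` is an illustrative name, not something from the original code:

import matplotlib.pyplot as plt

show_num = 10  # number of test digits to display (illustrative choice)
reconstruction = sess.run(pred, feed_dict={x: mnist.test.images[:show_num]})

fig, axes = plt.subplots(2, show_num, figsize=(show_num, 2))
for i in range(show_num):
    # Top row: original test digits; bottom row: their reconstructions
    axes[0][i].imshow(mnist.test.images[i].reshape(28, 28), cmap='gray')
    axes[1][i].imshow(reconstruction[i].reshape(28, 28), cmap='gray')
    axes[0][i].axis('off')
    axes[1][i].axis('off')
plt.show()

With a 128-dimensional bottleneck, the reconstructed digits should be clearly recognizable, if somewhat blurred compared to the originals.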
Summary
That is the full content of Implementing an MNIST Autoencoder in TensorFlow (1); hopefully it helps you solve the problem you ran into.