[TensorFlow-windows] The Keras API: Convolutional Handwritten Digit Recognition, Model Saving and Loading
Preface
The previous post covered the simplest use of the Keras API on a TensorFlow backend. This one continues with how to write a convolutional classification model, and the various ways to save it (weights only, or weights together with the network structure).
As usual, the reference material:
The official tutorial
[Note] There is actually no need to read other blogs; just jump to the Colab notebook linked at the end, which covers the whole learning process, including my own mistakes and some brief notes. For clarity of presentation, the network structure is redefined from scratch each time below; readers comfortable with Python can simply wrap the model definition in a def create_model(): function and call it wherever a fresh model is needed.
Building a Convolutional Classification Model
Recall the two ways of building a model introduced in the previous post:
```python
# style 1: pass the layer list to the constructor
model = keras.models.Sequential([
    keras.layers.Flatten(...),
    keras.layers.Dense(...),
    ...
])

# style 2: add the layers one by one
model = keras.models.Sequential()
model.add(keras.layers.Flatten(...))
model.add(keras.layers.Dense(...))
```

The first is more compact, the second more comfortable to work with; this post uses the second style to build a simple convolutional network.
Importing the Packages
Saving a model requires working with paths (import os), and the data needs normalization (import numpy). Also note that even though we are learning Keras, importing keras alone is not enough; we also need to import tensorflow itself, for reasons that will come up later.
```python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import os
```

Building the Dataset
We will use MNIST again; a later tutorial may cover training on local image files, and whether that requires a data-pipeline setup.
Note that the labels must be converted to one-hot encoding, and the pixel data must be normalized.
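The preprocessing just described can be sketched without TensorFlow at all; `keras.utils.to_categorical` does the same label conversion, but a plain NumPy version makes both transformations explicit. The toy batch below stands in for the real MNIST arrays (the shapes are illustrative, not loaded data):

```python
import numpy as np

def preprocess(images, labels, num_classes=10):
    """Scale pixel values to [0, 1] and one-hot encode integer labels."""
    images = images.astype("float32") / 255.0   # normalize to [0, 1]
    one_hot = np.eye(num_classes)[labels]       # one-hot encode: (batch,) -> (batch, 10)
    return images, one_hot

# toy batch standing in for MNIST data
imgs = np.random.randint(0, 256, size=(4, 28, 28))
lbls = np.array([3, 0, 9, 3])
x, y = preprocess(imgs, lbls)
print(x.min() >= 0.0 and x.max() <= 1.0)  # True
print(y.shape)                            # (4, 10)
```

Each row of `y` has a single 1.0 at the true class index and zeros elsewhere, which is exactly the [batch_size, 10] label format the rest of this post assumes.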
Another thing to note: Keras convolution layers expect a four-dimensional array, and you must specify whether it is channels_first, i.e. <samples, channels, rows, cols>, or channels_last, i.e. <samples, rows, cols, channels>. By default the last dimension is the channel (channels_last).
```python
train_x = train_x[..., np.newaxis]
test_x = test_x[..., np.newaxis]
print(train_x.shape)  # (60000, 28, 28, 1)
```

Building the Model
We build a simplified AlexNet. Using the original structure directly could be a problem: the input image is only 28x28, and repeated convolution and pooling would shrink it until there is nothing left to convolve or pool, so I tweaked the structure slightly.
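The shrinking claim is easy to verify with a little arithmetic. For a 'valid' convolution the output side length is `(n - k)//s + 1`, 'same' padding keeps the size, and 2x2/stride-2 pooling roughly halves it. Tracing a 28x28 input through the modified stack (helper functions below are my own, not part of the post's code):

```python
def conv_out(n, k, s=1, padding="valid"):
    """Spatial output side length of a conv layer (square input and kernel)."""
    return n if padding == "same" else (n - k) // s + 1

def pool_out(n, k=2, s=2):
    """Output side length of a max-pooling layer."""
    return (n - k) // s + 1

n = 28
n = conv_out(n, 11)                 # 11x11 valid conv -> 18
n = pool_out(n)                     # -> 9
n = conv_out(n, 5, padding="same")  # stays 9
n = pool_out(n)                     # -> 4
n = conv_out(n, 3, padding="same")  # the three 'same' convs keep 4
n = conv_out(n, 3, padding="same")
n = conv_out(n, 3, padding="same")
n = pool_out(n)                     # -> 2
print(n)  # 2
```

So the feature map arrives at the final Flatten as 2x2x256 = 1024 values, with just enough room for every pooling step; AlexNet's original 11x11/stride-4 first conv would have shrunk a 28x28 input too fast.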
```python
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                              padding='valid', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
```

Compiling and Training the Model
Keras offers two cross-entropy losses for classification: sparse_categorical_crossentropy and categorical_crossentropy. Here comes the first pitfall: if you feed one-hot labels of shape [batch_size, 10] to a model compiled with the sparse_... variant, it throws an error:
```
logits and labels must have the same first dimension, got logits shape [200,10] and labels shape [2000]
```

It apparently flattens the one-hot labels into one long vector, so with one-hot labels we must use the latter loss.
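The difference between the two losses is only the label format they expect: sparse_categorical_crossentropy takes integer class indices of shape (batch,), while categorical_crossentropy takes one-hot rows of shape (batch, 10). A NumPy sketch with hypothetical softmax outputs (no TensorFlow needed) shows that they compute the same quantity:

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.8]])       # softmax outputs for a batch of 2
sparse_labels = np.array([0, 2])          # integer class indices, shape (2,)
onehot_labels = np.eye(3)[sparse_labels]  # one-hot equivalent, shape (2, 3)

# categorical form: sum over classes of -y * log(p)
cat_loss = -(onehot_labels * np.log(probs)).sum(axis=1)
# sparse form: directly index the predicted probability of the true class
sparse_loss = -np.log(probs[np.arange(len(sparse_labels)), sparse_labels])

print(np.allclose(cat_loss, sparse_loss))  # True
```

So the error above is purely a shape mismatch: the sparse loss received one-hot labels where it expected a flat index vector.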
```python
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
```

Now the model can be trained:
```python
model.fit(train_x, train_y, epochs=2, batch_size=200)
'''
Epoch 1/2
60000/60000 [==============================] - 26s 435us/step - loss: 0.2646 - acc: 0.9110
Epoch 2/2
60000/60000 [==============================] - 24s 407us/step - loss: 0.0510 - acc: 0.9855
<tensorflow.python.keras.callbacks.History at 0x7f65fe2de940>
'''
```

We can also inspect the network structure and parameter counts with the summary() function:
```python
model.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              multiple                  7808
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple                  0
_________________________________________________________________
conv2d_1 (Conv2D)            multiple                  307392
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple                  0
_________________________________________________________________
conv2d_2 (Conv2D)            multiple                  663936
_________________________________________________________________
conv2d_3 (Conv2D)            multiple                  1327488
_________________________________________________________________
conv2d_4 (Conv2D)            multiple                  884992
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple                  0
_________________________________________________________________
flatten (Flatten)            multiple                  0
_________________________________________________________________
dense (Dense)                multiple                  4198400
_________________________________________________________________
dropout (Dropout)            multiple                  0
_________________________________________________________________
dense_1 (Dense)              multiple                  16781312
_________________________________________________________________
dropout_1 (Dropout)          multiple                  0
_________________________________________________________________
dense_2 (Dense)              multiple                  40970
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
'''
```

The model can be evaluated on the test set:
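As an aside, the parameter counts in the summary table can be reproduced by hand: a Conv2D layer has (k*k*in_channels + 1) * filters parameters (the +1 is the bias), and a Dense layer has (in_units + 1) * units. These helper functions are mine, written just to check the table:

```python
def conv_params(k, in_ch, filters):
    """Parameter count of a k x k Conv2D layer, bias included."""
    return (k * k * in_ch + 1) * filters

def dense_params(in_units, units):
    """Parameter count of a Dense layer, bias included."""
    return (in_units + 1) * units

print(conv_params(11, 1, 64))           # 7808, the first conv layer
print(conv_params(5, 64, 192))          # 307392, the second conv layer
print(dense_params(2 * 2 * 256, 4096))  # 4198400, the first dense layer
```

The results match the 7808, 307392, and 4198400 entries above, confirming the flattened input to the first dense layer really is 2x2x256 = 1024 units.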
```python
print(test_x.shape, test_y.shape)
model.evaluate(test_x, test_y)
'''
(10000, 28, 28, 1) (10000, 10)
10000/10000 [==============================] - 3s 283us/step
[0.03784323987539392, 0.9897]
'''
```

You can also predict a single image, but note that the first dimension is the number of samples, so remember to add an extra dimension:
```python
test_img_idx = 1000
test_img = test_x[test_img_idx, ...]
test_img = test_img[np.newaxis, ...]
img_prob = model.predict(test_img)

plt.figure()
plt.imshow(np.squeeze(test_img))
plt.title(img_prob.argmax())
```

Saving the Model
Saving checkpoints during training
The function is keras.callbacks.ModelCheckpoint.
```python
checkpoint_path = './train_save/mnist.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path,
                                              save_weights_only=True, verbose=1)
model.fit(train_x, train_y, epochs=2,
          validation_data=(test_x, test_y), callbacks=[cp_callback])
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.1442 - acc: 0.9681
Epoch 00001: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7faff48fae80>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved. Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 93s 2ms/step - loss: 0.1442 - acc: 0.9681 - val_loss: 0.0693 - val_acc: 0.9811
Epoch 2/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.0757 - acc: 0.9840
Epoch 00002: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7faff48fae80>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved. Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 92s 2ms/step - loss: 0.0757 - acc: 0.9839 - val_loss: 0.0489 - val_acc: 0.9876
<tensorflow.python.keras.callbacks.History at 0x7fafead25f98>
'''
```

There is a warning: the model was compiled with a Keras optimizer but is being saved in TensorFlow format with `save_weights`, so the optimizer's state will not be saved; it suggests using a TensorFlow optimizer from `tf.train` instead. Fine, adjust the code:
```python
import os

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])

checkpoint_path = './train_save2/mnist.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path,
                                              save_weights_only=True, verbose=1)
model.fit(train_x, train_y, epochs=2,
          validation_data=(test_x, test_y), callbacks=[cp_callback])
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.2241 - acc: 0.9325
Epoch 00001: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 60s 1ms/step - loss: 0.2239 - acc: 0.9326 - val_loss: 0.1009 - val_acc: 0.9765
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0866 - acc: 0.9801
Epoch 00002: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 56s 930us/step - loss: 0.0867 - acc: 0.9800 - val_loss: 0.0591 - val_acc: 0.9855
<tensorflow.python.keras.callbacks.History at 0x7fad1a90d7b8>
'''
```

No warning this time. Now let's build an untrained model and load the saved parameters into it:
```python
model_test = keras.models.Sequential()
model_test.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                                   padding='valid', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                                   padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                   padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                   padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                                   padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test.add(keras.layers.Flatten())
model_test.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test.add(keras.layers.Dropout(rate=0.5))
model_test.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test.add(keras.layers.Dropout(rate=0.5))
model_test.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
model_test.compile(optimizer=tf.train.RMSPropOptimizer(learning_rate=0.01),
                   loss=keras.losses.categorical_crossentropy,
                   metrics=['accuracy'])
```

Load the most recent checkpoint:
```python
! ls train_save2
# checkpoint  mnist.ckpt.data-00000-of-00001  mnist.ckpt.index
latest = tf.train.latest_checkpoint('train_save2')
type(latest)  # str
loss, acc = model_test.evaluate(test_x, test_y)
print("Accuracy without loaded weights: {:5.2f}%".format(100 * acc))
model_test.load_weights(latest)
loss, acc = model_test.evaluate(test_x, test_y)
print("Accuracy with loaded weights: {:5.2f}%".format(100 * acc))
'''
10000/10000 [==============================] - 3s 308us/step
Accuracy without loaded weights:  9.60%
10000/10000 [==============================] - 3s 264us/step
Accuracy with loaded weights: 98.60%
'''
```

Saving at Intervals
You can also save a checkpoint every given number of epochs. This is an effective safeguard against overfitting, since you can later pick whichever set of trained parameters works best.
```python
checkpoint_path = "train_save3/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=1,
                                              save_weights_only=True, period=1)
model.fit(train_x, train_y, epochs=2, callbacks=[cp_callback],
          validation_data=(test_x, test_y), verbose=1)
'''
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0447 - acc: 0.9897
Epoch 00001: saving model to train_save3/cp-0001.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0446 - acc: 0.9897 - val_loss: 0.0421 - val_acc: 0.9920
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0478 - acc: 0.9885
Epoch 00002: saving model to train_save3/cp-0002.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0478 - acc: 0.9884 - val_loss: 0.0590 - val_acc: 0.9859
<tensorflow.python.keras.callbacks.History at 0x7fafea497dd8>
'''
```

Rebuild an untrained model and load the result of the first training epoch:
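A quick aside before rebuilding: the `{epoch:04d}` placeholder in the checkpoint path above is ordinary Python format syntax; Keras fills it in with the epoch number at save time, which is easy to verify directly:

```python
checkpoint_path = "train_save3/cp-{epoch:04d}.ckpt"
print(checkpoint_path.format(epoch=1))   # train_save3/cp-0001.ckpt
print(checkpoint_path.format(epoch=12))  # train_save3/cp-0012.ckpt
```

This is why the saved files come out as cp-0001.ckpt, cp-0002.ckpt, and so on, and why we can address a specific epoch's weights by name below.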
```python
model_test1 = keras.models.Sequential()
model_test1.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                                    padding='valid', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test1.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test1.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test1.add(keras.layers.Flatten())
model_test1.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test1.add(keras.layers.Dropout(rate=0.5))
model_test1.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test1.add(keras.layers.Dropout(rate=0.5))
model_test1.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
model_test1.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
```

Pick the first checkpoint and load it:
```python
loss, acc = model_test1.evaluate(test_x, test_y)
print("Accuracy without loaded weights: {:5.2f}%".format(100 * acc))
model_test1.load_weights("train_save3/cp-0001.ckpt")
loss, acc = model_test1.evaluate(test_x, test_y)
print("Accuracy with loaded weights: {:5.2f}%".format(100 * acc))
'''
10000/10000 [==============================] - 3s 260us/step
Accuracy without loaded weights: 10.28%
10000/10000 [==============================] - 2s 244us/step
Accuracy with loaded weights: 98.30%
'''
```

Manually Saving the Model
After training finishes, you can also call the save_weights function yourself to save the weights:
```python
model.save_weights('./train_save3/mnist_checkpoint')
```

Build an untrained model:
```python
model_test2 = keras.models.Sequential()
model_test2.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                                    padding='valid', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test2.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test2.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test2.add(keras.layers.Flatten())
model_test2.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test2.add(keras.layers.Dropout(rate=0.5))
model_test2.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test2.add(keras.layers.Dropout(rate=0.5))
model_test2.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
model_test2.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
```

Load the weights and evaluate the model:
```python
loss, acc = model_test2.evaluate(test_x, test_y)
print("Accuracy without loaded weights: {:5.2f}%".format(100 * acc))
model_test2.load_weights('./train_save3/mnist_checkpoint')
loss, acc = model_test2.evaluate(test_x, test_y)
print("Accuracy with loaded weights: {:5.2f}%".format(100 * acc))
'''
10000/10000 [==============================] - 3s 303us/step
Accuracy without loaded weights: 12.15%
10000/10000 [==============================] - 3s 260us/step
Accuracy with loaded weights: 98.59%
'''
```

Saving Everything
This saves the model structure and the parameters at the same time.
Build an untrained model
Save it
```python
model_test3.save('my_model.h5')
'''
Currently `save` requires model to be a graph network. Consider using `save_weights`, in order to save the weights of the model.
'''
```

An error: `save` requires the model to be a graph network, and suggests falling back to saving only the weights. The real cause is that our first layer never declared its input size, so let's try defining one:
```python
model_test4 = keras.models.Sequential()
model_test4.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                                    padding='valid', activation=tf.keras.activations.relu,
                                    input_shape=(28, 28, 1)))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test4.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test4.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test4.add(keras.layers.Flatten())
model_test4.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test4.add(keras.layers.Dropout(rate=0.5))
model_test4.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test4.add(keras.layers.Dropout(rate=0.5))
model_test4.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
model_test4.compile(optimizer=tf.train.AdamOptimizer(),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
model_test4.fit(train_x, train_y, batch_size=200, epochs=2)
'''
Epoch 1/2
60000/60000 [==============================] - 22s 364us/step - loss: 0.4112 - acc: 0.8556
Epoch 2/2
60000/60000 [==============================] - 21s 342us/step - loss: 0.0584 - acc: 0.9838
<tensorflow.python.keras.callbacks.History at 0x7f137972f4a8>
'''
```

Try saving:
```python
model_test4.save('my_model.h5')
'''
WARNING:tensorflow:TensorFlow optimizers do not make it possible to access optimizer attributes or optimizer state after instantiation. As a result, we cannot save the optimizer as part of the model save file. You will have to compile your model again after loading it. Prefer using a Keras optimizer instead (see keras.io/optimizers).
'''
```

Another warning: a TensorFlow optimizer's state cannot be accessed after instantiation, so it cannot be saved as part of the model file, and the model would have to be recompiled after loading; it recommends a Keras optimizer instead. Fine, change it:
```python
model_test5 = keras.models.Sequential()
model_test5.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                                    padding='valid', activation=tf.keras.activations.relu,
                                    input_shape=(28, 28, 1)))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test5.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test5.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                                    padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test5.add(keras.layers.Flatten())
model_test5.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test5.add(keras.layers.Dropout(rate=0.5))
model_test5.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test5.add(keras.layers.Dropout(rate=0.5))
model_test5.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
model_test5.compile(optimizer=tf.keras.optimizers.Adam(),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
model_test5.fit(train_x, train_y, batch_size=200, epochs=2)
model_test5.save("my_model2.h5")
'''
Epoch 1/2
60000/60000 [==============================] - 26s 434us/step - loss: 0.2850 - acc: 0.9043
Epoch 2/2
60000/60000 [==============================] - 25s 409us/step - loss: 0.0555 - acc: 0.9847
<tensorflow.python.keras.callbacks.History at 0x7f661033a9e8>
'''
```

No complaints this time. Let's try loading the model and its parameters; since both the structure and the weights were saved, there is no need to redefine the network:
```python
model_test6 = keras.models.load_model("my_model2.h5")
model_test6.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_10 (Conv2D)           (None, 18, 18, 64)        7808
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 9, 9, 64)          0
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 9, 9, 192)         307392
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 192)         0
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 4, 4, 384)         663936
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 4, 4, 384)         1327488
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 4, 4, 256)         884992
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 2, 2, 256)         0
_________________________________________________________________
flatten_2 (Flatten)          (None, 1024)              0
_________________________________________________________________
dense_6 (Dense)              (None, 4096)              4198400
_________________________________________________________________
dropout_4 (Dropout)          (None, 4096)              0
_________________________________________________________________
dense_7 (Dense)              (None, 4096)              16781312
_________________________________________________________________
dropout_5 (Dropout)          (None, 4096)              0
_________________________________________________________________
dense_8 (Dense)              (None, 10)                40970
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
'''
```

Rock solid. Time for some tests:
- Evaluation on the test set
- Prediction on a single image
Postscript
This post covered how to build a simple convolutional network, along with several saving methods: weights only, or weights plus model structure.
The main thing to remember: when saving only the weights, use a TensorFlow optimizer; when saving the network together with the weights, use a Keras optimizer.
The next post will work through some deep learning theory and experiments, including BatchNorm, ResNet, and so on.
Blog code link: click here