A Comparative Study of Deep-Learning-Based Traffic Sign Recognition Algorithms (TensorFlow 2 Implementation)
- 🔗 Runtime environment: Python 3
- 🚩 Author: K同学啊
- 🥇 Featured column: 《深度学习100例》
- 🔥 Recommended column: 《新手入门深度学习》
- 📚 From the column: 《Matplotlib教程》
- 🧿 Highlighted column: 《Python入门100题》
Hello everyone, I'm K同学啊!
Today I'm sharing a hands-on undergraduate thesis project. In it I compare four models: VGG16, InceptionV3, DenseNet121, and MobileNetV2 (the article provides an architecture diagram for each model), and at the end you can pick your own image for prediction. The final recognition accuracy reaches 99.2%. The results are as follows:
Table of Contents
- I. Importing the Data
- II. Defining the Models
- 1. VGG16 Model
- 2. InceptionV3 Model
- 3. DenseNet121 Model
- 4. MobileNetV2 Model
- III. Analyzing the Results
- 1. Accuracy Comparison
- 2. Loss Comparison
- 3. Confusion Matrix
- 4. Evaluation Metrics
- IV. Predicting a Specified Image
I. Importing the Data
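The imports and hyperparameter definitions are not shown in this copy of the article. Judging from the dataset shapes and training logs below (224x224 inputs, 10 epochs, 33 steps per epoch for 1047 training images, i.e. a batch size of roughly 32), a plausible setup block would be:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model

# Values inferred from the output below; the author's exact settings may differ.
img_height = 224
img_width  = 224
batch_size = 32
epochs     = 10
```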
""" 關(guān)于image_dataset_from_directory()的詳細(xì)介紹可以參考文章:https://mtyjkh.blog.csdn.net/article/details/117018789 """ train_ds = tf.keras.preprocessing.image_dataset_from_directory("./1-data/",validation_split=0.2,subset="training",seed=12,image_size=(img_height, img_width),batch_size=batch_size) Found 1308 files belonging to 14 classes. Using 1047 files for training. """ 關(guān)于image_dataset_from_directory()的詳細(xì)介紹可以參考文章:https://mtyjkh.blog.csdn.net/article/details/117018789 """ val_ds = tf.keras.preprocessing.image_dataset_from_directory("./1-data/",validation_split=0.2,subset="validation",seed=12,image_size=(img_height, img_width),batch_size=batch_size) Found 1308 files belonging to 14 classes. Using 261 files for validation. class_names = train_ds.class_names print(class_names) ['15', '16', '17', '20', '22', '23', '24', '26', '27', '28', '29', '30', '31', '32'] train_ds <BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int32)> AUTOTUNE = tf.data.AUTOTUNE# 歸一化 def train_preprocessing(image,label):return (image/255.0,label)train_ds = (train_ds.cache().map(train_preprocessing) # 這里可以設(shè)置預(yù)處理函數(shù).prefetch(buffer_size=AUTOTUNE) )val_ds = (val_ds.cache().map(train_preprocessing) # 這里可以設(shè)置預(yù)處理函數(shù).prefetch(buffer_size=AUTOTUNE) ) plt.figure(figsize=(10, 8)) # 圖形的寬為10高為5for images, labels in train_ds.take(1):for i in range(15):plt.subplot(4, 5, i + 1)plt.xticks([])plt.yticks([])plt.grid(False)# 顯示圖片plt.imshow(images[i])# 顯示標(biāo)簽plt.xlabel(class_names[int(labels[i])])plt.show()二、定義模型
1. VGG16 Model
```python
# Load the pretrained model
vgg16_base_model = tf.keras.applications.vgg16.VGG16(weights='imagenet',
                                                     include_top=False,
                                                     # input_tensor=tf.keras.Input(shape=(img_width, img_height, 3)),
                                                     input_shape=(img_width, img_height, 3),
                                                     pooling='max')
for layer in vgg16_base_model.layers:
    layer.trainable = False

X = vgg16_base_model.output
X = Dropout(0.4)(X)

output = Dense(len(class_names), activation='softmax')(X)
vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

vgg16_model.compile(optimizer="adam",
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])
# vgg16_model.summary()

vgg16_history = vgg16_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)
```

```
Epoch 1/10
33/33 [==============================] - 8s 113ms/step - loss: 2.7396 - accuracy: 0.2531 - val_loss: 1.4678 - val_accuracy: 0.6092
Epoch 2/10
33/33 [==============================] - 2s 45ms/step - loss: 1.5873 - accuracy: 0.5091 - val_loss: 0.8500 - val_accuracy: 0.8046
Epoch 3/10
33/33 [==============================] - 2s 45ms/step - loss: 1.0996 - accuracy: 0.6495 - val_loss: 0.5299 - val_accuracy: 0.9272
Epoch 4/10
33/33 [==============================] - 2s 45ms/step - loss: 0.7349 - accuracy: 0.7947 - val_loss: 0.3765 - val_accuracy: 0.9349
Epoch 5/10
33/33 [==============================] - 2s 45ms/step - loss: 0.5373 - accuracy: 0.8481 - val_loss: 0.2888 - val_accuracy: 0.9502
Epoch 6/10
33/33 [==============================] - 2s 45ms/step - loss: 0.4326 - accuracy: 0.8892 - val_loss: 0.2422 - val_accuracy: 0.9617
Epoch 7/10
33/33 [==============================] - 2s 45ms/step - loss: 0.3350 - accuracy: 0.9198 - val_loss: 0.2068 - val_accuracy: 0.9693
Epoch 8/10
33/33 [==============================] - 2s 45ms/step - loss: 0.2821 - accuracy: 0.9398 - val_loss: 0.1713 - val_accuracy: 0.9885
Epoch 9/10
33/33 [==============================] - 2s 45ms/step - loss: 0.2489 - accuracy: 0.9456 - val_loss: 0.1589 - val_accuracy: 0.9847
Epoch 10/10
33/33 [==============================] - 2s 48ms/step - loss: 0.2146 - accuracy: 0.9608 - val_loss: 0.1511 - val_accuracy: 0.9885
```

2. InceptionV3 Model
```python
# Load the pretrained model
InceptionV3_base_model = tf.keras.applications.inception_v3.InceptionV3(weights='imagenet',
                                                                        include_top=False,
                                                                        input_shape=(img_width, img_height, 3),
                                                                        pooling='max')
for layer in InceptionV3_base_model.layers:
    layer.trainable = False

X = InceptionV3_base_model.output
X = Dropout(0.4)(X)

output = Dense(len(class_names), activation='softmax')(X)
InceptionV3_model = Model(inputs=InceptionV3_base_model.input, outputs=output)

InceptionV3_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# InceptionV3_model.summary()

InceptionV3_history = InceptionV3_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)
```

```
Epoch 1/10
33/33 [==============================] - 5s 82ms/step - loss: 3.1642 - accuracy: 0.4040 - val_loss: 0.6005 - val_accuracy: 0.8352
Epoch 2/10
33/33 [==============================] - 1s 34ms/step - loss: 0.7241 - accuracy: 0.8042 - val_loss: 0.2476 - val_accuracy: 0.9234
Epoch 3/10
33/33 [==============================] - 1s 34ms/step - loss: 0.3558 - accuracy: 0.8949 - val_loss: 0.2323 - val_accuracy: 0.9425
Epoch 4/10
33/33 [==============================] - 1s 35ms/step - loss: 0.2435 - accuracy: 0.9226 - val_loss: 0.1599 - val_accuracy: 0.9617
Epoch 5/10
33/33 [==============================] - 1s 34ms/step - loss: 0.1444 - accuracy: 0.9551 - val_loss: 0.1246 - val_accuracy: 0.9617
Epoch 6/10
33/33 [==============================] - 1s 34ms/step - loss: 0.1508 - accuracy: 0.9522 - val_loss: 0.1231 - val_accuracy: 0.9732
Epoch 7/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0793 - accuracy: 0.9761 - val_loss: 0.0853 - val_accuracy: 0.9885
Epoch 8/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0636 - accuracy: 0.9809 - val_loss: 0.1223 - val_accuracy: 0.9732
Epoch 9/10
33/33 [==============================] - 1s 35ms/step - loss: 0.0503 - accuracy: 0.9857 - val_loss: 0.0769 - val_accuracy: 0.9923
Epoch 10/10
33/33 [==============================] - 1s 34ms/step - loss: 0.0346 - accuracy: 0.9904 - val_loss: 0.1066 - val_accuracy: 0.9923
```

3. DenseNet121 Model
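The DenseNet121 code block is missing from this copy of the article. Assuming it mirrors the other three models (this is a reconstruction, not the author's original code, and its training log is not reproduced here), it would look roughly like this:

```python
# Load the pretrained model (hedged reconstruction of the missing DenseNet121 block)
DenseNet121_base_model = tf.keras.applications.densenet.DenseNet121(weights='imagenet',
                                                                    include_top=False,
                                                                    input_shape=(img_width, img_height, 3),
                                                                    pooling='max')
for layer in DenseNet121_base_model.layers:
    layer.trainable = False

X = DenseNet121_base_model.output
X = Dropout(0.4)(X)

output = Dense(len(class_names), activation='softmax')(X)
DenseNet121_model = Model(inputs=DenseNet121_base_model.input, outputs=output)

DenseNet121_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# DenseNet121_model.summary()

DenseNet121_history = DenseNet121_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)
```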
4. MobileNetV2 Model
```python
# Load the pretrained model
MobileNetV2_base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(weights='imagenet',
                                                                        include_top=False,
                                                                        input_shape=(img_width, img_height, 3),
                                                                        pooling='max')
for layer in MobileNetV2_base_model.layers:
    layer.trainable = False

X = MobileNetV2_base_model.output
X = Dropout(0.4)(X)

output = Dense(len(class_names), activation='softmax')(X)
MobileNetV2_model = Model(inputs=MobileNetV2_base_model.input, outputs=output)

MobileNetV2_model.compile(optimizer="adam",
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])
# MobileNetV2_model.summary()

MobileNetV2_history = MobileNetV2_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)
```

```
Epoch 1/10
33/33 [==============================] - 3s 47ms/step - loss: 4.0865 - accuracy: 0.4403 - val_loss: 0.5897 - val_accuracy: 0.8812
Epoch 2/10
33/33 [==============================] - 1s 22ms/step - loss: 1.1042 - accuracy: 0.7536 - val_loss: 0.1841 - val_accuracy: 0.9540
Epoch 3/10
33/33 [==============================] - 1s 22ms/step - loss: 0.6147 - accuracy: 0.8596 - val_loss: 0.1722 - val_accuracy: 0.9770
Epoch 4/10
33/33 [==============================] - 1s 22ms/step - loss: 0.3826 - accuracy: 0.9007 - val_loss: 0.1505 - val_accuracy: 0.9770
Epoch 5/10
33/33 [==============================] - 1s 22ms/step - loss: 0.2290 - accuracy: 0.9370 - val_loss: 0.1408 - val_accuracy: 0.9885
Epoch 6/10
33/33 [==============================] - 1s 22ms/step - loss: 0.1976 - accuracy: 0.9484 - val_loss: 0.1294 - val_accuracy: 0.9923
Epoch 7/10
33/33 [==============================] - 1s 22ms/step - loss: 0.1193 - accuracy: 0.9608 - val_loss: 0.1038 - val_accuracy: 0.9923
Epoch 8/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0859 - accuracy: 0.9675 - val_loss: 0.1140 - val_accuracy: 0.9923
Epoch 9/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0973 - accuracy: 0.9704 - val_loss: 0.1292 - val_accuracy: 0.9923
Epoch 10/10
33/33 [==============================] - 1s 22ms/step - loss: 0.0504 - accuracy: 0.9828 - val_loss: 0.1361 - val_accuracy: 0.9923
```

III. Analyzing the Results
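Sections 3 and 4 below reference val_label and val_pre, which are not defined in this copy. One plausible way to collect the validation labels and predicted classes (using the InceptionV3 model as an example; the author may have used a different model here) is:

```python
# Collect ground-truth labels and predicted class indices over the validation set
val_label = []
val_pre = []
for images, labels in val_ds:
    preds = InceptionV3_model.predict(images)
    val_label.extend(labels.numpy())
    val_pre.extend(np.argmax(preds, axis=1))
```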
1. Accuracy Comparison
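The author's plotting code for this section and the next is only available in the downloadable source; as a minimal sketch (not the original code), the four training histories could be compared like this, covering both the accuracy curves here and the loss curves in section 2 (DenseNet121_history comes from the reconstructed block above):

```python
# Compare validation accuracy and validation loss of the four models per epoch
histories = {
    "VGG16": vgg16_history,
    "InceptionV3": InceptionV3_history,
    "DenseNet121": DenseNet121_history,
    "MobileNetV2": MobileNetV2_history,
}

plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
for name, history in histories.items():
    plt.plot(history.history['val_accuracy'], label=name)
plt.title('Validation Accuracy')
plt.xlabel('Epoch')
plt.legend()

plt.subplot(1, 2, 2)
for name, history in histories.items():
    plt.plot(history.history['val_loss'], label=name)
plt.title('Validation Loss')
plt.xlabel('Epoch')
plt.legend()

plt.show()
```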
```python
# The full plotting code can be read in the original source code
plt.show()
```

2. Loss Comparison
```python
# The full plotting code can be read in the original source code
plt.show()
```

3. Confusion Matrix
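The plot_cm helper itself is not included in this copy; a minimal sketch of such a confusion-matrix plot, using scikit-learn and seaborn (an illustrative implementation, not necessarily the author's), could be:

```python
import seaborn as sns
from sklearn.metrics import confusion_matrix

def plot_cm(labels, predictions):
    """Draw a confusion-matrix heatmap for the validation set."""
    cm = confusion_matrix(labels, predictions)
    plt.figure(figsize=(10, 8))
    sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
                xticklabels=class_names, yticklabels=class_names)
    plt.xlabel('Predicted label')
    plt.ylabel('True label')
    plt.show()
```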
```python
# The full code can be read in the original source code
plot_cm(val_label, val_pre)
```

4. Evaluation Metrics
- support: the number of samples of this row's class in the test data.
- precision: among the samples predicted as positive (or negative), the fraction that truly belong to that class; precision = TP / (TP + FP), i.e. correct positive predictions divided by all samples predicted as positive.
- recall: among all actual positive (or negative) samples, the fraction that are correctly classified; recall = TP / (TP + FN), i.e. correct positive predictions divided by all actual positives.
- f1-score: the harmonic mean of precision and recall, F1 = 2 * precision * recall / (precision + recall).
- accuracy: the overall accuracy, i.e. the number of correctly predicted samples divided by the total number of samples.
- macro avg: the macro average, i.e. the unweighted mean of each metric over all classes.
- weighted avg: the weighted average, i.e. the sum over classes of each class's share of the total samples multiplied by its metric value.

These columns match the output of scikit-learn's classification_report; a sketch of generating such a report follows this list.
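As a hedged sketch, a report with exactly these columns can be generated from the val_label and val_pre collected earlier:

```python
from sklearn.metrics import classification_report

# Per-class precision/recall/f1/support plus accuracy, macro avg and weighted avg
print(classification_report(val_label, val_pre,
                            target_names=class_names,
                            digits=4))
```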
IV. Predicting a Specified Image
```python
from PIL import Image

img = Image.open("./1-data/17/017_0001.png")
image = tf.image.resize(img, [img_height, img_width])

img_array = tf.expand_dims(image, 0)

predictions = InceptionV3_model.predict(img_array)
print("Prediction result:", np.argmax(predictions))
```

```
Prediction result: 11
```

(Scan the QR code in the original post to get the full source code.)
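The value printed above (11) is a class index into class_names; index 11 corresponds to the folder label '30'. A small, illustrative addition maps the index back to the class name and, optionally, applies the same 1/255 scaling used in the training pipeline (which the snippet above omits):

```python
# Map the predicted index back to the class (folder) name
pred_index = np.argmax(predictions)
print("Predicted class:", class_names[pred_index])  # index 11 -> folder '30'

# Optional: rescale the image the same way as the training pipeline before predicting
scaled_array = tf.expand_dims(image / 255.0, 0)
predictions_scaled = InceptionV3_model.predict(scaled_array)
print("Predicted class (rescaled input):", class_names[np.argmax(predictions_scaled)])
```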