Object Detection with TensorFlow on Raspberry Pi
The following post shows how to train and test TensorFlow and TensorFlow Lite models based on the SSD architecture (to get familiar with SSD, follow the links in the "References" section below) on a Raspberry Pi.
Note: The described steps were tested on Linux Mint 19.3 but should also work on Ubuntu and Debian.
Data preparation
As in the post dedicated to YOLO, you have to prepare the data first. Follow the first 7 steps there and then do the following:
1. In order to get the listed data and generate TFRecords, clone the repository "How To Train an Object Detection Classifier for Multiple Objects Using TensorFlow (GPU) on Windows 10":
git clone https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10.git
cd TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
2. Put all labeled images into the folders "images/test" and "images/train":
3. Get data records:
python3 xml_to_csv.py
This command creates "train_labels.csv" and "test_labels.csv" in the "images" folder:
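Each row of these CSVs describes one labeled bounding box; the columns produced by "xml_to_csv.py" look roughly like this (the values shown are illustrative):
filename,width,height,class,xmin,ymin,xmax,ymax
image001.jpg,800,600,nutria,123,210,456,480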
4. Open "generate_tfrecord.py" and replace the label map starting at line 31 with your own label map, where each object is assigned an ID number, for example:
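A minimal sketch of what that block might look like for the single class used later in this post (assuming the repository's original class_text_to_int structure):

def class_text_to_int(row_label):
    # Map each class name to the ID defined in labelmap.pbtxt
    if row_label == 'nutria':
        return 1
    else:
        return None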
5. Generate TFRecords for data:
python3 generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
python3 generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
These commands generate the "train.record" and "test.record" files, which will be used to train the new object detection classifier.
6. Create a label map. The label map defines a mapping of class names to class ID numbers, for example:
item {
  id: 1
  name: 'nutria'
}
Save it as "labelmap.pbtxt".
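If the model is trained on several classes, the label map simply contains one item block per class with consecutive IDs. A hypothetical two-class version:

item {
  id: 1
  name: 'nutria'
}
item {
  id: 2
  name: 'muskrat'  # hypothetical second class
}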
7. Configure the object detection training pipeline. It defines which model and what parameters will be used for training. Download "ssd_mobilenet_v2_quantized_300x300_coco.config" from https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs:
wget https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_quantized_300x300_coco.config
8. Change the configuration file:
Set the number of classes:
- num_classes: SET_YOUR_VALUE
Set the checkpoint:
- fine_tune_checkpoint: "/path/to/ssd_mobilenet_v2_quantized/model.ckpt"
Set "input_path" and "label_map_path" in "train_input_reader":
- input_path: "/path/to/train.record"
- label_map_path: "/path/to/labelmap.pbtxt"
Set "batch_size" in "train_config":
- batch_size: 6 (OR SET_YOUR_VALUE)
Set "input_path" and "label_map_path" in "eval_input_reader":
- input_path: "/path/to/test.record"
- label_map_path: "/path/to/labelmap.pbtxt"
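Taken together, the edited fragments of the config might look roughly like this (a sketch; all paths and counts are placeholders):

model {
  ssd {
    num_classes: 1
    ...
  }
}
train_config {
  batch_size: 6
  fine_tune_checkpoint: "/path/to/ssd_mobilenet_v2_quantized/model.ckpt"
  ...
}
train_input_reader {
  label_map_path: "/path/to/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "/path/to/train.record"
  }
}
eval_input_reader {
  label_map_path: "/path/to/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "/path/to/test.record"
  }
}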
Setup environment
General settings for Raspberry Pi
1. Update and upgrade first:
sudo apt update
sudo apt dist-upgrade
2. Install some important dependencies:
sudo apt update
sudo apt install -y joe telnet nmap htop sysbench iperf bonnie++ iftop nload hdparm bc stress python-dev python-rpi.gpio wiringpi sysstat zip locate nuttcp attr imagemagick netpipe-tcp netpipe-openmpi git libatlas-base-dev libhdf5-dev libc-ares-dev libeigen3-dev build-essential libsdl-ttf2.0-0 python-pygame festival
3. Install dependencies for TensorFlow:
sudo apt update
sudo apt install libatlas-base-dev python-tk virtualenv
sudo pip3 install Pillow lxml jupyter matplotlib cython numpy pygame
4. Install dependencies for OpenCV:
sudo apt update
sudo apt install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev qt4-dev-tools libatlas-base-dev
5. Install OpenCV itself:
sudo apt update
sudo pip3 install opencv-python
6. Install TensorFlow by downloading a prebuilt wheel from https://github.com/lhelontra/tensorflow-on-arm/releases:
sudo apt update
sudo pip3 install tensorflow-2.2.0-cp37-none-linux_armv7l.whl
Note: Experience shows that it is better to install from a wheel rather than from the default pip repository, since the latter does not offer all versions for the Raspberry Pi.
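A quick sanity check of the installation, as a minimal sketch (the printed versions depend on the wheel and OpenCV build you installed):

# Run with python3 on the Raspberry Pi after the steps above
import tensorflow as tf
import cv2

print(tf.__version__)   # e.g. 2.2.0 for the wheel above
print(cv2.__version__)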
Training
Note: Training should be done on a host machine to avoid the additional problems that might occur on the Raspberry Pi, since the TensorFlow framework and its accompanying software were originally developed and optimized for much more powerful machines.
1. Install TensorFlow (for CPU or GPU):
sudo pip3 install tensorflow==1.13.1
or
sudo pip3 install tensorflow-gpu==1.13.1
Note: Use v1.13.1, since in the author's experience it is the most stable version for host machines and works with all other software used here.
2. Get TensorFlow models:
git clone https://github.com/tensorflow/models.git
3. Copy "train.py" from the "legacy" folder to "object_detection":
cp /path/to/models/research/object_detection/legacy/train.py /path/to/models/research/object_detection/
4. Get pretrained model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md:
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz
5. Unpack archive:
tar -xvzf ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz -C /destination/folder/
Note: Unpack the archive into the folder for which "fine_tune_checkpoint" is configured in the "*.config" file.
6. Start training:
python3 train.py --logtostderr --train_dir=/path/to/training/ --pipeline_config_path=/path/to/ssd_mobilenet_v2_quantized.config
Note #1: "/path/to/training/" is any folder where all training results can be saved.
Note #2: If the training process is suddenly terminated, you can reduce the values of "num_steps" and "num_examples" to lower the memory load, as shown below.
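Both values live in the pipeline config edited earlier; roughly (the numbers are placeholders):

train_config {
  ...
  num_steps: 20000
}
eval_config {
  num_examples: 50
  ...
}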
7. After training has finished, the model can be exported for conversion to TensorFlow Lite using the "export_tflite_ssd_graph.py" script:
python3 export_tflite_ssd_graph.py \
  --pipeline_config_path=/path/to/ssd_mobilenet_v2_quantized.config \
  --trained_checkpoint_prefix=/path/to/training/model.ckpt-XXXX \
  --output_directory=/path/to/output/directory \
  --add_postprocessing_op=true
Note #1: For each "model.ckpt-XXXX" there must be corresponding "model.ckpt-XXXX.data-00000-of-00001", "model.ckpt-XXXX.index" and "model.ckpt-XXXX.meta" files in the "training" folder.
Note #2: "/path/to/output/directory" is any folder where all final results can be saved.
After the command has been executed, there must be two new files in the folder specified for "output_directory": "tflite_graph.pb" and "tflite_graph.pbtxt".
8. Install Bazel in order to optimize the trained model through the TensorFlow Lite Optimizing Converter (TOCO) before it can be used with the TensorFlow Lite interpreter:
- Install dependencies:
sudo apt install openjdk-11-jdk
- Download version 0.21.0 (from https://github.com/bazelbuild/bazel/releases/tag/0.21.0):
wget https://github.com/bazelbuild/bazel/releases/download/0.21.0/bazel-0.21.0-installer-linux-x86_64.sh
Note: Experience shows that only Bazel v0.21.0 works well; other versions cause multiple errors.
- Change permission rights:
chmod +x bazel-0.21.0-installer-linux-x86_64.sh
- Install Bazel:
./bazel-0.21.0-installer-linux-x86_64.sh --user
Installation is shown for Ubuntu (https://docs.bazel.build/versions/master/install-ubuntu.html). The same steps are applicable to Debian and Linux Mint. For other operating systems, follow the installation guide at https://docs.bazel.build/versions/master/install.html
9. Clone TensorFlow repository and open it:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
10. Use Bazel to run the model through the TOCO tool by issuing this command:
bazel run --config=opt tensorflow/lite/toco:toco -- \
  --input_file=/path/to/tflite_graph.pb \
  --output_file=/path/to/detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_values=128 \
  --change_concat_input_ranges=false \
  --allow_custom_ops
After the command finishes running, there should be a file called "detect.tflite" in the directory specified for "output_file".
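To verify the converted model, a minimal inference sketch with the TensorFlow Lite interpreter (paths are placeholders, and the output tensor order should be checked against output_details for your model):

import numpy as np
import tensorflow as tf  # on the Pi, tflite_runtime.interpreter also works

# Load the converted model and allocate buffers
interpreter = tf.lite.Interpreter(model_path="/path/to/detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The quantized SSD expects a batch of one 300x300 RGB uint8 image
frame = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # replace with a real image
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]['index'])    # bounding boxes
classes = interpreter.get_tensor(output_details[1]['index'])  # class indices
scores = interpreter.get_tensor(output_details[2]['index'])   # confidence scores
print(boxes.shape, classes, scores)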
11. Create "labelmap.txt" and add all class (object) names for which the model was trained:
touch labelmap.txt
The contents, with only one class in this case:
nutria
12. The model is ready for usage. Put "detect.tflite" and "labelmap.txt" into a separate folder and use it as a normal pretrained model (see the "Testing" section).
Testing
For TensorFlow Lite model
For custom model
1. Clone the repository for Raspberry Pi and open it:
git clone https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi.git
cd TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi
2. Put the earlier trained model (the custom "detect.tflite" and "labelmap.txt") into "/path/to/model" and run the command:
python3 /path/to/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/TFLite_detection_webcam.py --modeldir=/path/to/model
For pretrained model
The same is applicable to an already pretrained model.
1. Download pretrained SSD MobileNet from https://www.tensorflow.org/lite/models/object_detection/overview:
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip
2. Unzip the model:
unzip /path/to/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip -d /path/to/model
The archive must contain the "detect.tflite" and "labelmap.txt" files.
3. Open the cloned repository and run the same command:
python3 /path/to/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/TFLite_detection_webcam.py --modeldir=/path/to/model
For TensorFlow model
1. Install the "argparse" package:
sudo pip3 install argparse
2.1. Either copy the script "Object_detection_webcam.py" from the "TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10" repository to the "models" repository in "/path/to/models/research/object_detection" and add the following:
- import the argparse package and create a parser
- add the following arguments:
import argparse

ap = argparse.ArgumentParser()
ap.add_argument('-pb', '--path_to_pb')
ap.add_argument('-l', '--path_to_labels')
ap.add_argument('-nc', '--num_classes')
args = vars(ap.parse_args())
Comment out the lines with the variables "MODEL_NAME", "PATH_TO_CKPT", "PATH_TO_LABELS", "CWD_PATH" and "NUM_CLASSES", and add the following:
ap.add_argument('-pb', '--path_to_pb')
ap.add_argument('-l', '--path_to_labels')
ap.add_argument('-nc', '--num_classes')
args = vars(ap.parse_args())

# Name of the directory containing the object detection module we're using
#MODEL_NAME = 'inference_graph'

# Grab path to current working directory
#CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
#PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb')
PATH_TO_CKPT = args['path_to_pb']

# Path to label map file
#PATH_TO_LABELS = os.path.join(CWD_PATH,'training','labelmap.pbtxt')
PATH_TO_LABELS = args['path_to_labels']

# Number of classes the object detector can identify
#NUM_CLASSES = 6
NUM_CLASSES = int(args['num_classes'])
2.2. Or download the already modified script:
cd /path/to/models/research/object_detection
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/Object_detection_webcam.py
3.1. Open the script "label_map_util.py" in "/path/to/models/research/object_detection/utils/" and either comment out the if-statement for "item.keypoints" or wrap it in an exception handler:
# if item.keypoints:
#   keypoints = {}
#   list_of_keypoint_ids = []
#   for kv in item.keypoints:
#     if kv.id in list_of_keypoint_ids:
#       raise ValueError('Duplicate keypoint ids are not allowed. Found {} more than once'.format(kv.id))
#     keypoints[kv.label] = kv.id
#     list_of_keypoint_ids.append(kv.id)
#   category['keypoints'] = keypoints
try:
  if item.keypoints:
    keypoints = {}
    list_of_keypoint_ids = []
    for kv in item.keypoints:
      if kv.id in list_of_keypoint_ids:
        raise ValueError('Duplicate keypoint ids are not allowed. Found {} more than once'.format(kv.id))
      keypoints[kv.label] = kv.id
      list_of_keypoint_ids.append(kv.id)
    category['keypoints'] = keypoints
except AttributeError:
  pass
3.2. Alternatively, you can download the modified script:
cd /path/to/models/research/object_detection/utils/
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/models_TF/label_map_util.py
For custom model
1.1. Open the script "export_inference_graph.py" in "/path/to/models/research/object_detection" and comment out the last parameters:
exporter.export_inference_graph(
    FLAGS.input_type, pipeline_config, FLAGS.trained_checkpoint_prefix,
    FLAGS.output_directory, input_shape=input_shape,
    write_inference_graph=FLAGS.write_inference_graph,
    additional_output_tensor_names=additional_output_tensor_names,
    #use_side_inputs=FLAGS.use_side_inputs,
    #side_input_shapes=side_input_shapes,
    #side_input_names=side_input_names,
    #side_input_types=side_input_types
)
1.2. Or copy the script, replacing the original one:
cd /path/to/models/research/object_detection
wget https://bitbucket.org/ElencheZetetique/fixed_scripts/src/master/models_TF/export_inference_graph.py
2. Export the inference graph using the script "export_inference_graph.py" from "/path/to/models/research/object_detection":
python3 export_inference_graph.py \
  --input_type image_tensor \
  --pipeline_config_path /path/to/ssd_mobilenet_v2_quantized.config \
  --trained_checkpoint_prefix /path/to/training/model.ckpt-XXXX \
  --output_directory /path/to/output/directory
3. In the output directory assigned to the flag --output_directory there must be the file "frozen_inference_graph.pb":
4. Run the modified script "Object_detection_webcam.py" for the custom model:
python3 Object_detection_webcam.py -nc 1 \
  -pb /path/to/frozen_inference_graph.pb \
  -l /path/to/labelmap.pbtxt
For pretrained model
1. Download the model you are interested in from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md:
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz
2. Extract the file "frozen_inference_graph.pb" from the archive.
3. Run the modified script "Object_detection_webcam.py" for the pretrained model:
python3 Object_detection_webcam.py -nc 100 \
  -pb /path/to/frozen_inference_graph.pb \
  -l /path/to/mscoco_label_map.pbtxt
Note: Assign the maximum number of classes to the flag -nc/--num_classes, and assign the path "/path/to/models/research/object_detection/data/mscoco_label_map.pbtxt" to the flag -l/--path_to_labels.
Translated from: https://medium.com/@Elenche.Zetetique/object-detection-with-tensorflow-42eda282d915