TensorRT Development Based on Yolov3
Models
Desc
tensorRT for Yolov3
https://github.com/lewes6369/TensorRT-Yolov3
Test Environments
Ubuntu 16.04
TensorRT 5.0.2.6/4.0.1.6
CUDA 9.2
Download the Caffe model converted from the official model:
Baidu Cloud here pwd: gbue
Google Drive here
If you run a model trained by yourself, comment out the "upsample_param" blocks and modify the last layer of the prototxt as:
layer {
    #the bottoms are the yolo input layers
    bottom: "layer82-conv"
    bottom: "layer94-conv"
    bottom: "layer106-conv"
    top: "yolo-det"
    name: "yolo-det"
    type: "Yolo"
}
If you use different kernels, you also need to change the yolo configs in "YoloConfigs.h".
Run Sample
#build source code
git submodule update --init --recursive
mkdir build
cd build && cmake .. && make && make install
cd ..
#for yolov3-608
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel \
    --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80
#for fp16
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel \
    --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80 --mode=fp16
#for int8 with calibration datasets
./install/runYolov3 --caffemodel=./yolov3_608.caffemodel \
    --prototxt=./yolov3_608.prototxt --input=./test.jpg --W=608 --H=608 --class=80 \
    --mode=int8 --calib=./calib_sample.txt
#for yolov3-416
(need to modify include/YoloConfigs for YoloKernel)
./install/runYolov3 --caffemodel=./yolov3_416.caffemodel \
    --prototxt=./yolov3_416.prototxt --input=./test.jpg --W=416 --H=416 --class=80
Performance
Eval Result
The eval results were produced by running the models above with the appended flag --evallist=labels.txt.
The int8 calibration data was made from 200 images selected from val2014 (see the scripts directory).
Note:
The Caffe implementation is no different in the yolo layer and NMS, so its results should be similar to the TensorRT fp32 results.
Details About Wrapper
see link TensorRTWrapper
https://github.com/lewes6369/tensorRTWrapper
TRTWrapper
Desc: a wrapper for a TensorRT net (Caffe parser)
Test Environments
Ubuntu 16.04
TensorRT 5.0.2.6/4.0.1.6
CUDA 9.2
About Wrapper
you can use the wrapper like this:
//normal
std::vector<std::vector<float>> calibratorData;
trtNet net("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData);
//fp16
trtNet net_fp16("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData, RUN_MODE::FLOAT16);
//int8
trtNet net_int8("vgg16.prototxt", "vgg16.caffemodel", {"prob"}, calibratorData, RUN_MODE::INT8);
//run inference:
net.doInference(input_data.get(), outputData.get());
//can print time cost
net.printTime();
//can write to engine and load from engine
net.saveEngine("save_1.engine");
trtNet net2("save_1.engine");
When you need to add a new plugin, just add the plugin code to pluginFactory.
Run Sample
#for classification
cd sample
mkdir build
cd build && cmake .. && make && make install
cd ..
./install/runNet --caffemodel=${CAFFE_MODEL_NAME} --prototxt=${CAFFE_PROTOTXT} --input=./test.jpg