Getting Started with Intel OpenVINO (C++ Integration)
I. Introduction
OpenVINO™ is an open-source toolkit from Intel for optimizing and deploying AI inference. It is commonly used to run network inference on Intel integrated GPUs.
Official documentation: https://docs.openvino.ai
II. Download
Download page: https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux.html
Select the installation guide for your platform on that page and follow the official documentation step by step.
III. Usage
Note: the version used here is openvino_2021.
Assuming you already have the model's xml file and its matching bin file, the basic code flow is as follows:
#include <stdio.h>
#include <string>
#include <vector>

#include "inference_engine.hpp"

#define LOGD(fmt, ...) printf("[%s][%s][%d]: " fmt "\n", __FILE__, __FUNCTION__, __LINE__, ##__VA_ARGS__)

using namespace InferenceEngine;

int main(int argc, char *argv[]) {
    // 1. Query version information
    const Version* version = GetInferenceEngineVersion();
    LOGD("version description: %s, buildNumber: %s, major.minor: %d.%d",
         version->description, version->buildNumber,
         version->apiVersion.major, version->apiVersion.minor);

    // 2. Create the inference engine core
    Core ie;
    std::vector<std::string> devices = ie.GetAvailableDevices(); // available devices, e.g. CPU, GPU
    for (const std::string& device : devices) {
        LOGD("GetAvailableDevices: %s", device.c_str());
    }

    // 3. Read the model file
    const std::string input_model_xml = "model.xml";
    CNNNetwork network = ie.ReadNetwork(input_model_xml);

    // 4. Configure the inputs and outputs
    InputsDataMap inputs = network.getInputsInfo();   // returned by value
    for (auto& input : inputs) {
        auto& input_name = input.first;               // each entry is a name/info pair
        InputInfo::Ptr& input_info = input.second;
        input_info->setLayout(Layout::NCHW);          // memory layout
        input_info->setPrecision(Precision::FP32);    // float32 precision
        input_info->getPreProcess().setResizeAlgorithm(ResizeAlgorithm::RESIZE_BILINEAR);
        input_info->getPreProcess().setColorFormat(ColorFormat::RAW); // input color format
    }
    OutputsDataMap outputs = network.getOutputsInfo();
    for (auto& output : outputs) {
        auto& output_name = output.first;             // each entry is a name/data pair
        DataPtr& output_info = output.second;
        output_info->setPrecision(Precision::FP32);
        const SizeVector& dims = output_info->getTensorDesc().getDims(); // assumes a 4-D output here
        LOGD("output shape name: %s, dims: [%zu, %zu, %zu, %zu]",
             output_name.c_str(), dims[0], dims[1], dims[2], dims[3]);
    }

    // 5. Load the network onto a device (CPU, GPU, ...)
    std::string device_name = "CPU"; // pick one of the devices returned by ie.GetAvailableDevices()
    ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);

    // 6. Create an inference request
    InferRequest infer_request = executable_network.CreateInferRequest();
    /* When running inference repeatedly, the objects created in steps 1-6 can be cached and reused to save time. */

    // 7. Fill the input data
    for (auto& input : inputs) {
        auto& input_name = input.first;
        Blob::Ptr blob = infer_request.GetBlob(input_name);
        unsigned char* data = static_cast<unsigned char*>(blob->buffer());
        // TODO: fill `data`, e.g. via memcpy
        // readFile(input_path, data);
    }

    // 8. Run inference
    infer_request.Infer();

    // 9. Fetch the outputs
    for (auto& output : outputs) {
        auto& output_name = output.first;
        const Blob::Ptr output_blob = infer_request.GetBlob(output_name);
        LOGD("size: %zu, byte_size: %zu", output_blob->size(), output_blob->byteSize());
        const float* output_data = static_cast<PrecisionTrait<Precision::FP32>::value_type*>(output_blob->buffer());
        // writeFile(path, (void *)output_data, output_blob->byteSize());
    }
}

For more complex use cases, refer to the samples shipped with the SDK under .\openvino_2021\inference_engine\samples\cpp.
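What "fill the input data" in step 7 looks like depends on where your image comes from. Below is a minimal sketch, not from the original article, assuming the input was configured as FP32/NCHW above and that the frame is already available as an 8-bit interleaved HWC buffer matching the network resolution; fillInputBlob and the buffer names are hypothetical:

#include <cstddef>
#include <cstdint>

#include "inference_engine.hpp"

// Hypothetical helper: copy an 8-bit interleaved HWC image (e.g. BGR) into the
// FP32 NCHW blob returned by infer_request.GetBlob(input_name).
static void fillInputBlob(const uint8_t* hwc_image, size_t width, size_t height, size_t channels,
                          InferenceEngine::Blob::Ptr& blob) {
    using namespace InferenceEngine;
    const SizeVector& dims = blob->getTensorDesc().getDims(); // expected layout: [N, C, H, W]
    const size_t C = dims[1], H = dims[2], W = dims[3];
    if (C != channels || H != height || W != width) {
        return; // this sketch assumes the image already matches the network input size
    }
    float* dst = static_cast<float*>(blob->buffer()); // planar FP32 destination
    for (size_t c = 0; c < C; ++c) {
        for (size_t h = 0; h < H; ++h) {
            for (size_t w = 0; w < W; ++w) {
                // interleaved HWC -> planar CHW, with uint8 -> float conversion
                dst[c * H * W + h * W + w] =
                    static_cast<float>(hwc_image[(h * width + w) * channels + c]);
            }
        }
    }
}

From step 7 this would be called as fillInputBlob(image_data, img_w, img_h, 3, blob); any mean subtraction or scaling your model expects can be folded into the same loop.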
IV. Notes on ReadNetwork
1. Reading a model from a file path
Usually the model is just a local file, which can be loaded by path via this interface:
/**
 * @brief Reads models from IR and ONNX formats
 * @param modelPath path to model
 * @param binPath path to data file
 * For IR format (*.bin):
 *   * if path is empty, will try to read bin file with the same name as xml and
 *   * if bin file with the same name was not found, will load IR without weights.
 * For ONNX format (*.onnx or *.prototxt):
 *   * binPath parameter is not used.
 * @return CNNNetwork
 */
CNNNetwork ReadNetwork(const std::string& modelPath, const std::string& binPath = {}) const;

If the bin file sits next to the xml file and shares its base name, the binPath argument can be omitted, e.g. CNNNetwork network = ie.ReadNetwork("model.xml").
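A small sketch (not from the original article; the file paths are placeholders) of calling this overload with an explicit bin path, wrapped in a try/catch since ReadNetwork throws an exception when the files cannot be read or parsed:

#include <cstdio>
#include <exception>

#include "inference_engine.hpp"

int main() {
    InferenceEngine::Core ie;
    try {
        // xml and bin stored under different names/locations, so pass both explicitly
        InferenceEngine::CNNNetwork network =
            ie.ReadNetwork("models/face_det.xml", "weights/face_det.bin");
        printf("inputs: %zu, outputs: %zu\n",
               network.getInputsInfo().size(), network.getOutputsInfo().size());
    } catch (const std::exception& e) {
        printf("ReadNetwork failed: %s\n", e.what());
    }
    return 0;
}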
2. Reading a model from memory
If the model data is already in memory, the network can be created with this interface:
/**
 * @brief Reads models from IR and ONNX formats
 * @param model string with model in IR or ONNX format
 * @param weights shared pointer to constant blob with weights
 * Reading ONNX models doesn't support loading weights from data blobs.
 * If you are using an ONNX model with external data files, please use the
 * `InferenceEngine::Core::ReadNetwork(const std::string& model, const Blob::CPtr& weights) const`
 * function overload which takes a filesystem path to the model.
 * For ONNX case the second parameter should contain empty blob.
 * @note Created InferenceEngine::CNNNetwork object shares the weights with `weights` object.
 * So, do not create `weights` on temporary data which can be later freed, since the network
 * constant datas become to point to invalid memory.
 * @return CNNNetwork
 */
CNNNetwork ReadNetwork(const std::string& model, const Blob::CPtr& weights) const;

Usage example:
extern unsigned char __res_model_xml[];
extern unsigned int  __res_model_xml_size;
extern unsigned char __res_model_bin[];
extern unsigned int  __res_model_bin_size;

std::string model(__res_model_xml, __res_model_xml + __res_model_xml_size);
CNNNetwork network = ie.ReadNetwork(model,
    InferenceEngine::make_shared_blob<uint8_t>(
        {InferenceEngine::Precision::U8, {__res_model_bin_size}, InferenceEngine::C},
        __res_model_bin));
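As a complementary sketch (not from the original article; "model.xml" and "model.bin" are placeholder paths), the same overload can also be used when the files are first read from disk into memory with std::ifstream. Here the weights are copied into a blob that owns its allocation, which side-steps the lifetime warning in the header comment quoted above:

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include "inference_engine.hpp"

// Hypothetical helper: read a whole binary file into memory.
static std::vector<uint8_t> readAll(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    return std::vector<uint8_t>((std::istreambuf_iterator<char>(file)),
                                std::istreambuf_iterator<char>());
}

int main() {
    InferenceEngine::Core ie;

    std::vector<uint8_t> xml = readAll("model.xml"); // placeholder paths
    std::vector<uint8_t> bin = readAll("model.bin");

    // Copy the weights into a blob that owns its memory, so the network stays
    // valid even after `bin` is freed.
    InferenceEngine::TensorDesc desc(InferenceEngine::Precision::U8,
                                     {bin.size()}, InferenceEngine::Layout::C);
    InferenceEngine::Blob::Ptr weights = InferenceEngine::make_shared_blob<uint8_t>(desc);
    weights->allocate();
    std::memcpy(weights->buffer().as<uint8_t*>(), bin.data(), bin.size());

    std::string model(xml.begin(), xml.end());
    InferenceEngine::CNNNetwork network = ie.ReadNetwork(model, weights);
    return 0;
}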