5G and Edge AI: Deploying an AI Edge App Using OpenVINO
In my previous articles, I discussed the basics of the OpenVINO toolkit, OpenVINO's Model Optimizer, and the Inference Engine. In this article, we will explore:
- Types of computer vision models.
- Pre-trained models in OpenVINO.
- Downloading pre-trained models.
- Deploying an edge app using a pre-trained model.
Types of Computer Vision Models
There are different types of computer vision models used for various purposes, but the three main ones are:
- Classification
- Object Detection
- Segmentation
A classification model identifies the "class" of a given image or of an object in the image. The classification can be binary, i.e. yes or no, or span thousands of classes such as person, apple, car, cat, and so on. There are several classification models, such as ResNet, DenseNet, and Inception.
Object detection models determine the objects present in an image and often draw bounding boxes around the detected objects. They also use classification to identify the class of the object inside each bounding box. You can set a confidence threshold so that low-confidence detections are rejected. RCNN, Fast-RCNN, and YOLO are some examples of object detection models.
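The confidence filtering mentioned above can be sketched as follows. The row layout here (class id, confidence, box coordinates) is a common convention used purely for illustration, not the output format of any specific model:

```python
import numpy as np

# Hypothetical detections: one row per box -> [class_id, confidence, xmin, ymin, xmax, ymax]
detections = np.array([
    [1, 0.92, 0.10, 0.20, 0.40, 0.60],
    [2, 0.35, 0.50, 0.10, 0.70, 0.30],   # low confidence, will be rejected
    [1, 0.78, 0.15, 0.55, 0.45, 0.90],
])

threshold = 0.5
# Keep only rows whose confidence (column 1) meets the threshold
kept = detections[detections[:, 1] >= threshold]
print(len(kept))  # 2
```

Raising the threshold trades recall for precision: fewer boxes survive, but those that do are more reliable.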
Segmentation models perform pixel-wise classification of the given image. There are two types of segmentation: semantic segmentation and instance segmentation. In semantic segmentation, all objects belonging to the same class are treated as one, whereas in instance segmentation every object is treated as distinct even if it belongs to the same class. For example, if there are five people in an image, a semantic segmentation model will treat all five of them as the same, whereas an instance segmentation model will treat each of the five differently. Examples include U-Net and DRN.
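The difference between the two can be illustrated by the shape of their outputs. This is a conceptual sketch with made-up 4x4 masks, not any particular model's format:

```python
import numpy as np

h, w = 4, 4

# Semantic segmentation: one class id per pixel -> a single (h, w) map.
# Two people both get class id 1, so they are indistinguishable.
semantic_mask = np.zeros((h, w), dtype=int)
semantic_mask[0, 0] = 1   # a pixel of person A
semantic_mask[3, 3] = 1   # a pixel of person B

# Instance segmentation: one binary (h, w) mask per object instance.
person_a = np.zeros((h, w), dtype=bool)
person_a[0, 0] = True
person_b = np.zeros((h, w), dtype=bool)
person_b[3, 3] = True
instance_masks = np.stack([person_a, person_b])  # shape: (num_instances, h, w)

print(semantic_mask.shape)   # (4, 4)    - classes merged
print(instance_masks.shape)  # (2, 4, 4) - one mask per person
```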
Pre-trained Models in OpenVINO
Pre-trained models, as the name suggests, are models that have already been trained, often to high or even cutting-edge accuracy. Training a deep learning model requires a lot of time and compute. It is exciting to create your own model and train it by fine-tuning the hyperparameters (number of hidden layers, learning rate, activation function, etc.) to achieve higher accuracy, but this takes hours of work.
By using pre-trained models, we avoid the need for large-scale data collection and long, costly training. Given knowledge of how to preprocess the inputs and handle the outputs of the network, you can plug these directly into your own app.
OpenVINO provides many pre-trained models in its model zoo. The model zoo has a Free Model Set and a Public Model Set. The Free Model Set contains pre-trained models already converted to Intermediate Representation (.xml and .bin) using the Model Optimizer; these can be used directly with the Inference Engine. The Public Model Set contains pre-trained models that have not been converted to the Intermediate Representation.
Downloading Pre-trained Models
In this article, I will be loading the “vehicle-attributes-recognition-barrier-0039” model from the open model zoo.
To download a pre-trained model, follow these steps (type the commands in Command Prompt/Terminal):
1. Navigate to the directory of the model downloader.

For Linux:

cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/model_downloader

For Windows:

cd "C:/Program Files (x86)/IntelSWTools/openvino"

I have used the default installation directory in the commands above; if your installation directory is different, navigate to the appropriate directory.
2. Run downloader.py.
The downloader script takes several arguments; you can use the "-h" flag to see the available ones:
python downloader.py -h

Let's download the model:

python downloader.py --name vehicle-attributes-recognition-barrier-0039 --precisions FP32 --output_dir /home/pretrained_models

--name → model name.
--precisions → model precision (FP16, FP32 or INT8).
--output_dir → path where the model will be saved.
After successfully downloading the model, navigate to the output path and you will find the model's ".xml" and ".bin" files.
Kindly refer to the documentation for more details (inputs and outputs) about the model.
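As a quick orientation before diving into the code: per Intel's documentation for this model, it takes a 1x3x72x72 BGR image and returns two softmax outputs, "color" (7 classes) and "type" (4 classes). The sketch below mimics that layout with dummy data so you can see the dictionary shape without OpenVINO installed; verify the exact shapes against the documentation for your version:

```python
import numpy as np

# Shapes taken from the model's documentation (assumed here; verify for your version):
input_shape = (1, 3, 72, 72)  # NCHW, BGR image

# Dummy outputs mimicking the documented layout:
outputs = {
    "color": np.random.rand(1, 7, 1, 1),  # softmax over 7 colors
    "type":  np.random.rand(1, 4, 1, 1),  # softmax over 4 vehicle types
}

# The predicted class is the index of the maximum probability
color_idx = int(np.argmax(outputs["color"]))
type_idx = int(np.argmax(outputs["type"]))
print(color_idx, type_idx)
```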
Deploying an Edge App
Now that we have downloaded the pre-trained model, let's deploy it in an edge app.
Let's create a file, "inference.py", to define and work with the Inference Engine. In my previous article about the Inference Engine, I used separate functions, but here I will define a class.
from openvino.inference_engine import IENetwork, IECore

class Network:
    def __init__(self):
        self.plugin = None
        self.network = None
        self.input_blob = None
        self.exec_network = None
        self.infer_request = None

    def load_model(self):
        self.plugin = IECore()
        self.network = IENetwork(model='path_to_xml', weights='path_to_bin')

        ### Defining CPU Extension path
        CPU_EXT_PATH = "/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so"

        ### Adding CPU Extension
        self.plugin.add_extension(CPU_EXT_PATH, "CPU")

        ### Get the supported layers of the network
        supported_layers = self.plugin.query_network(network=self.network, device_name="CPU")

        ### Finding unsupported layers
        unsupported_layers = [l for l in self.network.layers.keys() if l not in supported_layers]

        ### Checking for unsupported layers
        if len(unsupported_layers) != 0:
            print("Unsupported layers found")
            print(unsupported_layers)
            exit(1)

        ### Loading the network
        self.exec_network = self.plugin.load_network(self.network, "CPU")
        self.input_blob = next(iter(self.network.inputs))
        print("MODEL LOADED SUCCESSFULLY!!!")

    def get_input_shape(self):
        return self.network.inputs[self.input_blob].shape

    def synchronous_inference(self, image):
        self.exec_network.infer({self.input_blob: image})

    def extract_output(self):
        return self.exec_network.requests[0].outputs
Don’t get confused! I’ll explain every function.
__init__(self):
It's the constructor of the Network class, where I initialize the class's data members.
load_model(self):
As the name suggests, it is used to load the (pre-trained) model. In this function we:
- Declare an IECore object.
- Declare an IENetwork object.
- Load the model's xml and bin files.
- Check for unsupported layers.
- Load the IENetwork object into the IECore object.
get_input_shape(self):
Returns the shape of the input required by the model.
synchronous_inference(self, image):
Performs synchronous inference on the input image.
extract_output(self):
Returns the output from the model after the inference is completed.
So, that was "inference.py". Now let's create a file "main.py".
import cv2
import numpy as np
from inference import Network

def preprocessing(image, height, width):
    ### Resize the image
    image = cv2.resize(image, (width, height))
    ### Move the color channel first
    image = image.transpose((2, 0, 1))
    ### Add batch size
    image = np.reshape(image, (1, 3, height, width))
    return image
According to the documentation, the model expects the channels first and then the image dimensions, but OpenCV reads the image dimensions first and then the channels, so I've used transpose() to bring the color channel first.
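The shape change performed by preprocessing can be seen with a dummy array (72x72 is just an illustrative size):

```python
import numpy as np

hwc = np.zeros((72, 72, 3))            # OpenCV layout: (height, width, channels)
chw = hwc.transpose((2, 0, 1))         # channels first: (3, 72, 72)
batched = chw.reshape((1, 3, 72, 72))  # add the batch dimension

print(hwc.shape, chw.shape, batched.shape)  # (72, 72, 3) (3, 72, 72) (1, 3, 72, 72)
```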
### Read the image
image = cv2.imread('path_to_image')

### Declare a Network object
plugin = Network()

### Load the model
plugin.load_model()

### Input shape required by the model
input_shape = plugin.get_input_shape()
height = input_shape[2]
width = input_shape[3]

### Preprocess the input
p_image = preprocessing(image, height, width)

### Perform synchronous inference
plugin.synchronous_inference(p_image)

### Extract the output
results = plugin.extract_output()
According to the documentation, the output (results) from the model is a dictionary with two keys, "color" and "type", each holding softmax probabilities. Since each is a softmax output, we need to map the index of the maximum value to the corresponding color and type.
color = ['white', 'grey', 'yellow', 'red', 'green', 'blue', 'black']
vehicle = ['car', 'bus', 'truck', 'van']

### Finding out the color and type
result_color = str(color[np.argmax(results['color'])])
result_type = str(vehicle[np.argmax(results['type'])])

### Add details to the image
font = cv2.FONT_HERSHEY_SIMPLEX
font_scale = 1
col = (0, 255, 0)  # BGR
thickness = 2
color_text = 'color: ' + result_color
type_text = 'vehicle: ' + result_type
cv2.putText(image, color_text, (50, 50), font, font_scale, col, thickness, cv2.LINE_AA)
cv2.putText(image, type_text, (50, 75), font, font_scale, col, thickness, cv2.LINE_AA)

### Save the image
cv2.imwrite('path/vehicle.png', image)
I tried it on two vehicles and got the following output:
Well, that's all folks. I hope by now you have a proper understanding of how to deploy an AI edge application using OpenVINO. OpenVINO offers various pre-trained models for many applications. Try implementing the different pre-trained models available in the OpenVINO model zoo and create your own edge application. Thank you so much for reading my article.
Translated from: https://towardsdatascience.com/deploying-an-ai-edge-app-using-openvino-aa84e87c4577