An example of OpenMP programming for image processing
Read two images from disk, extract feature points from each, match the feature points, and finally draw the two images together with the matched features. Understanding this example requires some basic knowledge of image processing, which I will not go into here. In addition, building the example requires OpenCV; I used version 2.3.1, and I will not cover installing or configuring OpenCV either. Let us first look at the traditional serial version.
1 #include "opencv2/highgui/highgui.hpp"2 #include "opencv2/features2d/features2d.hpp"
3 #include <iostream>
4 #include <omp.h>
5 int main( ){
6 cv::SurfFeatureDetector detector( 400 );
7 cv::SurfDescriptorExtractor extractor;
8 cv::BruteForceMatcher<cv::L2<float> > matcher;
9 std::vector< cv::DMatch > matches;
10 cv::Mat im0,im1;
11 std::vector<cv::KeyPoint> keypoints0,keypoints1;
12 cv::Mat descriptors0, descriptors1;
13 double t1 = omp_get_wtime( );
14 //先處理第一幅圖像
15 im0 = cv::imread("rgb0.jpg", CV_LOAD_IMAGE_GRAYSCALE );
16 detector.detect( im0, keypoints0);
17 extractor.compute( im0,keypoints0,descriptors0);
18 std::cout<<"find "<<keypoints0.size()<<"keypoints in im0"<<std::endl;
19 //再處理第二幅圖像
20 im1 = cv::imread("rgb1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
21 detector.detect( im1, keypoints1);
22 extractor.compute( im1,keypoints1,descriptors1);
23 std::cout<<"find "<<keypoints1.size()<<"keypoints in im1"<<std::endl;
24 double t2 = omp_get_wtime( );
25 std::cout<<"time: "<<t2-t1<<std::endl;
26 matcher.match( descriptors0, descriptors1, matches );
27 cv::Mat img_matches;
28 cv::drawMatches( im0, keypoints0, im1, keypoints1, matches, img_matches );
29 cv::namedWindow("Matches",CV_WINDOW_AUTOSIZE);
30 cv::imshow( "Matches", img_matches );
31 cv::waitKey(0);
32 return 1;
33 }
Clearly, reading the images and extracting the keypoints and descriptors can be done in parallel. The modified version looks like this:
1 #include "opencv2/highgui/highgui.hpp"2 #include "opencv2/features2d/features2d.hpp"
3 #include <iostream>
4 #include <vector>
5 #include <omp.h>
6 int main( ){
7 int imNum = 2;
8 std::vector<cv::Mat> imVec(imNum);
9 std::vector<std::vector<cv::KeyPoint>>keypointVec(imNum);
10 std::vector<cv::Mat> descriptorsVec(imNum);
11 cv::SurfFeatureDetector detector( 400 ); cv::SurfDescriptorExtractor extractor;
12 cv::BruteForceMatcher<cv::L2<float> > matcher;
13 std::vector< cv::DMatch > matches;
14 char filename[100];
15 double t1 = omp_get_wtime( );
16 #pragma omp parallel for
17 for (int i=0;i<imNum;i++){
18 sprintf(filename,"rgb%d.jpg",i);
19 imVec[i] = cv::imread( filename, CV_LOAD_IMAGE_GRAYSCALE );
20 detector.detect( imVec[i], keypointVec[i] );
21 extractor.compute( imVec[i],keypointVec[i],descriptorsVec[i]);
22 std::cout<<"find "<<keypointVec[i].size()<<"keypoints in im"<<i<<std::endl;
23 }
24 double t2 = omp_get_wtime( );
25 std::cout<<"time: "<<t2-t1<<std::endl;
26 matcher.match( descriptorsVec[0], descriptorsVec[1], matches );
27 cv::Mat img_matches;
28 cv::drawMatches( imVec[0], keypointVec[0], imVec[1], keypointVec[1], matches, img_matches );
29 cv::namedWindow("Matches",CV_WINDOW_AUTOSIZE);
30 cv::imshow( "Matches", img_matches );
31 cv::waitKey(0);
32 return 1;
33 }
Comparing the two versions, the running time is 2.343 s for the serial version versus 1.2441 s for the parallel one.
In the code above, to fit the #pragma omp parallel for style of execution we used STL vectors to hold the two images, the keypoints, and the descriptors. In some situations, however, the variables may not lend themselves to being packed into a vector. What then? This is where another OpenMP construct, section, comes in. The code is as follows:
1 #include "opencv2/highgui/highgui.hpp"2 #include "opencv2/features2d/features2d.hpp"
3 #include <iostream>
4 #include <omp.h>
5 int main( ){
6 cv::SurfFeatureDetector detector( 400 ); cv::SurfDescriptorExtractor extractor;
7 cv::BruteForceMatcher<cv::L2<float> > matcher;
8 std::vector< cv::DMatch > matches;
9 cv::Mat im0,im1;
10 std::vector<cv::KeyPoint> keypoints0,keypoints1;
11 cv::Mat descriptors0, descriptors1;
12 double t1 = omp_get_wtime( );
13 #pragma omp parallel sections
14 {
15 #pragma omp section
16 {
17 std::cout<<"processing im0"<<std::endl;
18 im0 = cv::imread("rgb0.jpg", CV_LOAD_IMAGE_GRAYSCALE );
19 detector.detect( im0, keypoints0);
20 extractor.compute( im0,keypoints0,descriptors0);
21 std::cout<<"find "<<keypoints0.size()<<"keypoints in im0"<<std::endl;
22 }
23 #pragma omp section
24 {
25 std::cout<<"processing im1"<<std::endl;
26 im1 = cv::imread("rgb1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
27 detector.detect( im1, keypoints1);
28 extractor.compute( im1,keypoints1,descriptors1);
29 std::cout<<"find "<<keypoints1.size()<<"keypoints in im1"<<std::endl;
30 }
31 }
32 double t2 = omp_get_wtime( );
33 std::cout<<"time: "<<t2-t1<<std::endl;
34 matcher.match( descriptors0, descriptors1, matches );
35 cv::Mat img_matches;
36 cv::drawMatches( im0, keypoints0, im1, keypoints1, matches, img_matches );
37 cv::namedWindow("Matches",CV_WINDOW_AUTOSIZE);
38 cv::imshow( "Matches", img_matches );
39 cv::waitKey(0);
40 return 1;
41 }
In the code above, we first wrap the work to be run in parallel in #pragma omp parallel sections. Inside it there are two #pragma omp section blocks, each of which reads one image and extracts its keypoints and descriptors. Reduced to pseudocode, the structure is:
#pragma omp parallel sections
{
    #pragma omp section
    {
        function1();
    }
    #pragma omp section
    {
        function2();
    }
}
The meaning is that the contents of parallel sections are executed in parallel. As for how the work is divided, each thread executes one of the sections, and if there are more sections than threads, a thread picks up one of the remaining sections after finishing its own. Time-wise this approach is about the same as manually packing the data into vectors and looping with a for, but it is clearly more convenient. Moreover, on a single-core machine, or with a compiler that does not have OpenMP enabled, this code compiles correctly without any changes and simply runs serially on one core.
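As a small illustration of how sections are handed out to threads, here is a minimal standalone sketch of my own (not from the original post; it assumes OpenMP is enabled at compile time). With three sections but only two threads, one thread takes the leftover section after finishing its first:

#include <cstdio>
#include <omp.h>

int main()
{
    omp_set_num_threads(2);   // deliberately fewer threads than sections
    #pragma omp parallel sections
    {
        #pragma omp section
        { std::printf("section A on thread %d\n", omp_get_thread_num()); }
        #pragma omp section
        { std::printf("section B on thread %d\n", omp_get_thread_num()); }
        #pragma omp section
        { std::printf("section C on thread %d\n", omp_get_thread_num()); }
    }
    return 0;
}

Running it, you would typically see two of the three sections reported by the same thread; exactly which thread gets which section is left to the OpenMP runtime.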
以上分享了這兩天關于openMP的一點學習體會,其中難免有錯誤,歡迎指正。另外的一點疑問是,看到各種openMP教程里經常用到private,shared等來修飾變量,這些修飾符的意義和作用我大致明白,但在我上面所有例子中,不加這些修飾符似乎并不影響運行結果,不知道這里面有哪些講究。
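For what it is worth, here is a rough sketch of my own (not from the original post) of how those clauses could be written on the parallel-for version above. Variables declared before the parallel region are shared by default, and the loop index is automatically private, which is why the vectors are fine as they are: each iteration only touches its own element. The one variable that arguably does need a clause is filename, since all threads write into the same buffer; marking it private (or simply declaring it inside the loop body) gives each thread its own copy.

// Sketch only: the same loop as above, with the data-sharing clauses spelled out.
#pragma omp parallel for shared(imVec, keypointVec, descriptorsVec) private(filename)
for (int i = 0; i < imNum; i++)
{
    sprintf(filename, "rgb%d.jpg", i);   // each thread now fills its own private copy of filename
    imVec[i] = cv::imread(filename, CV_LOAD_IMAGE_GRAYSCALE);
    detector.detect(imVec[i], keypointVec[i]);
    extractor.compute(imVec[i], keypointVec[i], descriptorsVec[i]);
}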
In writing this post I drew on resources from a number of places, including two web pages in particular; I will not list them all one by one, but my thanks to all of them.