Task 2: OpenCV's Python interface — image storage, color spaces, and arithmetic operations on images
Notes:
Using managers.WindowManager to abstract the window and keyboard:
main.py:
managers.py:
import cv2


class CaptureManager(object):

    def __init__(self, capture, previewWindowManager=None,
                 shouldMirrorPreview=False):
        self.previewWindowManager = previewWindowManager
        self.shouldMirrorPreview = shouldMirrorPreview

        self._capture = capture
        self._channel = 0
        self._enteredFrame = False
        self._frame = None
        self._imageFilename = None
        self._videoFilename = None
        self._videoEncoding = None
        self._videoWriter = None

        self._startTime = None
        self._framesElapsed = int(0)
        self._fpsEstimate = None

    @property
    def channel(self):
        return self._channel

    @channel.setter
    def channel(self, value):
        if self._channel != value:
            self._channel = value
            self._frame = None

    @property
    def frame(self):
        if self._enteredFrame and self._frame is None:
            # retrieve() returns a (success, image) tuple
            _, self._frame = self._capture.retrieve()
        return self._frame

    @property
    def isWritingImage(self):
        return self._imageFilename is not None

    @property
    def isWritingVideo(self):
        return self._videoFilename is not None


class WindowManager(object):

    def __init__(self, windowName, keypressCallback=None):
        self.keypressCallback = keypressCallback
        self._windowName = windowName
        self._isWindowCreated = False

    @property
    def isWindowCreated(self):
        return self._isWindowCreated

    def createWindow(self):
        cv2.namedWindow(self._windowName)
        self._isWindowCreated = True

    def show(self, frame):
        cv2.imshow(self._windowName, frame)

    def destroyWindow(self):
        cv2.destroyWindow(self._windowName)
        self._isWindowCreated = False

    def processEvents(self):
        keycode = cv2.waitKey(1)
        if self.keypressCallback is not None and keycode != -1:
            # Discard any non-ASCII info encoded by GTK.
            self.keypressCallback(keycode)

PS: running this code produces the following error:
File "C:\Users\14172\PycharmProjects\pythonProject3\main.py", line 25, in run
    self._windowManager.createWindow()
AttributeError: 'Cameo' object has no attribute '_windowManager'
I haven't tracked down the problem yet.
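This AttributeError usually means that Cameo's constructor never ran, so self._windowManager was never assigned — for instance if the constructor is spelled def init(self) instead of def __init__(self), the same slip that appears in the pasted managers code above. Below is a minimal sketch of what main.py is presumably meant to look like, modeled on the book's Cameo example; enterFrame()/exitFrame() belong to the full CaptureManager from the book, which the excerpt above only partially reproduces.

import cv2
from managers import WindowManager, CaptureManager


class Cameo(object):

    def __init__(self):  # must be __init__, not init, or it is never called
        self._windowManager = WindowManager('Cameo', self.onKeypress)
        self._captureManager = CaptureManager(
            cv2.VideoCapture(0), self._windowManager, True)

    def run(self):
        """Run the main loop."""
        self._windowManager.createWindow()
        while self._windowManager.isWindowCreated:
            # enterFrame()/exitFrame() come from the book's full CaptureManager,
            # which is only partially reproduced above.
            self._captureManager.enterFrame()
            frame = self._captureManager.frame
            # ... process the frame here ...
            self._captureManager.exitFrame()
            self._windowManager.processEvents()

    def onKeypress(self, keycode):
        """Quit on escape."""
        if keycode == 27:
            self._windowManager.destroyWindow()


if __name__ == '__main__':
    Cameo().run()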
HSV and RGB
OpenCV has hundreds of conversion types for moving between different color spaces. Three color spaces are in especially common use in computer vision: grayscale, BGR, and HSV (Hue, Saturation, Value).
□ The grayscale color space removes the color information, converting the image to shades of gray. It is particularly useful for intermediate processing, such as face detection.
□ BGR is the blue-green-red color space, in which each pixel is a three-element array representing blue, green, and red. Web developers may know a similar color space, RGB; the two differ only in the order of the channels.
□ In HSV, H (hue) is the color tone, S (saturation) is how saturated the color is, and V (value) represents how dark it is (or, at the other end of the spectrum, how bright).
In a BGR image, the B, G, and R values of a pixel all depend on the light falling on the object, so the three values are correlated with one another and do not describe the pixel's color very directly. In HSV space, by contrast, the three components are relatively independent, so a pixel's brightness, saturation, and hue can each be described precisely.
RGB is a widely accepted color space, but it is rather abstract: we cannot directly perceive a specific color from its values. In HSV it is much more convenient to reason about a color in terms of hue, saturation, and value.
H is hue, S is saturation, V is value (brightness). On the 0–360° hue circle, a hue of 0 is red and 300 is magenta.
When the value is 0, the image is pure black.
img[0, 0, 0] = 255 sets channel 0 (the B channel) of that pixel to 255, i.e. it makes the pixel blue.
In OpenCV, where hue runs from 0 to 179, green has a hue of about 60 and blue about 120.
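A quick sketch illustrating both points — setting a single channel of a pixel, and checking the hue OpenCV assigns to pure blue:

import cv2
import numpy as np

img = np.zeros((2, 2, 3), np.uint8)   # tiny black BGR image
img[0, 0, 0] = 255                    # channel 0 = B, so pixel (0, 0) becomes blue

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
print(hsv[0, 0])                      # [120 255 255] -> hue 120 for blue on the 0-179 scale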
Example:
Result:
Approximate HSV ranges for a few colors:
Blue: between [110, 100, 100] and [130, 255, 255]
Green: between [50, 100, 100] and [70, 255, 255]
Red: between [0, 100, 100] and [10, 255, 255]
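A common way to derive such a range (a sketch of the standard OpenCV tutorial trick): convert a single pure BGR color to HSV and take roughly ±10 around the resulting hue, keeping the saturation and value ranges wide:

import cv2
import numpy as np

green = np.uint8([[[0, 255, 0]]])             # one BGR pixel of pure green
hsv_green = cv2.cvtColor(green, cv2.COLOR_BGR2HSV)
print(hsv_green)                              # [[[ 60 255 255]]]

hue = int(hsv_green[0, 0, 0])
lower = np.array([hue - 10, 100, 100])        # ~±10 around the hue
upper = np.array([hue + 10, 255, 255])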
Using a bitwise AND with a mask to isolate a color region:
Result:
The Fourier transform
In OpenCV, most processing of images and video involves the concept of the Fourier transform to some degree.
The idea is that every waveform we observe can be expressed as a sum of other waveforms. This is very useful when working with images, because it lets us tell which regions of an image contain signals (pixel values) that change strongly and which regions change more gently, and thus freely mark out noisy regions, regions of interest, foreground, background, and so on. The original image is made up of many frequencies, and we can separate those frequencies to understand the image and extract the data we care about.
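A small sketch of this idea using NumPy's FFT (any grayscale image will do; the file name here is just a placeholder):

import cv2
import numpy as np

img = cv2.imread('me1.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder file name

f = np.fft.fft2(img)                      # 2-D discrete Fourier transform
fshift = np.fft.fftshift(f)               # move the zero frequency to the center
magnitude = 20 * np.log(np.abs(fshift) + 1)

# scale to 0..255 so the spectrum can be displayed
spectrum = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('magnitude spectrum', spectrum)
cv2.waitKey(0)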
The Fourier transform underlies many common image-processing operations, such as edge detection and line and shape detection.
Before going further, two concepts need to be introduced: high-pass filters and low-pass filters. The operations mentioned above are built on these two concepts together with the Fourier transform.
3.2.1 High-pass filters
A high-pass filter (HPF) examines a region of an image and boosts the intensity of a pixel based on the difference between its brightness and that of the surrounding pixels.
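A minimal sketch of a high-pass filter: convolve the image with a kernel whose weights sum to zero, so flat regions go to zero while regions with strong local differences are boosted (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('me1.jpg', cv2.IMREAD_GRAYSCALE)

# 3x3 high-pass kernel: the weights sum to 0, so uniform regions are suppressed
kernel_3x3 = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=np.float32)
hpf = cv2.filter2D(img, -1, kernel_3x3)

# An equivalent trick: subtract a low-pass (blurred) version from the original
lpf = cv2.GaussianBlur(img, (17, 17), 0)
hpf2 = cv2.subtract(img, lpf)

cv2.imshow('hpf (kernel)', hpf)
cv2.imshow('hpf (img - blur)', hpf2)
cv2.waitKey(0)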
Edge detection
OpenCV provides many edge-detection filter functions, including Laplacian(), Sobel(), and Scharr(). These filters turn non-edge regions black and edge regions white or some other saturated color. However, they are all prone to misidentifying noise as edges. The usual way to mitigate this is to blur the image before looking for edges. OpenCV also provides many blurring filter functions, including blur() (a simple arithmetic mean), medianBlur(), and GaussianBlur(). The edge-detection and blurring functions take many parameters, but there is always a ksize parameter, an odd number giving the width and height of the filter kernel in pixels.
Here medianBlur() is used as the blur function; it is very effective at removing digital video noise, especially in color images. Laplacian() is used as the edge-detection function; it produces strong edge lines, particularly in grayscale images. After calling medianBlur() and before calling Laplacian(), the image needs to be converted from the BGR color space to grayscale.
Once we have the result of Laplacian(), we invert it to get black edges on a white background, normalize it (so its pixel values lie between 0 and 1), and multiply it with the source image to darken the edges.
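A sketch of that pipeline, modeled on the book's strokeEdges() helper; the blur_ksize and edge_ksize defaults are just reasonable choices, not values fixed by the text:

import cv2
import numpy as np

def stroke_edges(src, blur_ksize=7, edge_ksize=5):
    """Darken edges of a BGR image: median blur -> gray -> Laplacian -> normalize -> multiply."""
    blurred = cv2.medianBlur(src, blur_ksize)
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    laplacian = cv2.Laplacian(gray, cv2.CV_8U, ksize=edge_ksize)
    # Invert and normalize: strong edges -> ~0, flat regions -> ~1
    normalized_inverse_alpha = (1.0 / 255) * (255 - laplacian)
    # Multiply each BGR channel by the inverse-edge map to blacken the edges
    channels = cv2.split(src)
    channels = [channel * normalized_inverse_alpha for channel in channels]
    return cv2.merge(channels).astype(np.uint8)

img = cv2.imread('me1.jpg')          # placeholder file name
cv2.imshow('edges', stroke_edges(img))
cv2.waitKey(0)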
Code test
import cv2 as cv

flags = [i for i in dir(cv) if i.startswith('COLOR_')]
print(flags)

Output:
C:\Users\14172\PycharmProjects\pythonProject3\venv\Scripts\python.exe C:/Users/14172/PycharmProjects/pythonProject3/main.py
[‘COLOR_BAYER_BG2BGR’, ‘COLOR_BAYER_BG2BGRA’, ‘COLOR_BAYER_BG2BGR_EA’, ‘COLOR_BAYER_BG2BGR_VNG’, ‘COLOR_BAYER_BG2GRAY’, ‘COLOR_BAYER_BG2RGB’, ‘COLOR_BAYER_BG2RGBA’, ‘COLOR_BAYER_BG2RGB_EA’, ‘COLOR_BAYER_BG2RGB_VNG’, ‘COLOR_BAYER_GB2BGR’, ‘COLOR_BAYER_GB2BGRA’, ‘COLOR_BAYER_GB2BGR_EA’, ‘COLOR_BAYER_GB2BGR_VNG’, ‘COLOR_BAYER_GB2GRAY’, ‘COLOR_BAYER_GB2RGB’, ‘COLOR_BAYER_GB2RGBA’, ‘COLOR_BAYER_GB2RGB_EA’, ‘COLOR_BAYER_GB2RGB_VNG’, ‘COLOR_BAYER_GR2BGR’, ‘COLOR_BAYER_GR2BGRA’, ‘COLOR_BAYER_GR2BGR_EA’, ‘COLOR_BAYER_GR2BGR_VNG’, ‘COLOR_BAYER_GR2GRAY’, ‘COLOR_BAYER_GR2RGB’, ‘COLOR_BAYER_GR2RGBA’, ‘COLOR_BAYER_GR2RGB_EA’, ‘COLOR_BAYER_GR2RGB_VNG’, ‘COLOR_BAYER_RG2BGR’, ‘COLOR_BAYER_RG2BGRA’, ‘COLOR_BAYER_RG2BGR_EA’, ‘COLOR_BAYER_RG2BGR_VNG’, ‘COLOR_BAYER_RG2GRAY’, ‘COLOR_BAYER_RG2RGB’, ‘COLOR_BAYER_RG2RGBA’, ‘COLOR_BAYER_RG2RGB_EA’, ‘COLOR_BAYER_RG2RGB_VNG’, ‘COLOR_BGR2BGR555’, ‘COLOR_BGR2BGR565’, ‘COLOR_BGR2BGRA’, ‘COLOR_BGR2GRAY’, ‘COLOR_BGR2HLS’, ‘COLOR_BGR2HLS_FULL’, ‘COLOR_BGR2HSV’, ‘COLOR_BGR2HSV_FULL’, ‘COLOR_BGR2LAB’, ‘COLOR_BGR2LUV’, ‘COLOR_BGR2Lab’, ‘COLOR_BGR2Luv’, ‘COLOR_BGR2RGB’, ‘COLOR_BGR2RGBA’, ‘COLOR_BGR2XYZ’, ‘COLOR_BGR2YCR_CB’, ‘COLOR_BGR2YCrCb’, ‘COLOR_BGR2YUV’, ‘COLOR_BGR2YUV_I420’, ‘COLOR_BGR2YUV_IYUV’, ‘COLOR_BGR2YUV_YV12’, ‘COLOR_BGR5552BGR’, ‘COLOR_BGR5552BGRA’, ‘COLOR_BGR5552GRAY’, ‘COLOR_BGR5552RGB’, ‘COLOR_BGR5552RGBA’, ‘COLOR_BGR5652BGR’, ‘COLOR_BGR5652BGRA’, ‘COLOR_BGR5652GRAY’, ‘COLOR_BGR5652RGB’, ‘COLOR_BGR5652RGBA’, ‘COLOR_BGRA2BGR’, ‘COLOR_BGRA2BGR555’, ‘COLOR_BGRA2BGR565’, ‘COLOR_BGRA2GRAY’, ‘COLOR_BGRA2RGB’, ‘COLOR_BGRA2RGBA’, ‘COLOR_BGRA2YUV_I420’, ‘COLOR_BGRA2YUV_IYUV’, ‘COLOR_BGRA2YUV_YV12’, ‘COLOR_BayerBG2BGR’, ‘COLOR_BayerBG2BGRA’, ‘COLOR_BayerBG2BGR_EA’, ‘COLOR_BayerBG2BGR_VNG’, ‘COLOR_BayerBG2GRAY’, ‘COLOR_BayerBG2RGB’, ‘COLOR_BayerBG2RGBA’, ‘COLOR_BayerBG2RGB_EA’, ‘COLOR_BayerBG2RGB_VNG’, ‘COLOR_BayerGB2BGR’, ‘COLOR_BayerGB2BGRA’, ‘COLOR_BayerGB2BGR_EA’, ‘COLOR_BayerGB2BGR_VNG’, ‘COLOR_BayerGB2GRAY’, ‘COLOR_BayerGB2RGB’, ‘COLOR_BayerGB2RGBA’, ‘COLOR_BayerGB2RGB_EA’, ‘COLOR_BayerGB2RGB_VNG’, ‘COLOR_BayerGR2BGR’, ‘COLOR_BayerGR2BGRA’, ‘COLOR_BayerGR2BGR_EA’, ‘COLOR_BayerGR2BGR_VNG’, ‘COLOR_BayerGR2GRAY’, ‘COLOR_BayerGR2RGB’, ‘COLOR_BayerGR2RGBA’, ‘COLOR_BayerGR2RGB_EA’, ‘COLOR_BayerGR2RGB_VNG’, ‘COLOR_BayerRG2BGR’, ‘COLOR_BayerRG2BGRA’, ‘COLOR_BayerRG2BGR_EA’, ‘COLOR_BayerRG2BGR_VNG’, ‘COLOR_BayerRG2GRAY’, ‘COLOR_BayerRG2RGB’, ‘COLOR_BayerRG2RGBA’, ‘COLOR_BayerRG2RGB_EA’, ‘COLOR_BayerRG2RGB_VNG’, ‘COLOR_COLORCVT_MAX’, ‘COLOR_GRAY2BGR’, ‘COLOR_GRAY2BGR555’, ‘COLOR_GRAY2BGR565’, ‘COLOR_GRAY2BGRA’, ‘COLOR_GRAY2RGB’, ‘COLOR_GRAY2RGBA’, ‘COLOR_HLS2BGR’, ‘COLOR_HLS2BGR_FULL’, ‘COLOR_HLS2RGB’, ‘COLOR_HLS2RGB_FULL’, ‘COLOR_HSV2BGR’, ‘COLOR_HSV2BGR_FULL’, ‘COLOR_HSV2RGB’, ‘COLOR_HSV2RGB_FULL’, ‘COLOR_LAB2BGR’, ‘COLOR_LAB2LBGR’, ‘COLOR_LAB2LRGB’, ‘COLOR_LAB2RGB’, ‘COLOR_LBGR2LAB’, ‘COLOR_LBGR2LUV’, ‘COLOR_LBGR2Lab’, ‘COLOR_LBGR2Luv’, ‘COLOR_LRGB2LAB’, ‘COLOR_LRGB2LUV’, ‘COLOR_LRGB2Lab’, ‘COLOR_LRGB2Luv’, ‘COLOR_LUV2BGR’, ‘COLOR_LUV2LBGR’, ‘COLOR_LUV2LRGB’, ‘COLOR_LUV2RGB’, ‘COLOR_Lab2BGR’, ‘COLOR_Lab2LBGR’, ‘COLOR_Lab2LRGB’, ‘COLOR_Lab2RGB’, ‘COLOR_Luv2BGR’, ‘COLOR_Luv2LBGR’, ‘COLOR_Luv2LRGB’, ‘COLOR_Luv2RGB’, ‘COLOR_M_RGBA2RGBA’, ‘COLOR_RGB2BGR’, ‘COLOR_RGB2BGR555’, ‘COLOR_RGB2BGR565’, ‘COLOR_RGB2BGRA’, ‘COLOR_RGB2GRAY’, ‘COLOR_RGB2HLS’, ‘COLOR_RGB2HLS_FULL’, ‘COLOR_RGB2HSV’, ‘COLOR_RGB2HSV_FULL’, ‘COLOR_RGB2LAB’, ‘COLOR_RGB2LUV’, ‘COLOR_RGB2Lab’, ‘COLOR_RGB2Luv’, ‘COLOR_RGB2RGBA’, ‘COLOR_RGB2XYZ’, ‘COLOR_RGB2YCR_CB’, ‘COLOR_RGB2YCrCb’, ‘COLOR_RGB2YUV’, 
‘COLOR_RGB2YUV_I420’, ‘COLOR_RGB2YUV_IYUV’, ‘COLOR_RGB2YUV_YV12’, ‘COLOR_RGBA2BGR’, ‘COLOR_RGBA2BGR555’, ‘COLOR_RGBA2BGR565’, ‘COLOR_RGBA2BGRA’, ‘COLOR_RGBA2GRAY’, ‘COLOR_RGBA2M_RGBA’, ‘COLOR_RGBA2RGB’, ‘COLOR_RGBA2YUV_I420’, ‘COLOR_RGBA2YUV_IYUV’, ‘COLOR_RGBA2YUV_YV12’, ‘COLOR_RGBA2mRGBA’, ‘COLOR_XYZ2BGR’, ‘COLOR_XYZ2RGB’, ‘COLOR_YCR_CB2BGR’, ‘COLOR_YCR_CB2RGB’, ‘COLOR_YCrCb2BGR’, ‘COLOR_YCrCb2RGB’, ‘COLOR_YUV2BGR’, ‘COLOR_YUV2BGRA_I420’, ‘COLOR_YUV2BGRA_IYUV’, ‘COLOR_YUV2BGRA_NV12’, ‘COLOR_YUV2BGRA_NV21’, ‘COLOR_YUV2BGRA_UYNV’, ‘COLOR_YUV2BGRA_UYVY’, ‘COLOR_YUV2BGRA_Y422’, ‘COLOR_YUV2BGRA_YUNV’, ‘COLOR_YUV2BGRA_YUY2’, ‘COLOR_YUV2BGRA_YUYV’, ‘COLOR_YUV2BGRA_YV12’, ‘COLOR_YUV2BGRA_YVYU’, ‘COLOR_YUV2BGR_I420’, ‘COLOR_YUV2BGR_IYUV’, ‘COLOR_YUV2BGR_NV12’, ‘COLOR_YUV2BGR_NV21’, ‘COLOR_YUV2BGR_UYNV’, ‘COLOR_YUV2BGR_UYVY’, ‘COLOR_YUV2BGR_Y422’, ‘COLOR_YUV2BGR_YUNV’, ‘COLOR_YUV2BGR_YUY2’, ‘COLOR_YUV2BGR_YUYV’, ‘COLOR_YUV2BGR_YV12’, ‘COLOR_YUV2BGR_YVYU’, ‘COLOR_YUV2GRAY_420’, ‘COLOR_YUV2GRAY_I420’, ‘COLOR_YUV2GRAY_IYUV’, ‘COLOR_YUV2GRAY_NV12’, ‘COLOR_YUV2GRAY_NV21’, ‘COLOR_YUV2GRAY_UYNV’, ‘COLOR_YUV2GRAY_UYVY’, ‘COLOR_YUV2GRAY_Y422’, ‘COLOR_YUV2GRAY_YUNV’, ‘COLOR_YUV2GRAY_YUY2’, ‘COLOR_YUV2GRAY_YUYV’, ‘COLOR_YUV2GRAY_YV12’, ‘COLOR_YUV2GRAY_YVYU’, ‘COLOR_YUV2RGB’, ‘COLOR_YUV2RGBA_I420’, ‘COLOR_YUV2RGBA_IYUV’, ‘COLOR_YUV2RGBA_NV12’, ‘COLOR_YUV2RGBA_NV21’, ‘COLOR_YUV2RGBA_UYNV’, ‘COLOR_YUV2RGBA_UYVY’, ‘COLOR_YUV2RGBA_Y422’, ‘COLOR_YUV2RGBA_YUNV’, ‘COLOR_YUV2RGBA_YUY2’, ‘COLOR_YUV2RGBA_YUYV’, ‘COLOR_YUV2RGBA_YV12’, ‘COLOR_YUV2RGBA_YVYU’, ‘COLOR_YUV2RGB_I420’, ‘COLOR_YUV2RGB_IYUV’, ‘COLOR_YUV2RGB_NV12’, ‘COLOR_YUV2RGB_NV21’, ‘COLOR_YUV2RGB_UYNV’, ‘COLOR_YUV2RGB_UYVY’, ‘COLOR_YUV2RGB_Y422’, ‘COLOR_YUV2RGB_YUNV’, ‘COLOR_YUV2RGB_YUY2’, ‘COLOR_YUV2RGB_YUYV’, ‘COLOR_YUV2RGB_YV12’, ‘COLOR_YUV2RGB_YVYU’, ‘COLOR_YUV420P2BGR’, ‘COLOR_YUV420P2BGRA’, ‘COLOR_YUV420P2GRAY’, ‘COLOR_YUV420P2RGB’, ‘COLOR_YUV420P2RGBA’, ‘COLOR_YUV420SP2BGR’, ‘COLOR_YUV420SP2BGRA’, ‘COLOR_YUV420SP2GRAY’, ‘COLOR_YUV420SP2RGB’, ‘COLOR_YUV420SP2RGBA’, ‘COLOR_YUV420p2BGR’, ‘COLOR_YUV420p2BGRA’, ‘COLOR_YUV420p2GRAY’, ‘COLOR_YUV420p2RGB’, ‘COLOR_YUV420p2RGBA’, ‘COLOR_YUV420sp2BGR’, ‘COLOR_YUV420sp2BGRA’, ‘COLOR_YUV420sp2GRAY’, ‘COLOR_YUV420sp2RGB’, ‘COLOR_YUV420sp2RGBA’, ‘COLOR_mRGBA2RGBA’]
Process finished with exit code 0
Picking out the blue object in the camera feed:
import cv2 as cv
import numpy as np

if __name__ == '__main__':
    cap = cv.VideoCapture(0)
    while True:
        # Take each frame
        _, frame = cap.read()
        # Convert BGR to HSV
        hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
        # define range of blue color in HSV
        lower_blue = np.array([110, 50, 50])
        upper_blue = np.array([130, 255, 255])
        # Threshold the HSV image to get only blue colors
        mask = cv.inRange(hsv, lower_blue, upper_blue)
        # Bitwise-AND mask and original image
        res = cv.bitwise_and(frame, frame, mask=mask)
        cv.imshow('frame', frame)
        cv.imshow('mask', mask)
        cv.imshow('res', res)
        k = cv.waitKey(5) & 0xFF
        if k == 27:
            break
    cv.destroyAllWindows()

Effect:
Modified parameters:
Effect afterwards:
A piece of C++ code for drawing a rectangle:
Exercise
Write code that uses the computer's camera to find an object of an arbitrary (self-chosen) color and of a certain minimum size in the field of view, box it with a rectangle, and display the result on the image.
I know how to enclose the object with a circle, but not yet with a rectangle:
Code:
Effect:
Drawing an upright (axis-aligned) bounding rectangle:
Everything else stays the same; only the body after the if changes to this:
cv2.boundingRect(img) is straightforward: its argument img is a binary image (in practice it is usually called on a contour), and it returns four values, x, y, w, h.
x and y are the coordinates of the rectangle's top-left corner, and w and h are its width and height.
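A small sketch of boundingRect() applied to the largest contour in a binary image (the threshold value and file name are placeholders):

import cv2

img = cv2.imread('me1.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    c = max(contours, key=cv2.contourArea)        # largest contour
    x, y, w, h = cv2.boundingRect(c)              # top-left corner + width/height
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('bounding rect', img)
cv2.waitKey(0)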
Trying a different color:
The thresholds have to be set accurately for the object to be boxed correctly.
There is also a way to draw a rotated (minimum-area) rectangle:
minAreaRect() returns the rectangle's center coordinates, its width and height, and its rotation angle in [-90, 0); when the rectangle is horizontal or vertical it returns -90.
If rect is its return value, then:
center = rect[0] and width, height = rect[1] (note that the second element is itself a tuple).
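A sketch of drawing that rotated rectangle; boxPoints() converts the rect into its four corner points (the point set here is synthetic, standing in for a contour from cv2.findContours):

import cv2
import numpy as np

# a small synthetic point set standing in for a contour from cv2.findContours
c = np.array([[[10, 10]], [[100, 40]], [[90, 120]], [[20, 90]]], dtype=np.int32)
img = np.zeros((160, 160, 3), dtype=np.uint8)

rect = cv2.minAreaRect(c)            # ((cx, cy), (w, h), angle)
center = rect[0]
width, height = rect[1]

box = cv2.boxPoints(rect)            # four corner points of the rotated rectangle
box = np.int32(box)                  # drawContours needs integer coordinates
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)

cv2.imshow('rotated rect', img)
cv2.waitKey(0)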
Extracting contours:
import cv2

if __name__ == '__main__':
    img = cv2.imread("me1.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(img, contours, -1, (0, 0, 255), 3)
    cv2.imshow("img", img)
    cv2.waitKey(0)

Effect:
Boxing blocks of a given color (I still don't know where the green line through the middle comes from...)
Then I noticed that some of the colored objects still aren't being boxed...
Code that boxes a given object:
import cv2
import numpy as np

img = cv2.pyrDown(cv2.imread("hammer.png", cv2.IMREAD_UNCHANGED))
ret, thresh = cv2.threshold(cv2.cvtColor(img.copy(), cv2.COLOR_BGR2GRAY),
                            127, 255, cv2.THRESH_BINARY)
# OpenCV 4.x: findContours returns (contours, hierarchy), not (image, contours, hierarchy)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # find bounding box coordinates
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # find minimum area rectangle
    rect = cv2.minAreaRect(c)
    # calculate coordinates of the minimum area rectangle
    box = cv2.boxPoints(rect)
    # normalize coordinates to integers (the original had np.into, which does not exist)
    box = np.int32(box)
    # draw contours
    cv2.drawContours(img, [box], 0, (0, 0, 255), 3)

    # calculate center and radius of minimum enclosing circle
    (x, y), radius = cv2.minEnclosingCircle(c)
    # cast to integers
    center = (int(x), int(y))
    radius = int(radius)
    # draw the circle
    img = cv2.circle(img, center, radius, (0, 255, 0), 2)

cv2.drawContours(img, contours, -1, (255, 0, 0), 1)
cv2.imshow("contours", img)
cv2.waitKey(0)

Original image:
I couldn't figure out where the bug was and hadn't gotten it to run (the likely culprits are the np.into call, which is not a NumPy function, and unpacking findContours into three values under OpenCV 4; both are fixed above).
The main problem is that I still don't know how a lot of these functions are supposed to be used...
Tracking a specific color:
import cv2 as cv
import numpy as np
import imutils

if __name__ == '__main__':
    cap = cv.VideoCapture(0)
    while True:
        # Read each frame
        _, frame = cap.read()
        # Resize the frame to speed up processing
        frame = imutils.resize(frame, width=600)
        # Gaussian blur
        blurred = cv.GaussianBlur(frame, (11, 11), 0)
        # Convert to the HSV color space
        hsv = cv.cvtColor(blurred, cv.COLOR_BGR2HSV)
        # HSV thresholds for the target ("red") object
        lower_red = np.array([20, 100, 100])
        upper_red = np.array([220, 255, 255])
        # Binarize the image
        mask = cv.inRange(hsv, lower_red, upper_red)
        # Erode, then dilate, to filter out noise
        mask = cv.erode(mask, None, iterations=2)
        mask = cv.dilate(mask, None, iterations=2)
        cv.imshow('mask', mask)
        # Find contours in the mask
        cnts = cv.findContours(mask.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2]
        # If at least one contour exists
        if len(cnts) > 0:
            # Find the contour with the largest area
            c = max(cnts, key=cv.contourArea)
            # Minimum enclosing circle of the largest contour
            ((x, y), radius) = cv.minEnclosingCircle(c)
            # Moments of the contour
            M = cv.moments(c)
            # Centroid of the contour
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
            # Only handle contours that are large enough
            if radius > 5:
                # Intended to draw the minimum enclosing circle, but both corners are the
                # same point, so this draws a degenerate (invisible) rectangle
                cv.rectangle(frame, (int(x), int(y)), (int(x), int(y)), (0, 255, 255), 2)
                # Draw the centroid
        cv.imshow('frame', frame)
        k = cv.waitKey(5) & 0xFF
        if k == 27:
            break
    cap.release()
    cv.destroyAllWindows()
I don't know why there is only a shadow when I test with a blue object... and the circle never shows up.....
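The missing circle is most likely because the draw call passes the same point twice to cv.rectangle, so nothing visible is drawn. A sketch of what the block under if radius > 5: presumably should be:

# inside: if radius > 5:
# draw the minimum enclosing circle around the object
cv.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
# draw the centroid as a small filled dot
cv.circle(frame, center, 5, (0, 0, 255), -1)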
Tracking:
import cv2
import numpy as np

if __name__ == '__main__':
    cap = cv2.VideoCapture(0)  # open the camera
    while True:
        ret, frame = cap.read()  # read each frame
        hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # convert the frame to HSV
        lower = np.array([0, 104, 205])
        upper = np.array([15, 208, 255])
        mask = cv2.inRange(hsv_frame, lower, upper)
        # OpenCV 4.x: findContours returns (contours, hierarchy)
        conts, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # find contours
        cv2.drawContours(frame, conts, -1, (0, 255, 0), 3)  # draw the contours
        dst = cv2.bitwise_and(frame, frame, mask=mask)  # bitwise AND to extract the tracked color
        # cv2.imshow("mask", mask)
        # cv2.imshow("dst", dst)
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xff == 27:
            break
    cv2.destroyAllWindows()

Dynamic (motion-based) object detection:
import cv2
import numpy as np

camera = cv2.VideoCapture(0)  # 0 selects the first camera
# Check whether the camera opened
if camera.isOpened():
    print('Open')
else:
    print('Camera did not open')

# For testing: print the video frame size
size = (int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT)))
print('size:' + repr(size))

es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
kernel = np.ones((5, 5), np.uint8)
background = None

while True:
    # Read the video stream
    grabbed, frame_lwpCV = camera.read()
    # Preprocess the frame: convert to grayscale, then apply Gaussian blur.
    # Blurring smooths out the noise every video has (camera vibration, lighting
    # changes, sensor noise) so that it is not detected as motion later.
    gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
    gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)

    # Use the first frame as the background for the whole input
    if background is None:
        background = gray_lwpCV
        continue

    # For every later frame, compute its difference from the background to get a
    # difference map, then threshold it to a black-and-white image and dilate it
    # to normalize holes and imperfections.
    diff = cv2.absdiff(background, gray_lwpCV)
    diff = cv2.threshold(diff, 148, 255, cv2.THRESH_BINARY)[1]  # binary threshold
    diff = cv2.dilate(diff, es, iterations=2)  # morphological dilation

    # Draw bounding rectangles
    image, contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL,
                                                  cv2.CHAIN_APPROX_SIMPLE)  # find the contours of objects in the frame
    for c in contours:
        if cv2.contourArea(c) < 1500:
            # Only show contours above a given area, so tiny changes are ignored;
            # with stable lighting and a low-noise camera this minimum can be dropped.
            continue
        (x, y, w, h) = cv2.boundingRect(c)  # compute the contour's bounding box
        cv2.rectangle(frame_lwpCV, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('contours', frame_lwpCV)
    cv2.imshow('dis', diff)

    key = cv2.waitKey(1) & 0xFF
    # Press 'q' to exit the loop
    if key == ord('q'):
        break

# When everything is done, release the capture
camera.release()
cv2.destroyAllWindows()

I suspect something changed in OpenCV 4; this code keeps raising an error on my machine:
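The error is almost certainly the findContours() call: in OpenCV 4.x it returns (contours, hierarchy), while the three-value unpacking above matches the OpenCV 3.x signature. A version-agnostic way to write that line (contours is always the second-to-last returned value):

# Works under both OpenCV 3.x and 4.x
result = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = result[-2]
hierarchy = result[-1]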
I haven't had time to read the official documentation yet; I'll look at it later.
Summary