Learning OpenCV Lecture 5 (Filtering the Images)
- Filtering images using low-pass filters
- Filtering images using a median filter
- Applying directional filters to detect edges
- Computing the Laplacian of an image
cv::medianBlur(image,result,5);

The resulting image is then as follows:

The median filter simply computes the median value of this set, and the current pixel is then replaced by this median value. This explains why the filter is so efficient at eliminating salt-and-pepper noise. Indeed, when an outlier black or white pixel is present in a given pixel neighborhood, it is never selected as the median value (being instead the maximal or minimal value), so it is always replaced by a neighboring value. The median filter also has the advantage of preserving the sharpness of the edges. However, it washes out the textures in uniform regions (for example, the trees in the background).

Applying directional filters to detect edges

The filter we will use here is called the Sobel filter. It is said to be a directional filter because it only affects the vertical or the horizontal image frequencies, depending on which kernel of the filter is used. OpenCV has a function that applies the Sobel operator on an image. The horizontal filter is called as follows:

cv::Sobel(image,sobelX,CV_8U,1,0,3,0.4,128);

While vertical filtering is achieved by the following (and very similar) call:

cv::Sobel(image,sobelY,CV_8U,0,1,3,0.4,128);

These arguments have been chosen to produce an 8-bit (CV_8U) representation of the output. The result of the horizontal Sobel operator is as follows:

In this representation, a zero value corresponds to gray-level 128. Negative values are represented by darker pixels, while positive values are represented by brighter pixels. The vertical Sobel image is:

Since its kernel contains positive and negative values, the result of the Sobel filter is generally computed in a 16-bit signed integer image (CV_16S). The two results (vertical and horizontal) are then combined to obtain the norm of the Sobel filter:

// Compute norm of Sobel
cv::Sobel(image,sobelX,CV_16S,1,0);
cv::Sobel(image,sobelY,CV_16S,0,1);
cv::Mat sobel;
// compute the L1 norm
sobel = abs(sobelX) + abs(sobelY);
The Sobel norm can be conveniently displayed in an image using the optional rescaling parameter of the convertTo method in order to obtain an image in which zero values correspond to white, and higher values are assigned darker gray shades:
// Find Sobel max value
double sobmin, sobmax;
cv::minMaxLoc(sobel,&sobmin,&sobmax);
// Conversion to 8-bit image
// sobelImage = -alpha*sobel + 255
cv::Mat sobelImage;
sobel.convertTo(sobelImage,CV_8U,-255./sobmax,255);

The result can be seen in the following image:
Looking at this image, it is now clear why these kinds of operators are called edge detectors. It is then possible to threshold this image in order to obtain a binary map showing the image contours. The following snippet creates such a binary map:

cv::threshold(sobelImage, sobelThresholded, threshold, 255, cv::THRESH_BINARY);

The Sobel operator is a classic edge detection linear filter that is based on a simple 3x3 kernel, which has the following structure:

If we view the image as a two-dimensional function, the Sobel operator can then be seen as a measure of the variation of the image in the vertical and horizontal directions. In mathematical terms, this measure is called a gradient, and it is defined as a 2D vector made of the function's first derivatives in two orthogonal directions:

The cv::Sobel function computes the result of the convolution of the image with a Sobel kernel. Its complete specification is as follows:

cv::Sobel(image,          // input
          sobel,          // output
          image_depth,    // image type
          xorder, yorder, // kernel specification
          kernel_size,    // size of the square kernel
          alpha, beta);   // scale and offset
Since the gradient is a 2D vector, it has a norm and a direction. The norm of the gradient vector tells you what the amplitude of the variation is, and it is normally computed as the Euclidean norm (also called the L2 norm): sqrt((df/dx)^2 + (df/dy)^2).
However, in image processing, we generally compute this norm as the sum of the absolute values. This is called the L1 norm, and it gives values close to the L2 norm but at a much lower computational cost. This is what we did in this recipe, that is:

// compute the L1 norm
sobel = abs(sobelX) + abs(sobelY);

The gradient vector always points in the direction of the steepest variation. For an image, this means that the gradient direction will be orthogonal to the edge, pointing from the darker to the brighter side. The gradient's angular direction is given by the arctangent of the ratio of the two derivatives, atan2(df/dy, df/dx). Most often, for edge detection, only the norm is computed. But if you require both the norm and the direction, then the following OpenCV function can be used:

// Sobel must be computed in floating points
cv::Sobel(image,sobelX,CV_32F,1,0);
cv::Sobel(image,sobelY,CV_32F,0,1);
// Compute the L2 norm and direction of the gradient
cv::Mat norm, dir;
cv::cartToPolar(sobelX,sobelY,norm,dir);

By default, the direction is computed in radians. Just add true as an additional argument in order to have it computed in degrees.

A binary edge map has been obtained by applying a threshold on the gradient magnitude. Choosing the right threshold is not an obvious task. If the threshold value is too low, too many (thick) edges will be retained, while if we select a more severe (higher) threshold, then broken edges will be obtained. As an illustration of this tradeoff, compare the preceding binary edge map with the following one, obtained using a higher threshold value:

One possible alternative is to use the concept of hysteresis thresholding. This will be explained in the next chapter, where we introduce the Canny operator.
- Computing the Laplacian of an image
main.cpp:
#include "laplacianZC.h"

int main() {
    cv::Mat image = cv::imread("../boldt.jpg", 0);
    if (!image.data) { return 0; }

    cv::namedWindow("Original Image");
    cv::imshow("Original Image", image);

    // Compute Laplacian using LaplacianZC class
    LaplacianZC laplacian;
    laplacian.setAperture(7);
    cv::Mat flap = laplacian.computeLaplacian(image);
    cv::Mat laplace = laplacian.getLaplacianImage();

    cv::namedWindow("Laplacian Image");
    cv::imshow("Laplacian Image", laplace);

    cv::waitKey(0);
    return 0;
}

Formally, the Laplacian of a 2D function is defined as the sum of its second derivatives: d2f/dx2 + d2f/dy2. In its simplest form, it can be approximated by the following 3x3 kernel:

0  1  0
1 -4  1
0  1  0

Consequently, a transition between a positive and a negative Laplacian value (or vice versa) constitutes a good indicator of the presence of an edge. Another way to express this fact is to say that edges will be located at the zero-crossings of the Laplacian function. A white box has been drawn in the following image to show the exact location of this region of interest:

Now looking at the Laplacian values (7x7 kernel) inside this window, we have:

If, as illustrated, you carefully follow the zero-crossings of the Laplacian (located between pixels of different signs), you obtain a curve which corresponds to the edge visible in the image window. This implies that, in principle, you can even detect the image edges at sub-pixel accuracy.

A simplified algorithm can be used to detect the approximate zero-crossing locations. It proceeds as follows: scan the Laplacian image and compare the current pixel with the one at its left.
If the two pixels are of different signs, then declare a zero-crossing at the current pixel; if not, repeat the same test with the pixel immediately above. This algorithm is implemented by the following method, which generates a binary image of zero-crossings:

// Get a binary image of the zero-crossings
// if the product of the two adjacent pixels is
// less than threshold then this zero-crossing
// will be ignored
cv::Mat getZeroCrossing(float threshold = 1.0) {
    // Create the iterators
    cv::Mat_<float>::const_iterator it =
        laplace.begin<float>() + laplace.step1();
    cv::Mat_<float>::const_iterator itend = laplace.end<float>();
    cv::Mat_<float>::const_iterator itup = laplace.begin<float>();

    // Binary image initialized to white
    cv::Mat binary(laplace.size(), CV_8U, cv::Scalar(255));
    cv::Mat_<uchar>::iterator itout = binary.begin<uchar>() + binary.step1();

    // negate the input threshold value
    threshold *= -1.0;

    for ( ; it != itend; ++it, ++itup, ++itout) {
        // if the product of two adjacent pixels is
        // negative then there is a sign change
        if (*it * *(it - 1) < threshold) {
            *itout = 0; // horizontal zero-crossing
        } else if (*it * *itup < threshold) {
            *itout = 0; // vertical zero-crossing
        }
    }
    return binary;
}
Using the function as follows:
double lapmin, lapmax;
cv::minMaxLoc(flap, &lapmin, &lapmax);
// Compute and display the zero-crossing points
cv::Mat zeros;
zeros = laplacian.getZeroCrossing(lapmax);
cv::namedWindow("Zero-crossings");
cv::imshow("Zero-crossings", zeros);

Then we get the following result:
Reposted from: https://www.cnblogs.com/starlitnext/p/3861440.html