Camera Calibration with OpenCV
Cameras have been around for a long time. With the introduction of cheap pinhole cameras at the end of the 20th century, they have become commonplace in everyday life. Although they are inexpensive, this comes at a cost: significant image distortion. Fortunately, these distortions are constant, and they can be corrected with calibration and remapping. Furthermore, with calibration you can also determine the relation between the camera's natural units (pixels) and real-world units (for example millimeters).
Theory
For the distortion, OpenCV takes into account the radial and tangential factors. The radial distortion is modeled with the following formulas:
x_{distorted} = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_{distorted} = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

So for a pixel at the undistorted coordinates (x, y), its position in the distorted image will be (x_{distorted}, y_{distorted}). Radial distortion typically manifests itself as a "barrel" or "fish-eye" effect.
Tangential distortion occurs because the image-taking lens is not perfectly parallel to the imaging plane. It can be represented by the formulas:
x_{distorted} = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)]
y_{distorted} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y]

So OpenCV works with five distortion parameters, usually presented as a one-row matrix with five columns:
distortion_coefficients = (k_1 \; k_2 \; p_1 \; p_2 \; k_3)

Now, for the unit conversion we use the following formula:
\begin{bmatrix} x \\ y \\ w \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}

Here the presence of w is explained by the use of the homogeneous coordinate system (and w = Z). The unknown parameters are f_x and f_y (the camera focal lengths) and (c_x, c_y), the optical center expressed in pixel coordinates. If a common focal length is used for both axes with a given aspect ratio a (usually 1), then f_y = f_x * a and the formula above contains a single focal length f. The matrix containing these four parameters is referred to as the camera matrix (the camera intrinsics). While the distortion coefficients stay the same regardless of the resolution the camera is used at, the camera matrix has to be scaled from the calibrated resolution to the resolution currently in use.
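To make the model above concrete, here is a small sketch (not part of the tutorial's sample program) that projects one 3D point by hand using the radial/tangential formulas and the camera matrix, and cross-checks the result with cv::projectPoints. The intrinsic and distortion values are made-up illustrative numbers, not a real calibration result.

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Illustrative (made-up) intrinsics and distortion coefficients.
    double fx = 650.0, fy = 650.0, cx = 319.5, cy = 239.5;
    double k1 = -0.42, k2 = 0.51, p1 = 0.0, p2 = 0.0, k3 = -0.58;

    // A 3D point expressed in the camera coordinate frame.
    double X = 100.0, Y = 50.0, Z = 1000.0;

    // Perspective division: normalized image coordinates (w = Z).
    double x = X / Z, y = Y / Z;

    // Radial and tangential distortion applied to the normalized point.
    double r2 = x*x + y*y;
    double radial = 1 + k1*r2 + k2*r2*r2 + k3*r2*r2*r2;
    double xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x);
    double yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y;

    // Unit conversion with the camera matrix: pixel coordinates.
    double u = fx*xd + cx;
    double v = fy*yd + cy;
    std::cout << "manual projection: " << u << " " << v << std::endl;

    // The same result via cv::projectPoints (rvec/tvec are zero because the
    // point is already expressed in the camera frame).
    std::vector<cv::Point3f> obj{ cv::Point3f((float)X, (float)Y, (float)Z) };
    std::vector<cv::Point2f> img;
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fy, cy, 0, 0, 1);
    cv::Mat dist = (cv::Mat_<double>(5, 1) << k1, k2, p1, p2, k3);
    cv::projectPoints(obj, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0), K, dist, img);
    std::cout << "projectPoints:     " << img[0] << std::endl;
    return 0;
}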
The process of determining these two matrices is the calibration. These parameters are computed through basic geometrical equations; the equations used depend on the chosen calibration object. Currently OpenCV supports three types of objects for calibration:
- a classical black-white chessboard
- a symmetrical circle pattern
- an asymmetrical circle pattern
To start, you need to take snapshots of these patterns with the camera to be calibrated and let OpenCV find them. Each recognized pattern results in a new equation, and you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and lower for the circle grids; for example, in theory the chessboard pattern requires at least two snapshots. In practice, however, experience shows that to get good calibration results you should take at least 10 snapshots of the pattern in different positions.
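As a minimal illustration of the detection step (the full sample below does the same inside its main loop), the following sketch looks for a 9 x 6 chessboard in a single snapshot and refines the corner locations to sub-pixel accuracy; the file name snapshot.jpg is only a placeholder.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat view = cv::imread("snapshot.jpg");   // placeholder file name
    cv::Size boardSize(9, 6);                    // inner corners per row and column
    std::vector<cv::Point2f> corners;

    bool found = cv::findChessboardCorners(view, boardSize, corners,
                     cv::CALIB_CB_ADAPTIVE_THRESH | cv::CALIB_CB_NORMALIZE_IMAGE);
    if (found)
    {
        // Refine the corner positions, then visualize the detected pattern.
        cv::Mat gray;
        cv::cvtColor(view, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
        cv::drawChessboardCorners(view, boardSize, corners, found);
        cv::imshow("Detected pattern", view);
        cv::waitKey();
    }
    return 0;
}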
Goal
The sample demonstrates the following functionality:
- Determine the distortion matrix
- Determine the camera matrix
- Take input from a camera, a video file or an image list
- Read configuration from an XML/YAML file
- Save the results into an XML/YAML file
- Calculate the re-projection error
Source code
#include <iostream> #include <sstream> #include <string> #include <ctime> #include <cstdio>#include <opencv2/core.hpp> #include <opencv2/core/utility.hpp> #include <opencv2/imgproc.hpp> #include <opencv2/calib3d.hpp> #include <opencv2/imgcodecs.hpp> #include <opencv2/videoio.hpp> #include <opencv2/highgui.hpp>using namespace cv; using namespace std;static void help() {cout << "This is a camera calibration sample." << endl<< "Usage: camera_calibration [configuration_file -- default ./default.xml]" << endl<< "Near the sample file you'll find the configuration file, which has detailed help of ""how to edit it. It may be any OpenCV supported file format XML/YAML." << endl; } class Settings { public:Settings() : goodInput(false) {}enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };enum InputType { INVALID, CAMERA, VIDEO_FILE, IMAGE_LIST };void write(FileStorage& fs) const //Write serialization for this class{fs << "{"<< "BoardSize_Width" << boardSize.width<< "BoardSize_Height" << boardSize.height<< "Square_Size" << squareSize<< "Calibrate_Pattern" << patternToUse<< "Calibrate_NrOfFrameToUse" << nrFrames<< "Calibrate_FixAspectRatio" << aspectRatio<< "Calibrate_AssumeZeroTangentialDistortion" << calibZeroTangentDist<< "Calibrate_FixPrincipalPointAtTheCenter" << calibFixPrincipalPoint<< "Write_DetectedFeaturePoints" << writePoints<< "Write_extrinsicParameters" << writeExtrinsics<< "Write_outputFileName" << outputFileName<< "Show_UndistortedImage" << showUndistorsed<< "Input_FlipAroundHorizontalAxis" << flipVertical<< "Input_Delay" << delay<< "Input" << input<< "}";}void read(const FileNode& node) //Read serialization for this class{node["BoardSize_Width" ] >> boardSize.width;node["BoardSize_Height"] >> boardSize.height;node["Calibrate_Pattern"] >> patternToUse;node["Square_Size"] >> squareSize;node["Calibrate_NrOfFrameToUse"] >> nrFrames;node["Calibrate_FixAspectRatio"] >> aspectRatio;node["Write_DetectedFeaturePoints"] >> writePoints;node["Write_extrinsicParameters"] >> writeExtrinsics;node["Write_outputFileName"] >> outputFileName;node["Calibrate_AssumeZeroTangentialDistortion"] >> calibZeroTangentDist;node["Calibrate_FixPrincipalPointAtTheCenter"] >> calibFixPrincipalPoint;node["Calibrate_UseFisheyeModel"] >> useFisheye;node["Input_FlipAroundHorizontalAxis"] >> flipVertical;node["Show_UndistortedImage"] >> showUndistorsed;node["Input"] >> input;node["Input_Delay"] >> delay;node["Fix_K1"] >> fixK1;node["Fix_K2"] >> fixK2;node["Fix_K3"] >> fixK3;node["Fix_K4"] >> fixK4;node["Fix_K5"] >> fixK5;validate();}void validate(){goodInput = true;if (boardSize.width <= 0 || boardSize.height <= 0){cerr << "Invalid Board size: " << boardSize.width << " " << boardSize.height << endl;goodInput = false;}if (squareSize <= 10e-6){cerr << "Invalid square size " << squareSize << endl;goodInput = false;}if (nrFrames <= 0){cerr << "Invalid number of frames " << nrFrames << endl;goodInput = false;}if (input.empty()) // Check for valid inputinputType = INVALID;else{if (input[0] >= '0' && input[0] <= '9'){stringstream ss(input);ss >> cameraID;inputType = CAMERA;}else{if (readStringList(input, imageList)){inputType = IMAGE_LIST;nrFrames = (nrFrames < (int)imageList.size()) ? 
nrFrames : (int)imageList.size();}elseinputType = VIDEO_FILE;}if (inputType == CAMERA)inputCapture.open(cameraID);if (inputType == VIDEO_FILE)inputCapture.open(input);if (inputType != IMAGE_LIST && !inputCapture.isOpened())inputType = INVALID;}if (inputType == INVALID){cerr << " Input does not exist: " << input;goodInput = false;}flag = 0;if(calibFixPrincipalPoint) flag |= CALIB_FIX_PRINCIPAL_POINT;if(calibZeroTangentDist) flag |= CALIB_ZERO_TANGENT_DIST;if(aspectRatio) flag |= CALIB_FIX_ASPECT_RATIO;if(fixK1) flag |= CALIB_FIX_K1;if(fixK2) flag |= CALIB_FIX_K2;if(fixK3) flag |= CALIB_FIX_K3;if(fixK4) flag |= CALIB_FIX_K4;if(fixK5) flag |= CALIB_FIX_K5;if (useFisheye) {// the fisheye model has its own enum, so overwrite the flagsflag = fisheye::CALIB_FIX_SKEW | fisheye::CALIB_RECOMPUTE_EXTRINSIC;if(fixK1) flag |= fisheye::CALIB_FIX_K1;if(fixK2) flag |= fisheye::CALIB_FIX_K2;if(fixK3) flag |= fisheye::CALIB_FIX_K3;if(fixK4) flag |= fisheye::CALIB_FIX_K4;}calibrationPattern = NOT_EXISTING;if (!patternToUse.compare("CHESSBOARD")) calibrationPattern = CHESSBOARD;if (!patternToUse.compare("CIRCLES_GRID")) calibrationPattern = CIRCLES_GRID;if (!patternToUse.compare("ASYMMETRIC_CIRCLES_GRID")) calibrationPattern = ASYMMETRIC_CIRCLES_GRID;if (calibrationPattern == NOT_EXISTING){cerr << " Camera calibration mode does not exist: " << patternToUse << endl;goodInput = false;}atImageList = 0;}Mat nextImage(){Mat result;if( inputCapture.isOpened() ){Mat view0;inputCapture >> view0;view0.copyTo(result);}else if( atImageList < imageList.size() )result = imread(imageList[atImageList++], IMREAD_COLOR);return result;}static bool readStringList( const string& filename, vector<string>& l ){l.clear();FileStorage fs(filename, FileStorage::READ);if( !fs.isOpened() )return false;FileNode n = fs.getFirstTopLevelNode();if( n.type() != FileNode::SEQ )return false;FileNodeIterator it = n.begin(), it_end = n.end();for( ; it != it_end; ++it )l.push_back((string)*it);return true;} public:Size boardSize; // The size of the board -> Number of items by width and heightPattern calibrationPattern; // One of the Chessboard, circles, or asymmetric circle patternfloat squareSize; // The size of a square in your defined unit (point, millimeter,etc).int nrFrames; // The number of frames to use from the input for calibrationfloat aspectRatio; // The aspect ratioint delay; // In case of a video inputbool writePoints; // Write detected feature pointsbool writeExtrinsics; // Write extrinsic parametersbool calibZeroTangentDist; // Assume zero tangential distortionbool calibFixPrincipalPoint; // Fix the principal point at the centerbool flipVertical; // Flip the captured images around the horizontal axisstring outputFileName; // The name of the file where to writebool showUndistorsed; // Show undistorted images after calibrationstring input; // The input ->bool useFisheye; // use fisheye camera model for calibrationbool fixK1; // fix K1 distortion coefficientbool fixK2; // fix K2 distortion coefficientbool fixK3; // fix K3 distortion coefficientbool fixK4; // fix K4 distortion coefficientbool fixK5; // fix K5 distortion coefficientint cameraID;vector<string> imageList;size_t atImageList;VideoCapture inputCapture;InputType inputType;bool goodInput;int flag;private:string patternToUse;};static inline void read(const FileNode& node, Settings& x, const Settings& default_value = Settings()) {if(node.empty())x = default_value;elsex.read(node); }static inline void write(FileStorage& fs, const String&, const Settings& s ) {s.write(fs); }enum { 
DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,vector<vector<Point2f> > imagePoints );int main(int argc, char* argv[]) {help();//! [file_read]Settings s;const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settingsif (!fs.isOpened()){cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;return -1;}fs["Settings"] >> s;fs.release(); // close Settings file//! [file_read]//FileStorage fout("settings.yml", FileStorage::WRITE); // write config as YAML//fout << "Settings" << s;if (!s.goodInput){cout << "Invalid input detected. Application stopping. " << endl;return -1;}vector<vector<Point2f> > imagePoints;Mat cameraMatrix, distCoeffs;Size imageSize;int mode = s.inputType == Settings::IMAGE_LIST ? CAPTURING : DETECTION;clock_t prevTimestamp = 0;const Scalar RED(0,0,255), GREEN(0,255,0);const char ESC_KEY = 27;//! [get_input]for(;;){Mat view;bool blinkOutput = false;view = s.nextImage();//----- If no more image, or got enough, then stop calibration and show result -------------if( mode == CAPTURING && imagePoints.size() >= (size_t)s.nrFrames ){if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))mode = CALIBRATED;elsemode = DETECTION;}if(view.empty()) // If there are no more images stop the loop{// if calibration threshold was not reached yet, calibrate nowif( mode != CALIBRATED && !imagePoints.empty() )runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);break;}//! [get_input]imageSize = view.size(); // Format input image.if( s.flipVertical ) flip( view, view, 0 );//! [find_pattern]vector<Point2f> pointBuf;bool found;int chessBoardFlags = CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE;if(!s.useFisheye) {// fast check erroneously fails with high distortions like fisheyechessBoardFlags |= CALIB_CB_FAST_CHECK;}switch( s.calibrationPattern ) // Find feature points on the input format{case Settings::CHESSBOARD:found = findChessboardCorners( view, s.boardSize, pointBuf, chessBoardFlags);break;case Settings::CIRCLES_GRID:found = findCirclesGrid( view, s.boardSize, pointBuf );break;case Settings::ASYMMETRIC_CIRCLES_GRID:found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );break;default:found = false;break;}//! [find_pattern]//! [pattern_found]if ( found) // If done with success,{// improve the found corners' coordinate accuracy for chessboardif( s.calibrationPattern == Settings::CHESSBOARD){Mat viewGray;cvtColor(view, viewGray, COLOR_BGR2GRAY);cornerSubPix( viewGray, pointBuf, Size(11,11),Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.1 ));}if( mode == CAPTURING && // For camera only take new samples after delay time(!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) ){imagePoints.push_back(pointBuf);prevTimestamp = clock();blinkOutput = s.inputCapture.isOpened();}// Draw the corners.drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );}//! [pattern_found]//----------------------------- Output Text ------------------------------------------------//! [output_text]string msg = (mode == CAPTURING) ? "100/100" :mode == CALIBRATED ? 
"Calibrated" : "Press 'g' to start";int baseLine = 0;Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);if( mode == CAPTURING ){if(s.showUndistorsed)msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );elsemsg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );}putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ? GREEN : RED);if( blinkOutput )bitwise_not(view, view);//! [output_text]//------------------------- Video capture output undistorted ------------------------------//! [output_undistorted]if( mode == CALIBRATED && s.showUndistorsed ){Mat temp = view.clone();if (s.useFisheye)cv::fisheye::undistortImage(temp, view, cameraMatrix, distCoeffs);elseundistort(temp, view, cameraMatrix, distCoeffs);}//! [output_undistorted]//------------------------------ Show image and check for input commands -------------------//! [await_input]imshow("Image View", view);char key = (char)waitKey(s.inputCapture.isOpened() ? 50 : s.delay);if( key == ESC_KEY )break;if( key == 'u' && mode == CALIBRATED )s.showUndistorsed = !s.showUndistorsed;if( s.inputCapture.isOpened() && key == 'g' ){mode = CAPTURING;imagePoints.clear();}//! [await_input]}// -----------------------Show the undistorted image for the image list ------------------------//! [show_results]if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed ){Mat view, rview, map1, map2;if (s.useFisheye){Mat newCamMat;fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,Matx33d::eye(), newCamMat, 1);fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs, Matx33d::eye(), newCamMat, imageSize,CV_16SC2, map1, map2);}else{initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0), imageSize,CV_16SC2, map1, map2);}for(size_t i = 0; i < s.imageList.size(); i++ ){view = imread(s.imageList[i], IMREAD_COLOR);if(view.empty())continue;remap(view, rview, map1, map2, INTER_LINEAR);imshow("Image View", rview);char c = (char)waitKey();if( c == ESC_KEY || c == 'q' || c == 'Q' )break;}}//! [show_results]return 0; }//! [compute_errors] static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,const vector<vector<Point2f> >& imagePoints,const vector<Mat>& rvecs, const vector<Mat>& tvecs,const Mat& cameraMatrix , const Mat& distCoeffs,vector<float>& perViewErrors, bool fisheye) {vector<Point2f> imagePoints2;size_t totalPoints = 0;double totalErr = 0, err;perViewErrors.resize(objectPoints.size());for(size_t i = 0; i < objectPoints.size(); ++i ){if (fisheye){fisheye::projectPoints(objectPoints[i], imagePoints2, rvecs[i], tvecs[i], cameraMatrix,distCoeffs);}else{projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, imagePoints2);}err = norm(imagePoints[i], imagePoints2, NORM_L2);size_t n = objectPoints[i].size();perViewErrors[i] = (float) std::sqrt(err*err/n);totalErr += err*err;totalPoints += n;}return std::sqrt(totalErr/totalPoints); } //! [compute_errors] //! 
[board_corners] static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,Settings::Pattern patternType /*= Settings::CHESSBOARD*/) {corners.clear();switch(patternType){case Settings::CHESSBOARD:case Settings::CIRCLES_GRID:for( int i = 0; i < boardSize.height; ++i )for( int j = 0; j < boardSize.width; ++j )corners.push_back(Point3f(j*squareSize, i*squareSize, 0));break;case Settings::ASYMMETRIC_CIRCLES_GRID:for( int i = 0; i < boardSize.height; i++ )for( int j = 0; j < boardSize.width; j++ )corners.push_back(Point3f((2*j + i % 2)*squareSize, i*squareSize, 0));break;default:break;} } //! [board_corners] static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,vector<float>& reprojErrs, double& totalAvgErr) {//! [fixed_aspect]cameraMatrix = Mat::eye(3, 3, CV_64F);if( s.flag & CALIB_FIX_ASPECT_RATIO )cameraMatrix.at<double>(0,0) = s.aspectRatio;//! [fixed_aspect]if (s.useFisheye) {distCoeffs = Mat::zeros(4, 1, CV_64F);} else {distCoeffs = Mat::zeros(8, 1, CV_64F);}vector<vector<Point3f> > objectPoints(1);calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);objectPoints.resize(imagePoints.size(),objectPoints[0]);//Find intrinsic and extrinsic camera parametersdouble rms;if (s.useFisheye) {Mat _rvecs, _tvecs;rms = fisheye::calibrate(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, _rvecs,_tvecs, s.flag);rvecs.reserve(_rvecs.rows);tvecs.reserve(_tvecs.rows);for(int i = 0; i < int(objectPoints.size()); i++){rvecs.push_back(_rvecs.row(i));tvecs.push_back(_tvecs.row(i));}} else {rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs,s.flag);}cout << "Re-projection error reported by calibrateCamera: "<< rms << endl;bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints, rvecs, tvecs, cameraMatrix,distCoeffs, reprojErrs, s.useFisheye);return ok; }// Print camera parameters to the output file static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,const vector<Mat>& rvecs, const vector<Mat>& tvecs,const vector<float>& reprojErrs, const vector<vector<Point2f> >& imagePoints,double totalAvgErr ) {FileStorage fs( s.outputFileName, FileStorage::WRITE );time_t tm;time( &tm );struct tm *t2 = localtime( &tm );char buf[1024];strftime( buf, sizeof(buf), "%c", t2 );fs << "calibration_time" << buf;if( !rvecs.empty() || !reprojErrs.empty() )fs << "nr_of_frames" << (int)std::max(rvecs.size(), reprojErrs.size());fs << "image_width" << imageSize.width;fs << "image_height" << imageSize.height;fs << "board_width" << s.boardSize.width;fs << "board_height" << s.boardSize.height;fs << "square_size" << s.squareSize;if( s.flag & CALIB_FIX_ASPECT_RATIO )fs << "fix_aspect_ratio" << s.aspectRatio;if (s.flag){std::stringstream flagsStringStream;if (s.useFisheye){flagsStringStream << "flags:"<< (s.flag & fisheye::CALIB_FIX_SKEW ? " +fix_skew" : "")<< (s.flag & fisheye::CALIB_FIX_K1 ? " +fix_k1" : "")<< (s.flag & fisheye::CALIB_FIX_K2 ? " +fix_k2" : "")<< (s.flag & fisheye::CALIB_FIX_K3 ? " +fix_k3" : "")<< (s.flag & fisheye::CALIB_FIX_K4 ? " +fix_k4" : "")<< (s.flag & fisheye::CALIB_RECOMPUTE_EXTRINSIC ? " +recompute_extrinsic" : "");}else{flagsStringStream << "flags:"<< (s.flag & CALIB_USE_INTRINSIC_GUESS ? " +use_intrinsic_guess" : "")<< (s.flag & CALIB_FIX_ASPECT_RATIO ? 
" +fix_aspectRatio" : "")<< (s.flag & CALIB_FIX_PRINCIPAL_POINT ? " +fix_principal_point" : "")<< (s.flag & CALIB_ZERO_TANGENT_DIST ? " +zero_tangent_dist" : "")<< (s.flag & CALIB_FIX_K1 ? " +fix_k1" : "")<< (s.flag & CALIB_FIX_K2 ? " +fix_k2" : "")<< (s.flag & CALIB_FIX_K3 ? " +fix_k3" : "")<< (s.flag & CALIB_FIX_K4 ? " +fix_k4" : "")<< (s.flag & CALIB_FIX_K5 ? " +fix_k5" : "");}fs.writeComment(flagsStringStream.str());}fs << "flags" << s.flag;fs << "fisheye_model" << s.useFisheye;fs << "camera_matrix" << cameraMatrix;fs << "distortion_coefficients" << distCoeffs;fs << "avg_reprojection_error" << totalAvgErr;if (s.writeExtrinsics && !reprojErrs.empty())fs << "per_view_reprojection_errors" << Mat(reprojErrs);if(s.writeExtrinsics && !rvecs.empty() && !tvecs.empty() ){CV_Assert(rvecs[0].type() == tvecs[0].type());Mat bigmat((int)rvecs.size(), 6, CV_MAKETYPE(rvecs[0].type(), 1));bool needReshapeR = rvecs[0].depth() != 1 ? true : false;bool needReshapeT = tvecs[0].depth() != 1 ? true : false;for( size_t i = 0; i < rvecs.size(); i++ ){Mat r = bigmat(Range(int(i), int(i+1)), Range(0,3));Mat t = bigmat(Range(int(i), int(i+1)), Range(3,6));if(needReshapeR)rvecs[i].reshape(1, 1).copyTo(r);else{//*.t() is MatExpr (not Mat) so we can use assignment operatorCV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);r = rvecs[i].t();}if(needReshapeT)tvecs[i].reshape(1, 1).copyTo(t);else{CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);t = tvecs[i].t();}}fs.writeComment("a set of 6-tuples (rotation vector + translation vector) for each view");fs << "extrinsic_parameters" << bigmat;}if(s.writePoints && !imagePoints.empty() ){Mat imagePtMat((int)imagePoints.size(), (int)imagePoints[0].size(), CV_32FC2);for( size_t i = 0; i < imagePoints.size(); i++ ){Mat r = imagePtMat.row(int(i)).reshape(2, imagePtMat.cols);Mat imgpti(imagePoints[i]);imgpti.copyTo(r);}fs << "image_points" << imagePtMat;} }//! [run_and_save] bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,vector<vector<Point2f> > imagePoints) {vector<Mat> rvecs, tvecs;vector<float> reprojErrs;double totalAvgErr = 0;bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,totalAvgErr);cout << (ok ? "Calibration succeeded" : "Calibration failed")<< ". avg re projection error = " << totalAvgErr << endl;if (ok)saveCameraParams(s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs, imagePoints,totalAvgErr);return ok; } //! [run_and_save]
The application has only a single argument: the name of its configuration file. If none is given, it will try to open the one named "default.xml". Here is an example configuration file in XML format:
<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per a item row and column. (square, circle) -->
  <BoardSize_Width> 9</BoardSize_Width>
  <BoardSize_Height>6</BoardSize_Height>
  <!-- The size of a square in some user defined metric system (pixel, millimeter)-->
  <Square_Size>50</Square_Size>
  <!-- The type of input used for camera calibration. One of: CHESSBOARD CIRCLES_GRID ASYMMETRIC_CIRCLES_GRID -->
  <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>
  <!-- The input to use for calibration.
       To use an input camera -> give the ID of the camera, like "1"
       To use an input video -> give the path of the input video, like "/tmp/x.avi"
       To use an image list -> give the path to the XML or YAML file containing the list of the images, like "/tmp/circles_list.xml" -->
  <Input>"images/CameraCalibration/VID5/VID5.xml"</Input>
  <!-- If true (non-zero) we flip the input images around the horizontal axis.-->
  <Input_FlipAroundHorizontalAxis>0</Input_FlipAroundHorizontalAxis>
  <!-- Time delay between frames in case of camera. -->
  <Input_Delay>100</Input_Delay>
  <!-- How many frames to use, for calibration. -->
  <Calibrate_NrOfFrameToUse>25</Calibrate_NrOfFrameToUse>
  <!-- Consider only fy as a free parameter, the ratio fx/fy stays the same as in the input cameraMatrix. Use or not setting. 0 - False Non-Zero - True-->
  <Calibrate_FixAspectRatio> 1 </Calibrate_FixAspectRatio>
  <!-- If true (non-zero) tangential distortion coefficients are set to zeros and stay zero.-->
  <Calibrate_AssumeZeroTangentialDistortion>1</Calibrate_AssumeZeroTangentialDistortion>
  <!-- If true (non-zero) the principal point is not changed during the global optimization.-->
  <Calibrate_FixPrincipalPointAtTheCenter> 1 </Calibrate_FixPrincipalPointAtTheCenter>
  <!-- The name of the output log file. -->
  <Write_outputFileName>"out_camera_data.xml"</Write_outputFileName>
  <!-- If true (non-zero) we write to the output file the feature points.-->
  <Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>
  <!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
  <Write_extrinsicParameters>1</Write_extrinsicParameters>
  <!-- If true (non-zero) we show after calibration the undistorted images.-->
  <Show_UndistortedImage>1</Show_UndistortedImage>
  <!-- If true (non-zero) will be used fisheye camera model.-->
  <Calibrate_UseFisheyeModel>0</Calibrate_UseFisheyeModel>
  <!-- If true (non-zero) distortion coefficient k1 will be equals to zero.-->
  <Fix_K1>0</Fix_K1>
  <!-- If true (non-zero) distortion coefficient k2 will be equals to zero.-->
  <Fix_K2>0</Fix_K2>
  <!-- If true (non-zero) distortion coefficient k3 will be equals to zero.-->
  <Fix_K3>0</Fix_K3>
  <!-- If true (non-zero) distortion coefficient k4 will be equals to zero.-->
  <Fix_K4>1</Fix_K4>
  <!-- If true (non-zero) distortion coefficient k5 will be equals to zero.-->
  <Fix_K5>1</Fix_K5>
</Settings>
</opencv_storage>

In the configuration file you may choose to use a camera, a video file or an image list as input. If you opt for an image list, you need to create a configuration file that lists the images to use, for example:

<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibraation/VID5/xx1.jpg
images/CameraCalibraation/VID5/xx2.jpg
images/CameraCalibraation/VID5/xx3.jpg
images/CameraCalibraation/VID5/xx4.jpg
images/CameraCalibraation/VID5/xx5.jpg
images/CameraCalibraation/VID5/xx6.jpg
images/CameraCalibraation/VID5/xx7.jpg
images/CameraCalibraation/VID5/xx8.jpg
</images>
</opencv_storage>

The list must contain absolute paths, or paths relative to the application's working directory, so that the images can be accessed. The application starts by reading the configuration file. For reading and writing XML and YAML files, see the tutorial "Reading and Writing XML and YAML Files in OpenCV".
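As a small, hedged sketch of the reading side, the fragment below opens the configuration file above with cv::FileStorage and pulls out a few of its fields; the sample program wraps the same calls in its Settings::read method.

#include <opencv2/core.hpp>
#include <iostream>
#include <string>

int main()
{
    cv::FileStorage fs("default.xml", cv::FileStorage::READ);
    if (!fs.isOpened())
    {
        std::cerr << "Could not open the configuration file" << std::endl;
        return -1;
    }
    cv::FileNode settings = fs["Settings"];

    int boardWidth = 0, boardHeight = 0;
    float squareSize = 0.f;
    std::string pattern, input;
    settings["BoardSize_Width"]   >> boardWidth;
    settings["BoardSize_Height"]  >> boardHeight;
    settings["Square_Size"]       >> squareSize;
    settings["Calibrate_Pattern"] >> pattern;
    settings["Input"]             >> input;

    std::cout << "board " << boardWidth << "x" << boardHeight
              << ", square " << squareSize
              << ", pattern " << pattern
              << ", input " << input << std::endl;
    return 0;
}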
Results
Chessboard pattern
Download link for the chessboard pattern (9 x 6): chessboard
Camera: AXIS IP camera
Image list configuration file: VID5.XML
<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>

The detected chessboard pattern looks like this:
The result after removing the distortion:
Asymmetrical circle pattern
Download link: this asymmetrical circle pattern
Input: camera
With the width set to 4 and the height to 11, the result is as follows:
Calibration output
The stored XML/YAML output file looks like this:
<camera_matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    6.5746697944293521e+002 0. 3.1950000000000000e+002
    0. 6.5746697944293521e+002 2.3950000000000000e+002
    0. 0. 1.
  </data>
</camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    -4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
    -5.7843597214487474e-001
  </data>
</distortion_coefficients>

Put these parameters in your code as constants and call the cv::initUndistortRectifyMap and cv::remap functions to remove the distortion.
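A minimal sketch of that last step, assuming the values from the output above; the image name distorted.jpg is only a placeholder.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // Values copied from the calibration output above.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        6.5746697944293521e+002, 0., 3.1950000000000000e+002,
        0., 6.5746697944293521e+002, 2.3950000000000000e+002,
        0., 0., 1.);
    cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) <<
        -4.1802327176423804e-001, 5.0715244063187526e-001, 0., 0.,
        -5.7843597214487474e-001);

    cv::Mat view = cv::imread("distorted.jpg");   // placeholder image name
    if (view.empty())
        return -1;

    // Build the undistortion maps once, then remap every frame cheaply.
    cv::Mat map1, map2, rview;
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
        cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, view.size(), 1, view.size(), 0),
        view.size(), CV_16SC2, map1, map2);
    cv::remap(view, rview, map1, map2, cv::INTER_LINEAR);

    cv::imshow("Undistorted", rview);
    cv::waitKey();
    return 0;
}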