I'm using the Stereo Camera Calibration toolbox in MATLAB and have successfully obtained the stereo parameters. However, my application requires a 3×4 camera matrix (à la slide 29 of these notes). The cameraMatrix function provided by MATLAB returns a 4×3 matrix, and it's a bit unclear to me whether this is simply the transpose of the projection matrix I want, or whether it can actually be used to map homogeneous world coordinates to pixel coordinates.
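For reference, here is how I'm reading the documented convention (just a sketch with placeholder names cameraParams, R, and t standing for one camera's parameters and extrinsics; I may be misreading it):

% If I read the documentation correctly, cameraMatrix returns a 4x3 matrix
% used with row vectors: w * [u v 1] = [X Y Z 1] * camMat
camMat = cameraMatrix(cameraParams, R, t);   % 4x3
M = camMat';                                 % if so, its transpose is the 3x4 matrix I'm after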
Here is my code to get the stereo parameters:
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames1, imageFileNames2);

% Generate world coordinates of the checkerboard corners (keypoints) in the
% pattern-centric coordinate system, with the upper-left corner at (0,0)
squareSize = 15; % in units of 'mm'
worldPoints = generateCheckerboardPoints(boardSize, squareSize);

% Calibrate the camera
[stereoParams, pairsUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
    'EstimateSkew', false, 'EstimateTangentialDistortion', false, ...
    'NumRadialDistortionCoefficients', 2, 'WorldUnits', 'mm');
Next, I find the camera matrices such that [u v 1]' = M*[X Y Z 1]', where M is 3×4 and [u,v] are the pixel coordinates of a point [X,Y,Z] in 3-D world space.
% Find camera matrices M1 and M2 for each camera
[R, t] = extrinsics(imagePoints(:,:,21,1), worldPoints, stereoParams.CameraParameters1);
K = stereoParams.CameraParameters1.IntrinsicMatrix';
M1 = K * [R' t'];

[R, t] = extrinsics(imagePoints(:,:,21,2), worldPoints, stereoParams.CameraParameters1);
K = stereoParams.CameraParameters2.IntrinsicMatrix';
M2 = K * [R' t'];
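As a quick sanity check on M1, I reproject the checkerboard corners and compare them to the detected points for the same view (a rough sketch; it ignores lens distortion, so the detected points are treated as if they were undistorted):

% Reproject the checkerboard corners with M1 and compare to the detected
% points in view 21 of camera 1 (lens distortion ignored here).
N = size(worldPoints, 1);
proj = M1 * [worldPoints, zeros(N,1), ones(N,1)]';            % 3xN homogeneous pixel coords
uv = [proj(1,:) ./ proj(3,:); proj(2,:) ./ proj(3,:)]';       % Nx2 pixel coordinates
meanErr = mean(sqrt(sum((uv - imagePoints(:,:,21,1)).^2, 2))) % mean reprojection error, in pixels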
With my set of calibration images, the matrix M1 is:
M1 = [ 6899      164.0    -234.2   1491000;
       -103.8    6891      544.7     35440;
       0.1257   -0.0106    0.992      2138 ]
So, my main question is: has anyone come across prepackaged code that computes a 3×4 camera matrix against which I could check my work? Or do you see where I might be going wrong or losing accuracy? When I convert a point from image coordinates in camera 1 to world coordinates (assuming z = 0), and then project that world point into camera 2's image, the points don't match up accurately (example attached). Note that the calibration target is resting on a flat surface in both calibration images; that surface is the one I'm using to define the origin (and the z = 0 plane) of the world coordinate system.
I've also attached the functions I wrote to convert from world space to image space and from image space to world space; it's basically just algebra (a condensed sketch of the idea is below).
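This sketch is not the attached functions themselves, just the same algebra; u1 and v1 stand for a pixel location picked in camera 1's image:

% Image -> world on the z = 0 plane, using camera 1's matrix M1.
% With z = 0, [u v 1]' ~ M1(:,[1 2 4]) * [X Y 1]', i.e. a plane-induced homography.
H1 = M1(:, [1 2 4]);
xw = H1 \ [u1; v1; 1];        % back-project the pixel (u1, v1)
xw = xw / xw(3);              % world point (X, Y) on the z = 0 plane

% World -> image in camera 2, using M2.
p2  = M2 * [xw(1); xw(2); 0; 1];
uv2 = p2(1:2) / p2(3)         % where that point should appear in camera 2's image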
Best Answer