MATLAB: Stereo camera calibration and 3×4 camera matrices

camera matrix, camera-calibration, computer vision, Computer Vision Toolbox, stereo vision

I'm using the Stereo Camera Calibration toolbox in MATLAB and have successfully obtained the stereo parameters. However, my application requires a 3×4 camera matrix (à la slide 29 of these notes). The cameraMatrix function provided by MATLAB produces a 4×3 matrix, and it's unclear to me whether this is simply a projection matrix or is actually capable of converting homogeneous world coordinates to pixel coordinates.
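(For reference, in the column-vector convention of those notes the matrix factors as M = K*[R t], where K is the 3×3 intrinsic matrix and [R t] the extrinsics, so that [u*w, v*w, w]' = M*[X, Y, Z, 1]' and the pixel coordinates follow by dividing out w.)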
Here is my code to get the stereo parameters:
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(imageFileNames1, imageFileNames2);
% Generate world coordinates of the checkerboard corners (keypoints) in the
% pattern-centric coordinate system, with the upper-left corner at (0,0)
squareSize = 15; % in units of 'mm'
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Calibrate the camera
[stereoParams, pairsUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
'EstimateSkew', false, 'EstimateTangentialDistortion', false, ...
'NumRadialDistortionCoefficients', 2, 'WorldUnits', 'mm');
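As a quick sanity check on the calibration itself, the toolbox can plot reprojection errors (a side note, not part of my pipeline):
% Visualize the mean reprojection error for each image pair
showReprojectionErrors(stereoParams);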
Next, I find the camera matrices such that [u v 1]' = M*[X Y Z 1]', where M is 3×4 and [u,v] are the pixel coordinates of a point [X,Y,Z] in 3-D world space.
% Find camera matrices M1 and M2 for each camera
[R, t] = extrinsics(imagePoints(:,:,21,1), worldPoints, stereoParams.CameraParameters1);
K = stereoParams.CameraParameters1.IntrinsicMatrix';
M1 = K * [R' t'];
[R, t] = extrinsics(imagePoints(:,:,21,2), worldPoints, stereoParams.CameraParameters2);
K = stereoParams.CameraParameters2.IntrinsicMatrix';
M2 = K * [R' t'];
With my set of calibration images, the matrix M1 is:
M1 = [ 6899       164.0    -234.2    1491000;
       -103.8    6891       544.7      35440;
          0.1257   -0.0106    0.992     2138]
So, my main question is: has anyone come across prepackaged code that computes a 3×4 camera matrix, against which I could check my work? Or do you see where I might be going wrong or losing accuracy? When I convert a point from image coordinates in camera 1 to world coordinates (assuming Z = 0), and then project that world point into camera 2, the points don't match up accurately (example attached). Note that in both calibration images the target rests on a flat surface, which is the plane I use to define the origin of the world coordinate system.
I've also attached the functions I wrote to convert from world to image and image to world space. It's basically just algebra.
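For reference, that algebra boils down to the following (a minimal sketch with hypothetical names, not the attached files; with Z = 0 the world-to-image map reduces to an invertible 3×3 homography):
function uv = worldToImagePoint(M, XYZ)
% Project a 3-D world point through a 3-by-4 camera matrix M
p = M * [XYZ(:); 1];    % homogeneous image coordinates
uv = p(1:2) / p(3);     % divide out the scale factor
end
function XY = imageToWorldPoint(M, uv)
% Back-project a pixel onto the world plane Z = 0
H = M(:, [1 2 4]);      % with Z = 0, M collapses to a 3-by-3 homography
p = H \ [uv(:); 1];
XY = p(1:2) / p(3);
end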

Best Answer

Hi Tracy,
The functions in the Image Processing Toolbox and the Computer Vision System Toolbox use the pre-multiplication convention for coordinate transformations: row vectors multiplied by a matrix. This differs from the convention you may see in many textbooks, and it requires the matrix to be transposed. That is why you are seeing a 4×3 matrix instead of a 3×4 matrix. It does indeed map homogeneous world coordinates onto homogeneous image coordinates (ignoring distortion), but, again, it operates on row vectors:
hw = [X Y Z 1] * P;   % hw = [xw, yw, w]; pixel coordinates are hw(1:2) / hw(3)
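So the 4×3 matrix returned by cameraMatrix is simply the transpose of the 3×4 matrix you are after. A sketch of the cross-check (assuming R and t come from extrinsics, as in your code):
P = cameraMatrix(stereoParams.CameraParameters1, R, t);   % 4-by-3, row-vector convention
M1check = P';                                             % 3-by-4, column-vector convention
% M1check should match your M1 up to numerical precision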
Now back to your problem. I would guess that the mismatch you are seeing happens because imagePoints come from the original images, which have some lens distortion. To do your test correctly, you have to first undistort the images using the aptly named undistortImage function, and then detect the checkerboard in the undistorted images. Then you can compute the extrinsics and the camera matrix from the resulting image points.
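A sketch of that workflow for one camera (I1 stands for one of your calibration images; the variable names are illustrative):
% Undistort first, then detect the checkerboard and compute extrinsics
J1 = undistortImage(I1, stereoParams.CameraParameters1);
[undistortedPoints, boardSize] = detectCheckerboardPoints(J1);
[R1, t1] = extrinsics(undistortedPoints, worldPoints, stereoParams.CameraParameters1);
M1 = stereoParams.CameraParameters1.IntrinsicMatrix' * [R1' t1'];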
By the way, as of the R2014b release the cameraParameters class has a pointsToWorld method, which maps image points to world points on a plane. And while I am at it, there is also a graphical app for stereo calibration.
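For example (a sketch; R1 and t1 come from extrinsics, and imagePts are undistorted image points):
% Map undistorted image points onto the world plane defined by R1 and t1
worldPts = pointsToWorld(stereoParams.CameraParameters1, R1, t1, imagePts);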