
Augmented Reality on libQGLViewer and OpenCV-OpenGL tips [w/code]

How to create an augmented reality app with QGLViewer, including some tips on camera calibration and extrinsic parameters from OpenCV to OpenGL.

You already know I love libQGLViewer. So here's a snippet on how to do AR in a QGLViewer widget. It only requires a couple of tweaks/overloads to the plain-vanilla widget setup (using the matrices properly, disabling the mouse bindings) and it works.

The major problem I see with getting working AR from OpenCV's intrinsic and extrinsic camera parameters is translating them to OpenGL. I've seen a whole lot of solutions online, and I contributed from my own experience a while back, so I want to revisit the topic here in the context of libQGLViewer, with a couple of extra tips.

Intrinsic parameters and the projection matrix

We all know the intrinsic parameter matrix that is obtained from a calibration process:

[latexpage]
\[\begin{pmatrix} \alpha & 0 & c_x \\ 0 & \beta & c_y \\ 0 & 0 & 1 \end{pmatrix}\]

It can be approximated with mock values if you know the frame size but cannot calibrate the camera.
For example, for a 640×480 frame the matrix would be:

\[\begin{pmatrix} 640 & 0 & 320 \\ 0 & 640 & 240 \\ 0 & 0 & 1 \end{pmatrix}\]

Using

$$ \alpha = \beta = \max(width, height) $$

as the focal length and pixel size dependent parameter (this number is not the focal length!).
If you want precision, calibrate the camera or get the calibration matrix from somewhere, but if you just want to hack – this is a rough approximation.
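As a minimal sketch of that hack (the struct and function names are my own, for illustration): use max(width, height) as the focal-length term and the frame center as the principal point.

```cpp
#include <algorithm>

// Rough intrinsics for an uncalibrated camera: max(width, height) as the
// focal-length term, frame center as the principal point. A hack, not a
// substitute for real calibration.
struct Intrinsics { double fx, fy, cx, cy; };

Intrinsics mockIntrinsics(int width, int height) {
    const double f = static_cast<double>(std::max(width, height));
    return { f, f, width / 2.0, height / 2.0 };
}
```

For a 640×480 frame this yields exactly the matrix above: fx = fy = 640, cx = 320, cy = 240.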
Getting the projection matrix that is derived from this matrix is fairly simple. It’s the following 4×4 matrix:

\[\begin{pmatrix} \frac{f_x}{c_x} & 0 & 0 & 0 \\ 0 & \frac{f_y}{c_y} & 0 & 0 \\ 0 & 0 & \frac{-(far+near)}{far-near} & \frac{-2.0*far*near}{far-near} \\ 0 & 0 & -1 & 0 \end{pmatrix}\]

And in code:

double near = 1.0, far = 100.0; // clipping planes
Mat_<double> persp(4,4); persp.setTo(0);
// http://kgeorge.github.io/2014/03/08/calculating-opengl-perspective-matrix-from-opencv-intrinsic-matrix/
double fx = camMat.at<float>(0,0);
double fy = camMat.at<float>(1,1);
double cx = camMat.at<float>(0,2);
double cy = camMat.at<float>(1,2);
persp(0,0) = fx/cx;
persp(1,1) = fy/cy;
persp(2,2) = -(far+near)/(far-near);
persp(2,3) = -2.0*far*near / (far-near);
persp(3,2) = -1.0;
cout << "perspective m \n" << persp << endl;
persp = persp.t(); //to col-major for OpenGL
glMatrixMode(GL_PROJECTION);
glLoadMatrixd((double*)persp.data);

It works, now let’s keep going.

Extrinsic parameters

Another point I see people struggle with is taking the output of solvePnP() and building the modelview matrix for OpenGL.
Many of the guides say "simply use R and t as they are", but that's not exactly the case: we need to flip the Y and Z axes because of the differing OpenCV and OpenGL camera conventions.

cv::Mat Rvec, Tvec;
cv::Mat raux, taux; // keep these around between frames (e.g. as members)
// ObjPoints: the 3D model points; Points(trackedFeatures): the tracked 2D points
cv::solvePnP(ObjPoints, Points(trackedFeatures), camMat, Mat(), raux, taux, !raux.empty());
raux.convertTo(Rvec, CV_32F);
taux.convertTo(Tvec, CV_64F);
Mat Rot(3,3,CV_32FC1);
Rodrigues(Rvec, Rot);
// [R | t] matrix
Mat_<double> para = Mat_<double>::eye(4,4);
Rot.convertTo(para(Rect(0,0,3,3)),CV_64F);
Tvec.copyTo(para(Rect(3,0,1,3)));
Mat cvToGl = Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0, 0) =  1.0;
cvToGl.at<double>(1, 1) = -1.0; // invert the Y axis
cvToGl.at<double>(2, 2) = -1.0; // invert the Z axis
cvToGl.at<double>(3, 3) =  1.0;
para = cvToGl * para;
Mat(para.t()).copyTo(modelview_matrix); // transpose to col-major for OpenGL

This should get you going.
Remember, raux and taux can be fed back into solvePnP() when processing the next frame as an initial guess (that's what the !raux.empty() flag does).

Setting up QGLViewer

The first step is to get the projection matrix uploaded. This has to be done via the viewer's camera() object. The documentation suggests we subclass it, so here's how it's done:

class OpenCVCamera : public qglviewer::Camera {
public:
    Mat camMat;
    virtual void loadProjectionMatrix(bool reset) const {
        static Mat_<double> persp;
        double near = 1, far = 100.0;
        glMatrixMode(GL_PROJECTION);
        if(persp.empty()) {
            persp.create(4,4); persp.setTo(0);
            // http://kgeorge.github.io/2014/03/08/calculating-opengl-perspective-matrix-from-opencv-intrinsic-matrix/
            double fx = camMat.at<float>(0,0);
            double fy = camMat.at<float>(1,1);
            double cx = camMat.at<float>(0,2);
            double cy = camMat.at<float>(1,2);
            persp(0,0) = fx/cx;
            persp(1,1) = fy/cy;
            persp(2,2) = -(far+near)/(far-near);
            persp(2,3) = -2.0*far*near / (far-near);
            persp(3,2) = -1.0;
            cout << "perspective m \n" << persp << endl;
            persp = persp.t(); //to col-major
        }
        glLoadMatrixd((double*)persp.data);
    }
};

Apparently the loadProjectionMatrix() function gets called every frame, so I optimized by caching the "persp" matrix and thereafter simply loading the prepared matrix.
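As an aside, the transpose-to-column-major step is where most mistakes hide. Here is the same matrix written directly into the flat double[16] layout that glLoadMatrixd() expects (the function name is mine, for illustration):

```cpp
#include <array>

// Fill a column-major 4x4 projection from the intrinsics, as glLoadMatrixd
// expects; element (row r, col c) lives at flat index c*4 + r.
std::array<double, 16> glProjection(double fx, double fy, double cx, double cy,
                                    double near_, double far_)
{
    std::array<double, 16> m{};                       // zero-initialized
    m[0]  = fx / cx;                                  // (0,0)
    m[5]  = fy / cy;                                  // (1,1)
    m[10] = -(far_ + near_) / (far_ - near_);         // (2,2)
    m[14] = -2.0 * far_ * near_ / (far_ - near_);     // (2,3) -> col 3, row 2
    m[11] = -1.0;                                     // (3,2) -> col 2, row 3
    return m;
}
```

Writing straight into the column-major layout skips the persp.t() transpose entirely; either way, the off-diagonal pair at (2,3) and (3,2) is what usually gets swapped by accident.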
This then needs to be initialized in the QGLWidget's init():

class MyQGLViewer : public QGLViewer {
// ...
private:
  QBasicTimer*         frameTimer;
  RS::OpenCVGLTexture  ocv_tex;
  Mat                  frame;
  Mat                  camMat;
// ...
public:
// ...
  virtual void init() {
      // Enable GL textures
      glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
      glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
      // Nice texture coordinate interpolation
      glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );
      ocv_tex = RS::MakeOpenCVGLTexture(frame);
      setFixedHeight(frame.rows);
      setFixedWidth(frame.cols);
      clearMouseBindings();
      frameTimer->start(1,this);
      OpenCVCamera* c = new OpenCVCamera;
      c->camMat = camMat;
      setCamera(c);
  }
// ...
};

Now, I have a few more things going on there besides the camera(). First, there's the QBasicTimer.

This timer fires every 1 ms (in reality it should be set to ~33 ms for 30 FPS) and uploads the frame to GPU memory to be shown as a texture; we'll see that in a moment.

Then there’s the OpenCV-OpenGL texture object that’s my own implementation, to make life easier when using OpenCV Mats and OpenGL textures. You can get the gist here: https://gist.github.com/royshil/5b96b6a1797e12fcef8d

One more thing: I set the widget to a fixed width and height and removed the mouse bindings. This being an AR program, the mouse should not control the camera (the tracker does), and the window size should stay fixed to avoid having to recreate the projection matrix.

Drawing is trivial, and partially based on the QGLViewer background image example: http://www.libqglviewer.com/examples/contribs.html#backgroundImage

And here is a screenshot (I'm using my own natural-features marker tracker):
