20-lines AR in OpenCV [w/code]


Just wanted to share a bit of code using OpenCV's camera extrinsic parameter recovery - solvePnP (or its C counterpart, cvFindExtrinsicCameraParams2) - which estimates the camera's position and rotation. I wanted to get simple planar object surface recovery for augmented reality, but without using any of the AR libraries; instead I dug into some OpenCV and OpenGL code.
This can serve as a primer, or tutorial on how to use OpenCV with OpenGL for AR.

Update 2/16/2015: I wrote another post on OpenCV-OpenGL AR, this time using the fine QGLViewer - a very convenient Qt OpenGL widget.

The program is just straightforward optical-flow-based tracking, fed manually with four points (the planar object's corners), solving the camera pose every frame. Plain vanilla AR.

Well, the whole .cpp file is ~350 lines, but only 20 or fewer of them are interesting... actually much fewer. Let's see what's up.
Continue reading "20-lines AR in OpenCV [w/code]"


Quick and Easy Head Pose Estimation with OpenCV [w/ code]

Update: check out my new post about this http://www.morethantechnical.com/2012/10/17/head-pose-estimation-with-opencv-opengl-revisited-w-code/

Just wanted to share a small thing I did with OpenCV - Head Pose Estimation (sometimes known as Gaze Direction Estimation). Many people try to achieve this and there are a ton of papers covering it, including a recent overview of almost all known methods.

I implemented a very quick & dirty solution based on OpenCV's built-in methods that produced surprising results (I expected it to fail), so I decided to share. It is based on 3D-2D point correspondence: fitting 2D image points to a 3D model. OpenCV provides a magical method - solvePnP - that does this, given some calibration parameters that I completely disregarded.

Here's how it's done

Continue reading "Quick and Easy Head Pose Estimation with OpenCV [w/ code]"


GeekCon 2009: RunVas - Our project [w/ video, img]

Hi everyone

Last weekend I attended GeekCon 2009, a tech conference, with a friend and colleague, Arnon (not Arnon from the blog, who recently had a birthday - Happy B-Day Arnon!). Each attending team had to create a project they could complete within the 2 days of the conference. Our project is called "RunVas", and the basic idea was to let people paint by running around. We wanted to combine computer vision with a little artistic angle.

Here are some more details
Continue reading "GeekCon 2009: RunVas - Our project [w/ video, img]"


Advanced topics in 3D game building [w/ code, video]


The graphics course I took at TAU really expanded my knowledge of 3D rendering, and specifically using OpenGL to do so. The final task of the course, aside from the exam, was to write a 3D game. We were given 3 choices for types of games: worms-like, xonix-like and lightcycle-like. We chose to write our version of Worms in 3D.

I'll try to take you through some of the problems we encountered and the decisions we made, and show as much code as possible. I'm not, however, going to take you through the simple (yet grueling) work of actually drawing meshes to the screen or moving them around; these subjects are covered extensively online.

The whole game is implemented in Java using JOGL and SWT for 3D rendering. The code is of course available entirely online.

Continue reading "Advanced topics in 3D game building [w/ code, video]"


Augmented reality on the iPhone using NyARToolkit [w/ code]


I saw the stats for the blog a while ago and it seems that the augmented reality topic is hot! 400 clicks/day, that's awesome!

So I wanted to share with you my latest development in this field - cross-compiling the AR app to the iPhone. It proved easier than I originally thought, although it took a while to get everything working smoothly.

Basically all I did was take NyARToolkit, compile it for armv6 arch, combine it with Norio Namura's iPhone camera video feed code, slap on some simple OpenGL ES rendering, and bam - Augmented Reality on the iPhone.

Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here's sample code from Apple developer.

This is how I did it...
Continue reading "Augmented reality on the iPhone using NyARToolkit [w/ code]"


Augmented Reality with NyARToolkit, OpenCV & OpenGL


I have been playing around with NyARToolkit's CPP implementation in the last week, and I got some nice results. I tried to keep it as "casual" as I could and not get into the crevices of every library, instead, I wanted to get results and fast.

First, NyARToolkit is a derivative of the wonderful ARToolkit by the talented people @ HIT Lab NZ & HIT Lab at the University of Washington. NyARToolkit, however, has been ported to many other platforms, like Java, C# and even Flash (Papervision3D?), becoming object oriented in the process, in contrast to ARToolkit's procedural approach. The NyARToolkit folks have done a great job, so I decided to build from there.

NyART doesn't provide any video capture or 3D rendering in its CPP implementation (the other ports do), so I set out to build them on my own. OpenCV is like a second language to me, so I decided to take its video-grabbing mechanism wrapper for Win32. For 3D rendering I used the straightforward GLUT library, which does an excellent job of ridding the programmer of all the Win#@$#@ API mumbo-jumbo-CreateWindowEx crap.

So let's dive in....
Continue reading "Augmented Reality with NyARToolkit, OpenCV & OpenGL"


Tracing wild rays


I haven't published in a while. I was swamped with a project for uni, work and my writing...

But the good thing about keeping busy is that after a while you have something to show for it! So here's what I've been working on for the Comp. Graphics course - a ray tracer.

Continue reading "Tracing wild rays"