GeekCon 2009: RunVas - Our project [w/ video, img]

Hi everyone,

Last weekend I attended GeekCon 2009, a tech conference, with a friend and colleague, Arnon (not Arnon from the blog, who recently had a birthday - Happy B-Day Arnon!). Each team had to build a project it could complete within the two days of the conference. Our project is called "RunVas", and the basic idea was to let people run around and paint by doing so. We wanted to combine computer vision with a little artistic angle.

Here are some more details.
Continue reading "GeekCon 2009: RunVas - Our project [w/ video, img]"


Awesome pictures fusing with a GIMP plugin [w/ code]

Switching, merging or swapping - call it what you like, it's a pain to pull off. You need to spend a lot of time tuning the colors, blending the edges and smudging to get a decent result. So I wrote a plugin for the wonderful GIMP program that helps with this process. The merge is done using a blending algorithm that blends the colors from the original image into the pasted image.

I'll write a little bit about coding GIMP plugins, which is very simple, and a bit about the algorithm and its sources.

Let's see how it's done
Continue reading "Awesome pictures fusing with a GIMP plugin [w/ code]"


iPhoneOS 3.1 will not allow marker-based AR


I had very high hopes for iPhoneOS 3.1 in the AR arena. With all the hype around it, I naturally thought that with 3.1, developers would be able to bring marker-detection AR to the App Store - meaning, using legal and published APIs. Looking around 3.1's APIs, however, I wasn't able to find anything that allows this.

Not all AR is banned. In fact, AR apps like Layar will be very much possible, as they rely on the compass & GPS to create the AR effect. These don't require processing the live video feed from the camera, only overlaying data on top of it. This can be done easily with the new cameraOverlayView property of UIImagePickerController. All you need to do is create a transparent view with the required data, and it will be overlaid on the camera preview.

Sadly, to get marker-detection abilities, developers must still hack the system (camera callback rerouting) or use very slow methods (UIGetScreenImage). I can only hope Apple will see the potential of letting developers manipulate the live video feed.



Near realtime face detection on the iPhone w/ OpenCV port [w/code,video]

Hi,
OpenCV is by far my favorite CV/image processing library. When I found an OpenCV port to the iPhone, and saw that someone had even tried to get it to do face detection, I just had to try it for myself.

In this post I'll try to run through the steps I took to get OpenCV running on the iPhone, and then how to get OpenCV's face detection to play nice with iPhoneOS's image buffers and video feed (not yet OS 3.0!). Then I'll talk a little about optimization.

Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here's sample code from Apple developer.
Update: I do not have the xcodeproj file for this project, please don't ask for it. Please see here for compiling OpenCV for the iPhone SDK 4.3.

Let's begin
Continue reading "Near realtime face detection on the iPhone w/ OpenCV port [w/code,video]"


Advanced topics in 3D game building [w/ code, video]


The graphics course I took at TAU really expanded my knowledge of 3D rendering, and specifically using OpenGL to do so. The final task of the course, aside from the exam, was to write a 3D game. We were given 3 choices for types of games: worms-like, xonix-like and lightcycle-like. We chose to write our version of Worms in 3D.

I'll try to take you through some of the problems we encountered and the decisions we made, and show as much code as possible. I'm not, however, going to take you through the simple (yet grueling) work of actually rendering meshes to the screen or moving them around; these subjects are covered extensively online.

The whole game is implemented in Java using JOGL and SWT for 3D rendering. The code is of course available entirely online.

Continue reading "Advanced topics in 3D game building [w/ code, video]"


Augmented reality on the iPhone using NyARToolkit [w/ code]


I saw the stats for the blog a while ago, and it seems the augmented reality topic is hot! 400 clicks/day - that's awesome!

So I wanted to share with you my latest development in this field - cross-compiling the AR app for the iPhone, a job that proved easier than I originally thought, although it took a while to get it working smoothly.

Basically, all I did was take NyARToolkit, compile it for the armv6 arch, combine it with Norio Namura's iPhone camera video feed code, slap on some simple OpenGL ES rendering, and bam - augmented reality on the iPhone.

Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here's sample code from Apple developer.

This is how I did it...
Continue reading "Augmented reality on the iPhone using NyARToolkit [w/ code]"


Augmented Reality with NyARToolkit, OpenCV & OpenGL


I have been playing around with NyARToolkit's CPP implementation over the last week, and I got some nice results. I tried to keep it as "casual" as I could and not get into the crevices of every library; instead, I wanted to get results, and fast.

First, NyARToolkit is a derivative of the wonderful ARToolkit by the talented people at HIT Lab NZ and the HIT Lab at the University of Washington. NyARToolkit, however, has been ported to many other platforms, like Java, C# and even Flash (Papervision3D?), becoming object oriented in the process, in contrast to ARToolkit's procedural approach. The NyARToolkit folks have done a great job, so I decided to build from there.

NyARToolkit doesn't provide any video capturing or 3D rendering in its CPP implementation (the other ports do), so I set out to build those on my own. OpenCV is like a second language to me, so I decided to use its video grabbing wrapper for Win32. For 3D rendering I used the straightforward GLUT library, which does an excellent job of ridding the programmer of all the Win32 API CreateWindowEx mumbo-jumbo.

So let's dive in....
Continue reading "Augmented Reality with NyARToolkit, OpenCV & OpenGL"


Porting Rob Hess's SIFT impl. to Java

This is a Java port of Rob Hess's implementation of SIFT that I did for a project at work.

However, I couldn't port the actual extraction of SIFT descriptors from images, as it relies very heavily on OpenCV. So all that I actually ported to native Java is the KD-Tree feature-matching part; the rest is done through JNI calls to Rob's code.
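The KD-Tree matching mentioned above boils down to a nearest-neighbor search over descriptor vectors. Here's a minimal sketch of the idea in plain Java - hypothetical illustration code, not Rob's actual classes or my port (which also uses best-bin-first search for high-dimensional descriptors):

```java
import java.util.Arrays;

// Minimal KD-tree nearest-neighbor sketch (illustrative, not the ported code).
public class KDTree {
    static class Node {
        double[] point; int axis; Node left, right;
        Node(double[] p, int axis) { this.point = p; this.axis = axis; }
    }

    private Node root;
    private double[] best;
    private double bestDist;

    public KDTree(double[][] points) {
        root = build(points, 0, points.length, 0);
    }

    private Node build(double[][] pts, int lo, int hi, int depth) {
        if (lo >= hi) return null;
        final int axis = depth % pts[0].length;
        // Split on the median along the current axis.
        Arrays.sort(pts, lo, hi, (a, b) -> Double.compare(a[axis], b[axis]));
        int mid = (lo + hi) / 2;
        Node n = new Node(pts[mid], axis);
        n.left = build(pts, lo, mid, depth + 1);
        n.right = build(pts, mid + 1, hi, depth + 1);
        return n;
    }

    public double[] nearest(double[] query) {
        best = null; bestDist = Double.MAX_VALUE;
        search(root, query);
        return best;
    }

    private void search(Node n, double[] q) {
        if (n == null) return;
        double d = dist2(n.point, q);
        if (d < bestDist) { bestDist = d; best = n.point; }
        double diff = q[n.axis] - n.point[n.axis];
        search(diff < 0 ? n.left : n.right, q);
        // Only descend the far side if the splitting plane is closer
        // than the current best (standard KD-tree pruning).
        if (diff * diff < bestDist) search(diff < 0 ? n.right : n.left, q);
    }

    private static double dist2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
        return s;
    }
}
```

In SIFT matching you would run this query for each descriptor of one image against a tree built from the other image's descriptors, keeping matches whose nearest neighbor is sufficiently closer than the second-nearest.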

I wrote this more as a tutorial on Rob's work, with an easy JNI interface to Java.

You can find the sources here:

Here's how to use it:
Continue reading "Porting Rob Hess's SIFT impl. to Java"


Combining Java's BufferedImage and OpenCV's IplImage


I recently did a small project combining a Java web service with OpenCV processing. I tried to transfer the picture from the Java environment (as a BufferedImage) to OpenCV (as an IplImage) as seamlessly as possible. This proved a bit tricky, especially the Java part, where you need to create your own buffer for the image, but it worked out nicely.
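The "create your own buffer" part can be sketched like this: redraw the image into a TYPE_3BYTE_BGR BufferedImage so its backing byte array is packed B,G,R - the layout an 8-bit, 3-channel IplImage expects. This is a minimal illustration under my assumptions (tightly packed rows, no alpha), not necessarily the exact code from the post:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class ImageBridge {
    // Extract a BufferedImage's pixels as a packed BGR byte array,
    // suitable for copying into an 8-bit, 3-channel IplImage
    // (assuming the native side handles any row padding itself).
    public static byte[] toBgrBytes(BufferedImage src) {
        // Redrawing into TYPE_3BYTE_BGR guarantees a tightly packed
        // BGR backing buffer regardless of the source image type.
        BufferedImage bgr = new BufferedImage(src.getWidth(), src.getHeight(),
                                              BufferedImage.TYPE_3BYTE_BGR);
        bgr.getGraphics().drawImage(src, 0, 0, null);
        return ((DataBufferByte) bgr.getRaster().getDataBuffer()).getData();
    }
}
```

The returned array can then be handed over JNI and memcpy'd into `iplImage->imageData`; going the other way is the same trick in reverse.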

Let me show you how I did it

Continue reading "Combining Java's BufferedImage and OpenCV's IplImage"


iPhone camera frame grabbing and a real-time MeanShift tracker


Just wanted to report on a breakthrough in my iPhone-CV digging. I found a true realtime frame grabber for the iPhone preview frame (15fps of ~400x300 video) and successfully integrated this video feed with a pure C++ implementation of the MeanShift tracking algorithm. The whole setup runs in realtime, under a few constraints of course, and gives nice results.
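The core of MeanShift tracking is simple: given a per-pixel weight map (e.g. a histogram back-projection of the target's colors), repeatedly move the search window to the centroid of the weights inside it until it stops moving. A minimal sketch of that iteration - hypothetical illustration code, not the pure C++ tracker from the post:

```java
public class MeanShift {
    // One run of the mean-shift window update on a weight map.
    // weights[y][x] is the per-pixel likelihood of belonging to the target;
    // (cx, cy) is the window center and `half` its half-size.
    public static int[] track(float[][] weights, int cx, int cy, int half, int maxIter) {
        for (int it = 0; it < maxIter; it++) {
            double sum = 0, sx = 0, sy = 0;
            for (int y = Math.max(0, cy - half); y <= Math.min(weights.length - 1, cy + half); y++) {
                for (int x = Math.max(0, cx - half); x <= Math.min(weights[0].length - 1, cx + half); x++) {
                    sum += weights[y][x];
                    sx += x * weights[y][x];
                    sy += y * weights[y][x];
                }
            }
            if (sum == 0) break; // target not visible in the window
            int nx = (int) Math.round(sx / sum);
            int ny = (int) Math.round(sy / sum);
            if (nx == cx && ny == cy) break; // converged on the mode
            cx = nx; cy = ny;
        }
        return new int[]{cx, cy};
    }
}
```

In the real tracker this runs once per grabbed frame, seeding each frame's search with the previous frame's converged window position - which is what keeps it cheap enough for realtime on the phone.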

Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here's sample code from Apple developer.

So lets dig in...

Continue reading "iPhone camera frame grabbing and a real-time MeanShift tracker"