
Extending the hand tracker with snakes and optimizations [w/ code, OpenCV]

Continuing the work on a hand curve particle filter following Heap & Hogg’s work, adding active contours and optimizations

I wish to report a number of tweaks and additions to the hand silhouette tracker I posted a while back. First is the ability to “snap” to the object using a simple active snake method, another is a more advanced resampling technique (the older tracker resampled after every frame), and finally a number of optimizations to increase the speed (the tracker now runs in real time on a single core).

Guided Resampling

In the last implementation of the hand tracker the particles were resampled on every iteration. Resampling means that after evaluating each hypothesized particle’s fitness against the measurement, an implicit distribution function is formed, and new particles are drawn from that distribution. This is a necessary operation, since it concentrates particles in the areas where the distribution is strong, and thus gives better tracking. But we can choose to let particles linger a bit longer before drawing new ones, which saves us some time on each frame. This is done by calculating the number of “effective particles”: the particles that are believed to contribute to the distribution.

    /**
     * calculate the number of effective particles. Based on http://en.wikipedia.org/wiki/Particle_filter on SIS
     * @return number of effective particles
     **/
    float numEffectiveParticles() {
        //N_eff = 1 / sum(w_i^2)
        float N_eff_denom = 0.0;
        for(int i=0;i<num_particles;i++) {
            N_eff_denom += normalized_weights[i] * normalized_weights[i];
        }
        return 1.0 / N_eff_denom;
    }

What this does is give us a measure of how good our current particles are in describing the implicit distribution function.
Plotting the particles’ weights alongside the number of effective particles shows that only the strong particles are counted as effective:
[Figure: plot of particle weights; the vertical line marks the number of effective particles]
So in our update function we can choose to resample only when the number of effective particles drops below a given threshold (I’m using 25% of the number of particles, i.e. with 40 particles I resample once fewer than 10 are effective).

        N_eff = numEffectiveParticles();
        if(isnan(N_eff)) N_eff = 0.0f;
        if(N_eff < N_thr) {
            cout << "N_eff "<<N_eff<<" < "<<N_thr<<" (N_thr) : resample\n";
            resampleParticles();
        }

After resampling, of course, all particles get an equal weight, ready for the next round of evaluation.
This may be a small thing, but it saves some cycles…
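For reference, here is a minimal sketch of what such a resampling step might look like, using low-variance (systematic) resampling; the actual resampleParticles() in the repo may differ, and the particles, normalized_weights and num_particles members are assumed from the tracker class:

    /**
     * A sketch of low-variance (systematic) resampling; the repo's actual
     * resampleParticles() may differ.
     **/
    void resampleParticles() {
        //Build the cumulative distribution of the normalized weights
        vector<float> cdf(num_particles);
        cdf[0] = normalized_weights[0];
        for(int i=1;i<num_particles;i++)
            cdf[i] = cdf[i-1] + normalized_weights[i];

        //One random offset, then evenly spaced picks through the CDF
        float step = 1.0f / num_particles;
        float r = step * (rand() / (float)RAND_MAX);
        vector<Particle> new_particles(num_particles);
        int j = 0;
        for(int i=0;i<num_particles;i++) {
            while(j < num_particles-1 && cdf[j] < r) j++;
            new_particles[i] = particles[j];
            r += step;
        }
        particles = new_particles;

        //After resampling, all particles carry equal weight again
        for(int i=0;i<num_particles;i++)
            normalized_weights[i] = 1.0f / num_particles;
    }

Systematic resampling needs only a single random draw and preserves the spread of strong particles, which keeps it cheap and low-noise.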

Shape Model Guided Active Contours (Snakes)

The original work by Heap & Hogg (paper) indeed includes an active contours step within the evaluation of each particle: they deform the template curve (from the database) slightly so that it snaps to the real shape in the image. Active contours (“snakes”) is a well-known and fairly simple curve-tracking method, based on marching the curve’s vertices toward edges in the image, with each vertex advancing in the direction perpendicular to the curve itself.
However, in our active contour step we use the Hierarchical Point Distribution Model (our shape model) to guide the movement of the snake.
This is best illustrated by an image:
[Figure: the snake step, showing the original, desired and projected curves]
The green curve is where the snake wants to go: each point moves perpendicular to the curve until it hits an edge.
The blue curve is the original curve, and the red curve is the result of projecting the green curve onto the HPDM:

//Since we work in curve-space, we must undo the curve's rotation, translation
//and scale, so it conforms with our shape model:
Mat_<float> Tpca = Mat_<float>::eye(3,3);
Mat R = getRotationMatrix2D(Point2f(0,0), -particles[i].orientation, 1.0/particles[i].scale);
R.convertTo(Tpca.rowRange(0,2), CV_32F); //getRotationMatrix2D returns CV_64F
Tpca = Tpca * (Mat_<float>(3,3) << 1,0,-particles[i].centroid.x, 0,1,-particles[i].centroid.y, 0,0,1);
transform(curve,curve,Tpca.rowRange(0,2));
//Next we project the re-aligned curve onto the shape-space (the PCA of the PDM)
Mat in_shape_space = hpdm.projectToShapeSpace(curve);
//And store the shape-space deformation on the particle
in_shape_space.copyTo(particles[i].deformation);
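
For illustration, the edge-seeking step itself (each point marching along the curve’s normal until it meets an edge) could look roughly like the sketch below. This is not the repo’s exact implementation; the search range and the binary edgemap lookup are assumptions:

//Sketch: march each curve point along its local normal toward the nearest edge.
//'curve' is the particle's curve in image space, 'edgemap' a binary edge image.
vector<Point2f> snakeStep(const vector<Point2f>& curve, const Mat& edgemap, int max_range) {
    vector<Point2f> moved(curve.size());
    for(int i=0;i<(int)curve.size();i++) {
        //Estimate the local tangent from the neighbors, rotate 90 degrees for the normal
        Point2f prev = curve[max(i-1,0)], next = curve[min(i+1,(int)curve.size()-1)];
        Point2f normal(-(next-prev).y, (next-prev).x);
        float len = sqrtf(normal.dot(normal));
        if(len > 1e-6f) normal *= 1.0f/len;

        //March outward along +/- the normal until an edge pixel is found (or give up)
        moved[i] = curve[i];
        bool found = false;
        for(int d=1; d<=max_range && !found; d++) {
            for(int s=-1; s<=1 && !found; s+=2) {
                Point p(cvRound(curve[i].x + s*d*normal.x), cvRound(curve[i].y + s*d*normal.y));
                if(p.inside(Rect(0,0,edgemap.cols,edgemap.rows)) && edgemap.at<uchar>(p) > 0) {
                    moved[i] = Point2f((float)p.x,(float)p.y);
                    found = true;
                }
            }
        }
    }
    return moved;
}

The moved (green) curve then goes through the alignment and HPDM projection shown above, which produces the regularized (red) curve.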

Starting from an initial guess

It’s very important to start the tracker from a good initial guess, or else it has a poor chance of working. One thing we can do, as part of the interaction design, is ask users to place their hand in a known position and start tracking from there.
[Animation: initialization by template matching]
Watch the running number at the top-left for the score; once it hits 10.0 it triggers the tracker and hands it the template as the initial guess.
We can easily measure whether the user’s hand is in the right place and set a threshold for when tracking should start:

//setup a "particle" that is simply an open-hand type
HPDMTracker::Particle p;
p.orientation = 60;
p.centroid.x = oni.getWidth()/2;
p.centroid.y = oni.getHeight()/2-20;
p.scale = 0.7;
p.patch_num = 5;
tracker.getHPDM().getPatchMean(p.patch_num).copyTo(p.deformation);
vector<Point2f> openhandcurve = tracker.getShapeForParticle(p);
while(true) {
    if(!tracking) {
        drawOpenCurve(depthImage, openhandcurve, Scalar(255), 2);
        grayEdgeDetect(depthImage, edgemap);
        vector<Point2f> tmpcurve;
        //Measure the fitness of the image to the template
        float score = tracker.calculateFitness(p, edgemap, Mat(), tmpcurve);
        putText(depthRGB, SSTR(score), Point(10,10), CV_FONT_HERSHEY_PLAIN, 1.0, Scalar(255));
        if(score > 10.0f) {
            //init tracker and start tracking
            tracker.setInitialGuess(p.centroid, p.scale, p.orientation, p.patch_num);
            tracking = true;
        }
    } else {
        tracker.update(depthImage, Mat());
        drawOpenCurve(depthRGB, tracker.getMeanShape(), Scalar(0,0,255), 2);
    }
    imshow("depth",depthRGB);
    if(waitKey(3)==27) break;
}
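
calculateFitness() is implemented in the repo; purely as an illustration of one way to score a curve against an edge map (not necessarily the repo’s measure), a distance-transform-based fitness might look like this:

//Sketch: score a curve against an edge map using a distance transform.
//Curves whose points lie near edge pixels score higher (arbitrary units).
float curveEdgeFitness(const vector<Point2f>& curve, const Mat& edgemap) {
    if(curve.empty()) return 0.0f;
    //distanceTransform expects zeros at the features, so invert the edge map
    Mat dist;
    distanceTransform(255 - edgemap, dist, CV_DIST_L2, 3);
    float total = 0.0f;
    for(int i=0;i<(int)curve.size();i++) {
        Point p(cvRound(curve[i].x), cvRound(curve[i].y));
        if(p.inside(Rect(0,0,dist.cols,dist.rows)))
            total += dist.at<float>(p);   //distance to the nearest edge pixel
        else
            total += 100.0f;              //penalize points outside the image
    }
    //Higher when the curve hugs the edges; in practice the distance transform
    //would be computed once per frame, not once per particle
    return (float)curve.size() / (total + 1.0f);
}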

Example

[Animation: the tracker in action]

Code

This time the code is up at GitHub: https://github.com/royshil/HHParticleFilter
Enjoy
Roy

7 replies on “Extending the hand tracker with snakes and optimizations [w/ code, OpenCV]”

Hello,
Thanks for this post! Do you know if it is possible to implement this code using Python + OpenCV? More generally, is there a way to build a model of any shape with Python + OpenCV?

@Ashwin
These files are actually not needed for this project; you can remove them and any reference to them. I will amend the repo shortly.

@jean-patrick, basically yes, there should not be a problem implementing this in Python (do you want to do it and share the result with us?).
The tracker can be used to track any shape if it has the right shape model; it simply needs to be trained with the right shapes. See the former post for details on training.

Hi Roy,
I got the code to successfully compile and run, but I do not understand how to train the tracker. It says to click on the center and drag to scale, but nothing happens after that. What is the correct procedure for training?

Hello …
Thank you for this work, it is really very good work.
I have a similar project for my master’s degree: I want to create a data presenter using AR, where events are generated by the hand.
The problem is that I’m still a beginner in this domain.
Right now I’m using OpenCV and VC++. Is it possible to use your code in VC++?
Thank you.

Hello Roy,
Thanks for this post; I got the code to successfully compile and run.
But I cannot find the grayEdgeDetect function. Where is this function?
