Qt & OpenCV combined for face detecting QWidgets

As my search for the best platform to roll out my new face detection concept continues, I decided to give the ol' Qt framework a go.

I like Qt. It's cross-platform, has a clear and nice API, is straightforward, and reminds me somewhat of Apple's Cocoa.

My intention is to get some serious face detection going on mobile devices. So that means either the iPhone, which so far did a crummy job performance-wise, or some other mobile device, preferably Linux-based.
This led me to the decision to go with Qt. I believe you can get it to work on any Linux-ish platform (LiMo, Moblin, Android), and since Nokia bought Trolltech - it's gonna work on Nokia phones soon, awesome!

Let's get to the details, shall we?

First things first: face detection.

I ripped OpenCV's facedetect.c sample and extracted only the detect_and_draw() function. Originally the function detects the faces and draws a circle over them, but I needed only the face detection and the resulting bounding rectangle. So in the end I was left with this:


CvRect detect_and_draw( IplImage* img, CvMemStorage* storage, CvHaarClassifierCascade* cascade )
{
    const double scale = 1.3; // same downscale factor facedetect.c uses

    IplImage* gray = cvCreateImage( cvSize(img->width, img->height), 8, 1 );
    IplImage* small_img = cvCreateImage( cvSize( cvRound(img->width/scale),
                                                 cvRound(img->height/scale)), 8, 1 );

    cvCvtColor( img, gray, CV_RGB2GRAY );
    cvResize( gray, small_img, CV_INTER_LINEAR );
    cvEqualizeHist( small_img, small_img );
    cvClearMemStorage( storage );

    CvRect* r = NULL;

    if( cascade )
    {
        double t = (double)cvGetTickCount();
        CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage,
                                            1.1, 2, 0
                                            |CV_HAAR_FIND_BIGGEST_OBJECT
                                            //|CV_HAAR_DO_ROUGH_SEARCH
                                            //|CV_HAAR_DO_CANNY_PRUNING
                                            //|CV_HAAR_SCALE_IMAGE
                                            ,
                                            cvSize(30, 30) );
        t = (double)cvGetTickCount() - t;

        printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) );

        r = (CvRect*)cvGetSeqElem( faces, 0 );
    }

    cvReleaseImage( &gray );
    cvReleaseImage( &small_img );

    if( r ) {
        // detection ran on the downscaled image, so scale the rect
        // back up to the original image's coordinates
        return cvRect( cvRound(r->x*scale), cvRound(r->y*scale),
                       cvRound(r->width*scale), cvRound(r->height*scale) );
    } else {
        return cvRect(-1, -1, 0, 0);
    }
}

This can go anywhere in the code base, as it's totally independent (as long as you load a trained cascade and allocate a CvMemStorage). Note that I am assuming only one face in the input image, and also that it will be the largest detected object. This brings my benchmark to about 25ms per frame, compared with the original general detection approach of facedetect.c, benchmarked at about 160ms per frame.

OK, done with pure OpenCV, on to Qt.

I subclassed QWidget; the subclass's sole purpose is to show the input video with the detected face. For starters, I needed a QImage and an IplImage as members - they can even share the same buffer (how awesome is that?). I also need a CvCapture, a CvMemStorage and a CvHaarClassifierCascade:


class FaceRecognizer : public QWidget
{
    Q_OBJECT

public:
    FaceRecognizer(QWidget *parent = 0);
    ~FaceRecognizer();

private:
    Ui::FaceRecognizerClass ui;

    QImage m_i;

    QRect faceLoc;

    CvMemStorage* storage;
    CvHaarClassifierCascade* cascade;
    CvCapture* capture;
    IplImage* m_opencvimg;

    QTimer* m_timer;

    void paintEvent(QPaintEvent* e);

public slots:
    void queryFrame();
};

You can see that I'm gonna use a QTimer to query frames from the CvCapture, and I also override paintEvent to paint the frame onto the canvas. In fact my QWidget contains a QFrame that the image will be painted over. The UI was generated in Qt Designer.

First, some initialization in the constructor:


FaceRecognizer::FaceRecognizer(QWidget *parent)
    : QWidget(parent)
{
    ui.setupUi(this);

    capture = cvCaptureFromAVI( "/home/user/Desktop/video.avi" );

    //grab one frame to get width and height
    IplImage* frame = cvQueryFrame( capture );

    m_i = QImage(QSize(frame->width, frame->height), QImage::Format_RGB888);
    ui.frame->setMinimumSize(m_i.width(), m_i.height());
    ui.frame->setMaximumSize(ui.frame->minimumSize());

    //create only the header, as the data buffer is shared, and was allocated by QImage
    m_opencvimg = cvCreateImageHeader(cvSize(m_i.width(), m_i.height()), 8, 3);
    m_opencvimg->imageData = (char*)m_i.bits(); // share buffers

    if( frame->origin == IPL_ORIGIN_TL )
        cvCopy( frame, m_opencvimg, 0 );
    else
        cvFlip( frame, m_opencvimg, 0 );

    //images from cvQueryFrame come in BGR order and not what Qt expects - RGB,
    //and since the buffers are shared, the format should be consistent
    cvCvtColor(m_opencvimg, m_opencvimg, CV_BGR2RGB);

    //we need a mem storage and a cascade
    storage = cvCreateMemStorage(0);
    cascade = (CvHaarClassifierCascade*)cvLoad( CASCADE_NAME, 0, 0, 0 );

    //set timer for 50ms intervals
    m_timer = new QTimer(this);
    connect(m_timer, SIGNAL(timeout()), this, SLOT(queryFrame()));
    m_timer->start(50);
}

And now, querying the frame: query the CvCapture, convert BGR to RGB, detect faces and update the faceLoc QRect.


void FaceRecognizer::queryFrame() {
    IplImage* frame = cvQueryFrame( capture );
    if( !frame ) {      // end of the video - stop querying
        m_timer->stop();
        return;
    }

    if( frame->origin == IPL_ORIGIN_TL )
        cvCopy( frame, m_opencvimg, 0 );
    else
        cvFlip( frame, m_opencvimg, 0 );
    cvCvtColor(m_opencvimg, m_opencvimg, CV_BGR2RGB);

    CvRect r = detect_and_draw(m_opencvimg, storage, cascade);
    faceLoc = QRect(QPoint(r.x, r.y), QSize(r.width, r.height));

    this->update();
}

Finally - painting, which is easy:

void FaceRecognizer::paintEvent(QPaintEvent* e) {
    QPainter painter(this);

    painter.drawImage(QPoint(ui.frame->x(), ui.frame->y()), m_i);

    // (-1,-1) is the "no face" sentinel from detect_and_draw()
    if(faceLoc.x() >= 0 && faceLoc.y() >= 0) {
        painter.setBrush(Qt::NoBrush);
        painter.setPen(QColor(255,0,0));
        painter.drawRect(QRect(faceLoc.x() + ui.frame->x(),
                               faceLoc.y() + ui.frame->y(),
                               faceLoc.width(),
                               faceLoc.height()));
    }
}
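One piece I didn't show is the destructor. Since the constructor allocates several OpenCV resources, a plausible cleanup (my sketch, not shown in the original) would release everything in kind. Note that m_opencvimg was created with cvCreateImageHeader() over QImage's buffer, so only the header is released - the pixel data belongs to QImage:

FaceRecognizer::~FaceRecognizer()
{
    m_timer->stop();                      // stop querying frames first

    cvReleaseImageHeader( &m_opencvimg ); // header only - QImage owns the pixels
    cvReleaseCapture( &capture );
    cvReleaseMemStorage( &storage );
    // m_timer is parented to this widget, so Qt deletes it for us
}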

Looks like it's all done... Here's a video:
