
Near realtime face detection on the iPhone w/ OpenCV port [w/code,video]

iphone + opencv = win
Hi! OpenCV is by far my favorite CV/image-processing library. When I found an OpenCV port to the iPhone, and saw that someone had even tried to get it to do face detection, I just had to try it for myself.
In this post I’ll run through the steps I took to get OpenCV running on the iPhone, and then how to get OpenCV’s face detection to play nice with iPhoneOS’s image buffers and video feed (not yet OS 3.0!). Then I’ll talk a little about optimization.
Update: Apple officially supports camera video pixel buffers in iOS 4.x using AVFoundation, here’s sample code from Apple developer.
Update: I do not have the xcodeproj file for this project, please don’t ask for it. Please see here for compiling OpenCV for the iPhone SDK 4.3.
Let’s begin

Cross compiling OpenCV on iPhoneOS

The good people @ computer-vision-software.com have posted a guideline on how to compile OpenCV for the iPhone and link it as static libraries, and I followed it. I did have to recompile with one change – OpenCV needed zlib linkage, and OpenCV’s configure script wasn’t able to set up the makefiles to compile zlib as well. So I downloaded zlib from the net and just added all its files to the Xcode project to compile and link. If you’re trying to recreate this, remember to configure/build zlib before adding the files to Xcode, so that you get a zconf.h file. After that, OpenCV linked perfectly.
All in all it was really not a big deal to compile OpenCV for iPhoneOS. I had imagined it would be much harder…
OK moving on to

Plain vanilla face detection

So the first step is just to get OpenCV to detect a single face in a single image. But let’s make it harder and use UIImage.
First, I took OpenCV’s facedetect.c example and added it to the project as is. Then I added two peripheral functions to set up and tear down the structs and the statically allocated memory (things normally done in the main function).

static CvHaarClassifierCascade* cascade = 0;
static CvMemStorage* storage = 0;
static IplImage *gray = 0, *small_img = 0;

void init_detection(char* cascade_location) {
	cascade = (CvHaarClassifierCascade*)cvLoad( cascade_location, 0, 0, 0 );
	storage = cvCreateMemStorage(0);
}

void release_detection() {
	if (storage)
		cvReleaseMemStorage(&storage);
	if (cascade)
		cvReleaseHaarClassifierCascade(&cascade);
	if (gray)
		cvReleaseImage(&gray);
	if (small_img)
		cvReleaseImage(&small_img);
}

The detect_and_draw function remains exactly the same at this point. I just took the haarcascade XML files and added them to the project’s resources.
Now I initialize the detection structs from the UIView or UIViewController that will do the detection. The main NSBundle will find the path to the XML file:

NSString* myImage = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt" ofType:@"xml"];
char* chars = (char*)malloc(512);
[myImage getCString:chars maxLength:512 encoding:NSUTF8StringEncoding];
init_detection(chars);
free(chars); // cvLoad has read the path by now, so release the buffer

Awesome, now let’s face-detect already! All we need is to attach a picture of someone to the project’s resources, load it, convert it to an IplImage* and hand it over to detect_and_draw – simple.
I used a couple of helper functions from the informative post I mentioned earlier:

- (void)manipulateOpenCVImagePixelDataWithCGImage:(CGImageRef)inImage openCVimage:(IplImage *)openCVimage;
- (CGContextRef)createARGBBitmapContext:(CGImageRef)inImage;
- (IplImage *)getCVImageFromCGImage:(CGImageRef)cgImage;
-(CGImageRef)getCGImageFromCVImage:(IplImage*)cvImage;
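Under the hood, helpers like these mostly shuffle pixel bytes between the CGBitmapContext’s ARGB layout and the BGRA order the 4-channel IplImage is set up with. The per-row channel swap looks roughly like this (a plain-C sketch of the idea, not the post’s actual helper code):

```c
/* Repack one row of ARGB pixels (as a CGBitmapContext stores them)
   into BGRA order (as the 4-channel IplImage expects here).
   npix is the number of pixels in the row. */
void argb_row_to_bgra(const unsigned char *src, unsigned char *dst, int npix) {
    int i;
    for (i = 0; i < npix; i++) {
        dst[4*i + 0] = src[4*i + 3];  /* B */
        dst[4*i + 1] = src[4*i + 2];  /* G */
        dst[4*i + 2] = src[4*i + 1];  /* R */
        dst[4*i + 3] = src[4*i + 0];  /* A */
    }
}
```

The real helpers also have to respect the context’s bytes-per-row stride, which can be wider than width×4.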

Now it’s only putting it together:

IplImage* im = [self getCVImageFromCGImage:[UIImage imageNamed:@"a_picture.jpg"].CGImage];
detect_and_draw(im);
UIImage* result = [UIImage imageWithCGImage:[self getCGImageFromCVImage:im]];
UIImageView* imv = [[UIImageView alloc] initWithImage:result];
[self addSubview:imv];
[imv release];

Just remember those externs, if you don’t use a header file:

extern "C" void detect_and_draw( IplImage* img, CvRect* found_face );
extern "C" void init_detection(char* cascade_location);
extern "C" void release_detection();

Sweet. But detecting a face in a single photo is not so difficult – we want video and real-time face detection! So let’s do that.

Tying it up with video feed from the iPhone camera (no OS 3.0 yet)

This step was so amazingly simple it was borderline funny. I used the well-known camera frame-grabbing code from Norio Nomura. Of course, to align it with OS 3.0 you must plug in the API Apple provides rather than this wily hack, but it’s really a plug-and-play situation. I use it in many of my projects that use the iPhone camera, until video on OS 3.0 is finalized.
So all I needed was to set everything up, make a timer that fires every so-and-so milliseconds, and send the frame to detection:

- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
    if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
        // Initialization code
        ctad = [[CameraTestAppDelegate alloc] init];
        [ctad doInit];
        NSString* myImage = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt" ofType:@"xml"];
        char* chars = (char*)malloc(512);
        [myImage getCString:chars maxLength:512 encoding:NSUTF8StringEncoding];
        init_detection(chars);
        [self.view addSubview:[ctad getPreviewView]];
        [self.view sendSubviewToBack:[ctad getPreviewView]];
        repeatingTimer = [NSTimer scheduledTimerWithTimeInterval:0.0909 target:self selector:@selector(doDetection:) userInfo:nil repeats:YES];
    }
    return self;
}
-(void)doDetection:(NSTimer*) timer {
	if([ctad getPixelData]) {
		if(!im) {
			im = cvCreateImageHeader(cvSize([ctad getVideoSize].width,[ctad getVideoSize].height), 8, 4);
		}
		cvSetData(im, [ctad getPixelData],[ctad getBytesPerRow]);
		CvRect r;
		detect_and_draw(im,&r);
		if(r.width > 0 && r.height > 0) {
			NSLog(@"Face: %d,%d,%d,%d", r.x, r.y, r.width, r.height);
		}
	}
}

Note that for optimization’s sake I only create the IplImage header once (the if branch is taken only on the first frame); on every frame after that I only set the IplImage data to the buffer I got from the camera. This way the IplImage shares the camera’s buffer, so there is a little memory optimization there too.
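Stripped of the OpenCV specifics, the pattern is: allocate the header once, then every frame just repoint it at the camera’s buffer. A minimal standalone sketch (the `Frame` struct is a hypothetical stand-in for the IplImage header):

```c
#include <stddef.h>

/* Hypothetical stand-in for the IplImage header: geometry plus a
   borrowed pointer into pixel data owned by the camera. */
typedef struct {
    int width, height, bytes_per_row;
    unsigned char *data;   /* shared with the camera buffer, not owned */
} Frame;

static Frame im;
static int im_created = 0;

/* Per-frame: create the header once, then just repoint it at the new
   camera buffer -- no allocation and no memcpy, analogous to
   cvCreateImageHeader once plus cvSetData every frame. */
void set_frame(unsigned char *camera_buf, int w, int h, int stride) {
    if (!im_created) {          /* taken only on the first frame */
        im.width = w;
        im.height = h;
        im_created = 1;
    }
    im.bytes_per_row = stride;
    im.data = camera_buf;       /* share the buffer, don't copy it */
}
```

The cost per frame is a couple of pointer writes instead of a full-frame copy.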
From that point on you can take it anywhere you like. Add stuff to faces, mark the face in the image, etc.
But… there’s the issue of performance. This naive method gives very bad timings – in the area of 5-15 seconds (!!) for a single frame, which is horrendous. And I promised near real-time performance. So, without further ado,

Optimizing the hell out of the detection algorithm

Well, the guys at computer-vision-software.com have done some work on optimizing OpenCV’s Haar-based detection, but never released code. Their method was based on the fact that the iPhone’s CPU handles integers far better than floating-point, so they set out to change the algorithm to use integers. I did that too, and found it only shaves a few milliseconds off the total time. The far more influential factors are the window size of the feature scan, the scaling factor of the window size, and the derived number of passes.
Let me explain a little how the detection works in OpenCV. First you set the minimal size of the scan window. Then you specify a scale factor. OpenCV uses this scale factor to do multiple passes over the image, scanning for feature hits: it takes the window size, say 30×30, and the factor, say 1.1, and keeps multiplying the window size by the factor until it reaches the size of the image. So for a 256×256 image you get: a 30×30 scan, then 33×33, 36×36, 39×39, 43×43… up to 244×244 – a total of 23 passes, for one frame! This is way too much. It is done to get better and finer results, which may be fine for resource-abundant systems, but that is not our case.
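To see where the time goes, the pass count can be computed directly from the window size, the scale factor and the image size – a quick standalone sketch:

```c
/* Count how many scan passes a cvHaarDetectObjects-style detector makes:
   the window grows by `factor` each pass until it no longer fits in the
   image (a square image of side img_size is assumed, for simplicity). */
int count_passes(int min_window, double factor, int img_size) {
    int passes = 0;
    double w = min_window;
    while ((int)w <= img_size) {
        passes++;
        w *= factor;
    }
    return passes;
}
/* count_passes(30, 1.1, 256) gives the 23 passes from the text;
   raising the factor to 1.3 drops it to 9 passes. */
```

This is why the scale factor is such a strong lever: the pass count falls roughly logarithmically as the factor grows.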
So the first thing I did was slash the number of scans. There is, as expected, a very strong impact on the quality of the results, but the times get close to acceptable. After all my optimizations I got the timing down to as little as ~120ms.
I optimized a few things:

  • The size of the input image, originally ~300×400, was scaled down by a factor of 1.5.
  • The scale factor for cvHaarDetectObjects: I played with values ranging from 1.2 to 1.5, with pleasing timings.
  • The ROI (region of interest) of the IplImage to scan is set every frame from the previous frame’s detection – the location of the face, plus some padding on the sides to allow the face to move frame-to-frame. This shrinks the scanned area from the whole image to just the small portion that contains the known face. Of course, if a face was not found, the ROI is reset.
  • I changed the internals of the cvHaarDetectObjects algorithm to do far fewer float multiplications, turning them into integer multiplications.
  • It dawned upon me just the other day that I could also optimize the size of the search window, instead of keeping it constant (30×30) from frame to frame: if the last frame found a 36×36 face, the next detection should also try for a 36×36 object. I haven’t tried it yet.
  • Memory optimization: don’t allocate buffers every frame, share buffers, etc.

So first, the most influential change is in the detection phase:

void detect_and_draw( IplImage* img, CvRect* found_face )
{
	static CvRect prev;
	if(!gray) {
		gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 );
		small_img = cvCreateImage( cvSize( cvRound (img->width/scale),
							 cvRound (img->height/scale)), 8, 1 );
	}
	if(prev.width > 0 && prev.height > 0) {
		cvSetImageROI(small_img, prev);
		CvRect tPrev = cvRect(prev.x * scale, prev.y * scale, prev.width * scale, prev.height * scale);
		cvSetImageROI(img, tPrev);
		cvSetImageROI(gray, tPrev);
	} else {
		cvResetImageROI(img);
		cvResetImageROI(small_img);
		cvResetImageROI(gray);
	}
    cvCvtColor( img, gray, CV_BGR2GRAY );
    cvResize( gray, small_img, CV_INTER_LINEAR );
    cvEqualizeHist( small_img, small_img );
    cvClearMemStorage( storage );
		CvSeq* faces = mycvHaarDetectObjects( small_img, cascade, storage,
										   1.2, 0, 0
										   |CV_HAAR_FIND_BIGGEST_OBJECT
										   |CV_HAAR_DO_ROUGH_SEARCH
										   //|CV_HAAR_DO_CANNY_PRUNING
										   //|CV_HAAR_SCALE_IMAGE
										   ,
										   cvSize(30, 30) );
	if(faces->total>0) {
		CvRect* r = (CvRect*)cvGetSeqElem( faces, 0 );
		int startX,startY;
		if(prev.width > 0 && prev.height > 0) {
			r->x += prev.x;
			r->y += prev.y;
		}
		startX = MAX(r->x - PAD_FACE,0);
		startY = MAX(r->y - PAD_FACE,0);
		int w = small_img->width - startX - r->width - PAD_FACE_2;
		int h = small_img->height - startY - r->height - PAD_FACE_2;
		int sw = r->x - PAD_FACE, sh = r->y - PAD_FACE;
		prev = cvRect(startX, startY,
					  r->width + PAD_FACE_2 + ((w < 0) ? w : 0) + ((sw < 0) ? sw : 0),
					  r->height + PAD_FACE_2 + ((h < 0) ? h : 0) + ((sh < 0) ? sh : 0));
		printf("found face (%d,%d,%d,%d) setting ROI to (%d,%d,%d,%d)\n",r->x,r->y,r->width,r->height,prev.x,prev.y,prev.width,prev.height);
		found_face->x = (int)((double)r->x * scale);
		found_face->y = (int)((double)r->y * scale);
		found_face->width = (int)((double)r->width * scale);
		found_face->height = (int)((double)r->height * scale);
	} else {
		prev.width = prev.height = found_face->width = found_face->height = 0;
	}
}

As you can see, I keep the previous face in prev and use it to set the ROI of the images for the next frame. Note that small_img is a scaled-down version of the input image, so the detection results must be scaled up to match the real size of the input.
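The ROI bookkeeping in detect_and_draw boils down to: pad the detected rect by PAD_FACE on each side and clamp it to the image. Written as a standalone function it would look something like this (a sketch of the same logic, not the code above verbatim):

```c
/* CvRect-like rectangle, to keep the sketch self-contained. */
typedef struct { int x, y, width, height; } Rect;

/* Expand the detected face rect by `pad` pixels on every side, clamped
   to a w x h image: this becomes the search ROI for the next frame. */
Rect pad_and_clamp(Rect face, int pad, int w, int h) {
    Rect roi;
    int x2 = face.x + face.width + pad;   /* right edge, padded */
    int y2 = face.y + face.height + pad;  /* bottom edge, padded */
    roi.x = face.x - pad;
    roi.y = face.y - pad;
    if (roi.x < 0) roi.x = 0;
    if (roi.y < 0) roi.y = 0;
    if (x2 > w) x2 = w;
    if (y2 > h) y2 = h;
    roi.width = x2 - roi.x;
    roi.height = y2 - roi.y;
    return roi;
}
```

The sw/sh and w/h correction terms in the original code achieve the same clamping, just expressed as additive adjustments.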
Now, I could bore you with the details of how I changed cvHaarDetectObjects to use more integers, but I won’t. It’s all in the code, which is freely available, so you can diff it against OpenCV’s cvhaar.cpp and see the changes. In short, what I did was:

  • Comment out image scaling and Canny pruning.
  • In cvSetImagesForHaarClassifierCascade, which fires many times per frame and governs the scaling/shifting/rotating of the Haar classifiers for better detection, I changed the weights and sizes to integers rather than floats.
  • In cvRunHaarClassifierCascade, which calculates the score for a single Haar feature hit, I changed the result calculation to integers instead of floats.
  • I played around with integer-oriented implementations of the sqrt function, which cvRunHaarClassifierCascade uses (and fires many, many times each frame), but that actually caused a slowdown on the device. It turns out the standard library (math.h) implementation is the best.
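The float-to-integer rewrite follows the standard fixed-point recipe: pre-scale the classifier weights by a power of two, keep the inner loop all-integer, and shift back down at the end. A hedged sketch of the idea (illustrative only, not the actual diff against cvhaar.cpp):

```c
#include <stdint.h>

#define FP_SHIFT 16   /* 16.16 fixed point */

/* Convert a float classifier weight to 16.16 fixed point, once, up front. */
int32_t to_fixed(double w) {
    return (int32_t)(w * (1 << FP_SHIFT));
}

/* All-integer weighted sum of rectangle sums: the inner-loop shape of a
   Haar feature evaluation, with the float multiplies replaced by integer
   multiplies and a final shift back down. */
int32_t weighted_sum(const int32_t *rect_sum, const int32_t *weight_fixed, int n) {
    int64_t acc = 0;   /* 64-bit accumulator avoids product overflow */
    int i;
    for (i = 0; i < n; i++)
        acc += (int64_t)rect_sum[i] * weight_fixed[i];
    return (int32_t)(acc >> FP_SHIFT);
}
```

The conversion cost is paid once per cascade load; the per-pixel hot path then touches only integer units, which is exactly what the ARM core in the iPhone prefers.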

Well guys, that’s pretty much all my discoveries in the field. Please keep working on it – I’m anxious to see true real-time face detection on the iPhone.
Time for a video proof? You bet.

Here’s proof that all I wrote here is not total BS

Code

Code is as usual available in Google Code SVN repo:
http://code.google.com/p/morethantechnical/source/browse/#svn/trunk/FaceDetector-iPhone
OK, ‘Till next time, enjoy
Roy.

61 replies on “Near realtime face detection on the iPhone w/ OpenCV port [w/code,video]”

Hi,
I’ve tried compiling MyCvDetectHaarObjects.cpp, and it complained about missing functions (defined in cxmisc.h). I included that file, and then I started getting lots of errors on jumps crossing the initialization of values.
Would you mind sending me the complete project folder (l e o n p a l m a t g m a i l)?
Much appreciated!
Leon

Hi Leon
In the SVN repo you get the whole project folder, except for the xcodeproject file. The file contains a lot of information about the computer, code signing, my company, etc – stuff that I can’t share.
To get it to work, just add all the files in the folder to a new project, add the libcv*.a files under “Other Linker Flags” in the target’s build settings, and also include the OpenCV .h header files.
These steps appear here in good detail.
Good luck.
Roy

Hi Roy,
Nice results! Very impressive video 🙂
Correct me if I’m wrong, but your approach will not give any benefit for a single image file, will it? I mean, there will be no strong performance improvement if the same detection parameters (like window size and scale step) are used with the original OpenCV implementation and your version.
I was trying different approaches, including yours, and I haven’t seen a big improvement – just a few milliseconds.
Thanks you.
Sergey

Hi Sergey,
Thanks for the comment
You are correct, there is no advantage for single image over video feed.
But even for single image detection I was able to drop many cycles and significantly reduce the detection time. The major factor, however, remains the frame-by-frame reduced search window size, so this method is greatly optimized for video feeds.
Roy.

Yeah, that’s what I thought… 🙁
Did you try it on 3GS? I’d like to see how fast face detection is there…
Also, do you know if it is possible to do the same thing (i.e. face detection in a video feed) officially on 3.x.x? I know there is a way to overlay some data on top of the camera view, but as you mentioned in this blog there is no way to get access to the frames. Did you do any extra investigation in this area?
Regards,
Sergey

Hi Roy:
I followed your helpful instructions step by step. I linked all the *.a files and put all the header files in the Classes folder. Unfortunately, in the end I still can’t compile MyCvDetectHaarObjects.cpp.
Here are the error messages from Xcode:
—-
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp: In function ‘MyCvHidHaarClassifierCascade* myicvCreateHidHaarClassifierCascade(CvHaarClassifierCascade*)’:
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:158: error: ‘sprintf’ was not declared in this scope
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:190: error: ‘sprintf’ was not declared in this scope
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:270: error: ‘cvAlignPtr’ was not declared in this scope
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp: At global scope:
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:429: error: expected constructor, destructor, or type conversion before ‘int’
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:581: error: expected constructor, destructor, or type conversion before ‘void’
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:826: error: ‘CV_IMPL’ does not name a type
/Users/imac/Desktop/Untitled/Classes/mycvHaarDetectObjects.cpp:5:1: warning: this is the location of the previous definition
Build failed (6 errors, 3 warnings)

Could you plz help me to figure it out?
Thanks a lot!

Hi Bill
I used PAD_FACE = 40, and PAD_FACE_2 is just a multiplication by 2 (80).
I added a missing file to the SVN repo of this project: facedetect.c. It contains the detect_and_draw function, all the constants, etc.
Roy.

It seems to be an #include problem.
Make sure the compiler can find cv.h (which is #included at the beginning), and perhaps try #include <cv.h> instead of the #include "cv.h" form that is used for local inclusion.
Roy.

Roy, thank you for this great post!
I haven’t gotten a chance to try this out on a device, but you did say your optimizations reduced timings to the range of hundreds of milliseconds – for video feeds at least.
So my question is, if I apply the same optimizations to a single image (so no way to use the ROI from a previous detection), what kind of timing could I expect on a real device (like 3Gs)?
Thanks again!
Vincent

Hi Vincent
You can expect something like 200-500ms for say a 480×640 image.
The still images captured by the camera are 1200×1600, and the algorithm will run for 1-3 seconds to find a face.
Keep in mind that increasing the scale factor will give faster results, but may not find a face at all…
Roy.

Hi Roy,
So, 480×640 in under a second… wow, that’s pretty exciting!
I’m expecting my 3G s to be delivered in a few days, can’t wait to try it out on the real thing 🙂
BTW, if anyone is interested, I ran into an article by Yoshimasa Niwa, in which he provided an overview on compiling and using OpenCV with iPhone. And even better, he provided a complete XCode project, along with iPhone OS-compiled OpenCV libraries in his github repository. Check it out here:
http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
Vincent

Hi Roy,
Looks like a very useful project, however I’m still unable to compile MyCvDetectHaarObjects.cpp.
1) I have successfully compiled projects including the static libraries using the methods detailed by Yoshimasa Niwa with no errors.
2) I have included the following line for library linking (in simulator, I also have the relevant device line for that build setting):
$(SRCROOT)/opencv_simulator/lib/libml.a $(SRCROOT)/opencv_simulator/lib/libhighgui.a $(SRCROOT)/opencv_simulator/lib/libcvaux.a -lstdc++ -lz $(SRCROOT)/opencv_simulator/lib/libcv.a $(SRCROOT)/opencv_simulator/lib/libcxcore.a
3) and for headers (again for the simulator build setting):
$(SRCROOT)/opencv_simulator/include
4) And I have included all the .h files in the project.
The errors are all based on the macros used in the file:
/Users/mick/Desktop/otherApps/FaceDetector-fast/FaceDetector1/Classes/mycvHaarDetectObjects.cpp:570: error: jump to label ‘exit’
/Users/mick/Desktop/otherApps/FaceDetector-fast/FaceDetector1/Classes/mycvHaarDetectObjects.cpp:459: error: from here
/Users/mick/Desktop/otherApps/FaceDetector-fast/FaceDetector1/Classes/mycvHaarDetectObjects.cpp:464: error: crosses initialization of ‘int pq3’
Is it possible there’s a build flag (VERY_ROUGH_SEARCH) somewhere that needs to be defined?
Cheers
Mick

Hi Roy. After following your instructions, I’m still unable to run your code successfully. It compiles with no errors, but when running I keep getting SIGABRTs…

Any news about the OS 3.0 version? I can’t get the iphone frame capturing to work. Thanks

Hi, has anyone managed to compile this successfully? I get the exact same problem as Mick: lots of strange errors in the MyCvDetectHaarObjects.cpp file. Thanks in advance

Cool example – I wish I could get it to work.
I’m having the same problem as Leon, Mik and Dan – MyCvDetectHaarObjects.cpp throws 57 errors of this type:
Jump to label ‘exit’
From here
Crosses initialization of ‘int pq3’
Crosses initialization of ‘int pq2’
Crosses initialization of ‘int pq1’
Crosses initialization of ‘int pq0’
I think it may be because we’re using the 3.x SDK. Apple no longer offers the 2.x SDK. Maybe it’s using a stricter compiler.

Getting closer to running this thing on iPhone 3.x…
I got MyCvDetectHaarObjects.cpp to compile by adding curly brackets around lines 672-820 and lines 1079-1509.
So now I’m able to build and run with no errors, although it immediately crashes (very hard). I’m assuming this is because it uses the 2.x only methodology to capture the frames, and I’m about to try to swap this code out with UIGetScreenImage().

Just wanted to remind everyone that there is a probable way of getting the frames right from CoreSurface in OS 3.1.x, as suggested by JD here.
As for iPhoneOS 4, there seems to be a much simpler way via AV Foundation framework, but I have no concrete (code) evidence yet.

Yes, you’re correct Roy. I ditched my attempt to port this code to 3.x and jumped straight to 4.0. It was much simpler and works great! There’s an in-depth post about it on the Apple Dev Forums. Now to hook it into OpenCV.
Thanks for all the informative work on this blog, and congrats on getting accepted to MIT!

Thanks Justin!
Just peeked at your website and saw you’re an MIT grad as well..
If you have a concrete lead about grabbing the pixels data off the camera – please share!

Hi, a very good optimization approach. But it seems this approach cannot detect multiple faces:
The ROI (region of interest) in the IplImage to scan was set every frame to have the previous frame’s detection, the location of the face, plus some buffer on the sides to allow movement of the face frame-to-frame. This decreases the scanned area from the whole image to just a small portion that contains the known face. Of course if a face was not found the ROI is reset.
If another face comes into the frame, the detector will only scan the ROI area from the last frame.

Excellent!! But your SVN doesn’t have the “.xcodeproj” file, so I have no way to compile your code :s

Got this working on iOS4. Pretty sweet indeed. Although I did have to change the ROI since that optimization made it too aggressive and failed to detect extra faces until the ROI was reset.
Thanks again

Hi Ragi
It would be great if you can share your code with us!
Specifically the part of obtaining the pixel data of the video stream, and converting it to OpenCV structures
Thanks!
Roy

Excellent post. I’m about to dive into opencv and I think this will save me weeks! Yes, Ragi, please share!

Hi, Roy
I have a small problem which I hope you’ll be able to answer.
Whenever I move/copy the directory in which the project is located, the moved/copied project does not compile.
I have been trying to figure out a sensible reason as to why this would happen for awhile, and can’t find one.
My question is, will I be able to transfer the app to a physical device and have it work properly? And if not, how should I go about fixing this issue?
Thanks in advance.
PS.
You really helped me and my partner in developing our app. This is the only useful thread we found on the net concerning openCV on the iPhone. Thanks a lot =)

Hey,
Nice post – I have a couple of problems, though – I’m getting a warning: ‘cvConvertImage undeclared’, and an error: ‘CV_CVTIMG_SWAP_RB’ – once I insert ‘getCGImageFromCVImage’ into my code. Do those ring any bells? Should I be importing something as well as having OpenCV_iPhone_Prefix.pch all set up?

Hi,
interesting post!
I have some problems to compile the project for iOS4 maybe you or someone else can help me out.
When I try to compile the project there are errors inside the mycvHaarDetectObjects.cpp class.
I get:
error: ‘__BEGIN__’ was not declared in this scope
and
error: ‘__END__’ was not declared in this scope
and also
error: label ‘exit’ used but not defined
for the CV_ERROR(…) methods.
I couldn’t figure it out up to now, but I wish it would work!
Thanks for helping!

I found the problem!! 🙂
In Xcode, edit the target info:
1) On the Build tab, look for the “GCC – Warnings” section
2) look for “TREAT NONCONFORMANT CODE ERRORS AS WARNINGS” and set this flag to TRUE
The issue is the following:
There is some code like this:
if( pt.x < 0 || pt.y < 0 ||
    pt.x + _cascade->real_window_size.width >= cascade->sum.width-2 ||
    pt.y + _cascade->real_window_size.height >= cascade->sum.height-2 )
    EXIT;
p_offset = pt.y * (cascade->sum.step/sizeof(sumtype)) + pt.x;
pq_offset = pt.y * (cascade->sqsum.step/sizeof(sqsumtype)) + pt.x;
EXIT is replaced by: “goto exit”
The standard configuration requires this to be like:
if( pt.x < 0 || pt.y < 0 ||
    pt.x + _cascade->real_window_size.width >= cascade->sum.width-2 ||
    pt.y + _cascade->real_window_size.height >= cascade->sum.height-2 )
{
    EXIT;
}
else
{
    p_offset = pt.y * (cascade->sum.step/sizeof(sumtype)) + pt.x;
    pq_offset = pt.y * (cascade->sqsum.step/sizeof(sqsumtype)) + pt.x;
}
which makes sense if we are picky.
Activating the flag will show the error as a warning and will compile the project.
Cheers!

I have been trying to build this project for nearly a week with no success on IOS 4.2. Is there anyone who could please share a working copy or point me to a more complete version of source code? Is there another good blog or code sample that addresses the same issue? I have other projects where I have successfully implemented openCV, face detection, and the AVCaptureVideoDataOutputSampleBufferDelegate, but it is important that I can figure out how to get near realtime face detection. Thanks.
re: @Nillson et al.
In order to solve your compilation errors, try this:
replace every instance
of __BEGIN__ with __CV_BEGIN__
of __END__ with __CV_END__
of EXIT with exit(0)

Hello, I have two errors in my code:
– core.hpp line 432:
typedef Matx diag_type;
statement-expressions are allowed only inside functions
confused by earlier errors, bailing out
– mycvHaarDetectObjects.cpp line 839:
CvSeq* seq_thread[CV_MAX_THREADS] = {0};
‘CV_MAX_THREADS’ was not declared in this scope
Do you have any idea why I am getting these errors? Thanks

Hi looks great but I’m having trouble compiling it… Perhaps you could remove your code signing properties and post the xcode file, it would be very useful and strongly appreciated!
Thx
Antoine

Also do you have a donate button or something I’d really like to support your work 🙂

I also encount some error in .cpp file, when I compile the project. could you share the .xcodeproj file?

Hi Roy,
I am a PhD student from the university of Modena and Reggio Emilia, Italy. I have cross-compiled opencv-2.0.0 on a smart camera board equipped with embedded linux. This board is based on the ARM PXA270 processor. I successfully tested the original opencv face detector. Now I would like use your improved face detector. I have cross compiled your code (mycvHaarDetectObjects.cpp) and statically linked with my main program and with opencv libs. My code calls your function mycvHaarDetectObjects(). The cross-compilation is successful but at run time the program aborts with this error:
“OpenCV Error: Unspecified error (The node does not represent a user object (unknown type?)) in cvRead, file ../../opencv-2.0.0.int/src/cxcore/cxpersistence.cpp, line 4722
terminate called after throwing an instance of cv::Exception’
Aborted”
This run-time error happens when the main code tries to load the file haarcascade_frontalface_alt2.xml, with this statement: cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); where cascade_name points to the filename string.
The call to cvLoad() fails because the system doesn’t know the type related to the format of the file haarcascade_frontalface_alt2.xml.
I have tried to fix this problem adding at your source file the code to register a new type : CvType haar_type() and related functions. I have picked up this code from the original cvhaar.cpp source file.
Now the run time error seems to be fixed but the program doesn’t detect any face.
Please could you give me some tips about the workaround of this problem.
Thanks in advance.
Paolo

Hi,
First, thank you for your page!
Where can we find your optimized version of cvHarr.cpp, beacause it’s not in your svn in FaceDetector-iPhone?
Thank you

@Kelly, I can’t recover the xcodeproj for this old project anymore… however all the code including optimization is in the repository.
It’s only a matter of setting up a new iOS project and adding all the files into it.

Help me sir, is there any possibility to store the detected images, so that a detected image can later be recognised?

Hi there!
First of all, great job on doing face detection with openCV on iOS.
I’m trying to accomplish face “recognition” in my iphone app. I know there are APIs available such as the face.com API. Do you know what the openCV support is for face recognition? Thanks
Sunny
