OpenGL for AviSynth [Update: now w/code]

Hi
I had a little project at work recently that involved creating movie clips using AviSynth.
And I was appalled by the shabbiness of the transition plugins freely available for AviSynth; they always reminded me of 80s-style video editing…
So I set out to integrate AviSynth with OpenGL to create a nice 3D transition effect for our movie clips.
I had 2 major bases to cover:

  • AviSynth plugin API
  • OpenGL rendering

The AviSynth API is not so well documented, but there are very good ground-up examples of how to build a plugin yourself. Here is the one I used; it basically does nothing but copy the input frame to the output frame.
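For orientation, here is a minimal sketch of such a pass-through filter against the AviSynth 2.5 C++ API; this is my reconstruction of the standard SDK example rather than the exact file I used, so treat the details as assumptions.

#include "avisynth.h"

// Pass-through filter: copies each source frame to the output untouched.
class SimpleSample : public GenericVideoFilter {
public:
  SimpleSample(PClip _child) : GenericVideoFilter(_child) {}

  PVideoFrame __stdcall GetFrame(int n, IScriptEnvironment* env) {
    PVideoFrame src = child->GetFrame(n, env);
    PVideoFrame dst = env->NewVideoFrame(vi);
    // BitBlt copies row by row and handles the differing pitches for us.
    env->BitBlt(dst->GetWritePtr(), dst->GetPitch(),
                src->GetReadPtr(), src->GetPitch(),
                src->GetRowSize(), src->GetHeight());
    return dst;
  }
};

AVSValue __cdecl Create_SimpleSample(AVSValue args, void*, IScriptEnvironment* env) {
  return new SimpleSample(args[0].AsClip());
}

extern "C" __declspec(dllexport) const char* __stdcall
AvisynthPluginInit2(IScriptEnvironment* env) {
  env->AddFunction("SimpleSample", "c", Create_SimpleSample, 0);
  return "SimpleSample plugin";
}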
OpenGL, on the other hand, is very well documented and “tutorialed”. I based my code on this example from NeHe.
So basically what I wanted to achieve was:

  1. Read input frame (AviSynth)
  2. Paint frame as texture over 3D model (OpenGL)
  3. Draw rendered 3D image to output frame (OpenGL+AviSynth)

Reading the frame is pretty straightforward. Frames come encoded as 24-bit RGB, with a little twist: the row size in bytes is not width*3 as you’d expect it to be; instead AviSynth uses a parameter called “Pitch” to give the row size in bytes (rows may be padded for alignment).
Update (14/9/09): source is now available in the repo: browse download

So when I extract the input image I do:
const int width = vi.width;              // frame dimensions from the AviSynth VideoInfo
const int height = vi.height;
const int src_pitch = src->GetPitch();   // bytes per row, including any padding
unsigned char* texBuf = (unsigned char*)malloc(width * height * 3); // "pure" RGB24 encoding
const unsigned char* srcp = src->GetReadPtr(); // strange AviSynth RGB24 encoded data

memset(texBuf, 0, width * height * 3);
int line_length = width * 3;
for (int y = 0; y < height; y++) {
    const unsigned char* line_srcp = srcp + y * src_pitch;
    unsigned char* line_dstp = texBuf + (height - 1 - y) * line_length; // flip rows vertically
    memcpy(line_dstp, line_srcp, line_length);
}
OK, now I have the input frame in my memory space and I can move on to make an OpenGL texture out of it:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, texBuf);
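Two small gotchas worth flagging here (my additions, not part of the original snippet): texBuf rows are tightly packed at width*3 bytes while OpenGL's default unpack alignment is 4, so for widths that are not a multiple of 4 the upload will skew unless you relax the alignment first, and it doesn't hurt to pick the filtering modes explicitly:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // texBuf rows are width*3 bytes, not 4-byte aligned; set this before the upload above
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // use the mipmaps gluBuild2DMipmaps created
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);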

Cool.
This is when I ran into the first wall. In order to render anything, OpenGL must have a rendering context, and for a rendering context you need an OS drawing context. In the Win32 case a drawing context can be either a bona fide actual window, or a memory-based BITMAP…
I burned many hours trying to make BITMAPs work as offscreen rendering contexts for OpenGL, but to no avail… So I went with the “dirtier” solution of creating a window just for the sake of off-screen rendering (which kinda takes the “off” out of “offscreen”), but it works.
For easier setup I used GLUT (with an OpenGLUT impl.) and GLEE:
// (assumes glutInit() has already been called once when the plugin was loaded)
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
glutInitWindowSize(width, height);
glutCreateWindow("offscreen");

glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutKeyboardFunc(keyboard);
glutIdleFunc(idle);
GLeeInit();
This is done for each frame, sadly, but I wasn’t able to make it work otherwise.
Another hack I had to do is have the GLUT main loop render only one frame, because the whole thing is set up again for the next frame.
So I enter the main loop after all settings are done:
glutMainLoop();
And at the end of the display() function I exit the main loop (this can only be done in OpenGLUT; regular GLUT does not support it):
glFlush();
glFinish();

// img_data points to a width*height*3 byte buffer allocated beforehand
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, img_data);
glutLeaveMainLoop();
But, not before I grab all the pixel data from the rendered image using glReadPixels. This will go into the output frame of the AviSynth session:
unsigned char* dstp = dst->GetWritePtr();   // the output frame from env->NewVideoFrame(vi)
const int dst_pitch = dst->GetPitch();
for (int y = 0; y < height; y++) {
    unsigned char* line_dstp = dstp + y * dst_pitch;
    unsigned char* line_srcp = img_data + y * width * 3;
    memcpy(line_dstp, line_srcp, line_length);
}

OK, we’re pretty much done. Just need to draw some polygons in the display() function, use the tex texture and the input movie will appear frame-by-frame on the polygons – sweet!
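To make that concrete, here is a rough sketch of what a display() along those lines could look like, assuming a perspective projection set up in reshape() and globals tex, img_data, width, height and an animated angle; it is an illustration of the idea, not the plugin's actual rendering code:

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -3.0f);     // pull the quad back into view
    glRotatef(angle, 0.0f, 1.0f, 0.0f);  // 'angle' would be advanced per frame for the transition

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);   // the texture built from the input frame
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();

    // Read back the rendered pixels and leave the main loop, as shown above.
    glFlush();
    glFinish();
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, img_data);
    glutLeaveMainLoop();
}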
Some results
This is the .avs file:
LoadPlugin("opengl-avisynth.dll")
A = ImageSource("A.jpg", end=70)
B = ImageSource("B.jpg", end=50)
Dissolve(A,B,20).SimpleSample

This is the result:

Another result, this time with actual video as an input:
A = AVISource("MVI_6130.avi.AVI").ConvertToRGB24()
SimpleSample(A)

7 replies on “OpenGL for AviSynth [Update: now w/code]”

Wow, for the longest time I have dreamed that someone would open the door to 3D effects in Avisynth. Now you have done it!! Will you make your plugin/code public? It could be fun to add more transitions to it.

You make a great point!
Maybe I will try to use the Blender exporter to make nifty transitions…
Actually, the problem with my plugin was that we couldn’t make a real killer effect with it; all we had was the cube.
Re the code, unfortunately the complete source belongs to the company I work in, and I can’t release it under any license. But the key points are in the snippets, you just have to fill in the blanks (like the OpenGL display code, and the AviSynth plugin code that is available online).
Perhaps I’ll be able to release the code once my company abandons this concept.
Thanks for commenting!
Roy.

Hi Roy,
Sorry to pester you again. I’m not that strong in C++, but I would love to build a free 3D transition lib for Avisynth. Is there any chance that your company has abandoned the concept by now, so that you could release the code?
Best regards,
Tin2tin

Awesome, I tried that some time ago, but couldn’t find the time.
And I actually had the same problem with the context;
I’ve tried using the Mesa library, which has some “software” rendering for offscreen drawing, but couldn’t make it work either.
