Tuesday, March 30, 2010

FBOs, VBOs & Design Books

The engine is taking shape. I can now upload meshes to the graphics card, and I have a working flow for rendering to half-float framebuffers. I no longer have to grab the screen buffer; I can render directly into a texture and write that texture to disk. I also extended the motion blur routine so it does antialiasing as well.
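For reference, the half-float render-to-texture setup boils down to something like the sketch below. Take it as a minimal PyOpenGL sketch assuming a current GL context; the function name and the core-profile enum spellings (GL_RGBA16F, GL_HALF_FLOAT instead of the older ARB/EXT variants) are my illustration, not the engine's actual code.

    from OpenGL.GL import *

    def create_float_target(width, height):
        # colour texture with 16-bit float channels
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_HALF_FLOAT, None)

        # framebuffer object with that texture as its colour attachment
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0)
        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        return fbo, tex

Binding the FBO before drawing renders into the texture; binding framebuffer 0 again gets me back to the window.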

I need a clearer vision of what I'm going to do, so I ordered about €100 worth of design books on ancient cultures: Aztec, Mexican, Mayan, Maori, Aboriginal, Celtic, Hindu, ... I hope they give me inspiration for patterns and animations in the movie.

I currently have no clear plan for where the meshes will come from. I'll play around with the GL extrusion library (GLE) a bit and see if I can dump its data to disk. Perhaps I'll also have to work with Blender and write a Collada importer. We will see.

Sunday, March 28, 2010

Rendering & Capturing Pipeline

I wrote the rendering and capturing pipeline today. I use the OpenGL accumulation buffer to do motion blur. Each frame is captured and saved as a BMP file. When a sequence is done, I use Avidemux to encode the video.

Unfortunately, the screen has to be entirely unobstructed by any other window for capturing to work. As soon as another window sits in the foreground, glReadPixels() returns undefined data for the covered part of the picture. Apparently this is due to pixel ownership testing.
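In case it helps anyone, a single frame render boils down to something like the sketch below. It's a hedged reconstruction rather than the real code: render_scene() is a placeholder for the actual drawing, the GL context is assumed to have been created with an accumulation buffer, and PIL is assumed for the BMP writing.

    from OpenGL.GL import *
    from PIL import Image

    def render_blurred_frame(frame_time, samples, shutter, width, height, path):
        # accumulate `samples` sub-frames spread across the shutter interval
        glClear(GL_ACCUM_BUFFER_BIT)
        for i in range(samples):
            render_scene(frame_time + shutter * i / samples)  # placeholder
            glAccum(GL_ACCUM, 1.0 / samples)
        glAccum(GL_RETURN, 1.0)

        # read the result back; orientation -1 flips GL's bottom-up rows
        glPixelStorei(GL_PACK_ALIGNMENT, 1)
        data = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
        image = Image.frombuffer("RGB", (width, height), data,
                                 "raw", "RGB", 0, -1)
        image.save(path)  # a .bmp extension selects the BMP writer

And of course the read-back only returns valid data for pixels the window actually owns, hence the problem above.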

Here is a test render:



Rendering took about 4 minutes, most of which was spent writing to disk. Resolution is 1920x800 at 24 fps, with 64x motion blur sampling (= 1536 fps).

Sync to music seems to be equally fine (same test, with a different material):

Saturday, March 27, 2010

Shader Workflow, Bindings & Camera Testing

Yesterday evening I added a Python console to the renderer, plus a new Binding class that lets me tie values to UI control elements such as sliders, so I can quickly scrub over a value to find good settings.
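The Binding idea itself is tiny. Below is a toy version of it; the class name, method names and the slider hookup are my own guesses at the structure, not the real code.

    class Binding(object):
        # one scrubbed value, many listeners
        def __init__(self, value):
            self._value = value
            self._listeners = []

        def get(self):
            return self._value

        def set(self, value):
            if value != self._value:
                self._value = value
                for listener in self._listeners:
                    listener(value)

        def subscribe(self, listener):
            self._listeners.append(listener)

    # a slider's value-changed callback calls bloom_strength.set(new_value),
    # and the renderer simply reads bloom_strength.get() every frame
    bloom_strength = Binding(0.5)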

I also imported the GLSL shader pipeline from my "nedu" engine that I wrote for Die Ewigkeit schmerzt. It scans the shader sources and exposes attributes to modify uniform variables from within Python. It can merge multiple shaders by replacing GL symbol names. It also keeps track of changing sources and re-compiles shaders on the fly.

This way I can quickly rewrite a shader while the preview is running, and cut down on waiting time between making changes and seeing the results. This is very important.
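The hot-reload part is the piece I'd single out. A stripped-down sketch of how that can work with PyOpenGL's shader helpers follows; the class and the file handling are invented for illustration, and the real pipeline additionally does the uniform scanning and shader merging described above.

    import os
    from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, glUseProgram
    from OpenGL.GL import shaders

    class HotShader(object):
        # recompile the GLSL program whenever a source file changes on disk
        def __init__(self, vs_path, fs_path):
            self.paths = (vs_path, fs_path)
            self.mtimes = None
            self.program = None

        def use(self):
            mtimes = tuple(os.path.getmtime(p) for p in self.paths)
            if mtimes != self.mtimes:
                try:
                    vs = shaders.compileShader(open(self.paths[0]).read(),
                                               GL_VERTEX_SHADER)
                    fs = shaders.compileShader(open(self.paths[1]).read(),
                                               GL_FRAGMENT_SHADER)
                    self.program = shaders.compileProgram(vs, fs)
                except RuntimeError:
                    pass  # keep the last working program while the source is broken
                self.mtimes = mtimes
            if self.program:
                glUseProgram(self.program)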

A colleague lent me his Full HD camcorder, a Canon VIXIA HF10, so I can try it out and get a feel for shooting with a digital camera. The camera records an AVCHD-encoded video stream at 1920x1080 and does pretty well under various lighting conditions. I uploaded some material to YouTube.



The image stabilizer isn't bad when you keep a steady hand, but it does terrible things to panning shots, and it doesn't work at all when you zoom in. Good thing Sylvia bought a tripod recently, so I can probably turn the stabilizer off.



Contrast seems to be a little low by default (more greys than blacks), and I haven't figured out how to disable auto-exposure yet. The motorized zoom has four speed settings, some of which might come in handy for steady, almost invisible zooms.



On Sunday Sylvia and I are going to shoot a few takes for a very short film, so I can learn something about shooting, cutting and post-processing on Linux.

"Teonanacatl" is going to be mostly CGI, but it's an animated story embedded in a real one - I am going to need one or two shots for the beginning and the end. It will be helpful if I have had some previous experience shooting digital film, so I can make this perfect.

I currently spend my commute reading Robert Rodriguez's "Rebel Without a Crew". His insights into how he shot his first feature film and got into the film business are tremendously helpful to me. If it weren't for his book, I would be shitting my pants and hiding under a desk right now - or I would just sit around and bore myself to death with my cowardice.

Thursday, March 25, 2010

Shader Screenshots

I managed to incorporate normal mapping into the textures. The result is good, but it strongly depends on the quality of gradients and textures. Here are three test shots of the "mystic material". The last one uses a 2D map for lookups.

Shader Experiments & Feedback from Kashyyyk

Yesterday evening I experimented with custom shaders, finding that PyOpenGL has perfect support for GLSL, which allowed me to go straight to work.

I had the idea of using rim/back lights on the "organic" objects in the demo, to give the visuals a dark feel and a strongly unrealistic but mystical look. I wrote a shader that picks intensity values from a gradient texture; it's the right direction, but still needs some improvement.
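To give an idea of the direction, a shader in that spirit might look like the fragment below. This is illustrative GLSL kept as a Python string for the pipeline, not the actual source; the uniform and varying names are made up.

    RIM_FRAGMENT_SHADER = """
    uniform sampler2D gradient;
    varying vec3 normal;
    varying vec3 view_dir;

    void main() {
        // rim intensity: bright where the surface grazes the view direction
        float rim = 1.0 - abs(dot(normalize(normal), normalize(view_dir)));
        // the gradient texture maps that intensity to the final colour
        gl_FragColor = texture2D(gradient, vec2(rim, 0.5));
    }
    """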

I figured I would also need normal mapping to give the rim a bit of texture. To do that efficiently, I needed a utility that could convert grayscale bump maps to blue-ish normal maps. To my great delight, the GIMP normalmap plugin does exactly that, featuring a 3D preview of the texture before applying the conversion, which is just sweet.

I also found Kashyyyk's MySpace page, told him about my project and asked for permission to use the soundtrack non-commercially. I expected some reluctance, but to my great surprise he instantly gave me a go, and even offered an uncompressed wave file of "Teonanacatl".

Wednesday, March 24, 2010

First Steps

I started setting up a loose framework yesterday evening to make sure I got the technical side correct.

The rendering is done within a very small Python script, which sets up an OpenGL context within a GTK window, creates a GStreamer player pipeline, and uses the player's position feedback to synchronize the visuals. The track stays at exactly 147 BPM all the way to the end, and the timing was spot-on from the start. I'm glad I don't have to fix anything here.

There was a slight disappointment when seeking within the track made the position feedback unreliable, but I found that adding gst.SEEK_FLAG_ACCURATE to the seek call fixed the problem instantly - seek-friendly, sample-perfect, nanosecond-precise synchronization within an MP3 stream? I am impressed.
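The sync code itself is only a few lines. The sketch below is how I remember the gst-python 0.10 calls fitting together; the playbin2 usage and the file path are placeholders, so treat the details as approximate.

    import pygst
    pygst.require("0.10")
    import gst

    BPM = 147.0

    player = gst.element_factory_make("playbin2", "player")
    player.set_property("uri", "file:///path/to/teonanacatl.mp3")  # placeholder path
    player.set_state(gst.STATE_PLAYING)

    def current_beat():
        # position feedback in nanoseconds -> seconds -> beats at 147 BPM
        position, _format = player.query_position(gst.FORMAT_TIME)
        return position / float(gst.SECOND) * BPM / 60.0

    def seek_to_beat(beat):
        nanoseconds = int(beat * 60.0 / BPM * gst.SECOND)
        player.seek_simple(gst.FORMAT_TIME,
                           gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_ACCURATE,
                           nanoseconds)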

I went through the track and marked the time (in beats) of various segments that I can imagine being separate scenes.

I have a few very rough ideas in my head about what to show, but they need to be put in order and made part of a larger whole. I need a script. I'm thinking about using Celtx to write the index cards.

Tuesday, March 23, 2010

Production has started.

Since the old site has been quite dead recently, I decided to revive the label as a name for a short film production site. I will use this blog to report on ongoing projects.

For a while now I have been crazy about doing a music video for Kashyyyk - Teonanacatl, a speedy full-on psytrance track that I fell in love with instantly. I suspect that the wonderful aesthetic of this track escapes most people, which is why I want to produce something visual that fits the mood.

Plus, I want to connect to creative movie makers all over the world, and let them know that I'm here, ready to do new projects.

So far, the plan is to write a procedural renderer using Python and OpenGL - that is, to code everything that can be seen in the video and render the result into a file that I'll put online on YouTube. I have opened a repository in an undisclosed location, and will do a few visual tests to see if the workflow makes sense.

Ah yes: the movie will be produced using free software only. Just so I can feel like a hero.