Friday, April 30, 2010

More Blender & Next Steps

Over the past week, I played around with my 2D physics engine and plant growth. I also had a look at soft bodies in Bullet, which would be great for simulating cells.

I found the Algorithmic Botany website, which has a lot of interesting papers on simulating plants and plant growth, and read through some of them.

I recognized that plants are essentially spatio-temporal structures (STS), that is: structures formed from the movement of a body in time. So I abstracted the problem away from plants, and thought about simulating and animating spatio-temporal structures in general.

I thought about how I wanted to implement an STS generator, and concluded that it would be best to integrate it as a plugin into Blender. Problem is: I know very little about Blender. So I dug around in the Blender 2.5 scripting documentation, and from what I can gather, it looks easily extensible.

I want my spatio-temporal structures to be transformable into physical objects, so I need some integration with Blender's built-in physics engine. The structures are also likely to generate geometry as part of an animation, and I have no idea how to bake that.

As a first step, to get comfortable with Blender, I will do a test scene for a small segment of the music, to see how well I can synchronize the visuals to it. This will also answer the question of how much I have to program to get good results.

I know that my own program would render images much faster, since it's GPU accelerated rather than software rendering, but I can also anticipate the amount of work involved in building my own editor, which I don't want to do. I will instead master Blender, and maybe one day write my own GPU accelerated renderer for it.

Saturday, April 24, 2010

Growing Plant Experiments

I experimented with Python and Box2D in the past week and wrote my first self-replicating cell plant simulator. The results are in 2D, but they gave me a sense of direction.


This one was produced in a zero-gravity environment. The plant was created by having a "driver" cell at the top split into three cells, two static ones and a third new "driver". All cells would be glued together through trusses made of distance joints, which are the lines you see. Each joint would grow linearly over time, getting longer and longer until it reached a threshold.
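
In code, the mechanic looks roughly like this, assuming the kwargs-style pybox2d API (a simplified sketch, not my actual simulator):

    from Box2D import b2World

    world = b2World(gravity=(0, 0))  # the zero-gravity environment

    def make_truss_joint(body_a, body_b):
        # Cells are glued together with distance joints; the soft
        # frequency/damping settings are what make the truss springy.
        return world.CreateDistanceJoint(
            bodyA=body_a, bodyB=body_b,
            anchorA=body_a.worldCenter, anchorB=body_b.worldCenter,
            frequencyHz=4.0, dampingRatio=0.5)

    def grow_joints(joints, rate=0.005, threshold=2.0):
        # Linear growth per step, capped at the threshold.
        for joint in joints:
            if joint.length < threshold:
                joint.length += rate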

Occasionally, a driver cell would split into two driver cells, forming branches that continued to grow on their own.

I was fairly happy with the above solution, but distance joints don't have a collision boundary, so it was possible for a plant to grow into itself, or for the truss to misalign.


Here is an example of a plant with weak, misaligning trusses, and trusses that grow back into themselves. Again zero gravity, this time with a low resonant frequency on the distance joints.


This is the first experiment with gravity. Here I'm using the cells themselves to build a structure, without trusses to expand the space. Each cell properly divides into two, but some cells (a 30% chance) don't divide and just keep growing. The cells are not connected. I observed that large cells tend to cluster at the top, while small, replicating cells stay close to the ground.
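
Growing a cell means swapping its fixture, since Box2D shapes can't be resized in place; a simplified sketch of the rule (divide() stands in for the actual split logic):

    import random

    def step_cell(cell):
        if random.random() < 0.3:
            # 30% chance: don't divide, just keep growing. Replace the
            # circle fixture with a slightly larger one.
            old = cell.fixtures[0]
            radius = old.shape.radius * 1.05
            cell.DestroyFixture(old)
            cell.CreateCircleFixture(radius=radius, density=1.0)
        else:
            divide(cell)  # hypothetical helper that spawns two cells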


Now I'm splitting each cell into two at steady time intervals, and connecting both new cells to all previously connected ones through distance joints. This time, heavily stretched distance joints are removed. After a few iterations, this optimal triangle formation can be observed.
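
The removal rule compares each joint's current anchor distance to its rest length and destroys it past a stretch limit; roughly:

    from Box2D import b2DistanceJoint

    def prune_stretched_joints(world, limit=1.5):
        # Iterate over a copy, since we mutate the joint list.
        for joint in list(world.joints):
            if not isinstance(joint, b2DistanceJoint):
                continue
            stretch = (joint.anchorA - joint.anchorB).length / joint.length
            if stretch > limit:
                world.DestroyJoint(joint)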


A few iterations later. You can see that the limited space for the growing structure contracts the joints (red), and a few cells overlap. All in all, the resulting structure is closer to a tumor.

Conclusions:

I like the growing cell approach more, but I have no idea yet how to make the cells take on a specific shape.

Also, the resulting formation is a lot more rigid, but it should be flexible. Bullet, the engine I plan to use for the 3D conversion, supports soft bodies, so it's likely I'll use a soft body variant.

At some point, these structures need to be skinned, that is: a surface has to be generated from the cell point set. I have no idea how to do that, either.

Sunday, April 18, 2010

Stop. Relax. Think again. Keep the plants.

After doing a bit of storyboarding last week, I became increasingly tired of the whole thing. At one point I was even contemplating giving it up altogether. But I made promises to myself and others, so there's no chickening out.

The way I have planned this, it would take an entire summer to make. That's not the biggest trouble, though. I found out that I don't like doing traditional storytelling work: I don't like drawing storyboards, and I don't have much fun pinning the whole thing down in advance - at all.

What I do have fun with, though, is progress through experiments. I like building stuff on the fly and discovering cool things to do with it as I go. I usually have only a very rough outline of what I'd like to do, and I decided to keep it that way.

So, I decided I'll kick the story and just keep the plants. We went to the park this afternoon, looked at the succulents in the tropical house again, and took some photos. I started to think about programming plants that grow, very much like real plants do.

I find it weird that we, the rational people, like to demonstrate to religious lunatics that there is no god and that evolution is a much likelier theory than intelligent design, yet our 3D artists play god all the time: plants, people, cities, everything is modeled thing by thing - no evolution to see here. All hand-picked and hand-animated.

I will therefore attempt to write a program that can make plants grow from very simple parameters, and have these plants actually look like something, and respond like actual physical objects.

I admit I don't have much of a clue how to do it yet. Maybe I'll generate a skeleton of support bars and skin it with plant material. I don't know how far I'll get, but the approach will certainly allow much wider parametrization, and yield enough material to fill seven minutes without becoming boring.

Not every piece of entertainment needs a story. What happened to good old wonder? Can't we just make things that amaze people without meaning anything? Even animal documentaries need a narrator - or do they? Why don't we let the pictures speak for themselves?

Is it strange to think that money can be made with videos that are like music? Wouldn't you watch a two hour movie that was filled with amazing stuff, without any story? I wish they had done that to Avatar. It would have made the second viewing so much better. %)

Anyway, moving on.

Sunday, April 11, 2010

First Storyboard Work & Animatics

Today I finally managed to sketch the first four segments of 19. So far, each segment has about eight to ten images. My drawing style is terrible, but it's good enough to give someone else a rough idea of shot angle, setting and movement.


Before I drew each sequence, I changed the script so that each sentence would more closely reflect what is seen in the images. The beginning especially was full of very vague descriptions, such as: "The light warrior manifests". Now I know how he actually manages to manifest.

I wanted to see how easy it was to make an animatic. I took the first sequence and shot each image using a Canon IXUS on a tripod, because that's much faster than scanning them. Besides, my scanner doesn't work with Linux and has to be driven from Win32 in a virtual machine, which sucks.

I imported the images into the Blender video editor and assembled an animatic of about half a minute. Blender's controls are terrible, but I will get used to them. I'll just pretend this is my only option.

If I can continue at a pace like this, I'll have the rough storyboard done in about one to two weeks. From then on it's actual character, environment and set design.

Tuesday, April 6, 2010

Books & Storyboard

Most of the design books arrived today. They are absolutely beautiful, and I'm sure they will help me design the characters and enrich the visuals.

I also bought 2 packs of 100 DIN A5 sheets for storyboard drawing, a few pens, and another Moleskine sketch book - you can't have enough of those. I started doodling a bit and found out that the first idea for the main character needs some work.

I spent a bit of time on Wikipedia researching the spiritual warrior and reading up on the monomyth again. I have that Joseph Campbell book in the cupboard. I guess it's time to check it out once more. Studying this will help me find the optimal shape for the main character.

The beginning of the screenplay needs a bit of fleshing out. I omitted quite a few character details, and there's not enough action.

Sunday, April 4, 2010

Screenplay

Yeah. I finished the first draft of the screenplay today. I'm very happy with the result, but the script indicates very clearly that I'm going to need a bit more than my own engine. I guess this is the time to pull out Blender 2.5 and get cracking.

The next step, though, is a storyboard, which will help me find the pace and optimal cut for the movie.

In other news, my wife and I watched "West Side Story" again today. I had to cry several times, mostly because they don't make them like that anymore(tm), and also at the great universal truths expressed. The guys who made this movie are all dead. It was a great way to connect with the past.

Saturday, April 3, 2010

More Shaders & Spirals

Finally I'm getting somewhere. I improved the checkers shader yesterday, so that I could do a few more things with it. The squares were originally individual faces on a rendertarget, but now it's all done using a blend texture:
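
In essence, the fragment shader now does something like this (a paraphrase, not the actual source):

    # The blend texture drives the mix between two colors; swapping
    # the texture changes the pattern without touching the shader.
    CHECKER_FRAGMENT = """
    uniform sampler2D blend_texture;
    uniform vec4 color_a;
    uniform vec4 color_b;
    void main() {
        float t = texture2D(blend_texture, gl_TexCoord[0].st).r;
        gl_FragColor = mix(color_a, color_b, t);
    }
    """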

And this is how it looks in action (an earlier iteration):

This way I can re-use the shader with other blend textures, and also produce slightly different effects:

I also wrote a spiral shader, which takes a tagged mesh and restructures it to the parameters of a spiral. Here is an action shot:
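
In spirit, the vertex program maps one mesh axis onto an Archimedean spiral (heavily simplified guesswork; the real shader has more parameters):

    SPIRAL_VERTEX = """
    uniform float turns;
    uniform float spacing;
    void main() {
        float t = gl_Vertex.x * turns * 6.2831853;
        vec4 p = vec4(spacing * t * cos(t), spacing * t * sin(t),
                      gl_Vertex.z, 1.0);
        gl_Position = gl_ModelViewProjectionMatrix * p;
    }
    """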

Thursday, April 1, 2010

Blender support & First Procedural Textures

I added simple support for COLLADA meshes yesterday evening, so I can load meshes from Blender. Here is proof:
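
Getting at the raw mesh data is not much work, by the way; a bare-bones sketch of the parsing side (this grabs only the float arrays, not the accessors or triangle indices):

    import xml.etree.ElementTree as ET

    NS = "{http://www.collada.org/2005/11/COLLADASchema}"

    def load_float_arrays(path):
        # Collect every <float_array> (positions, normals, UVs) by id.
        root = ET.parse(path).getroot()
        arrays = {}
        for node in root.findall(".//%sfloat_array" % NS):
            arrays[node.get("id")] = [float(v) for v in node.text.split()]
        return arrays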

I also worked on a first texture effect. I got the idea from a mural drawing I once did. It's also vaguely related to Escher.

I cleaned up the checkers effect a bit and kept watching it. While doing so, I got an interesting idea for a post processing effect, from tricks the contrast played on my eye:

Tuesday, March 30, 2010

FBOs, VBOs & Design Books

The engine is taking shape. I can now upload meshes to the graphics card, and I have a working flow for rendering to half-float framebuffers. I don't have to grab the screen buffer anymore, but I can directly render into a texture and write that texture to disk. I changed the motion blur routine to also do antialiasing.
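
For reference, the render-to-texture setup boils down to something like this (simplified, assuming the core FBO entry points in PyOpenGL):

    from OpenGL.GL import *

    def create_half_float_target(width, height):
        # A 16-bit float color texture attached to an FBO, so frames
        # render off-screen and can be read back for writing to disk.
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_HALF_FLOAT, None)
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0)
        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        return fbo, tex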

I need a clearer vision of what I'm going to do, so I ordered about €100 worth of ancient culture design books: Aztec, Mexican, Mayan, Maori, Aboriginal, Celtic, Hindu, ... I hope they give me inspiration for patterns and animations in the movie.

I currently have no clear plan for where to get the meshes. I'll play around with the GL extrusion library (GLE) a bit, and see if I can dump the data to disk. Perhaps I'll also have to work with Blender and write a COLLADA importer. We will see.

Sunday, March 28, 2010

Rendering & Capturing Pipeline

I wrote the rendering and capturing pipeline today. I use the OpenGL accumulation buffer to do motion blur. Each image is captured and saved as a BMP file. When a sequence is done, I use Avidemux to encode the video.
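
The blur itself is the classic accumulation-buffer recipe: render N sub-frames across the shutter interval and average them (a sketch; draw_scene stands in for my actual render call):

    from OpenGL.GL import glAccum, glClear, GL_ACCUM, GL_RETURN, GL_ACCUM_BUFFER_BIT

    def render_blurred_frame(frame_time, fps=24.0, samples=64):
        glClear(GL_ACCUM_BUFFER_BIT)
        for i in range(samples):
            draw_scene(frame_time + i / (fps * samples))  # spread sub-frames over the frame
            glAccum(GL_ACCUM, 1.0 / samples)  # add one weighted sub-frame
        glAccum(GL_RETURN, 1.0)  # write the average to the color buffer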

Unfortunately, the screen has to be entirely unobstructed by any other window for capturing to work. Once another window is in the foreground, glReadPixels() ignores that part of the picture. Apparently this is due to the pixel ownership test.

Here is a test render:

Rendering took about 4 minutes, the bulk of which was spent writing to disk. Resolution is 1920x800 at 24 fps, with 64x motion blur sampling (= 1536 rendered frames per second of video).

Sync to music seems to be equally fine (same test, with a different material):

Saturday, March 27, 2010

Shader Workflow, Bindings & Camera Testing

Yesterday evening I added a Python console to the renderer, and a new Binding class that lets me tie values to UI control elements such as sliders, so I can quickly scrub over a value to find good settings.
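
The Binding idea is roughly this (a simplified reconstruction using PyGTK, not the actual class):

    import gtk

    class Binding(object):
        """Ties an attribute of some object to a slider for live scrubbing."""
        def __init__(self, target, attr, lower, upper, step=0.01):
            self.target, self.attr = target, attr
            adjustment = gtk.Adjustment(getattr(target, attr), lower, upper, step)
            adjustment.connect("value-changed", self._on_changed)
            self.widget = gtk.HScale(adjustment)

        def _on_changed(self, adjustment):
            # Push the slider value straight into the bound attribute.
            setattr(self.target, self.attr, adjustment.get_value())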

I also imported the GLSL shader pipeline from my "nedu" engine that I wrote for Die Ewigkeit schmerzt. It scans the shader sources and exposes attributes to modify uniform variables from within Python. It can merge multiple shaders by replacing GL symbol names. It also keeps track of changing sources and re-compiles shaders on the fly.

This way I can quickly rewrite a shader while the preview is running, and cut down on waiting time between making changes and seeing the results. This is very important.
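
The recompile-on-change part is simple in principle: stat the sources every frame and rebuild the program when a modification time changes (a sketch using PyOpenGL's shader helpers):

    import os
    from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
    from OpenGL.GL import shaders

    class WatchedProgram(object):
        def __init__(self, vert_path, frag_path):
            self.paths = (vert_path, frag_path)
            self.mtimes = None
            self.program = None

        def reload_if_changed(self):
            # Called once per frame; cheap when nothing has changed.
            mtimes = tuple(os.path.getmtime(p) for p in self.paths)
            if mtimes != self.mtimes:
                self.mtimes = mtimes
                vert, frag = [open(p).read() for p in self.paths]
                self.program = shaders.compileProgram(
                    shaders.compileShader(vert, GL_VERTEX_SHADER),
                    shaders.compileShader(frag, GL_FRAGMENT_SHADER))
            return self.program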

A colleague lent me his Full HD camcorder, a Canon VIXIA HF10, so I can try it out and get a feel for shooting with a digital camera. The camera records an AVCHD encoded video stream at 1920x1080, and does pretty well under various lighting conditions. I uploaded some material to YouTube.

The image stabilizer isn't bad when you keep a steady hand, but it does terrible things to panning shots, and it doesn't work at all when you zoom in. Good thing Sylvia recently bought a tripod, so I can probably turn the stabilizer off.

Contrast seems to be a little low by default (more greys than blacks), and I haven't figured out how to disable auto-exposure yet. The motorized zoom has four speed settings, some of which might come in handy for steady, almost invisible zooms.

On Sunday Sylvia and I are going to shoot a few scenes for a very short film, so I can learn something about shooting, cutting and post processing on Linux.

"Teonanacatl" is going to be mostly CGI, but it's an animated story embedded in a real one - I am going to need one or two live shots for the beginning and the end. It will help to have had some previous experience shooting digital film, so I can make those shots perfect.

I currently spend my commute reading Robert Rodriguez' "Rebel Without a Crew". His insights into how he shot his first feature film and got into the film business are tremendously helpful to me. If it weren't for his book, I would be shitting my pants and hiding under a desk right now - or I would just sit around and bore myself to death with my cowardice.

Thursday, March 25, 2010

Shader Screenshots

I managed to incorporate normal mapping into the textures. The result is good, but it strongly depends on the quality of gradients and textures. Here are three test shots of the "mystic material". The last one uses a 2D map for lookups.

Shader Experiments & Feedback from Kashyyyk

Yesterday evening I experimented with custom shaders, finding that PyOpenGL has perfect support for GLSL, which allowed me to go straight to work.

I had the idea of using rim/back lights on "organic" objects in the demo, to give the visuals a dark feel and a strongly unrealistic but mystical look. I wrote a shader that picks intensity values from a gradient texture, which is the right direction, but it still needs some improvement.
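
In shader terms: compute a rim term from the normal and view direction, then use it as the texture coordinate into the gradient (a sketch of the principle, not my exact shader):

    RIM_FRAGMENT = """
    uniform sampler1D gradient;  // intensity ramp
    varying vec3 normal;         // interpolated from the vertex shader
    varying vec3 view_dir;
    void main() {
        // Strongest where the surface turns away from the viewer.
        float rim = 1.0 - abs(dot(normalize(normal), normalize(view_dir)));
        gl_FragColor = texture1D(gradient, rim);
    }
    """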

I figured I would also need normal mapping to give the rim a bit of texture. To do that efficiently, I needed a utility that could convert grayscale bump maps to blue-ish normal maps. To my great delight, the GIMP normalmap plugin does exactly that, featuring a 3D preview of the texture before applying the conversion, which is just sweet.
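
What the plugin computes is essentially a finite-difference gradient of the height field, packed into RGB; in numpy terms (a from-scratch sketch, not the plugin's code):

    import numpy

    def bump_to_normal_map(height, strength=2.0):
        # Finite-difference the grayscale height field; the z component
        # dominates, which is why normal maps look blue-ish.
        dy, dx = numpy.gradient(height.astype(float))
        n = numpy.dstack((-dx * strength, -dy * strength,
                          numpy.ones(height.shape)))
        n /= numpy.sqrt((n ** 2).sum(axis=2))[:, :, None]
        return numpy.uint8((n * 0.5 + 0.5) * 255)  # pack [-1,1] into [0,255]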

I also found Kashyyyk's MySpace page, told him about my project and asked for permission to use the soundtrack non-commercially. I expected some reluctance, but to my great surprise he instantly gave me the go-ahead, and even offered an uncompressed wave file of "Teonanacatl".

Wednesday, March 24, 2010

First Steps

I started setting up a loose framework yesterday evening to make sure I got the technical side correct.

The rendering is done within a very small Python script, which sets up an OpenGL context within a GTK window, creates a GStreamer player pipeline, and uses the player's position feedback to synchronize the visuals. The track is exactly 147 BPM all the way through, and the timing was perfect from the start. I'm glad I don't have to fix anything here.

There was a slight disappointment when seeking within the track caused the position feedback to become unreliable, but I found that adding gst.SEEK_FLAG_ACCURATE to the seek call fixes these problems instantly - seek-friendly, sample-perfect, nanosecond-precise synchronization within an MP3 stream? I am impressed.
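
For the record, the whole sync logic fits in a few lines (simplified, assuming the pygst 0.10 bindings):

    import pygst
    pygst.require("0.10")
    import gst

    BPM = 147.0
    player = gst.element_factory_make("playbin", "player")

    def current_beat():
        # Position feedback comes back in nanoseconds.
        position, _format = player.query_position(gst.FORMAT_TIME)
        return position / float(gst.SECOND) / 60.0 * BPM

    def seek_to_beat(beat):
        # SEEK_FLAG_ACCURATE is what keeps the feedback sample-exact.
        target = long(beat * 60.0 / BPM * gst.SECOND)
        player.seek_simple(gst.FORMAT_TIME,
                           gst.SEEK_FLAG_FLUSH | gst.SEEK_FLAG_ACCURATE,
                           target)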

I went through the track and marked the time (in beats) of various segments that I can imagine being separate scenes.

I have some very rough ideas in my head about what to show, but they need to be put into order and made part of a larger whole. I need a script. I'm thinking about using Celtx to write the index cards.

Tuesday, March 23, 2010

Production has started.

Since the old site has been quite dead recently, I decided to revive the label as a name for a short film production site. I will use this blog to report on ongoing projects.

For a while now, I've been crazy about doing a music video for Kashyyyk - Teonanacatl, a speedy full-on psytrance track that I fell in love with instantly. I suspect that the wonderful aesthetic of this track escapes the grasp of most people, and so I want to produce something visual that fits the mood.

Plus, I want to connect with creative movie makers all over the world, and let them know that I'm here, ready to take on new projects.

So far, the plan is to write a procedural renderer using Python and OpenGL - that means coding everything that can be seen in the video, and rendering the result into a file that I'll put on YouTube. I have opened a repository in an undisclosed location, and will do a few visual tests to see if the workflow makes sense.

Ah yes: the movie will be produced using free software only. Just so I can feel like a hero.