Monday 4 February 2013

Making of: Leviathan Sun

This is the story of Leviathan Sun, a poor and helpless 4k intro for the modern-day browser, which uses the HTML5 canvas and audio APIs with JavaScript to create a generative audiovisual experience.

The quest started when I noticed I had a ticket but no entries for the Payback demoparty, which was taking place in just a few weeks in Helsinki, Finland.

I had a few leftover JavaScript effects that never got used and decided to try something out, either a 64k or a 4k. Slapping the "framework" together was quite simple, just a few copy-pastes from older, similar projects.
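For reference, that whole "framework" is little more than a canvas element and a requestAnimationFrame loop. A minimal sketch of that shape (illustrative only, not the actual Leviathan Sun source):

```javascript
// Minimal sketch of a canvas "framework" for a browser intro
// (illustrative only, not the actual Leviathan Sun source).
var canvas = document.createElement('canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);
var ctx = canvas.getContext('2d');

var start = Date.now();
function frame() {
  var t = (Date.now() - start) / 1000; // seconds since the intro started
  ctx.fillStyle = '#000';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  // ...draw the effects here, driven by t and the seed...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```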

The first bump in the road was getting some sound. One alternative was to try to get a musician to do a .ahx; that worked out reasonably well when we used it on Parsley State, but it's hard to find people willing to track in AHX in 2013. A close alternative was to convince the musician to do a .mod, but the only .mod replayer in JavaScript is buggy, so you might be shooting yourself in the foot by trying to use it. Of course there are already some softsynths ported to JavaScript, like js-sonant and sorollet. And then there was also the possibility of trying out GLSL audio, which seemed like an interesting concept, but I was running out of time and the prospect of having to learn the ropes by myself wasn't very appealing at this point. I ended up trying some bytebeat-style "bytesize music" algorithms and, after discarding most of them, finally figured out a couple of nice drone loops and some subtle glitch percussion textures.
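The general shape of that approach is to evaluate a bytebeat-style expression per sample and pour the result into a Web Audio buffer. A minimal sketch, using an illustrative formula rather than the ones actually in the intro:

```javascript
// Sketch: evaluate a bytebeat-style expression per sample and write it
// into a Web Audio buffer (illustrative formula, not the intro's).
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var actx = new AudioCtx();
var seconds = 8;
var buffer = actx.createBuffer(1, seconds * actx.sampleRate, actx.sampleRate);
var data = buffer.getChannelData(0);

for (var i = 0; i < data.length; i++) {
  var t = i >> 3;                          // slow the integer clock down a bit
  var v = t * (t >> 10 | t >> 8) & 63;     // classic bytebeat-flavoured formula
  data[i] = (v / 63) * 2 - 1;              // map 0..63 to the -1..1 float range
}

var src = actx.createBufferSource();
src.buffer = buffer;
src.loop = true;
src.connect(actx.destination);
src.start(0);
```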

The idea was to make the intro generative, meaning that it takes a seed value and the content comes out slightly different depending on that seed. I had already implemented a similar concept with the haikus in my Recycle Bin Laden web art project, so it felt obvious to connect both projects under the same roof.
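The mechanics are simple: feed the seed into a small deterministic pseudo-random generator and draw all the "random" parameters from it, so the same seed always produces the same variation. A minimal sketch (the LCG below is an assumption, not necessarily the generator used here):

```javascript
// Sketch: a tiny seeded PRNG so the same seed always yields the same
// set of "random" parameters (not necessarily the generator used here).
function makeRandom(seed) {
  var state = seed >>> 0;
  return function () {
    // 32-bit linear congruential generator (Numerical Recipes constants)
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 4294967296;             // map to [0, 1)
  };
}

var rnd = makeRandom(1337);                // the seed the intro was given
var droneTone = 110 + rnd() * 220;         // e.g. a generative drone pitch
var useStrobe = rnd() < 0.3;               // e.g. a 30% chance of strobe
```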

The original idea for the generative aspect of the audio was to have several channels of different loops, each randomly selected to be included or not, and all of them with generative ADSR envelopes. That means that out of 4 channels there was a possibility that only 2 of them would be played, with one of them getting a quick fade-in and a long fade-out while the other got neither. Additionally, the speech synth would read out the current seed value, marking the tempo of the audio. After a few tests and debug sessions it became obvious, on aesthetic grounds, that the tracks weren't really fitting together very nicely and that the ADSR trickery was mostly not working out at all. So I picked the best drone loop, turned it into a slow buildup, tuned the percussion elements to work a bit better together and added some generative parameters to the sound (such as the tone of the drone, the probability of the speech synth being played at all, etc.).
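The original per-channel plan was roughly of this shape: maybe include the channel, and if so give it a randomly drawn fade-in and fade-out via a gain node. A sketch under those assumptions (node setup and numbers are illustrative, not the intro's actual values):

```javascript
// Sketch: maybe include a channel, and give it a generative fade-in and
// fade-out through a gain node (values and structure are illustrative).
function playChannel(actx, buffer, rnd, duration) {
  if (rnd() < 0.5) return;                 // this channel might not play at all

  var src = actx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;

  var gain = actx.createGain();
  var attack = 0.5 + rnd() * 8;            // quick or slow fade in
  var release = 0.5 + rnd() * 8;           // quick or slow fade out
  var now = actx.currentTime;

  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(1, now + attack);
  gain.gain.setValueAtTime(1, now + duration - release);
  gain.gain.linearRampToValueAtTime(0, now + duration);

  src.connect(gain);
  gain.connect(actx.destination);
  src.start(now);
  src.stop(now + duration);
}
```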

The visuals were 2 similar effects overlaid on top of each other, both of them simple implementations of well-known psychedelic concentric circle effects. I added generative parametrization to some of the variables used, plus some occasional extra elements like the circles, the glitching bars and the possibility of a strobe effect.
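The core of such a concentric circle effect is just drawing rings from the outside in, with a couple of the knobs driven by the seeded generator (reusing the rnd from the earlier sketch). A rough sketch; the colours, counts and pulsing formula are illustrative, not the intro's:

```javascript
// Sketch: concentric circles pulsing over time; the ring count and pulse
// speed are picked once from the seeded generator (values illustrative).
var ringCount = 20 + Math.floor(rnd() * 30);
var pulseSpeed = 0.5 + rnd() * 2;

function drawCircles(ctx, w, h, t) {
  var cx = w / 2, cy = h / 2;
  for (var i = ringCount; i > 0; i--) {    // draw from the outside in
    var r = (i / ringCount) * Math.max(w, h) *
            (0.6 + 0.1 * Math.sin(t * pulseSpeed + i));
    ctx.fillStyle = (i % 2) ? '#000'
                            : 'hsl(' + ((t * 40 + i * 10) % 360) + ',80%,55%)';
    ctx.beginPath();
    ctx.arc(cx, cy, Math.abs(r), 0, Math.PI * 2);
    ctx.fill();
  }
}
```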

Following Murphy's law to the letter, the version that was randomly selected to play on the big screen at the party place turned out to have a horribly high-pitched drone, an extremely abusive strobe and no speech synth at all. It managed to alienate anyone from wanting to watch it to the end, forcing the orgas to censor it and causing everyone to completely miss the hypnotic build-up concept of the piece. In retrospect, it quite successfully maintained the long track record of minimalartifact releases failing miserably to get displayed as intended on a big screen at party events.

After-party comments seem to be mostly positive, which is quite nice.

One last issue worth mentioning in this report is the large amount of RAM required to build the audio buffer. Even 2 gigs sometimes aren't enough, and the browser manages to freeze. Generated audio is a lot of data, but it shouldn't choke a browser to death. It only goes to show how beta realtime usage of audio and graphics in browsers still is. I hope to see some improvements in this department; I'm not sure who to poke on the browser teams to get this investigated and added to the dev roadmap, but it sure would be nice to be able to use improved audio APIs.
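The raw numbers aren't even that dramatic: a back-of-the-envelope calculation for a few minutes of generated float audio lands in the tens of megabytes, so the freezes presumably come from intermediate copies and the overhead of building the buffer rather than the final data itself. A rough sketch, with the duration assumed:

```javascript
// Rough back-of-the-envelope: memory for the final generated audio data.
var sampleRate = 44100;          // samples per second
var seconds = 4 * 60;            // assume a roughly four-minute piece
var channels = 2;                // stereo
var bytesPerSample = 4;          // Float32

var bytes = sampleRate * seconds * channels * bytesPerSample;
console.log((bytes / (1024 * 1024)).toFixed(1) + ' MB');   // ~80.7 MB
```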
