The quest started when I noticed I had a ticket but no entries for the Payback demoparty, which was occurring in just a few weeks in Helsinki, Finland.
The idea was to make the intro generative, meaning it takes a seed value and the content differs slightly depending on that seed. I had already implemented a similar concept for haikus in my Recycle Bin Laden web art project, so it felt obvious to connect both projects under the same roof.
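The intro's actual seeding code isn't shown here, but the core trick behind this kind of generative piece can be sketched with a small seeded PRNG such as the well-known mulberry32. The parameter names below (drone tone, speech probability) are hypothetical illustrations, not the intro's real variables:

```javascript
// Minimal sketch of seed-driven generation (not the intro's actual code).
// mulberry32 is a small, widely used seeded PRNG for JavaScript.
function mulberry32(a) {
  return function () {
    a |= 0;
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed → same sequence of "random" choices, so every parameter
// derived from it (tones, probabilities, tempo) is reproducible.
const rand = mulberry32(1337);
const droneTone = 100 + rand() * 400; // hypothetical parameter
const playSpeech = rand() < 0.7;      // hypothetical probability
```

Because the whole piece is derived from one seed, any given variant can be replayed exactly by reusing that seed.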
The original idea for the generative aspect of the audio was to have several channels of different loops, each randomly selected for inclusion or not, and all of them with generative ADSR. That means that out of 4 channels there was a possibility that only 2 of them would play, with one having a quick fade-in and a long fade-out while the other had neither. Additionally, the speech synth would read out the current seed value, marking the tempo of the audio. After a few tests and debug sessions it became obvious, on aesthetic grounds, that the tracks weren't really fitting together very nicely and that the ADSR trickery was mostly not working out at all. So I picked the best drone loop, turned it into a slow buildup, tuned the percussion elements to work a bit better together, and added some generative parameters to the sound (such as the tone of the drone, the probability of the speech synth being played at all, etc.).
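For readers unfamiliar with ADSR: it shapes a channel's volume over time through attack, decay, sustain and release phases. A minimal sketch of such an envelope as a pure gain function (an assumed shape, not the intro's actual code; with the Web Audio API the same ramps would typically be scheduled on a GainNode):

```javascript
// Sketch of an ADSR gain envelope: gain as a function of time.
// Randomizing attack/release per channel is what lets one channel
// fade in fast and out slowly while another does not.
function adsrGain(t, { attack, decay, sustain, release, duration }) {
  if (t < 0 || t > duration + release) return 0;
  if (t < attack) return t / attack;               // attack: ramp up to 1
  if (t < attack + decay) {                        // decay: fall to sustain level
    const k = (t - attack) / decay;
    return 1 - k * (1 - sustain);
  }
  if (t < duration) return sustain;                // sustain: hold
  return sustain * (1 - (t - duration) / release); // release: fade out
}
```

Sampling this function per audio frame (or feeding its breakpoints to `GainNode.gain.linearRampToValueAtTime`) yields the fade-in/fade-out behavior described above.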
The visuals were 2 similar effects overlaid on top of each other, both simple implementations of classic psychedelic concentric circle effects. I added generative parametrization to some of the variables used, plus some occasional extra elements like the circles, the glitching bars and the possibility of a strobe effect.
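The concentric circle idea can be sketched as follows (an assumed approach, not the intro's actual implementation): each ring's radius is a phase offset into a repeating cycle, so the circles appear to flow endlessly outward from the center.

```javascript
// Sketch of an endlessly expanding concentric-circle effect:
// compute the ring radii for a given time, then stroke one arc
// per radius on a <canvas> every animation frame.
function ringRadii(time, { count, maxRadius, speed }) {
  const radii = [];
  for (let i = 0; i < count; i++) {
    // each ring is offset by an equal fraction of the cycle;
    // the modulo wraps rings back to the center as they expand
    const phase = (i / count + time * speed) % 1;
    radii.push(phase * maxRadius);
  }
  return radii;
}
```

Overlaying two instances with slightly different `speed` and `count` values (themselves derived from the seed) gives the layered, drifting look described above.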
Following Murphy's law to the letter, the version randomly selected to play on the big screen at the partyplace featured a horribly high-pitched drone with an extremely abusive strobe and no speech synth at all. It managed to alienate anyone from wanting to watch it to the end, forcing the orgas to censor it and causing everyone to completely miss the hypnotizing build-up concept of the piece. In retrospect, it quite successfully maintained the long track record of minimalartifact releases failing miserably to get displayed as intended on a big screen at party events.
After-party comments seem to be mostly positive, which is quite nice.
One last issue worth mentioning in this report is the heavy RAM required to load the audio buffer. Even 2 GB sometimes isn't enough, and the page manages to freeze the browser. Generated audio is a lot of data, but it shouldn't choke a browser to death. It only goes to show how beta the real-time use of audio and graphics in browsers still is. I hope to see some improvements in this department; I'm not sure who to poke on the browser teams to get this investigated and added to a dev roadmap, but it sure would be nice to be able to use improved audio APIs.
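To put rough numbers on why generated audio eats RAM (my own back-of-the-envelope arithmetic, not a measurement from the intro): a decoded Web Audio buffer holds uncompressed Float32 samples, 4 bytes each, per channel.

```javascript
// Rough size estimate for a decoded audio buffer:
// seconds × sample rate × channels × 4 bytes per Float32 sample.
function audioBufferBytes(seconds, sampleRate = 44100, channels = 2) {
  return seconds * sampleRate * channels * 4;
}

// e.g. ten minutes of stereo audio at 44.1 kHz:
const mb = audioBufferBytes(600) / (1024 * 1024); // ≈ 202 MB per copy
```

A couple of hundred megabytes per copy is manageable on its own, but intermediate copies made while generating and mixing the data can multiply that figure quickly, which would explain the freezes even on a machine with 2 GB of RAM.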