(Or, The Death of a Million Scraps of Paper Containing Unreadable Notes and Cryptic Scribbles)
Written by Rob Bridgett.
For some time now, a common goal of sound content designers has been toolsets and pipelines that enable not only real-time tuning, but also real-time sound replacement. At the heart of the yearning for this technology is an attempt to get the kind of iteration time and critical workflow that film sound designers and mixers already have (press play, preview sync sound along with picture, audition new sounds, make changes, review, and so on). This gap in iteration time is one of the fundamental differences in production between video games and movies, at least in the editorial and post-production context, and it remains one of the more painful and laborious bottlenecks in the iteration process for game developers.
The notion of an ‘update’ button that is able to push delta content across to a running piece of software is one I’ve been anticipating for a while now. Again, the question is not whether this can be done, or whether it is practical at this point in the development of our technology, but how well it is executed in the software and supporting pipeline; that is what will dictate whether a tool with features like this provides the grassroots support for change in entrenched development habits.
I believe sound artists will be in a better position to preview very quick changes for a variety of reviews, testing, and on-the-fly discussions with creative directors, producers and other stakeholders. This potentially also changes the political landscape for sound’s role inside the game development process, and we need to be ready for a more outward-facing role on the team to take advantage of this.
Having the audio engine and interface connected to the game running on its host platform is of course standard across all audio tools today. However, the notion of ‘hot-swapping’ source sound files, and even re-arranging the inner guts of the code that triggers sound events as it runs, is a far more complex issue. It probably has more parallels to a live organ transplant than you’d like to think about. But the problems associated with not having this kind of environment continue to have a significant impact on our creative endeavors and working lives.
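To make the idea concrete, here is a minimal sketch of what hot-swapping delta content into a running engine might look like. All of the names here (`SoundBank`, `apply_delta`, the event names) are hypothetical illustrations, not the API of any real audio middleware; the point is simply that only the changed entries are replaced, and the simulation never stops.

```python
# Hypothetical sketch: hot-swapping a source sound in a running engine.
# SoundBank and apply_delta are illustrative names, not a real middleware API.

class SoundBank:
    """In-memory map of event name -> sound data, standing in for a loaded bank."""

    def __init__(self, sounds):
        self.sounds = dict(sounds)

    def trigger(self, event):
        # The engine plays whatever data is current at trigger time.
        return self.sounds[event]

    def apply_delta(self, delta):
        # Replace only the changed entries; everything else keeps playing untouched.
        self.sounds.update(delta)


bank = SoundBank({"footstep": b"footstep_old.wav", "ambience": b"wind.wav"})

# The 'update' button: push just the changed sound across to the live bank.
bank.apply_delta({"footstep": b"footstep_new.wav"})
```

The design choice worth noticing is that the update is a *delta*, not a rebuild: the ambience entry is untouched, and the next `trigger("footstep")` simply picks up the new data.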
Whenever I have run a final mix on a game, or sat down for a sound review on a level or feature, we invariably end up with two lists. Well, one list really. Changes that we can preview and make immediately are less of an issue, since these can get done to a satisfactory level in the moment: changes to state-based mix snapshots, changes to static volume or pitch, filter tweaks, reverb send level changes, or other DSP effect alterations. The other list, the real ‘list’, is a shopping cart of things we just can’t do in the moment, things that require us to stop the simulation, rebuild content, and halt the flow. Source sound effect changes, brand new state-based triggers for sounds or new snapshots, things to ‘try out’ that I know can’t be done without designing a new replacement .wav sound, rebuilding the game, and finding my way back to the place in the game where the sound lives. That can be a very long and very tedious process (often involving cheats and command-line script spew), and not the kind of thing you’ll ever be able to do with a producer sitting there awkwardly checking their BlackBerry for a couple of hours while you make small talk about the Star Wars reboot*.
The creation of that list is of course essential for delegating and recapping these tasks to the wider sound team, but I lament the missed opportunity to try out some of the simple suggestions right there in the room and quickly decide whether they are good suggestions or not. Coming back to review notes cold, perhaps a week later, you lose the idea that was had in the moment, and often the point is lost forever… along with the opportunity. In fact, working from paper lists loses almost all of the original inspiration behind the suggestion to change an effect or sound that was there in the room at that moment.
I always come back to the idealistic, Utopian scenario (at least through the eyes of a game sound designer) of a motion picture final mix, where the mixers and clients are all present, along with effects editors, to take in the spirit and inspiration of a discussion and almost immediately try out the idea and act on an opportunity. Everyone is present, everyone gets to hear the idea and preview it, and everyone gets to agree or disagree and move on.
So, it is this second list, this cursed ‘to-do’ list of items that will take days’ worth of work, delegation, and review, that I think could be the new focus of attention for our technical pipeline endeavors in interactive sound production. This is of course not just about commissioning this kind of technology, but about using it to have these kinds of collaborative, spontaneous, experimentation-based meetings and reviews; they are two equally essential halves of the process.
As we move ever forwards towards this run-time production environment, where sounds can be manipulated, replaced, re-sequenced, removed, ripped out, re-imagined, and re-configured without the need to even pause the game simulation, we will find ourselves closer to the place where film sound design and editorial teams find themselves (with new problems and new challenges). With this, I believe the way we make games together will change.
The notion that there is always a chair empty in my sound studio, ready for a creative collaboration to happen and for the focus of two or more sets of ears and eyes, is an enticing one, and one which I am very excited to try to move towards.
* I thought this reference might date the article, but then I realized there would always be some kind of Star Wars reboot happening for the rest of the human future.
I’m reminded of the ability film mixers have on the larger mix stages to have pretty much the entire edit and mix laid out before them, able to be drilled down into and tweaked at will.
The game development workflow equivalent will be when you have, say, an audio director playing through an instance of the game, to which everyone on the sound team can connect and profile (in the Wwise vernacular), seeing the same video and hearing the director’s suggestions on the fly. Ideally the whole thing is recorded and immediately ‘scrubbable’ for reference, with the data changed in that instance able to be incorporated straight into the live build of the game, but also everything *saved out* – the comments, the video, the audio – so it can be played back offline and referred to for those longer tasks when sounds need wholesale changes.
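A minimal sketch of the ‘recorded and scrubbable’ part of that session might look like the following. The `ReviewSession` class and its methods are hypothetical, and ‘scrubbing’ here is reduced to looking up the changes logged up to a given time; a real tool would of course also capture video and audio alongside the notes.

```python
# Hypothetical sketch of a recorded review session: every suggestion made
# during the playthrough is logged with a timestamp so the session can be
# 'scrubbed' and the notes saved out afterwards. Names are illustrative.
import time


class ReviewSession:
    def __init__(self):
        self.events = []  # (timestamp_seconds, note) pairs, in order

    def log(self, note, t=None):
        """Record a suggestion at time t (defaults to now)."""
        self.events.append((t if t is not None else time.monotonic(), note))

    def changes_before(self, t):
        """'Scrub' back to time t: return everything logged up to that point."""
        return [note for ts, note in self.events if ts <= t]


session = ReviewSession()
session.log("swap footstep_03.wav for something heavier", t=1.0)
session.log("duck music -3 dB under dialogue", t=2.5)
```

The saved-out list of `(timestamp, note)` pairs is exactly what replaces the paper to-do list: it can be replayed offline against the recorded video for the longer, wholesale-change tasks.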
Yes, instead of those bloody to-do lists on paper… ‘What does that say again?’ 🙂