1. Depth released on Steam. Dive in!


    Engine Audio is finally allowed to discuss our involvement in a game called Depth. Depth is an underwater multiplayer thrill ride of the highest order. After many months of in-game testing and development, it still makes me jump out of my seat.

    Depth is the brainchild of Alex Quick and Digital Confectioners. The gameplay pits teams of gold-thirsty divers against blood-thirsty (and downright cruel) sharks. Matches are set up as 4 vs. 2 to balance out the mechanics. We will let you take a guess as to which side needs the extra numbers.

    The Divers must defend themselves from the great predators of the sea while doing their best to protect their submersible safe-cracking robot, S.T.E.V.E. Divers who dare to gather sunken treasure are rewarded with bragging rights and enough cash to upgrade their loadout. Believe me, it is in your own self-interest to upgrade your gear. The Sharks hunt their prey in the deep with heightened and upgradeable senses. Oh, and of course a lot of razor-sharp teeth.

    Engine Audio became involved in the project in 2010 after watching some of the initial prototype captures. It was obvious that Alex was developing an ambitious game with a lot of potential for audio design. We contacted Alex to discuss working together and jumped right into the deep end.

    The game had gone through several iterations when Alex decided to put development on indefinite hiatus. He then began a relationship with New Zealand developers Digital Confectioners, who brought a depth of experience with the Unreal engine, and together they revived and transformed the game into what it is today.

    Depth was created with a team of great developers including Alex Quick, Digital Confectioners, Super Genius, LevelDesigner and of course audio and implementation by Engine Audio.

    Thank you for checking out this announcement, and thank you for purchasing Depth. We will have a series of behind the scenes articles relating to the audio development coming soon.

    Audio Highlights:

    • Audiokinetic’s Wwise integrated into Unreal 3 engine
    • 14 high caliber weapons
    • Separate and unique audio mix heard from shark or diver perspective
    • First-person/third-person (1p/3p) multiplayer gameplay created new audio challenges
    • Occlusion through the rocky Olmec temples or sunken metal ships

    Thanks to:


  3. Building a custom Kismet node: MusicBPM

    In Part 1 & Part 2 of our series we discussed how you might build and use a horizontal or vertical musical system in UDK. A horizontal system can be built relatively easily from the default nodes provided with Kismet. A vertical system is more difficult, stemming from the need to track bars and beats, keep multiple musical loops in sync, and create timed fades.

    In order to have an intelligent music system that can keep multiple music files in sync, you need a system that accounts for the number of bars in a loop and the tempo/BPM of the music. For this purpose I have created a custom Kismet node that uses a timer to loop a soundcue after a certain amount of time. This creates a seamless loop as the audio file plays back. The length of the loop is determined by the BPM and the number of bars in the loop. These values are set as inputs to the Kismet node, which allows them to be easily set in Kismet ahead of time, or on the fly while the game is running.
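
    The timing math the node relies on is simple enough to sketch outside of UnrealScript. Here is a minimal Python illustration (Python is used only for clarity; the actual node is UnrealScript, and the 4-beats-per-bar time signature is an assumption):

```python
def loop_length_seconds(bpm, bars, beats_per_bar=4):
    """Length of a music loop in seconds.

    bpm: tempo of the loop; bars: number of bars in the loop.
    beats_per_bar is assumed to be 4 (a 4/4 time signature).
    """
    return bars * beats_per_bar * (60.0 / bpm)

# A 4-bar loop in 4/4 at 120 BPM lasts exactly 8 seconds.
print(loop_length_seconds(120, 4))  # 8.0
```

    Feeding BPM and # of Bars into this formula gives the timer length the node counts down before retriggering the soundcue.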

    Download the MusicBPM.zip here

    Instructions to use MusicBPM.uc as a Kismet node in UDK:

    • Unzip the MusicBPM folder into \Development\Src folder of your UDK install

    • The MusicBPM folder contains:

      MusicBPM (folder)
          |_Classes (folder)
              |_SeqAct_MusicBPM.uc (unrealscript file)

    • Open up DefaultEngine.ini inside the \UDKGame\Config directory with Notepad (e.g. C:\Projects\UDK-2011-08\UDKGame\Config\DefaultEngine.ini)

    • Search for EditPackages

    • Add +EditPackages=MusicBPM

    • Rebuild scripts from the UDKFrontEnd or just open the UDK Editor

    • If the editor asks if you want to rebuild scripts click YES

    Using MusicBPM in a vertical music system

    I developed the MusicBPM Kismet node as a way to create a music system in UDK that is aware of the tempo of the music and can react in a way that is more intuitive for a musician.

    Multiple MusicBPM Kismet nodes are meant to work together to create a vertical music system that can play, loop, and mix music according to the action occurring in the game environment. All nodes start playing at the beginning of the level using a Level Loaded Kismet node. If the audio is meant to be heard at the beginning of the level, we feed the signal to the ‘Start’ input on the MusicBPM node to begin playback at full volume. If the audio is not meant to be heard from the beginning, use the ‘Mute’ input instead. This starts playing the audio in sync with the other files, but with the volume set to zero.

    Any other Kismet node can be used to trigger MusicBPM. You just need to determine what game event you want to use to start the music system. It could be a location in the game world, health, time left in level, or any number of things. For the purposes of this example we will rely on the player’s location in the game world to trigger our vertical musical arrangement. In Kismet we can use a TriggerVolume to determine when a player has entered a certain area. This TriggerVolume will send a signal to the Fade input of the next MusicBPM node. The volume of the AudioComponent will increase from zero over a time set by Bars per Fade. To create a crossfade between two pieces of music set the output of the TriggerVolume to the Fade input of two different MusicBPM nodes. The node at full volume will start fading down to zero while the node at zero volume will fade up to full volume. This creates a smooth linear crossfade.

    The MusicBPM node has a series of inputs that control how the soundcue plays back:
    Start – begins playback of the soundcue, volume at 100%
    Stop – ends playback of the soundcue
    Mute – begins playback of the soundcue, volume at 0%. If already playing, toggles the volume of the soundcue between 0% and 100%
    Fade – increases or decreases the volume over a certain number of bars

    The outputs give feedback to the audio system:
    Out – this output fires when the node is first activated
    Finished – this output fires when the soundcue has reached the end of the file

    The node has input variables with information on the playback of the audio:
    BPM – the tempo of the music loop
    # of Bars – the number of bars in the music loop
    Bars per Fade – number of bars used to fade the volume in/out

    How it works

    The MusicBPM node is based on a timer. It uses the values of ‘BPM’ and ‘# of Bars’ to determine the length of the audio loop in seconds. The result is then added to WorldInfo.RealTimeSeconds to determine when the music will loop. On every frame, or tick, the node checks whether this time has passed and, if so, plays the audio file again. Along with checking the time, the node checks its inputs at every tick to see if a signal has been received to stop, mute, or fade the audio. Once a signal has been received, the node will either mute the AudioComponent or fade it in or out according to its current state. When the node receives a signal to stop, it deactivates until it is started up again.
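
    As a rough model of that timer logic, here is a hedged Python sketch (the class and method names are invented for illustration; the real node does this in UnrealScript against WorldInfo.RealTimeSeconds):

```python
class MusicLooper:
    """Toy model of the MusicBPM timer: retrigger the loop when its deadline passes."""

    def __init__(self, bpm, bars, beats_per_bar=4):
        self.loop_len = bars * beats_per_bar * (60.0 / bpm)  # loop length in seconds
        self.next_loop_time = None
        self.loops_played = 0

    def start(self, now):
        # Mirrors adding the loop length to the current game time.
        self.next_loop_time = now + self.loop_len
        self.loops_played = 1  # the first playback starts immediately

    def tick(self, now):
        # Called every frame: if the deadline has passed, play the file
        # again and schedule the next loop point.
        if self.next_loop_time is not None and now >= self.next_loop_time:
            self.next_loop_time += self.loop_len
            self.loops_played += 1
```

    With a 4-bar loop at 120 BPM, start(0.0) schedules the first retrigger at 8.0 seconds; ticks before that time do nothing, and the tick at or after 8.0 retriggers the loop and schedules the next one at 16.0.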

    Download MusicBPM.zip and try it out for yourself. I would love to see what you do with it. If you have any questions contact me chris.at.engineaudio.com or post a comment below.

    Click here to read the MusicBPM UnrealScript code


  5. UDK Vertical Music System using Kismet

    Vertical vs. Horizontal

    The choice of a horizontal or vertical music system often comes down to the difference between linear and non-linear styles of gameplay. Last time, we introduced the advantages of using a horizontal music system in a UDK game. A horizontal approach is best used when the composer knows how the music system will transition from one musical loop to the next. A particular piece of music loops as many times as it needs to, then transitions to the next loop at a predetermined event; for example, all enemies dead, player killed, or time running out. A horizontal system plays only one stereo music track at a time. Transitions from one piece of music to the next can happen immediately, or over time using a fade.

    In contrast, a vertical music system is best utilized when the outcome or timing of any particular cue, scene, or part of the game is unknown. This is often the approach used in open-world adventure games where the player can take a unique path through the storyline, so the game’s soundtrack needs to change in response to the player’s actions or other game parameters. In open-world games the player moves from one town or location on the map to the next, and each area can have a specific piece of music attached to it. As the player enters an area, the music system transitions to the correct piece of music by fading in a track that was already playing in the background. While exploring the game world there can be invisible “lines in the sand” that trigger the correct piece of music for each new area of the map.

    A vertical system has two main ways of introducing a new layer of music: either immediately or fading over time. The simplest system will have the music play immediately, as soon as it is triggered. There is a lot of use for a system that triggers immediately, mainly for the element of surprise or attack. This technique can be used in horror games to build the intensity and tension of a piece and scare the player. A vertical approach can also introduce or remove a particular element of the music soundtrack when necessary, according to choices made by the player. Enemies attack, monsters jump out at you, all triggering music and sounds immediately, often without consideration of the tempo or BPM.

    Vertical systems are well utilized during periods of exploration or during a battle scene, when the length of time a particular piece of music needs to play is unknown. The player can explore different areas of the game world and trigger different pieces of music depending on their location on the game map. Distance is also a deciding factor in an exploration audio system: how close or far the player is from an object can determine which piece of music is playing and how loud it is heard.

    The truth is many modern games use a combination of both horizontal and vertical approaches. Through a combination of timed music cues, loops, stingers and transitions, a music soundtrack can be molded to fit the particular style of gameplay at any point during the experience.

    Setup for Vertical System

    Think of a vertical music system as being similar to a multitrack DAW session. Each track contains one or more elements of the overall mix. The tracks have individual volume controls allowing different instrumentation to be brought in and out of the mix as needed. Vertical arrangements are used to play one or more tracks of music simultaneously, typically layers of the same composition. In UDK this means we need to keep multiple soundcues running in sync together, along with control over their playback and volume.

    One of the more standard approaches used in a vertical system is to have the full soundtrack split out into multiple stereo music stems, divided by similar instrumentation. Each of these stems is an audio loop of the same length, in the same musical key and at the same BPM. The music system starts playing all the loops simultaneously; once the end of the file is reached, the music system loops back around to the top. Since some layers of the mix are not meant to be heard from the start, those loops play in sync with the rest of the music but their volumes are set to zero, muting them in the mix. Once a new layer of the mix is needed, the music system can immediately unmute it or fade it in over time. In Kismet, we need to be able to control multiple soundcues, triggering them to play back in sync to create the full loop. We also need to set and fade their volumes at the correct time to create the audio mix.

    It is best to keep all audio files the same length, with the same number of bars. If a particular loop is going to be shorter than the others, its length should still be a musical division of the overall piece. For example, if the main loop is 16 bars long, the other loops should be a division of that (8, 4, 2, 1). All cues need to loop seamlessly with no pops, clicks, or noticeable changes when the loop restarts. The music system operates as one big loop that can fade or unmute different layers to build the mix, determined by a variety of factors.

    Fading two pieces of music together can introduce a number of issues with timing and sync. Anyone who has listened to a bad DJ has heard this before. Are the two pieces of music in sync? Did they start at the same time? If one started later, did it start on the next downbeat or somewhere out of sync? When should the fade start? How long will the fade be? The problems introduced by these types of questions require a more intelligent music system to create a successful game soundtrack.

    A smart music system like this takes into account bars, beats, and the tempo of the music. The game knows where we are in a music loop (bars & beats), and how fast we are going (tempo). It can then use this information to make intelligent decisions about how and when to transition the music. If it knows when the next bar or beat is and how long a fade should be, it can properly fade any piece of music, in or out, synced to the music.
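
    For example, finding the next bar boundary needs only the elapsed time and the bar length. A quick Python sketch (assuming a 4/4 time signature; the real system would do this arithmetic inside the engine):

```python
def time_to_next_bar(elapsed, bpm, beats_per_bar=4):
    """Seconds until the next bar boundary, given seconds since the loop started."""
    bar_len = beats_per_bar * (60.0 / bpm)  # length of one bar at this tempo
    return bar_len - (elapsed % bar_len)

# At 120 BPM a bar lasts 2 s; 3.5 s into the loop, the next bar is 0.5 s away.
print(time_to_next_bar(3.5, 120))  # 0.5
```

    Scheduling a fade to begin at that boundary, and to last Bars per Fade times the bar length, is what keeps the transition musical rather than arbitrary.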

    Vertical System in Kismet

    The vertical music system for our example will use many different pieces of music that will crossfade in and out as the player enters new areas. TriggerVolumes are located at different points in the map to change the music system at the right time. When the game starts all PlaySound Kismet nodes are triggered to start playing at the same time. All tracks, except one, are muted leaving only the first to play at the beginning of the game. Typically vertical music systems have an ambient track that plays at all times or an intro track that plays only at the beginning.

    As the player moves throughout the world, they enter new locations and trigger different pieces of music. That music will fade in over a predetermined amount of time. The new layer of music remains in sync with the rest of the mix because it has been playing along with the other music tracks from the beginning. We need the functionality to play, stop, and fade a music loop either immediately or in sync with the music. We also need to consider which game events will be used to trigger the music system.

    The challenge when developing a vertical music system in Kismet is the lack of any pre-existing node with any reference to tempo, BPM, bars, or beats. By itself Kismet has no internal musical clock; it is tied to the framerate of the game. We have to build one instead. If we know the BPM and the number of bars in an audio file, we can determine the length of the loop. Keeping sample-accurate playback of audio loops is a challenge in a toolset that isn’t built for music: the game clock is typically based on a variable framerate, not bars and beats. Soundcues have the internal ability to loop, but without an external clock there is no way to keep more than one in sync. If we use a timer, we can trigger two or more files of the same length to loop at the same time. If we don’t want them all to be heard at the same time, we have to find a way to fade the audio in and out.

    We can imagine a custom Kismet node that can play and loop a single soundcue, understands and uses info like BPM and bar length, and can fade audio up and down. If we trigger two or more of these nodes at the same time, we should be able to keep them in sync with an external timer. Assuming the length of the timer equals the length of the audio files, the files should loop in sync. If one node triggers at full volume and the others are muted, we should be able to crossfade two pieces of music, with control over when the fade happens and how long it lasts.

    In Kismet we will need to create a custom node that can load up any soundcue then attach it to an AudioComponent for playback. The AudioComponent can control the audio playback, looping, and volume. This Kismet node has two outputs. ‘Finished’ is fired every time the end of the file is reached, ‘Out’ fires when the Kismet node is first activated.

    This Kismet node will have four inputs to control playback of the audio. ‘Play’ starts the file playing with the volume at 100%, ‘Stop’ ends playback of the file, ‘Mute’ toggles the volume of the AudioComponent between 0% and 100%, and ‘Start Mute’ begins playback of the file with the volume set to 0%.

    This Kismet node also takes in a few parameters. Bars per Measure is a float variable holding the number of bars in the musical loop, Beats per Minute (BPM) is the tempo of the music, also represented as a float, and Bars per Fade is the number of bars to use while the audio fades in or out.

    The real difficulty is making the music system smart enough to keep track of the tempo of the music and the number of bars in a loop. With a bit of math we can determine how long a music loop is from a pre-determined tempo and number of bars.

    In our next article we will discuss how to actually build this custom Kismet node that can track musical parameters like volume, bars, and BPM. If you have any questions contact me chris.at.engineaudio.com or post a comment below.

  7. UDK Horizontal Music System using Kismet

    There is a need for a custom music solution when using the Unreal Development Kit. The default music system, known as UTMapMusicInfo, is limited and cannot function as a complex music system. Using Kismet instead provides a strong visual scripting language with many possibilities for triggering sound.

    Horizontal and vertical arrangements are well used techniques for creating music systems for games. This article will focus specifically on a horizontal setup. We will explore the benefits of vertical arrangement in a future article.

    The best way to visualize a horizontal arrangement is to think of a single stereo track playing back in a DAW. This track can play only one file at a time, and each file has to contain the entire stereo mix intended to be heard. Each individual piece of music needs to be completely independent of the others because only one piece is heard at a time. For the purposes of a game the music needs to be able to loop endlessly, waiting for a specific player action to transition into the next piece.

    Horizontal arrangement techniques have been used since the dawn of game audio. The earliest gaming systems generated the musical score on the fly with a limited number of channels and limited polyphony. Once music was sampled and streamed off a disc, only one stream could play at a time, necessitating the use of a horizontal arrangement. A typical setup would have the music system changing according to different conditions or game states. Once a condition was met, the music system would stop playing the first piece of music and move on to the next. Commonly used game states include: health remaining, power-ups, time running out, ambient, action, death, winning, and losing.

    Horizontal music systems are still a very common approach to playing back music in games. Arcade games, casual games, and others with a linear style of gameplay often necessitate a horizontal setup. Mobile games are often limited by hardware restrictions and a horizontal setup can offer the best solution.

    When considering a horizontal music system it is important to determine how one piece of music will transition into the next. Transitions can happen immediately, at the end of the current musical loop, or after a certain number of beats. In addition, transitions happen either as a hard cut from one piece to the next or as a crossfade over time. For this example, each musical transition happens immediately once the previous cue has finished. This maintains the linear nature of the music and keeps everything on time and in sync.

    Looping music in Kismet

    In the UDK there are a couple of ways to loop a soundcue. The first is using a ‘Looping’ soundnode directly in the soundcue itself. This is best for ambient level sounds where you need a looping sound but don’t need the control of Kismet. The main problem with the ‘Looping’ node is that you don’t have much control over how the file plays back dynamically in game.

    In Kismet you can route the ‘Finished’ output back to the ‘Play’ input on a PlaySound node. This triggers the soundcue to play again once it has reached the end of the file. You can then use Gate nodes to control playback. As the player triggers different events in the game, one Gate closes and another opens, redirecting the ‘Finished’ output signal to the ‘Play’ input of the next PlaySound node.

    Description of project

    I am using the Racer Starter Kit (link here) provided by Epic Games to create and test my horizontal music system. It is one of their UDK Development Gems and contains everything needed to get started with a simple game project. It is a linear race course that uses a map based on the EpicCitadel map provided with the UDK. There is a series of Touch Triggers placed throughout the race course. I use these as checkpoints, and the triggers move the music system forward as the player drives through each one. You can use any number of checkpoints to trigger the music system.

    This horizontal music setup is not limited to linear checkpoints or triggers. Any number of parameters or game states could drive the music system. Basically, if it has an output in Kismet it can be hooked into this system.

    Here is how the logic flows through the Horizontal music system:

    1. When the player spawns in the level, PlayerSpawned triggers the first PlaySound to start playing. It also sends a stop message to all other PlaySounds in case they are still playing.

    2. When this first PlaySound is done playing it fires the FINISHED output.

    3. This feeds into the input of two Gates. The first defaults to open and triggers the first PlaySound again; the second defaults to closed and triggers the second PlaySound.

    4. The first Gate remains open, continually looping the first PlaySound, until the Touch Trigger is used and closes the gate preventing the first PlaySound from looping again.

    5. The second Gate is triggered to open by the Touch Trigger. This allows the first PlaySound, once finished, to trigger the second PlaySound to start playing, thus transitioning seamlessly from one musical loop to the next.

    6. The second PlaySound is also connected to another similar chain of Gates and Triggers, continuing through the musical progression as needed.
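
    The gate logic in steps 3–5 can be modeled in a few lines. This Python sketch is only an analogy for the Kismet graph (the names are invented for illustration); it shows how flipping the two gates reroutes the ‘Finished’ signal:

```python
class Gate:
    """Minimal stand-in for a Kismet Gate: passes a signal only while open."""
    def __init__(self, is_open):
        self.is_open = is_open

def on_finished(routes):
    """Route a PlaySound 'Finished' signal through (gate, target) pairs."""
    return [target for gate, target in routes if gate.is_open]

loop_gate = Gate(True)    # defaults to open: keeps the first track looping
next_gate = Gate(False)   # defaults to closed: blocks the second track

routes = [(loop_gate, "PlaySound 1"), (next_gate, "PlaySound 2")]
print(on_finished(routes))  # ['PlaySound 1']

# The checkpoint Touch Trigger closes the first gate and opens the second.
loop_gate.is_open, next_gate.is_open = False, True
print(on_finished(routes))  # ['PlaySound 2']
```

    Each new checkpoint repeats the same flip on the next pair of gates, which is all the "horizontal" progression really is.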

    Next time we will discuss a vertical Music System. If you have any questions contact me chris.at.engineaudio.com or post a comment below.

  9. Using the PT Frequency Shifter to turn guns into randomized lasers

    Designingsound.org is one of my all-time favorite sites. Since they declared January 2013 “Plug-In month”, I thought this was a great opportunity to share one of my favorite little audio tools with all of you.

    This plug-in does exactly what its name states, but it has a twist that I absolutely love. If you have not yet used it, then allow me to introduce you to the “Frequency Shifter”. You will find this little gem as part of the AIR bundle standard with Pro Tools 8 and higher.




    I swim in the dark and mysterious water that is game audio. This means that when I produce a sound effect I am often tasked with creating at least half a dozen variations of it. Why isn’t one sound variation enough you ask? Well if you have ever played a Sci-Fi style video game, then you know how often you are likely to hear the same laser gun being fired. Audible repetition is something we try our best to avoid, lest the illusion of the game world be broken. So in that design spirit, let me show you how easily you can create multiple variations of a laser gun with this plug-in without automating a single parameter.

    First I need an audio element to mangle. I will start with a recording of a 9 mm automatic I recently made at an indoor gun range. Normally indoor firearms recordings are not as useful as their outdoor brethren; however, in the case of a laser gun I have found that they work quite well as a base design element. A gun fired at an indoor range produces a tremendous amount of reverberation and low-frequency content, and once that noise floor is heavily modulated you can expect some interesting results. Here is an example of the dry indoor gunfire.



    Below is my description of the two primary functions for the plug-in.

    The Frequency knob allows you to control the amount of frequency shift applied; I couldn’t find any more information than that. It sounds very similar to modulating pitch with a low-frequency oscillator (LFO).



    The Shifter knob is the real star of this show. It allows you to choose the direction you want the frequency to be shifted, behaving basically like a frequency arpeggio. I chose the down mode for my example because it somewhat mimics another laser-gun creation method I learned some years ago: take a synthesized tone and automate a filter cut that starts around 3 kHz at the beginning of the tone and drops to 20 Hz at its end. This plug-in’s Shifter knob essentially allows me to skip the automation step and force the frequency to shift down.
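
    That older sweep trick can be approximated in code. This is a hedged Python sketch using only the standard library, not a model of the AIR plug-in itself: it renders a sine tone whose frequency glides exponentially from 3 kHz down to 20 Hz, which is the same downward gesture the Shifter knob produces.

```python
import math

def laser_sweep(duration=0.4, start_hz=3000.0, end_hz=20.0, sample_rate=44100):
    """Mono float samples in [-1, 1] of a downward-swept sine 'laser' tone."""
    n = int(duration * sample_rate)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / (n - 1)  # 0..1 across the sweep
        # An exponential glide from start_hz down to end_hz sounds smoother
        # than a linear one for this kind of effect.
        freq = start_hz * (end_hz / start_hz) ** t
        phase += 2 * math.pi * freq / sample_rate
        samples.append(math.sin(phase))
    return samples

tone = laser_sweep(0.1)
print(len(tone))  # 4410 samples for a 0.1 s burst at 44.1 kHz
```

    Randomizing the start frequency or duration per shot would give the same kind of per-fire variation the plug-in provides for free.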



    After inserting the Frequency Shifter onto the handgun track, I could hear that I needed a bit more noise. I created another track and inserted a signal generator set to pink noise. Next came a noise gate set to open with a key input from the main gunfire. Always remember to tame the ADSR of your support layers. In the video you will hear how each pass produces a unique variation.




    There are an infinite number of ways to continue building on this process, especially when you get creative with your source elements. With some knob fiddling, the Frequency Shifter can generate anything from a simple effect to one with serious modulation.


    Now go and turn some knobs!

    That’s what they are there for.







