Audio Technical Terminology

Introduction

For our game development project, I will need to create audio for the game, so I need a better understanding of what audio is and how to create it. For this I will be doing a workshop with Lee.

Compressed & Uncompressed Audio

In terms of audio, uncompressed audio is the higher quality option because it keeps all of the information captured in the original samples. Compressed audio, on the other hand, throws some of that information away to save space, which is why it loses quality.

Examples of uncompressed audio file types are .WAV (Waveform Audio File) and .AIFF (Audio Interchange File Format). Common compressed audio formats include MP3, AAC and OGG (ZIP and JPEG are compressed formats too, but for archives and images rather than audio). The reason people use compressed files is that they are much smaller, which means they can be put online and sent to other people more easily.
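
To make the size difference concrete, here is a minimal C++ sketch (using example values I picked, not figures from the workshop) that estimates how big an uncompressed recording would be from its sample rate, bit depth, channel count and length:

```cpp
#include <cstdio>

int main() {
    // Example values: CD-quality stereo audio, one minute long.
    const double sampleRate = 44100.0;  // samples per second
    const int    bitDepth   = 16;       // bits per sample
    const int    channels   = 2;        // stereo
    const double seconds    = 60.0;

    // Uncompressed size = samples/sec x bytes per sample x channels x duration.
    const double bytes = sampleRate * (bitDepth / 8.0) * channels * seconds;
    std::printf("Uncompressed size: about %.1f MB\n", bytes / (1024.0 * 1024.0));
    // Roughly 10 MB for one minute, which is why compressed formats are used
    // when files need to be uploaded or sent to other people.
    return 0;
}
```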

Sample Rates

Sample rates come in multiple different values, from as high as 192 kHz all the way down to 6 kHz. 44.1 kHz is the standard rate to record sound at and is used by most CDs. When we create our project, we don't want to go higher than 48 kHz, and nothing below 44.1 kHz.

Note: kHz = kilohertz, i.e. thousands of samples per second (this is the sampling frequency of the audio).

Sample rates describe how much information is captured and how quickly it is captured. Audio can be recorded in two ways:
Analogue: captures the audio as a continuous curve
Digital: records the audio as digital bits that store the audio information

In short, the sample rate is the rate at which sound is taken into the recording device.
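
To picture the analogue/digital difference in code, here is a small sketch (the 440 Hz tone and the lengths are just values I picked for illustration) that reads a continuous sine "curve" at 44.1 kHz and stores each reading as a 16-bit digital value, the way a recorder would:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const double kPi        = 3.14159265358979323846;
    const double sampleRate = 44100.0;   // samples taken per second
    const double frequency  = 440.0;     // an A4 test tone (example value)
    const double seconds    = 0.01;      // a very short snippet for the demo

    const std::size_t totalSamples =
        static_cast<std::size_t>(sampleRate * seconds);

    std::vector<std::int16_t> samples;   // the "digital bits" side: 16-bit values
    samples.reserve(totalSamples);

    for (std::size_t n = 0; n < totalSamples; ++n) {
        // The "analogue" side: a continuous sine curve evaluated at time t.
        const double t     = static_cast<double>(n) / sampleRate;
        const double curve = std::sin(2.0 * kPi * frequency * t);

        // The digital reading: the curve measured and rounded to a 16-bit number.
        samples.push_back(static_cast<std::int16_t>(curve * 32767.0));
    }

    std::printf("Captured %zu samples for %.2f seconds of audio\n",
                samples.size(), seconds);
    return 0;
}
```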

Bit Depth (Digital)

Bit depth affects the sound quality of the audio and depends on what soundcard you have. Standard soundcards can only handle between 8-bit and 16-bit depth. The sample rates you can use also vary with the bit depth: 8-bit goes up to 48 kHz, 16-bit up to 96 kHz, 24-bit up to 144 kHz and 32-bit up to 192 kHz.
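
Two standard formulas make the link between bit depth and quality clearer: a depth of n bits gives 2^n possible volume steps, and roughly 6 dB of theoretical dynamic range per bit. This small sketch just prints those figures for the bit depths mentioned above:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // The bit depths mentioned above.
    const int depths[] = {8, 16, 24, 32};

    for (int bits : depths) {
        const double levels  = std::pow(2.0, bits);  // distinct volume steps available
        const double rangeDb = 6.02 * bits;          // approximate dynamic range in dB
        std::printf("%2d-bit: %.0f levels, ~%.0f dB dynamic range\n",
                    bits, levels, rangeDb);
    }
    return 0;
}
```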

Dynamic Range

Dynamic range is the range of volume between the loudest sound and the softest sound you can hear. It is important because the better the dynamic range, the better the signal-to-noise ratio.
The signal-to-noise ratio is the difference in volume between what you want to hear and what you don't want to hear, e.g. background noise or white noise.
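
The signal-to-noise ratio is normally expressed in decibels by comparing the level of the wanted signal with the level of the noise. Here is a minimal sketch of that calculation, using made-up amplitude values:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Made-up amplitude levels, not measurements from a real recording.
    const double signalLevel = 0.8;   // the sound we want to hear
    const double noiseLevel  = 0.01;  // the background hiss we don't want

    // Standard decibel comparison: 20 * log10(signal / noise).
    const double snrDb = 20.0 * std::log10(signalLevel / noiseLevel);
    std::printf("Signal-to-noise ratio: %.1f dB\n", snrDb);
    // The higher the figure, the further the wanted sound sits above the noise.
    return 0;
}
```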

Reverb

Reverb is like an echo. It is the sound that builds up after the original sound as it reflects off the room's surroundings. Reverb can be a big problem when recording audio because it is usually something you don't want in your recording. There are several ways to get rid of it. One is to reduce the reverb in the room itself, for example by buying acoustic foam. There are also ways to remove reverb in the software you use to edit the audio.
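
Real room reverb is thousands of overlapping reflections, so this is only a rough illustration, but a single feedback delay (an echo) shows the basic idea of sound coming back later and quieter. The delay and decay values below are arbitrary choices of mine:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// A very crude echo: every sample gets a delayed, quieter copy of the signal
// mixed back in. Real reverb is far more complex, but the principle of
// reflections arriving later and softer is the same.
std::vector<float> addEcho(const std::vector<float>& dry,
                           std::size_t delaySamples, float decay) {
    std::vector<float> wet(dry);
    for (std::size_t i = delaySamples; i < wet.size(); ++i) {
        wet[i] += wet[i - delaySamples] * decay;   // simple feedback delay line
    }
    return wet;
}

int main() {
    std::vector<float> dry(44100, 0.0f);   // one second of silence at 44.1 kHz
    dry[0] = 1.0f;                         // a single click to bounce around

    // ~100 ms delay, each repeat at 40% of the previous level (arbitrary values).
    const std::vector<float> wet = addEcho(dry, 4410, 0.4f);
    std::printf("Processed %zu samples\n", wet.size());
    return 0;
}
```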

Copyright and Licensing

Copyright and licensing can be a major problem when trying to use music or sound effects in games or projects. Copyright means that a creation belongs to another person, so you need their permission, and often have to pay, to use it. It is always better to create the audio yourself; however, there are various websites where you can get royalty-free sounds.

Primary and Secondary Audio

Finding primary and secondary audio is much the same as searching for visual resources. To record primary audio you can use a dedicated recording device such as a Zoom mic, which stores its recordings on an SD card. You can also record with a headset or phone microphone. The advantages of a dedicated mic, however, are the quality of the audio and the variety of options it offers for different purposes.

Primary audio sources are recordings you make yourself. Primary audio can also be created using software, for example sound effects built from wave generators. Some advantages of primary sources are that they are copyright free and usually quicker if you know how to make the sounds you need for your game project. However, if you record in a corridor there will almost always be reverb in the audio, which you will want to remove. That has to be done manually in software, or avoided by recording in a music studio treated with acoustic foam.

Secondary audio sources are found online or in sample packs. It is important to always use licensed or royalty-free secondary audio to avoid copyright conflicts. Well-known songs and music should also be avoided in a game project.

Audio in Unreal

Unreal Engine 4 currently supports importing uncompressed 16-bit wave files at any sample rate. For the best results, it is recommended that sample rates of 44100Hz or 22050Hz be used.

Big-endian and little-endian are terms used to describe the order in which a sequence of bytes is stored in a computer's memory. Big-endian is an order in which the "big end", the most significant value in the sequence, is stored first. Little-endian is an order in which the "little end", the least significant value in the sequence, is stored first.
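
A quick way to see which order your own machine uses is to store a value whose bytes are easy to tell apart and look at which byte sits first in memory:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // A value whose bytes are easy to tell apart.
    const std::uint32_t value = 0x01020304;

    // Look at whichever byte is stored first in memory.
    const unsigned char firstByte =
        *reinterpret_cast<const unsigned char*>(&value);

    if (firstByte == 0x01) {
        std::printf("Big-endian: the 'big end' (0x01) is stored first\n");
    } else {
        std::printf("Little-endian: the 'little end' (0x04) is stored first\n");
    }
    return 0;
}
```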

In Unreal Engine we need to research:
Ambient sounds
Sound attenuation
Attenuation shapes
Using dialogue
Volume ducking

When using Unreal Engine, Sound Cues are composite sounds that allow you to modify the behaviour of audio playback. You can combine audio effects and apply audio modifiers with sound nodes to alter the final output.
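
As a sketch of how that might look in C++ (this is my own example, not code from the workshop; the function name and the assets passed in are placeholders), a Sound Cue can be played at a world location with an attenuation asset so it fades with distance:

```cpp
// A minimal UE4 C++ sketch: play a Sound Cue at a world location, with a
// sound attenuation asset so the volume falls off with distance.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundCue.h"
#include "Sound/SoundAttenuation.h"

void PlayAmbientCue(UObject* WorldContextObject,
                    USoundCue* AmbientCue,            // e.g. a wind or hum cue (placeholder)
                    USoundAttenuation* Attenuation,   // distance falloff settings (placeholder)
                    const FVector& Location)
{
    if (!WorldContextObject || !AmbientCue)
    {
        return;
    }

    // Spawns a fire-and-forget sound at the given position; the attenuation
    // asset controls how it fades with distance from the listener.
    UGameplayStatics::PlaySoundAtLocation(
        WorldContextObject, AmbientCue, Location,
        /*VolumeMultiplier=*/1.0f, /*PitchMultiplier=*/1.0f,
        /*StartTime=*/0.0f, Attenuation);
}
```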

Conclusion

From this blog I have learned the meaning of terminology associated with audio and creating audio files, and I now know new terminology that I can use in future work and blogs. While creating this blog, a lot of the terminology reminded me of similar terminology used in music and writing music, which makes me think the two are closely related.
