Techniques used to record audio in HL2

Tom.

Guest
I work as a television cameraman and a location sound recordist for film and TV. Basically that means that when I'm not filming, I'm recording audio on location for use in whatever programme I'm working on (my camerawork is sports-related only, but my audio work is just about anything).

Basically, I have to consider what the correct audio perspective is for a particular shot. For instance, if the subject is quite distant from the camera and the camera angle is quite wide, then the audio will be correspondingly more distant, with more reverberant audio and with the HF and LF components rolled off. If the subject is very close to the camera (usually for dramatic effect), then I'll get my microphone in as close as I possibly can. This is a bit of a simplistic way of describing what I do, but I was wondering:

In HL2, what steps have the designers taken to ensure that what happens in real life (i.e. far away = hard to hear) will happen in the game? It's easy today to replicate the effect that hard surfaces have on a room's acoustic character, but what if that room contains a large object, or a corner, and the audio source is behind that corner (from Gordon's point of view)?

Also, in films, to enable the audience to hear dialogue in battle sequences, the atmospheric audio is dropped significantly in level so that this is possible (the human brain can do this automatically in a real environment, but not when viewing a TV/cinema screen). Will such things also happen in HL2?
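
For illustration, here's a minimal sketch of that kind of dialogue ducking as a game mixer might do it. The MixBus structure, function names, and gain values are made up for the example; nothing here is from Valve's actual code.

```cpp
#include <algorithm>

struct MixBus {
    float gain = 1.0f;   // linear gain applied to every sample on this bus
};

// Move the ambient bus toward a ducked level while dialogue is playing,
// and back to unity when it is not. Called once per audio frame.
void UpdateDucking(MixBus& ambient, bool dialogueActive, float dt)
{
    const float duckedGain = 0.3f;   // roughly -10 dB drop under dialogue
    const float target = dialogueActive ? duckedGain : 1.0f;
    const float ratePerSec = 8.0f;   // how fast we duck and recover

    float step = ratePerSec * dt;
    if (ambient.gain < target)
        ambient.gain = std::min(ambient.gain + step, target);
    else
        ambient.gain = std::max(ambient.gain - step, target);
}
```

A real mixer would tune separate attack and release times per bus, but the idea is the same: drop the atmosphere automatically whenever dialogue needs to cut through.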
 
Let's hope so... I always wondered how they make the sounds in the first place. Do they send some guy out with a microphone to record people saying random things, birds chirping, or people shooting guns? Or is it possible to synthesise it all?
Wouldn't it be the sound card that gives the sounds different properties according to how far away the source is?
 
I'm not sure, but it has to come down to a combination of surround sound tech and the Source engine moulding everything together. Source is responsible for textures etc., so applying sounds to the textures obviously isn't that hard for the engine.

TBH though, I just don't know, lol; I just made a guess from reading about the engine.
 
I imagine that many of the sounds they use would have been purchased from libraries, but many sounds would have been recorded specially for this game.

Think of it this way: you have a script that says all of the dialogue from a particular character will take place through a window. Now, do you record the audio in a voice booth, and then equalise it to remove the HF components, before storing it with the game code, or would the game itself automatically do this?

What if you have a scripted event, like thunder/lightning? Do you record this audio IRL from within a cave, where the game would take you, or do you buy the audio 'clean' and the game makes the proper calculations?

I'm trying to imagine the processing power needed to make such calculations, but I can't see a normal PC being able to do this and play the game at any playable speed.
 
Another example, if the dialogue for the G-man is recorded in a voice booth (with the microphone 6 inches away), and in one sequence you have the G-man standing 10 feet away from you, the audio will not 'fit' the scene. Who or what takes care of this?
 
The engine.

Positional audio within the engine.
 
Unless you understand how the engine works, you won't know.
 
What Ben said.

The game uses a positional algorithm to place sounds within a certain gamespace (e.g. 5.1/7.1 surround) and/or setting (e.g. hall, behind glass, etc.).

It has to do this because, for example, there are no cut scenes in Half-Life 2. If a character is delivering a line and the player moves into another room, this same algorithm has to account for that.
Not to mention that if it didn't do this, you'd have to have either the voice-over recorded several times for different positions or ambient surroundings, or you'd have to store all those different versions in preprocessed form on the disc (taking up valuable space).

So it would be impractical to not process the audio at runtime.
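
To make the idea concrete, here's a rough sketch of the kind of runtime positional processing being described: attenuate by distance and pan from the source's position relative to the listener. The names, the simple inverse-distance falloff, and the equal-power pan law are illustrative assumptions, not the Source engine's actual method.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

struct StereoGain { float left; float right; };

StereoGain Spatialise(const Vec3& listenerPos, const Vec3& listenerRight,
                      const Vec3& sourcePos, float refDistance, float maxDistance)
{
    Vec3 toSource { sourcePos.x - listenerPos.x,
                    sourcePos.y - listenerPos.y,
                    sourcePos.z - listenerPos.z };
    float dist = Length(toSource);

    // Inverse-distance attenuation, clamped so near sources don't blow up
    // and sources beyond the max range fall silent.
    float atten = refDistance / std::max(dist, refDistance);
    if (dist > maxDistance) atten = 0.0f;

    // Pan from the dot product with the listener's right vector:
    // -1 = hard left, +1 = hard right.
    float pan = 0.0f;
    if (dist > 0.0001f) {
        pan = (toSource.x * listenerRight.x +
               toSource.y * listenerRight.y +
               toSource.z * listenerRight.z) / dist;
    }

    StereoGain g;
    g.left  = atten * std::sqrt(0.5f * (1.0f - pan));  // equal-power pan law
    g.right = atten * std::sqrt(0.5f * (1.0f + pan));
    return g;
}
```

Because this is cheap to evaluate per frame for each source, it stays correct even as the player walks away mid-line, which is exactly why doing it at runtime beats pre-baking every position.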
 
You should send this question in an email to them; interesting question.
 
Ben and Synthaxx have pretty much got it spot on. I can't remember the technical name though.
 
Yeah, I thought there was another name though... meh, someone e-mail Valve :)
 
Tom. said:
Another example, if the dialogue for the G-man is recorded in a voice booth (with the microphone 6 inches away), and in one sequence you have the G-man standing 10 feet away from you, the audio will not 'fit' the scene. Who or what takes care of this?

In Half-Life 1 the further away you get from someone, the more distant they sound, as far as I remember anyway. The best place to hear it 'in action', so to speak, is to just do the hazard course and listen to the hologram lady as you progress. They probably just record the sound in the studio and then add an effect to make it sound distant; it's not taxing or complex stuff.
 
bodhi said:
In Half-Life 1 the further away you get from someone, the more distant they sound, as far as I remember anyway. The best place to hear it 'in action', so to speak, is to just do the hazard course and listen to the hologram lady as you progress. They probably just record the sound in the studio and then add an effect to make it sound distant; it's not taxing or complex stuff.
I hope they lower the volume of the walking sound effects in HL2; walking on that metal catwalk flooring almost drowned out all other noise.
 
bodhi said:
In Half-Life 1 the further away you get from someone, the more distant they sound, as far as I remember anyway. The best place to hear it 'in action', so to speak, is to just do the hazard course and listen to the hologram lady as you progress. They probably just record the sound in the studio and then add an effect to make it sound distant; it's not taxing or complex stuff.


One problem with that is that another character will not automatically lower or raise the level of their voice (i.e. shout or whisper).

It's simple stuff, but glaringly obvious when done wrong, and I just hope Valve take this into account.
 
Tom. said:
One problem with that is that another character will not automatically lower or raise the level of their voice (i.e. shout or whisper).

It's simple stuff, but glaringly obvious when done wrong, and I just hope Valve take this into account.

I see what you're saying.
And I wish you had been working with Valve when they did the sound engine. ;)
 
Went looking for quotes on this subject, only found this:
Quote: "Gabe Newell: Kelly has been developing the notion of soundscapes, which turn out to be pretty powerful. It's more in the direction of an AI foley artist than a synthesized score. I like what he's done a lot, as the effects seem to disappear in terms of your conscious awareness of sound, and instead you just get the emotional impact of them. That sounds horrendously vague, so I'll try to give a concrete example. Let's say there's a hole in the ground. Let's say there's a basic set of 3D ambients playing (creaking, procedurally varying wind sounds). As you come up to the hole, there's an entity placed at the bottom of the hole, and when it can see you it fires and says "put the scary low rumbling tone into the mix" and it gets added in. Most of the time people won't be able to tell you "oh, that's when the sound showed up" but they will tell you that things got a lot scarier all of a sudden and they're not sure why. When they jump into a hole, there's a crescendo that gets added in that peaks as they hit the ground. There are blended transitions between ambient scapes, so as you move away from the scary things, the scary sounds become less prominent, and get replaced by more pastoral "let's go explore" sounds."

Half-remembered this interview; thought it said more than it actually does.
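
The "entity that fires and adds the rumble to the mix" idea from the quote could look something like this sketch: a soundscape layer that fades in whenever a trigger can see the player. The structure and fade rate are guesses for illustration, not how Valve's soundscape system is actually implemented.

```cpp
#include <algorithm>

struct SoundscapeLayer {
    float targetGain = 0.0f;   // where the layer is heading
    float gain = 0.0f;         // current level in the ambient mix
};

// Fade the scary low rumble in when the trigger entity can see the player,
// and back out when it can't. canSeePlayer would come from the engine's
// visibility/trace test; dt is the frame time in seconds.
void UpdateScaryRumble(SoundscapeLayer& rumble, bool canSeePlayer, float dt)
{
    rumble.targetGain = canSeePlayer ? 1.0f : 0.0f;

    const float fadePerSec = 0.5f;   // slow fade so the change isn't consciously noticed
    if (rumble.gain < rumble.targetGain)
        rumble.gain = std::min(rumble.gain + fadePerSec * dt, rumble.targetGain);
    else
        rumble.gain = std::max(rumble.gain - fadePerSec * dt, rumble.targetGain);
}
```

The slow fade is the whole trick described in the quote: the player notices the mood change without being able to point at the moment the new sound arrived.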
 
So in the end the answer really is: the engine does all the work, adding pre-recorded sounds according to the situation and landscape.
 
IMO the last thing you're going to worry about when playing HL2 is whether or not your footsteps echo off a corner in a room. Either way this game is going to look, feel, and sound great. :thumbs:
 
Yeah, EAX has been doing similar things for a while, just not quite as powerful. For example, take any dialogue or whatever: if you're standing in a cave, EAX makes it sound like it's coming from a cave. Sound occlusion, reverberation, etc. It's even been able to blend a few environments together / do more than one environment at a time in the past couple of versions. I'm an audiophile, so I try to keep up on this kind of stuff.

The great thing about Doom 3 and (I'm assuming) Half-Life 2 is that most of the sound is processed in-engine and not dependent on your sound card (it doesn't have to rely on EAX or A3D). That way everyone who has, say, a 5.1-capable card will get the same aural experience whether they have onboard sound or an Audigy 2. Realtime 5.1/6.1/7.1, heck yeah. About time. Of course you still need good speakers...
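
Blending two environments together, as mentioned above, can be as simple as interpolating between two reverb presets by how far the listener is between the spaces. The preset fields below are illustrative, not actual EAX or Source parameters.

```cpp
// Crossfade between two reverb environments (e.g. cave vs. open air).
// t = 0 gives environment a, t = 1 gives environment b.
struct ReverbPreset {
    float wetLevel;     // how much reverberated signal is mixed in
    float decayTime;    // seconds for the reverb tail to die away
    float hfDamping;    // 0..1, how quickly the highs are absorbed
};

ReverbPreset BlendEnvironments(const ReverbPreset& a, const ReverbPreset& b, float t)
{
    ReverbPreset out;
    out.wetLevel  = a.wetLevel  + (b.wetLevel  - a.wetLevel)  * t;
    out.decayTime = a.decayTime + (b.decayTime - a.decayTime) * t;
    out.hfDamping = a.hfDamping + (b.hfDamping - a.hfDamping) * t;
    return out;
}
```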
 
Now why didn't we just say that... The only time you're going to say "the sound is a bit crap" in a game like Doom 3 or HL2 is when you've played it 500 times. Eventually you'll start noticing little errors or bugs that haven't been removed or altered. You do get some people who will play a game and focus on every little detail looking for faults in the design... not me though...
 
I think you'll find that well-done sound adds to the overall experience. Try playing any game with the sound turned off, and see how interesting it is.

Personally, I don't particularly want to hear a grenade explode when it's behind a 6-foot-thick wall at the same volume as it would behind nothing but fresh air.
 
I think the best idea is to send an e-mail to Valve.


Make sure you post the answer here... :E
 
I remember reading an email from Valve talking about this...
Supposedly part of what goes on should account for the shape of the environment, so if you hear something around a corner, it will sound like it's around the corner, and sounds won't go through thick walls.
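
A crude sketch of what that occlusion might amount to: trace a line from the source to the listener, and if geometry blocks it, cut the level and low-pass the sound so only a muffled thud gets through the wall. The values and structure are assumptions for illustration, not Valve's implementation.

```cpp
struct OcclusionResult {
    float gain;          // multiplier applied to the sample
    float lowpassCutoff; // Hz; lower = more muffled
};

// traceBlocked would come from the engine's line-of-sight / ray test
// between the sound source and the listener.
OcclusionResult ApplyOcclusion(bool traceBlocked)
{
    OcclusionResult r;
    if (traceBlocked) {
        r.gain = 0.25f;            // roughly -12 dB through solid geometry
        r.lowpassCutoff = 800.0f;  // keep only the low "thud" of the sound
    } else {
        r.gain = 1.0f;
        r.lowpassCutoff = 20000.0f; // effectively no filtering
    }
    return r;
}
```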
 
If I remember correctly, I read somewhere that they even implemented the Doppler effect. So I doubt that the simpler effects you talked about, or the effects with sounds behind objects, have been left out, especially because they relate to the Doppler effect...
 
Yeah, wouldn't this be part of the Doppler effect?

Because I remember Gabe or someone saying they implemented it.
 
The Doppler effect has nothing to do with the shape of a room; it's to do with the relative velocity between the person hearing the sound and the thing making the sound.
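
In formula terms the shift is f' = f * (c + v_listener) / (c - v_source), with speeds measured along the line between listener and source and positive when they are closing. A small sketch of the pitch factor an engine could apply to a moving source; the guard value is just an illustrative safety clamp, not any particular engine's behaviour.

```cpp
// Returns the pitch multiplier for a moving sound source.
// Speeds are in m/s along the listener-source line, positive = approaching.
float DopplerPitch(float sourceSpeedTowardsListener, float listenerSpeedTowardsSource)
{
    const float c = 343.0f;   // speed of sound in air, m/s

    float denominator = c - sourceSpeedTowardsListener;
    if (denominator < 1.0f) denominator = 1.0f;   // guard against supersonic sources

    return (c + listenerSpeedTowardsSource) / denominator;
}
```

So a source rushing towards you returns a factor above 1 (pitch rises) and one moving away returns a factor below 1, regardless of what room either of you is standing in.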
 
In Far Cry most voices did not drop off even if they were 20 feet away. It seemed they had a threshold, and when you got within a certain range it would just play the talking through your speakers, without any account of distance or direction towards the NPC. That made me mad because I couldn't use sound to figure out where the enemies were. Maybe it was just my bogus Audigy 1 though :p
 
Valteq said:
In Far Cry most voices did not drop off even if they were 20 feet away. It seemed they had a threshold, and when you got within a certain range it would just play the talking through your speakers, without any account of distance or direction towards the NPC. That made me mad because I couldn't use sound to figure out where the enemies were. Maybe it was just my bogus Audigy 1 though :p

It wasn't your card, it was the game. That bugged me quite a bit too. How am I going to be 500 yards from people and hear their conversation perfectly?
 
Each object has an independent sound volume falloff value/zone: you input the distance from the object at which the sound will be 0 dB, and from this the sound engine will decrease the volume; more advanced engines (EAX etc.) will include some form of rolloff filtering. While Doppler processing does exist, it's very dependent on the sample source; it's sometimes best to turn it off and cheat by triggering a few fly-by samples when an object (say a bullet) passes by your character.
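
The fly-by cheat mentioned at the end could be as simple as this sketch: trigger a pre-recorded whizz sample the first time a fast projectile passes within a small radius of the player. PlaySample and the threshold values are placeholders, not any particular engine's API.

```cpp
#include <cstdio>

// Stand-in for the engine's one-shot sample playback.
void PlaySample(const char* name) { std::printf("play %s\n", name); }

struct Projectile {
    float distanceToPlayer;    // metres, updated each tick
    float speed;               // metres per second
    bool  flyByPlayed = false;
};

// Trigger a pre-recorded whizz-by the first time a fast projectile passes
// close to the player, instead of Doppler-shifting its own sound.
void CheckFlyBy(Projectile& p)
{
    const float triggerRadius = 2.0f;   // how close it has to pass
    const float minSpeed = 100.0f;      // only fast objects get the whizz

    if (!p.flyByPlayed && p.distanceToPlayer < triggerRadius && p.speed > minSpeed) {
        PlaySample("bullet_flyby.wav");
        p.flyByPlayed = true;           // once per projectile
    }
}
```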
 