Virtual Haircut

Congrats to Dark for not telling me that :(
 
This is entirely new to me and one of the coolest things that I've heard in a while tbh.
 
You could make sounds like this for any video game using generic head related transfer functions and proper phase lagging and interference.
 
God, when he whispered then laughed it freaked me out a little, and the electric razor made my spine tingle just like when I get a real haircut. Adding sounds like that into a game would improve atmosphere 100000x
 
Ravenholm + Audio Illusions = D:
 
You could make sounds like this for any video game using generic head related transfer functions and proper phase lagging and interference.

It's too bad that these functions are rather taxing on the processor.
 
You could make sounds like this for any video game using generic head related transfer functions and proper phase lagging and interference.

You sure? Because to record these sounds they use actual models of a human head, with actual ears, and they place the microphone inside the ear. This is why the quality is best when you use earphones: headphones sit on the outside of your ears, so the sound would be dampened twice, once by the pre-recorded ears and once by your own. The idea of binaural sound is that, since you can hear 3D sound with only two ears, you only need two sound sources. But it only works if the sound is recorded just like you would hear it; simulating all those factors on a computer seems very costly to me, like simulating the dampening quality of the human head, or sound bouncing around in your ears. Would be nice though if they could make hardware that could do that without breaking a sweat, I'd get such a soundcard :D
 
it would sound so ****ing awesome if games were like this.

but it won't work by just recording sounds like this, unless it was for a cinematic sequence.

Reason being, it would be incredibly tedious to record, for example, footsteps or gunfire, from every angle and distance that you might hear them from. It could work for a first person cinematic since they know where the player character (your ears) would be. That wouldn't blend well with the rest of the game though.


The only feasible solution is software code that alters the sound of the recordings accordingly.


Didn't he whisper something in your ear at the end, "the only software that can do it", or was that just the voices in my head?
 
You sure? Because to record these sounds they use actual models of a human head, with actual ears, and they place the microphone inside the ear. This is why the quality is best when you use earphones: headphones sit on the outside of your ears, so the sound would be dampened twice, once by the pre-recorded ears and once by your own. The idea of binaural sound is that, since you can hear 3D sound with only two ears, you only need two sound sources. But it only works if the sound is recorded just like you would hear it; simulating all those factors on a computer seems very costly to me, like simulating the dampening quality of the human head, or sound bouncing around in your ears. Would be nice though if they could make hardware that could do that without breaking a sweat, I'd get such a soundcard :D

Simulating it is rather straightforward. You take the sound source that you want, which is any time series of sound-intensity values across the range of hearing frequencies (basically any .wav file), then run it through a transfer-function matrix that holds the precalculated impulse response (the response to an instantaneous spike across all frequencies) at your ear, or at the spot where a headphone or earbud would sit, for a sound arriving from any possible angle. That data is collected empirically in an anechoic chamber using a dummy head or a really patient participant, and the resulting impulse response is called the head-related transfer function.

What that recording was actually advertising, by the way, was a microphone that includes its own inverse microphone-related transfer function, which basically edits out all of the diffraction the microphone body causes, so that it sounds as though the microphone doesn't exist. That's the exact opposite of what you want in a video game, where the goal is to make it seem like a head and ears exist around your virtual listening position.
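To make the "run it through the impulse response" step concrete, here's a toy sketch of binaural rendering: convolve a mono source with a per-ear head-related impulse response (HRIR). The two tiny HRIRs below are made-up placeholders, not real measured data; actual HRIRs are measured per azimuth/elevation exactly as described above.

```python
# Toy sketch of HRTF-style binaural rendering: convolve a mono source
# with a per-ear head-related impulse response (HRIR).
# The HRIR values here are invented for illustration only.

def convolve(signal, impulse_response):
    """Direct-form FIR convolution; output length = len(signal) + len(ir) - 1."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Mono source: a single click followed by silence.
mono = [1.0, 0.0, 0.0, 0.0, 0.0]

# Hypothetical HRIRs for a source off to the right: the right ear hears
# it earlier and louder; the left ear later and damped (head shadow).
hrir_right = [0.9, 0.2]
hrir_left  = [0.0, 0.0, 0.4, 0.1]   # leading zeros = interaural time delay

left_channel  = convolve(mono, hrir_left)
right_channel = convolve(mono, hrir_right)
```

In a real engine the HRIR pair would be picked (or interpolated) from a measured database based on the source's angle relative to the listener, and the convolution done with an FFT for speed.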

So you can basically imitate sound coming from any angle to either ear. You set the distance by modulating the overall sound intensity, and the angle by the relative balance between the sound projected to the left and right channels.
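As a minimal sketch of that last point: overall gain falls off with distance (a 1/r law is assumed here), and the left/right balance follows the azimuth. The constant-power pan law below is a standard audio-engineering choice, not something stated in the post.

```python
import math

# Sketch: distance sets overall intensity, azimuth sets the L/R balance.
# Assumes a 1/r attenuation law and a constant-power pan curve.

def ear_gains(azimuth_deg, distance):
    """Return (left_gain, right_gain) for a source at the given azimuth
    (0 = straight ahead, +90 = hard right) and distance."""
    attenuation = 1.0 / max(distance, 1.0)          # clamp to avoid blow-up at r=0
    pan = math.radians((azimuth_deg + 90.0) / 2.0)  # map [-90, 90] -> [0, 90] degrees
    return attenuation * math.cos(pan), attenuation * math.sin(pan)

l, r = ear_gains(0.0, 2.0)    # straight ahead, 2 units away: equal ears, half gain
hl, hr = ear_gains(90.0, 1.0) # hard right: essentially all signal in the right ear
```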


Now, this can only simulate direct sound transmission. Real environments create diffraction, reverberation, and echoes, all of which can be simulated using the exact same methods as 3D graphics, substituting sound intensity for light intensity and sound frequency for colour. Phase lag and destructive interference might need to be taken into account as well, but I'm not sure the human ear can detect phase or relative phase angle anyway.
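The "same methods as 3D graphics" idea can be sketched for a single traced sound ray: one wall reflection becomes a delayed, attenuated copy of the direct sound mixed back in. The delay and gain values here are made up; a real engine would derive them from the ray's extra path length and the wall's absorption.

```python
# Sketch: one ray-traced wall reflection as a delayed, attenuated echo.
# delay_samples and gain are placeholders, not derived from real geometry.

def add_reflection(signal, delay_samples, gain):
    """Mix in one echo: out[n] = signal[n] + gain * signal[n - delay_samples]."""
    out = list(signal) + [0.0] * delay_samples
    for n, s in enumerate(signal):
        out[n + delay_samples] += gain * s
    return out

dry = [1.0, 0.5, 0.0, 0.0]
wet = add_reflection(dry, delay_samples=2, gain=0.3)
# wet mixes the original with a copy 2 samples later at 30% volume
```

Summing many such rays (plus a diffuse tail) is essentially how geometric reverb simulation works, mirroring how a ray tracer sums light paths.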

oh yeah, and wtf this is older than my socks.
 
Dan you are really smart! Thanks for all that information, it really helped me figure out how the heck they do this kind of thing :)

Great thread, too!
 