Is facial animation just eye candy?

After watching the HL2 E3 vids and then playing any other game, every other game's character interactions seem rather "bleah".

Their faces are so stiff and lifeless. Right now Valve is setting the new standard in facial animation.
 
denzelr said:
I think the illusion would be shattered if every character smiled as you say they should. Ever think that maybe her character doesn't show her emotions as much as, say, Eli or his daughter Alyx (it runs in the family)? She does have her arms crossed and she seems to be fidgeting during Eli's speech, which IMO indicates that she is nervous around Gordon. I think she is a very believable character.

My point entirely. Less is best in real-life acting, and the same is true of animation of this sort. If you don't have a degree of reserve in the characters' expressions and gestures, then you lose the sense of emphasis when they really need to express themselves. Eli knows Gordon, and it's clear that Alyx has met him before too, but Judith has only heard about Gordon, it seems. Reserve is a natural thing in such circumstances.
 
clarky003 said:
The only reason there won't be extensive facial animation in MP is that there won't be any scripted sequences that involve dialogue. It'll just mean that the characters will have default expressions. But who knows, you may have talk keys that let you say stuff Valve allows you to say, like team-tactic talk etc. We haven't seen HL2 multiplayer, and I don't think we will till the game's released.
It could be bound to your voice comm, so you can hear and see the person in MP talking.
 
I think the discussion about the usefulness of facial animation is like the discussion six years ago about whether a story in a shooter would work. Valve showed it worked, and six years later they will show that emotional expressions are very important too.
 
Problem is, it needs tags (as in the wav files of SP characters) to properly shape the mouth movements, I hear...
 
Brian Damage said:
Problem is, it needs tags (as in the wav files of SP characters) to properly shape the mouth movements, I hear...

This is pretty much how it works (I think):

- You make a character.
- You rig the character with the bone system used in HL2, provided with the XSI package.
- You animate the character's mouth for each sound; each mouth shape is stored as a frame (for example, the 'aa' sound is stored in frame 10).
- You type the dialogue into Faceposer, broken into the right syllables.
- That information is stored in the header of the .wav file that holds the sound.

So you have to type in the text to make the mouth move properly. But in multiplayer, I think you can have something that guesses how the mouth moves (like yelling into the mic making your mouth go wide open), but you won't have anything as accurate as in HL2 singleplayer.
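To make the steps above concrete, here's a toy sketch of the phoneme-to-frame idea. All the names and frame numbers here are made up for illustration; this is not Valve's actual file format, just the general technique of looking up a stored mouth-shape frame for each typed sound:

```python
# Toy sketch: phoneme -> animation-frame lookup.
# Frame numbers are hypothetical; in the pipeline described above,
# each mouth shape lives on a frame of the character's animation.
PHONEME_FRAMES = {
    "aa": 10,   # open-mouth 'ah' shape
    "ee": 11,   # wide 'ee' shape
    "oo": 12,   # rounded 'oo' shape
    "m":  13,   # closed-lips shape
    "sil": 0,   # silence / neutral face
}

def phonemes_to_frames(phonemes):
    """Turn a typed phoneme sequence into the frames to play back.

    Unknown sounds fall back to the neutral frame.
    """
    return [PHONEME_FRAMES.get(p, PHONEME_FRAMES["sil"]) for p in phonemes]

# "moo" -> closed lips, then a rounded vowel
print(phonemes_to_frames(["m", "oo"]))  # [13, 12]
```

The point is just that the expensive part (authoring the mouth shapes) happens once per character, and playback is a cheap table lookup driven by the typed-in dialogue.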
 
It's both:
exhibiting the capabilities of Source
and immersing the player in a realistic environment
 
OH NOES I BOUGHT A GAME FOR TEH EEYE CANDY ALONE LOLOLOL STRUPID ME!

:| :rolling: :LOL:
 
This is my first post, so uh.. Hi everyone! :cheers:

I happen to know a bit about how lip synching works, so I figured I would contribute.

Brian Damage said:
Problem is, it needs tags (as in the wav files of SP characters) to properly shape the mouth movements, I hear...
I think these were optional, to improve the lip synching

Well, we have a few things going here. There are two major methods of interpreting/synthesizing voice. The first is volume-based, which is what the current CS stuff uses. It provides a mouth that moves in sync with changing volume. It looks alright, and gets the job done.
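A minimal sketch of that volume-based idea (my own toy code, not CS's actual implementation): compute the RMS level of each chunk of audio samples and map it straight to how far the jaw opens. The `max_rms` ceiling is an assumed tuning value:

```python
import math

def rms(samples):
    """Root-mean-square level of one chunk of audio samples (-1.0..1.0)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def jaw_open(samples, max_rms=0.5):
    """Map chunk loudness to a jaw-open amount: 0.0 (closed) .. 1.0 (wide).

    max_rms is a hypothetical calibration constant: any chunk at or
    above this level pins the jaw fully open.
    """
    return min(rms(samples) / max_rms, 1.0)

silence = [0.0] * 256
shout = [0.6, -0.6] * 128

print(jaw_open(silence))  # 0.0 -> mouth closed
print(jaw_open(shout))    # 1.0 -> mouth wide open
```

That's the whole trick: no understanding of *what* is being said, only how loud it is, which is why it's cheap enough to run live on a voice stream.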

The second method is phoneme-based analysis. http://www.google.com/search?hl=en&lr=&ie=UTF-8&q=Phoneme+based+lip+synch&btnG=Search
This method looks at the sounds in the wav or stream and makes guesses at the correct mouth shape that produces those sounds, which makes for 'higher resolution' lip synching that matches the sounds a lot better. This can be done in an automated way, but doing the analysis live would chew a lot of CPU cycles.

For the in-game, pre-recorded speech, I recall that Valve runs an automated analysis first (out of game) that gets the basics right; then you can add 'tags' or some kind of extra pronunciation information that helps perfect the rough analysis.

In multiplayer, they will likely stick with the CS volume-based system, because that is easy enough to do, and not do any significant facial animation to indicate mood. I do hope they associate some more advanced animation with menu binds, though that may make the whole thing seem out of sync.

In a few years we will be able to do the live phoneme analysis, and possibly read the 'tone of voice' to imply some of the facial emotion as well. But for now I think they are doing a pretty durn good job. :thumbs:
 
I agree: MP real-time mouth movement will probably be primarily volume-based (though perhaps with a very simple form of waveform analysis to take a rough guess at more than just open/closed jaw, such as the width of the mouth opening). Still, with the new fidelity of the animations and the jaw/lip structure, it's going to look a lot better than in CS.

I was also thinking that they could base mood off of volume as well. If you are talking you could have one expression, but if you yell, your face looks angry. Of course, that would probably make lots of people run around yelling "DOES MY FACE LOOK ALL FROWNY NOW!!!!" for no reason.
 
Apos said:
I was also thinking that they could base mood off of volume as well. If you are talking you could have one expression, but if you yell, your face looks angry. Of course, that would probably make lots of people run around yelling "DOES MY FACE LOOK ALL FROWNY NOW!!!!" for no reason.

That's true, but then you immediately run into problems with people whose mics are poorly adjusted, or have lots of background noise, etc. I think a fairly neutral expression, or maybe a somewhat urgent one (since it's likely a 'war game' of some kind), would work, but basing mood on volume is too problematic and annoying. Although they could put a lot more than simple jaw motion into it, so when you talk quietly you have a laid-back demeanor, and when you are yelling your whole face is moving and your chest is expanding some because of the extra air, etc. I think the shoulder movements and body language in the E3 video were one of the best parts for me :)
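If someone did want mood-from-volume despite those problems, a sketch of how you'd at least blunt them (purely my own hypothetical code, with made-up threshold values): a noise floor so a hissy mic reads as neutral, and hysteresis so the face doesn't flicker between states every frame:

```python
def pick_expression(level, current, noise_floor=0.05, yell=0.7, margin=0.1):
    """Choose an expression from a normalized 0..1 mic level.

    noise_floor: below this, treat the signal as mic hiss -> neutral.
    Hysteresis: once 'angry', stay angry until the level drops a
    margin below the yell threshold, so a noisy mic near the
    threshold doesn't flip the face back and forth.
    """
    if level < noise_floor:
        return "neutral"
    if current == "angry":
        return "angry" if level > yell - margin else "talking"
    return "angry" if level > yell else "talking"

# Walk through a burst of speech that peaks in a yell, then fades.
state = "neutral"
for level in [0.02, 0.3, 0.8, 0.65, 0.4, 0.01]:
    state = pick_expression(level, state)
    print(level, state)
```

Note the 0.65 sample: it keeps the angry face because we were already yelling, but the same level while merely talking would not trigger it. That asymmetry is the cheap fix for threshold flicker; it does nothing for the underlying problem that loudness is a bad proxy for mood.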
 