fur/hair, shaders, bump maps, normal maps, etc.

Thread starter: Morlock (Guest)
I'm clueless when it comes to understanding shaders, bump maps, normal maps, etc. I've had them all explained to me but I'm still clueless.

My question is, will fur/hair be doable in HL2? Would this be a shader, bump map, or normal map thing? I'd be satisfied with just getting the silhouettes right, since the rest of a character can be given the illusion of hair/fur with a good texture.

To clarify, I'm not talking about real fur/hair here, just a good illusion.

I'd also like to know what kind of performance hit (ballpark) I'm looking at.

Any good shaders/bump maps/normal maps for dummies links (as long as they actually explain what's going on) would be appreciated.

Sorry if this is the wrong forum.
 
I think hair/fur is usually a series of textured polygons with transparencies, isn't it? I think you can do shader-based hair (I dunno, I think there's a 3DMark demo that has it), but I think it's mostly polygons. Which can be made to move. I think.
 
That's about right. Most techniques so far involve tricks with polys. ATI have a couple of demos relating to this....
http://www.ati.com/developer/demos.html

Have a look at some of those. The bear and chimp ones have the fur but the others are pretty anyway :)
 
Hmmm...

Did anyone notice that some of the cables in Doom 3 were made to appear fully 3D (if you didn't look too closely) by normal mapping a 2D picture? Maybe you could do that with hair...
 
fragbait said:
That's about right. Most techniques so far involve tricks with polys. ATI have a couple of demos relating to this....
http://www.ati.com/developer/demos.html

Have a look at some of those. The bear and chimp ones have the fur but the others are pretty anyway
That makes a lot of sense; use a nice hair texture (maybe bump-mapped) for most of the character, and a sort of billboard technique (alpha-mapped fur outlines) around its silhouette that constantly changes with its movement relative to the player's viewpoint.

edit: oh and thanks for the link fragbait.

edit2: err, which ones are the fur ones?
- RADEON® X800 Demos
- RADEON® 9800 Demos
- RADEON® 9700 Demos
- MOBILITY™ RADEON® 9000 Demos
- RADEON® 8000 Series Demos
- RADEON® 7000 Series Demos
- APPLE(MAC) SmartShader 2.0 Demos and Screen Savers
- APPLE(MAC) SmartShader 1.0 Demos and Screen Savers
I'm on a (slow) 56k here so I'd rather not try to find out through trial and error. :P
 
Umm, if you try the links you'll find they lead to pages of demos... I'm on 56k too so I know your pain. :)

edit: the fur ones are on the 9800/9700 pages I think...

Note that all of them (except the X800 ones - I haven't tried those yet) work on a 9500 Pro, so any ATI DX9 card will work. You can tweak res/AA settings in the .ini files if they run too slow.
 
Well, let's take this one step at a time, shall we :)

You mentioned that you don't understand shaders/bump maps/normal maps. Well, to put it simply, a shader is a small program which operates on a stream of vertices or pixels (fragments). They can be used on every vertex and pixel in a scene, or they can be applied to a specific set of vertices and pixels. Shaders reside, and operate, entirely on the graphics card.

You could use shaders to implement a new lighting model for a game. For example, per-pixel lighting could be done with a vertex and pixel shader set. You could implement parallax mapping with them. You could distort the surface of an object via a normal map to simulate refraction. You could simulate reflections, and if you so desired you could combine the two, add in what's called the Fresnel term, and you'd have yourself some rather realistic looking water/glass materials. You could also apply bump mapping to a surface using a normal map.

It's not a bad idea to think of shaders as programs that define how surfaces look. However, they can also be used for animation systems, physical simulations, and even commonplace tasks in applications where you'd never expect to find them.
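If it helps to picture it, a pixel shader is essentially a tiny function that the card runs once for every pixel it draws. Here's a rough C-style sketch of the idea (this is just an illustration I'm making up, not real shader code; real shaders are written in languages like Cg or HLSL):

typedef struct { float r, g, b; } Color;

/* Imagine the card calling something like this for every single pixel: */
Color pixel_shader(Color base, float lightIntensity)
{
    Color out;
    out.r = base.r * lightIntensity;
    out.g = base.g * lightIntensity;
    out.b = base.b * lightIntensity;
    return out; /* the final color written to the screen */
}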

Bump mapping is a generic term for a technique that's really starting to take off in real-time graphics, but has been around for years. It basically allows a scene to look much more realistic and detailed than it really is by providing the illusion of depth in areas that traditional techniques don't cover very well.

For example, say you have a brick wall you'd like to place in your game. Now a brick wall, while not exactly bumpy, is not exactly flat. It's possible to slap a brick texture onto a flat wall in game and have it look like a halfway decent wall of bricks. However a texture alone cannot capture the crucial details. In reality a brick wall is composed of individual bricks which stick out just a bit, and are separated by mortar, which sits back further in the wall. If you were to run your hand over it you'd feel a rather rough, bumpy surface. The way light interacts with this surface is quite different from how it would interact with a flat wall of concrete. Light would not fall upon the brick wall evenly; brick would block it from reaching certain areas of mortar. With just a brick texture to simulate the wall in-game, light would fall evenly upon the face of the surface. This is because the lighting pass doesn't take the brick texture into account, but instead looks to the normal vector of each polygon that makes up the wall. Since the surface is totally flat, the lighting will be totally even.

Now you could go ahead and model each brick, all the cracks, crevices, and mortar, but that would require a ridiculous amount of time and, more importantly, an obscene number of polygons. The alternative is to have the lighting pass collect the data it needs for lighting not from the actual polygons the wall is composed of, but from a texture known as a normal map. A normal map is an odd looking texture which is never actually seen in game by the player. It is called a normal map because, unlike conventional textures which store colors, it stores a set of normals. Using shaders you can have the lighting pass look to the normals encoded in the normal map for lighting information in place of the normals held by the polygons making up the wall. The effect you get in the end is a scene lit in a much more realistic way. Light no longer falls evenly across the face of the surface as if it were perfectly flat, but instead highlights the areas it should, and leaves other areas in the dark.
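By the way, the reason a normal map looks so odd is just the packing scheme: a normal's x, y, and z components each run from -1 to 1, while a texture's R, G, and B channels each run from 0 to 255, so the tools simply remap one range onto the other. A quick C sketch of that encoding (my own illustration, assuming 8-bit color channels):

/* Pack one normal component from [-1, 1] into one 8-bit color channel. */
unsigned char encode_component(float n)
{
    return (unsigned char)((n * 0.5f + 0.5f) * 255.0f);
}

/* A normal pointing straight out of the surface, (0, 0, 1), encodes to
   roughly RGB (128, 128, 255) - which is why normal maps look mostly blue. */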
 
Thanks for the extensive reply qckbeam. I should've been more specific, I might've saved you all that typing. I know pretty much all of that, I was looking for something more technical.

For example:
A normal map is an odd looking texture which is never actually seen in game by the player. It is called a normal map because, unlike conventional textures which store colors, it stores a set of normals. Using shaders you can have the lighting pass look to the normals encoded in the normal map for lighting information in place of the normals held by the polygons making up the wall. The effect you get in the end is a scene lit in a much more realistic way. Light no longer falls evenly across the face of the surface as if it were perfectly flat, but instead highlights the areas it should, and leaves other areas in the dark.
What is "a set of normals?" How does a set of normals turn a normal, flat-looking texture into a texture with the illusion of depth? That's what I don't understand about bump-mapping; specifically how it works, not the effect.

Your explanation of shaders was helpful, but it leaves me with still more questions. I also don't know what shaders can and can't do. What range of effects can shaders implement? Is the key point just to understand that shaders are run on the GPU, rather than the CPU?

I need a layman's book on graphics. :)
 
Ah, I see what you mean now. I'll break this down bit by bit.

To understand what a normal map is, you'll first have to understand what a normal is, and how it's used in the traditional lighting process. Then you can understand what a normal map is, what it does, and how it is useful.

The basic lighting model can be summed up with this high-level equation:

surfaceColor = emissive + ambient + specular + diffuse

The emissive term is used to show the light given off by a surface, or in other words, the light a surface emits. It's an RGB (which stands for Red, Green, Blue) value that defines the color of the emitted light. It's incredibly easy to compute, since there really isn't anything to compute at all. It's just a value, and can be represented by the following formula:

emissive = Ke

where Ke is the material's emissive color.

The ambient term represents light that has bounced around all over the scene and doesn't seem to come from any particular source. Basically it seems to come from everywhere. Because of this, ambient lighting does not depend on the position of the light source for its calculations. Like the emissive term, the ambient term on its own is a single, solid color. However the ambient term is also affected by the global ambient lighting value, and therefore can be represented by the equation:

ambient = Ka * globalAmbient

where Ka is the material's ambient reflectance (the constant color).
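In code these two terms are as trivial as they sound. A rough sketch in C (Color is just a struct of three floats, same idea as in my earlier post; the multiply is component-wise):

typedef struct { float r, g, b; } Color;

Color emissive(Color Ke) { return Ke; } /* nothing to compute at all */

Color ambient(Color Ka, Color globalAmbient)
{
    Color c = { Ka.r * globalAmbient.r,
                Ka.g * globalAmbient.g,
                Ka.b * globalAmbient.b };
    return c;
}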

The next part to take care of is the diffuse term. This term simulates directed light reflected off a surface equally in all directions. Imagine a diffuse surface as very coarse on a microscopic level. This roughness is what allows for light to be reflected in all directions. The diffuse term can be expressed, and calculated, via the following formula:

diffuse = Kd * lightColor * max(N dot L, 0)

where Kd is the material's diffuse color, lightColor is the color of the diffuse light, N is the normalized surface normal, and L is the normalized vector pointing towards the light source. The one thing you may not recognize in this equation is the 'max(N dot L, 0)' section. This computes the dot product (written with a small dot as the operator symbol) of the two normalized vectors N and L, which share their tails: L points towards the light source, and N is the surface normal for the polygon being shaded.

Read this if you are not sure what a surface normal is:
(A surface normal is a vector with a magnitude of one, used when we care about direction rather than magnitude. It points straight out from, and perpendicular to, the face of the triangle. If you imagine a perfectly flat triangular table with a candlestick standing upright at its center, the candlestick would be the normal, and the tabletop the surface.)

For normalized vectors, the dot product returns the cosine of the angle between them. The smaller the angle between N and L, the greater the value of the dot product and the more light the surface receives. Surfaces that are not facing the light source will in fact produce a negative dot product, so we use 'max(N dot L, 0)' to make sure these areas do not show any diffuse lighting.
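To show there's no magic hiding in there, here's the diffuse term in rough C (Vec3 is a plain struct of x, y, z floats, and both n and l are assumed to already be normalized):

typedef struct { float x, y, z; } Vec3;

/* For unit vectors, the dot product is the cosine of the angle between them. */
float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* max(N dot L, 0): surfaces facing away from the light get no diffuse light. */
float diffuse_factor(Vec3 n, Vec3 l)
{
    float d = dot(n, l);
    return d > 0.0f ? d : 0.0f;
}

/* diffuse = Kd * lightColor * diffuse_factor(N, L), component-wise */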

The specular term is the last part of the process, and simulates the light scattered around the mirror direction. It shows up best on very smooth, shiny surfaces. Unlike the other terms, specularity depends on the position of the viewer. If the viewer is not at a location receiving any of the reflected rays, the viewer simply won't see any specular highlights. Specularity also depends on the shine of an object. Objects with more shine have smaller, more concentrated specular highlights. Objects with less shine have wider highlights of a more scattered nature. Specularity can be expressed via this equation:

specular = Ks * lightColor * facing * max(N dot H, 0)^shininess

This formula takes the specular color, times the light color, times facing (which is either 1 and changes nothing, or 0 and stops the equation from producing any result), all multiplied by the dot product of N and H raised to the power of shininess (the exponentiation ensures specularity falls off quickly as N and H move apart). V is the view vector, which is not seen directly in the equation; H is the normalized half-angle vector, halfway between L and V. When the angle between N and H is small, the specular highlight becomes apparent.
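And the specular term in the same rough C style (reusing Vec3 and dot() from the diffuse sketch; L points towards the light, V towards the viewer):

#include <math.h>

Vec3 normalize3(Vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 n = { v.x / len, v.y / len, v.z / len };
    return n;
}

float specular_factor(Vec3 n, Vec3 l, Vec3 v, float shininess)
{
    /* H is the half-angle vector, halfway between L and V. */
    Vec3 sum = { l.x + v.x, l.y + v.y, l.z + v.z };
    Vec3 h = normalize3(sum);

    /* facing: 1 if the surface faces the light at all, 0 otherwise. */
    float facing = dot(n, l) > 0.0f ? 1.0f : 0.0f;

    float nh = dot(n, h);
    return facing * powf(nh > 0.0f ? nh : 0.0f, shininess);
}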

So when you add those individual terms together, you have a final surface color for your object. That is a basic lighting equation.

Now you're probably wondering what in the hell that was all for. Well, I wanted you to understand how polygons within a scene are lit normally, without any fancy tricks to go along with it. This method doesn't change whether you're using it in a per-vertex or per-pixel (fragment) manner; the only thing that changes is the method of shading used. Up until very recently the formula I gave you was done for every vertex in a scene, and the color gained from that calculation was interpolated across the face of the triangle for each fragment (or pixel, for simplicity) generated. In a per-pixel lighting environment we use a different sort of shading known as Phong shading, which interpolates surface normals across the fragments (pixels) inside each triangle via a vertex shader, and passes each of those interpolated normals along to a pixel shader which performs the lighting calculation upon every one. As you can imagine, the results are much more accurate with per-pixel (per-fragment) lighting.
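Put together as code, one fragment's worth of work would look roughly like this (again my own sketch, building on the helpers above; n here is the interpolated normal, and color multiplies are component-wise):

/* Called once per fragment, with n interpolated across the triangle. */
Color shade(Color Ke, Color Ka, Color Kd, Color Ks,
            Color globalAmbient, Color lightColor,
            Vec3 n, Vec3 l, Vec3 v, float shininess)
{
    float d = diffuse_factor(n, l);
    float s = specular_factor(n, l, v, shininess);
    Color out;
    out.r = Ke.r + Ka.r * globalAmbient.r
          + lightColor.r * (Kd.r * d + Ks.r * s);
    out.g = Ke.g + Ka.g * globalAmbient.g
          + lightColor.g * (Kd.g * d + Ks.g * s);
    out.b = Ke.b + Ka.b * globalAmbient.b
          + lightColor.b * (Kd.b * d + Ks.b * s);
    return out; /* surfaceColor = emissive + ambient + diffuse + specular */
}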

Now, bump-mapping via a normal map combines the per-pixel lighting model described above with surface normal perturbations supplied by a texture, in order to simulate the lighting interactions on a bumpy surface. A normal map is simply an RGB texture that encodes a new set of normals for the pixel shader to use in its lighting calculations. The vertex shader passes along a 2D texture coordinate set intended for sampling from the normal map texture. The pixel shader reads from the normal map (remember it isn't reading colors, it's reading normals) and uses the result of the read (which is a normal) in the lighting calculation. The result is a surface that looks like it's composed of thousands, or perhaps millions, of polygons, when in reality it is very, very few. It's a very nice trick.

There is a lot more to it (it isn't as simple as it sounds), and I'd be happy to continue with a more in-depth explanation tomorrow, but as of now it's 2:14am and I'm exhausted. Hope this was of more help to you.
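The read itself is just the reverse of the 0-255 packing from my earlier post. In the same rough C style (texel values fetched elsewhere; the names are mine):

/* Unpack an RGB texel from the normal map back into a normal. */
Vec3 decode_normal(unsigned char r, unsigned char g, unsigned char b)
{
    Vec3 n = { r / 255.0f * 2.0f - 1.0f,
               g / 255.0f * 2.0f - 1.0f,
               b / 255.0f * 2.0f - 1.0f };
    return normalize3(n); /* feed this into shade() in place of the polygon normal */
}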

edit: I'm sorry if anything is unclear. I can explain it all with more clarity tomorrow if you like. I tried to fit a lot into one post.
 
Sir! The huge-ass post-o-meter is going off the charts!

Shameful spam, I'm sorry. But I had to say it.

I just attempted to understand some of that. I think I just died. I hail Q's mightily large brain that's full of brainy goodness.
 
That really will help explain some of these things to people.
 
Dux said:
Sir! The huge-ass post-o-meter is going off the charts!

Shameful spam, I'm sorry. But I had to say it.

I just attempted to understand some of that. I think I just died. I hail Q's mightily large brain that's full of brainy goodness.
indeed :P

nice posts though qckbeam, I really feel somewhat enlightened on that now :P now I can go brag to my friends \o/
 
I'm glad someone was willing to spend that much time writing it all down. I know I didn't want to.
 
Thanks a lot qckbeam, that was above and beyond the call. I'm not even finished reading it yet, but your post is just what I needed (especially your parenthetical explanation of normals). I'll have to brush up on a few maths concepts to really grok it (college was a long time ago, and even if it wasn't, I don't think I've ever known what a dot product is :P). Thanks very much.

edit: I nominate this thread for a sticky (and a rename to "graphics for dummies" or somesuch).
 