Nice example of normal maps for those who don't know what they actually are

Fenric said:
Basically what you're saying is normal maps don't work... Well, I happen to know a number of very large studios who would disagree with you strongly there. If normal maps didn't work how they should, then why would Far Cry, HL2, Doom III, Quake IV, Stalker and many others be using them?
argh shudder lol

Nope, what I'm basically saying is that normal maps/bump maps (and others, like specular maps) are a fake in HL2. Just that, nothing more, nothing less.

They can "work" for special static lighting situations, but for dynamic situations they don't, in the sense that the end result is not correct. In those other games you mentioned, normals work correctly because their per-pixel lighting/shading engines are more dynamic and more accurate than the lighting techniques in HL2. HL2 will need to rely on many workarounds or hacks to do those techniques because of the way lighting works there. That's the reason those killer features that enhance the graphics so much are only slightly touched on in all the material, videos and screenshots released for HL2.

And the professional 3D market is different from games, because those are non-realtime graphics. Games are realtime graphics, so you can't compare the two. Only the viewports can be realtime in some applications; the CPU renderings are non-realtime, so you get accurate normal mapping/bump mapping there with almost any technique.
 
vann7 said:
Nope, what I'm basically saying is that normal maps/bump maps (and others, like specular maps) are a fake in HL2. Just that, nothing more, nothing less.

They can "work" for special static lighting situations, but for dynamic situations they don't, in the sense that the end result is not correct. In those other games you mentioned, normals work correctly because their per-pixel lighting/shading engines are more dynamic and more accurate than the lighting techniques in HL2. HL2 will need to rely on many workarounds or hacks to do those techniques because of the way lighting works there.

And the professional 3D market is different from games, because those are non-realtime graphics. Games are realtime graphics, so you can't compare the two. Only the viewports can be realtime in some applications; the CPU renderings are non-realtime, so you get accurate normal mapping/bump mapping there with almost any technique.
Look, you're obviously going by what you've seen in the INCOMPLETE and VERY OLD version of the stolen files.

Stalker works in a very similar way to HL2. Normal maps are provided via DirectX, so why would they be different, unless someone has rewritten DirectX, which obviously hasn't happened?

You say 3D apps can do it, but only some in realtime, and that they're not realtime... which is rubbish, because in one of your first examples you show screenshots of Softimage using normal maps correctly, and in realtime, using its realtime shaders.

You're simply contradicting yourself now :(
 
Hey, I'm a normal mapping noob and I'm trying to figure out how normal maps work with models...


So let's say I make a super-high-poly player model and want to bake the information into a low-poly player model...

Should I trace a new low-poly player model over the high-poly model and hope the texture baking goes OK?
Or should I build a low-poly model, save it, then tweak it until I get a high-poly model?


I'm sorry if this question sounds really stupid...
 
Shinobi said:
Hey, I'm a normal mapping noob and I'm trying to figure out how normal maps work with models...


So let's say I make a super-high-poly player model and want to bake the information into a low-poly player model...

Should I trace a new low-poly player model over the high-poly model and hope the texture baking goes OK?
Or should I build a low-poly model, save it, then tweak it until I get a high-poly model?


I'm sorry if this question sounds really stupid...
It's not a stupid question. You have a basic grasp of how the method works:

1 high-poly version

1 low-poly version with its UV map

Match them up so they're sharing the same space. It won't match completely, because of the difference in polygon counts, but get as close as you can.

Then, with whatever software you're using, you take the normal map information created from the high-poly object and bake it directly onto your low-poly UV map, and hey presto, you've got it. It really is as simple as that.

It doesn't stop there, either: in plugins such as Microwave you can bake maps for all the other surface types at the same time: colour, diffuse, specular, global illumination and so on. Then I would imagine Valve will offer a program or Photoshop plugin that lets you put all those maps into one file to use on your low-polygon model or within your map.

There's also a useful side effect, apart from the previously mentioned future-proof texturing: you have this super-high-poly version, which can serve you in the same way. You can create versions with more polys as machines and cards get faster, using the high-poly version as a base, so you get the shape correct in the lower-poly model every time. Maybe even take the high-poly version and just shave off all the polys you won't need, then simply bake to the low poly's UV map as above, and that's it... If you've got all your textures and models planned out like that, you can release updated, higher-quality models and textures when hardware allows, without spending forever redoing them from scratch.


As for which you should do first, high poly vs low poly... I'd suggest high poly. You have no polygon limits at all and can really go OTT; the more the better. Once you've got that done, you can model a low-poly version around it to match it as best you can. At that point you'll be able to see first-hand what you should and shouldn't model, and what will and won't be handled by the normal maps.
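To make the baking step above concrete, here's a minimal Python sketch (not taken from any real tool; `encode_normal`, `bake` and the sampling callback are hypothetical names for illustration). It shows the one part every baker shares: encoding a surface normal, whose components range from -1 to 1, into an 8-bit RGB texel. The actual ray-cast from the low-poly surface into the high-poly mesh is stubbed out as a callback.

```python
import math

def encode_normal(nx, ny, nz):
    """Normalize, then map each component from [-1, 1] into 8-bit [0, 255]."""
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

def bake(width, height, sample_high_poly_normal):
    """Fill a low-poly UV texture, one texel at a time.
    sample_high_poly_normal(u, v) stands in for the ray-cast that finds
    the high-poly surface normal lying over each low-poly texel."""
    texels = []
    for y in range(height):
        for x in range(width):
            u, v = (x + 0.5) / width, (y + 0.5) / height
            texels.append(encode_normal(*sample_high_poly_normal(u, v)))
    return texels
```

A perfectly flat patch facing straight out encodes to (128, 128, 255), which is why tangent-space normal maps look mostly light blue.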
 
Fenric said:
Stalker works in a very similar way to HL2. Normal maps are provided via DirectX, so why would they be different, unless someone has rewritten DirectX, which obviously hasn't happened?

You say 3D apps can do it, but only some in realtime, and that they're not realtime... which is rubbish, because in one of your first examples you show screenshots of Softimage using normal maps correctly, and in realtime, using its realtime shaders.

You're simply contradicting yourself now :(


No, fella... you're simply not following me. :E

Softimage XSI 3.0 and up allows realtime graphics in the viewports, and only there (realtime shaders), using Nvidia Cg :) rendering everything on your video card. (BTW, a good question for Valve would be what video card they used in the demonstration of the cave wall video.) :) Same with Maya and 3ds Max, though not all do this. But those are preview modes, not final rendering modes, which are non-realtime and done on the CPU. In the viewports the lighting/normals (the complete process) are done in realtime, so they can be updated on the fly.

And Stalker's similarities with HL2 end at the exteriors, which rely mostly on static lightmaps. Maybe that's why there are no normals/bumps/speculars in exteriors in that game either; in interiors they use techniques similar to Doom 3/Far Cry. At least, that's what it looks like from the videos and screenshots they've released.
 
How much of a performance hit (relative to current graphics and hardware) is it to use normal maps?
 
blahblahblah said:
How much of a performance hit (relative to current graphics and hardware) is it to use normal maps?
Not very much, really, compared to the huge slowdown you'd get by using polygons instead. Any slowdown from using normal maps is tiny. It might have been an issue a few years back, but now it doesn't have much of an impact.
 
vann7 said:
No, fella... you're simply not following me. :E

Softimage XSI 3.0 and up allows realtime graphics in the viewports, and only there (realtime shaders), using Nvidia Cg :) rendering everything on your video card. (BTW, a good question for Valve would be what video card they used in the demonstration of the cave wall video.) :) Same with Maya and 3ds Max. But those are preview modes, not final rendering modes, which are non-realtime and done on the CPU. And there in the viewports the lighting/normals (the complete process) can be updated on the fly.

And Stalker's similarities with HL2 end at the exteriors, which rely mostly on static lightmaps. Maybe that's why there are no normals/bumps/speculars in exteriors in that game either; in interiors they use techniques similar to Doom 3/Far Cry. At least, that's what it looks like from the videos and screenshots they've released.

Put these together: Straws, At, Grasping. :p

#1 You say:

Softimage XSI 3.0 and up allows realtime graphics in the viewports, and only there (realtime shaders), using Nvidia Cg :) rendering everything on your video card.

then you say:

But those are preview modes, not final rendering modes, which are non-realtime and done on the CPU.

So first you agree they're realtime, then you say they're not...

XSI's realtime shader technology is only a preview if you want it to be. XSI users often use it for final work too, you know, because it's realtime: no need to render anything, just save out the frames generated directly from the realtime output.

You're contradicting yourself in the same post now :(

#2 You say:

And Stalker's similarities with HL2 end at the exteriors, which rely mostly on static lightmaps. Maybe that's why there are no normals/bumps/speculars in exteriors in that game either.

Which is wrong as well :\

Honestly, Vann, I don't know what you're up to, but it won't work. You're wrong and that's that. Get over it, ffs.

Pendragon said:
Besides, isn't it true that you have to have one of the latest GPUs to be able to use them? If you're in that league already, you've got no worries.

Yep, my old GeForce4 MX couldn't display them, so I didn't worry about them; my 9600XT can display them and takes them into account. Far Cry uses them a lot and I had no slowdown from them being in the game. Which is good news :)
 
Fenric said:
Put these together: Straws, At, Grasping. :p

#1 You say:

Softimage XSI 3.0 and up allows realtime graphics in the viewports, and only there (realtime shaders), using Nvidia Cg :) rendering everything on your video card.

then you say:

But those are preview modes, not final rendering modes, which are non-realtime and done on the CPU.

So first you agree they're realtime, then you say they're not...


XSI can do:
1) realtime graphics, in the [viewports] only (the place where you model and do your work), using your video card.
2) non-realtime graphics [software rendering], done on the CPU.

Where is the contradiction here? :)

And yep, maybe it's possible that artists choose to do their final work using #1, but I don't know of anyone who would prefer #1 for final quality versus #2, which can render everything at much, much higher quality using Mental Ray (radiosity-quality) lighting, unless they really need to show something in realtime, which was the case with the cave wall video.
 
Thanks for the info, Fenric :) I'm gonna go try modelling a head using that technique now.
 
vann7 said:
XSI can do:
1) realtime graphics, in the [viewports] only (the place where you model and do your work), using your video card.
2) non-realtime graphics [software rendering], done on the CPU.

Where is the contradiction here? :)

And yep, maybe it's possible that artists choose to do their final work using #1, but I don't know of anyone who would prefer #1 for final quality versus #2, which can render everything at much higher quality using Mental Ray (radiosity-quality) lighting.

Bleah, now you're going back on what you said :( Jeez, if you can't even have a discussion without cheating, what's the point?

And FYI, you said in the same post that the realtime shaders were realtime, then said they were precalculated... Now you bring Mental Ray up. :rolleyes: :rolleyes: :rolleyes:

The fact is, you're wrong about normal maps and how they work. Others clearly realise this; why can't you?

You've so far based your understanding on incomplete media from the HL2 stolen files, the Stalker leak and a very bad understanding of Softimage XSI.

And I'm not going into the whys and wherefores of using realtime rendering. You've obviously missed the point entirely, simply by bringing up Mental Ray as the be-all and end-all of 3D rendering.

The fact is, realtime rendering is becoming more and more popular and is now at a level where it can be used in a professional environment as a final solution, often at a better level than pre-rendered, depending on the project in question. You don't have to like that, I don't like that, but the fact is it's happening now, right now. Like it or not.

Not to mention architectural work. I suppose you're gonna tell me pre-rendered is better for that than the client being able to walk through a perfectly displayed realtime environment? Where's your pre-rendered work then?

Honestly, unless you can contribute seriously to this thread instead of trying to instigate a flame, I suggest you keep quiet. Please?

Shinobi said:
Thanks for the info, Fenric :) I'm gonna go try modelling a head using that technique now.

No problem :) Looking forward to seeing it. Hey, give Jaenos a shout, he's pretty good with the old head modelling, especially the human type. Wish he'd do more, cause they look good :D
 
Vann: Typography. Learn it.

Hey, Fenric: Those model pics... woah. You actually have to look for the differences between them. Awesome.

Jaenos... that reminds me.
 
Pendragon said:
The differences are clearest when you look at the model's edges, the only place where the low-poly-ness is obvious, as far as I can tell.
Yep, can't cure that with just normal maps. Could have used more polygons, though, or that TruForm, though I think I'm the only person who even likes that technology of ATI's, heh. But with clever modelling you could probably eliminate the worst cases of the low-poly edges showing up too much.
 
That's where the artistry comes in. The easiest way is to build a high-poly model and then optimize it in the 3D app (Max, Maya, XSI, whatever) down to the poly count you want (or before it gets too ugly). You could just map the high-poly normals over that, but if you really want it to look nice, it's best to go back over the optimized mesh and smooth out ragged polys, keeping the normal-mapping effects in mind.

What I would like to see is the full polygon model included in a game, so you can set how much the mesh is optimized before the game loads it into memory. That way the folks with the super-computers could really show off, plus it would extend the graphical capability of the game for many years. The high poly models that would come with the game will eventually be considered medium and then low poly as computers get insanely faster. It seems an obvious decision to prolong a game's graphical competitiveness. The game could even decide how much polygon optimization is ideal for the computer automatically, the setting being only a user override.

The issue then becomes fitting the game on a disk for production, and longer load times as insanely large models are optimized dynamically by the program. Perhaps it could be a part of the installation. The user decides the level of mesh detail when installing, and optimized meshes are stored on the hard drive for use later. The downside being of course that you wouldn't be able to flick a slider to try it out at high/low levels for the hell of it.
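A minimal sketch of the automatic-optimization idea above (all names here are hypothetical; a real implementation would decimate the mesh with something like an edge-collapse algorithm, which is omitted). It only shows the budget decision: how many polygons of the shipped full-detail mesh to keep on a given machine.

```python
def target_poly_count(full_count, hardware_budget, min_count=500):
    """Pick how many polygons to keep for this machine.

    full_count: polygon count of the shipped full-detail mesh.
    hardware_budget: hypothetical 0.0-1.0 score of the user's GPU/CPU,
    acting as the automatic setting the user could override.
    Clamped so we never go below min_count or above the shipped mesh."""
    return max(min_count, min(full_count, int(full_count * hardware_budget)))
```

The decimated meshes for the chosen count could then be cached at install time, exactly as described, trading disk space for load time.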
 
FictiousWill said:
That's where the artistry comes in. The easiest way is to build a high-poly model and then optimize it in the 3D app (Max, Maya, XSI, whatever) down to the poly count you want (or before it gets too ugly). You could just map the high-poly normals over that, but if you really want it to look nice, it's best to go back over the optimized mesh and smooth out ragged polys, keeping the normal-mapping effects in mind.

What I would like to see is the full polygon model included in a game, so you can set how much the mesh is optimized before the game loads it into memory. That way the folks with the super-computers could really show off, plus it would extend the graphical capability of the game for many years. The high poly models that would come with the game will eventually be considered medium and then low poly as computers get insanely faster. It seems an obvious decision to prolong a game's graphical competitiveness. The game could even decide how much polygon optimization is ideal for the computer automatically, the setting being only a user override.

The issue then becomes fitting the game on a disk for production, and longer load times as insanely large models are optimized dynamically by the program. Perhaps it could be a part of the installation. The user decides the level of mesh detail when installing, and optimized meshes are stored on the hard drive for use later. The downside being of course that you wouldn't be able to flick a slider to try it out at high/low levels for the hell of it.
Or just release Hi-Def packs, like Valve did with Blue Shift, which updated some of the models in all three games.
 
Yeah, but I mean normal-map models, with unreasonable masses of polys. Look at the Doom 3 normal-map models to see what I mean.
 
FictiousWill said:
Yeah, but I mean normal-map models, with unreasonable masses of polys. Look at the Doom 3 normal-map models to see what I mean.
You'd never get the high-poly versions in a game; nothing would run it, at all. And they'll never include the high-poly models anyway: they're worth a lot of money to id and could be used for all kinds of spinoffs, as well as for generating higher-poly versions in a few years and re-releasing Doom III to make more money. They'd never give them away; each one is likely worth a good few grand.
 
Pen: when you consider what id could use them for, they're actually probably worth much more than that. Depends just how big Doom III gets... and with all that talk of a film in the works, who knows.

As for the official model(s) of Shrek... a hell of a lot more. Same goes for the Monsters, Inc. and other films' models. I guess right now the Finding Nemo models are worth the most, though. Oh, and of course the Gollum models; they're likely worth a small fortune, especially the main detailed model. Of course, it's assumed the rigging is included in the cost; otherwise they're just good models, but nothing more than, well, ragdolls really.


Think of the original models and resources as like your actors and actresses; throw in all the legal and merchandising deals, royalties whenever they're used and so on. They're worth a great deal.
 
So the Doom 3 base models are detailed enough to be used in a movie, then? Hey, a Doom movie would look cool if it was done totally in CG...
 
I'm wondering about something. Have you ever seen in-game vegetation/hair? The kind of model where the geometry itself is really blocky, but to compensate they black out areas of the texture and make those parts invisible. Effectively the model looks more detailed than it actually is. A good example would be the leaves in the first 3ds Max 6 tutorial.

Think something similar could be done with normals? If what I'm thinking of is done right, it would make for very convincing hair/leaves, etc.
 
Using an alpha map and a normal map on the same model? Yes, you can do that. You can see it in the Far Cry SP demo.
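A sketch of how the two maps combine per texel, written as plain Python rather than real shader code (the function name and cutoff value are illustrative). The alpha map decides whether the texel exists at all; only surviving texels get normal-mapped lighting.

```python
def shade_foliage_texel(alpha, normal, light_dir, cutoff=0.5):
    """Alpha-test cutout combined with normal-mapped diffuse lighting.

    alpha: 0..1 sample from the transparency map.
    normal, light_dir: unit 3-vectors (the normal comes from the normal map).
    Returns None for a discarded (invisible) texel, else a simple
    Lambert term in 0..1."""
    if alpha < cutoff:
        return None  # texel is cut away, like the blacked-out leaf edges
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l)
```

This is why the trick works: the silhouette detail comes from the alpha cutout, while the surface relief inside the silhouette comes from the normal map, both on the same flat polys.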
 
How do they do realistic hair (like in the 3DMark03 Troll's Lair demo)? All shiny and 3D-looking, and so on...?
 
It's a more complex version of Unreal-style grass: polys with textures, just like everything else. I've not actually seen the demo of which you speak, so it may be a crazy DX9 shader that's completely post-processed, but I'd be surprised if that were so.
 
The hair in the Troll's Lair demo is completely different. Each strand is individually modeled. Then, the hair physics is done using the Havok physics engine and takes into account gravity, hair stiffness, hair curl, etc.
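For a rough idea of what per-strand simulation involves, here's a 2D toy version (hypothetical, not Havok's actual method): each strand is a chain of point masses stepped with Verlet integration, then a distance-constraint pass pulls the chain back toward its rest length, which is a crude stand-in for stiffness.

```python
def step_strand(points, prev_points, dt, gravity=-9.8, stiffness=0.9, rest_len=0.1):
    """Advance one hair strand of (x, y) point masses by one timestep.

    points[0] is pinned to the scalp. Verlet integration needs the
    previous positions (prev_points) to estimate velocity implicitly."""
    new_pts = [points[0]]  # root point stays fixed
    for i in range(1, len(points)):
        x, y = points[i]
        px, py = prev_points[i]
        # Verlet: new = 2*current - previous + acceleration * dt^2
        new_pts.append((2 * x - px, 2 * y - py + gravity * dt * dt))
    # one relaxation pass: pull each point back to rest_len from its parent
    for i in range(1, len(new_pts)):
        ax, ay = new_pts[i - 1]
        bx, by = new_pts[i]
        dx, dy = bx - ax, by - ay
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        corr = (dist - rest_len) / dist * stiffness
        new_pts[i] = (bx - dx * corr, by - dy * corr)
    return new_pts
```

Even this toy is O(points per strand) per strand per frame, which is why doing it for every strand on a full head of hair in realtime is such a big claim.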
 
Well, I'm surprised. Although stuff like hair will be fully modeled eventually, I didn't think it was being done in realtime yet. I need to get myself a dx9 card and try these demos out.
 
Got a link to that, Cyberman? I've gotta admit I find that a bit surprising too. You're basically saying they create and simulate the dynamics of thousands upon thousands of individual hair strands, in realtime... It's only in the last few years they could do it pre-calculated, and only even more recently calculate stand-in strands/guide hairs in realtime. Things slow down too much when it's just a few thousand.

I also think it's far too OTT; transparency-mapped images on polys for faked hair look just fine in realtime. I just can't see anyone giving some creature a full fur coat with every hair dynamically calculated when even Pixar can't do that in realtime yet.

So yeah, if you've got a link that would be great, cheers bud.
 
I heard that when Squaresoft made "Spirits Within", each strand of hair on the main character was individually modelled and moved. On a similar note, when Pixar made "Monsters, Inc." they created a program to simulate hair movement and let that handle all the fur on the big blue guy. Why didn't Squaresoft think of that? Seems to make a hell of a lot more sense to me.
 
Well, the issue here is the suggestion that all these hairs can be calculated in realtime. As I said, even Pixar couldn't do that; nobody to my knowledge has been able to yet, not in realtime. The processing power required for something like that would be astronomical. Basically, if that were possible, then we should already be seeing realtime 3D with billions of polygons in each model, all zipping along at 130fps, cause that's a damn sight easier to do than calculating realistic hair in realtime, heh.

It's dead easy pre-calculated; all decent 3D apps can do it. Realtime, though, I'll believe it when Cyberman offers up a link proving it.

Oh, and Squaresoft bulled... they didn't animate each hair individually; they simply did a Pixar and claimed they did. (Pixar originally claimed they animated each individual hair on Sulley by hand... they later changed this and admitted most of it was done by software, while only guide hairs were manually animated in some areas, when the dynamics failed and/or they needed a specific effect that was simpler to do by hand than to let the software get right.) Squaresoft possibly used the same software Pixar used in some areas, which I'm gonna guess was Maya's. That was used in both, and while Pixar wrote their own hair program, they would have used early Maya ones to test things quickly. Quite possibly Joe Alter's Shave and a Haircut, since it is incredibly powerful (that's what XSI has included in it).
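The guide-hair approach mentioned above can be sketched like this (illustrative Python, not any real tool's API): only a handful of guide strands are physically simulated, and every rendered strand is a cheap weighted blend of its nearby guides, which is what makes a full coat of hair tractable.

```python
def interpolate_strand(guides, weights):
    """Build one rendered hair strand as a weighted blend of guide strands.

    guides: list of guide strands, each a list of (x, y) points of equal
    length, produced by the physics simulation.
    weights: one blend weight per guide, summing to 1.
    Only the guides are simulated - the thousands of rendered strands
    are cheap blends like this one."""
    n_points = len(guides[0])
    strand = []
    for i in range(n_points):
        x = sum(w * g[i][0] for g, w in zip(guides, weights))
        y = sum(w * g[i][1] for g, w in zip(guides, weights))
        strand.append((x, y))
    return strand
```

So the expensive per-strand dynamics runs on maybe a few hundred guides, while the interpolation above fills in the rest each frame.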
 
It's in 3DMark 2003... and if you watch it you'll see they aren't quite doing the actual number of hairs on a human head yet, but they are all individual strands and physically simulated. It doesn't look as good as you're probably expecting...
 
First and foremost, major animation companies like Pixar and Square Pictures (which is dead, BTW) use immensely powerful workstation PCs. I'm talking dual Opterons sporting a FireGL video card here.

Second, I ran the 3DMark demo on my PC (I have an Athlon XP 3000+ with a Radeon 9800 SE and 512MB of DDR RAM) and saw the geometry-based hair. Not only did that demo run at something like 10fps (maybe more if I had a 9800 XT), but the hair wasn't that impressive. I would rather use a normal-mapped alpha texture for hair and get better performance.

Third, I've used Maya Unlimited 4.5 on my school's PC (we also just got 3ds Max 6), and I've actually made things out of fur. It does not model every strand of fur, nor does it generate it in realtime.
 
(attached screenshot: shot0000.JPG)
 
Pendragon said:
Perhaps I'm misinterpreting this, and I won't claim to know more than you do, but to my knowledge Pixar never renders in realtime. In fact, from what I've read, it can take up to 90 hours for their render farm to render a single frame, so even the Pixar connection doesn't seem anywhere near capable of it.

If Pixar could do it in realtime, they would; it all boils down to money. So the fact that they can't means it can't be possible. With the talent they have working for them, they'd have adopted it if it were a viable solution, and it would be, since realtime fur at that level would save them months of render time, which means more profit for them.

Anyhoo, Cyberman replied and admitted it wasn't at that level, so it will be nothing more than a few hairs here and there, which is what I thought it would be.
 
Even if it were possible to render it in realtime, Pixar would still send it through the rendering process. So for movie animation studios, it's neither here nor hair. (haha)
 
FictiousWill said:
Even if it were possible to render it in realtime, Pixar would still send it through the rendering process. So for movie animation studios, it's neither here nor hair. (haha)
Not always. What I'm saying is that if realtime hair could do what pre-calculated hair can, e.g. look as good, then there would be no reason to do it pre-calculated at all. Why do it slowly when you can do it in realtime, if both methods look the same?

I personally don't see the point of realtime hair/fur dynamics until it's at the same or higher quality as pre-calculated.
 