HL2 has parallax mapping

I still don't see how HL2 has parallax mapping; looking at the buildings in the recent screenshots, the brick sides look really flat.
 
Quixote said:
I still don't see how HL2 has parallax mapping; looking at the buildings in the recent screenshots, the brick sides look really flat.

That's the big question. The screens have a lot of the effects turned off and no one knows why.

Maybe to show how the game looks on a lower-spec PC?
Anyway, that's already being debated in other threads.


Is Parallax the same tech as shown in the video where they texture a 1-poly wall to make it look high-poly?
 
Nope, that's bump mapped subdivision surfaces... or something like that.
 
screw "insert various names here" mapping and gimme lots and lots of geometry.
 
People, displacement mapping is NOT normal mapping.

Displacement mapping is when, for example, a rocket explodes on the ground and a hole appears. The engine looks up a displacement map (a grayscale picture) and alters the ground's geometry the way it's drawn in the displacement map.
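To make that concrete, here's a tiny made-up example in C++ (not code from any real engine, all the names are mine) where a grayscale height map actually moves the ground vertices:

// Illustration only: a flat 4x4 ground grid displaced by a grayscale
// height map (0 = black = low, 255 = white = high).
#include <cstdio>

int main() {
    const int W = 4, H = 4;
    const unsigned char height_map[H][W] = {   // pretend this is the rocket crater
        {200, 200, 200, 200},
        {200,  60,  60, 200},
        {200,  60,  60, 200},
        {200, 200, 200, 200},
    };
    const float max_height = 16.0f;            // world units at pure white

    float ground_y[H][W];                      // vertex heights that really change
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c)
            ground_y[r][c] = (height_map[r][c] / 255.0f) * max_height;

    for (int r = 0; r < H; ++r) {              // print the displaced geometry
        for (int c = 0; c < W; ++c)
            std::printf("%5.1f ", ground_y[r][c]);
        std::printf("\n");
    }
    return 0;
}

The point is that the mesh itself changes; that's exactly the cost parallax mapping avoids.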

Normal mapping is when you want to add some cool detail but don't want to add lots of extra polys. You make a normal map, a picture which contains the polygons painted with different colours; each colour represents the angle that polygon is facing (the polygon's "normal"). The engine then applies this map to the surface as lighting information, so that shadows and lighting aren't applied by the exact geometry but by the normal map. This can be used with radiosity (lighting and shadows are calculated when you compile your map and baked into the BSP file), which is called normal mapped radiosity. Or it can be used with real-time lighting; in that case it looks nicer because it's real time, but it's a lot slower.

Parallax mapping is basically displacement mapping, but here the engine doesn't mess with the geometry; instead it just alters the texture to look as if it's not a flat surface.

So parallax mapping is to displacement mapping as a high-poly model is to normal mapping.
 
That last sentence makes about as much sense as a penguin reading Shakespeare while balancing on a football (standing on the pointy end) being used in an advertisement for HL2.
 
Dile said:
People, displacement mapping is NOT normal mapping.

Displacement mapping is when, for example, a rocket explodes on the ground and a hole appears. The engine looks up a displacement map (a grayscale picture) and alters the ground's geometry the way it's drawn in the displacement map.

Normal mapping is when you want to add some cool detail but don't want to add lots of extra polys. You make a normal map, a picture which contains the polygons painted with different colours; each colour represents the angle that polygon is facing (the polygon's "normal"). The engine then applies this map to the surface as lighting information, so that shadows and lighting aren't applied by the exact geometry but by the normal map. This can be used with radiosity (lighting and shadows are calculated when you compile your map and baked into the BSP file), which is called normal mapped radiosity. Or it can be used with real-time lighting; in that case it looks nicer because it's real time, but it's a lot slower.

Parallax mapping is basically displacement mapping, but here the engine doesn't mess with the geometry; instead it just alters the texture to look as if it's not a flat surface.

So parallax mapping is to displacement mapping as a high-poly model is to normal mapping.


That makes no sense; don't explain stuff you don't really understand.
 
Fenric said:
If qckbeam is reading this, ask him to give you a link to a parallax mapping demo. You'll see that it isn't in fact all that special, just a cool trick that looks rather nice. The question is, will mod teams or Valve themselves make good use of it?

I've seen five demos of it now and it does make a difference; it's still down to the artist to make good use of it, I suppose.
 
The only big downsides of parallax mapping are the same problems that come with normal maps... they don't look right at the edges of surfaces, and they don't look right at high angles of incidence. Though with some clever tricks and proper level design, you can avoid these pitfalls.

Other than those two problems parallax mapping does a good job of simulating actual displacement without as much of a performance cost.
 
Is parallax mapping related to the 3D skybox thing Gabe talked about?
 
It is a more advanced way of making a flat surface appear to have depth.
 
To clarify, since a lot of people don't seem to understand:

Bump Mapping: A technique to make a surface look more geometrically complex than it really is. Uses a grayscale image where white is high and black is low.

Normal Mapping (aka Dot-3 Bump Mapping): This is a technique which stores normals in an RGB image (R = x, G = y, B = z) and uses the normals in the texture, instead of taking the vertex normals and interpolating them, to calculate the lighting. The result is per-pixel lighting, which is generally better than grayscale bump mapping.

Displacement Mapping: This technique actually modifies the geometry. The map is a grayscale image which represents the y component of a vertex. At render time the map is sampled and the vertices on the object are displaced. This is used for terrain engines (height maps), among other things.

Parallax Bump Mapping (Virtual Displacement Mapping, Offset Bump Mapping): This technique uses a map which stores normals in the RGB components of the image and the height map in the alpha component. What it does is correct the surface depending on the viewing angle by warping the texture coordinates. It takes into account the height of the surface where a given pixel is rendered and offsets it to where it would be if the object were 3D. This produces a parallax effect when in movement. It is relatively fast and can be implemented easily in SM2.0 (so you don't need SM3.0 for it). It's only about a 15% performance hit compared to regular normal mapping.

Hope this clears up some stuff. Any of these effects (apart from displacement mapping) could be implemented with Shader Model 2. Since HL2 fully supports SM2.0, it will support parallax bump mapping; you just need someone to program the shader.
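If it helps, here's the core of the texture-coordinate warp written as plain C++ instead of a real pixel shader. It's only a rough sketch, the names and numbers are made up, and the view vector is assumed to already be in tangent space:

// Rough sketch only: the per-pixel math behind parallax bump mapping.
// The height would come from the texture's alpha channel in a real shader.
#include <cstdio>

struct vec2 { float u, v; };
struct vec3 { float x, y, z; };

vec2 parallax_uv(vec2 uv, vec3 view_ts, float height_sample,
                 float scale = 0.04f, float bias = -0.02f)
{
    float h = height_sample * scale + bias;        // scale/bias the stored height
    // Shift the texture coordinate toward the eye in proportion to the height.
    vec2 shifted = { uv.u + h * view_ts.x / view_ts.z,
                     uv.v + h * view_ts.y / view_ts.z };
    return shifted;
}

int main() {
    vec3 view = { 0.5f, 0.0f, 0.85f };               // looking at the wall at an angle
    vec2 uv   = { 0.25f, 0.25f };
    vec2 shifted_uv = parallax_uv(uv, view, 1.0f);   // 1.0 = maximum height in the alpha
    std::printf("(%.3f, %.3f) -> (%.3f, %.3f)\n", uv.u, uv.v, shifted_uv.u, shifted_uv.v);
    return 0;
}

You then sample the diffuse and normal maps at the shifted coordinate instead of the original one, which is what produces the parallax effect as the view moves.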
 
MadMechwarrior said:
To clarify, since a lot of people don't seem to understand:

Bump Mapping: A technique to make a surface look more geometrically complex than it really is. Uses a grayscale image where white is high and black is low.

Normal Mapping (aka Dot-3 Bump Mapping): This is a technique which stores normals in an RGB image (R = x, G = y, B = z) and uses the normals in the texture, instead of taking the vertex normals and interpolating them, to calculate the lighting. The result is per-pixel lighting, which is generally better than grayscale bump mapping.

Displacement Mapping: This technique actually modifies the geometry. The map is a grayscale image which represents the y component of a vertex. At render time the map is sampled and the vertices on the object are displaced. This is used for terrain engines (height maps), among other things.

Parallax Bump Mapping (Virtual Displacement Mapping, Offset Bump Mapping): This technique uses a map which stores normals in the RGB components of the image and the height map in the alpha component. What it does is correct the surface depending on the viewing angle by warping the texture coordinates. It takes into account the height of the surface where a given pixel is rendered and offsets it to where it would be if the object were 3D. This produces a parallax effect when in movement. It is relatively fast and can be implemented easily in SM2.0 (so you don't need SM3.0 for it). It's only about a 15% performance hit compared to regular normal mapping.

Hope this clears up some stuff. Any of these effects (apart from displacement mapping) could be implemented with Shader Model 2. Since HL2 fully supports SM2.0, it will support parallax bump mapping; you just need someone to program the shader.
My god, someone who gets it right! I was beginning to think it would never happen.

MadMechwarrior, thank you. It really is a nice change to meet someone who can make a post and get the details correct. Great post. I just hope certain people understand it.
 
Well, I meant the same general thing about the Parallax Normal Mapping... but I think MMW said it more eloquently...
 
Hey... now that I come to think of it... PC Gamer referred to Source using Normal Mapping to make flat ground look uneven and bumpy... I always thought that would look weird unless there was some kind of controlled distortion to make it look like bumps in front obscured bumps behind...

Maybe it uses Parallax Normal Mapping for the terrain?
 
Bah, I guess nobody bothered to click on the link I gave to a PDF that explained it in detail and provided two proof of concept shaders.

EDIT: Brian Damage: I think they were talking about something else, most likely a height map that is used to move the vertices up and down (using the CPU, of course). You can see them demo it at the beginning of the tech video.
 
Hold on, I'll see if I can find the quote... first I need to find the mag...
 
Okay, here it is:

PC Gamer said:
The new terrain system employs bump mapping to trick our eyes into believing flat surfaces are uneven and underwater surfaces appear refracted and distorted.

And one of the VALVe dudes said something about the Strider's main weapon's warping effect using a Normal Map, I think...
 
Here's a question. All normal/bump/parallax mapping techniques are pixel shading techniques. But is all pixel shading one of the normal/bump/parallax mapping techniques?
 
I dunno... aren't radiosity and similar things also pixel shader effects?
 
You can have dynamic pixel shader effects (dynamic lights), so no, not all pixel shaders are baked up beforehand.
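For example, per-pixel lighting with a normal map and a light direction that can change every frame looks roughly like this (a hypothetical C++ sketch with made-up values, not any engine's actual shader):

// Rough sketch: decode one normal-map texel (RGB mapped from 0..255 to -1..1)
// and light it with a direction that could move every frame -- nothing baked.
#include <cstdio>
#include <cmath>

struct vec3 { float x, y, z; };

vec3 normalize(vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main() {
    // One texel of a normal map: (128, 128, 255) would be "straight up".
    unsigned char r = 160, g = 128, b = 230;
    vec3 n_raw = { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
    vec3 n = normalize(n_raw);

    vec3 l_raw = { 0.3f, 0.5f, 0.8f };              // dynamic light direction
    vec3 light = normalize(l_raw);

    float intensity = dot(n, light);                // per-pixel N dot L
    if (intensity < 0.0f) intensity = 0.0f;         // clamp like the shader would

    std::printf("diffuse intensity: %.3f\n", intensity);
    return 0;
}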
 
I hope it doesn't have the kind that UE3 has, virtual displacement or parallax mapping. BECAUSE NO ONE CAN RUN THAT, 'CAUSE IT'S A STUPID FRICKIN' PS3.0 EFFECT!!!!!!!!!

If they are adding this effect there's gonna be another delay for something nobody can run!
 
*cough* GF6800U *cough*

If they put PS3.0 effects in, ATi might get a little peeved :) I hope they don't let ATi stop them from making the engine better :/
 
*cough* X800 XT *cough*....so?

I hope Valve said they have parallax mapping because they have the equivalent that works on PS2.0, and they just didn't think people would know about it.
 
Well, as far as I know, cards like the R9600 Pro can run parallax mapping just fine... I've got a demo of it here that runs at a cool 90 FPS...
 
Brian Damage said:
Well, as far as I know, cards like the R9600 Pro can run parallax mapping just fine... I've got a demo of it here that runs at a cool 90 FPS...

Yeah, but that is only in a small area with no AI, no collision detection, no character models, etc.
 
...

AI, physics and actual geometry are all done on the CPU, as far as I know... aren't pixel shaders handled by the GPU on the graphics card?
 
That's not parallax mapping... I have that demo too; it's something else that has the same effect, except there's stretching of the texture at sharp angles.
 
Is that the demo with the two boxes and the floor with the bumpy textures on them?

Says it's Parallax Mapping in the readme file...
 
Brian Damage said:
...

AI, physics and actual geometry are all done on the CPU, as far as I know... aren't pixel shaders handled by the GPU on the graphics card?

Geometry is sort of done on the GPU as well, along with any kind of colour blending operation, fixed-function transform (I think that's the geometry part) & lighting, and dynamic pixel/vertex shaders which replace the fixed-function algorithms with programmable ones.

When it comes to geometry I'm not really sure how much is done on the graphics card. I think it depends on how you program your graphics engine. I remember seeing a demo where some bricks meld into each other, and you could choose to do the vertex operations on either the CPU or the GPU: a vertex shader was used for the GPU, and the algorithm the CPU used was just done in C++.
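For example, doing the vertex operations on the CPU is just a loop over the vertices; here's a hypothetical C++ sketch (nothing from that demo), and a vertex shader would run the same per-vertex math on the GPU instead:

// Made-up example: rotating a few vertices on the CPU. A vertex shader
// would perform the same matrix math per vertex on the graphics card.
#include <cstdio>
#include <cmath>

struct vec3 { float x, y, z; };

vec3 rotate_y(vec3 v, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

int main() {
    vec3 verts[3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };
    for (vec3& v : verts) {
        v = rotate_y(v, 0.5f);                 // the "vertex operation", done on the CPU
        std::printf("%.2f %.2f %.2f\n", v.x, v.y, v.z);
    }
    return 0;
}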

As for that PC Gamer quote, I think it's a bit misleading.
 
Didn't someone say that Gabe Newell said you could spend hours searching City 17, that's how big it is?
 
Well, the AI and physics are still done on the CPU, I'm sure of that...

I dunno how the quote can be misleading... inaccurate, maybe, but it says what it says pretty clearly...
 
All those different effects sound really cool. Why don't they implement them on the buildings!?! Has anyone asked Doug what the system specs were for the demo that was shown to PC Gamer?
 
Parallax mapping is a really simple effect; I wrote a shader for it last night in PS2.0 and it runs pretty well even with the software rasterizer (I have a GF4, so no PS2.0 for me :( ). And parallax mapping does have weird distortions at low angles because the offset becomes almost infinite. If you notice, in the Unreal 3 engine video they never really show it at less than a 30-degree angle. It could probably be done in PS1.1 with multiple passes too; it's not really a next-gen effect or anything, just crafty usage of shaders, so I can guarantee you HL2 will support it.

Shader Model 3 is really just for performance optimization; all of the effects can be done in Shader Model 2, and usually with only one pass (most shaders don't go past 96 instructions).

As far as image quality is concerned, SM3.0 is just a gimmick. No game will use its full power for a long while, and long shaders might choke up cards (SM3 allows up to 65536 instructions).

True SM3 effects (ones that aren't really portable down to SM2) will take a few years to start popping up.
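To show what I mean about the offset at low angles, here's a quick C++ sketch with made-up numbers (view vector in tangent space). The classic offset divides by the z component of the view vector, so it blows up near grazing angles; the "offset limiting" variation some demos use simply drops the divide so it stays bounded:

// Rough illustration: compare the classic parallax offset to the
// offset-limited version as the view approaches a grazing angle.
#include <cstdio>
#include <cmath>

int main() {
    const float PI = 3.14159265f;
    const float h  = 0.03f;                    // scaled/biased height sample
    for (int deg = 60; deg <= 89; deg += 5) {
        float a  = deg * PI / 180.0f;          // angle away from the surface normal
        float vx = std::sin(a), vz = std::cos(a);
        float full    = h * vx / vz;           // classic offset: grows without bound
        float limited = h * vx;                // offset-limited: stays small
        std::printf("%2d deg  full %7.3f  limited %.3f\n", deg, full, limited);
    }
    return 0;
}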
 