Expectations of HL3 engine

Oh BOY!! Another breakdown of posts made by graphics newbies!!

U3 has a better per-pixel implementation inside parallax maps and bump/normal maps for radiosity effects. The shadow engine is basically the same precalculated vertex shadows. It's not real time, i.e. inverted Z.

Okay, seriously, you have NO idea what you just said there, do you? For one, parallax maps do not exist, but the height maps used for the parallax effect do. No game will be using accurate real-time radiosity for many years to come, since light reflections are WAY too costly on current hardware. The shadows are NOT done by "precalculated vertex shadows"; in fact, the very notion of a precalculated shadow is just plain STUPID (Doom3 uses shadow volumes, which are vertex-based and NOT precalculated). And finally, inverted Z is just a cheap, albeit slightly ineffective, trick for converting a normal map to a bump map!

Whatever DirectX of the time supports. id was the last holdout as a company that wasn't spoon-fed graphics technology from Microsoft.

I think we should first focus on what DX of THIS time supports, because currently DirectX 9 (or more specifically, Direct3D 9) still has a lot of unused potential, in that floating point surfaces haven't been used a lot, nor have things like post-processing effects.

Source already has parallax mapping. (I think it was in the CS:S beta and test map.) If the engine can handle PS 2.0, it can do parallax mapping/virtual displacement mapping/offset bump mapping.

From what I've seen so far Source doesn't have parallax mapping out of the box, but I'd be shocked if the renderer for Source couldn't be modded to just bind an extra shader to a surface. (Btw, kudos on knowing the three different names of the effect!)

I think it has something to do with the engine, how many textures it can display!

The number and quality of textures in a game are only a hardware limitation. Putting in an arbitrary limit would just be insanely stupid.

However, Doom3 has made the mistake of using low-quality models and textures for DX8, and because there is no system or card that can truly appreciate some of the DX9 stuff implemented, it doesn't look that great.

Low-quality models I'd wholeheartedly agree with, but low-quality textures? Play it on high quality and tell me its textures aren't good. Oh, and there are a lot of people out there who can appreciate the newest stuff implemented: it's called the Radeon 9600 (or up) and the GeForce FX 5200 (and up). Also, the models aren't scaled for any card whatsoever; there is only one character model in the game for each character.

It shows the sheer lack of a clue that so many people have and yet they still post...

You have no idea.... I lose a lot of faith in the board when I see people saying "Yeah, CS:S has HDR, I saw a bit of it on the wall of the tunnel".

Doom3 IS a DirectX engine, it supports it, therefore it is, and Doom3 supports DX8 - DX9.
OpenGL cannot match anything like DirectX9 at the moment, and Half-Life2 is also using a DirectX engine.
Therefore in this argument, since Doom3, Source and Unreal 3.0 are the leading FPS engines at the moment and all support DX, it's quite valid to use it in the example.
If Doom3 was made to draw upon Glide over DirectX then it's losing the battle already.
And like stated, Unreal 3.0's pretty aesthetics are due to DirectX9, and what makes the models look so good is the high polygons, multi-layered skins with opacity values and great texture and lighting.
Half-Life2 models have multi-layered skins and different opacity levels already, and Vampire the Masquea...whatever, uses Nvidia's new Shaders and also some of the lighting and shadow effects which Unreal 3.0 is boasting.

Posts like these help out too. Pi Mu Rho already picked it apart, so I won't do it here, except to add that this guy must have some AWESOME weed.


Anyways, that's all for now folks!
 
From what I've seen so far Source doesn't have parallax mapping out of the box, but I'd be shocked if the renderer for Source couldn't be modded to just bind an extra shader to a surface. (Btw, kudos on knowing the three different names of the effect!)

I know for sure that there is a console command for it, and I've seen pictures of it too. Whether CS:S really uses two maps (a normal map and a height/bump map) for it, or uses a trick to create the normal map from the bump map (or vice versa; I don't know if the walls are bump or normal mapped), I don't know, but it's there and definitely supported. It doesn't look as good as in UE3, though, but that's because of the texture resolution.
 
Got any screens of that? You have successfully obtained my interest :)
 
parallax/offset/holographic/displacement mapping (ner, i know 4 names :p ) is a pretty easy and cheap effect anyways; iirc it's only a few extra instructions in an assembler shader (i'll really have to re-read the thread on the OpenGL forum from the guy who came up with it), so adding it to Source would be easy enuff todo if it's not already present.

oh, the lack of floating point stuff is pretty much down to shoddy support (no filtering on float textures before the GF6800, useless support in the GFFX series, and no floating point frame buffer support pre-GF6800 either).

Post-processing is a bit underused; the Tron 2.0 glow effect is the best example of it, and shadow mapping kinda comes under it too (it's an image-space technique at least).
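the guts of that glow effect is just a bright-pass, a blur, then an additive recombine; here's a rough sketch of the bright-pass bit in HLSL (the sampler name and threshold are made up by me, this isn't Tron 2.0's actual shader):

    sampler2D sceneTex;      // the rendered frame
    float glowThreshold;     // e.g. 0.8; anything brighter than this glows

    float4 BrightPassPS(float2 uv : TEXCOORD0) : COLOR
    {
        float3 scene = tex2D(sceneTex, uv).rgb;
        // keep only the over-bright part of the image; this gets blurred
        // and added back over the frame to produce the halo
        float3 bright = max(scene - glowThreshold, 0.0);
        return float4(bright, 1);
    }

blur that result horizontally then vertically, add it back over the frame, and there's your glow.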
 
Yeah, it is a fairly simple effect, of that I have no doubt (esp. since I've seen it done in 3 HLSL instructions, one of which is just a tex2D lookup), and that's why I feel it could be added to Source.
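For illustration, here's roughly what I'd expect those instructions to expand to in HLSL; the sampler and constant names are made up, and this is the generic offset technique, not anything pulled from Source itself:

    sampler2D heightMap;    // the height/bump map driving the effect
    sampler2D diffuseMap;
    float parallaxScale;    // something small, e.g. 0.04
    float parallaxBias;     // typically around -0.02

    float4 ParallaxPS(float2 uv : TEXCOORD0,
                      float3 viewTS : TEXCOORD1) : COLOR  // view dir in tangent space
    {
        // sample the height, then shift the texture coordinate toward the
        // eye proportionally to that height: "nearer" texels slide more
        float height = tex2D(heightMap, uv).r * parallaxScale + parallaxBias;
        float2 offsetUV = uv + normalize(viewTS).xy * height;
        return tex2D(diffuseMap, offsetUV);
    }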

And who came up with calling it holographic mapping?

Edit: If you want to refresh your memory, I did find the PDF that educated me on parallax mapping as it is implemented in OpenGL.
 
Holographic mapping was the fault of Tim Sweeney; he referred to it as such, and it pretty much is one, as the image is dependent on the angle of the camera, which holograms also are.

edit: I've got the code to the OpenGL implementation 'somewhere' using the assembler interface; i'm pretty sure i've got the pdf somewhere as well
 
mat_parallaxmap 1 is the console command.

Can't find screenshots of it atm.

BTW, an offtopic-ish question: does UE3 use realtime SSS? As in that demo of the dragon whose wings partially let light through and blocked it in areas where there were supposed to be veins.
 
SubSurface Scattering

gives the illusion of depth to skin, making it look more realistic.
 
ah right...
depends on your def of 'real time' i guess; the factors are probably encoded into a texture and a lookup is done to work out the scattering factor (with a bit of maths to allow for light/camera positioning)
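something like this maybe (total guesswork on my part, all the names are mine):

    sampler2D thicknessMap;  // precomputed per-texel scattering/thickness factor
    sampler2D scatterLUT;    // 2D lookup: N.L on one axis, thickness on the other

    float4 ScatterPS(float2 uv : TEXCOORD0,
                     float3 normal : TEXCOORD1,
                     float3 lightDir : TEXCOORD2) : COLOR
    {
        float ndotl = dot(normalize(normal), normalize(lightDir));
        float thickness = tex2D(thicknessMap, uv).r;
        // the lookup texture encodes how much light bleeds through for a
        // given incidence angle and thickness - the "bit of maths" is baked in
        float3 scatter = tex2D(scatterLUT, float2(ndotl * 0.5 + 0.5, thickness)).rgb;
        return float4(scatter, 1);
    }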

*wonders how close it is to atmosphere scattering*
coz i've got an article on that :D
 
Daiceman9 said:
I wouldn't be surprised if separate "physics cards" came out, kinda like how vid cards first got started; they were getting too intense for the cpu to handle and needed something else to calculate them.

Hey, that's a pretty cool idea. Wouldn't have happened to get it from http://www.halflife2.net/forums/showthread.php?t=36568 would you? :)

[EDIT:] I don't care if you got it from here or if you came up with it on your own, I just thought it funny that mine got moved to hardware while this whole thread is still in HL2 discussion, even though we're talking about HL3. :)
 
Wildhound said:
Valve have already stated that HL3 will be on Source. Probably updated in many ways, however.
Nothing more to add.
 
I'm sorry but don't be fanboys. The Unreal 3 engine is clearly better than Source. Whether it will still be better by the time Unreal 3-engine games are out in 2006 and Source has been updated to 2006 standards is another matter, but still ... I mean, you're comparing an engine that won't be viable until 2006 to one that we'll be playing a game from by the end of the year. Of course the one that's built for 2 years later is going to be better.

Regardless, it doesn't matter anyway. Unreal 2 had a class-leading engine and look how shite that game was.
 
cypher no offense to your knowledge. HL2 does use parallax. U3 also uses parallax. HL2 uses a method for creating radiosity bubbles around the models. This was stated a long, long time ago, yet everyone said it wasn't possible to have r2r. They also said it wasn't possible to have realtime HDR. U3 uses a method to create RtR effects on their models; i'm not sure what the method is for either of them to create those. It is clear that U3's r2r method on the character models interacts a lot better on the per-pixel level than HL2's. Does this clarify what i stated to you?

As for your comment on precalculated shadows, you don't understand. The light sources are precalculated FOR the projected shadows. That is very much unlike inverted Z (known as ZFail or Carmack's Reverse, the shadow volumes method), which is what Doom 3 uses.
 
ailevation said:
:dozey: I think it's safe to say that Source is better than the Unreal 3 engine. Well, it will most likely be better sooner or later.

UE3 > all

However, with it being restrained by the current hardware standards of today, and maybe even those of two years down the road, it's still a big if.
 
UE3 = nothing

There's no games currently available that use it. It won't even run acceptably on today's hardware. It's a PR stunt, nothing more.
 
Petabyte said:
Um, it's not a stunt, it's a game engine :)

Yes, it's a game engine that currently has no games available using it. Epic's unveiling of the engine this early was purely a PR stunt
 
Pi Mu Rho said:
Yes, it's a game engine that currently has no games available using it. Epic's unveiling of the engine this early was purely a PR stunt

Yup, since they know in a few years there are going to be game engines that are just as good as (if not better than) U3.
 
Pi Mu Rho said:
Yes, it's a game engine that currently has no games available using it. Epic's unveiling of the engine this early was purely a PR stunt


if u r in the business of selling engines there is nothing wrong with showing off ur product to attract customers. E3 is a trade show, after all.

it takes 2 years plus to make a game so targeting ur new engine at hardware that wont be mainstream until 2006 is about as sensible as you can get. and the games that come out in 2006 won't be using ur new engine unless they can license it in 2004, will they?

i don't see what about that is a stunt.

edit: I apologise for the patronizing tone, Pi Mu Rho.
bobvodka, just read back thru the thread. i agree, boost rocks. i'd hug boost::function first, though.
 
My point is that the other top-end game engines of 2006 will have the same feature set as UE3.0 - Epic's PR stunt was showing them off now
 
and my point is they have to show it off now to get people to license it.

and remember, they showed it off at E3, which is a trade show, attended by the kind of ppl they want to license the engine to.

no one will license it for their developed-for-2006-release games if they don't show it off till 2006.
 
i just read the ati doc on HL2 Source engine shading models.

http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf

it explains a few things, like why the bumped specular mapping on the weapon models keeps jarringly changing as the player moves around a level (level designers place cubemap reflection sample points). it's really obvious at the start of the ravenholm bink.

but i was totally unaware they were using the "real time radiosity" normal mapping.

it's not really real-time-radiosity, just real-time sampling of precomputed radiosity. it's still very clever, but i dunno if it was worth the effort.

from the binks, i just thought it was plain texture maps, with bumped specular. maybe the game will be a lot more impressive.
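if i'm reading the pdf right, the world-surface combine boils down to something like this (the three basis directions are from the doc; everything else here is my own sketch, not Valve's actual shader):

    sampler2D normalMap;
    sampler2D lightmapA;   // one baked lightmap per basis direction
    sampler2D lightmapB;
    sampler2D lightmapC;

    // the three tangent-space directions the radiosity is baked along
    static const float3 bumpBasis[3] = {
        float3( 0.8165,  0.0,     0.5774),   // ( sqrt(2/3),  0,         1/sqrt(3) )
        float3(-0.4082,  0.7071,  0.5774),   // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3) )
        float3(-0.4082, -0.7071,  0.5774)    // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3) )
    };

    float4 RadiosityBumpPS(float2 texUV : TEXCOORD0,
                           float2 lmUV  : TEXCOORD1) : COLOR
    {
        float3 n = tex2D(normalMap, texUV).xyz * 2 - 1;  // tangent-space normal
        // weight each baked lightmap by how much the normal leans toward its basis
        float3 dp = saturate(float3(dot(n, bumpBasis[0]),
                                    dot(n, bumpBasis[1]),
                                    dot(n, bumpBasis[2])));
        float3 light = dp.x * tex2D(lightmapA, lmUV).rgb
                     + dp.y * tex2D(lightmapB, lmUV).rgb
                     + dp.z * tex2D(lightmapC, lmUV).rgb;
        return float4(light, 1);
    }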
 
yeah, the lighting in HL2 is pretty sweet; it certainly works better on static models of course, but it's nice, and thanks to the pdf I wouldn't mind copying it for my own reasons at a later date ;)
(I do love game companies who give out the details like that)

Their shadow algo is a bit poopy however; I've noticed numerous bugs with it on dust (mostly with the shadows appearing under the bridge as someone runs above), so i'm not sure how they are doing that coz it baffles the heck out of me ;)

Another place you can see the usage of the cubemap reflections (at least I assume it's the cube mapping they are using) is in the reflections of the gun sights: if you watch carefully you'll see it display one view for a while and then jump to another, display it for a while, jump to another, etc. Unless you are paying attention you're not going to notice, so it's a clever way of doing it; maybe when gfx cards get a little more powerful, real time reflections could be added (with reduced lod on the shaders/models of course).
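the sampling side of that is trivial btw, something like this (made-up names obviously):

    samplerCUBE envMap;  // the cubemap baked at the nearest sample point

    float4 ReflectPS(float3 normal : TEXCOORD0,
                     float3 viewDir : TEXCOORD1) : COLOR  // surface-to-eye
    {
        // mirror the view vector about the normal and fetch the baked environment
        float3 r = reflect(-normalize(viewDir), normalize(normal));
        return texCUBE(envMap, r);
    }

the 'jump' you see is just envMap being rebound to the cubemap of whichever sample point is nearest.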

and boost::function does indeed rock, although i've mostly used boost::bind so far for iterating over containers. Heck, the whole boost library rocks in general :D
 
their shadow algorithm renders the model from the (brightest?) light's perspective to a texture, then projects that texture onto nearby BSP surfaces - prolly projects the model's bbox away from the light to determine which surfaces.

it ignores moving entities, only checks BSP surfaces in map file, so if the bridge is dynamic, shadows go straight thru it.
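in shader terms the receiving surface prolly does something like this (again, just guessing, all the names are mine):

    sampler2D shadowTex;      // the model silhouette rendered from the light
    float4x4 lightViewProj;   // light's view * projection matrix

    float4 ShadowReceiverPS(float3 worldPos : TEXCOORD0,
                            float4 baseColor : COLOR0) : COLOR
    {
        // project the receiver position into the light's clip space...
        float4 sp = mul(float4(worldPos, 1), lightViewProj);
        // ...then remap to [0,1] texture space and fetch the silhouette
        float2 uv = sp.xy / sp.w * float2(0.5, -0.5) + 0.5;
        float shadow = tex2D(shadowTex, uv).a;
        return baseColor * (1 - shadow * 0.5);  // darken where the silhouette lands
    }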

edit: late breaking info (first review?) says they've fixed the overlapping shadow misfeature - prolly using stencil or dest alpha buffer imho. don't see the moving entity prob going away though.

btw i don't have inside info, just guessing.
 
Probably lots of people have already said this, but...

My expectation for the Half-Life 3 graphics is that they'll be equal to Unreal Engine 3.
 
shad3r said:
their shadow algorithm renders the model from the (brightest?) light's perspective to a texture, then projects that texture onto nearby BSP surfaces - prolly projects the model's bbox away from the light to determine which surfaces.

it ignores moving entities, only checks BSP surfaces in map file, so if the bridge is dynamic, shadows go straight thru it.

Yeah, I recall Pimpypoo muttering something about that system (which is fair enuff, it's cheap and cheerful but does suffer from issues), however if that bridge/underpass on CS is dynamic then something very crazy is going on ;) I'd say without fail that it's a BSP object, so as to how the shadows were passing through the top section onto the lower walls i've no idea...
 
You can turn off map entities through the shift+F1 menu. Turn on cheats first.
 
I Expect Half Life 3 To Cook Me Breakfast And Manage My Money
 
HL2 does use parallax. U3 also uses parallax.

That I never denied; I just didn't like how you called them parallax MAPS as opposed to the parallax EFFECT, which does use height maps, and how you implied there was a distinction between parallax and height maps.

HL2 uses a method for creating radiosity bubbles around the models. This was stated a long long time ago, yet everyone said it wasn't possible to have r2r. They also said it wasn't possible to have realtime hdr also. U3 uses a method to create RtR effects on their models, i'm not sure what the method is for either to create those. It is clear that U3's r2r method on the character models interacts alot better on the per-pixel level than HL2's. Does this clearify what i stated to you?

I'm gonna go out on a limb and guess that by r2r you mean real-time radiosity? Because if so, I can definitely imagine a radiosity bubble used for the models, and in fact I can see it being fairly effective, allowing a deformed model to be *technically* usable in a PRT map (I'm sure you know I'm referring to precomputed radiance transfer) by collecting light values from those bubbles and transferring them onto the model itself.

It's definitely a good algo (if that is how it is done), but I personally prefer Doom3's style of lighting simply because it allows for dynamic lights much more easily, while still producing fairly similar and consistent results on the models and environment.

As for your comment on precalculated shadows, you don't understand. The light sources are precalculated, FOR the projected shadows. That is very much unlike inverted Z(known as ZFail or carmacks reverse, shadow volumes method), which is what doom 3 uses.

My conclusions mainly came from the fact that you called them "vertex shadows", so I thought you WERE referring to the usage of shadow volumes, as opposed to PRTs, which NOW I think you are referring to. Btw, I fail to see how Carmack's Reverse/ZFail (which isn't THE shadow volume method; it was a slight change developed to eliminate problems when the camera is inside the volume) can be called an inverted Z algo. The only place I HAVE seen the term inverted Z used is in reference to, as I mentioned earlier, cheaply converting a normal map to a height map.
 
Cypher19, read the pdf i linked above; it explains Valve's "real time radiosity" method.

it isn't actually real-time-radiosity, it is real time sampling of pre-computed radiosity. for the entity models, it's done with just a single colour for each major axis (equiv to a 1x1 pixel cube map).
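in code, that per-axis colour trick would come out something like this (my own sketch of the idea, not Valve's actual source):

    float3 ambientCube[6];   // +x, -x, +y, -y, +z, -z colours for this entity

    float3 SampleAmbientCube(float3 n)
    {
        // blend the six axis colours, weighted by the squared normal components,
        // picking the positive or negative face per axis by the normal's sign
        float3 nSq = n * n;
        float3 isPos = step(0, n);   // 1 where the component is positive
        return nSq.x * lerp(ambientCube[1], ambientCube[0], isPos.x)
             + nSq.y * lerp(ambientCube[3], ambientCube[2], isPos.y)
             + nSq.z * lerp(ambientCube[5], ambientCube[4], isPos.z);
    }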
 