Cheating or not?

Will VALVe implement hardware optimisations?

  • Yes!

    Votes: 9 50.0%
  • No!

    Votes: 4 22.2%
  • maybe, maybe not.

    Votes: 0 0.0%
  • Who cares? Both cards will run great!

    Votes: 5 27.8%

  • Total voters
    18

mortiz

Do you think VALVe will implement ATi hardware optimisations for Half-Life 2?

This isn't an nVidia vs ATi thread, so don't even start. I'm just interested in the general consensus among the members of the board. How honest is VALVe?
 
There isn't much gain in doing that, I don't think.
Once the game is sold, it shouldn't be a concern to make one version run better than another.
 
Well, if they don't cheat, the 6800 will be awesome in HL2. Maybe not as good as the X800, but it'll run max settings with AA/AF, no doubt, and then Valve will look like fools. If they do cheat, well, then all the people who didn't let prejudice get in the way and bought a 6800 will be pretty mad, because there is no way that the 6800, which is right up there with the X800 and performs just as well in all the other games, would be bad in HL2 without Valve cheating. I'm probably gonna get a 6800 non-GT unless the GT is only $300 by Christmas. I'd be mad if Valve went ahead and made the 6800 perform badly. K, there's my rant.
 
No... the only optimizations should be the ones to bring the NV30 up to speed.
 
The real question is whether they will utilize NV40 hardware to its fullest potential.

To truly utilize the 6800, they need to enable Shader Model 3.0 support and also use a mix of FP16/FP32 (where FP16 does not cause precision errors), as the SM3.0 spec allows. Many games are using SM3.0, including one based on the HL2 engine - Vampire: Bloodlines. But will HL2 itself use it?

Also, will they use Nvidia's FP16 blending to get true OpenEXR HDR, or will they exclusively use the older, lower-quality HDR technique (GeForce FX/X800/9700/9800 series)? FarCry is using OpenEXR HDR in the 1.3 patch...

If they run the game in SM2.0 with 3Dc support and low-quality HDR, it will obviously be more optimized for ATI cards, as 3Dc isn't even officially in the DX9 spec, and HDR/shader technology has moved on. HL2 will basically be dated out of the gate in terms of DX9 technology if this is the case, with games like FarCry surpassing it.
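For reference, here's roughly what detecting those two things looks like on the engine side under plain DX9 - just a sketch using the standard caps/format queries, with variable names that are my own:

#include <windows.h>
#include <d3d9.h>

// Sketch: does this adapter offer an SM3.0 path and an FP16 render target
// that supports post-pixel-shader blending (the OpenEXR-style HDR path)?
bool SupportsSm30AndFp16Hdr(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;

    // NV40 reports shader model 3.0 here; R420 reports 2.0.
    bool sm30 = caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0) &&
                caps.VertexShaderVersion >= D3DVS_VERSION(3, 0);

    // FP16 (64-bit) render target with blending, needed for high-range HDR.
    bool fp16Blend = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
        D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F));

    return sm30 && fp16Blend;
}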
 
You mean that DX 9.0c wasn't tailored around Nvidia hardware? Hmmm... I seem to remember that Nvidia wanted to create their own DX 9 version. When Microsoft and ATI said no and created DX 9.0, Nvidia came crawling back and persuaded Microsoft to add DX 9.0c. DX 9.0c was designed by Nvidia for Nvidia hardware.

As for being outdated, that's not true. Doom 3 (an OpenGL game) uses technology equivalent to DX 8. In fact, the only equivalent DX 9 feature it uses is several fragment programs for the heat haze effect. You don't need every new DX 9 feature to make a game look good; it's about how you apply the technology. I'd rather have Crytek spend their time fixing the characters so they don't look like plastic than have them add SM 3.0 (if I had a 6800 card).

Also, SM 2.0 is supported on NV30, R300, and R400 cards, while SM 3.0 is only on NV40 cards. I think the bias would be obvious if a game developer spent more time implementing SM 3.0 features than SM 2.0 features.
 
blahblahblah said:
You mean that DX 9.0c wasn't tailored around Nvidia hardware? Hmmm... I seem to remember that Nvidia wanted to create their own DX 9 version. When Microsoft and ATI said no and created DX 9.0, Nvidia came crawling back and persuaded Microsoft to add DX 9.0c. DX 9.0c was designed by Nvidia for Nvidia hardware.

ATI is going to be using Shader Model 3.0 in the same fashion Nvidia is using it now in their next generation; they are simply behind this gen.

As for being outdated, that's not true. Doom 3 (an OpenGL game) uses technology equivalent to DX 8. In fact, the only equivalent DX 9 feature it uses is several fragment programs for the heat haze effect. You don't need every new DX 9 feature to make a game look good; it's about how you apply the technology. I'd rather have Crytek spend their time fixing the characters so they don't look like plastic than have them add SM 3.0 (if I had a 6800 card).

Maybe, but redesigning textures and the graphics engine is going to take far longer and cost a lot more money than writing/rewriting SM3.0 shaders.

Also, SM 2.0 is supported on NV30, R300, and R400 cards, while SM 3.0 is only on NV40 cards. I think the bias would be obvious if a game developer spent more time implementing SM 3.0 features than SM 2.0 features.

Not really; the game developer would simply be making his engine more future-oriented. NV50 and R520 will both use Shader Model 3.0 as well - ATI has stated this.
 
Well, you see, Nvidia makes their hardware 'special', just like Intel.
They create different functions that you have to program specifically for, or optimize for, to get that great performance a developer desires.

ATI runs the instructions in the standard order and doesn't have the same 'special' functions that developers really have to pay attention to. Valve stated early on that they liked that about the 9700 series.

The only thing Valve will implement that will be unique to ATI will be 3Dc compression. They will program for increased instruction length in PS2.0b and PS3.0 plus they did tons of optimizing for the FX hardware. ;)

blahblahblah is right about the standards. Nvidia left MS's DirectX group, which left ATI and MS to create the original DX9 standard, which required a minimum of 24-bit precision. Nvidia had no idea what to expect, so they created their hardware with 16- and 32-bit precision.

The modification to the DX9 standard known as DX9.0c was made to accommodate Nvidia's hardware in the specification when they rejoined the group.

Nvidia's hardware has only slightly higher specifications than ATI's X800 cards. ATI is just below the DX9.0c spec; they left out the features that would use a lot of power, but their cards can actually handle higher-quality textures.
 
Asus said:
Well, you see, Nvidia makes their hardware 'special', just like Intel.
They create different functions that you have to program specifically for, or optimize for, to get that great performance a developer desires.

This was true of the FX series, but is not true of the 6800 series.

ATI runs the instructions in the standard order and doesn't have the same 'special' functions that developers really have to pay attention to. Valve stated early on that they liked that about the 9700 series.

Actually, it is the reverse this generation. Certain DX9 features, Geometry Instancing for example, are only officially exposed through standard DX9 calls on Nvidia hardware via SM3.0; using Geometry Instancing on ATI hardware is not officially supported by DirectX 9. The same goes for 3Dc, face register support, etc. on the X800 series.

The only thing Valve will implement that will be unique to ATI will be 3Dc compression. They will program for increased instruction length in PS2.0b and PS3.0 plus they did tons of optimizing for the FX hardware. ;)

Hopefully they will program for greater instruction length and use some of the SM3.0 features like instancing, dynamic flow control, etc.
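For the curious, here's roughly what DX9.0c hardware instancing looks like on the programming side - a sketch based on the SDK documentation, with my own function and buffer names:

#include <windows.h>
#include <d3d9.h>

// Sketch: draw one mesh numInstances times, with per-instance data (e.g. a
// world matrix or position) supplied in a second vertex stream.
void DrawInstanced(IDirect3DDevice9* dev,
                   IDirect3DVertexBuffer9* meshVB, UINT meshStride,
                   IDirect3DIndexBuffer9* meshIB, UINT numVerts, UINT numTris,
                   IDirect3DVertexBuffer9* instanceVB, UINT instStride,
                   UINT numInstances)
{
    // Stream 0: geometry, repeated numInstances times.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
    dev->SetStreamSource(0, meshVB, 0, meshStride);

    // Stream 1: one element of per-instance data per instance.
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    dev->SetStreamSource(1, instanceVB, 0, instStride);

    dev->SetIndices(meshIB);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);

    // Restore normal (non-instanced) stream behaviour.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}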
 
Well, I hope nobody's saying now that 6800s are bad, because I know I'll be able to get a 6800 non-GT for Xmas, but I'm hoping I can get a 6800GT. The highest card I could maybe get is the 6800GT or X800 Pro, and only if the prices drop. So I'm going with the 6800, as its "budget" versions are better than the X800 "budget" version.
 
You fail to understand that specifications are simply a minimum; they don't limit what the hardware can actually do.
Geometry Instancing is not a minimum requirement under DX9.0, but it is under DX9.0c.
That has no bearing on whether ATI's hardware can or cannot do such a function. It's the exact same as Nvidia's hardware.

There are many more improvements in the X800 hardware that Nvidia claims to be its own. This is why ATI's hardware is just below the full DX9.0c spec.

6800s are not bad, but they do need special coding, just like the FX. For instance, the order of instructions it prefers is not standard, but the compiler reorders the instructions that developers write so they fit the hardware better, because Nvidia ran into issues where programmers were coding for the 9700 path and not the 5800 path. That was another reason performance was poor for the FX series, although their compiler fixes it at runtime. ;)
 
tranCendenZ said:
ATI is going to be using Shader Model 3.0 in the same fashion Nvidia is using it now in their next generation; they are simply behind this gen.

The people who think SM 3.0 is the greatest thing in the world are the same people who bought 256 MB video cards last year and thought they would make a difference.

Maybe, but redesigning textures and the graphics engine is going to take far longer and cost a lot more money than writing/rewriting SM3.0 shaders.

So you would rather have plastic-looking characters and an extra 3 FPS than a realistic-looking game? SM 3.0 doesn't add a whole lot to games besides a modest speed boost.

Does anybody know if Nvidia's mid- to low-end hardware will support SM 3.0?

Not really; the game developer would simply be making his engine more future-oriented. NV50 and R520 will both use Shader Model 3.0 as well - ATI has stated this.

Game developers are worried about developing their game, not licensing. Even for a company like id Software, only 20% of its revenue comes from engine licensing.
 
Asus said:
You fail to understand that specifications are simply a minimum; they don't limit what the hardware can actually do.
Geometry Instancing is not a minimum requirement under DX9.0, but it is under DX9.0c.
That has no bearing on whether ATI's hardware can or cannot do such a function. It's the exact same as Nvidia's hardware.

I don't think you understand what I'm saying. Some of ATI's X800 shader features, such as instancing and face register support, are not officially supported under *any* DX9 shader model (including SM2.0b) in any version of DirectX 9. They must be unofficially exposed through non-standard DX calls. The same goes for 3Dc.

Nvidia, on the other hand, has all of its features exposed through official SM3.0 DX9.0c calls (except for the UltraShadow II feature, which is OpenGL-only AFAIK).

There are many more improvements in the X800 hardware that Nvidia claims to be its own. This is why ATI's hardware is just below the full DX9.0c spec.

What exactly are you saying here? ATI isn't even close to meeting SM3.0 requirements.

You can read some of the differences between ps2.0b and ps3.0 here:
http://www.beyond3d.com/previews/nvidia/nv40/index.php?p=5
 
As far as I know, DX9.0c brought out the features for both PS2.0b and PS3.0.

I wasn't saying the exact numbers were close, but the improvements and extra features over PS2.0 are there, sitting between PS2.0 and PS3.0. Developers have said the extra instruction length will be enough in both profiles for quite a while.
 
Asus said:
As far as I know, DX9.0c brought out the features for both PS2.0b and PS3.0.

PS2.0b itself was exposed in DX9.0c, as was PS3.0. Neither was exposed in DX9.0b.

However, what I stated above still holds true: ATI has several DX features that are not part of the PS2.0b standard, such as instancing and face register support. According to the DX9.0c standards, only SM3.0 supports these features, so ATI is forced to make nonstandard calls in order to expose them; this is why they were not available at first - ATI decided to make these nonstandard features available due to the attention Nvidia's SM3.0 support was getting. They must also do the same for 3Dc support, as it is not part of the DX9 standards either. Therefore, ATI needs "special" or "nonstandard" programming far more than Nvidia this round.
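As I understand it from ATI's developer papers, exposing those nonstandard features means probing for made-up FOURCC "formats" rather than reading a caps bit - something along these lines (a sketch only; treat the exact FOURCCs and query parameters as my recollection, not gospel):

#include <windows.h>
#include <d3d9.h>

// Sketch: ATI reportedly signals support for 3Dc textures and for its SM2.0
// instancing path by answering "yes" to CheckDeviceFormat on special FOURCC
// codes, since neither feature has an official DX9 capability flag.
bool CheckAtiExtensions(IDirect3D9* d3d)
{
    const D3DFORMAT ATI2N = (D3DFORMAT)MAKEFOURCC('A', 'T', 'I', '2'); // 3Dc
    const D3DFORMAT INST  = (D3DFORMAT)MAKEFOURCC('I', 'N', 'S', 'T'); // instancing

    bool has3Dc = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        0, D3DRTYPE_TEXTURE, ATI2N));

    // The instancing probe uses a surface-type check in ATI's docs as I
    // remember them; the driver then enables the path via a render state.
    bool hasInstancing = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        0, D3DRTYPE_SURFACE, INST));

    return has3Dc && hasInstancing;
}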

I wasn't saying the exact numbers were close, but the improvements and extra features over PS2.0 are there, sitting between PS2.0 and PS3.0. Developers have said the extra instruction length will be enough in both profiles for quite a while.

The main thing that hurts PS2.0B is its lack of Dynamic Flow Control. That is a big one.
 
I don't see Valve using SM3. It may increase NVidia performance by 3-4 FPS, but nothing to warrant the time it would take. SM3 doesn't make the game look any better anyway... its only discernible feature being displacement mapping.

And I don't see why using SM2 would be "optimising it for ATi". I would much prefer the damn game come out three months earlier than have it support SM3.

You consistently mention the Far Cry engine as being more advanced, but the fact is the 1.2 patch was pulled because it was so buggy. I certainly don't think CryTek has set a golden standard for catering to your fanbase.
 
ShadowFox said:
I don't see Valve using SM3.

Well, Valve has likely already committed to using SM3 in the HL2 engine, as games based on the HL2 engine are announcing SM3 support. The question is whether it will be implemented in HL2 itself or not.

It may increase NVidia performance by 3-4 FPS, but nothing to warrant the time it would take. SM3 doesn't make the game look any better anyway... its only discernible feature being displacement mapping.

Well, it could increase performance by way more than 3-4 FPS; it depends how deeply it's implemented. Crytek just skimmed the surface of SM3.0 with their implementation. SM3's big claims to fame are true displacement mapping, as you pointed out (not SM2's virtual/offset mapping), geometry instancing, and dynamic flow control. These things can make big performance/IQ differences.

And I don't see why using SM2 would be "optimising it for ATi". I would much prefer the damn game come out three months earlier than have it support SM3.

Using SM2.0 but not SM3.0 would be failing to optimize for Nvidia. HL2 is already optimized for ATI, since ATI can only do SM2.x at best. Since I own a 6800GT, of course I am eager to see what programmers can do with SM3.0. Also SM3 support takes only a matter of weeks to implement.

You consistently mention the Far Cry engine as being more advanced, but the fact is the 1.2 patch was pulled because it was so buggy. I certainly don't think CryTek has set a golden standard for catering to your fanbase.

The 1.2 patch was primarily pulled due to stability problems (i.e. crashing) and major visual glitches (disappearing textures) on ATI cards, likely due to the changes they made to the SM2.0 path. They will probably roll back to the 1.1 SM2.0 path for ATI cards for the re-release.

However, Crytek did do a somewhat cruddy job in general with this patch - for instance, they forgot to include an SM3.0-compatible FXC.EXE compiler, meaning that if SM3.0 users don't hunt one down themselves (available in the DX9 2004 SDK) and replace the included SM2.0 FXC.EXE, they will get visual glitches. Also, I was not impressed that the SM3.0 path needed to be activated via command-line or console commands - it should be a menu option.
 
The 'true displacement mapping' that Nvidia's hardware supports (which is offset mapping) does not create the geometry for the displacement; it only uses the information that is supplied.
That was one thing that was missing, if I remember correctly.

But even reading interviews with Tim about the Unreal 3 engine, he says they use virtual displacement mapping for large walls and areas for performance reasons, and then says they don't use any displacement mapping for the models, as they are detailed enough with the higher poly counts and bump mapping.
I see displacement mapping as just a temporary solution while we wait for higher-poly, more detailed models and textures, yet it isn't implemented in any game yet. By the time it actually is supported by many games, the hardware will be 'outdated'.
 
Using SM2.0 but not SM3.0 would be failing to optimize for Nvidia. HL2 is already optimized for ATI, since ATI can only do SM2.x at best. Since I own a 6800GT, of course I am eager to see what programmers can do with SM3.0. Also SM3 support takes only a matter of weeks to implement.

It isn't optimising for ATi if the latest generation NVidia cards can do the same thing. I'm sure you are very happy with your 6800GT... but the fact is <5% of users are going to have the latest generation hardware.
 
Asus said:
The 'true displacement mapping' that Nvidia's hardware supports (which is offset mapping) does not create the geometry for the displacement; it only uses the information that is supplied.
That was one thing that was missing, if I remember correctly.

But even reading interviews with Tim about the Unreal 3 engine, he says they use virtual displacement mapping for large walls and areas for performance reasons, and then says they don't use any displacement mapping for the models, as they are detailed enough with the higher poly counts and bump mapping.
I see displacement mapping as just a temporary solution while we wait for higher-poly, more detailed models and textures, yet it isn't implemented in any game yet. By the time it actually is supported by many games, the hardware will be 'outdated'.

Parallax bump mapping is offset bump mapping; that's what Unreal 3 uses. It can be done on any card with PS 2.0 and up, and it isn't much more intensive than normal mapping (one extra calculation of the parallax offset based on a height map).
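For anyone wondering what that one extra calculation actually is, here's the per-pixel math written out in plain C++ (just an illustration of the offset step with typical scale/bias values, not shader code from any actual game):

// Parallax (offset) mapping: shift the texture coordinate along the view
// direction by an amount read from a height map, then do the usual
// normal/diffuse lookups at the shifted UV instead of the original one.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec2 ParallaxOffsetUV(Vec2 uv,
                      Vec3 viewTS,   // view direction in tangent space
                      float height)  // sampled from the height map, 0..1
{
    const float scale = 0.04f;   // typical values; tuned per material
    const float bias  = -0.02f;
    float h = height * scale + bias;

    // The "one extra calculation": nudge the UV toward the viewer.
    Vec2 shifted = { uv.x + h * viewTS.x, uv.y + h * viewTS.y };
    return shifted;
}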

True displacement is possible on the GF 6 line, but there isn't enough VRAM to support it. It takes 8 bytes of space for every vertex, and as Tim Sweeney said, the Unreal 3 tech demos use 7k to 10k polygons per model. They aren't using true displacement in Unreal 3.

The other problem with true displacement is that there is a 20% speed hit from using a texture lookup in a vertex shader in SM 3.0. That's not as bad as trying to do true displacement with SM 2.0, when you have to take two passes: first look the texture up in a pixel shader, then displace in the vertex shader.

True displacement can only be used in conjunction with tessellation, because trying to send a 100k mesh through the AGP port every frame isn't really possible yet. By using a tessellator in real time, you increase the polygon count quite a bit if you want real bumps - to the neighborhood of over 1,000,000 polys per screen. Unreal 3's tech demo's entire level was about 4 million.
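Rough back-of-the-envelope on why the tessellation has to happen on the chip rather than pushing the expanded mesh over the bus every frame (the vertex size and frame rate here are my own assumptions for illustration):

#include <cstdio>

// Streaming a fully tessellated mesh over AGP every frame vs. expanding it
// on the GPU. Even the displacement data alone (8 bytes/vertex) would be
// several MB per frame; a full vertex is assumed at ~32 bytes here.
int main()
{
    const double vertsPerFrame = 1000000.0; // ~1M polys per screen, roughly one vert each
    const double bytesPerVert  = 32.0;      // position + normal + UV, assumed
    const double fps           = 60.0;
    const double agp8xPeak     = 2.1e9;     // AGP 8x peak is about 2.1 GB/s

    double perFrame  = vertsPerFrame * bytesPerVert; // ~32 MB per frame
    double perSecond = perFrame * fps;               // ~1.9 GB/s sustained

    std::printf("%.0f MB/frame, %.2f GB/s needed (AGP 8x peak: %.2f GB/s)\n",
                perFrame / 1e6, perSecond / 1e9, agp8xPeak / 1e9);
    return 0;
}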
 