PS 2.0 usage in HL2

Thread starter: ek0st0ns (Guest)
I just got done reading this article http://www.firingsquad.com/hardware/far_cry_nvidia/ about Far Cry performance. The article says that Far Cry mainly uses PS 1.1 and doesn't use PS 2.0 that much. From the graphics I've seen in Far Cry it seemed that the game was PS 2.0 heavy, but I guess not. Even if it's only using PS 1.1 shaders it still looks damn good. If PS 1.1 looks that good, I can't wait to see how good PS 2.0 is gonna look. My question is: how heavily will HL2 use PS 2.0? Does anybody know, or has Gabe said anything about it?
 
No, we don't really know which parts will use which PS version. We do know some things that are going to use pixel shaders, though, like characters' eyes for example.

If you do a quick Google search you can find some screenies of Far Cry using the PS 3.0 mod. Looks great.
 
psyno said:
If you do a quick Google search you can find some screenies of Far Cry using the PS 3.0 mod. Looks great.
I don't suppose you have Far Cry and hardware that can run the true DX9 path?
 
Funny you should say that, because I bought one of those off eBay the other day.
 
Asus said:
I don't suppose you have Far Cry and hardware that can run the true DX9 path?

Ah, I see what you're getting at, but what I'm actually referring to are some images released by nVidia of a tech demo using the CryEngine (or whatever it's called) on a 6800 Ultra, which supposedly uses the true PS 3.0 path. Unless you know something I don't about these..?
 
DX9.1c includes PS version 3.0 while DX9.1b has 2.0.
PS 3.0 doesn't do anything PS 2.0 can't. It doesn't really add anything special, or anything the ATI cards couldn't do before; rather, it just raises the minimum standards and is a little more efficient.
It should look the same. It's even less of a change than DX8 (GF4) to DX8.1 (R8500) was.
All pictures so far from nVidia are PS 3.0 and PS 1.1. Benchmarks/reviews have shown pictures of nVidia's mixed PS 2.0/1.1 path, which looks pretty poor compared to PS 2.0 (ATI), while the shots nVidia compared against PS 3.0 on release day were horrid (PS 1.1).
 
Mostly true about the actual tech. To clarify, there actually is no such thing as DX9.1 though. I think you just mistyped. :D In 9.0c SM 3.0 will effectively be "unlocked" although it has been in the DX 9 spec the entire time.

Anyway, not only is 3.0 not a big jump from 2.0, but there's actually an intermediate step that ATi is likely to support at least to some extent, called 2.x (or ps_2_x). There are some things in 3.0 that can't be done with 2.x, but very few, and the visual difference between 2.0, 2.x, and 3.0 would be minimal... If anyone's interested I can go hunting for the article(s) which point this out.
 
The Source engine will be upgraded to PS 3.0; Valve has already confirmed this. But it will happen after the release of HL2, probably a few months after.
 
Asus, I dunno where you got your info on SM 3.0, but it can do a lot of things SM 2.0 can't.
Here's a link I suggest...
http://hardocp.com/article.html?art=NjA5

SM 3.0 isn't all about visuals, but it does have its perks, like displacement mapping, which SM 2.0 cannot do.
If developers pick up on this technology and we see it implemented in games, this right here could be the deciding feature that shows the most difference between a game rendered in Shader Model 3.0 and a game rendered in Shader Model 2.0.

And the number of shader instructions is dramatically increased, so for games with tons of shadows and shading, SM 3.0 would grant a performance boost in theory.
SM 2.0 = 96 instructions for shading
SM 3.0 = 65,535 instructions for shading
And on the 6800 Ultra the instruction count is unlimited.
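To put those instruction limits in perspective, here's a toy Python sketch (the limits are the real ps_2_0/ps_3_0 figures; the 300-instruction "future shader" and the `passes_needed` helper are hypothetical). An effect that overflows the SM 2.0 limit has to be split across multiple rendering passes, while SM 3.0 can run it in one:

```python
# Toy sketch of the multipass-vs-long-shader tradeoff. Shader sizes
# beyond a model's instruction limit must be split across several
# rendering passes; the pass counts below are the only "real" math here.

PS2_LIMIT = 96       # ps_2_0 instruction limit
PS3_LIMIT = 65535    # ps_3_0 instruction limit

def passes_needed(shader_instructions, limit):
    """Ceil-divide: how many passes to cover all instructions."""
    return -(-shader_instructions // limit)

hl2_style = 40   # HL2 shaders reportedly run 30-40 instructions
monster = 300    # hypothetical future shader

print(passes_needed(hl2_style, PS2_LIMIT))  # 1 -> SM 2.0 is plenty for HL2
print(passes_needed(monster, PS2_LIMIT))    # 4 passes on SM 2.0
print(passes_needed(monster, PS3_LIMIT))    # 1 pass on SM 3.0
```

Each extra pass means re-submitting geometry and touching the framebuffer again, which is where the SM 2.0 multipass cost comes from.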

SM 2.0 doesn't have the dynamic shader branching that allows a performance boost.

Geometry instancing sounds like the best thing for RPGs and strategy games, or just games with a lot of characters/models on the screen.
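For the curious, a toy sketch of why instancing helps those scenes (no real graphics API here; the cost constants and function names are invented). Instead of one draw call per object, the mesh is submitted once with a buffer of per-instance transforms, so the per-call overhead is paid once:

```python
# Toy cost model of geometry instancing. All numbers are made up;
# only the shape of the comparison matters.

DRAW_CALL_OVERHEAD = 50   # hypothetical CPU cost units per draw call
VERTEX_COST = 1           # hypothetical cost units per vertex processed

def draw_per_object(mesh_vertices, transforms):
    """One draw call per object: overhead scales with object count."""
    cost = 0
    for _ in transforms:
        cost += DRAW_CALL_OVERHEAD + mesh_vertices * VERTEX_COST
    return cost

def draw_instanced(mesh_vertices, transforms):
    """One draw call for all instances: overhead is paid once."""
    return DRAW_CALL_OVERHEAD + len(transforms) * mesh_vertices * VERTEX_COST

soldiers = [f"transform_{i}" for i in range(1000)]  # 1000 identical models
print(draw_per_object(100, soldiers))  # 150000: 1000 calls' overhead
print(draw_instanced(100, soldiers))   # 100050: overhead paid once
```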

I hate to sound like a fanboy... but just to let you know, I will buy the best card out in June, whether it's ATI or nVidia, based purely on performance and IQ :)
 
1. Branching is costly.
2. Dynamic flow control is possible in 2.x
3. In that very same article that you pointed to, HardOCP points out that HL2's PS instruction lengths were in the 30's, which is nowhere near 96, and certainly does not even approach 65,535.
4. In the same article, HardOCP points out that you can always unravel a shader.
5. nVidia's new AA rocks; its new texture filtering sucks. OT though!
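On point 4, a minimal Python sketch of what "unraveling" a shader means (the shading functions and the 0.5 threshold are made up). An SM 2.0-style compiler evaluates both sides of the branch and then selects one result, in the spirit of the ps_2_0 `cmp` instruction, so the output pixel is identical; only the amount of work differs:

```python
def expensive_snow(h):
    return 0.9 * h + 0.1     # stand-in for a pricey shading path

def cheap_rock(h):
    return 0.4 * h           # stand-in for a cheap shading path

def shade_sm3(height):
    # SM 3.0 style: a real dynamic branch, only one path executes.
    if height > 0.5:
        return expensive_snow(height)
    return cheap_rock(height)

def shade_sm2_unraveled(height):
    # SM 2.0 style: evaluate BOTH paths, then select the result.
    snow = expensive_snow(height)
    rock = cheap_rock(height)
    return snow if height > 0.5 else rock

# Same image either way; SM 2.0 just always pays for both paths.
for h in (0.2, 0.5, 0.8):
    assert shade_sm3(h) == shade_sm2_unraveled(h)
```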

:cheers:
 
1.
Now lets take a look at Dynamic Branching which is also a new feature that Shader Model 3.0 has that Shader Model 2.0 does not. As we discussed in our GeForce 6 series tech article Dynamic Branching gives the ability for programmers to control the actual flow of the program, starting and stopping code where they see fit instead of a straight execution from the beginning line of the shader program to the end. What this would allow is possibly faster shader performance. What this will not intrinsically allow is any difference in image quality. Even if Dynamic Branching is used in a Pixel or Vertex program the code can be unraveled for use in Shader Model 2.0. The potential for performance increase using Dynamic Branching does exist but it is yet to be seen how efficient the GeForce 6 series is at Dynamic Branching.
Costly, you say?

2. Never said it wasn't.

3. You must have read it wrong; they said that HL2's instruction lengths were in the 30s-40s on SM 2.0, nothing about future games and how many shader instructions they will have. So if you have more, it's better for future games than having a limited amount.
One of the most anticipated games this year Half Life 2 uses Shader Model 2.0 extensively but only has a shader length of 30-40 instructions, not even coming close the 96 instruction limit in Pixel Shader 2.0. So concerning shader length, what we are left with is going to be the performance differences between running Shader Model 2.0 programs in many passes with multipassing versus running Shader Model 3.0 with very long shaders.

4. Wha? Unravel a shader? ...Quote?

5. Definitely off topic ;) but yeah... beta drivers on revision boards.

If there was no significant value then why would Microsoft develop it?
If there was no significant value then why would so many gaming developers endorse it?
If there was no significant value then why would we need it?
:afro: :naughty: :afro:
 
:)


1. The branching itself is costly, hence HardOCP's concerns about "efficiency."

2. True... carry on! :p

3. Yes, more possible instructions couldn't possibly be construed as bad, but this thread happens to be about "PS 2.0 usage in HL2." ;)

4. From page 2 of HardOCP's article:
Even if Dynamic Branching is used in a Pixel or Vertex program the code can be unraveled for use in Shader Model 2.0.
(Yes, this is part of your quotation above.)

:cheese: :smoking:
 
1. They never said anything about it being costly as in taking a performance hit; it just means they don't know how much performance it would increase. Same as for the rest of the things in SM 3.0 at this point: we don't know much about the performance gains, but in theory, for future games, they would be very good.

2. indeed

3. Agreed, but in your previous post you said it like it was a bad thing ;)

4. Any performance/IQ hit? That we do not know.

So like I said...
If there was no significant value then why would Microsoft develop it?
If there was no significant value then why would so many gaming developers endorse it?
If there was no significant value then why would we need it?

:afro:
 
Ok, so we're down to 1 and 4, and really just 1. :p

1. Under the heading of Pixel Shaders Version 3.0 from page 6 of X-Bit labs' article on the NV40, there is a brief discussion of this.
There is one problem that can arise with version 3.0 shaders that use dynamic loops and branches. Processing several pixels, the NV40 may encounter a situation when it must execute one branch of the shader for some pixels and another branch for other pixels. How does the pipeline work in this case?

Possible solutions always hit on the performance. For example, if the pipeline meets a branch and starts processing pixels one by one, rather than several pixels at a time, execution units will mostly be idle. Contrary, if both branches of the shader are executed for all pixels, additional computational expenses arise.
I'll post again if I find a more in-depth explanation.
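In the meantime, here's a toy model of the quad-divergence problem that quote describes (the per-branch instruction costs are invented, and real hardware groups pixels in various sizes; 4 is used here for illustration). When every pixel in a group takes the same branch, the skipped branch is free; when the group diverges, a simple implementation runs both branches for everyone and masks out the wrong results:

```python
# Toy model of pixel-group branch divergence. COST_A and COST_B are
# hypothetical instruction counts for the two sides of a branch.

COST_A, COST_B = 10, 30

def quad_cost(conditions):
    """conditions: which branch each of the 4 pixels in a group wants."""
    if all(conditions):          # coherent: everyone takes branch A
        return COST_A
    if not any(conditions):      # coherent: everyone takes branch B
        return COST_B
    return COST_A + COST_B       # divergent: both branches executed

print(quad_cost([True, True, True, True]))      # 10: branching pays off
print(quad_cost([False, False, False, False]))  # 30: branching pays off
print(quad_cost([True, False, True, False]))    # 40: worse than either path
```

This is why dynamic branching only helps when neighboring pixels tend to agree, e.g. large regions of a surface taking the same path.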

4. IQ hit? There's not likely to be a change in IQ either way. Performance? Depends on how it's implemented I guess. Indeed we shall see.

As for why...
It seems to me that SM 3.0 is really going to make shaders easier for developers, and not necessarily going to make much of a visual impact. For the most part, it provides more elegant ways of doing things that were already possible in somewhat hack-ish ways with 2.0 and 2.x. Nothing like the change to 2.0.

:cheers:
 
^good points
but I'm not really into debating, so I'll just end it with...

we shall see soon :)
 