DirectX 9.1 soon, GFFX range on top form

Originally posted by Fenric1138
Out of interest, would there be anything stopping someone from writing a pixel shader 3.0 for their HL2 mods, or won't the Source engine handle it at all? Valve commented on how it will be easy to upgrade and keep up with new gear and possibilities, but will it stretch to things like that, or will Valve be forced to release a new version to cope with it? The reason I ask is that I'm sure some will begin to make use of the new shaders after learning the current 9.0 ones, and if HL2 is capable of making use of them, then the nVidia route would in theory seem like the best choice.

Also, could someone point me to a URL that goes into detail about what DX9.1 is capable of, particularly the shaders? Or isn't that information currently publicly available?

Cheers


PS and VS are done in hardware, so until PS 3.0/VS 3.0 hardware comes out you can't utilise either one. I don't think there's even a final spec for VS/PS 3.0, but I'm not sure.
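
For what it's worth, here's a rough sketch (my own, plain Direct3D 9, nothing Source-specific) of the caps check an engine or mod would do to see which shader model the card and driver actually expose. The point is that PS 3.0 only becomes usable once the hardware reports it, no matter which DirectX runtime or driver you install:

```cpp
// Hedged sketch: generic Direct3D 9 capability check, not Source engine code.
#include <d3d9.h>

bool SupportsPixelShader(IDirect3D9 *d3d, UINT major, UINT minor)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    // PixelShaderVersion encodes the highest pixel shader version the
    // card/driver combination supports.
    return caps.PixelShaderVersion >= D3DPS_VERSION(major, minor);
}

// SupportsPixelShader(d3d, 2, 0) is true on a GeForce FX or Radeon 9500+;
// SupportsPixelShader(d3d, 3, 0) is false on everything you can buy today.
```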

I've not seen a single link to any kind of info regarding DX9.1 (except for the site mentioned in the beginning of this thread), not even on nVidia's site, which would have had it plastered all over the place.
Until I see an official statement from MS with a detailed list of additions on top of DX9, IMO the whole issue is pure BS.
 
If you guys want to see valid, detailed arguments about why GeForce FX cards can/can't run DX9 titles utilizing PS2.0 correctly, why don't you spam the http://www.beyond3d.com/ boards? At least they don't have their heads up their asses like most of you guys here do.

And comparing the hardware speeds of a video card's GPU/memory is hardly like comparing the speeds of two architecturally different CPUs, in terms of which card is going to perform faster in an application that doesn't use vendor-specific code (i.e. FP24 in HL2, or the general ARB definitions in Doom 3).

If you pull out Quake 3 on a 9800 XT vs. the 5950 Ultra, the Nvidia card will be faster, because it's technologically a faster card.
 
The new 5x.xx Dets are already a lot faster in HL2; in fact, there were leaked benchmarks on a site somewhere showing the numbers between the 5950 and the 9800XT, and they were pretty close, with the 9800XT still holding only a slight lead. If DX9.1 adds to that performance and the final drivers are a bit better on top of that, then there's no reason why the FX cards won't be running HL2 as fast as the Radeon cards. I've tried the HL2 beta with the new beta Dets on my 5900 Ultra, and it runs about as well as on my mate's 9800 Pro.
 
Originally posted by Rarehero
If you pull out Quake 3 on a 9800 XT vs. the 5950 Ultra, the Nvidia card will be faster, because it's technologically a faster card.
The hardware is different... you can't just say "It's faster because it runs Q3 faster"... because Q3 doesn't use any of the advanced features.

... and if you know anything about video cards, they don't all do the same features with the same performance hits (the comparison to CPUs was relevant because sometimes an Athlon processor can easily beat a P4 in a test of speed in business apps, while the same P4 can easily beat the same Athlon in memory-intensive apps, like games).

One card may spank the other one when all the features are in use, while the other card easily beats it in a game that only uses basic textures & lighting and maybe a few really simple shaders.

There is no one test that says one card is superior to another... that is why benchmarking sites test each card on many different games before making their conclusions.
 
quote:
Originally posted by Rarehero
If you pull out Quake 3 on a 9800 XT vs. the 5950 Ultra, the Nvidia card will be faster, because it's technologically a faster card.

The hardware is different... you can't just say "It's faster because it runs Q3 faster"... because Q3 doesn't use any of the advanced features.

I think that was the point he was trying to make; the example of Quake 3 is to help compare the raw speeds of the cards when no complex functions are being run. The raw speed of the 5900 Ultra is greater, but the 9800 Pro has more pipelines, which gives it its own unique advantage.

Benchmarking with more standard, unoptimised code is what you'd think would be the best way to compare cards, and that's EXACTLY what Futuremark's 3DMark2003 did, and that's crap :)

The only way to compare cards is in unique situations (specific games), and if you want a broader idea of the card, combine those results. Ultimately each card has its benefits; otherwise, out of two ranges of cards, people would only ever buy the "better" card, which is obviously not what happens.
 
Intel is actually behind in technology right now.
Out of the P4EE vs Athlon 64 FX-51 reviews, if you can even find 5 sites that give the lead to the P4EE, I'll find 10 more that show AMD ahead.
But that's sorta irrelevant, as the P4EE isn't out and Prescott has been pushed back 'til '04. Intel has a 3.8GHz Prescott on their roadmap for Q4 '04, while AMD has an Athlon 4300+ on their roadmap for Q4 '04...

But AMD's CPUs and ATI's graphics cards are a lot alike, in that they are designed to run all applications efficiently without optimization, while Intel's and Nvidia's hardware designs seem almost proprietary and need optimizations for good performance (e.g. SSE2 for Intel, instruction ordering for Nvidia). The areas where the efficient hardware wins don't actually show where it is good so much as where Intel's/Nvidia's hardware is bad.
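
Just to make the "needs optimizations" bit concrete, here's a small illustrative snippet of my own (not from any benchmark) showing a plain loop next to a hand-written SSE2 path. A P4 only pulls ahead when software ships something like the second version; the Athlon runs the plain version well either way:

```cpp
// Illustrative only: plain C++ loop vs. a hand-written SSE2 path.
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstddef>

// Generic version: runs fine on any x86 CPU.
void add_plain(const double *a, const double *b, double *out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE2 version: processes two doubles per instruction, assuming n is even
// and the pointers are 16-byte aligned.
void add_sse2(const double *a, const double *b, double *out, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 2) {
        __m128d va = _mm_load_pd(a + i);
        __m128d vb = _mm_load_pd(b + i);
        _mm_store_pd(out + i, _mm_add_pd(va, vb));
    }
}
```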

Benchmarking with more standard, unoptimised code is what you'd think would be the best way to compare cards, and that's EXACTLY what Futuremark's 3DMark2003 did, and that's crap
Actually, it is run as it should be, and the PS2.0 performance that a lot of people cried about back then holds true in many games for the FX.
It is not to be optimized, because it is a benchmark...

I really hate it when Q3 is used, as it puts no strain on today's hardware and really doesn't show anything. Usually, when an application provides little strain and does not move a lot of data, the higher-clocked CPU/GPU can perform better (e.g. MP3 encoding), as there is no need for smart hardware or logic.
In my opinion, the best benchmarks are plain and don't use any special features (unless you are benchmarking that feature), but put a lot of strain on the hardware and make it work for those marks.

Granted, these magical drivers from Nvidia give you the FPS back, and you may not notice a quality difference with your eyes. But the reason people shun Nvidia is that this is unacceptable: driver "optimizations" for better performance. Graphics cards should just do what the application/game tells them to. They should not think for themselves, as the developer wants things displayed in a certain way/quality.
 
The best benchmarks show real world performance numbers that you can actually use... not any meaningless "raw" performance numbers like what you get from most synthetic benchmarks.

The best benchmarking programs for gaming computers are top-of-the-line game demos.
There is no substitute for testing the hardware on the games you want to play with it.

Synthetic benchmarks mean very little unless the purpose of your computer is mainly to get the highest possible scores on synthetic benchmarks.

The biggest problem with synthetic benchmarks that test features that aren't in games yet is that they don't know how heavily the next-gen games are going to use those features... and may put too much or too little reliance on those features to get an accurate measure of what games will be like.

Your best bet is almost always to wait until the games are out before you buy the hardware to go with them.
 
From what I've seen, most benchmark sites use today's games in combination with synthetic benchmarks. That way, both the current performance and the "futureproofness" of a card are measured. I think that's the best way to do it.
 
Actually, it is run as it should be, and the PS2.0 performance that a lot of people cried about back then holds true in many games for the FX.
It is not to be optimized, because it is a benchmark...

My point was that 3DMark2003 was crap as a benchmark; with such huge amounts of raw power and memory bandwidth, most of the cards nowadays get awesome standard speeds. What's important is how well they perform when the features are turned on, because that is what slows them down to speeds which may or may not be playable.
 
Originally posted by dawdler
Yeah, here's an attachment with a leaked screen from the new experimental 60.xx drivers! I mean, just DROOL over that AWESOME lighting quality!!! (note: running 16xFSAA/8xAF extreme quality setting, screen resized from 1600x1200 with no quality loss whatsoever from the reference)


lol@ all the people taking you seriously :p


so many people saying things that make no sense in this thread.......


"teh rawzxor speez of the nvidhiaz c@rdz is teh greatxors"
 
I wouldn't go expecting too much from the DX9.1 release for the GFFX. Its problems are not driver-related or DirectX-related; they are hardware-related. There's probably a little more power left in the cards, but the basic design has flaws. DX9.1 is obviously intended for whatever Nvidia is planning to release this coming spring, which will be the card that fixes those flaws. Expect some great numbers from it.

This is no different from the DX8 fiasco. DX 8.0 was basically all about Nvidia and DX 8.1 was a supplementary release to support PS1.4 for ATI.
 
Originally posted by Asus
Intel is actually behind in technology right now.
Out of the P4EE vs Athlon 64 FX-51 reviews, if you can even find 5 sites that give the lead to the P4EE, I'll find 10 more that show AMD ahead.
But that's sorta irrelevant, as the P4EE isn't out and Prescott has been pushed back 'til '04. Intel has a 3.8GHz Prescott on their roadmap for Q4 '04, while AMD has an Athlon 4300+ on their roadmap for Q4 '04...

But AMD's CPUs and ATI's graphics cards are a lot alike, in that they are designed to run all applications efficiently without optimization, while Intel's and Nvidia's hardware designs seem almost proprietary and need optimizations for good performance (e.g. SSE2 for Intel, instruction ordering for Nvidia). The areas where the efficient hardware wins don't actually show where it is good so much as where Intel's/Nvidia's hardware is bad.

Actually, AMDs always underperform when using benchmarks. You can never tell if you are actually getting the full performance. You may buy that 2600 or whatever, but I have never seen one perform better than a P4 2.6. Prove me wrong; I would like to see solid evidence that an AMD has outperformed a Pentium. I would definitely take a Pentium over an AMD.
 
Right now Intel owns the midrange as far as performance goes, and that includes the 2.6GHz 800FSB.
AMD has the best performance at the top end and the low end.
But on price, AMD is the lowest.
Low end:
Intel's 533FSB chips and P4s clocked lower than 2.4GHz may match AMD's performance at that level, but not price, as they cost about twice as much as AMD's. And Celerons are just crap.
Midrange:
AMD Athlon 2600+ $100 or Intel P4 2.6GHz 800FSB $200.
Either would be good. You choose: save the money or get the performance. That's what is great about choice. Intel wins on performance with their 800FSB chips compared to the similar Athlon XP chips, but the price you pay for that performance is still higher with Intel.
But between an AMD Athlon 64 3200+ at ~$400 and an Intel P4 3.2GHz 800FSB at ~$600,
the Athlon 64 wins on both performance and cost.
The Athlon 64 FX-51 wins on performance, no doubt.

Actually, AMDs always underperform when using benchmarks. You can never tell if you are actually getting the full performance. You may buy that 2600 or whatever, but I have never seen one perform better than a P4 2.6. Prove me wrong; I would like to see solid evidence that an AMD has outperformed a Pentium. I would definitely take a Pentium over an AMD.
This info is somewhat outdated, as it is from the last product cycle.
That now fits in at midrange. :)
 
Originally posted by Asus
Midrange:
AMD Athlon 2600+ $100 or Intel P4 2.6GHz 800FSB $200.
Either would be good. You choose: save the money or get the performance. That's what is great about choice. Intel wins on performance with their 800FSB chips compared to the similar Athlon XP chips, but the price you pay for that performance is still higher with Intel.
You cannot do a price/performance comparison when both the price and the performance of the CPUs are different. It's better to take two CPUs with roughly similar performance and then compare the prices. For example:
Intel P4 2.6GHz 800FSB $200
AMD Athlon XP3000+ $260
Intel wins.
 
Half-Life 2 beta + 52.13 drivers + 4xAA + 4xAF = 50-60 FPS with exactly the same IQ as ATI cards (can't post a link to the screenshots as it's illegal material, but google it and you should be fine)
 
Originally posted by nicjac
Half-Life 2 beta + 52.13 drivers + 4xAA + 4xAF = 50-60 FPS with exactly the same IQ as ATI cards (can't post a link to the screenshots as it's illegal material, but google it and you should be fine)
Technically impossible; it can never get the same or higher IQ unless you use FP32. Do you use that? Not to mention, ATI cards do 16xAF (full trilinear, which the FX can't do with those drivers) without breaking a sweat. Can your card do that? Technically impossible :)
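
For reference, the precision argument boils down to mantissa bits. A quick sketch of mine, using the commonly quoted bit layouts (FP16 = s10e5, ATI's FP24 = s16e7, FP32 = s23e8; treat the exact layouts as reported figures, not gospel), with relative rounding error roughly 2^-(mantissa bits + 1):

```cpp
// Rough relative precision of the shader float formats under discussion.
#include <cmath>
#include <cstdio>

int main()
{
    // Unit roundoff ~ 2^-(explicit mantissa bits + implicit leading bit)
    std::printf("FP16 (s10e5): ~%.1e\n", std::ldexp(1.0, -11)); // ~4.9e-04
    std::printf("FP24 (s16e7): ~%.1e\n", std::ldexp(1.0, -17)); // ~7.6e-06
    std::printf("FP32 (s23e8): ~%.1e\n", std::ldexp(1.0, -24)); // ~6.0e-08
    return 0;
}
```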
 
Originally posted by dawdler
ATI cards do 16xAF (full trilinear, FX cant do it with those drivers) without breaking a sweat.
ATI cards use partial trilinear filtering (instead of full trilinear) in 16xAF, according to this article.
Not that the difference is in any way noticeable during gameplay, but I thought I'd point that out.
 
Originally posted by Arno
ATI cards use partial trilinear filtering (instead of full trilinear) in 16xAF, according to this article.
Not that the difference is in any way noticeable during gameplay, but I thought I'd point that out.
Yep, they do. But third-party programs (or the application itself, if you can set AF settings in it) will make it use full trilinear.

On an FX, you can NEVER do this. The drivers will override any setting you use. On the 4x.xx series, this was only true for UT2k3. On the 5x.xx series, it applies to all applications (and on the 52.xx it also applies to all texture layers, contrary to the 51.xx, which didn't do it on the first). The shader speed HAS increased in the 5x.xx by quite a lot, and it's a great feat (if it's done with fairly good methods). But most of the game speed increases you see are because of the AF issue.
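
To make "the application setting it itself" concrete, here's a minimal sketch (generic Direct3D 9, stage 0 only, function name mine) of what a game does when it asks for trilinear plus anisotropic filtering on its own. Whether the driver then honours the mip filter is exactly what the argument above is about:

```cpp
// Minimal sketch: application-side request for trilinear + anisotropic filtering.
#include <d3d9.h>

void RequestTrilinearAF(IDirect3DDevice9 *dev, DWORD maxAniso)
{
    // Anisotropic minification, linear magnification.
    dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
    dev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    // A LINEAR mip filter is what makes the result trilinear rather than
    // bilinear; this is the state the driver-side "optimization" degrades.
    dev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    // Degree of anisotropy, e.g. 8 or 16.
    dev->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, maxAniso);
}
```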
 
In relevance to that last question: of course, yes I do have one, and no, I wouldn't put it in the frying pan for any money; it's precious to me...
 
I agree that optimizations like these (from both Nvidia and ATI) can give unfair results in benchmarks. I doubt many reviewers would take that extra step of installing special software to force the ATI cards into correct trilinear filtering. Even so, you're right, Nvidia should have left the option open for correct trilinear filtering.
In my personal opinion, I don't care about an IQ degradation which is practically invisible during gameplay and I welcome the nice performance increase.
 
Originally posted by Arno
I agree that optimizations like these (from both Nvidia and ATI) can give unfair results in benchmarks. I doubt many reviewers would take that extra step of installing special software to force the ATI cards into correct trilinear filtering. Even so, you're right, Nvidia should have left the option open for correct trilinear filtering.
In my personal opinion, I don't care about an IQ degradation which is practically invisible during gameplay and I welcome the nice performance increase.
Actually, ATI devs have always encouraged people to use the application preference (where you can get full trilinear if you wish) instead of forcing it via the CP.
The problem is that programs usually never offer the option...
And you also have to keep in mind that it's invisible to YOU, since you are going from item C to item C-. If you had used ATI, you would go from A to C-, if you catch my drift :)
 
..

Intel P4 2.6GHz 800FSB $200
AMD Athlon XP3000+ $260

Well, this is the way I thought when I recently started looking at processors. If money is no object, P4s are definitely better... BUT

The problem with your comparison is that the 3000+ is higher up the product scale than the P4 2.6. In fact, it's so high that it's not at all a mainstream product. To get reasonable price comparisons, you MUST look at mainstream products.

Intel P4 2.4GHz 800FSB $162
AMD Athlon XP2500+ $85

Admittedly the P4 is 5-10% better in most benchmarks, but is that really worth paying double for?

Not to mention that you will pay $20-50 more for Intel motherboards of the same quality as AMD ones.

There is simply no question that AMD rocks Intel as far as I'm concerned.
 
Originally posted by Princess_Frosty
I think that was the point he was trying to make; the example of Quake 3 is to help compare the raw speeds of the cards when no complex functions are being run. The raw speed of the 5900 Ultra is greater, but the 9800 Pro has more pipelines, which gives it its own unique advantage.

It'd be nice if everyone on these boards used common sense like you :)

That's exactly what I was trying to say. Hence bringing up Quake 3 (you could also use Half-Life 1, Quake 2, or Unreal Tournament): games that don't use specific graphical features, so you can get a general idea of the raw performance of a card.
 
Originally posted by Rarehero
so you can get a general idea of the raw performance of a card.

and what's the point in that?
So it can do 1+1+1+1 at blazing speed but will suffocate on sqrt(16).

The FX 5800 had a 500MHz core and a whopping 1GHz memory clock, and it sucked, still sucks, and will suck forevermore.

Core design efficiency is the key, not how many MHz you can run it at.
 
Re: ..

Originally posted by serp
The problem with your comparison is that the 3000+ is higher up the product scale than the P4 2.6. In fact, it's so high that it's not at all a mainstream product. To get reasonable price comparisons, you MUST look at mainstream products.

Intel P4 2.4GHz 800FSB $162
AMD Athlon XP2500+ $85

Admittedly the P4 is 5-10% better in most benchmarks, but is that really worth paying double for?

Not to mention that you will pay $20-50 more for Intel motherboards of the same quality as AMD ones.

There is simply no question that AMD rocks Intel as far as I'm concerned.
I agree that in the CPU market you sometimes have to pay a lot more cash for relatively little extra performance, but this goes for AMD as well as Intel.
If you only have $100 to spend, then the XP2500+ sure is a nice bargain. But if you want a bit more performance and you have $175 to spend, then there are the following options:
Intel P4 2.4GHz 800FSB $162
AMD Athlon XP2800+ $175
Both have roughly the same performance, according to Anandtech and Tom's Hardware. So, judging on price, I would say the P4 is the better choice.

IMO, what CPU to buy depends entirely on your budget. AMD owns the top of the line with the Athlon 64. The budget Athlons (< $100) are also a good deal. But Intel owns the mainstream market ($160 to $300), with a better price/performance ratio.
 
and what's the point in that?

Honestly, these days raw speed (GPU/memory) means less and less with each new core design, but it should still be a crucial part of how a card performs. Yeah, the 5800 Ultra ran at 500/1000, but it was on a bus half the width of the leading part out at the time. So, doing the math, it only had 15.6 GB/s, vs. the 9700 Pro, which had 19.38 GB/s.
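
Just to show where those two numbers come from (my arithmetic, using the commonly quoted clocks: the 5800 Ultra's 128-bit bus at 500MHz DDR, i.e. 1000MHz effective, and the 9700 Pro's 256-bit bus at 310MHz DDR, i.e. 620MHz effective):

```cpp
// Peak memory bandwidth = effective memory clock (MHz) * bus width in bytes,
// converted to (binary) GB/s to match the figures quoted above.
#include <cstdio>

double BandwidthGB(double effectiveMHz, int busBits)
{
    double megabytesPerSec = effectiveMHz * (busBits / 8.0);
    return megabytesPerSec / 1024.0;
}

int main()
{
    std::printf("FX 5800 Ultra:   %.2f GB/s\n", BandwidthGB(1000.0, 128)); // ~15.6
    std::printf("Radeon 9700 Pro: %.2f GB/s\n", BandwidthGB(620.0, 256));  // ~19.4
    return 0;
}
```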
 
Re: Nvidia drivers

Originally posted by laz45
Nvidia is releasing their new drivers (52.14) next month. These drivers will have 60% increased performance. :cheers: With those drivers the GeForce FX 5900 beats the ATI 9800 Pro on almost all benchmarks.
If you look back, EVERY driver release Nvidia comes out with, they always claim something like a "60% increase". Well, history has shown us that the only thing that improves by 60% is the bullsh*t they spew out, and any other speed increases are due to cheating/image degradation, in one form or another.
 
Re: Re: Nvidia drivers

Originally posted by shapeshifter
If you look back, EVERY driver release Nvidia comes out with, they always claim something like a "60% increase". Well, history has shown us that the only thing that improves by 60% is the bullsh*t they spew out, and any other speed increases are due to cheating/image degradation, in one form or another.

The fun part:
30% INCREASE OVER PREVIOUS DRIVER VERSIONS!*

That's what it said in Nvidia's driver description a while ago.
Of course, you have to notice the little '*'... because it continues in small text: *Increase tested using UT2k3 and FX 5600.

And of course, that particular version just happened to be (by some odd coincidence) the version where not only was the tri/bilinear hack in, it also removed AF completely on some angles and textures (i.e. large floors, but not small walls; AF wasn't in effect half of the time). And this happened ONLY in UT2k3. And ONLY on the FX 5600 :p
 
Re: Re: ..

Originally posted by Arno
I agree that in the CPU market you sometimes have to pay a lot more cash for relatively little extra performance, but this goes for AMD as well as Intel.
If you only have $100 to spend, then the XP2500+ sure is a nice bargain. But if you want a bit more performance and you have $175 to spend, then there are the following options:
Intel P4 2.4GHz 800FSB $162
AMD Athlon XP2800+ $175
Both have roughly the same performance, according to Anandtech and Tom's Hardware. So, judging on price, I would say the P4 is the better choice.

IMO, what CPU to buy depends entirely on your budget. AMD owns the top of the line with the Athlon 64. The budget Athlons (< $100) are also a good deal. But Intel owns the mainstream market ($160 to $300), with a better price/performance ratio.

Well, the thing is... between an Athlon 2800+ and a 2.4GHz P4, the Athlon will give you more performance. The 2.4GHz with the 800FSB will excel in media applications, though, just like all 800FSB P4s compared to Athlon XPs. The performance may be better with that P4 2.4GHz for you, because it performs best in the areas you use, but the respective Athlon might perform better than the P4 for someone else because of the applications they use. The benchmarks used are just the popular applications for displaying performance. Intel optimizes for popular applications, whether it's benchmarks or just applications used in the industry. AMD's chips will run well on new or old software, whether it supports the latest SSE2 or no feature optimizations at all.

Most benchmarks chosen are tilted to a point of view that most people share, but they do not show performance in all areas. E.g. the Winstone general apps benchmark is just one score, but it shows/tests many things. Because it is only one score, people give it very little weight. Also, where do you see the most weight placed in reviews? Gaming benchmarks with Quake 3, Serious Sam, Comanche, Aquamark and UT2k3. Where's Battlefield 1942 or Morrowind?! Also popular games, but maybe not benchmark-friendly. And besides the normal synthetic benchmarks aimed at CPUs, there is PiFast and a lot of benchmarks using MP3 encoding. MP3 encoding? I guess that time makes a big difference in which CPU I buy. With Pinnacle's Studio 8 both AMD and Intel perform about the same, while with Premiere I believe Intel performs better, because it is a popular program in the industry that they optimized for.
 
An AMD 2500+ plus a good cooling setup and you have an awesome overclocker that really boosts your performance; tons of people can overclock it to a 3200+-class processor.

I wish I knew more about P4 overclocking; you could bring the value of that into play too...

Things change a lot though; every 6 months you see a big increase compared to the last 6 months.
 
Tbred-B XPs overclock very well. The core can handle a decent amount of heat, and the multiplier is unlocked.

My XP 1800+ Tbred-B runs at 2.1GHz (168x12.5) @ 1.85V without any problems.
 
Am I the only one sick of hearing dissertations by fanboys on why their card is best when the games it's supposedly best at aren't even released?
 