How often do new graphic cards come out?

I really don't pay that much attention to the hardware side of PC gaming (that's what my computer-savvy friends are for), so I'm curious: how often do new graphics cards come out? I.e., how long between the 9700 series, the 9800 series, and the X9800 series?

Are we talking years here or months? (Maybe decades? :) I wish.)
 
New cards come out quite often, but they are only revised versions of the older ones. E.g., the 9700 wasn't _much_ worse than the 9800; they were both using the same generation of technology (same chipset, dunno what it was).

The X800, however, is a totally new design, hence 'next-gen' cards. They run a different chipset (R420) and are the 'true' new graphics cards.

I'd say new revisions come out about every six months, and totally new chipsets (X800/GF6 series) every couple of years.
 
It works out to roughly every year and a half for new tech, and they generally do minor revisions every six months, as pointed out above. :)


Andy
 
I don't think anyone calls them the X9800s. Just call it the X800; you confused me. :flame: The X stands for ten in Roman numerals, and ten comes after nine, so the X800s come after the 9800s.
Just want to bring something up.
Who thinks that the X800 Pro should actually be called the X600?
I think X600 would describe it more accurately.
 
Alig said:
New cards come out quite often, but they are only revised versions of the older ones. E.g., the 9700 wasn't _much_ worse than the 9800; they were both using the same generation of technology (same chipset, dunno what it was).

The X800, however, is a totally new design, hence 'next-gen' cards. They run a different chipset (R420) and are the 'true' new graphics cards.

I'd say new revisions come out about every six months, and totally new chipsets (X800/GF6 series) every couple of years.
The X800s are not a new chipset; they're revised 9800s. The architecture isn't new, just improved.
 
blackeye said:
I don't think anyone calls them the X9800s. Just call it the X800; you confused me. :flame: The X stands for ten in Roman numerals, and ten comes after nine, so the X800s come after the 9800s.
Just want to bring something up.
Who thinks that the X800 Pro should actually be called the X600?
I think X600 would describe it more accurately.

If you think the X800 Pro is a mid-range (i.e. $200) graphics card, you are smoking something really bad for you. The X800 Pro is just a tad slower than the 6800 GT and can beat the 6800 into a coma. The X800 Pro absolutely annihilates the 9800 XT (the previous generation's high-end card). A next-generation mid-range card should not be able to do that.
 
Abom said:
The X800s are not a new chipset; they're revised 9800s. The architecture isn't new, just improved.

So why isn't it a 9900? Why does it run on a new chipset? Why is it known as 'next-generation'?
 
:dozey: Next-generation because it has some new technology in it, e.g. 3Dc. But yeah, graphics cards come out quite often, making high-end enthusiasts upgrade a lot; hope you've got some cheddar.
 
The X800 and the 6800 have roots in the 9800 and 5900 series.
Neither is a complete redesign. The X800 had less changed (shader pipeline design) because it was a solid one. The 6800 had the pipelines and shaders revamped because it needed an overhaul from the 5900.
Both have had the number of shader instructions they can process increased, for example.

Think of generations as a major change. Model updates simply push the current cards further with little change. There were some minor changes between the 9700 and 9800 in how they handled filtering, for example.

Before the cards were launched, the X800 Pro and 6800 GT were said to be 12-pipeline cards. Nvidia changed the GT to be a 16-pipeline card, and their 6800 became the 12-pipeline card. Depending on the benchmark, 6800 GT > X800 Pro > 6800.
ATI did something similar with their top-end card: they cut the X800 XT as their $499 model and put the X800 XT PE in its place.
Then the cards were released!
 
So Asus, since you're the hardware guru here, which card would you say is better overall? I know which one I think, but I want your opinion.
 
Ready?
Right now they are actually pretty even. They both excel in any game for performance and image quality. Each has other features or performance aspects that will favor one or the other under different uses. It's up to your personal preference to choose the card with the performance and features that you would like.

A number of future titles will have multiple lights, which will allow the 6800 to excel under SM3.0.
At the same time, future games will have larger textures and more complex texture effects, which will allow the X800 to shine, especially when both math and texture operations are needed.
Even though NVIDIA's scheduler can help to allow more math to be done in parallel with texturing, NV40's texture and math parallelism only approaches that of ATI. Combine that with the fact that the R420 runs at a higher clock speed than the NV40, and even more pixel shader work can get done in the same amount of time on the R420 (which translates into the possibility of frames being rendered faster under the right conditions).
I'm being rather general and basic here, nothing really technical.
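To put the pipeline-and-clock point in rough numbers, here's a quick Python sketch of theoretical peak fill rate (pipelines × core clock). The clock speeds below are my assumed launch specs, not official figures, so treat them as illustrative:

```python
# Back-of-the-envelope theoretical peak pixel fill rate: pipelines x core clock.
# Clock speeds below are assumed launch specs, not official figures.

def fill_rate_mpixels(pipelines, core_mhz):
    """Theoretical peak fill rate in megapixels per second."""
    return pipelines * core_mhz

cards = {
    "X800 XT PE (R420)": (16, 520),  # assumed ~520 MHz core
    "6800 Ultra (NV40)": (16, 400),  # assumed ~400 MHz core
}

for name, (pipes, mhz) in cards.items():
    print(f"{name}: {fill_rate_mpixels(pipes, mhz)} Mpixels/s")
```

With the same 16 pipelines, the higher-clocked chip simply gets more pixel work done per second, which is the point being made above.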

People seem to like Nvidia because they think SM3.0 will be 'the future'.
SM3.0 is Nvidia's version of DX9 that they formed after rejoining the group (MS, ATI, Nvidia) following their absence. It isn't 'the next big thing' but rather a minor change from SM2.0, hence DX9.0c and not DX9.1.
I see SM3.0 as today's developing technology that allows Nvidia to catch up to ATI, relative to the FX series.
The X800 design does include improvements over the 9800 series similar to the changes Nvidia brought with SM3.0.
ATI's SM2.0 performance matches Nvidia's SM3.0 performance and in some cases beats it. Notice that with the 1.2 patch in FarCry, Nvidia wins some and loses some. Turn AA/AF on and the X800 runs at its optimum.
As rasterization draws nearer, the ATI and NVIDIA architectures begin to differentiate themselves more. Both claim that they are able to calculate up to 32 z or stencil operations per clock, but the conditions under which this is true are different. NV40 is able to push two z/stencil operations per pixel pipeline during a z or stencil only pass or in other cases when no color data is being dealt with (the color unit in NV40 can work with z/stencil data when no color computation is needed). By contrast, R420 pushes 32 z/stencil operations per clock cycle when antialiasing is enabled (one z/stencil operation can be completed per clock at the end of each pixel pipeline, and one z/stencil operation can be completed inside the multisample AA unit).
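The '32 z or stencil operations per clock' claim above can be sanity-checked with trivial arithmetic. Here's a sketch in Python, assuming the per-pipeline figures as described in the paragraph:

```python
# Both chips claim 32 z/stencil ops per clock, but under different conditions.

PIPELINES = 16  # both NV40 and R420 top parts are 16-pipeline designs

# NV40: during a z/stencil-only pass the colour unit handles z/stencil too,
# so each pipeline pushes 2 ops per clock.
nv40_z_only_pass = PIPELINES * 2

# R420: with antialiasing enabled, one op at the end of each pipeline plus
# one inside each multisample AA unit.
r420_with_aa = PIPELINES * 1 + PIPELINES * 1

print(nv40_z_only_pass, r420_with_aa)  # both work out to 32
```

Same headline number, reached under different conditions: a colour-free pass for NV40, AA enabled for R420.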
The future holds more complex and taxing games. Here are some good interviews. The latest interview with Tim Sweeney talks about virtual displacement mapping, among other things. Both SM2.0 and SM3.0 support VDM, btw.*

With that said, ATI has a few things of its own, such as 3Dc, which will soon be implemented in FarCry. Many games will be using 3Dc, including HL2 and Doom 3, AFAIK.
Both cards have great AA/AF performance and quality.
If you want the best performance with high image quality and AA/AF then the X800 is your choice.

Other things ATI has going for it are lower power draw, Temporal AA, and Overdrive. The difference in power consumption between the X800 and 6800 series is about 20-30 watts under load. Obviously you would need to make sure your system can handle either card.

Nvidia changed the way they do AA to match how ATI filters, and they are able to more efficiently process multiple lights under SM3.0. Notice FarCry's performance improvement in levels with many light sources; other levels don't show much improvement.
Nvidia seems to perform better in OpenGL, but that isn't a given.

ATI has stated that they prioritize their driver support: games that already run at very high FPS are not on their list for performance improvements, and those happen to be mostly OpenGL games. Games that run at low FPS or with issues are at the top of their list. At the same time, they have stated that they plan to rework their OpenGL code. For the oddball in OpenGL performance, look up some reviews that show Homeworld 2's performance.

Nvidia uses FP32 and FP16.
ATI uses FP24 precision.
While Nvidia can use higher precision, the flip side is that ATI will never use low precision. Personally, I do not think FP32 is needed.
The difference between FP16 and FP24 is major, while FP24 to FP32 is not.
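A rough way to see why the FP16-to-FP24 gap matters more than FP24-to-FP32 is to compare the relative precision (machine epsilon) each format's mantissa gives you. The mantissa widths below are the commonly assumed figures for these shader formats:

```python
# Relative precision of each shader format, derived from its mantissa width.
# FP16 carries 10 mantissa bits, ATI's FP24 carries 16, FP32 carries 23.
formats = {"FP16": 10, "FP24": 16, "FP32": 23}

for name, mantissa_bits in formats.items():
    eps = 2.0 ** -mantissa_bits  # smallest relative step the format can resolve
    print(f"{name}: relative error per operation ~ {eps:.1e}")
```

FP16's relative error (~1e-3) is large enough to show up as visible banding once errors accumulate over a long shader, while FP24 (~1.5e-5) and FP32 (~1.2e-7) are both far below it.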

My opinions obviously lean toward ATI. It's hard to write a balanced overview. Feel free to bring up other points.

I feel that Nvidia has designed an overly complex card that performs on par with ATI's card, but no better in most cases, and yet its power consumption is higher. To me, that's not worth it.
When AA/AF are enabled it falls behind, and it doesn't have features like Temporal AA and Overdrive. Those are features I can use and look for. Gamers would not likely have a use for Nvidia's video processor.

ATI currently releases monthly updates for their WHQL drivers.
Nvidia updates their WHQL drivers every few months, since they announced they would slow down their driver releases. The last time they were released was back in April, AFAIK. Beta drivers are released more often, though.

*
We're using virtual displacement mapping on surfaces where large-scale tessellation is too costly to be practical as a way of increasing surface detail. Mostly, this means world geometry -- walls, floors, and other surfaces that tend to cover large areas with thousands of polygons, which would balloon up to millions of polygons with real displacement mapping. On the other hand, our in-game characters have enough polygons that we can build sufficient detail into the actual geometry that virtual displacement mapping is unnecessary. You would need to view a character's polygons from centimeters away to see the parallax
True displacement mapping should be able both to take information from existing textures and to create the information needed without texture data. Nvidia's SM3.0 design can only do the first.
The vertex pipelines also allow the NV40 chip to do displacement mapping, a technique that generates (Z) geometry data from textures.

Basically, my opinion is that the time to upgrade from an X800 to the newest card will come at the same time as with the 6800 series. The performance difference between the two will not change much, and their advantages and disadvantages will not really change either.

With the last-minute changes to Nvidia's 6800 GT, some of their board partners OCing the 6800U model, ATI's X800 XT PE entry, and the X800 XT still lingering, people are confused about where these cards stand relative to each other.
The 6800 Ultra Extreme is a Gainward product: an OCed, water-cooled 6800U. It will cost a pretty penny ($700+).
The X800 XT PE is ATI's $499 16-pipeline card, while the 6800U is Nvidia's $499 16-pipeline card.
The X800 Pro is ATI's $399 12-pipeline card, while the 6800 GT is Nvidia's $399 16-pipeline card.
The 6800 is at a lower price and matches the X800 Pro's 12 pipelines, but the X800 Pro walks all over it.

I think the cards that really stand out as picks are the X800 XT PE, the 6800 GT, and the X800 Pro.
The X800 XT PE outperforms its competition in more games than any other card, and it excels when AA/AF are enabled. Deals like Gateway's sales make this card 'affordable'.
Nvidia raised the bar with the 6800 GT (16 pipelines), which means they will sell even fewer of their 6800U model. The X800 Pro is a great pick because it is the most available card at this time and performs very well.

I'm pretty tired so I hope the info wasn't sloppy.
Whew, that only took 10 minutes. :rolleyes:
 
Asus, you need a vacation. Your posts always provide great info. NOW ON TO VACATION ;P
 