Where can I find a 6800 Ultra/X800 XT IN STOCK!!!??!

GreasedNeut

It seems impossible to find them in stock under 700 bucks. Anyone find one for 600 bucks or less?
 
I'm here from the future to warn you.

Don't buy the X800 XT PE or the 6800 Ultra. In the very near future these video cards will be powerful enough to create their own Skynet computer system. Computers, robots and machines are taking over the world. Meanwhile, the Earth Defense Forces (EDF) are being annihilated because they are too busy playing Doom 3 and Half-Life 2. Save yourself and don't buy one of those super fast video cards. Besides, the next major shipment of X800/6800 cards is expected sometime in September.

;)
 
QUOTE:
I'm here from the future to warn you.

Don't buy the X800 XT PE or the 6800 Ultra. In the very near future these video cards will be powerful enough to create their own Skynet computer system. Computers, robots and machines are taking over the world. Meanwhile, the Earth Defense Forces (EDF) are being annihilated because they are too busy playing Doom 3 and Half-Life 2. Save yourself and don't buy one of those super fast video cards. Besides, the next major shipment of X800/6800 cards is expected sometime in September.

That is some seriously funny shit right there! :LOL:
 
I don't know which card to get... ATi or nVidia (GeForce 6800s/Radeon X800s)? The ever-present question that faces us all.
 
Well, here is my take:

6800 GT & Ultra
More features than ATI
*SM3.0
*Full hardware video encode/decode (not enabled in drivers yet)
*Both FP16 partial precision and FP32 full precision for the best blend of performance and quality, letting devs choose the precision they want instead of using one compromise for all shaders. This is demonstrated in Doom3, where HardOCP stated the 6800 series had the edge in image quality yet beat the X800 cards by a landslide in performance, partly due to FP16 partial precision.

Better IQ than ATI
*Full trilinear filtering available, unlike the X800 cards
*8xS supersampling/multisampling AA for best AA at 1280x1024, especially in games with foliage like CoD
*FP16 blending for high quality OpenEXR HDR Lighting in FarCry 1.3 and future games

Better overall Performance than ATI
*DX performance equal to ATI at the $499 price point, faster at the $399 price point
*Faster OpenGL performance than ATI - the $399 Nvidia card beats the $499 ATI card in Doom3
*Brilinear filter not as efficient as ATI's, but this can be tweaked with drivers; in fact, the 62.xx driver series will feature faster filtering performance
*Even the $399 6800GT has 16 pipes, while you have to spend $499 on an ATI part to get 16 pipes.

---

X800 XT
*Fast performance in DX, not so fast in OpenGL (see Doom3 benchmarks)
*Smaller cooler than the 6800U, same size as the 6800GT
*Fastest filtering with the most efficient brilinear filtering; however, full trilinear is not available
*No supersampling AA mode, bad for games like Call of Duty that have lots of foliage
*SM2.0b support is crippled compared to SM3.0; ATI features like Geometry Instancing and the Face Register are not even officially supported by DX9.0c's SM2.0b model (they can be exposed unofficially), no dynamic flow control, etc.
*3Dc is cool, but it can't make up for the X800 XT's other feature shortcomings, and Nvidia has DXT5, which is almost as good anyway (see the sketch below). 3Dc is also not officially supported by DX9, unlike DXT5.
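
For anyone curious why two-channel normal map formats work at all, here is a rough Python sketch (my own illustration, not anything out of either vendor's drivers) of how a pixel shader rebuilds the missing component when only X and Y are stored - the idea behind both the DXT5 swizzle trick and 3Dc:

```python
import numpy as np

def reconstruct_normal(x, y):
    """Rebuild a unit normal from its stored X/Y components (each in -1..1)."""
    # Z follows from the unit-length constraint x^2 + y^2 + z^2 = 1
    # (surface normals point out of the surface, so Z is taken as positive).
    z = np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, 1.0))
    return np.array([x, y, z])

# Toy example: a normal tilted away from straight up.
stored_x, stored_y = 0.5, 0.25              # what the two-channel texture keeps
n = reconstruct_normal(stored_x, stored_y)
print(n, np.linalg.norm(n))                 # ~[0.5, 0.25, 0.829], length ~1.0
```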
 
blahblahblah said:
I think it is safe to say that your opinion is sufficiently biased.

Anything you'd like to point out that is inaccurate?

BTW, everyone's opinion is biased. If you don't think so you are deceiving yourself :)
 
I think you are more biased than me. If you think the 6800s walk all over the X800s, you have some sort of mental condition. :)

- Nvidia actively encourages developers to use FP 16 for performance reasons. I'd hate to see what happens to 6800 performance when FP 16 stops being a viable option. ATI uses FP 24.
- 6800 series audio/video encode/decode has horrible performance. Who knows how much hardware acceleration will help.
- Full trilinear means nothing unless you like your FPS to drop. Unless you are comparing specific screenshots side by side, you cannot tell the difference between full trilinear and ATI's "bilinear."
- 8X AA requires a massive performance loss. I'm willing to bet that you can only enable 8X AA on older games without having severe frame rate problems.
- HDR. Let me know when Far Cry has it in. That will be the only game so far that will have OpenEXR HDR.
- You conveniently pointed out Nvidia-created SM 3.0 Far Cry benchmarks. Maybe you should take a look at Far Cry with SM 2.0b and the Cat 4.8s. Massive performance increase.
- Faster brilinear you say? What are they sacrificing, image quality?
- 16 pipelines does not equate to faster performance. In order to fully utilize 16 pipelines, you must turn up the resolution to 1600 by 1200.
- ATI is rewriting its OpenGL driver.
- If you call SM 2.0B crippled you have some problems. Current games won't use half the features of SM 3.0 because of performance problems. Crytek stayed away from dynamic flow control because of performance issues.
- DXT 5 is horrible for normal map compression. 3Dc is an open standard and provides excellent normal map compression.

[Edit]: Forgot to mention temporal AA.
 
blahblahblah said:
I think you are more biased than me. If you think the 6800s walk all over the X800s, you have some sort of mental condition. :)

- Nvidia actively encourages developers to use FP 16 for performance reasons.

And this is a good thing. We can see from Doom3 and the FarCry 1.2 patch that FP16 offers enhanced performance with no IQ loss.

I'd hate to see what happens to 6800 performance when FP 16 stops being a viable option. ATI uses FP 24.

When FP16 stops being a viable option due to complex SM3.0 shaders, Nvidia cards will run FP32 about 15% slower than FP24 (estimated using 3DMark03 Game Test 4, which runs FP32 precision throughout, and FarCry full-FP16-path vs full-FP32-path benchmarks), and ATI cards will suffer IQ loss from precision errors at FP24.

- 6800 series audio/video encode/decode has horrible performance. Who knows how much hardware acceleration will help.

Once the full hardware acceleration is implemented in the drivers, it will mean 0% CPU usage. That is what full hardware means.

- Full trilinear means nothing unless you like your FPS to drop. Unless you are comparing specific screenshots side by side, you cannot tell the difference between full trilinear and ATI's "bilinear."

Even if you can't tell the difference, the option should be there in case people can, or in case a future game does show a difference. The X800 is perfectly capable of it; there is no excuse for it not being available.

- 8X AA requires a massive performance loss. I'm willing to bet that you can only enable 8X AA on older games without having severe frame rate problems.

These new cards are so powerful they can handle a big performance hit. I've found through my own experimenting that 8xS can be enabled in modern games at 1280x1024, and in some games it looks better than 4xAA at 1600x1200 - because 8xS can fix aliased textures while multisampling cannot. Good examples of this are Call of Duty and MMORPGs.
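
To make the foliage point concrete, here's a toy 1-D sketch in Python (purely my own illustration, nothing to do with how any driver works): multisampling still shades the texture once per pixel and only adds coverage samples at polygon edges, while supersampling shades at several positions inside each pixel and averages, which is what tames high-frequency texture detail like leaves and fences.

```python
import numpy as np

def texture(u):
    # Deliberately high-frequency "foliage-like" pattern (hard black/white stripes).
    return 0.5 + 0.5 * np.sign(np.sin(300.0 * np.pi * u))

pixel_width = 1.0 / 64.0
pixels = np.arange(64) * pixel_width

# One texture lookup per pixel: roughly what multisampling gives you inside a
# polygon, since MSAA only adds coverage samples at edges (aliased result).
one_sample = texture(pixels + 0.5 * pixel_width)

# Four texture lookups spread across each pixel, then averaged: what
# supersampling does (acts like a low-pass filter on the texture detail).
offsets = (np.arange(4) + 0.5) / 4.0 * pixel_width
four_samples = np.mean([texture(pixels + o) for o in offsets], axis=0)

print("pixel-to-pixel variance, 1 sample per pixel :", one_sample.var())
print("pixel-to-pixel variance, 4 samples per pixel:", four_samples.var())
```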

- HDR. Let me know when Far Cry has it in. That will be the only game so far that will have OpenEXR HDR.

So far. ATI has committed to adding it to R520, so odds are there will be much more in the future.

- You conveniently pointed out Nvidia-created SM 3.0 Far Cry benchmarks. Maybe you should take a look at Far Cry with SM 2.0b and the Cat 4.8s. Massive performance increase.

There appear to be stability and visual problems with Crytek's implementation of SM2.x in the new FarCry 1.2 patch on ATI cards; this is why it was pulled. This should be readdressed once the patch is re-released.

That being said, SM2.0b should give a big performance increase, because Crytek did not use many of the more advanced features of SM3.0 - dynamic flow control, for instance. What they implemented was relatively mild.

- Faster brilinear you say? What are they sacrificing, image quality?

ATI's brilinear is simply more efficient at this point; it doesn't sacrifice anything more than Nvidia's brilinear does. Tweaking a brilinear filter is somewhat of an art, and apparently the 62.xx Nvidia drivers will have a faster brilinear filter for Nvidia cards.
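
For anyone wondering what "brilinear" actually does, here's a toy Python sketch (the real blend thresholds Nvidia and ATI use aren't public; the 0.25-wide band below is just an illustrative guess of mine). Full trilinear always blends the two nearest mip levels; a brilinear-style optimization only blends in a narrow band around the mip transition and falls back to cheaper bilinear everywhere else:

```python
def trilinear_weight(lod_fraction):
    # Full trilinear: always blend the two nearest mip levels.
    return lod_fraction

def brilinear_weight(lod_fraction, band=0.25):
    # "Brilinear": use a single mip level (pure bilinear) most of the time and
    # only blend inside a narrow band around the mip transition.
    lower = 0.5 - band / 2.0
    upper = 0.5 + band / 2.0
    if lod_fraction < lower:
        return 0.0
    if lod_fraction > upper:
        return 1.0
    return (lod_fraction - lower) / band

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f, trilinear_weight(f), round(brilinear_weight(f), 3))
```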

- 16 pipelines does not equate to faster performance. In order to fully utilize 16 pipelines, you must turn up the resolution to 1600 by 1200.

Well, looking at the benchmarks, 16 pipelines does equate to faster performance when comparing the 6800GT to the X800 Pro.

- ATI is rewriting its OpenGL driver.

Actually, if you look at the latest Tech Report interview, ATI stated they are improving their OpenGL driver bit by bit, not completely rewriting it from scratch. Expect incremental improvements, not a brand new OpenGL implementation. Although, even with improvements, ATI's architecture simply isn't built for Doom3 as well as Nvidia's: no 32x0 mode down the entire range of cards, no UltraShadow II, no partial precision, etc.

- If you call SM 2.0B crippled you have some problems.

It's not problems, it's reality. SM2.0b is nowhere near SM3.0. You can look at this page for a comparison of the two standards:
http://www.beyond3d.com/previews/nvidia/nv40/index.php?p=5

Current games won't use half the features of SM 3.0 because of performance problems. Crytek stayed away from dynamic flow control because of performance issues.

FarCry was already finished when SM3.0 was patched in; therefore it would have been very difficult for Crytek to reap the full benefits of SM3.0, as they were basically patching what began as an SM2.0 game. Expect future games to show much bigger differences.
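
For reference, "dynamic flow control" just means a shader can branch or exit loops based on data it reads at runtime. A rough non-shader illustration in Python (the light values are made up; this only shows the idea, not how FarCry is written):

```python
LIGHT_INTENSITIES = [1.0, 0.0, 0.0, 0.7]     # made-up per-light intensities

def light_term(pixel_value, intensity):
    return intensity * pixel_value           # stand-in for real lighting math

def shade_sm2_style(pixel_value):
    # SM2.0-style: effectively straight-line code, every light is always evaluated.
    return sum(light_term(pixel_value, i) for i in LIGHT_INTENSITIES)

def shade_sm3_style(pixel_value):
    # SM3.0-style: a data-dependent branch skips lights that contribute nothing.
    total = 0.0
    for i in LIGHT_INTENSITIES:
        if i > 0.0:
            total += light_term(pixel_value, i)
    return total

print(shade_sm2_style(0.5), shade_sm3_style(0.5))   # same result, less work done
```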

- DXT 5 is horrible for normal map compression. 3Dc is an open standard and provides excellent normal map compression.

Unfortunately this is a myth created by ATI to promote their new format. While DXT5 isn't as good as 3Dc at normal maps, it is not horrible by any means. There are a few worst-case scenarios where 3Dc looks better, but for the most part you probably wouldn't see much of a difference in game. If you look at the HardOCP Doom3 image quality comparisons and compare Medium Quality to High Quality, you will see no major differences in textures, as outlined by HardOCP. And Medium Quality uses DXT5-compressed normal maps, while High Quality uses uncompressed normal maps.

[Edit]: Forgot to mention temporal AA.

Temporal AA isn't very useful due to its shimmering artifacts below 60 fps.
 
Don't feel like going back and forth.

But you are confused on your normal map compression.

Robert Duffy .plan update said:
One thing of note on the normal map compression is that generally speaking if you DXT a normal map you get really crappy results. NVIDIA hardware supports palettized compression which yields good compression and normal maps retain hard and round edges really well. Unfortunately this compression does a poor job in other cases and you end up getting splotchy areas.
 
Wait wait... you praise Nvidia's 16-bit precision yet scorn ATI's 24-bit precision when the DX9 standard has 24-bit as the minimum? I know that was extreme wording, but hey. ;)
There is a world of difference between 16 and 24 and not much when going to 32-bit. Developers will not create games that have artifacts when using 24-bit precision.
The thing is, unless developers specify full precision, it will use 16-bit. I don't like that idea.

ATI's optimization is quite different from Nvidia's. I like its adaptive nature since I don't need full filtering in all cases. Similar to its adaptive AA.

Funny, I can really tell the difference between Nvidia's optimizations and ATI's.
ATI's doesn't look any different than Nvidia's with its optimizations disabled, though. That's in-game, playing the game, not screenshots.
 
blahblahblah said:
Don't feel like going back and forth.

But you are confused on your normal map compression.

Like I said above, there are cases where DXT5 isn't optimal, but they aren't the majority, and judging from the Doom3 comparisons they aren't all that noticeable.
 
Asus said:
Wait wait... you praise Nvidia's 16-bit precision yet scorn ATI's 24-bit precision when the DX9 standard has 24-bit as the minimum? I know that was extreme wording, but hey. ;)

It appears you are confused about the "DX9 standard."

DX9.0b & Shader Model 2.0 require that the card be capable of a minimum of FP24 precision, but partial precision such as FP16 is allowed for more simple shaders.

DX9.0c & Shader Model 3.0 require that the card be capable of a minimum of FP32 precision, but partial precision such as FP24 or FP16 is allowed for more simple shaders.

If you want to generalize and say the "DX9 standard's" minimum precision, the most advanced model of DX9 shaders currently has a minimum precision of FP32, not FP24. However, since FP32 often offers no IQ benefit over FP24 or FP16 on more simple shaders, FP24 and FP16 can be used as partial precision under SM3.0 for these shaders.

The problem with FP24 is that it is an in-between compromise at this stage in the game. With SM3.0 as the new standard, devs are going to be writing for either FP32 or FP16 in the future, not FP24. FP24 will become irrelevant after this gen of ATI cards (it is already irrelevant for Nvidia cards). FP16 for performance on simple shaders, FP32 for very complex shaders - FP24 will be left in the middle with slower performance on shaders designed for FP16, and precision errors for shaders designed for FP32.
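
If you want actual numbers behind the FP16/FP24/FP32 argument, here's a quick check in Python. Numpy only has FP16 and FP32 types, so the FP24 line is a back-of-the-envelope estimate based on FP24's commonly cited 16-bit mantissa:

```python
import numpy as np

# Machine epsilon: the smallest relative step each format can resolve.
print("FP16 epsilon:", np.finfo(np.float16).eps)   # ~9.8e-04 (10-bit mantissa)
print("FP24 epsilon: ~", 2.0 ** -16)               # ~1.5e-05 (estimated, 16-bit mantissa)
print("FP32 epsilon:", np.finfo(np.float32).eps)   # ~1.2e-07 (23-bit mantissa)

# Where FP16 starts to hurt: large values swallow small offsets entirely.
# At 512.0 the FP16 spacing is 0.5, so a 0.1 texel offset simply vanishes.
print(np.float16(512.0) + np.float16(0.1))         # prints 512.0
```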

There is a world of difference between 16 and 24 and not much when going to 32-bit.

If you believe there is a world of difference between FP16 and FP24 in today's games, I don't believe you've actually seen the difference between FP16 and FP24. In fact, in FarCry there isn't much of a difference between FP16 and FP32 - you can test this yourself using tommti-systems' full FP16 patch for FarCry. FP32 has not been fully tapped yet, though once games start to use more complex SM3.0 shaders, we might start to see cases where FP16 and FP24 yield precision errors. This is why Nvidia supports FP32. Again, look at FarCry 1.2 and Doom3, which both use FP16 - no IQ differences. In fact, HardOCP gave the IQ edge to Nvidia in Doom3.

If you are thinking of the FarCry 1.1 patch, that artifacting was not due to FP16, it was because the 1.1 patch replaced SM2.0 shaders with SM1.1 shaders, and used int precision instead of FP precision, along with low resolution cubemaps in the FX path.

Developers will not create games that have artifacts when using 24-bit precision.

Obviously since the SM3.0 spec calls for FP32 precision, devs aren't going to be catering to FP24 precision when they are writing complex SM3.0 shaders. They will be writing to the tune of FP32, or partial precision which logically can only be FP16. This means cards with FP24 could be faced with precision errors for shaders designed for FP32, or lower performance for shaders designed for FP16.

Funny, I can really tell the difference between Nvidia's optimizations and ATI's.

Again, if this is the case, I don't think you've seen Nvidia's 6800 series brilinear optimizations. They look the same as ATI's - in fact, ixbt labs did a technical analysis of the two and found they use roughly the same amount of bilinear/trilinear filtering.

ATI's doesn't look any different than Nvidia's with its optimizations disabled, though. That's in-game, playing the game, not screenshots.

It sounds like you are basing your judgments off the FX series, not the 6800 series. The 6800 series brilinear looks the same as ATI's brilinear. And it has full trilinear, too.
 
I am going insane with this "will get a shipment in by the end of the week" shit.
Do they ever get a shipment by the end of the week?
NOOOOOOOOOOOOOOOOOOO
:flame: :flame: :flame: :flame:
ATI should have had at least 500,000 cards ready to ship at the time the NDA was lifted.
 
OK... I've decided to get a 6800GT. Where the hell can I find one reasonably priced, in stock!?
 