Nvidia's Fermi GTX480 is broken and unfixable

Krynn72

Never really heard of this site. I assume it's some kind of rumor site or something. Anyways, it's got an interesting take on Nvidia's new card, which is set to be released in the next couple of months. Some excerpts:

Hot, slow, late and unmanufacturable

Number one on Nvidia's hit list is yields. If you recall, we said that the yield on the first hot lot of Fermis that came back from TSMC was 7 good chips out of a total of 416 candidates, or a yield of less than 2 percent.

The chip is big and hot. Insiders have told SemiAccurate that the chips shown at CES consumed 280W.[...] The power wall is simple, a PCIe card has a hard limit of 300W, anything more and you will not get PCIe certified. No certification means legal liability problems, and OEMs won't put it in their PCs. This is death for any mass market card. The power can only be turned up so far, and at 280W, Nvidia already has the dial on 9.5.

Fermi GF100 is about 60 percent larger than Cypress, meaning at a minimum that it costs Nvidia at least 60 percent more to make, realistically closer to three times. Nvidia needs to have a commanding performance lead over ATI in order to set prices at the point where it can make money on the chip even if yields are not taken into account. ATI has set the upper pricing bound with its dual Cypress board called Hemlock HD5970.

Rumors abound that Nvidia will only have 5,000 to 8,000 Fermi GF100s, branded GTX480 in the first run of cards. The number SemiAccurate has heard directly is a less specific 'under 10,000'. There will have been about two months of production by the time those launch in late March, and Nvidia bought 9,000 risk wafers late last year. Presumably those will be used for the first run. With 104 die candidates per wafer, 9,000 wafers means 936K chips.

Even if Nvidia beats the initial production targets by ten times, its yields are still in the single digit range. At $5,000 per wafer, 10 good dies per wafer, with good being a very relative term, that puts cost at around $500 per chip, over ten times ATI's cost. The BoM cost for a GTX480 is more than the retail price of an ATI HD5970, a card that will slap it silly in the benchmarks. At these prices, even the workstation and compute cards start to have their margins squeezed.

Dear Leader has opened the proverbial can of Whoop-Ass on the competition, and on top of that criticized Intel's Larrabee for everything that ended up sinking Fermi GF100.

Link to the full article (LOTS OF TEXT): http://www.semiaccurate.com/2010/02/17/nvidias-fermigtx480-broken-and-unfixable/
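
For what it's worth, the article's back-of-envelope math checks out if you take its numbers at face value. Here's a quick sanity check in Python; every figure in it (the hot-lot yield, the wafer count, dies per wafer, wafer price, and the optimistic 10-good-dies-per-wafer scenario) is the article's claim or rumor, not anything official from Nvidia:

# Reproducing the article's back-of-envelope numbers; all inputs are the
# article's claims/rumors, not official figures.
good_dies_hot_lot = 7          # claimed good chips on the first hot lot
candidates_hot_lot = 416       # total die candidates on that lot
print(f"Hot-lot yield: {100 * good_dies_hot_lot / candidates_hot_lot:.1f}%")  # ~1.7%, i.e. "less than 2 percent"

risk_wafers = 9_000            # risk wafers Nvidia reportedly bought
candidates_per_wafer = 104     # die candidates per wafer, per the article
print(f"Total die candidates: {risk_wafers * candidates_per_wafer:,}")        # 936,000 ("936K chips")

wafer_cost = 5_000             # USD per wafer, per the article
good_dies_per_wafer = 10       # the article's optimistic "ten times" scenario
print(f"Cost per good die: ${wafer_cost / good_dies_per_wafer:,.0f}")         # $500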

This is kind of disheartening. I've been looking forward to these cards for a while now.
 
If this is in any way true, then Nvidia is pretty much screwed. Perhaps with Larrabee's failure, it wouldn't be a bad idea for Intel to buy Nvidia, like AMD did with ATI.
 
Repeat?

Techreport quotes some of Charlie's article (semiaccurate.com), and Anandtech, in an article about ATI's 5000 series, also mentions similar things (long read, but a good inside look).

There are two things chip companies have to iron out and tweak as they sell cards, and they only change one of them at a time so they don't complicate the issue: a new architecture (chip design), and scaling to a smaller process at the fab. Both of these get perfected as they go along.
That's why new cards come out in the fall, and then in the spring the second release is the same architecture but with slightly higher clocks and cooler-running chips. Also, to shake out a new design they don't want to ship the high-volume cards first; that's why the high-priced cards sport the new design first, and it then gets extended to midrange and low-end parts.

Now, ATI tested TSMC's new 40nm process on a midrange chip last year (the 4770), perfected it, and had 40nm figured out by the time they started the 5000 series.
Nvidia tested a low-end chip, which is much smaller. They got better yields on that first run than they would have with a bigger chip, but because things went relatively smoothly they couldn't learn from poor yields what they needed to overcome on TSMC's 40nm process. So Fermi is probably not going well because they didn't yet have enough insight to do a big chip. If that's the case, then it's as if they're testing a new design on a new process at the same time: the two things you don't want to do at once. If it doesn't work out, you get a chip with very low yields, and the working chips run hotter than they should.
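
If you want a feel for why a die that's 60 percent bigger can end up far worse than 60 percent lower yields, here's a rough sketch using the standard Poisson defect-density yield model (Y = exp(-D × A)). To be clear, the model, the defect density, and the die sizes here are my own rough illustration, not numbers from the article:

import math

defect_density = 0.7                 # defects per cm^2; a made-up guess for an immature 40nm process
cypress_area = 3.3                   # cm^2, roughly an ATI Cypress-sized die
fermi_area = cypress_area * 1.6      # the article says Fermi GF100 is about 60 percent larger

for name, area in [("Cypress-sized die", cypress_area), ("Fermi-sized die", fermi_area)]:
    yield_estimate = math.exp(-defect_density * area)   # Poisson model: yield falls exponentially with area
    print(f"{name}: ~{100 * yield_estimate:.0f}% yield")

With those made-up inputs the small die lands around 10 percent yield and the big one around 2 percent, which is why a big chip on a process that isn't dialed in yet gets hammered so badly.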

The other thing the ATI article above mentioned was that the best way to lose a fight is not to show up. ATI used to have tons of things they wanted to stick in their GPUs, and the longer the list of features approved to go into a design, the more likely they were to end up delaying a chip (like the X1800, HD2900, etc.).
So now they try to limit approval to only the most important changes, and if something starts to slip in a way that would push back the release date, they might cut that feature. But at least they get their GPU to market in a timely fashion.
 
Nvidia has an event planned to show what's hot and next (AKA Fermi or GF100).
"Come see NVIDIA unveil the next generation of PC gaming. Want to see what's hot and what's next? If you're even vaguely a fan of PC games and miss this special event, you'll likely be spending the next few months kicking yourself. Line up early as seating is limited. ‘Nuff said."
"Test drive our highly-anticipated, next-generation GPU…you may even
be able to buy one before anyone else"

The marketing line is the same as it was for the NV30 (GeForce FX 5000 series). D:
 
Looking forward to getting some more official information. Was hoping we'd see it in early March, but I guess we would have heard more by now if that was going to happen.

It's kinda strange that after so much bad press and rumors of so many problems they would hype it up so much. Seems like a bad move if they really are having problems, because it could only lead to a much bigger letdown. I also like the image from the 5000 series promo better. This one is just awful.
 
I've lost faith in Nvidia since the 9-series, as all they seem to be doing is rebranding older cards and, in the process, confusing the **** out of me by making it more difficult to compare their products to each other. I'm sure a lot of people feel the same.
 
I'm actually really interested to see what they can do with GPU compute stuff. But I won't be buying at high prices, which is where those types of features will probably be. And if making a card that can do awesome GPU compute stuff makes it bigger and more expensive, then it will probably be difficult to make a version without it at lower prices (a pure gaming card), which would be bad for gamers. I'm not even talking about FPS performance, but price and the speed at which they push the performance parts down to different price ranges. If ATI is doing top-to-bottom new tech every fall, and Nvidia still ends up with old tech at the bottom and new tech only at the top with prices not coming down as quickly, then that isn't good for the gamer who wants an affordable gaming card now. It would be difficult for Nvidia to keep their loyal gaming customers if they had to wait until the second refresh in spring to get good $200 gaming cards with new tech. I'm sure performance would be competitive, but if they can't keep their base then it doesn't matter.

Now, that wouldn't exactly be bad for Nvidia, as they are going for the workstation market, which actually pays a ton more for the hardware (thousands). It would just be bad for the gamers...
 
I've been very unimpressed with Nvidia lately. Midrange ATI cards seem to outperform even Nvidia's top cards (or close to it, from what I've seen).
 