Best Graphics Card in the World

nick_t

What's the best card in the world? The ones they use for producing 3D movies like Shrek, Finding Nemo, and Lord of the Rings must be crazy! I guess it's probably a bunch of subordinate computers with really good cards making each of the components, and then everything is put together on a few supercomputers that must've cost a lot of $$$. Please share your knowledge. :D
 
Doing stuff like that takes much more processing power than a video card can provide; it's not really the video card doing the work.
 
Hazar Dakiri said:
Doing stuff like that takes much more processing power than a video card can provide; it's not really the video card doing the work.

I agree, that's why I said they upload the components to "supercomputers", but they must have amazing video cards as well!
 
3D graphics cards for games are very different from professional graphics cards for workstations. Also, creating video like that isn't done by 3D cards either, but rather by the CPUs.
 
Cool, if you have links to more info, please post them. I'll check out Encarta when I get a chance... too late now.
 
This is an article summarizing the scale of the render farm at Pixar Animation Studios (the company that made Toy Story, Monsters Inc, Finding Nemo, and plenty of work on other CG movies). It's quite large, but not the largest. I think ILM has a bigger one now. I could be wrong though.
Pixar renderFarm
A render farm is a cluster of computers that work together to render images.
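For anyone curious how that works in principle, here's a toy sketch (entirely made-up names, nothing like Pixar's actual software): every frame is an independent job, so the farm just hands each frame to whichever node is free and collects the finished images.

```python
# Toy illustration of the render-farm idea: frames are independent jobs,
# so they can be farmed out to any free worker. The "render" here is faked
# with a short sleep; a real renderer would spend hours of CPU time per frame.
import time
from concurrent.futures import ThreadPoolExecutor

NUM_NODES = 4          # pretend each thread is one blade in the farm
FRAMES = range(1, 25)  # one second of film at 24 fps

def render_frame(frame_number):
    time.sleep(0.1)  # stand-in for the actual per-frame rendering work
    return f"frame_{frame_number:04d}.exr"

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=NUM_NODES) as farm:
        for finished in farm.map(render_frame, FRAMES):
            print("rendered", finished)
```

That independence between frames is the whole reason a big pile of ordinary servers works so well for this kind of job.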

This is an excerpt from an article about Weta Digital, the company that did the Lord of the Rings special effects. On a side note, I saw an article that I can't seem to find now which said that some of the shots took nearly 2 days per frame to render on this render farm. That's insane.
Peter Jackson's special effects shop Weta Digital has teamed up with Telecom subsidiary Gen-i to establish a world-class supercomputing facility in Wellington which will be rented out to clients worldwide.

The New Zealand Supercomputing Centre, which launches today, ranks 80th on the list of the world's 500 most powerful computers and is the largest supercomputing cluster available for commercial hire in the southern hemisphere, according to the firms.

It has already attracted interest from potential clients in New York, says Telecom hosting and storage manager Eric Pilon.

The supercomputer itself comprises 504 IBM blade servers, each of which contains two 2.8 Gigahertz Intel Xeon processors, 6 Gigabytes of memory and 40 Gigabytes of storage.

Weta originally purchased the blades to create the special effects for The Lord of the Rings trilogy and together they are capable of performing 2.8 trillion calculations per second.

Plans for the supercomputer facility were first mooted in March.

Having now inked a deal, Weta and Gen-i hope to upgrade the centre by adding extra blades to put it in the top 10 list of the world's largest supercomputers.

Note that there's no mention of video cards anywhere in any of that. It's all about the CPU and RAM when it comes to cinematic rendering.
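For what it's worth, the quoted numbers roughly check out if you use the usual shorthand of about one calculation per clock cycle per processor (that's an assumption on my part, not something the article states):

```python
# Back-of-envelope check on the figures quoted above.
blades = 504
cpus_per_blade = 2
clock_hz = 2.8e9  # 2.8 GHz Xeons

total_calcs_per_sec = blades * cpus_per_blade * clock_hz
print(f"{total_calcs_per_sec:.2e} calculations per second")  # ~2.8e12, i.e. ~2.8 trillion

total_ram_gb = blades * 6    # 6 GB of memory per blade
total_disk_gb = blades * 40  # 40 GB of storage per blade
print(total_ram_gb, "GB of RAM and", total_disk_gb, "GB of disk across the farm")
```

So the 2.8 trillion figure is basically just clock speed times processor count, with around 3 TB of RAM spread across the farm.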
 
Well, for actually drawing out the "rough edges" and whatnot, I imagine they use something like this
 
Dirk Pitt said:
Well, for actually drawing out the "rough edges" and whatnot, I imagine they use something like this

I think those kinds of cards are purely for fast viewport navigation.

In fact, you can turn your GeForce into a Quadro with purely software changes.

And rendering doesn't use the video card at all; it's all software work on the CPU, because video cards don't natively support the stuff the renderer puts out.
 
For those movies, they spend a long-ass time rendering them out, probably something like 10 hours for a minute's worth of video. But since it's recorded down frame by frame, they can play it back at normal speed.
 
Wraith said:
For those movies, they spend a long-ass time rendering them out, probably something like 10 hours for a minute's worth of video. But since it's recorded down frame by frame, they can play it back at normal speed.
It actually takes longer than that - depending on the project, of course. The main reason (AFAIK) they don't render on video cards is that the bandwidth is asymmetric (it would take a lot longer to get the data back off the card than it does to put it on*).

*This may change with PCI express but it'll probably still be CPU dominated.
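Just to put some rough numbers on the guesses above (assuming film at 24 frames per second; these are back-of-envelope figures, not anything official):

```python
# Rough arithmetic comparing "10 hours per minute of video" with the
# "2 days per frame" figure mentioned earlier in the thread.
fps = 24
frames_per_minute = fps * 60  # 1440 frames in a minute of footage

# 10 hours for a minute of video works out to roughly 25 seconds per frame:
print(10 * 3600 / frames_per_minute, "seconds per frame")

# At the 2-days-per-frame extreme, a single machine would need years per
# minute of footage, which is why the work gets spread across a render farm:
days_per_frame = 2
print(days_per_frame * frames_per_minute / 365, "machine-years per minute of footage")
```

Either way, the per-frame cost is why everything is rendered offline and just played back at normal speed afterwards.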
 
I'm sure 3D Labs did (and do) make that sort of thing.
 