Lucid's chip for multi GPUs

Asus
Joined Aug 22, 2003
http://techreport.com/articles.x/15367
The Hydra 100 then appears to the host OS as a PCIe device, with its own driver. It intercepts calls made to the most common graphics APIs—OpenGL, DirectX 9/10/10.1—and reads in all of the calls required to draw an entire frame of imagery. Lucid's driver and the Hydra 100's RISC logic then collaborate on breaking down all of the work required to produce that frame, dividing the work required into tasks, determining where the bottlenecks will likely be for this particular frame, and assigning the tasks to the available rendering resources (two or more GPUs) in real time—for graphics, that's within the span of milliseconds. The GPUs then complete the work assigned to them and return the results to the Hydra 100 via PCI Express. The Hydra streams in the images from the GPUs, combines them as appropriate via its compositing engine, and streams the results back to the GPU connected to the monitor for display.

As I understand it, because data is streamed from the GPUs into the compositing engine pixel by pixel, and because the compositing engine immediately begins streaming back out the combined result, the effective latency for the compositing step is very low.

So basically they found a better way to do multi-GPU rendering (like SLI/Crossfire), using an additional chip to handle the load balancing. Instead of splitting the workload in half (each card renders every other frame, or one renders the top of the screen and the other the bottom) and living with the resulting bottlenecks (SLI/Crossfire's methods are fixed), their chip decides on the fly which GPU is free, or better suited, to draw each texture, model, shader, etc.
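To make the difference concrete, here's a toy sketch of a fixed split versus per-frame dynamic balancing. The task names, costs, and the greedy heuristic are all assumptions for illustration; this is not Lucid's actual algorithm, just the general idea of assigning work based on load rather than screen position.

```python
def fixed_split(tasks, num_gpus=2):
    """SFR-style fixed split: GPU 0 always gets the first half of the
    task list, GPU 1 the second, regardless of how heavy each task is."""
    half = len(tasks) // 2
    return [tasks[:half], tasks[half:]]

def dynamic_balance(tasks, num_gpus=2):
    """Greedy dynamic balancing: each task goes to whichever GPU has
    accumulated the least estimated work so far this frame."""
    queues = [[] for _ in range(num_gpus)]
    loads = [0.0] * num_gpus
    # Assigning heavier tasks first makes the greedy packing tighter.
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        gpu = loads.index(min(loads))
        queues[gpu].append(name)
        loads[gpu] += cost
    return queues, loads

# A frame whose cost is concentrated in a few draws: a fixed half/half
# split by task count could leave one GPU idle while the other grinds.
frame = [("skybox", 1.0), ("floor", 4.0), ("column", 2.0),
         ("wall", 3.0), ("character", 6.0), ("particles", 2.0)]
queues, loads = dynamic_balance(frame)
print(loads)  # → [9.0, 9.0] — an even split of the 18 units of work
```

Because the assignment is recomputed every frame, the mix of surfaces each GPU handles can change frame to frame, which matches the flickering effect described in the demo below.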

The demo they ran showed what each GPU was actually processing. Below is the description.
On one screen, we could see the output from a single GPU, while the other showed the output from either the second GPU or, via a hotkey switch, the final and composited frame. GPU 0 was rendering the entire screen space, but only portions of the screen showed fully textured and shaded surfaces (a patch of the floor, a wall, a column, a sky box) while other bits of the screen were black. GPU 1, meanwhile, rendered the inverse of the image produced by GPU 0. Wiggle the mouse around, and the mix of surfaces handled by each GPU changed frame by frame, creating an odd flickering sensation that left us briefly transfixed. The composited images, however, appeared to be a pixel-perfect rendition of UT3. Looking at the final output, you'd never suspect what's going on beneath the covers.
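The compositing step in that demo can be sketched as merging two complementary frames: each GPU's output is black wherever the other GPU did the work, and the compositor keeps the rendered pixel from whichever buffer has it. Real hardware would presumably track ownership with coverage masks rather than testing for black; the black-pixel test here is purely an assumption to keep the illustration simple.

```python
BLACK = (0, 0, 0)

def composite(frame_gpu0, frame_gpu1):
    """Merge two complementary frames pixel by pixel, keeping the pixel
    from whichever GPU actually rendered that part of the screen."""
    assert len(frame_gpu0) == len(frame_gpu1)
    out = []
    for p0, p1 in zip(frame_gpu0, frame_gpu1):
        out.append(p0 if p0 != BLACK else p1)
    return out

# GPU 0 rendered pixels 0 and 2; GPU 1 rendered pixels 1 and 3.
f0 = [(255, 0, 0), BLACK, (0, 255, 0), BLACK]
f1 = [BLACK, (0, 0, 255), BLACK, (255, 255, 0)]
print(composite(f0, f1))
# → [(255, 0, 0), (0, 0, 255), (0, 255, 0), (255, 255, 0)]
```

Because the merge is a simple per-pixel selection, it can be done as the pixels stream through, which is consistent with the low compositing latency claimed above.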

Maybe we can get near perfect scaling now and maybe SLI without Nvidia chipsets.

I wonder if I could get a government subsidy to build my own power plant to run a computer with two high-end graphics cards. I mean, since it appears there may be an actual reason to do SLI now.

I'm siding with the skeptics on this one. Companies that come out of nowhere bragging about amazing power/performance/capabilities usually turn out to be hoaxes.
 
It does seem like there should be smarter ways of doing multi-GPU than letting each GPU do all the texture, model, and shader work for a scene (SLI/Crossfire), because depending on what is in that scene, not all of the shaders and texture units get used per pass. But the big question, IMO, is: even if that method shows a performance increase on paper, is it big enough that you still see a boost after you add that chip into the mix, with the overhead of figuring out how to balance the load and the added latency? (They did have a demo.)
 