The Holy Grail of Graphics Processing

Point clouds and voxel representations have been around for a while, but they're incredibly hard to implement on the GPU and get running in real time. But apparently these guys have figured it out, AND THEY DID IT IN SOFTWARE, i.e. you don't need a graphics card. At this point I'm not entirely sure if it's a hoax or not, but if it's true it could be quite revolutionary.

http://www.youtube.com/watch?v=Q-ATtrImCx4&feature=player_embedded

http://www.youtube.com/watch?v=THaam5mwIR8&feature=player_embedded

http://www.youtube.com/watch?v=l3Sw3dnu8q8&feature=player_embedded
 
I saw these a while back. Looks promising; I'd like to see how they handle animation, moving objects, shadows, etc.
 
I've seen this. It's pretty revolutionary, but the issue will be money. They need to get a good artist in to make things in direct comparison with modern games, put them on the same platform (a crappy one), and show how fast they run while looking good.

The money issue is that ATI and Nvidia make a lot of money selling hardcore graphics cards... I don't think they'd enjoy having every guy on the street with an e-machine or whatever being able to play games that, with polygons, would take a $4,000 computer.

I really do hope someone picks it up though... can't be that much longer. My friend at LucasArts was talking about how impressed they were with it.
 
The major drawback for these presentation videos is that they didn't benefit from the help of professional artists, and unfortunately it shows...
However, the idea behind this technology is brilliant: objects made of an infinity of 3D atoms, but only ever showing on screen the amount dictated by your screen resolution.

The big problems I see with this technology are 1) animation, and 2) physics simulations...
 
Yeah, they'll basically have to create polygonal surfaces to dictate the physical aspect, unless they code the reaction of every single particle in that matrix to a 'physical' stimulus... which I think would ramp up the required processing power by quite a bit. However, placing them on a fairly simple, invisible polygonal surface would make physical simulations achievable. I assume they probably have something like that for general modeling...
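The proxy idea above can be sketched in a few lines. This is purely illustrative (all names are invented, and real engines use much better-fitting shapes): collide two crude bounding spheres that stand in for clouds of millions of points, instead of testing atom against atom.

```python
def sphere_proxy(points):
    """Fit a crude bounding sphere: centroid plus max distance to it."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r = max(((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2) ** 0.5
            for p in points)
    return (cx, cy, cz), r

def spheres_collide(a, b):
    """Two spheres overlap when centre distance <= sum of radii."""
    (ax, ay, az), ar = a
    (bx, by, bz), br = b
    d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return d2 <= (ar + br) ** 2
```

The point is the cost: the proxy is computed once per model, so the per-frame physics work no longer scales with the number of rendered atoms.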

They have a video of one of their early animations on that channel, but I'm too lazy to go back and link it. It doesn't look great... but they don't have actual artists, modelers, or animators, so it's not surprising.
 
Also known as: raytracing.

I call (mostly) bullshit. Traversing a tree structure containing the data points can be pretty efficient. But the problem isn't traversing, it's building that data structure. When your scene is static, you can just generate that structure once, but a dynamic scene requires that you recalculate at least part of that structure, 60 times per second, which isn't going to work if your scene really contains billions of points in a scene as dynamic as that from any game. Incidentally, the videos are pretty damn static.

The "infinite detail" is certainly nonsense, but I'll give them that they probably mean "infinite for all practical intents and purposes".
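To make the build-vs-traverse distinction concrete, here is a minimal point octree, a hedged sketch only (nothing from the actual product, whose internals are unknown): insertion is cheap per point, but rebuilding the whole structure for millions of moving points every frame is where the cost explodes.

```python
class Octree:
    """Minimal point octree; illustrative, not any real engine's code."""

    def __init__(self, center, half_size, capacity=8):
        self.center = center        # (x, y, z) centre of this cube
        self.half_size = half_size  # half the cube's edge length
        self.capacity = capacity    # points a leaf holds before splitting
        self.points = []
        self.children = None        # 8 sub-cubes once split

    def insert(self, p):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.capacity and self.half_size > 1e-6:
                self._split()
            return
        self.children[self._octant(p)].insert(p)

    def _octant(self, p):
        # 3-bit index: which side of each axis the point falls on
        return ((p[0] >= self.center[0])
                | ((p[1] >= self.center[1]) << 1)
                | ((p[2] >= self.center[2]) << 2))

    def _split(self):
        h = self.half_size / 2
        self.children = [
            Octree((self.center[0] + (h if i & 1 else -h),
                    self.center[1] + (h if i & 2 else -h),
                    self.center[2] + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        old, self.points = self.points, []
        for p in old:
            self.children[self._octant(p)].insert(p)
```

Building this for N points is roughly O(N log N); querying it afterwards is cheap. That asymmetry is exactly the objection above: a static scene pays the build cost once, a dynamic one pays it every 16 ms.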
 
This is very intriguing, and it's crazy to think about this possibly being the death of polygons, with a whole new set of authoring tools created to make 3D point models instead of polygonal ones. I don't think animation would be too much of a problem, just a little different. You would still have bone systems and everything, and weighting would still have to be done just like with polygons.

Like Remus said though, they can't make the physics power unlimited, because physics isn't limited to how many pixels are on your screen. Physical interactions can take place at any time, whether on your screen or not. They would indeed have to use polygons as their collision definitions, because making each atom physically responsive to every other atom in the game is far beyond realistic at this point (unless some new technology revolutionizes that, much in the way this has supposedly revolutionized graphics).

I'm very interested in seeing where this goes.
 
I saw this on Polycount a couple of months ago. It is interesting, as this kind of tech might really shake up how artwork for games is made. It seems like it would allow more focus on actual artistic values rather than technical stuff like UVs, edge flow, and low-poly assets in general. I am, however, very skeptical: the hyperbolic nature of their presentation so far makes me wary, and I'll wait for a tech demo before getting excited about it. It could very well be total bullshit.
 
The part with the way those creatures were laid out in that fractal pattern made me kind of suspicious at first. I heard somewhere that rendering the same model thousands of times like that is very quick.
 
Sounds really amazing, but I am also skeptical. It's like cars that run on water or some shit.

Anyway, if they can do it, then do it.
 
What an utter load of bullshit. Call me back when they explain and demonstrate their product in a SIGGRAPH paper.
 
The part with the way those creatures were laid out in that fractal pattern made me kind of suspicious at first. I heard somewhere that rendering the same model thousands of times like that is very quick.

Yep, you could simply cache every point and then apply a scalar offset to each of them. Modern graphics cards are very good at this.
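That instancing idea can be shown in miniature. A hedged sketch in plain Python (real engines do this on the GPU with per-instance buffers; the function name is invented): store the model's points once and produce each copy by adding a per-instance offset.

```python
def instantiate(model_points, instance_offsets):
    """Yield every point of every instance without duplicating
    the model's point data in memory: one cached model, many offsets."""
    for ox, oy, oz in instance_offsets:
        for px, py, pz in model_points:
            yield (px + ox, py + oy, pz + oz)
```

So a scene with thousands of identical creatures costs one model's worth of storage plus one small offset per copy, which is why those fractal layouts of repeated models are cheap and a bit suspicious as a demo.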
 
I don't understand the points vs. polygons part very well.

So "points" means things made of tiny points instead of polygons?
 
I don't understand the points vs. polygons part very well.

So "points" means things made of tiny points instead of polygons?

Right now, pretty much all games use polygons, or shapes, to make 3D models:

[attached image: snapshot_3.bmp]


There can be only so many polygons for the GPU to process, which means that graphics can't be as detailed as the artists want them to be.

What they're supposedly doing is using points, think of atoms or molecules, to make the 3D models, which would normally take a long time for the computer to process. I think they said something about using a search algorithm to find which points are shown on screen, so that the computer doesn't need to process the other points, which speeds up the processing. Because points aren't flat shapes, designers can make models much more detailed and, if what they're saying is true, won't have to worry about a limit on that detail.
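The "only process the points on screen" idea can be illustrated with a toy point splatter. This is an assumption-laden sketch, not their algorithm (their claimed trick is a hierarchical search that avoids even touching hidden points; this brute-force version only shows the output side): project every point and keep just the nearest one per pixel, so what reaches the screen is bounded by resolution, not model size.

```python
def splat_points(points, width, height, focal=1.0):
    """points are (x, y, z) in camera space with z > 0 in front.
    Returns a dict mapping pixel -> (depth, point), nearest point wins."""
    buffer = {}
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        # simple pinhole projection to pixel coordinates
        px = int(width / 2 + focal * x / z * width / 2)
        py = int(height / 2 + focal * y / z * height / 2)
        if 0 <= px < width and 0 <= py < height:
            best = buffer.get((px, py))
            if best is None or z < best[0]:
                buffer[(px, py)] = (z, (x, y, z))
    return buffer
```

However detailed the model, the result never holds more entries than there are pixels, which is the "detail bounded by screen resolution" claim in miniature.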
 
Also known as: raytracing.

I call (mostly) bullshit. Traversing a tree structure containing the data points can be pretty efficient. But the problem isn't traversing, it's building that data structure. When your scene is static, you can just generate that structure once, but a dynamic scene requires that you recalculate at least part of that structure, 60 times per second, which isn't going to work if your scene really contains billions of points in a scene as dynamic as that from any game. Incidentally, the videos are pretty damn static.

The "infinite detail" is certainly nonsense, but I'll give them that they probably mean "infinite for all practical intents and purposes".

Right on. This is nothing new, clever, or useful for games. It might be well done, but it's not going to be moving or interactive, just static. I'm not sure about the physics (meaning collision detection). If that last part worked out, this might be interesting for the parts of a scene that are entirely static (like the level itself).

The amount of storage space needed is another story. I can imagine disk storage could be efficient with usage of procedural algorithms, but for rendering, you need those points in memory (if you want to do it quickly).
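A quick back-of-envelope calculation supports that storage concern, assuming a naive layout of three 32-bit floats for position plus 4 bytes of colour per point (real formats compress far better, so treat this as an upper-bound sketch):

```python
# Naive per-point cost: 3 floats of position + RGBA colour
bytes_per_point = 3 * 4 + 4          # 16 bytes
points = 1_000_000_000               # "billions of points"
gib = points * bytes_per_point / 2**30
print(f"{gib:.1f} GiB")              # roughly 15 GiB for one billion points
```

So even one billion raw points is on the order of 15 GiB, which is why procedural generation on disk still doesn't solve the in-memory problem for fast rendering.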


If there's anyone here with more knowledge about voxel engines, feel free to shoot my arguments down if they're flawed.
 