What is "image-based rendering" ?

Jpennin1

Guest
In the new Gabe Newell interview at driverheaven.net, Gabe states that one technology that got cut from the game but was very cool was "image-based rendering techniques that Gary McTaggart was working on."

Does anyone know what this is referring to exactly?

Thanks!
 
It's a technique that uses actual photos taken of real places to render the surroundings.
 
That would have been really cool. I think Gran Turismo 4 uses that technology.
 
If it's what I think it is, a lot of games use it. It's most commonly referred to as pre-rendered.....uuummmmm......damn it, I just blanked and can't think of what it's called. It's something along the lines of pre-rendered backgrounds, and yes, GT4 is using it
 
And on a side note, that post finally put me into the Zombie rank, and to think, I've only been posting here since like June.
 
Originally posted by another-user
And on a side note, that post finally put me into the Zombie rank, and to think, I've only been posting here since like June.

As for the pre-rendered thing, I'm pretty sure that's not what it is.. seeing as how a lot of older games have done that, and can do that.. but I could be wrong (I say that a lot ;) lol)
 
Originally posted by Shuzer
As for the pre-rendered thing, I'm pretty sure that's not what it is.. seeing as how a lot of older games have done that, and can do that.. but I could be wrong (I say that a lot ;) lol)

It's basically HDRI (high dynamic range image): instead of lights, you use an image to light and render a scene. People are confusing image-based rendering with image-based modeling, which uses simple geometry and photos to model objects that appear real(ish).
 
Originally posted by Fenric1138
It's basically HDRI (high dynamic range image): instead of lights, you use an image to light and render a scene. People are confusing image-based rendering with image-based modeling, which uses simple geometry and photos to model objects that appear real(ish).

Ah, that makes more sense. Thought it wasn't what another-user was trying to get at
 
Originally posted by Shuzer
Ah, that makes more sense. Thought it wasn't what another-user was trying to get at

I'm interested to see how they implement it in Hammer. Traditionally it was usually just a special kind of image wrapped around the inside of an infinitely large sphere. Maybe it just takes the lighting from the skybox or something
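
Roughly, "using an image as the lights" works something like this rough numpy sketch; the latitude-longitude layout and all the names here are just my own assumptions for illustration, nothing Valve has confirmed:

import numpy as np

def sample_equirect(env, direction):
    # Look up the colour the environment image stores for a world direction,
    # assuming a latitude-longitude (equirectangular) layout.
    x, y, z = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi
    h, w, _ = env.shape
    return env[min(int(v * h), h - 1), min(int(u * w), w - 1)]

def diffuse_from_env(env, normal, samples=256):
    # Very rough diffuse lighting: average the environment image over random
    # directions in the hemisphere above the surface normal.
    rng = np.random.default_rng(0)
    total, n = np.zeros(3), 0
    while n < samples:
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        cos_term = float(np.dot(d, normal))
        if cos_term <= 0.0:        # keep only directions above the surface
            continue
        total += sample_equirect(env, d) * cos_term
        n += 1
    return total / samples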
 
That'd be really cool...too bad they didn't do that..

EDIT: Oohh.. prowler..
 
When I model I tend to not use backdrops much; I usually just have a reference object and then make my own changes to make it look cool.
 
Are you sure it is HDRI? I thought HDRI was in the game; I have a Bink of it.
 
Maybe they mean this with "image processing" rendering:


http://www.debevec.org/HDRShop/


High dynamic range imaging is used a lot in computer graphics to simulate very highly reflective surfaces, mostly chrome balls or mirrors. :cool:

However, for games I don't see it used that much, unless in a game you really need highly reflective spheres (metals and water don't reflect environments that much) for people to observe that effect, at the expense of some performance and/or heavy use of your free RAM, because each image takes more megs than other images usually do.
 
That would have been nice to see in game. Just load up a .hdr image for your skybox backdrop, and all the lighting would be done with 100% perfection for you automatically. Well, there's always hope for it in HL3.
 
Para, they are High Dynamic Range Images that are used.
 
You're all completely wrong :p

Image-Based Rendering Project Overview
In the pursuit of photo-realism in conventional polygon-based computer graphics, models have become so complex that most of the polygons are smaller than one pixel in the final image. At the same time, graphics hardware systems at the very high end are becoming capable of rendering, at interactive rates, nearly as many triangles per frame as there are pixels on the screen. Formerly, when models were simple and the triangle primitives were large, the ability to specify large, connected regions with only three points was a considerable efficiency in storage and computation. Now that models contain nearly as many primitives as pixels in the final image, we should rethink the use of geometric primitives to describe complex environments.

We are investigating an alternative approach that represents complex 3D environments with sets of images. These images include information describing the depth of each pixel along with the color and other properties. We have developed algorithms for processing these depth-enhanced images to produce new images from viewpoints that were not included in the original image set. Thus, using a finite set of source images, we can produce new images from arbitrary viewpoints.





Impact
The potential impact of using images to represent complex 3D environments includes:

* Naturally "photo-realistic" rendering, because the source data are photos. This will allow immersive 3D environments to be constructed for real places, enabling a new class of applications in entertainment, virtual tourism, telemedicine, telecollaboration, and teleoperation.
* Computation proportional to the number of output pixels rather than to the number of geometric primitives as in conventional graphics. This should allow implementation of systems that produce high-quality, 3D imagery with much less hardware than used in the current high-performance graphics systems.
* A hybrid with a conventional graphics system. A process we call "post-rendering warping" allows the rendering rate and latency to be decoupled from the user's changing viewpoint. Just as the frame buffer decoupled screen refresh from image update, post-rendering warping decouples image update from viewpoint update. We expect that this approach will enable immersive 3D systems to be implemented over long distance networks and broadcast media, using inexpensive image warpers to interface to the network and to increase interactivity.


Research Challenges
There are many challenges to overcome before the potential advantages of this new approach to computer graphics are fully realized.

* Real-world data acquisition—We are developing algorithms and building sensors for acquiring the image and depth data required as input to the method. One, the DeltaSphere 3000, is available commercially.
* Compositing multiple source images to produce a single output image—As the desired viewpoint moves away from the point at which a source image was taken, various artifacts appear in the output image. These artifacts result from exposure of areas that were not visible in the source image. By combining multiple source images we can fill in the previously invisible regions.
* Hardware architecture for efficient image-based rendering—Design of current graphics hardware has been driven entirely by the processing demands of conventional triangle-based graphics. We believe that very simple hardware may allow for real-time rendering using this new paradigm.





Research Highlights
This image-based rendering research is the latest step in a twenty-year history of developing custom computer graphics hardware systems at the leading edge of rendering performance. The Pixel-Planes series of machines started in the early 1980s and culminated in Pixel-Planes 5 in 1991. This system was for several years the fastest graphics engine anywhere. The PixelFlow system, built in collaboration with Hewlett-Packard, set a record in rendering performance and image quality.

source: http://www.cs.unc.edu/~ibr/

It has nothing to do with HDRI.
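
To make that UNC description a bit more concrete, here's a toy numpy sketch of the two steps it keeps mentioning: re-projecting a colour+depth source image into a new viewpoint, then compositing several warped sources so one fills the holes (the disoccluded areas) another leaves behind. The pinhole-camera setup and every name in it are my own simplifications, not their actual implementation:

import numpy as np

def warp_depth_image(color, depth, K, cam_to_world_src, world_to_cam_dst):
    # Re-project a colour+depth source image into a new viewpoint.
    # color: (H, W, 3), depth: (H, W) z-depth, K: shared 3x3 intrinsics,
    # the two pose arguments are 4x4 rigid transforms (my assumptions).
    h, w, _ = color.shape
    out = np.zeros_like(color)
    hit = np.zeros((h, w), dtype=bool)
    zbuf = np.full((h, w), np.inf)

    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)

    # back-project every source pixel to a 3-D point using its depth
    rays = pix @ np.linalg.inv(K).T
    pts = rays * depth.reshape(-1, 1)
    pts = pts @ cam_to_world_src[:3, :3].T + cam_to_world_src[:3, 3]

    # project the points into the destination camera
    pts = pts @ world_to_cam_dst[:3, :3].T + world_to_cam_dst[:3, 3]
    proj = pts @ K.T
    ok = proj[:, 2] > 1e-6
    uv = np.round(proj[ok, :2] / proj[ok, 2:3]).astype(int)
    z = pts[ok, 2]
    rgb = color.reshape(-1, 3)[ok]

    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), zd, c in zip(uv[inside], z[inside], rgb[inside]):
        if zd < zbuf[v, u]:            # keep the nearest surface per pixel
            zbuf[v, u], out[v, u], hit[v, u] = zd, c, True
    return out, hit

def composite(warped_images, masks):
    # Fill the holes one warped source leaves with pixels from the others.
    out = np.zeros_like(warped_images[0])
    covered = np.zeros(masks[0].shape, dtype=bool)
    for img, mask in zip(warped_images, masks):
        take = mask & ~covered
        out[take] = img[take]
        covered |= take
    return out, covered                # anything still uncovered is a hole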
 
Originally posted by iamaelephant
You could have just posted the link fool.

Naa, let him have his moment :) He's been trying all week to get one up on someone/anyone, and for all that effort I think he deserves a couple of minutes of smugness for googling a site we'd all googled ages ago.
 
Well if you had googled the site then why were you discussing it?

Also, why are you flaming me? All I did was quote the relevant part of the website. :flame:
 
Originally posted by mrchimp
Well if you had googled the site then why were you discussing it?

Because someone wanted to know what they were using in the game, which is HDR. If you must know, the thing you described simply used the name image-based rendering because it sounded good; previously that's been a description of HDRI and high-dynamic-range-based rendering methods in the pre-rendered industry (which is what I was replying to, if you notice the quote I replied to, it says pre-rendered). So in effect everyone was right, except you. But hey, you wanted an explanation :p

Also, why are you flaming me? All I did was quote the relevant part of the website. :flame:

Err, nobody is flaming you. As for the large quote, notice the bit above my avatar that says staff; if I had a problem with your quote, don't you think I would have simply deleted it from your post? Which I haven't done and don't currently see any need to do either, so I don't see how I have a problem with it. Please explain this to me?
 
Originally posted by Fenric1138
Because someone wanted to know what they were using in the game, which is HDR. If you must know, the thing you described simply used the name image-based rendering because it sounded good; previously that's been a description of HDRI and high-dynamic-range-based rendering methods in the pre-rendered industry (which is what I was replying to, if you notice the quote I replied to, it says pre-rendered). So in effect everyone was right, except you. But hey, you wanted an explanation :p



Err, nobody is flaming you. As for the large quote, notice the bit above my avatar that says staff; if I had a problem with your quote, don't you think I would have simply deleted it from your post? Which I haven't done and don't currently see any need to do either, so I don't see how I have a problem with it. Please explain this to me?


You think I was saying you were wrong? Well, I wasn't. I know HDR is in the game and wasn't debating that. Image-based rendering is what that website says it is and nothing else.

Also I would consider "Naa, let him have his moment, he's been trying all week to get one up on someone/anyone, and for all that effort I think he deserves a couple of minutes of smugness for googling a site we'd all googled ages ago." and
"You could have just posted the link fool." flaming.
 
Originally posted by mrchimp
You think I was saying you were wrong? Well, I wasn't. I know HDR is in the game and wasn't debating that. Image-based rendering is what that website says it is and nothing else.

That's what I was talking about, and I said they do get confused between the group of them, similar names but different methods. Another example is the physics. In the pre-rendered world we call those rigid, soft body or particle dynamics, yet in realtime games dynamics means something different: dynamic lights, battles, etc. Then it's further confused when some developers call standard effects by different names, such as Far Cry's polybump method, which is simply a normal map.

Also I would consider "Naa, let him have his moment, he's been trying all week to get one up on someone/anyone, and for all that effort I think he deserves a couple of minutes of smugness for googling a site we'd all googled ages ago." and
"You could have just posted the link fool." flaming.


As for the other comment, that wasn't mine so I can't comment on what he meant by it. I personally preferred the quote, regardless of how big it is, as the site itself is buggy for me: images aren't working, the pages are slow and links are failing, so the quote was appreciated. But if you're offended by it or another post I can have it removed if you want me to?
 
Holy cow! This shit sounds like ****ing Blade Runner!



... gimme a hardcopy of that ...

:dork:
 
Originally posted by Fenric1138
That's what I was talking about, and I said they do get confused between the group of them, similar names but different methods. Another example is the physics. In the pre-rendered world we call those rigid, soft body or particle dynamics, yet in realtime games dynamics means something different: dynamic lights, battles, etc. Then it's further confused when some developers call standard effects by different names, such as Far Cry's polybump method, which is simply a normal map.




As for the other comment, that wasn't mine so I can't comment on what he meant by it. I personally preferred the quote, regardless of how big it is, as the site itself is buggy for me: images aren't working, the pages are slow and links are failing, so the quote was appreciated. But if you're offended by it or another post I can have it removed if you want me to?


You're confusing me now, but image-based rendering and modeling appear to be the same thing.

http://www.debevec.org/IBMR99/
 
A little OOC, iamaelephant.

The first comment was about what HDRI was, the second was about it being in the game.
 
Hmm, I only know one place where they actually use HDRI images, and that's in computer-generated scenes that are way too high-poly for games.

Basically, in CG apps lights are too perfect, so to get a decent light setup we use HDRIs as backdrop images since they contain light data. Then, with the use of the all-brilliant global illumination, it gives the correct lighting for a realistic scene, with soft shadows and each light having its own brilliant fine settings.

The only problem is that you need a special camera to get this type of light data in a photo, or you've got to take multiple pictures at different exposures or something and make them into one HDRI image.

I don't know much about this kind of stuff, so correct me if I'm wrong!!!

So is Valve using these types of images at all, just for clarification? :bounce:
 
Originally posted by jat
Hmm, I only know one place where they actually use HDRI images, and that's in computer-generated scenes that are way too high-poly for games.

Basically, in CG apps lights are too perfect, so to get a decent light setup we use HDRIs as backdrop images since they contain light data. Then, with the use of the all-brilliant global illumination, it gives the correct lighting for a realistic scene, with soft shadows and each light having its own brilliant fine settings.

The only problem is that you need a special camera to get this type of light data in a photo, or you've got to take multiple pictures at different exposures or something and make them into one HDRI image.

I don't know much about this kind of stuff, so correct me if I'm wrong!!!

So is Valve using these types of images at all, just for clarification? :bounce:

Pretty much right, yeah, only poly counts don't make any difference with HDR. It will work and look good with lots of or few polys in a scene. Fiat Lux was very, very simple geometry with images of the Cathedral mapped onto it, and it passed as photorealistic and fooled a lot of people who actually thought they'd really set up a huge domino rally in there :D

Taking a pic at different exposures with any old camera that can do that is a very common and easy way of doing it; a lot of pros go that route rather than trusting their equipment. It has its drawbacks though: if the scene is moving it won't work, but usually for an HDR image it won't be moving anyway, so that's not a big issue.. HDRShop can take those different exposures and compile them into a single .hdr file, and I think a TIFF file, I think those have the extra depth available for HDRI.. or was it Targa.. erm.... oh whatever, it's one or the other :) Oh, .pic format too, but I dunno if that's the Mac .pic format or the other version.
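
For anyone wondering what "compile them into a single .hdr" roughly involves, here's a toy numpy version of the bracketed-exposure trick. Proper tools like HDRShop follow Debevec and Malik and recover the camera's response curve first; this sketch just assumes the camera is linear, and all the names in it are mine, not HDRShop's:

import numpy as np

def merge_exposures(images, exposure_times):
    # images: bracketed shots of a static scene, (H, W, 3) floats in [0, 1]
    # (assumed to be linear); exposure_times: matching shutter times.
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # trust mid-tones most; near-black and blown-out pixels tell us
        # little about the true radiance
        weight = 1.0 - np.abs(2.0 * img - 1.0)
        num += weight * (img / t)
        den += weight
    return num / np.maximum(den, 1e-8)   # the HDR radiance map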

I am curious though _how_ Valve have incorporated it into the engine. Maybe their version simply allows lights to have an intensity greater than 255 (which would, when radiosity is involved, as in the compiling of a map, be considered High Dynamic Range), and the Source demo does call it HDR, not HDRI, so it seems it could well be that it does allow greater intensity values. Or maybe it can also make use of HDR backdrops or cube maps or something, which Lightwave users will be in luck with since LW can generate those out of the box.
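
If it really is just "intensities above 255", then the display side is basically a tone-map back down to the 0-255 range, something along these lines (a generic Reinhard-style curve, purely a guess at the idea, not whatever operator Source actually uses):

import numpy as np

def tonemap(radiance, exposure=1.0):
    # radiance: (H, W, 3) float buffer where values can go well above 1.0
    scaled = radiance * exposure
    ldr = scaled / (1.0 + scaled)          # compress the highlights smoothly
    return np.clip(ldr * 255.0, 0, 255).astype(np.uint8)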

Someone should email him to find out.. I would, but he doesn't reply to my emails any more.. I think they're too complex for him ;(
 
Radiosity? I'll be very impressed if HL2 does support light bounces for soft shadows... Isn't that a Lightwave term? Because in all other apps I was told it is called global illumination..
 
Originally posted by jat
Radiosity? I'll be very impressed if HL2 does support light bounces for soft shadows... Isn't that a Lightwave term? Because in all other apps I was told it is called global illumination..

It's called radiosity in the HL engine, and it's always had it; when you compile a map in Worldcraft/Hammer it calculates the radiosity, the bounces of light.. It only works for static lights though.
 
HL2 doesn't calculate GI in real time, that would be a waste :)

When you compile your map it calculates a GI solution and 'bakes' it onto the map, which is really the smartest way of doing it.
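
As a toy illustration of the bake-versus-runtime split (this only gathers direct light from point lights; a real radiosity compile also accumulates the bounced light, and none of these names are Valve's):

import numpy as np

def bake_lightmap(texel_positions, texel_normals, lights):
    # Offline 'compile' step: work out diffuse lighting once per lightmap
    # texel from static point lights and store it in a texture.
    # lights: list of (position (3,), colour (3,), intensity) tuples.
    h, w, _ = texel_positions.shape
    lightmap = np.zeros((h, w, 3))
    for lpos, lcol, lint in lights:
        to_light = lpos - texel_positions
        dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
        ndotl = np.sum(texel_normals * to_light / dist, axis=-1, keepdims=True)
        lightmap += np.maximum(ndotl, 0.0) * (lint / np.maximum(dist ** 2, 1e-6)) * lcol
    return lightmap

def shade_at_runtime(albedo, lightmap):
    # Run-time step: no lighting maths at all, just multiply by the baked map.
    return albedo * lightmap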
 
I'm glad I was eventually vindicated, I knew it wasn't HDRI.

Games have had GI for a while, but it is not done dynamically, it is pre-rendered and 'baked' on. Just look at Max Payne 2.

That image-based rendering seems like a waste of time to me. Ray tracing is where it's at, baby.

EDIT: I'm getting slow in my old age :)
 
Originally posted by Incitatus
HL2 doesn't calculate GI in real time, that would be a waste :)

When you compile your map it calculates a GI solution and 'bakes' it onto the map, which is really the smartest way of doing it.

I never said it did it in real time :)

--

Actually, come to think of it, I don't really know why this thread is even here; it's not a directly HL2-related thread, it would be better off in General Editing methinks.
 