Originally posted by another-user
and on a side note, that post finally put me into the Zombie rank, and to think, I've only been posting here since like June
Originally posted by Shuzer
As for the pre-rendered thing, I'm pretty sure that's not what it is, seeing as how a lot of older games have done that, and can do that... but I could be wrong (I say that a lot, lol)
Originally posted by Fenric1138
It's basically HDRI (high dynamic range imaging): instead of lights, you use an image to light and render a scene. People are confusing image-based rendering with image-based modeling, which uses simple geometry and photos to model objects that appear real(ish)
Originally posted by Shuzer
Ah, that makes more sense. I thought it wasn't what another-user was trying to get at
Image-Based Rendering Project Overview
In the pursuit of photo-realism in conventional polygon-based computer graphics, models have become so complex that most of the polygons are smaller than one pixel in the final image. At the same time, graphics hardware systems at the very high end are becoming capable of rendering, at interactive rates, nearly as many triangles per frame as there are pixels on the screen. Formerly, when models were simple and the triangle primitives were large, the ability to specify large, connected regions with only three points was a considerable efficiency in storage and computation. Now that models contain nearly as many primitives as pixels in the final image, we should rethink the use of geometric primitives to describe complex environments.
We are investigating an alternative approach that represents complex 3D environments with sets of images. These images include information describing the depth of each pixel along with the color and other properties. We have developed algorithms for processing these depth-enhanced images to produce new images from viewpoints that were not included in the original image set. Thus, using a finite set of source images, we can produce new images from arbitrary viewpoints.
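The warp the overview describes can be sketched in a few lines. The following is a minimal illustration, assuming a pinhole camera with known intrinsics `K` shared by source and target views; the function name and parameters are illustrative, not from the project itself. Each source pixel is back-projected using its stored depth, moved into the new camera's frame, and splatted into the output with a depth test.

```python
import numpy as np

def warp_depth_image(color, depth, K, src_to_dst):
    """Forward-warp a color image with per-pixel depth into a new viewpoint.

    color:      (H, W, 3) source image
    depth:      (H, W) depth of each pixel
    K:          (3, 3) pinhole intrinsics (assumed shared by both cameras)
    src_to_dst: (4, 4) rigid transform from source to target camera frame
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, one column per pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project each pixel to a 3D point in the source camera frame.
    rays = np.linalg.inv(K) @ pix
    pts = rays * depth.reshape(1, -1)

    # Move the points into the target camera frame.
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_dst = (src_to_dst @ pts_h)[:3]

    # Project into the target image; nearest pixel, nearest depth wins.
    proj = K @ pts_dst
    z = proj[2]
    uu = np.round(proj[0] / z).astype(int)
    vv = np.round(proj[1] / z).astype(int)

    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    src_colors = color.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[vv[i], uu[i]]:
            zbuf[vv[i], uu[i]] = z[i]
            out[vv[i], uu[i]] = src_colors[i]
    return out
```

The holes that appear where the new viewpoint exposes surfaces the source camera never saw are exactly the disocclusion artifacts discussed under Research Challenges below, which is why multiple source images get composited.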
Impact
The potential impact of using images to represent complex 3D environments includes:
* Naturally "photo-realistic" rendering, because the source data are photos. This will allow immersive 3D environments to be constructed for real places, enabling a new class of applications in entertainment, virtual tourism, telemedicine, telecollaboration, and teleoperation.
* Computation proportional to the number of output pixels rather than to the number of geometric primitives as in conventional graphics. This should allow implementation of systems that produce high-quality, 3D imagery with much less hardware than used in the current high-performance graphics systems.
* A hybrid with a conventional graphics system. A process we call "post-rendering warping" allows the rendering rate and latency to be decoupled from the user's changing viewpoint. Just as the frame buffer decoupled screen refresh from image update, post-rendering warping decouples image update from viewpoint update. We expect that this approach will enable immersive 3D systems to be implemented over long distance networks and broadcast media, using inexpensive image warpers to interface to the network and to increase interactivity.
Research Challenges
There are many challenges to overcome before the potential advantages of this new approach to computer graphics are fully realized.
* Real-world data acquisition—We are developing algorithms and building sensors for acquiring the image and depth data required as input to the method. One of these, the DeltaSphere 3000, is available commercially.
* Compositing multiple source images to produce a single output image—As the desired viewpoint moves away from the point at which a source image was taken, various artifacts appear in the output image. These artifacts result from exposure of areas that were not visible in the source image. By combining multiple source images we can fill in the previously invisible regions.
* Hardware architecture for efficient image-based rendering—Design of current graphics hardware has been driven entirely by the processing demands of conventional triangle-based graphics. We believe that very simple hardware may allow for real-time rendering using this new paradigm.
Research Highlights
This image-based rendering research is the latest step in a twenty-year history of developing custom computer graphics hardware systems at the leading edge of rendering performance. The Pixel-Planes series of machines started in the early 1980s and culminated in Pixel-Planes 5 in 1991. This system was for several years the fastest graphics engine anywhere. The PixelFlow system, built in collaboration with Hewlett-Packard, set a record in rendering performance and image quality.
Originally posted by iamaelephant
You could have just posted the link fool.
Originally posted by mrchimp
Well if you had googled the site then why were you discussing it?
Also, why are you flaming me? All I did was quote the relevant part of the website. :flame:
Originally posted by Fenric1138
Because someone wanted to know what they were using in the game, which is HDR. If you must know, the thing you described simply used the name "image based rendering" because it sounded good; previously that's been a description of HDRI and high dynamic range rendering methods in the pre-rendered industry (which is what I was replying to; if you notice the quote I replied to, it says pre-rendered). So in effect everyone was right, except you. But hey, you wanted an explanation
Err, nobody is flaming you. As for the large quote, notice the bit above my avatar that says staff; if I had a problem with your quote, don't you think I would have simply deleted it from your post? Which I haven't done, and don't currently see any need to do either, so I don't see how I have a problem with it. Please explain this to me?
Originally posted by mrchimp
You think I was saying you were wrong? Well, I wasn't. I know HDR is in the game and wasn't debating that. Image-based rendering is what that website says it is and nothing else.
Also I would consider "naa let him have his moment he's been trying all week to get one up on someone/anyone and for all that effort I think he deserves a couple of minutes of smugness for googling for a site we'd all googled for ages ago." and
"You could have just posted the link fool." flameing.
Originally posted by Fenric1138
That's what I was talking about, and I said they do get confused between the group of them: similar names but different methods. Another example is the physics. In the pre-rendered world we call those rigid body, soft body or particle dynamics, yet in realtime games "dynamics" means something different: dynamic lights, battles, etc. Then it's further confused when some developers call standard effects by different names, such as Far Cry's polybump method, which is simply a normal map.
As for the other comment, that wasn't mine, so I can't comment on what he meant by it. I personally preferred the quote, regardless of how big it is, as the site itself is buggy for me: images aren't working, the pages are slow and links are failing, so the quote was appreciated. But if you're offended by it or another post, I can have it removed if you want me to?
Originally posted by mrchimp
You're all completely wrong
Originally posted by mrchimp
You think I was saying you were wrong? Well, I wasn't
Originally posted by jat
Hmm, I only know one place where they actually use HDRI images, and that's in computer-generated scenes that are way too high in poly count for games.
Basically, in CG apps, lights are too perfect, so to get a decent light setup we use HDRIs as backdrop images, since they contain light data. Then, with the use of the all-brilliant global illumination, it gives the correct lighting for a realistic scene, with soft shadows and each light having its own brilliant fine settings.
The only problem is that you need a special camera to get this type of light data in a photo, or you've got to take multiple pictures or something at different light intensities and make them into one HDRI image.
I don't know much about this kind of stuff, so correct me if I'm wrong!!!
So, just for clarification, is Valve using these types of images at all? :bounce:
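The multi-exposure trick jat mentions is real: several bracketed shots at different exposure times get merged into one radiance map. A minimal sketch, assuming a linear camera response and known exposure times (real pipelines, e.g. Debevec-Malik, also recover the camera's response curve); the function name and weighting are illustrative:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed exposures into one HDR radiance map.

    images:         list of (H, W) arrays with values in [0, 1], assumed to
                    have a linear camera response
    exposure_times: matching list of exposure times in seconds
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # "Hat" weight: trust mid-range pixels, distrust the nearly
        # black (noisy) and nearly white (clipped) ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)  # this exposure's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-8)
```

Each exposure sees a different slice of the scene's dynamic range; dividing by exposure time puts them all on a common radiance scale before the weighted average.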
Originally posted by jat
Radiosity? I'll be very impressed if HL2 does support light bounces for soft shadows... isn't that a Lightwave term? Because in all other apps I was told it is called global illumination...
Originally posted by Incitatus
HL2 doesn't calculate GI in real time; that would be a waste.
When you compile your map, it calculates a GI solution and 'bakes' it onto the map, which is really the smartest way of doing it.
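The bake Incitatus describes can be illustrated with a toy gather-style radiosity loop (this is only a sketch of the idea, not Source's actual map compiler); the expensive bounce iteration runs once at compile time, and the result would be stored in the map's lightmaps for the renderer to simply look up at run time:

```python
import numpy as np

def bake_radiosity(emission, albedo, form_factors, bounces=4):
    """Toy compile-time radiosity bake over surface patches.

    emission:     (N,) emitted light per patch
    albedo:       (N,) diffuse reflectance per patch
    form_factors: (N, N) fraction of patch j's light reaching patch i
    Returns total radiosity per patch, the values a compiler would
    store in the lightmap.
    """
    radiosity = emission.copy()
    for _ in range(bounces):
        # Each bounce: gather incoming light, reflect it diffusely,
        # and add the patch's own emission back in.
        radiosity = emission + albedo * (form_factors @ radiosity)
    return radiosity
```

Because this converges to a fixed solution offline, the runtime cost is a texture fetch, which is why baking was the practical choice for 2004-era hardware.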