Detailed Information on HDR & "The Lost Coast"

-smash-

Must Read:

One of the biggest gaming stories of 2004 was the long-awaited release of Half-Life 2, which not only achieved “instant classic” status but also brandished bleeding-edge graphics and gameplay. Attack of the Show unveiled a whole new level for Half-Life 2 that gave gamers a glimpse into the future of PC graphics. Valve’s Doug Lombardi gave an exclusive demonstration of the previously unseen “The Lost Coast,” an upcoming expansion that utilizes a brand-new technology that Valve is implementing in the Source engine. Called “high-dynamic-range lighting,” this new technology enables a leap in lighting realism beyond even the current high benchmark set by Half-Life 2 and the existing Source engine. In this interview, Lombardi describes how you’ll be seeing a lot more of such lighting technology in future games.


What is "The Lost Coast?"

"The Lost Coast" is a single mission that takes place on the Highway 17 area of the game. It is specifically a piece of content that is aspiring to push the envelope in a couple of areas that are more on the technical and art production sides. While it is a new mission and whatnot, it is a free, new thing for all the owners of Half-Life 2 that have the high-end hardware. What we’re specifically trying to do here is not say, “Here’s this big, new piece of content for Half-Life 2,” but instead say, “Here’s this new technology that’s being introduced to the engine,” and it’s being manifested in the way of a new, short mission that people can check out.


What’s this new technology all about?

High-Dynamic Range Lighting is basically the last piece of chasing really good lighting in a digital world. What we’re doing here is completing the promise when we said back in the old days, “We’re going to have real-time lighting.” What that really came down to was, well, your shadow was real and maybe some weapons effects. But for all intents and purposes, each level is basically phony lighting—it’s all full-bright because of the gameplay path, and the hardware wasn’t able to do true lighting. So if you look at an image or any area of Half Life 2, you notice that the sky is usually kind of bright, and the ground is kind of bright—that’s there for gameplay and technology reasons.

But if you look at "The Lost Coast," you’ll notice that there’s contrast as there would be in a photograph. If you took a photo of a sunset, for example, there’ll be hotspots in the picture as well as darker spots, based on your f-stop and aperture speed—how much light you allow in. Your eye is similar to that camera’s eye in that it will adjust to light as different light sources are introduced. So what we’re doing in "Lost Coast" through the use of HDR is, depending on where you are in proximity to the light source and how long you’ve been looking at it, your eyes will adjust and the lighting in that world will adjust.
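For anyone who wants a concrete picture of that adjustment, here is a minimal auto-exposure sketch in Python. It is purely illustrative, not Valve's implementation, and every name and constant in it is made up: each frame, the average scene luminance is measured and the exposure is eased toward the value that would map that average to a comfortable mid-grey.

```python
import math

# Illustrative auto-exposure loop (NOT Valve's Source code; all constants
# here are hypothetical). Each frame: measure average scene luminance,
# then ease the exposure toward the value that centres it on mid-grey.

TARGET_GREY = 0.18   # luminance we want the average pixel to land on
ADAPT_SPEED = 1.5    # how quickly the "eye" adapts, per second

def average_luminance(hdr_pixels):
    """Mean luminance of an HDR frame given as (r, g, b) tuples."""
    lum = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in hdr_pixels]
    return sum(lum) / len(lum)

def adapt_exposure(exposure, hdr_pixels, dt):
    """Ease the current exposure toward the ideal one for this frame."""
    avg = max(average_luminance(hdr_pixels), 1e-4)
    ideal = TARGET_GREY / avg                  # exposure that centres the scene
    blend = 1.0 - math.exp(-ADAPT_SPEED * dt)  # frame-rate-independent easing
    return exposure + (ideal - exposure) * blend

# Stepping out of a dark interior into bright sunlight: over a few seconds
# the exposure winds down instead of snapping, much like your eyes would.
exposure = 1.0
sunlit_frame = [(5.0, 5.0, 4.5)] * 100         # hypothetical HDR pixel data
for _ in range(300):                           # five seconds at 60 fps
    exposure = adapt_exposure(exposure, sunlit_frame, dt=1.0 / 60.0)
print(round(exposure, 3))                      # settles near 0.037
```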


What are some in-game examples?

When you have a light come in from a corner in the room, it’s pretty easy to do a lens flare on the sun; but when you turn around in that level of the game, there’s really no correlation behind you—no hot spot. It’s only when you’re looking directly at it that folks have sort of faked that lens flare of the sunlight. But with HDR, we’re able to bring home that last bit of realism in the lighting of that world. Because it is graphics-related, it’s one of those things that folks really have to see to understand. But like physics and hardware acceleration on graphics going back several years now, this is going to be one of those things that once gamers experience a game that’s been authored to support HDR, it’s going to be a noticeable step back when they play games that don’t have this. The lighting isn’t going to look quite right; things are going to look a little bit static.

If you talk to folks in the Hollywood film world, when they’re creating sets or shooting, they’ll say one of the most important things is how the scene is lit. If the scene is lit wrong, the whole vibe of it will just completely go out the window. And I think we’re going to see that become more important now in games where we’ll be able to set a mood based on the lighting. When you enter a room, we’ll be able to give folks cues and clues as to what’s going to come next just based on the mood of that room and how the lighting of that room comes together. Obviously, there’ll be gameplay implications that we can do because now you have an active eye that reacts to light and muzzle flashes and grenades and those types of things. So hopefully it’ll be one step closer to providing a sense of realism, and specifically in this instance on the scenes and the sets.


What are the system specs for "The Lost Coast?"

Right now it’s still kind of TBD. It’s going to be for the power user, absolutely. It’s going to have some effects on RAM and on processor speed, so you’re going to need to be in the upper echelon. But it’s specifically attacking the GPU so it’s going to be only the very latest GPUs—and even there, we’re still finalizing the details as to which cards exactly.


Have you set a release date yet?

We’re close. It’ll definitely come out this spring. We don’t have an exact date just yet, but like I said it’ll be made available as a free update to all Half-Life 2 owners probably within the next four to eight weeks.

Do you think HDR-type technology will become widespread in future games?

Yes, definitely. I think lighting is the next really big step. Folks have thrown thousands and thousands of polygons at the characters, and they’ve opened up the levels so we’re not constantly loading the levels every five minutes. So we’ve done a lot to expand the space where the action takes place, we’ve populated the space with more characters, and we’ve made those characters more high-definition. But our lighting model is still messed up. And again, going back to the Hollywood example, people would say, “That’s great—you’ve got a lot of actors and you’ve got this big set, but if it’s not lit correctly, what are you doing?” I think that folks will take different approaches to how they’re attacking this problem, but I think you’re going to see a theme amongst those who are making games, specifically those who are making game engines like us, Id, and Epic.

Right now, it’s getting to be a point of diminishing returns on the graphics side. But on the lighting side, there are still a lot of gains to be made. Games definitely follow those trends of technology that catch on. Once somebody has physics in a game and it works, about every game now has to have physics in it. Once Id introduced GL Quake, all of a sudden graphics acceleration became something you had to have. So hopefully this will be a really big advance that folks will see.


Will HDR become a standard for this kind of lighting technology?

I think we’re going to see a lot of people implement it differently, both from how it manifests itself to the user as well as how they’re doing the magic underneath the hood. I think it’ll take a while for a certain method to be addressed as like, “These guys did it best.” You’ll always see people doing it a little bit differently based on what kind of game they’re trying to do. And the bottom line is that as long as everybody’s moving this forward, it’s good for gamers. Back when hardware acceleration came out, there was Direct3D and GL and a couple of other custom APIs that people used to do hardware acceleration. And while that caused some minor pains for gamers, at the end of the day it was good, right? Today, all games are graphically accelerated, all games look a ton better. I think this is going to be a similar type of phase that people have to work through, and at the end of the day somebody’s going to arrive at something that looks like a standard the way that Direct3D has become on the graphics side. But as long as people are moving this ball forward, I think it’s good for everybody, and it’s probably too early in the race to declare a winner.


Do you think next-generation consoles will be able to handle HDR technology?

Yeah. I think if folks want to see what’s going to happen on next-generation consoles, they should be paying very close attention to what’s going on with the PC right now. ATI is going to be the part in some of the consoles, and NVIDIA is going to be the part in some of the other consoles. And they’re deploying all the stuff they want to bring to those consoles on the PC right now to test out what works and what resonates with consumers. Right now is a real interesting time in the PC space because it’s somewhat of a predictor of what’s going to make it into those boxes in the years to come.

http://www.g4tv.com/attackoftheshow/featur...g_Lombardi.html
 
We thanketh thee.


Seriously, that's freaking awesome.
 
"made available as a free update to all Half-Life 2 owners probably within the next four to eight weeks."

Wo-ah!
 
hmmm i didn't see the live broadcast of the "Lost Coast" level, is there a way to see it again?
 
supoib smash

i presume a 3GHz+ CPU, a gig of RAM, and X800/6800 cards are what they are targeting for peeps who want all the bells and whistles and a good res
 
Funny how they keep hammering on about that it's real-time and dynamic and not static. While exposures and the effects to reveal them are indeed real-time, the levels still use lightmaps, which are as static as you can get them. Not that it's a bad thing, I like them more than the alternative of Doom 3's hard edged lighting.
 
PvtRyan said:
Funny how they keep hammering on about that it's real-time and dynamic and not static. While exposures and the effects to reveal them are indeed real-time, the levels still use lightmaps, which are as static as you can get them. Not that it's a bad thing, I like them more than the alternative of Doom 3's hard edged lighting.
They are claiming HDR as the first step in achieving proper lighting, not the final trophy. I just don't think current hardware has the muscle to calculate real-time lighting.
 
real time photon mapping is the next step, :O and it will be glorious when achieved
 
clarky003 said:
real time photon mapping is the next step, :O and it will be glorious when achieved

Too resource-consuming to even think about. There are going to be ways of faking that that make a lot more sense. It's like rendering a shitload more polys instead of using normal maps: a waste of system resources.
 
PvtRyan said:
Funny how they keep hammering on about that it's real-time and dynamic and not static. While exposures and the effects to reveal them are indeed real-time, the levels still use lightmaps, which are as static as you can get them. Not that it's a bad thing, I like them more than the alternative of Doom 3's hard edged lighting.

The 'dynamic' in high dynamic range lighting doesn't refer to dynamic in the way you think of it, i.e. that the lights can move freely. It refers to the fact that the range of colours, or the palette, is not fixed.

Remember: despite the name "high dynamic range lighting", there is literally NOTHING being done to the lighting itself, or any properties added to the lighting. It simply provides a more realistic way of VIEWING the final scene.
 
Oh, and real-time photon mapping is NOT the next step. I personally feel that the next step in the advancement of lighting and shadowing is going to be upping the ante beyond what Doom 3 has. Specifically, higher quality lighting (a true Phong model, not a hack) and the addition of soft shadows.
 
Yuusharo (at steampowered) said:
Also confirmed, Lombardi admits that the Alyx expansion pack for HL2 is rumor, not reality. Sorry guys, but it looks like it's up to the mod community to make such a game possible.

I knew it!
 
Valve need to implement depth buffer shadows; that's what Unreal Engine 3 is using, and that's what Splinter Cell uses. But I think they're pretty expensive.
 
Gah, I bloody knew it would be released when I'm back at uni and steam doesn't work :p

Ah well, sounds cool all the same!
 
The_Monkey said:
I knew it!

I don't call 'hahahah thats just a rumor' an official confirmation; it's more like he just didn't want to talk about it.
 
Cypher19 said:
The 'dynamic' in high dynamic range lighting doesn't refer to dynamic in the way you think of it, i.e. that the lights can move freely. It refers to the fact that the range of colours, or the palette, is not fixed.

Remember: despite the name "high dynamic range lighting", there is literally NOTHING being done to the lighting itself, or any properties added to the lighting. It simply provides a more realistic way of VIEWING the final scene.

I know, that's why I said it. But on G4 they talk about it as if the world lighting becomes real-time, especially when they start comparing it with Doom 3 and stating that every developer does something different with the same end results, while HDR doesn't have anything to do with what Doom 3 does. I'm afraid it might confuzzle people, thinking that HDR brings them the same level of dynamic lighting as Doom 3.
 
PvtRyan said:
I know, that's why I said it. But on G4 they talk about it as if the world lighting becomes real-time, especially when they start comparing it with Doom 3 and stating that every developer does something different with the same end results, while HDR doesn't have anything to do with what Doom 3 does. I'm afraid it might confuzzle people, thinking that HDR brings them the same level of dynamic lighting as Doom 3.

Think of it as quoting for emphasis. It cannot be said enough times, because there are still oodles of people out there who are misinformed on even the basics of it.

Valve need to implement depth buffer shadows; that's what Unreal Engine 3 is using, and that's what Splinter Cell uses. But I think they're pretty expensive.

Oh hell yes they ARE. I made a demo implementing shadow maps (aka depth buffer shadows) that is currently fairly unoptimized; at a decent quality (1024x1024 resolution cubemap), for a simple scene (<2000 polys) and one light, I'm getting a framerate of only 120 fps on my X800Pro. Mind you, I'm still working on them, but there's a reason why shadow maps won't be practical as an all-purpose lighting solution for another couple of years ;)
 
i want some screenshots or something of this new technolothingy
 
I think my chances of getting this are slim to none. :( I need an upgrade.
 
Low Dynamic Range (normal) assigns RGB values to a pixel from 0.0 (black) to 1.0 (white).

HDR has an unbounded range of values, with 0.0 being black and 1.0 being white. Numbers bigger than 1.0 are "whiter than white."


Now the real advantage here doesn't come until you combine HDR with some other effect such as blooming, or, as seems to be the case in this last video, dynamic exposure. Note that in the first part the windows look brighter and the rest of the world darker in HDR due to exposure adjustments. When the camera pans past the pillar, the exposure goes way up due to the low amount of brightness in the scene, and the pillar appears brighter in HDR and (nearly) pitch black in LDR.
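To put some (made-up) numbers on that, here's a tiny Python sketch of why carrying values above 1.0 matters once exposure is applied; it's only an illustration, not how Source actually does it:

```python
# Illustrative only: why values above 1.0 matter once exposure is applied.
def display(value, exposure):
    """Scale by exposure, then clamp to the 0.0-1.0 range a monitor can show."""
    return min(max(value * exposure, 0.0), 1.0)

window = 8.0    # sunlit window, far "whiter than white"
pillar = 0.05   # pillar in shadow

# LDR: everything was clamped to 1.0 *before* exposure, so the window and a
# plain white wall (1.0) become indistinguishable, and the pillar goes black.
ldr_window = display(min(window, 1.0), exposure=0.25)   # 0.25, same as a wall
ldr_pillar = display(min(pillar, 1.0), exposure=0.25)   # 0.0125, nearly black

# HDR: the real values survive, so a low exposure still leaves the window
# blown out, while a high exposure reveals the pillar, as in the video.
hdr_window = display(window, exposure=0.25)   # 2.0 clamps to 1.0, still blinding
hdr_pillar = display(pillar, exposure=10.0)   # 0.5, clearly visible

print(ldr_window, ldr_pillar, hdr_window, hdr_pillar)
```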
 
But the end results are still output as a value from 0 to 255. That's what confuses me. So the end brightness isn't increased, it's just a question of the contrast I think.
 
subtlesnake said:
But the end results are still output as a value from 0 to 255. That's what confuses me. So the end brightness isn't increased, it's just a question of the contrast I think.

Well, the image DOES have to be clamped down to a value between 0.0 and 1.0, but HDR is convenient if you have to store intermediate values between full-screen shader passes. For example, let's say you had to do (very simple) tone mapping that divided the intermediate surface value by a hundred, and had a bright spot onscreen that had a value of 100. If you used LDR where the values are already between 0.0 and 1.0, then the spot will have a value of 1.0, and then be reduced to a value of 0.01 after the tone mapping. If you used HDR though, the spot will instead have the expected value of 1.0.

note: this example is almost borrowed directly from the HDRCubeMap example in the DX9SDK. Here're some screen caps demonstrating this:
http://msdn.microsoft.com/archive/en-us/directx9_m_Summer_04/directx/art/dx_sample_WithoutHDR.jpg
http://msdn.microsoft.com/archive/en-us/directx9_m_Summer_04/directx/art/dx_sample_WithHDR.jpg
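If it helps, here's that divide-by-100 example written out as a few lines of Python (same numbers as above, everything else simplified):

```python
# Same numbers as the example above; the pipeline itself is heavily simplified.
def tone_map(value):
    return value / 100.0

bright_spot = 100.0

# LDR intermediate surface: already clamped to 1.0 before tone mapping,
# so the bright spot ends up at 0.01 - essentially black.
ldr_result = tone_map(min(bright_spot, 1.0))    # 0.01

# HDR intermediate surface: the value 100.0 survives between passes,
# so tone mapping brings it to the expected 1.0 (full white).
hdr_result = tone_map(bright_spot)              # 1.0

print(ldr_result, hdr_result)
```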
 
But the end results are still output as a value from 0 to 255. That's what confuses me. So the end brightness isn't increased, it's just a question of the contrast I think.

well, HDR adds glows to pixels that are brighter than 255. Sure, the brightest you can see is white, but in LDR, a white paint stripe would have the same brightness as the brightest light. HDR dynamically adds glows to things if they are brighter than 255, so if you look at a bright light, the whiteness would dynamically bleed into your field of vision, making it appear brighter (kind of like the fake HDR with the sun in hl2)
 
theotherguy said:
well, HDR adds glows to pixels that are brighter than 255. Sure, the brightest you can see is white, but in LDR, a white paint stripe would have the same brightness as the brightest light. HDR dynamically adds glows to things if they are brighter than 255, so if you look at a bright light, the whiteness would dynamically bleed into your field of vision, making it appear brighter (kind of like the fake HDR with the sun in hl2)


actually, that's blooming.
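For the curious, here's a rough Python sketch of what bloom itself does: bright-pass everything above a threshold, blur it, and add it back. Purely illustrative, not how any particular engine implements it:

```python
# Rough bloom sketch on a 1-D row of HDR luminances (illustrative only).
THRESHOLD = 1.0   # anything "brighter than white" contributes to the glow

def bloom(row):
    bright = [max(v - THRESHOLD, 0.0) for v in row]            # bright-pass
    blurred = [0.25 * bright[max(i - 1, 0)] +
               0.50 * bright[i] +
               0.25 * bright[min(i + 1, len(bright) - 1)]
               for i in range(len(bright))]                     # cheap 3-tap blur
    return [v + b for v, b in zip(row, blurred)]                # composite

# A very bright light (8.0) bleeds into its dark neighbours after blooming.
print(bloom([0.1, 0.1, 8.0, 0.1, 0.1]))
# -> [0.1, 1.85, 11.5, 1.85, 0.1] (still HDR; tone mapping comes afterwards)
```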
 
Cypher19 said:
Well, the image DOES have to be clamped down to a value between 0.0 and 1.0, but HDR is convenient if you have to store intermediate values between full-screen shader passes. For example, let's say you had to do (very simple) tone mapping that divided the intermediate surface value by a hundred, and had a bright spot onscreen that had a value of 100. If you used LDR where the values are already between 0.0 and 1.0, then the spot will have a value of 1.0, and then be reduced to a value of 0.01 after the tone mapping. If you used HDR though, the spot will instead have the expected value of 1.0.

note: this example is almost borrowed directly from the HDRCubeMap example in the DX9SDK. Here're some screen caps demonstrating this:
http://msdn.microsoft.com/archive/en-us/directx9_m_Summer_04/directx/art/dx_sample_WithoutHDR.jpg
http://msdn.microsoft.com/archive/en-us/directx9_m_Summer_04/directx/art/dx_sample_WithHDR.jpg
Ok, thanks.

One more thing: what's the exact difference between Valve's current shadow mapping implementation and proper depth buffer shadowing?
 
I'm still hoping they're going to put parallax mapping in :)
 
jondyfun said:
I'm still hoping they're going to put parallax mapping in :)

Yeah, they did have plans for that when developing HL2, because I've come across game VMTs that point to a $parallaxmap.

I wonder, btw, if the cubemaps are HDRI in the Lost Coast? In that "HDR mod" (more like just nice bloom) in the train station map, the sky windows are really bright, yet the reflection on the floor is dull and bland. That's of course because they haven't rebuilt the cubemaps, but would that suffice with real HDR, or should the cubemaps also be able to change exposure? And how would that work when HDR is brought to regular HL2 with the old LDR cubemaps?

Another thing (while I'm at it :)): would the current shadow mapping technique be applicable to the entire game? By that I mean that lightmaps are dumped and everything casts a shadow in real time (if they make shadows castable on entities). Would that work with the current system? A bit like Splinter Cell 3, which uses that, I believe.
 
PvtRyan said:
I wonder, btw, if the cubemaps are HDRI in the Lost Coast? In that "HDR mod" (more like just nice bloom) in the train station map, the sky windows are really bright, yet the reflection on the floor is dull and bland. That's of course because they haven't rebuilt the cubemaps, but would that suffice with real HDR, or should the cubemaps also be able to change exposure? And how would that work when HDR is brought to regular HL2 with the old LDR cubemaps?

Another thing (while I'm at it :)): would the current shadow mapping technique be applicable to the entire game? By that I mean that lightmaps are dumped and everything casts a shadow in real time (if they make shadows castable on entities). Would that work with the current system? A bit like Splinter Cell 3, which uses that, I believe.

HDR cubemaps would be nice - even nicer though would be render-to-texture reflections on floors with the associated surface modulation used on current cubemaps, as they'd be more accurate.

Splinter Cell uses pre-compiled radiosity lighting as well; without it the game's environments just wouldn't look realistic, a la Doom 3.
 
subtlesnake said:
Ok, thanks.

One more thing, what's the exact difference between Valves current shadow mapping implementation and proper depth buffer shadowing?

I'll try and describe the two systems (Shadow mapping I can go on about, and I'm going to give HL2's shadow mapping my best try).

For regular shadow mapping, first you render the scene from the light's point of view into a surface (a place in the video card's memory. It can later be treated as a texture), storing the distance of each pixel from the light through a shader. Then, from the camera's point of view, you use another shader, that calculates, based on the location of the pixel, where in that surface it needs to look up the corresponding depth value. If the distance from the light is greater than the depth value stored in the surface, then the pixel is in shadow. Else, it's not. That last part I imagine is hard to visualize, so here're a couple pics I had to take when making my shadow mapping demo:

(note: these are both from the light's POV)
This shows what each pixel's distance from the light is: http://img.photobucket.com/albums/v627/Cypher19/ShadowProblem2.jpg
This shows what the corresponding texture look up is: http://img.photobucket.com/albums/v627/Cypher19/ShadowProblem1.jpg

Anyways, here goes my explanation of Half-Life 2's method of shadow mapping, based on various observations and bugs. First, from the light's direction, they render each character into a monochrome surface. Then a decal is created (derived from the bounding box of the character being extruded along the light's direction; HL2's method of generating decals is all based around having some geometric shape, like a rectangular prism, doing intersection checks with the environment to find appropriate vertex locations) that uses that surface as its texture, and is rendered. Some nice things about that method are that it's very cheap performance-wise, and in 99% of all cases it produces some nice-looking shadows. The other 1% is stuff like the stacked-shadow and long-shadow bugs.
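In case the two-pass depth comparison is easier to follow as code, here's a toy Python version of the idea (not the demo code from above, and with all the projection math stripped out; the scene and numbers are made up):

```python
# Toy shadow-mapping sketch: one light looking straight down a 1-D world.
BIAS = 0.01  # small offset to avoid surfaces shadowing themselves ("acne")

def build_shadow_map(occluders, width):
    """Pass 1: from the light's point of view, keep the nearest depth per texel."""
    shadow_map = [float("inf")] * width
    for x, depth in occluders:
        shadow_map[x] = min(shadow_map[x], depth)
    return shadow_map

def lit(shadow_map, x, depth):
    """Pass 2: a point is lit only if nothing in the map is closer to the light."""
    return depth <= shadow_map[x] + BIAS

# A floating box at depth 2.0 covering columns 3-5; the ground lies at depth 10.0.
occluders = [(3, 2.0), (4, 2.0), (5, 2.0)]
shadow_map = build_shadow_map(occluders, width=8)

ground = [(x, 10.0) for x in range(8)]
print([lit(shadow_map, x, d) for x, d in ground])
# -> columns 3-5 are shadowed (False), everything else is lit (True)
```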
 
And the fact that the light is constant based on a global light, not a per room light...
 
jondyfun said:
HDR cubemaps would be nice - even nicer though would be render-to-texture reflections on floors with the associated surface modulation used on current cubemaps, as they'd be more accurate.

Splinter Cell uses pre-compiled radiosity lighting as well; without it the game's enviroments just wouldn't look realistic, a la doom3.
I'm sure the render-to-texture reflections are possible, as the water is just a special case of this shader with a modulating normal map. But notice how expensive rendering "reflect all" water is. Imagine what the cost would be for EVERY surface, even in small and enclosed areas. I think there are some areas where cubemaps work well. However, I think eventually everything will be rendered to texture. It's the wave of the future, man.
 
Well, shadow mapping can be applied to a directional light (you know it as 'global'), and in fact there are some very advanced techniques made just for directional lights that still apply the same basic idea; I just happen (and prefer) to be working on omnidirectional lights.
 
I hope Radeon 9800PRO/XT could handle HDR/The Lost Coast :]
 
Cypher19 said:
For regular shadow mapping, first you render the scene from the light's point of view into a surface (a place in the video card's memory. It can later be treated as a texture), storing the distance of each pixel from the light through a shader. Then, from the camera's point of view, you use another shader, that calculates, based on the location of the pixel, where in that surface it needs to look up the corresponding depth value. If the distance from the light is greater than the depth value stored in the surface, then the pixel is in shadow. Else, it's not. That last part I imagine is hard to visualize, so here're a couple pics I had to take when making my shadow mapping demo:

(note: these are both from the light's POV)
This shows what each pixel's distance from the light is: http://img.photobucket.com/albums/v627/Cypher19/ShadowProblem2.jpg
This shows what the corresponding texture look up is: http://img.photobucket.com/albums/v627/Cypher19/ShadowProblem1.jpg

Anyways, here goes my explanation of Half-Life 2's method of shadow mapping, based on various observations and bugs. First, from the light's direction, they render each character into a monochrome surface. Then a decal is created (derived from the bounding box of the character being extruded along the light's direction; HL2's method of generating decals is all based around having some geometric shape, like a rectangular prism, doing intersection checks with the environment to find appropriate vertex locations) that uses that surface as its texture, and is rendered. Some nice things about that method are that it's very cheap performance-wise, and in 99% of all cases it produces some nice-looking shadows. The other 1% is stuff like the stacked-shadow and long-shadow bugs.
Thanks a lot. I'm still not 100% on everything, but I understand things a bit better now.
 