There might be some mistakes here; I'm writing this at 1:30am.
I’m writing this technical FAQ to explain what all these leaks mean from a broad perspective. All the information presented here was obtained from official sources, mainly IRC interviews with the Valve team long before any leak. I’m hoping to demonstrate that the leak does not mean as much as most think by addressing each aspect of the engine and what the leak means for it.
Everything presented here comes from online sources or books. Yes, this is a long post; you don’t have to read it. There are four main aspects to consider, but I'm going to cover AI tomorrow.
What does the core of the Source engine do and how does it work?
The job of a game engine is mostly putting graphics on the screen. With a 3D game this means pulling the necessary data out of a map and drawing only what actually needs to be drawn; if you were to render the whole map 60+ times a second it would be a criminal waste of resources.
Valve’s approach to this problem with the Source engine is exactly the same as with the first Half-Life, which in turn is the same as the Quake [1] source code from which Half-Life was developed. It’s a very simple system but extremely efficient and offers a lot of bang for buck; in fact it’s about as efficient as you can get at the base level. It’s called the PVS, or Potentially Visible Set.
When you compile a map from the map editor it is first divided up into small regions; if two rooms are adjoined, for instance, the division is best placed at the doorway. The editor keeps breaking the map up into these small sets, then goes through all of them and works out which other sets can be seen from each one. That’s why compiling a large map can take a long time.
The record of what is visible from where (the PVS) is saved with the map, so when Half-Life runs, what can be seen from any point is already known. Doom III uses a “portals” system which is very similar but calculates the visibility in real time, hence its many small areas.
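To give a flavour of how cheap the lookup is at run time, here is a minimal sketch of consulting a precomputed PVS. The structure and names are my own guesses for illustration, not anything from the leaked code:

// Minimal sketch of consulting a precomputed PVS at draw time.
// One visibility bit per (from, to) pair of map regions ("clusters").
#include <cstddef>
#include <cstdint>
#include <vector>

struct Cluster {
    std::vector<int> surfaces;   // indices of polygons belonging to this region
};

struct Map {
    std::vector<Cluster> clusters;
    std::vector<uint8_t> pvsBits;   // packed bits, row per source cluster

    bool IsPotentiallyVisible(int from, int to) const {
        std::size_t bit = static_cast<std::size_t>(from) * clusters.size() + to;
        return (pvsBits[bit >> 3] >> (bit & 7)) & 1;
    }
};

// Each frame: find the cluster the camera is in, then only consider
// clusters whose bit is set -- everything else is skipped outright.
void DrawVisible(const Map& map, int cameraCluster) {
    for (int c = 0; c < static_cast<int>(map.clusters.size()); ++c) {
        if (!map.IsPotentiallyVisible(cameraCluster, c))
            continue;
        // ... frustum-cull and submit map.clusters[c].surfaces ...
    }
}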
Each set contains what’s called a BSP (Binary Space Partition) tree for collision detection; the original Doom relied on this algorithm for the better part of its actual rendering. All polygons (for simplicity’s sake) in a set are recorded in a tree structure: each polygon has two branches, and the next polygon is stored on the left or right branch depending on whether it lies behind or in front, then the same is done for that polygon until the tree is complete. The tree can then be queried for a collision just by asking “in front or behind?” at each branch. It’s possibly the oldest and most well-known 3D data structure.
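Querying such a tree really is just a handful of lines. Something along these lines (again, my own illustrative sketch, not Valve’s code; internal nodes are assumed to have both children):

// Rough sketch of a BSP query: walk front/behind at each splitting plane
// until a leaf is reached, then read the leaf's property.
#include <memory>

struct Plane { float nx, ny, nz, d; };          // plane: n . p = d

struct BspNode {
    Plane split;
    std::unique_ptr<BspNode> front, back;        // both null => leaf
    bool solid = false;                          // leaf property
};

// Returns true if the point ends up in a solid leaf (i.e. inside geometry).
bool PointIsSolid(const BspNode* node, float x, float y, float z) {
    while (node->front && node->back) {
        float dist = node->split.nx * x + node->split.ny * y
                   + node->split.nz * z - node->split.d;
        node = (dist >= 0.0f) ? node->front.get() : node->back.get();
    }
    return node->solid;
}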
I don’t know, but I suspect this could have been changed to allow better collision detection with the Havok physics, possibly along the lines of AABB (axis-aligned bounding box) trees, but these are also very well known.
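For reference, the test at the heart of an AABB tree is about as simple as collision code gets; a rough sketch:

// Two axis-aligned boxes overlap only if their extents overlap on every axis.
struct Aabb { float min[3], max[3]; };

bool Overlaps(const Aabb& a, const Aabb& b) {
    for (int axis = 0; axis < 3; ++axis) {
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;
    }
    return true;
}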
Lighting is also taken care of at this point: it is rendered into light maps using a ray-tracing or radiosity algorithm, much as 3D Studio, POV-Ray or any other 3D package would, and the light maps are then stretched over the texture maps to produce static lighting during gameplay. Dynamic shadows are achieved with perspective shadow maps, as in Splinter Cell; they let freely moving objects and people cast shadows in addition to the static ones already pre-rendered and saved with the map data. What you do is go to the light source, render the object you want to shadow into a temporary texture, convert it to a grey-scale colour based on distance from the light, and use projective texturing to place that texture where the shadow would fall given the light’s position. There is plenty of information available about all these techniques.
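The light map part in particular is trivial at run time: the pre-computed light value for a surface texel just modulates the base texture colour. Something like this (the names and the 0-255 colour format are assumptions for illustration):

// Sketch of applying a baked light map texel to a base texture texel.
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

Rgb ApplyLightmap(Rgb albedo, Rgb light) {
    auto mul = [](uint8_t a, uint8_t l) {
        return static_cast<uint8_t>(a * l / 255);   // scale colour by baked light
    };
    return { mul(albedo.r, light.r), mul(albedo.g, light.g), mul(albedo.b, light.b) };
}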
What about the physics compromise?
This one is more worrying, as the main work on the physics was outsourced to and delivered by Havok, though doubtless many changes have since been made to adapt it to Source.
The most basic type of physics is kinematics, which is tracing/simulating the path of a particle according to Newton’s laws of motion; the components are position, velocity and acceleration, and integral and differential calculus was developed for the study of these. However, the movement of particles alone does not reflect the real world very well: a particle is just like a ball, but most things aren’t the same from every angle. If an arbitrary shape needs to be simulated you also need to keep track of the object’s orientation, its angular velocity and its angular acceleration; these have technical names I won’t bore you with.
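In code, basic particle kinematics is only a few lines of integration; a minimal sketch:

// Step a particle forward a small dt at a time:
// acceleration from force, velocity from acceleration, position from velocity.
struct Particle {
    float pos[3], vel[3];
    float mass;
};

void Step(Particle& p, const float force[3], float dt) {
    for (int i = 0; i < 3; ++i) {
        float accel = force[i] / p.mass;   // Newton's second law: a = F / m
        p.vel[i] += accel * dt;            // integrate acceleration -> velocity
        p.pos[i] += p.vel[i] * dt;         // integrate velocity -> position
    }
}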
Once the movement of any shape can be simulated you have rigid body mechanics, and many rigid bodies can be attached together with joints to form a ragdoll. Yes, this one does get very complex, but there are open source physics engines that offer an awful lot; you’re much better off using one of those than trying to steal the Havok code.
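Extending the same integrator with angular terms gives you the core of a rigid body. Here is a rough 2D sketch just to keep it short; a real engine like the open source ODE does this in 3D with joint constraints on top, which is where the complexity lives:

// A 2D rigid body: position and velocity plus orientation and angular velocity,
// driven by force and torque (the angular analogue of F = ma uses the moment of inertia).
struct RigidBody2D {
    float pos[2], vel[2];
    float angle, angularVel;
    float mass, inertia;
};

void Step(RigidBody2D& b, const float force[2], float torque, float dt) {
    for (int i = 0; i < 2; ++i) {
        b.vel[i] += (force[i] / b.mass) * dt;
        b.pos[i] += b.vel[i] * dt;
    }
    b.angularVel += (torque / b.inertia) * dt;
    b.angle      += b.angularVel * dt;
}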
What is shader technology?
(I’m repeating myself a little here)
A shader is a small program that runs on the processor of the graphics card (the GPU), taking the load off the main CPU for graphics processing. There are currently two types: vertex shaders and pixel (or fragment) shaders. The first can manipulate 3D geometry, perhaps for animation; the second adjusts the colour of individual pixels (fragments) being drawn into the frame buffer.
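If it helps, here is that division of labour written out as plain C++ functions rather than real shader code; the actual programs run per-vertex and per-pixel on the GPU, but the idea is the same:

// One function runs once per vertex, the other once per pixel.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// "Vertex shader": moves geometry around (here a trivial wobble for animation).
Vec3 VertexStage(Vec3 v, float time) {
    v.y += 0.1f * (v.x + time);
    return v;
}

// "Pixel (fragment) shader": decides the colour of each pixel being drawn
// (here simple diffuse lighting from the dot product of normal and light direction).
Vec4 PixelStage(Vec3 normal, Vec3 lightDir) {
    float ndotl = normal.x * lightDir.x + normal.y * lightDir.y + normal.z * lightDir.z;
    if (ndotl < 0.0f) ndotl = 0.0f;
    return { ndotl, ndotl, ndotl, 1.0f };
}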
The shaders I’ve seen so far, mostly from the DX9 demo video, look great, but code-wise they can easily be learned from a book on the subject. They will probably be included with the SDK, since you will be able to write your own custom shaders for mods.
High Dynamic Range is a feature touted for the Source engine, claimed to be the first time it’s been used in a commercial engine. I’m not too sure yet about Valve’s implementation; it may be pseudo-HDR. Ordinary computer images are low dynamic range: they only capture brightness on a 0-255 scale, but in the real world 255 isn’t enough to represent something as bright as the sun, which might be 10,000. So the data is stored at this higher range instead, and photographic exposure is simulated to bring everything back into the range of the monitor while keeping the correct relative scale. This way, when you look at a bright object the foreground becomes dark, like when you walk from a dark room into bright sunlight and your eyes adjust from being blinded back to normal.
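The exposure step itself is tiny; here is a sketch of one common curve (the exact curve Valve uses is not something I know):

// Map a high-dynamic-range brightness (possibly thousands) into 0-255.
// A film-like response lets bright values saturate smoothly instead of clipping.
#include <cmath>
#include <cstdint>

uint8_t Expose(float hdrValue, float exposure) {
    float ldr = 1.0f - std::exp(-hdrValue * exposure);
    return static_cast<uint8_t>(ldr * 255.0f + 0.5f);
}

// Walking from a dark room into sunlight amounts to lowering 'exposure'
// over a second or two, so a value of 10,000 no longer blows out to pure white.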
Is there a bias toward ATI in the Source engine?
Not really. The main difference between ATI and nVidia is shader performance, and since the leaked beta does not contain the final shader implementation, the difference won’t show up much there. In the benchmark the shaders are fully functional and will reflect the difference, and as a result the game will look much better.
Shader functionality is currently specified at PS 2.0, as used by DirectX 9; the current range of ATI cards implements it fully, while the GeForce FX series does not, which is where the problem lies. ATI also uses a fixed precision of 24 bits per colour channel, whereas nVidia’s is flexible at a cost in speed. The difference means you often have to program specifically for nVidia to get fully optimised speed; only Carmack has really gone down this route, others stick to the standard.
Valve have spent over five times the allocated budget on improving performance on nVidia cards.
In conclusion: yes, a lot of intellectual property was stolen, but anybody capable of making use of it could easily find the same information themselves, would already need to have done so to understand it, and could therefore write their own.