Shadow Z-Buffers

Ok, so we’re ready to get down to the more complex algorithms. This method is not the most complex, but it is probably the most efficient of those discussed here.

Basically, you create two Z-Buffers: one from the viewpoint of the camera, and one from the viewpoint of the light. When it's time to render, you go through several steps for each visible point. Map the point from camera space into light space, project the light-space coordinates into 2D, and use x' and y' to index into the light-viewpoint Z-Buffer. If the light-space Z coordinate is greater than the stored Z-Buffer value, the point is shadowed. See figures 4.1 and 4.2 to clear up any confusion.

Figure 4.1: Simple scene as viewed from the camera's point of view.

Figure 4.2: The same scene as viewed from the light's point of view (notice that there are no shadows visible).

The light-viewpoint Z-Buffer is created before you even begin to render the scene from the camera viewpoint. This pass works just as if you were rendering from the camera viewpoint; the only difference is the position from which you view the scene. After creating this Z-Buffer, you go through three separate steps for each point in the camera Z-Buffer.
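In outline, each frame is rendered in two passes, something like the sketch below (the render_z_buffer and render_scene helpers here are hypothetical, not part of the original listing):

/* pass 1: fill the light-viewpoint Z-Buffer */
render_z_buffer(light_viewpoint, light_z_buffer);

/* pass 2: render normally from the camera, running the per-point
   shadow test of listing 4.1 as each point is drawn */
render_scene(camera_viewpoint);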

The first step is probably the hardest of the three, and even it is not very hard: you translate and rotate the point until it is in the correct position relative to the light. This sounds complex, but in this simplified scheme you just translate and rotate everything by the difference between the light and the camera. The pseudo-code in listing 4.1 should clear up any confusion.

Listing 4.1

/* step 1: translate and rotate the point from camera space into light space */
point.x += (light.x - camera.x);
point.y += (light.y - camera.y);
point.z += (light.z - camera.z);
point.rot_x += (light.rot_x - camera.rot_x);
point.rot_y += (light.rot_y - camera.rot_y);
point.rot_z += (light.rot_z - camera.rot_z);

/* step 2: project the point from 3D into 2D as usual, then rescale
   from screen resolution to light Z-Buffer resolution (multiply
   before dividing so integer math doesn't truncate to zero) */
point.screen_x = point.screen_x * LIGHT_X / SCREEN_X;
point.screen_y = point.screen_y * LIGHT_Y / SCREEN_Y;

/* step 3: compare against the light-viewpoint Z-Buffer */
if (point.new_z > light_z_buffer[point.screen_x][point.screen_y])
{
  return(shadowed);
}

return(not_shadowed);

The second step is simple: you just project from 3D down into 2D, as you normally would. Then you rescale from the resolution of the screen to the resolution of the light Z-Buffer (probably 256x256, or some other square viewport with power-of-two dimensions).

The last thing you need to do is compare the mapped Z value to the value in the light Z-Buffer at the projected coordinates. That's it: you have just produced accurate shadows that, even if a bit blocky, look great and take very little effort.
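Putting the three steps together, a minimal C sketch of the per-point test might look like the following. The Point3 type, light_z_buffer array, FOCAL_LEN constant, and the translation-only transform are all assumptions for illustration; a full implementation would also apply the rotations from listing 4.1.

#define LIGHT_RES 256          /* light Z-Buffer is 256x256 (assumed) */
#define FOCAL_LEN 256.0f       /* projection distance, an example value */

typedef struct { float x, y, z; } Point3;

extern float light_z_buffer[LIGHT_RES][LIGHT_RES];

/* Step 1 (simplified): move a camera-space point into light space
   using only the positional difference between light and camera. */
static Point3 camera_to_light(Point3 p, Point3 cam, Point3 light)
{
    Point3 q;
    q.x = p.x + (light.x - cam.x);
    q.y = p.y + (light.y - cam.y);
    q.z = p.z + (light.z - cam.z);
    return q;
}

/* Returns 1 if the point is in shadow, 0 otherwise. */
int is_shadowed(Point3 p, Point3 cam, Point3 light)
{
    Point3 lp = camera_to_light(p, cam, light);
    int sx, sy;

    if (lp.z <= 0.0f)
        return 0;              /* behind the light's view plane: lit */

    /* Step 2: perspective-project into the light's 2D view,
       centred on the middle of the light Z-Buffer. */
    sx = (int)(lp.x * FOCAL_LEN / lp.z) + LIGHT_RES / 2;
    sy = (int)(lp.y * FOCAL_LEN / lp.z) + LIGHT_RES / 2;

    if (sx < 0 || sx >= LIGHT_RES || sy < 0 || sy >= LIGHT_RES)
        return 0;              /* outside the light's view: lit */

    /* Step 3: if something nearer to the light was recorded here,
       this point is occluded and therefore shadowed. */
    return lp.z > light_z_buffer[sy][sx];
}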

Although this is the most efficient algorithm discussed here, it still has several downsides. The first is that shadows can appear blocky, depending on the resolution of the light Z-Buffer. Another obvious downside is that the scene takes longer to render, because you effectively draw it twice. The worst downside, however, is that the light has to be directional. This means you can't have point light sources unless you render the scene six times from the light viewpoint (once for each direction: up, down, left, right, forward, and backward). As you can imagine, that would make your engine very slow. So even this algorithm has some limitations, though it could no doubt be optimized to make point light sources practical.
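For completeness, the six-direction extension for point lights would look roughly like this sketch. It reuses the Point3 type and LIGHT_RES constant from the example above, and render_z_buffer is again a hypothetical helper, not something defined in this article.

enum { DIR_UP, DIR_DOWN, DIR_LEFT, DIR_RIGHT, DIR_FORWARD, DIR_BACK };

float light_z_cube[6][LIGHT_RES][LIGHT_RES];

extern void render_z_buffer(Point3 light_pos, int dir,
                            float zbuf[LIGHT_RES][LIGHT_RES]);

void build_point_light_buffers(Point3 light_pos)
{
    int dir;

    /* one Z-Buffer per axis direction; this is why point lights make
       the scheme roughly six times as expensive */
    for (dir = 0; dir < 6; dir++)
        render_z_buffer(light_pos, dir, light_z_cube[dir]);
}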




Next: Shadow Volumes