Depth Cueing
by Matthias Holitzer

Date: April 04, 1999
Copyright © 1999 by Matthias Holitzer
IndyProject - Fate Of Atlantis II

What is depth cueing?

Depth cueing can be used for many rendering effects such as fog, haze and mist, but it can also be used to render certain kinds of water, or for the effect that objects get darker with increasing distance from the viewer. In this document I'm gonna explain the latter; the other ones can be rendered pretty easily using inverse logic (atmospheric effects exploit the fact that in fog, objects farther away get more and more "milky", i.e. brighter with increasing distance). The water effect mentioned before refers to rendering murky or cloudy water (or, more generally, any liquid with visibility reduced to a few meters). Generally, depth cueing means that objects get darker, brighter or whatever with increasing distance.

Getting started

When I started to implement depth cueing in my engine I searched the web for docs on it but couldn't find anything reasonable. One reason is surely that this effect is usually done in hardware, so the programmer needn't care much about the algorithm. Unfortunately it's only that easy under Windoze (and Watcom) when you have 3D accelerator support. But if you're a DOS programmer like me (well, learnin' Windoze, gotta go with the time) and wanna implement depth cueing in your engine, there's only one way: do it in software! When I started, all I had was a doc from SGI describing how the effect works in hardware and what can be done with it, together with the information that you specify a value at the far clipping plane which tells the renderer how dark objects should get there. Normally you pass it a percent value (0% leaves the intensity unmodified, while 100% means the intensity should be 0 at the far clipping plane). And that's exactly what we'll do!
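
Just to pin that convention down in code, here's a minimal sketch of how those parameters could be kept together. The struct and all its names are my own invention, not from the SGI doc or any particular API:

/* depth cue parameters as described above -- my own naming */
typedef struct {
    float d1;    /* intensity loss at the near clipping plane, in percent (usually 0) */
    float d2;    /* intensity loss at the far clipping plane, in percent (100 = black) */
    float znear; /* near clipping plane */
    float zfar;  /* far clipping plane  */
} depth_cue;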

The algorithm

I'll present here the algorithm I derived myself. I don't know of any other algorithms (since I didn't find webpages on this topic); there could be an algorithm twice as fast as mine (which I doubt), but I think mine works pretty well. The first thing is to be able to compute a percent value at any given z. Sounds difficult, but it's only a matter of the linear interpolation most of you should know (if not, read any doc on gouraud shading). With a bit of thinking I came to the following equation:

                (d2 - d1)
percent = z * ----------------
              (zfar - znear)


where d2    = intensity loss at the far clipping plane, in percent
      d1    = intensity loss at the near clipping plane (usually 0)
      zfar  = far clipping plane
      znear = near clipping plane
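
Strictly speaking this form assumes d1 = 0 and z measured from the near plane; in the general case the linear interpolation would be percent = d1 + (z - znear) * (d2 - d1) / (zfar - znear). But since d1 is usually 0 and znear is small compared to zfar, the simple form does the job. As a little C sketch (the function name and float types are my choice, not part of any official algorithm):

/* percent of intensity to remove at depth z, per the equation above */
float cue_percent(float z, float d1, float d2, float znear, float zfar)
{
    return z * (d2 - d1) / (zfar - znear);
}

/* example: d1 = 0, d2 = 100, znear = 0, zfar = 50000
   => cue_percent(25000.0f, 0, 100, 0, 50000) = 50, i.e. halfway to black */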

OK. So now we're able to compute a percent value at any given z. But what does that buy us, and how can we use it, you ask? It's really easy: basically all we have to do now is multiply it by the current intensity from shading and divide by 100 (since we're working with percent) to arrive at the value to be subtracted from the intensity. The equation looks like this:

                i * percent
final_i = i - -------------
                   100

where i       = current intensity
      final_i = final intensity

So it ain't that hard, you see. By the way, as mentioned before, when fogging you would rather add than subtract this value to get the final intensity. There's another thing: I'm currently using a depth cue plane that's taken into the computations, since my far clipping plane is located at z = 50000 and that doesn't allow much control. So instead of subtracting znear from zfar, you subtract it from the depth cue plane (hint: if the intensity loss at the depth cue plane is >= 100%, move the far clipping plane to the depth cue plane, since objects behind it wouldn't be seen anyway due to their intensity).
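
Putting both equations and the depth cue plane together, a helper in C could look like this. It's just a sketch under the assumptions above (8-bit intensities, zcue used in place of zfar); the names are mine:

/* apply depth cueing to a shaded intensity i at depth z;
   zcue is the depth cue plane (pass zfar if you don't use one) */
unsigned char cue_intensity(unsigned char i, float z,
                            float d1, float d2,
                            float znear, float zcue)
{
    float percent = z * (d2 - d1) / (zcue - znear);
    float final_i = i - (i * percent / 100.0f);
    if (final_i < 0.0f)
        final_i = 0.0f;   /* clamp pixels beyond the depth cue plane */
    return (unsigned char)final_i;
}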

Implementation

There are two ways to implement it, both of which have their pros and cons. The first one is to implement it directly in your polygon filling routine (if you're still using flat shading and an extremely fast hline routine from some lib for filling your polys, I highly recommend you implement gouraud shading first. It ain't that hard and looks really MUCH better). The pseudo code would look something like this:

for(x=x1 -> x2)
    // z and i are interpolated linearly along the span
    if(z <= zbuffer[x][y])
        zbuffer[x][y] = z
        percent = z * ((d2 - d1) / (zfar - znear))
        final_i = i - (i * percent / 100)
        putpixel(x, y, final_i)
    end
end

You should have noticed something: the zbuffer test at the start of the loop body. Well, I'm of the opinion that when you must compute a z value for every pixel anyway (which can be done easily by, guess what, linear interpolation), you could just as well use a zbuffer for hidden surface removal; however, that's a very important design decision you must make yourself. Also, the form in which I've written the putpixel assumes that you a) don't use texture mapping and b) have a palette with a smooth color gradient, most likely a grayscale palette. When texture mapping there are two possibilities: either you're one of those lucky guys (workin' on it) who got their routines to work fast enough in hi- or even better truecolor, or you're using a light lookup table (unshaded textures shouldn't appear in any engine). Anyway, the second method requires the use of a zbuffer. Basically you render all your polygons just like you always did, and after all rendering is done you loop thru the zbuffer and apply the depth cueing in an extra step. Again here's some pseudo code:

for(y=0 -> screen_height)
    for(x=0 -> screen_width)
        if(framebuffer[x][y] == 0)
            continue    // pixel is already black
        end
        i = framebuffer[x][y]    // shading was already applied during rendering
        percent = zbuffer[x][y] * ((d2 - d1) / (zfar - znear))
        final_i = i - (i * percent / 100)
        putpixel(x, y, final_i)
    end
end

And again there are some things you should have noticed. The first is the check whether the frame buffer at this position is 0: if the screen's already black there, there's of course no sense in performing the computations (I hope I needn't explain why...). The second is that the intensity i is now read back from the frame buffer, since the shading was already applied during rendering. Now to the pros and cons of both implementations. Generally the first one works best in scenes with little overdraw where less than half of the screen is updated every frame. The second one should logically be used whenever there's heavy overdraw or the entire screen is updated every frame anyway.

Conclusion: I think the second algorithm is the better choice for games (I'll explain why later), while the first one works perfectly for demo effects with a few objects.
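
To make the second method concrete, here's a small runnable C version of the post-processing pass. It assumes an 8-bit framebuffer indexing a smooth grayscale palette and one float z per pixel; the buffer layout, names and the hoisted scale factor are my assumptions, not a prescription:

#define SCREEN_W 320
#define SCREEN_H 200

void depth_cue_pass(unsigned char *framebuffer, const float *zbuffer,
                    float d1, float d2, float znear, float zfar)
{
    float scale = (d2 - d1) / (zfar - znear); /* constant, hoisted out of
                                                 the loop (see Optimizations) */
    long n = (long)SCREEN_W * SCREEN_H;
    long p;

    for (p = 0; p < n; p++) {
        unsigned char i = framebuffer[p];
        float percent, final_i;
        if (i == 0)
            continue;                   /* pixel is already black, skip it */
        percent = zbuffer[p] * scale;
        final_i = i - (i * percent / 100.0f);
        if (final_i < 0.0f)
            final_i = 0.0f;             /* clamp beyond the cue plane */
        framebuffer[p] = (unsigned char)final_i;
    }
}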

Optimizations

So you've implemented everything, it looks great, but your engine is doing 5 FPS even on the fastest machine? That's because the algorithms presented above are totally unoptimized. Let's go.

The first thing you could do is precompute (d2 - d1) and (zfar - znear), or in English: precompute the difference of the intensities at both clipping planes (in most cases simply d2) and the difference between the near and far clipping planes, since they're constant and it's nonsense to recompute them for every pixel.

The most important optimization is to use dirty rectangles when zbuffering (process only the regions of the zbuffer which were updated during the current frame). There are some good docs on dirty rectangles so I won't go into detail here, but it's a very important optimization that should be implemented first. With this optimization the second algorithm becomes the better one for games: dirty rectangles make sure that only the parts of the zbuffer which were altered during the current frame are being processed, and by the way the post-process approach computes the cueing only once per final pixel, no matter how big the overdraw (which can slow things down a lot if it occurs too often).

And finally, I highly recommend that you use fixed point math. Well, I've read in many docs that floating point performance should be nearly the same as with integers on Pentiums and 486s, so fixed point supposedly ain't necessary anymore, but believe me: use it! Especially in time critical functions like this one and polygon filling you should still use fixed point (little example: my polyfilling routine with texture mapping and gouraud shading was doing 600 FPS with floats and 1600 - 1700 FPS with fixed point on my P2 300!!!).

By the way: maybe depth cueing as it's described here (objects get darker with increasing distance) is only halfway physically correct, and I'm sure there are better ways to render fog (volumetric fog or whatever), but nevertheless it should look quite realistic and should suffice for realtime engines.
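
Here's a sketch of the precomputation and fixed point ideas combined, in 16.16 format (the exact formulation and shifts are my own, so treat them as an assumption): fold (d2 - d1) / (zfar - znear) into one constant per frame, turn the percent into a remaining-brightness fraction, and the inner loop boils down to a multiply and a shift:

#include <stdint.h>

/* precompute once per frame: (d2 - d1) / (zfar - znear) in 16.16 */
int32_t cue_scale_fix(float d1, float d2, float znear, float zfar)
{
    return (int32_t)((d2 - d1) / (zfar - znear) * 65536.0f);
}

/* per pixel: i = shaded intensity, z = integer depth (z <= zfar) */
uint8_t cue_pixel_fix(uint8_t i, int32_t z, int32_t scale)
{
    int32_t f = 65536L - (z * scale) / 100; /* remaining fraction, 16.16 */
    if (f < 0)
        f = 0;                              /* beyond the cue plane */
    return (uint8_t)(((int32_t)i * f) >> 16);
}

For example, with d2 = 100, zfar = 50000 the scale comes out to 131, and cue_pixel_fix(200, 25000, 131) yields 100, i.e. half brightness, matching the float version.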

OK. I hope this doc helped you get started and didn't confuse you too much. If you have any questions, found some bugs, wanna gimme money for it (it's free, but if you want to I won't say no...) or have ideas for optimization, feel free to mail me.

PS: if you're an Indy fan visit indyproject.indy3d.net
