Rendering Overview

The basic rendering process for a single frame looks like this:

  1. Clear the screen and initialise the camera matrix.
  2. Fill the z-buffer with all visible objects.
  3. For every light:
    1. Clear the alpha buffer.
    2. Load the alpha buffer with the light's intensity.
    3. Mask away shadowed regions with shadow geometry.
    4. Render the geometry at full detail (colours, textures, etc.), modulated by the light's intensity.

The essential point from the above is that a rendering pass is performed for every visible light, during which the alpha buffer is used to accumulate that light's intensity. Once the final intensity values for the light have been written to the alpha buffer, we render all the geometry modulated by those values.

Simple Light Attenuation

First we'll set up the foundation for the lighting by converting the above pseudocode into actual code, leaving out the shadow generation for now.

public void render(Scene scene, GLDrawable canvas)
{
  GL gl = canvas.getGL();

  gl.glDepthMask(true);
  gl.glClearDepth(1f);
  gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
  gl.glClear(GL.GL_COLOR_BUFFER_BIT |
             GL.GL_DEPTH_BUFFER_BIT |
             GL.GL_STENCIL_BUFFER_BIT);
    
  gl.glMatrixMode(GL.GL_PROJECTION);
  gl.glLoadIdentity();
  gl.glMatrixMode(GL.GL_MODELVIEW);
  gl.glLoadIdentity();
  gl.glMatrixMode(GL.GL_TEXTURE);
  gl.glLoadIdentity();
    
  gl.glDisable(GL.GL_CULL_FACE);
    
    
  findVisibleLights(scene);
    
  Camera activeCamera = scene.getActiveCamera();  
  activeCamera.preRender(canvas);
  {
    // First we need to fill the z-buffer
    findVisibleObjects(scene, null);
      
    fillZBuffer(canvas);
      
      
    // For every light
    for (int lightIndex=0; lightIndex<visibleLights.size(); lightIndex++)
    {
      Light currentLight = (Light)visibleLights.get(lightIndex);
        
      // Clear current alpha
      clearFramebufferAlpha(scene, currentLight, canvas);
        
      // Load new alpha
      writeFramebufferAlpha(currentLight, canvas);
        
      // Mask off shadow regions
      mergeShadowHulls(scene, currentLight, canvas);
        
      // Draw geometry pass
      drawGeometryPass(currentLight, canvas);
    }
      
    // Emissive / self-illumination pass
    // ..
      
    // Wireframe editor handles
    drawEditControls(canvas);

  }
  activeCamera.postRender(canvas);
}

Note that the code here is written in Java, using the Jogl bindings to OpenGL. C++ programmers simply have to remember that primitives such as int, float and boolean are always passed by value, and objects are always passed by reference. OpenGL commands and enumerations are scoped to a GL object, which leads to slightly longer notation than the straight C style.

First we reset the GL state ready for the new frame, collect all the lights that we will need to render this frame, and retrieve the currently active camera from the scene. Camera.preRender() and .postRender() are used to set the modelview and projection matrices to those needed for the current view position.
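
As an illustration, a minimal 2D Camera.preRender()/postRender() pair might look like the following. The Camera class isn't shown in this article, so the push/pop structure and the x, y, halfWidth and halfHeight fields in this sketch are assumptions:

public void preRender(GLDrawable canvas)
{
  GL gl = canvas.getGL();

  // Centre an orthographic view on the camera position
  // (x, y, halfWidth and halfHeight are hypothetical camera fields)
  gl.glMatrixMode(GL.GL_PROJECTION);
  gl.glPushMatrix();
  gl.glOrtho(x - halfWidth, x + halfWidth,
             y - halfHeight, y + halfHeight, -1.0, 1.0);

  gl.glMatrixMode(GL.GL_MODELVIEW);
  gl.glPushMatrix();
}

public void postRender(GLDrawable canvas)
{
  GL gl = canvas.getGL();

  // Restore the matrices saved in preRender()
  gl.glMatrixMode(GL.GL_PROJECTION);
  gl.glPopMatrix();
  gl.glMatrixMode(GL.GL_MODELVIEW);
  gl.glPopMatrix();
}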

Once this initialisation is complete, we need to fill the z-buffer for the whole scene. Although not discussed here, this is the perfect place to take advantage of your favourite type of spatial tree: a quad-tree or AABB-tree would make a good choice for inclusion within the scene, and would be used for all testing of objects against the view frustum.

To fill the depth buffer we enable z-buffer reading and writing, but disable colour writing to leave the colour buffer untouched, as shown below. This creates a complete depth buffer for later stages to test against, and stops them blending pixels hidden from view. It is worth noting that by enabling colour writing, an ambient lighting pass can be added here to do both jobs at the same time. From this point onwards we can disable depth writing, as the depth buffer no longer needs to be updated.
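
As a sketch, the state handling inside fillZBuffer() could look like this; drawSceneGeometry() is a hypothetical stand-in for however you submit the visible objects collected by findVisibleObjects():

public void fillZBuffer(GLDrawable canvas)
{
  GL gl = canvas.getGL();

  // Write depth only; leave the colour buffer untouched
  gl.glEnable(GL.GL_DEPTH_TEST);
  gl.glDepthFunc(GL.GL_LEQUAL);
  gl.glDepthMask(true);
  gl.glColorMask(false, false, false, false);

  drawSceneGeometry(canvas);  // hypothetical helper

  // From here on the depth buffer is only read, never written
  gl.glDepthMask(false);
  gl.glColorMask(true, true, true, true);
}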

Now we perform a rendering pass for every light.

First the alpha buffer is cleared in preparation for its use. This is simply a full screen quad drawn with blending, depth testing and RGB colour writes disabled, resetting the alpha channel of the framebuffer to 0. Since we don't want to disturb the camera matrices that have been set up, we position this quad using the current camera position to determine its coordinates.
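
A minimal sketch of such a clear is below; the getViewBounds() accessor returning the camera's visible rectangle is an assumption about code not shown in the article:

public void clearFramebufferAlpha(Scene scene, Light light, GLDrawable canvas)
{
  GL gl = canvas.getGL();

  // Write to the alpha channel only, with no blending or depth testing
  gl.glColorMask(false, false, false, true);
  gl.glDisable(GL.GL_BLEND);
  gl.glDisable(GL.GL_DEPTH_TEST);

  // Full screen quad positioned from the current camera (hypothetical accessor)
  java.awt.geom.Rectangle2D view = scene.getActiveCamera().getViewBounds();

  gl.glColor4f(0f, 0f, 0f, 0f);
  gl.glBegin(GL.GL_QUADS);
  {
    gl.glVertex2f((float)view.getMinX(), (float)view.getMinY());
    gl.glVertex2f((float)view.getMaxX(), (float)view.getMinY());
    gl.glVertex2f((float)view.getMaxX(), (float)view.getMaxY());
    gl.glVertex2f((float)view.getMinX(), (float)view.getMaxY());
  }
  gl.glEnd();

  gl.glColorMask(true, true, true, true);
}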

Next we need to load the light's intensity into the alpha buffer. This needs no blending, but depth testing is enabled this time so that lights can be restricted to illuminating only the objects beneath them. Again, colour writing is left disabled, since we are not ready to render any visible geometry yet. The following function creates the geometry for a single light:

public void renderLightAlpha(float intensity, GLDrawable canvas)
{
  assert (intensity > 0f && intensity <= 1f);

  GL gl = canvas.getGL();
  int numSubdivisions = 32;

  gl.glBegin(GL.GL_TRIANGLE_FAN);
  {
    // Fan centre carries the full light intensity
    // (center, radius and depth are fields of the light)
    gl.glColor4f(0f, 0f, 0f, intensity);
    gl.glVertex3f(center.x, center.y, depth);

    // Set edge colour for the rest of the shape
    gl.glColor4f(0f, 0f, 0f, 0f);

    for (float angle=0; angle<=Math.PI*2; angle+=((Math.PI*2)/numSubdivisions))
    {
      gl.glVertex3f( radius*(float)Math.cos(angle) + center.x,
                     radius*(float)Math.sin(angle) + center.y, depth);
    }

    // Explicitly close the fan, as floating point error may
    // stop the loop one vertex short of a full circle
    gl.glVertex3f(center.x+radius, center.y, depth);
  }
  gl.glEnd();
}
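
For context, the state surrounding that call inside writeFramebufferAlpha() would match the description above, along these lines (the getIntensity() accessor is an assumption):

  // Alpha writes only, depth tested so the light respects the z-buffer
  gl.glColorMask(false, false, false, true);
  gl.glDisable(GL.GL_BLEND);
  gl.glEnable(GL.GL_DEPTH_TEST);
  gl.glDepthFunc(GL.GL_LEQUAL);

  light.renderLightAlpha(light.getIntensity(), canvas);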

We create a triangle fan rooted at the centre position of the light, then loop around in a circle, adding vertices as we go. The alpha value of the centre point is our light intensity, fading linearly to zero at the edges of the circle. This creates the smooth light fall-off seen in the first image. If other methods of light attenuation are needed, they can be generated here. An interesting alternative would be to use an alpha texture instead of vertex colours: a 1D texture could happily represent a non-linear set of light intensities, as sketched below. Other unusual effects could be achieved by animating the texture coordinates over a 2D texture, such as flickering candlelight or a pulsing light source.
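
As a sketch of that 1D texture alternative (the quadratic falloff curve and texture setup here are illustrative assumptions, and depending on your Jogl version glTexImage1D may expect a java.nio.Buffer rather than an array):

// Build a small alpha ramp with quadratic, non-linear falloff
// (stored as unsigned bytes)
byte[] ramp = new byte[32];
for (int i = 0; i < ramp.length; i++)
{
  float t = i / (float)(ramp.length - 1);
  ramp[i] = (byte)(255 * (1f - t) * (1f - t));
}

gl.glEnable(GL.GL_TEXTURE_1D);
gl.glBindTexture(GL.GL_TEXTURE_1D, lightTexId);  // previously generated texture id
gl.glTexImage1D(GL.GL_TEXTURE_1D, 0, GL.GL_ALPHA, ramp.length, 0,
                GL.GL_ALPHA, GL.GL_UNSIGNED_BYTE, ramp);
gl.glTexParameteri(GL.GL_TEXTURE_1D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_1D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);

// In the fan, texture coordinates then replace the vertex alphas:
// 0 at the centre of the light, 1 at the rim
gl.glTexCoord1f(0f);
gl.glVertex3f(center.x, center.y, depth);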

Now that we have our light intensity values in the alpha buffer, we'll skip the generation of shadow hulls for the moment and move on to getting our level geometry up on the screen.

The geometry pass is where we really start to see things coming together, using the results we have carefully composed in the alpha channel of the framebuffer. First we make sure depth testing is enabled (using less-than-or-equal, as before), then enable blending and set up our blend function correctly:

  gl.glEnable(GL.GL_BLEND);
  gl.glBlendFunc(GL.GL_DST_ALPHA, GL.GL_ONE);

Simple, yes? What we're doing here is multiplying our incoming fragments (from the geometry we're about to draw) by the alpha values already sitting in the framebuffer, so areas with an alpha of 1 are drawn at full intensity, while areas with an alpha of 0 contribute nothing. The result is then added to the current framebuffer colour, multiplied by one; this addition to the existing colour is how we accumulate the results of previous passes. With the blend function set up, we simply render our geometry as normal, using whatever vertex colours and textures take our fancy, as sketched below.
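
Put together, a drawGeometryPass() consistent with the above might look like this; again drawSceneGeometry() is a hypothetical stand-in for your own scene submission:

public void drawGeometryPass(Light light, GLDrawable canvas)
{
  GL gl = canvas.getGL();

  // Full colour writes again, depth tested against the prepared z-buffer
  gl.glColorMask(true, true, true, true);
  gl.glEnable(GL.GL_DEPTH_TEST);
  gl.glDepthFunc(GL.GL_LEQUAL);

  // Modulate incoming fragments by the alpha already in the framebuffer,
  // and add the result onto the previous lighting passes
  gl.glEnable(GL.GL_BLEND);
  gl.glBlendFunc(GL.GL_DST_ALPHA, GL.GL_ONE);

  drawSceneGeometry(canvas);  // hypothetical helper

  gl.glDisable(GL.GL_BLEND);
}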

If you take another look at our render() function near the top, you'll see we've almost finished composing our frame. Once we've looped over all the lights we're practically done, but we insert a couple of extra stages: first an emissive or self-illumination pass, which is discussed near the end of the article; then a simple wireframe pass which draws object outlines, as seen in the first image.


Image 2: Per-pixel lighting with intensities accumulated in the alpha buffer.

Coloured Lighting

What was once seen as 'the next big thing' in the Quake 2 and Unreal era, coloured lighting is pretty much standard by now, and a powerful tool for level designers to add atmosphere to a scene. Since we've already got our light intensity ready and waiting for our geometry in the alpha buffer, all we need to do is modulate the geometry colour by the current light colour while drawing. That's a whole lot of multiplication if we want to do it ourselves, but on TnL hardware we can get it practically for free with a simple trick: we enable lighting while drawing our geometry, yet define no normals, for we have no need of them. Instead we enable a single light and set its ambient colour to the colour of our current light. The graphics card calculates the effect of the light colour on our geometry for us, and we barely need to lift a finger. Note that because we accumulate light intensities over multiple lights in the framebuffer, we get accurate over-brightening effects where lights overlap, and multiple coloured lights will merge and produce white illumination of our objects.
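
In Jogl the trick comes down to a few lines wrapped around the geometry pass. This is only a sketch: the getColour() accessor returning an {r, g, b, a} array is an assumption, and depending on your Jogl version glLightfv may take an extra offset argument or a FloatBuffer. Note too that the global ambient term (GL_LIGHT_MODEL_AMBIENT) defaults to a dim grey, which you may want to zero out.

float[] lightColour = currentLight.getColour();  // hypothetical accessor: {r, g, b, 1f}

gl.glEnable(GL.GL_LIGHTING);
gl.glEnable(GL.GL_LIGHT0);
gl.glLightfv(GL.GL_LIGHT0, GL.GL_AMBIENT, lightColour);

// Route the vertex colours into the ambient material term, so
// geometry is tinted by (vertex colour * light colour) with no normals
gl.glEnable(GL.GL_COLOR_MATERIAL);
gl.glColorMaterial(GL.GL_FRONT_AND_BACK, GL.GL_AMBIENT);

drawGeometryPass(currentLight, canvas);

gl.glDisable(GL.GL_LIGHTING);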




