Cel-shading is the "art" of rendering your objects to look like cartoons. Various cartoon effects can be achieved with very few modifications to the original source code. You can have anime effects, as seen in Dragon Ball Z or Gundam, or "classic" cartoon looks such as Looney Tunes. Cel-shading is a very powerful form of rendering, and its results can completely change the "feel" of a game. For instance, the cartoon-like graphics of Jet Set Radio (Jet Grind Radio in North America) on the Dreamcast add to the atmosphere and help create one hell of a funky game. However, don't expect this form of rendering to make your game amazing overnight - just play Looney Tunes: Space Race and you'll see that sometimes it doesn't really help at all.

The Basics

As this is an advanced topic, before going any further make sure you have adequate knowledge in the following areas:

  • 1D texture mapping.
  • Texture co-ordinates.
  • Software lighting.
  • Vector math.

If you don't, you will still be able to understand the article, but since I'm not providing any source code with it, you may get stuck when the time comes to code a cel-shading demo. At the end of each section I will give a brief run-down of how to create the desired effects.

Basic Rendering

We're going to start from the very beginning here. No lights, no outlines, just flat cartoon models. For this, you only need to store two chunks of data - the position and color of each vertex. For basic rendering, we draw after disabling ALL lighting and blending. That's simple enough.

So what's happening here? If we enabled lighting, our objects would look normal, and we wouldn't get the flat cartoon effect we are trying to achieve. We also disable blending to make sure that the vertices don't "bleed" into each other.


  1. Disable lighting.
  2. Disable blending.
  3. Draw the colored vertices.
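The steps above can be sketched in plain Python. The `Vertex` type and the two callbacks are stand-ins of my own invention, not any particular API - in a real renderer they would be your color/vertex calls or a vertex buffer fill.

```python
from dataclasses import dataclass

# A minimal sketch of the data needed for flat cartoon rendering: each
# vertex stores only a position and a color. The callbacks stand in for
# whatever API you use; lighting and blending are assumed already off.

@dataclass
class Vertex:
    position: tuple  # (x, y, z)
    color: tuple     # (r, g, b)

def draw_flat(vertices, set_color, emit_vertex):
    """Push each vertex's color and position straight through,
    with no lighting or blending applied."""
    for v in vertices:
        set_color(v.color)
        emit_vertex(v.position)
```

Calling `draw_flat` with list-appending callbacks records the draw calls in order, which is handy for checking the pass before wiring it to a real API.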

Basic Lighting (Directional)

This is where your knowledge of the topics listed in Section 1 comes into use. Each vertex needs to store a little extra data - the vertex normal and a lighting value (a single floating-point variable). This normal and lighting value are then used to render the object with basic lighting.

Lighting Maps

Just so I don't confuse you, I don't mean light maps such as those used to simulate lighting on surfaces, à la Quake 1 and Quake 2 (look at the wall-lights to see what I mean). Nor do I mean light/dark maps that highlight/darken specific areas of the map. What we have here is a completely new form of light map. And guess what? It's a 1D texture.

Go find some anime (Cartoon Network is always a good resource) and look at the lighting on the characters. Notice how it isn't smooth like in real life? The lighting is split into distinct blocks or bands. This process is called quantization, and the result is quantized colors (thanks to Sulphur Dragon for that one).

This is a 1x16 pixel greyscale texture map (very zoomed in). The black boxes are there to show you the individual pixels. We use greyscale values because they will be combined with the color of the vertex at a later stage. You may have noticed that there are only 3 shades in the map, similar to the intensities used in anime. The reason we make the texture map 16 pixels wide is so we can modify the values with ease to create different effects, different numbers of colors, etc. You could simply have black and white in there if you wanted, but it is not recommended. You should never put 100% black in the texture, because it makes highlights and outlines look horrible when we add them.
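Building such a map in code is trivial. A minimal sketch in plain Python - the three band values (80/160/255) and the split points are my own assumptions, chosen only to mimic the dark/mid/bright look described above; tweak them to taste:

```python
def make_sharp_lighting_map(width=16):
    """Return `width` greyscale texels (0-255), quantized into three
    bands. Note there is deliberately no 100% black in the map."""
    texels = []
    for i in range(width):
        t = i / (width - 1)      # position along the 1D map, 0..1
        if t < 0.3:
            texels.append(80)    # darkest band (never fully black)
        elif t < 0.6:
            texels.append(160)   # mid band
        else:
            texels.append(255)   # fully lit band
    return texels

texture = make_sharp_lighting_map()
```

Upload the resulting texels as a 1D texture in whatever API you're using.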

Once you have made your desired texture, load it into whatever API you're using (DX, OGL, software) and leave it alone for now. We'll come back to it in a moment.

Calculating the Lighting

Now your software lighting knowledge is important. However, don't worry if you have not researched software lighting yet, as I will explain it in basic English. Directional lighting is easy. Just make sure you normalize the lighting direction vector!

All we have to do is calculate the dot product between the lighting vector and the normal of the vertex. Why? Well, here's a bit of theory.

For two normalized vectors, the dot product gives the cosine of the angle between them - a value between -1 and 1. If you wanted the angle itself, you would take the inverse cosine of that value. However, instead of a costly inverse-cosine call, think about the value of a texture coordinate. Texture coordinates are stored as a value between 0 and 1 ([0 <= x <= 1]). It turns out that the dot product of the normal and the lighting direction vector (clamped to zero if negative) gives us our texture coordinate directly!
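As a sketch, assuming both vectors are already normalized and that `light_dir` points from the surface toward the light:

```python
# Turn the lighting calculation into a 1D texture coordinate. Negative
# dot products (vertex facing away from the light) are clamped to zero
# so they sample the darkest texel instead of wrapping around.

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def shade_coord(normal, light_dir):
    return max(0.0, dot(normal, light_dir))
```

A vertex facing straight at the light gets coordinate 1.0 (brightest texel); one facing away gets 0.0 (darkest).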

Rendering the Object

Now you have the texture coordinate for each vertex, and it's time to draw the object. Again, disable lighting and blending, but enable texturing (remember it's a 1D texture). Draw the object the same as before, but this time specify the texture coordinate before the position of the vertex. Voila! One lit, cel-shaded object.


For all those people who couldn't care less for theory, here's what you need to do.

  1. Create a Sharp Lighting map.
  2. Calculate and store the dot product between the normal and the lighting direction.
  3. Disable lighting and blending.
  4. Enable texturing.
  5. Set the current texture to the light map.
  6. Draw the polygons, specifying only the texture coordinate, color, and vertex positions.

Positional Lights

This method merely requires a little modification of the method described above.

Positional lights offer more flexibility than directional lights for the simple fact that they can be moved around the scene, dynamically lighting all polygons realistically. Although it looks good, the math required is much longer than for directional lighting. It's not more complicated, just longer :-).

Calculating the Sharp Lighting Coordinate

With directional lighting, we simply needed the dot product of the light direction and the vertex normal. Now, because a positional light has no single direction (it emits light in all directions), each vertex has its own "ray of light" shining towards it. That's not too bad, until you realise that we're doing this in software.

First of all, we create a vector from the position of the vertex to the position of the light. We then normalize it so it has a unit length (magnitude) of 1. This gives us the direction of the light relative to that particular vertex. Now, we take the dot product of this vector with the normal of the vertex, and repeat the calculation for every vertex in the scene. Argh! That is gonna slow the frame rate down a lot, so let's look at a quick method of reducing the number of lit vertices.
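A sketch of that per-vertex work, with positions and normals as plain tuples (the helper names are my own):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def normalize(v):
    m = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/m, v[1]/m, v[2]/m)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def positional_coord(vertex_pos, vertex_normal, light_pos):
    """Texture coordinate for one vertex lit by a positional light:
    build the vertex-to-light direction, then clamp the dot product."""
    to_light = normalize(sub(light_pos, vertex_pos))
    return max(0.0, dot(vertex_normal, to_light))
```

Run this once per vertex per light - which is exactly why the radius check below the next heading is worth having.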

Radius Checking with Positional Lighting

To reduce the number of lit vertices, we first give each light its own radius. Before calculating the lighting values, we check whether the vertex is actually within the light's radius (a simple point-in-sphere test). If so, we apply the lighting to it. If not, we don't. This is basically point-in-sphere collision detection, on which there are plenty of articles and tutorials out there (not to mention the message board here at GameDev.net).
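The test itself is one comparison. A minimal sketch - comparing squared distances avoids a square root per vertex:

```python
def in_light_radius(vertex_pos, light_pos, radius):
    """Point-in-sphere test: is the vertex close enough to the light
    to bother lighting it? Squared distances avoid a sqrt."""
    dx = vertex_pos[0] - light_pos[0]
    dy = vertex_pos[1] - light_pos[1]
    dz = vertex_pos[2] - light_pos[2]
    return dx*dx + dy*dy + dz*dz <= radius*radius
```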


As with directional lighting, we draw the object but only specify the color, texture coordinate, and position.


  1. Create the Sharp Lighting map.
  2. If using a light radius, do a point-in-sphere check to see if the point is within range.
  3. Get the vector from the vertex to the light position and normalize it.
  4. Get the dot product of the new vector and the vertex normal.
  5. Repeat 2-4 for every vertex.
  6. Render as usual.

Outlines and Highlighting

Outlines and highlights are the little black lines representing the pencil strokes of the rendered cartoon. It may seem daunting, but it's actually easier than you think.

Calculating Where to Highlight

I'm going to refer to outlining and highlighting simply as "highlighting", since they both use exactly the same technique (and are both calculated at the same time). The rule is simple: draw a line along any edge that has one front facing polygon and one back facing polygon. This might sound daft, but look at your keyboard for a second. Note how you can't see the back of the keys? This is because they are facing away, so we would draw a line along that edge to show that there is an edge there.
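To make the rule concrete, here is what it looks like done by hand in software. This is purely illustrative - as noted next, in practice you let the API find these edges for you. Faces are (vertex-index, normal) pairs and `view_dir` points from the camera into the scene; all names are my own:

```python
# Silhouette rule: an edge gets an outline when exactly two polygons
# share it and one faces the viewer while the other faces away.

def is_front_facing(normal, view_dir):
    d = normal[0]*view_dir[0] + normal[1]*view_dir[1] + normal[2]*view_dir[2]
    return d < 0.0  # normal points back toward the camera

def silhouette_edges(faces, view_dir):
    shared = {}  # edge (sorted index pair) -> facing flags of its polygons
    for verts, normal in faces:
        front = is_front_facing(normal, view_dir)
        for i in range(len(verts)):
            edge = tuple(sorted((verts[i], verts[(i + 1) % len(verts)])))
            shared.setdefault(edge, []).append(front)
    return sorted(edge for edge, flags in shared.items()
                  if len(flags) == 2 and flags[0] != flags[1])
```

For a quad split into two triangles where one triangle faces the camera and the other faces away, only their shared edge is reported.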

Notice how I didn't mention anything about polygon culling in the introduction? This is because we get the API to do it for us.

Rendering Highlights

First, we set the line width - 2 or 3 pixels wide normally gives a nice effect. You can also turn on anti-aliasing for this. We change the culling mode to cull front-facing polygons (i.e. remove all front-facing polygons). Next, we switch to wireframe mode so we only draw lines. Now, we draw the polygons as usual, except we don't need to specify the color or texture coordinate. What this does is draw a wireframe mesh of all back-facing polygons; however, thanks to the magical power of the depth buffer, only the lines that sit in front of the front-facing polygons are drawn (note that this method wouldn't work if we set the line width to 1).


Okay, here we go:

  1. Draw the object as normal.
  2. Switch face orientation.
  3. Set the color to 100% black.
  4. Change to wireframe mode.
  5. Draw the mesh again, but only specifying the vertex positions.
  6. Restore the original modes.

We're moving along nicely. Let's look at some advanced topics.

Next : Advanced Topics