You probably have no idea what cel-shading is (unless Dave put a decent description with the article), which is hopefully why you're reading this article. If you do know what cel-shading is, then skip this bit and get straight onto the theory. If you know how to program it then go to the Textured Cel-Shading section, because I know that you don't know how to do that. But, if for some reason you do, then for god's sake stop stealing documents from my hard drive!
Okay, one paragraph in and you're still none the wiser. Cel-shading is the "art" of rendering objects to look like cartoons. Various cartoon effects can be achieved with very few modifications to the original source code. You can have anime effects, as seen in Dragonball Z or Gundam, or "classic" cartoons such as Looney Tunes. Cel-shading is a very powerful form of rendering, and its results can completely change the "feel" of a game. Look at Jet Set/Grind Radio on the Dreamcast. The cartoon graphics add to the atmosphere and help create one hell of a funky game. However, don't expect this form of rendering to make your game amazing overnight. Go and play Looney Tunes Space Race and you'll see that it didn't really help at all (it's still crap, just with good graphics). Onto the basics...
First off, I have to say that this is an advanced topic that has been dumbed down so the people of GameDev.net can understand it (hehe). So, before you go any further, you will need adequate knowledge in the following areas:
If you don't, then you will still be able to understand the article, but will be stuck when it comes to coding it. Oh yes, that's another point. I'm not providing any source code because I don't know DirectX, and OpenGL has gone screwy under Windows 2K (any suggestions as to how to sort it out are welcome). However, I won't leave you totally in the dark. At the end of each section I will give a very brief run-down of what you need to program to create the desired effects. Anyway...
Okay, we're going to start from the very beginning here. No lights, no outlines, just flat cartoon models. For this, you only need to store a few pieces of data - the position of each vertex, and the color of each vertex. Now, disable ALL lighting and blending and draw. It really is that simple.
"What is going on?" I hear you cry. Well, it's simple. We disable the lighting because otherwise the objects would look normal, not like the flat cartoon effect we want. We also disable blending to make sure that the vertices don't "bleed" into each other by accident. Simple.
Basic Lighting (Directional)
Awww... and it was so easy up until now. This is where your knowledge of the topics listed in section 1 comes into use. First of all, you need to store a little extra data for the vertices - their normals and their lighting value (a single floating point variable). Okay, I think I'm going to devote an entire subsection to the next area - creating lighting maps.
Just so I don't confuse you, I don't mean lighting maps that are used to simulate lighting on objects like in Quake 1 and 2 (look at the wall-lights to see what I mean). Nor do I mean light/dark maps that highlight/darken specific areas of the maps. No, these are a completely new form of light map. And guess what? It's a 1D texture.
Go find some anime (Cartoon Network is always a good resource) and look at the lighting on the characters. Notice how it isn't smooth like in real life? The lighting is split into distinctive blocks or bands. This process is called quantizing, and the result is quantized colors (thanks to Sulphur Dragon for that one).
That is a 1x16 pixel greyscale texture map (very zoomed in). The black boxes are there to show you the individual pixels. The reason we are only using greyscale values is that they will be combined with the color of the vertex at a later stage. Now, what you might notice is that there are only 3 colors in the map, and they look like the intensities used in anime movies. Well done Sherlock, you're catching on. The reason we make the texture map 16 pixels wide is so we can modify the values with ease, creating different effects, different numbers of colours, etc. You could simply have black and white in there if you wanted (but that would look crap). Besides, you should never put 100% black in there, because when we come to add the highlights and outlines, it looks horrible :).
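If it helps to see it in code, here's one way of filling in such a map in C. The band values (0.5, 0.75, 1.0) and the function name are my own inventions, not gospel - pick whatever intensities look good to you (just remember: no 100% black).

```c
/* Fill a 1x16 greyscale Sharp Lighting map with three bands of
   anime-style intensities.  Tweak the values and band widths to
   change the look of the shading. */
void build_light_map(float map[16])
{
    for (int i = 0; i < 16; i++) {
        if (i < 4)
            map[i] = 0.5f;    /* shadow band - dark, but not black */
        else if (i < 8)
            map[i] = 0.75f;   /* mid band */
        else
            map[i] = 1.0f;    /* fully lit band */
    }
}
```

Upload the result as your 1D texture (or, for the textured technique later on, keep the array around in memory).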
Once you have made your desired texture, load it into whatever API you're using (DX, OGL, software) and leave it alone for now. We'll come back to it in a moment.
Calculating the Lighting
Now your software lighting knowledge comes into play. Don't worry if you were lazy and didn't bother researching it, I will explain it in basic English (unless you speak Spanish, in which case this wouldn't be very helpful to you). Directional lighting is easy. Too easy, in fact. Just make sure you normalize the god-damn lighting direction vector!
All we have to do is calculate the dot product between the lighting vector and the normal of the vertex. Why? Well, here's a bit of theory.
The dot product of two unit vectors gives you a value between -1 and 1. This is all well and good, but how do you get the actual angle between them? Simple. The value returned is actually the cosine of the angle, so all you have to do is take the inverse cosine of the value and you will get the angle. However, we don't need to do a costly inverse cosine here. Why? Because (if you know your texture co-ordinates well) texture co-ordinates are stored as a value between 0 and 1. This means that the dot product of the normal and the lighting direction (clamped to zero if negative) actually gives us our texture co-ordinate!
Rendering the Object
Right, now you've gotten the texture co-ordinate for each vertex (I know it's a lot of dot products but... well... tough), we now have to draw the object (not much point doing all this otherwise). Again, disable lighting and blending, but enable texturing (remember it's a 1D texture). Now, draw the object the same as before, but in this case specify the texture co-ordinate before the position of the vertex (or things could look a little odd). Voilà. One lit (if only basically lit) cel-shaded object. Don't you just love me (if not, then you will by the end of this article)?
Okay, for all the people who couldn't care less for theory, here's what you've gotta do.
Woo, the list just doubled in size.
Okay, we've covered most of the theory. All this method requires is a little modification of the method described above.
Positional lights offer more flexibility than directional lights for the simple fact that they can be moved around the scene, dynamically lighting all polygons realistically. Although it looks good, the math required is much longer than for directional lighting. It's not more complicated, just longer :-).
Calculating the Sharp Lighting Co-ordinate
With directional lighting, we simply needed the dot product of the light direction and the vertex normal. Now, because a positional light has no single direction (it emits light in all directions), each vertex will have its own "ray of light" shining towards it. That's not too bad, until you realise that we're doing this in software.
First of all, we have to create a vector from the position of the vertex to the position of the light (this way round, the dot product comes out positive for surfaces facing the light). We then normalize this so it has a unit length (magnitude) of 1. This gives us the direction from that particular vertex to the light. Now, yup, you guessed it, we take the dot product of this vector with the normal of the vertex. Sounds easy? Now repeat for every vertex in the scene. Argh! That is gonna slow the frame rate down a lot, so let's look at a quick method of reducing the number of lit vertices.
Radius Checking with Positional Lighting
We give each light its own radius. Now, before calculating the lighting values, we see if the vertex is actually within the light's radius (a simple point-in-sphere test). If so, we apply the lighting to it. If not, then we don't. Okay, stop moaning, I know I didn't mention anything about point-in-sphere collision detection in the introduction, but if you can't figure it out, then... well... I don't know what you are (clueless newbie?).
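For the clueless newbies, then, here's a point-in-sphere test as a C sketch (names are mine). The trick worth copying: compare squared distance against squared radius, so there's no square root at all.

```c
typedef struct { float x, y, z; } vec3;

/* Returns 1 if the vertex lies within the light's radius, 0 otherwise.
   Squared-distance comparison avoids a sqrtf per vertex. */
int vertex_in_light(vec3 light_pos, float radius, vec3 vert_pos)
{
    float dx = vert_pos.x - light_pos.x;
    float dy = vert_pos.y - light_pos.y;
    float dz = vert_pos.z - light_pos.z;
    return (dx * dx + dy * dy + dz * dz) <= (radius * radius);
}
```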
Same as directional lighting. Just draw the object but only specify the color, texture co-ordinate and position.
Outlines and Highlighting
This is easy. This is just too easy for my liking. There are no complicated matrix scaling routines, no drawing to the stencil buffer and then drawing a black quad over the entire screen (not only is that just stupid, it does a very bad job of outlining objects, and won't highlight them). Read on...
Calculating Where to Highlight
Okay, from here on I'm going to refer to both outlining and highlighting as simply highlighting, because they both use exactly the same technique (and are both calculated at the same time). The rule is simple: draw a line along any edge that has one front facing polygon and one back facing polygon. This might sound daft, but look at your keyboard for a second. Note how you can't see the back of the keys? This is because they are facing away, so we would draw a line along that edge to show that there is an edge there. We don't have to worry about the other sides as they will be lit differently, and so will still be clearly visible.
Now, the next section is going to become API-hack heaven. Notice how I didn't mention anything about polygon culling in the introduction? This is because we get the API to do it all for us (unless you're using 100% software, in which case you're screwed until you read up some more).
First of all, we need to set the line width - 2 or 3 pixels wide normally gives a nice effect. If you're feeling extra happy you can turn on anti-aliasing for this too. Next, we change the culling mode to front facing (i.e. remove all front facing polygons). Then, we switch to wireframe mode. This is so we only end up drawing lines. Now, we simply draw the polygons as usual, except we don't need to specify the color or texture co-ordinate (they are useless in wireframe mode). What this will do is draw a wireframe mesh of all backfacing polygons; however, due to the magical power of the depth buffer, only the lines that are in front of forward facing polygons are drawn (note that this method wouldn't work if we set the line width to 1). I know it sounds stupid, but it is the simplest way of doing it, and all the lines appear in the right place, just as we (well, I) predicted!
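For the OpenGL crowd, the state changes above boil down to something like the following fixed-function GL 1.x sketch. Like everything else here it's untested (for the reasons given earlier), and `drawObject()` stands in for whatever your own mesh-drawing routine is:

```c
/* Outline pass: draw the BACK faces as thick wireframe lines on top of
   the normal cel-shaded render of the object. */
glLineWidth(3.0f);                   /* 2-3 pixels looks nice          */
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                /* throw away front faces...      */
glPolygonMode(GL_BACK, GL_LINE);     /* ...and draw back faces as lines */
glColor3f(0.0f, 0.0f, 0.0f);         /* black outlines                 */
glDepthFunc(GL_LEQUAL);              /* let lines win depth ties at edges */

drawObject();                        /* positions only - no colors or
                                        texture co-ordinates needed    */

glDepthFunc(GL_LESS);                /* restore the usual state        */
glPolygonMode(GL_BACK, GL_FILL);
glCullFace(GL_BACK);
glLineWidth(1.0f);
```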
Okay, here we go.
Hey, what do ya know! We haven't exceeded 6 list items yet. YET...
Muhahahahaha. Now it gets difficult. Why? Because we're going to cel-shade textures, something that I have never actually seen done before (I've had to work out all the theory behind this section on my own). So, what you're getting now is 100% original and 100% untested (like I said, I don't know DX and OGL doesn't work) methods that I don't guarantee will work. Ah well...
Now, there are 2 ways of doing this - multi-texturing and my way. Seeing as not everyone knows how to do multi-texturing, and not every graphics card supports it, we're going to do it my way :-). First of all, let's revisit that Sharp Lighting map.
We are actually switching the roles of the texture and vertex color now. Instead of the texture shading the color, the color is going to shade the texture.
Creating the Sharp Lighting Map
Remember that lovely little image earlier on? If not, then here it is again:
Now, beforehand we uploaded it to whatever API we are using. Well, not anymore. This time, we keep the values ourselves. Once we've loaded the texture, we have to create an array of floating point values (if you're storing the texture in byte format then just divide each pixel component by 255) and copy over the values. Now, with the object, we have to store the data a little differently. Here is a list of the data required:
The only thing we've changed is the vertex colors, which have been replaced with texture co-ordinates. Now we have our locally stored Sharp Lighting map, and our object data. Time to do some software lighting (ugh).
This part hasn't changed much since last time around. Directional and positional lighting still both work in exactly the same way (thank God for that, because I've been typing for too long now and my fingers hurt), but the only difference is that when getting the dot product of the vector and the normal, we multiply it by the width of our Sharp Lighting map (in this case 16) minus 1 (because the range is 0-15) and cast it to an integer. This integer represents an index into our light map, and will be turned into a color when rendering.
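As a one-liner in C (macro and function names are mine):

```c
#define MAP_WIDTH 16  /* width of the Sharp Lighting map */

/* Quantize a clamped dot product (0..1) into an integer index into the
   locally stored light map (0..MAP_WIDTH-1). */
int light_map_index(float d)
{
    return (int)(d * (MAP_WIDTH - 1));
}
```

This index is what you store in the vertex structure in place of the old floating point lighting value.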
Rendering Cel-Shaded Textures
Okay, this is gonna take some explaining. In DirectX and OpenGL, if you specify the color of the vertex along with the texture co-ordinate, the color of the texture will be modulated by the color of that vertex. Now, seeing as we're using greyscale values, when we specify the color of the vertex, it will brighten/darken the texture, but still using Sharp Lighting, so it looks cel-shaded. Pretty clever eh? Took me all of 5 minutes to work that one out (I lie. It took about half an hour).
So, first of all we specify the color of the vertex. This is done by getting the Sharp Lighting index from the vertex structure and looking up the value in the light map. This gives us a single value. Now, because we're using RGB (if you're not then you can work this bit out on your own), we simply use this value for all components of the RGB triplet. For example, if the lighting value was 0.4, then red would be 0.4, green would be 0.4, and blue would also be 0.4. We then specify the proper texture co-ordinates of the vertex, and finally the position of the vertex. Remember to disable blending and lighting and enable texturing. Hopefully you should have a cel-shaded texture drawn onto your screen. If you're using a simple quad it will probably look a bit odd - try tessellating it more (4x4 or something) and it will look better. As for the highlights? Heh, same as before my friend.
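The lookup-and-expand step can be sketched like so (a hypothetical helper, not the article's own code):

```c
/* Look up the Sharp Lighting grey value for a vertex and expand it
   into an RGB triplet by repeating it across all three channels,
   ready to hand to the API as the vertex color. */
void cel_vertex_color(const float light_map[16], int index, float rgb[3])
{
    float grey = light_map[index];
    rgb[0] = grey;
    rgb[1] = grey;
    rgb[2] = grey;
}
```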
I think we're going to exceed 6 items this time.
Well what do you know, it was only 6 items.
Well, there you have it. The most extensive article on cel-shading, available only on GameDev.net (just advertising the site in case this article ever gets archived on another site), and I haven't had a chance to test out half of the ideas in it. If you have any problems with the article, just post to the thread attached to this article, because after my last attempt at an article I was flooded with e-mails, and I want my inbox free of irritating newbie questions like "what's the dot product?" and "can you send me some source code?".
For the intelligent ones out there, I hope the information in this article has been beneficial. For the stupid ones out there: go back to Visual Basic. Microsoft does most of the work for you there. Oh yeah, I almost forgot...
I've gathered a couple of links that might be useful to you (i.e. they include source code). One of them is for DirectX 8 and uses Vertex Shaders, and the other is for OpenGL and uses the "proper" technique (the DX version tends to look a bit odd with certain objects). Please note that none of these include information about cel-shading textures (I had to figure that one out myself).
These are the two links that I found the most useful (the OpenGL one more so than the other). There are probably more references out there (if there aren't then I am very surprised), but these 2 should help you out dramatically.
Appendix A: Multiple Light Sources
For those of you who have way too many CPU cycles to spare, this is a good method of using them up :-). If you look back to when we calculated the Sharp Lighting value for the vertex, you will see that it has a maximum value of 1 and a minimum of 0. Now, if we have another light lighting that vertex, you compare the existing lighting value with the newly created one. If the new light value is higher, then replace the existing one with it. If it's darker, then ignore it. That's just another stupidly simple trick that will make your scene look nicer (despite running at 1fps).
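In code it really is as dumb as it sounds - keep the brightest contribution per vertex (a hypothetical helper; call it once per extra light):

```c
/* Combine two light contributions for one vertex by keeping whichever
   is brighter.  Both values are assumed to be in the 0..1 range. */
float combine_light_values(float existing, float incoming)
{
    return incoming > existing ? incoming : existing;
}
```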