Real-Time Radiosity
by Nathan Vegdahl

What is radiosity?

Radiosity is an addition to typical 3D rendering methods that greatly increases the realism of any given image. The theory behind it is that light does not just bounce off of an object and into a person's eye; it also bounces off of objects onto other objects, and then into the eye. And it bounces off of objects onto objects onto yet more objects before finally reaching the eye. You can trace that chain until certain rays of light have bounced off of every single object in the room before they ever reach the eye. That is actually what happens in real life! It results in something often referred to as "color bleeding".

For an example of this, take a tennis ball and hold it about half an inch away from a piece of white paper. If you look closely, the part of the paper closest to the ball has taken on a slight yellow tint. That is the phenomenon of radiosity at work.

Probably the most interesting thing about radiosity is that the human eye is very sensitive to it. If radiosity is not present in an image (such as with a typical 3d rendered image) your brain will flash a "not real" warning flag. That is why you can often tell that computer images are... well, computer images. You can tell that they are not real.

But many 3D rendering programs now have an option for rendering with radiosity. Lightflow (http://www.lightflowtech.com), for instance, has radiosity as a standard feature; in fact, it was designed around radiosity in the first place. Also, the composite computer graphics that you see in movies are typically rendered with radiosity. Maya, for instance--a 3D graphics program used in many movies--can render with radiosity.

So how do you calculate radiosity? It is a theoretically simple method, but one that, put into practice, is extremely complex and often unreasonably slow. Of course, there is more than one method, but I am speaking of the traditional one. First you calculate lighting a single time, in the normal way, except that you store the light value of any given polygon (i.e. how much light is reflecting off of it) rather than merely the color of the polygon after lighting has been completed. Then you use a recursive subdivision technique to subdivide the meshes wherever there is an inconsistency across them. You then re-calculate the lighting for the scene, storing the lighting values again, except that this time around you also take into account the light being reflected off of each polygon, treating each one as if it were a light source itself. You also now have more polygons to calculate light values for, because you subdivided them.

You continue doing this until a user-set threshold is reached, and then you stop. You now have thousands and thousands of polygons which all have their own light values, with radiosity taken into account. You then render the scene, interpolating across the surfaces of all the polygons.
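
To make that concrete, here is a rough sketch in C++ of the gather-and-subdivide loop described above. The Patch structure, formFactor(), and subdivideWhereUneven() are placeholders of my own, not taken from any particular renderer; computing formFactor() properly (visibility, distance, and relative orientation between patches) is exactly the expensive part.

    // Rough sketch of the traditional gather-style radiosity loop.
    // Patch, formFactor() and subdivideWhereUneven() are placeholders.
    #include <vector>

    struct Color { float r = 0, g = 0, b = 0; };

    struct Patch {
        Color emission;  // light the patch emits on its own (light sources)
        Color reflect;   // diffuse reflectance (the surface's color)
        Color light;     // stored light value for this patch
    };

    // Fraction of the light leaving 'from' that arrives at 'to'
    // (depends on visibility, distance and relative orientation).
    float formFactor(const Patch& from, const Patch& to);

    // Split patches wherever the stored light changes too sharply across them.
    void subdivideWhereUneven(std::vector<Patch>& patches);

    void solveRadiosity(std::vector<Patch>& patches, int passes)
    {
        for (int pass = 0; pass < passes; ++pass) {
            // Every patch gathers light from every other patch, treating
            // each lit patch as a light source in its own right.
            std::vector<Color> gathered(patches.size());
            for (size_t i = 0; i < patches.size(); ++i) {
                Color sum = patches[i].emission;
                for (size_t j = 0; j < patches.size(); ++j) {
                    if (i == j) continue;
                    float f = formFactor(patches[j], patches[i]);
                    sum.r += patches[j].light.r * patches[i].reflect.r * f;
                    sum.g += patches[j].light.g * patches[i].reflect.g * f;
                    sum.b += patches[j].light.b * patches[i].reflect.b * f;
                }
                gathered[i] = sum;
            }
            for (size_t i = 0; i < patches.size(); ++i)
                patches[i].light = gathered[i];

            // More patches every pass: this is where the time and memory go.
            subdivideWhereUneven(patches);
        }
    }

Every pass is roughly O(n^2) in the number of patches, and the patch count grows with each subdivision, which is why the traditional method is so slow.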

As you can see, that is an extremely time-consuming, memory-hogging method. It should now be obvious that real-time radiosity is impossible, right?

Well, as I already said, that is only one of many methods. Lightflow, for example, uses an ad-hoc algorithm for all light calculation (including radiosity) that is much, much faster than the traditional radiosity method. There are also ways to optimize the traditional method to make it a bit faster, and there are many other methods of calculating radiosity.

But all of those methods, however fast they may be for non-real-time calculations, are still not fast enough for real-time use.

So real-time radiosity is impossible. Right?

Wrong. The reason that all of the non-real-time methods are too slow is that they are all designed to calculate radiosity accurately. They are intended to be fully accurate! But we don't need full accuracy; we merely need enough accuracy for the human brain not to flash the "not-real" flag. And let me tell you, you really don't need all that much accuracy to prevent that.

With that in mind, I developed a real-time radiosity algorithm a year or so ago. I only mentioned it to a few people and a couple of businesses: Nvidia, so that they could implement hardware support (they ignored me), and Atari Games, who did not ignore me but were tentative about using the method. I'm not sure if they've decided to use it or not.

Anyway, I know for a fact that my method does not calculate accurate radiosity. However, I do know that it is accurate enough to fool the human eye and brain into thinking that something is real.

So how does the method work?

Before I explain how my method works, I will first take you through my development of the algorithm, after which I will outline the method in full.

Meeting Ton

A while back, in 1998, I was broke. Well, actually, I'm still broke... but that's not the point. The point is that I couldn't afford to buy any 3D animation software (such as 3D Studio, or Lightwave, or even TrueSpace). So I decided to search the net for a 3D animation program that was free. I already knew about Povray, and it didn't have very good animation capabilities, so I ignored it. I then came upon Rhino, when it was still in its beta and free. But, of course, Rhino was a modeler, not a 3D animation program. Also, it was a NURBS-only program, and I only knew how to model with meshes (from making Quake models and such).

Finally, I came upon a program called Blender (http://www.blender.nl), and it was a full-fledged integrated animation/rendering/modeling program. And it was free! So I downloaded it and started using it. It turned out, however, that the GUI was designed for efficiency, not for ease of learning. So I e-mailed one of the programmers of Blender, Ton. He was very helpful and told me the basics of using Blender. Now, this was two years ago, when only 50 or so people used Blender and probably only about ten more knew about it, whereas now it has become very well known; there must be ten thousand users or so, and probably half of the internet knows about it. In fact, when I first started using Blender, no one had even thought of the C-key idea, and the manual was still in the middle of being written.

Anyway, because of how few people knew about it, I was able to e-mail Ton many times after that. Most of the time I e-mailed him with feature ideas for Blender, because there were many features that seemed standard in other programs but weren't in it (such as environment mapping, translucent objects, and a UV editor). Then, at about the same time the C-key system was implemented, I started coming up with ideas of my own, not ones that I had seen in other programs.

The Reason for Radiosity

One of the C-key features in Blender was radiosity. However, there was a problem with it. The main problem I found was that it was not dynamic. You loaded the meshes (that you wanted radiosity calculated for) into the radiosity feature (which was practically a separate program). Then you told it to calculate the radiosity for them. Once it had finished, you spit the subdivided meshes back out into the actual scene. So you couldn't calculate the radiosity once for each frame; you could only calculate it for one frame, and that result had to be used for all frames.

The second problem was that it was slow. It used the traditional style of calculating radiosity, which is slow! Extremely slow! Other than that, I had no problem with Blender's radiosity. In other words, everything except the end image was bad. At that point in time I wished two things:

  1. I wished I had the C-key.
  2. I wished there were a faster and more easily dynamic method for calculating radiosity.

The strange thing about what happened next was that I never set out to develop a faster radiosity algorithm. In fact, the idea came to me at a time when I wasn't even thinking about radiosity at all; it came while I was watching TV.

The Click

So, I was watching X-Files, sinking into the recliner. All of a sudden, I realized that diffuse light was merely specular light reflected equally in all directions within 90 degrees of the surface normal (a total of 180 degrees), because of micro-facets in surfaces. Then I thought of environment mapping. It was used to simulate specular reflection. So why not use it for diffuse reflection too? Why couldn't you use a cube-rendered view to calculate radiosity?

That was the beginning. However, I only had a vague idea of how to do it: render a cube view, then use it to calculate diffuse light interaction. I still had no idea exactly how to use the cube view to calculate the radiosity; I just knew, by some intuition, that you could.

The Development

So I turned off the TV (yes, in the middle of X-Files) and ran to get some paper. Now, perhaps you think that I used it to write things down on. I did nothing of the sort. What I used the paper for was to clearly see real-life radiosity. I put the piece of paper on the table and quickly ran to my room to get various objects of different colors.

I first took a red marker and held it just above the paper. The red color from the marker casing was slightly visible on the piece of paper. I then moved my head around the paper and marker, making sure to keep them both still. What I saw was that the radiosity did not change as I moved. In other words, the diffuse light did not change as the view point changed. The reason I checked was that specular reflections do change as the view point changes: the reflection "deforms" (so to speak) as the viewer moves. Then I moved the marker, and the faint red on the paper moved with it. So I now knew that radiosity did not change with a change in viewpoint, but it did change with a change in the objects. I had already known both of those things, but for some reason I just wanted to make sure.

Next, I picked up one side of the paper so that it was tilted. I noticed that although the paper moved, the red spot did not. That is not to say that it didn't move relative to the paper (it did); what I mean is that if you looked at the entire setup from above, looking down, the marker would always be obscuring the same part of the red spot. However, there was a change in the red spot; it merely wasn't its position. The change was its intensity. The further the paper tilted, the fainter the red spot got, until eventually it disappeared. And it disappeared exactly when the paper was completely vertical. That meant it had the same attributes as regular diffuse light from light sources: the greater the difference in angle becomes, the lower the amount of light contributed becomes!

After that, I set the paper down, and then started sliding it, while keeping the marker still. What happened was that the red spot stayed the same relative to the marker, which confirmed what I had discovered by tilting the paper.

I made the following definitions:

  1. the marker is a source.
  2. the paper is a receiver.

And with those definitions, I came up with the following rules:

  1. view point change has no effect.
  2. position of radiosity "color bleed" always stays relative to the source.
  3. the intensity of the radiosity "color bleed" decreases as the angle between the receiver's surface normal and the source-to-surface vector (the vector made by drawing a line from the source to the surface) increases (see the formula just below this list). Also, a rather obvious (given) rule was that the radiosity decreases as the light cast onto the source decreases (i.e. an actual light source changes position or intensity).
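
Rule 3 is essentially Lambert's cosine law. In my own notation, with I_source standing for the light leaving the source and theta for the angle between the receiver's surface normal and the source-to-surface vector:

    I_received = I_source * max(0, cos(theta))

With the marker directly over the flat paper, theta is small and cos(theta) is close to 1; as the paper tilts toward vertical, theta approaches 90 degrees, cos(theta) drops to 0, and the red spot fades out, exactly as I observed.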

And to think, I would have gotten nowhere if I had used the marker and paper to write out algorithms!

The Application of Rules

The difficult part, however, was not discovering the rules, but figuring out how to use them to my advantage. The situation was much like attempting to create a rocket ship: you know the laws of physics, but how can you use them to get the rocket into the air?

The first thing I figured out was something called normal texture mapping. Now, perhaps that sounds rather funny, seeing as "normal" is a rather subjective term. But that is not what is meant by "normal" here. It refers to the surface normal of any given point on a surface.

Traditional texture mapping of this kind was done by defining an "imaginary" box, with each side of the box holding part of the texture map. Then lines were projected and reflected all over the place, and wherever a line intersected the box, that color was recorded and assigned to the origin point. In fact, it was quite similar to ray tracing in some respects. However, there was a difference: instead of actually projecting a ray from the view pixel, you projected a line from each vertex of the mesh in question. The direction of the line could be formed in several different ways, each of which caused the texture map to be mapped onto the object in a different way.

A rather standard way of forming the line's direction was to project a line from the origin of the object through the vertex in question, which was referred to as spherical mapping. Another method was to project the line from a z/x origin point whose y value was always equal to that of the vertex in question, which was referred to as cylindrical mapping. There were many other methods, one of which was normal mapping.

Normal mapping formed the direction of the line by simply using the surface normal of the vertex in question. This was very fast, because you didn't have to generate the line direction during rendering; you merely made it equal to the surface normal.

Then you took the direction of the line and projected it from the position of the vertex. The line would intersect the imaginary cube, and whatever UV coordinate it hit, you would assign to that vertex. You would do this for all of the vertices, and all of a sudden you would have all of the UV coordinates you needed to do the texture mapping!
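
As a rough illustration of that projection, here is one way to turn a vertex normal into a cube-face index plus UV coordinates. This is my own sketch of the standard cube-lookup math, not code from any particular engine, and the face orientation convention is just one common choice.

    // Pick the cube face a direction points at and compute UV coordinates on it.
    // Assumes a non-zero direction vector (e.g. a unit surface normal).
    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct CubeUV {
        int   face; // 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z
        float u, v; // in [0,1]
    };

    CubeUV normalToCubeUV(const Vec3& n)
    {
        float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
        CubeUV out;
        float ma, uc, vc;

        if (ax >= ay && ax >= az) {            // X face is dominant
            ma = ax;
            out.face = (n.x > 0) ? 0 : 1;
            uc = (n.x > 0) ? -n.z : n.z;
            vc = -n.y;
        } else if (ay >= az) {                 // Y face is dominant
            ma = ay;
            out.face = (n.y > 0) ? 2 : 3;
            uc = n.x;
            vc = (n.y > 0) ? n.z : -n.z;
        } else {                               // Z face is dominant
            ma = az;
            out.face = (n.z > 0) ? 4 : 5;
            uc = (n.z > 0) ? n.x : -n.x;
            vc = -n.y;
        }

        // Map from [-1,1] on the chosen face to [0,1] texture coordinates.
        out.u = 0.5f * (uc / ma + 1.0f);
        out.v = 0.5f * (vc / ma + 1.0f);
        return out;
    }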

What I decided was that normal mapping would be used to define the UV coordinates into the cube view. But why? Because of rules #2 and #3. A change in a vertex's angle (i.e. its surface normal) would cause a change in the color of that vertex, because the UV coordinate assigned to it would change, and a change in the vertex's position would also cause a change in the UV coordinates. Also, the view point has no effect on normal mapping, which satisfies rule #1.

But I still had a problem. A change in angle was not supposed to change the color completely; it was only supposed to cause it to fade in and out. How was I going to fix that? And then it hit me: blur. If you blurred the cube view, it would cause colors to fade in and out instead of just appearing and disappearing. But how much blurring was I supposed to do? That, too, came to me almost instantly. Diffuse light was defined as specular light distributed evenly over 180 degrees. Therefore, I would have to blur the cube view so that any given pixel would contribute evenly to all pixels within 180 degrees around it. In other words, every pixel contributes evenly to all pixels within the half of the cube view closest to it.

The last problem was that using a cube view as a texture map would merely make the object look like it had a deranged reflection on it, because you would see the image. Then I realized that the blurring I had already thought of solved this problem too: it would eliminate the sharp, defined image, so it wouldn't look like a reflection at all, because you wouldn't be able to tell what anything in the blurred image was.
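
Here is a minimal, brute-force sketch of that blur, assuming the six rendered cube faces have already been flattened into a list of texels with known unit directions (that bookkeeping is left out). Each output texel averages every input texel within 90 degrees of its direction, exactly as described; the commented-out cosine weight is an alternative that follows rule #3 more literally.

    // Blur a cube view so that every texel becomes the average of all texels
    // in the hemisphere around its direction (the "closest half" of the cube).
    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Color { float r, g, b; };

    struct CubeTexel {
        Vec3  dir;   // unit direction this texel represents
        Color color; // rendered scene color in that direction
    };

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    std::vector<Color> diffuseBlur(const std::vector<CubeTexel>& in)
    {
        std::vector<Color> out(in.size());
        for (size_t i = 0; i < in.size(); ++i) {
            float wSum = 0.0f;
            Color acc = { 0, 0, 0 };
            for (size_t j = 0; j < in.size(); ++j) {
                float c = dot(in[i].dir, in[j].dir);
                if (c <= 0.0f) continue;      // outside the closest hemisphere
                float w = 1.0f;               // even contribution, as described
                // float w = c;               // alternative: cosine weighting
                acc.r += in[j].color.r * w;
                acc.g += in[j].color.g * w;
                acc.b += in[j].color.b * w;
                wSum  += w;
            }
            if (wSum > 0.0f)
                out[i] = { acc.r / wSum, acc.g / wSum, acc.b / wSum };
        }
        return out;
    }

Since the result is heavily blurred anyway, a very low-resolution cube view should be enough, which keeps this brute-force pass cheap.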

The last necessity

The last thing that needed to be added to the method was light. In other words, you couldn't just map the texture onto the object; you had to use its color values as light values. The reason is that if you did not, you would never see the color of the texture in completely dark areas, which would defeat the point of the algorithm.

So you would light the world as normal, then render the cube view for each object and use the method I described to blur it and map the UV coordinates onto the object. After that, you would add the color values of the cube-view texture map to the light values of the object. You would then render the scene.
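
A small sketch of what that addition might look like per vertex, with structure names of my own:

    // Add the blurred cube-view sample to the vertex's normal lighting.
    // The sample acts as light, so the bounced color still shows up in
    // areas that receive no direct light at all.
    struct Color { float r, g, b; };

    struct LitVertex {
        Color directLight;  // from the traditional lighting pass
        Color bounce;       // blurred cube-view sample, looked up by normal
        Color surface;      // the object's own diffuse color
    };

    Color shadeVertex(const LitVertex& v)
    {
        Color total = { v.directLight.r + v.bounce.r,
                        v.directLight.g + v.bounce.g,
                        v.directLight.b + v.bounce.b };

        // Light the surface color with the combined value.
        return { total.r * v.surface.r,
                 total.g * v.surface.g,
                 total.b * v.surface.b };
    }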

Ton again

I frantically logged onto the net and e-mailed Ton, outlining my method in detail. His reply was that the method had already been developed, and that it was called "hemicubes". I was extremely disappointed. I had thought that I had come up with something new, and it turned out it already existed. So I decided to do some quick research on hemicubes.

What I found out was that the hemicube method was not the same thing as my method. In fact, there was only a single similarity: the cube-view rendering. Other than that, it was pretty much completely different. It was used in a completely different way! Instead of mapping it onto the object, it rendered a separate half-cube view for every polygon of the model. In fact, it was merely a modification of the traditional method of radiosity! It was used as a replacement for measuring the distance between polygons. It wasn't the same at all!

So, I e-mailed Ton back, carefully outlining the differences. He agreed with me that my method was different. However, he still wouldn't put it in Blender... at least, not yet. The main reason behind that was that he had other things that he felt were more important to implement. And I could definitely understand that.

But I was still disappointed. I was hoping to see my method in action. And I didn't want to implement it myself, because I had no base 3D renderer to program it for, and I didn't want to take the time to write one merely to test out a single idea of mine.

The realization

At that point I still hadn't realized the true implications of my radiosity method (which I had named "radiosity mapping"). But it happened quickly enough.

I was playing Quake (the original) when it occurred to me. I had just gibbed a knight, and I saw the flash of the rocket exploding. GAMES!!! I suddenly thought to myself. Games could already do environment mapping, and they certainly could do texture mapping.

Gameprog.com

At that point in time I was a frequent visitor to gameprog.com. I had already written a couple of tutorials on mode 13h programming and palette manipulation, but I had never written anything original. So I decided to start writing up an article outlining my method.

The only problem was that I got sidetracked and never finished the article. And I had never told the webmaster about it. So the article just sat and rotted.

The present

At present, a year and a half later, I have been inspired again. I have now written this article, outlining my method and its development. I very much hope that someone will read this article and decide to use my method in their game. I will say right now that I do not care whether I get credit. I do not care whether I get money. I just want to see my method in action. If anyone does use radiosity mapping in their game, PLEASE e-mail me and let me know!

Basic outline of radiosity mapping

This section is a quick list of all of the steps in radiosity mapping; a rough sketch of how they fit together in code follows the list.

  1. render cube view from object origin.
  2. blur cube view so that all pixels contribute their color equally to the half of the cube view closest to them.
  3. use normal mapping to acquire UV coordinates for each vertex of the mesh.
  4. texture map the cube view onto the object, except DO NOT USE THE PIXEL COLORS AS YOU NORMALLY WOULD; instead, use the pixel colors as lighting values, and add those light values to the light already present on the object from traditional lighting methods.
  5. render the scene with its new light values.
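
Putting the list into code form, here is a per-frame driver in the same sketch style as the earlier snippets. None of these calls are real engine functions: renderCubeView(), sampleCube(), lightSceneNormally(), renderScene(), and the Scene/Object/Vertex/CubeView types are placeholders, while diffuseBlur(), normalToCubeUV(), and shadeVertex() refer to the sketches earlier in the article.

    // Per-frame driver for radiosity mapping (sketch only; all types and
    // helper functions are placeholders standing in for a real renderer).
    void radiosityMapFrame(Scene& scene)
    {
        lightSceneNormally(scene);                               // traditional lighting

        for (Object& obj : scene.objects) {
            CubeView view    = renderCubeView(scene, obj.origin); // step 1
            CubeView blurred = diffuseBlur(view);                 // step 2

            for (Vertex& v : obj.vertices) {
                CubeUV uv = normalToCubeUV(v.normal);             // step 3
                v.bounce  = sampleCube(blurred, uv);              // step 4: added as light
            }
        }

        renderScene(scene);                                      // step 5
    }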

Optimizations

There are some rather obvious optimizations that could be implemented along with this radiosity method. I will briefly outline the ones I have thought of in this section.

The first one is to not do full texture mapping for the cube view. Instead, only find the light values for each vertex in the mesh, and then interpolate.

The second is that you could skip the run-time radiosity calculation for static objects and only do radiosity mapping for dynamic objects. You could pre-calculate the radiosity for static objects using traditional methods. Then, for the dynamic objects' light contribution to the static objects, you could pre-calculate a volume of light-contribution values that moves with each object. As the light on one part of the dynamic object changes, so does the corresponding section of the volume, and the volume moves with the object. You then light the static objects according to the light volumes of the dynamic objects.
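
To show roughly what such a volume might look like, here is a hedged sketch: a small 3D grid of precomputed contribution factors that travels with the dynamic object and is scaled by the object's current bounced light. The grid layout and how the factors get precomputed are assumptions of mine, not something spelled out above.

    // Hedged sketch of the "light volume" optimization: a 3D grid of
    // precomputed contribution factors that moves with a dynamic object.
    #include <vector>

    struct Color { float r = 0, g = 0, b = 0; };
    struct Vec3  { float x, y, z; };

    struct LightVolume {
        int nx, ny, nz;               // grid resolution
        Vec3 origin;                  // updated each frame to follow the object
        float cellSize;
        std::vector<float> factor;    // precomputed contribution per cell

        // How much of the object's current bounced light reaches point 'p'.
        Color contributionAt(const Vec3& p, const Color& objectLight) const
        {
            int ix = int((p.x - origin.x) / cellSize);
            int iy = int((p.y - origin.y) / cellSize);
            int iz = int((p.z - origin.z) / cellSize);
            if (ix < 0 || iy < 0 || iz < 0 || ix >= nx || iy >= ny || iz >= nz)
                return {};            // outside the volume: no contribution
            float f = factor[(iz * ny + iy) * nx + ix];
            return { objectLight.r * f, objectLight.g * f, objectLight.b * f };
        }
    };

Static vertices that fall inside the volume would add this contribution to their pre-calculated light each frame.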

"Thank you"s

There are several people who helped me develop this method (some without their knowledge), and I would like to take the time to list them here:

Ton Roosendaal (Not a Number)
John Carmack (Id Software)
My mom (yes, she really did help me develop this)
Chris Lattner
Will Kerslake (Atari Games)
Allan Watt

My thanks go out to all of you!

Last thing

One last thing. If you were confused by anything in this article, or if you have any questions, or even if you'd just like to compliment me on my original thinking (smile), then PLEASE feel free to e-mail me! My e-mail address is nathan_vegdahl@yahoo.com.



Date this article was posted to GameDev.net: 2/19/2000
(Note that this date does not necessarily correspond to the date the article was written)
