The Application of Rules

The difficult part, however, was not discovering the rules, but figuring out how to use them to my advantage. The situation was much like attempting to create a rocket ship: you know the laws of physics, but how can you use them to get the rocket into the air?

The first thing I figured out was something called normal texture mapping. Now, perhaps that sounds rather funny, seeing as "normal" is a rather subjective term. But that is not what is meant by "normal" here. It refers to the surface normal at any given point on a surface.

Traditional texture mapping was done by defining an "imaginary" box, with each side of the box holding part of the texture map. Then, lines were projected from the mesh, and once a line intersected a spot on the box, that color was recorded and assigned to the originating pixel. In fact, it was quite similar to ray tracing in some respects. However, there was a difference: instead of projecting a ray from each view pixel, you projected a line from each vertex of the mesh in question. The direction of the line could be formed in several different ways, each of which caused the texture map to be mapped onto the object in a different way.

A rather standard way of forming the line's direction was to project a line from the origin of the object through the vertex in question; this was referred to as spherical mapping. Another method was to project the line from a point at the object's z/x origin, with the y value of that point always equal to the y value of the vertex in question; this was referred to as cylindrical mapping. There were many other methods in use, one of which was normal mapping.
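
To make the difference concrete, here is a minimal sketch of how those two directions might be formed, assuming a simple Vec3 type and an object-space origin; the names and helpers are made up for illustration, not taken from the original implementation.

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Spherical mapping: project from the object's origin through the vertex.
Vec3 SphericalDir(Vec3 origin, Vec3 vertex)
{
    return normalize({ vertex.x - origin.x,
                       vertex.y - origin.y,
                       vertex.z - origin.z });
}

// Cylindrical mapping: project from the z/x origin but use the vertex's
// own y value, so the direction is always horizontal.
Vec3 CylindricalDir(Vec3 origin, Vec3 vertex)
{
    return normalize({ vertex.x - origin.x, 0.0f, vertex.z - origin.z });
}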

Normal mapping formed the direction of the line by simply using the surface normal of the vertex in question. This was very fast, because you didn't have to generate the line direction during rendering; you simply set it equal to the surface normal.

Then you took the direction of the line and projected it from the position of the vertex. The line would intersect the imaginary cube, and whatever UV coordinate it hit, you would assign to that vertex. You would do this for all of the vertices, and all of a sudden you would have all of the UV coordinates you needed to do the texture mapping!
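
As a rough illustration, here is one way that projection could be implemented, assuming the imaginary cube is the axis-aligned box [-1,1]^3 around the object; the face numbering and function names are arbitrary choices for this sketch.

struct Vec3 { float x, y, z; };
struct CubeUV { int face; float u, v; };  // face 0..5 = +X,-X,+Y,-Y,+Z,-Z

// Cast a line from vertex p along direction d (the surface normal) and
// return the face and UV coordinate where it exits the box.
CubeUV ProjectToCube(Vec3 p, Vec3 d)
{
    const float pc[3] = { p.x, p.y, p.z };
    const float dc[3] = { d.x, d.y, d.z };

    // Find the nearest face plane the line crosses in its direction of travel.
    float best = 1e30f;
    int axis = 0, positive = 0;
    for (int a = 0; a < 3; ++a) {
        if (dc[a] == 0.0f) continue;
        float plane = dc[a] > 0.0f ? 1.0f : -1.0f;
        float t = (plane - pc[a]) / dc[a];
        if (t > 0.0f && t < best) {
            best = t;
            axis = a;
            positive = dc[a] > 0.0f ? 1 : 0;
        }
    }

    float hit[3] = { pc[0] + best * dc[0],
                     pc[1] + best * dc[1],
                     pc[2] + best * dc[2] };

    CubeUV r;
    r.face = axis * 2 + (positive ? 0 : 1);
    // The two coordinates other than the hit axis become the UV,
    // remapped from [-1,1] to [0,1].
    int ua = (axis + 1) % 3, va = (axis + 2) % 3;
    r.u = 0.5f * (hit[ua] + 1.0f);
    r.v = 0.5f * (hit[va] + 1.0f);
    return r;
}

Note that moving the vertex changes the hit point, and therefore the UV, and rotating the surface normal changes it too, while the view point never enters into the calculation at all.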

What I decided was that normal mapping would be used to define the UV coordinates taken from the cube view. But why? Because of rules #2 and #3. A change in a vertex's angle (i.e., its surface normal) would cause a change in the color of that vertex, because the UV coordinate it was assigned would change; a change in a vertex's position would likewise cause a change in its UV coordinates. Also, the view point has no effect whatsoever on normal mapping.

But I still had a problem. A change in angle was not supposed to change the color completely; it was only supposed to make the color fade in and out. How was I going to fix that? And then it hit me: blur. If you blurred the cube view, colors would fade in and out instead of just appearing and disappearing. But how much blurring was I supposed to do? That, too, came to me almost instantly. Since diffuse light was defined as specular light distributed evenly over 180 degrees, I would have to blur the cube view so that any given pixel contributed evenly to all pixels within 180 degrees of it. In other words, every pixel contributes evenly to all pixels within the half of the cube view closest to it.
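
A brute-force version of that blur might look like the following sketch, which spells out the equal-weight 180-degree rule exactly as described above with no attention paid to speed; the texel layout and Direction helper are assumptions consistent with the projection sketch earlier.

#include <cmath>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Unit vector through the center of texel (x, y) on the given face,
// using the same face order as before: +X,-X,+Y,-Y,+Z,-Z.
Vec3 Direction(int face, int x, int y, int size)
{
    float u = 2.0f * (x + 0.5f) / size - 1.0f;
    float v = 2.0f * (y + 0.5f) / size - 1.0f;
    switch (face) {
        case 0:  return normalize({  1.0f, u, v });
        case 1:  return normalize({ -1.0f, u, v });
        case 2:  return normalize({ v,  1.0f, u });
        case 3:  return normalize({ v, -1.0f, u });
        case 4:  return normalize({ u, v,  1.0f });
        default: return normalize({ u, v, -1.0f });
    }
}

// Every destination texel becomes the equal-weight average of all source
// texels within 180 degrees of it (the facing half of the cube view).
void BlurCubeView(const Color* src, Color* dst, int size)
{
    for (int df = 0; df < 6; ++df)
    for (int dy = 0; dy < size; ++dy)
    for (int dx = 0; dx < size; ++dx) {
        Vec3 d = Direction(df, dx, dy, size);
        Color sum = { 0.0f, 0.0f, 0.0f };
        int count = 0;
        for (int sf = 0; sf < 6; ++sf)
        for (int sy = 0; sy < size; ++sy)
        for (int sx = 0; sx < size; ++sx) {
            Vec3 s = Direction(sf, sx, sy, size);
            if (d.x * s.x + d.y * s.y + d.z * s.z > 0.0f) {
                const Color& c = src[(sf * size + sy) * size + sx];
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
                ++count;
            }
        }
        Color& o = dst[(df * size + dy) * size + dx];
        o.r = sum.r / count;
        o.g = sum.g / count;
        o.b = sum.b / count;
    }
}

(A real implementation would precompute or approximate this, since the brute-force loops touch every pair of texels.)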

The last problem I had was that using a cube view as a texture map would merely make it look like a deranged reflection, because you would see the image itself. Then I realized that the blurring I had already thought of solved this problem too: it would eliminate the sharp, defined image, so the result wouldn't look like a reflection at all, because you wouldn't be able to tell what anything in the blurred image was.



Next: The Last Necessity