## RacorX3

The main improvement of RacorX3 over RacorX2 is the addition of a per-vertex diffuse reflection model in the vertex shader. This is one of the simplest lighting calculations: it outputs a color based on the dot product of the vertex normal with the light vector. RacorX3 uses a light positioned at (0.0, 0.0, 1.0) and a green color.
As usual we are tracking the life-cycle of the vertex shader.

## Vertex Shader Declaration

The vertex shader declaration has to map vertex data to specific vertex shader input registers. In addition to the previous examples, we need to map a normal vector to an input register:

```cpp
// vertex shader declaration
DWORD dwDecl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(0, D3DVSDT_FLOAT3),  // position in input register v0
    D3DVSD_REG(3, D3DVSDT_FLOAT3),  // normal in input register v3
    D3DVSD_END()
};
```

The corresponding layout of the vertex buffer looks like this:

```cpp
struct VERTICES
{
    FLOAT x, y, z;    // The untransformed position for the vertex
    FLOAT nx, ny, nz; // The normal
};

// Declare custom FVF macro.
#define D3DFVF_VERTEX (D3DFVF_XYZ|D3DFVF_NORMAL)
```

Each vertex consists of three position floating-point values and three normal floating-point values in the vertex buffer. The vertex shader reads the position and normal values from the vertex buffer via the input registers v0 and v3.

## Setting the Vertex Shader Constant Registers

The constant register indices are given readable names in the include file const.h:

```cpp
#define CLIP_MATRIX            0
#define CLIP_MATRIX_1          1
#define CLIP_MATRIX_2          2
#define CLIP_MATRIX_3          3
#define INVERSE_WORLD_MATRIX   4
#define INVERSE_WORLD_MATRIX_1 5
#define INVERSE_WORLD_MATRIX_2 6
#define LIGHT_POSITION         11
#define DIFFUSE_COLOR          14
#define LIGHT_COLOR            15
```

The vertex shader constants are set in FrameMove():

```cpp
HRESULT CMyD3DApplication::FrameMove()
{
    // rotates the object about the y-axis
    D3DXMatrixRotationY( &m_matWorld, m_fTime * 1.5f );

    // set the clip matrix
    m_pd3dDevice->SetVertexShaderConstant(CLIP_MATRIX,
        &(m_matWorld * m_matView * m_matProj), 4);

    D3DXMATRIX matWorldInverse;
    D3DXMatrixInverse(&matWorldInverse, NULL, &m_matWorld);
    m_pd3dDevice->SetVertexShaderConstant(INVERSE_WORLD_MATRIX,
        &matWorldInverse, 3);

    return S_OK;
}
```

Contrary to the previous examples, the concatenated world, view, and projection matrix, which is used to rotate the quad, is not transposed here. This is because the matrix is transposed in the vertex shader, as shown below.
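Why the clip matrix need not be pre-transposed on the CPU can be sketched in plain C++ (a hypothetical illustration, not part of the sample's source): when the constant registers c0–c3 hold the untransposed matrix rows, the shader's mul/mad/add sequence reproduces the usual Direct3D row-vector transform v * M.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>; // row-major: M[0]..M[3] are the rows

// Conventional Direct3D row-vector transform: out = v * M
Vec4 rowVectorTimesMatrix(const Vec4& v, const Mat4& M) {
    Vec4 out{0.0f, 0.0f, 0.0f, 0.0f};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[col] += v[row] * M[row][col];
    return out;
}

// What the shader's instruction sequence computes when c0..c3 hold
// the UNtransposed rows of M (w of the input vertex is implicitly 1):
//   mul r0, v.x, c0
//   mad r0, v.y, c1, r0
//   mad r0, v.z, c2, r0
//   add out, c3, r0
Vec4 madSequence(const Vec4& v, const Mat4& M) {
    Vec4 out;
    for (int i = 0; i < 4; ++i)
        out[i] = v[0] * M[0][i] + v[1] * M[1][i] + v[2] * M[2][i] + M[3][i];
    return out;
}
```

Both functions yield the same result for any matrix whenever v.w = 1, which is why the sample can upload the concatenated matrix without transposing it first.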
To transform the normal, the inverse 4x3 world matrix is sent to the vertex shader via SetVertexShaderConstant().

## The Vertex Shader

The vertex shader is a little more complex than the ones used in the previous examples:

```asm
; per-vertex diffuse lighting
#include "const.h"

vs.1.1

; transpose and transform to clip space
mul r0, v0.x, c[CLIP_MATRIX]
mad r0, v0.y, c[CLIP_MATRIX_1], r0
mad r0, v0.z, c[CLIP_MATRIX_2], r0
add oPos, c[CLIP_MATRIX_3], r0

; transform normal
dp3 r1.x, v3, c[INVERSE_WORLD_MATRIX]
dp3 r1.y, v3, c[INVERSE_WORLD_MATRIX_1]
dp3 r1.z, v3, c[INVERSE_WORLD_MATRIX_2]

; renormalize it
dp3 r1.w, r1, r1
rsq r1.w, r1.w
mul r1, r1, r1.w

; N dot L
; we need the L vector pointing towards the light, thus negate sign
dp3 r0, r1, -c[LIGHT_POSITION]

mul r0, r0, c[LIGHT_COLOR]    ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against material
```

The first four instructions transpose the clip matrix on the fly and transform the position to clip space. The normals are transformed by the three dp3 instructions that follow.

You can think of a normal transform in the following way: normal vectors (unlike position vectors) are simply directions in space, and as such they should not get squished in magnitude, and translation doesn't change their direction. They should simply be rotated to reflect the change in orientation of the surface. This change in orientation is a result of rotating and squishing the object, but not moving it. The information for rotating a normal can be extracted from the 4x4 transformation matrix by inverting it and transposing the result. A more math-related explanation is given in [Haines/Möller] and [Turkowski].

So the bullet-proof way to handle normals is to transform them with the transpose of the inverse of the matrix that is used to transform the object. If the matrix used to transform the object is called M, then we must use the matrix N below to transform the normals of this object:

N = transpose(inverse(M))
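The effect of using transpose(inverse(M)) instead of M itself can be demonstrated with a small C++ sketch (hypothetical, not from the sample's source). Under a non-uniform scale, a normal transformed with M is no longer perpendicular to the surface, while a normal transformed with N = transpose(inverse(M)) is:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat3 = std::array<Vec3, 3>; // row-major

float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Multiply a column vector by M: one dot product per row,
// exactly what the three dp3 instructions do in the shader.
Vec3 transform(const Mat3& M, const Vec3& v) {
    return { dot(M[0], v), dot(M[1], v), dot(M[2], v) };
}

// Non-uniform scale by 2 along x. For a diagonal matrix the inverse
// just takes the reciprocal of each entry and the transpose changes
// nothing, so N = transpose(inverse(M)) = diag(0.5, 1, 1).
const Mat3 M = {{ {2.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}, {0.0f, 0.0f, 1.0f} }};
const Mat3 N = {{ {0.5f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f}, {0.0f, 0.0f, 1.0f} }};
```

A tangent t = (1, 1, 0) lying in the surface and its normal n = (1, -1, 0) start out perpendicular. After the scale, transform(M, n) is no longer perpendicular to transform(M, t), but transform(N, n) is.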
That's exactly what the source is doing. The inverse world matrix is delivered to the vertex shader via SetVertexShaderConstant(), and the transposition happens implicitly: to multiply a vector by a matrix in Direct3D's row-vector convention, each component of the vector is multiplied with a column of the matrix. The dp3 instructions instead dot the normal against the rows of the inverse world matrix, and dotting against the rows of inverse(M) is the same as multiplying by the columns of transpose(inverse(M)). So in the case of the normals, no explicit transposition has to be done.

The normal is re-normalized with the dp3, rsq, and mul instructions. To calculate a unit vector, divide the vector by its magnitude or length. The magnitude of a vector is calculated using the Pythagorean theorem:

x² + y² + z² = ||A||²

The length of the vector is retrieved by

||A|| = sqrt(x² + y² + z²)

The magnitude of a vector has a special symbol in mathematics: a capital letter designated with two vertical bars, ||A||. So dividing the vector by its magnitude is:

UnitVector = Vector / sqrt(x² + y² + z²)

The lines of code in the vertex shader that handle the calculation of the unit vector look like this:

```asm
; renormalize it
dp3 r1.w, r1, r1  ; (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z)
rsq r1.w, r1.w    ; if (v != 0 && v != 1.0) v = (float)(1.0f / sqrt(v))
mul r1, r1, r1.w  ; r1 * r1.w
```
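The dp3/rsq/mul sequence can be mirrored one-to-one in plain C++ (a hypothetical sketch, not from the sample's source):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Mirrors the shader's three renormalization instructions.
Vec3 normalizeLikeShader(const Vec3& v) {
    // dp3 r1.w, r1, r1 -> squared length via the Pythagorean theorem
    float w = v[0] * v[0] + v[1] * v[1] + v[2] * v[2];
    // rsq r1.w, r1.w   -> reciprocal square root
    w = 1.0f / std::sqrt(w);
    // mul r1, r1, r1.w -> scale each component by 1/length
    return { v[0] * w, v[1] * w, v[2] * w };
}
```

For example, (3, 0, 4) has length 5, so the result is (0.6, 0.0, 0.8).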
The underlying calculation of these three instructions can be represented by the following formula, which is mostly identical to the formula postulated above:

UnitVector = Vector * 1/sqrt(x² + y² + z²)

Lighting is calculated with the following three instructions:

```asm
dp3 r0, r1, -c[LIGHT_POSITION]
mul r0, r0, c[LIGHT_COLOR]    ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against diffuse color
```

Nowadays the lighting models used in games are not based on much physical theory. Game programmers use approximations that simulate the way photons are reflected from objects in a rough but efficient manner. One usually differentiates between different kinds of light sources and different reflection models. The common light sources are called directional lights, point lights, and spotlights. The most common reflection models are ambient, diffuse, and specular lighting. This example uses a directional light source with a diffuse reflection model.

## Directional Light

RacorX3 uses a light source at an infinite distance. This simulates the long distance the light beams have to travel from the sun; we treat these light beams as being parallel. This kind of light source is called a directional light source.

## Diffuse Reflection

Whereas ambient light is considered to be uniform from any direction, diffuse light simulates the illumination of an object by a particular light source. With the diffuse lighting model you are therefore able to see that light falls onto the surface of an object from a particular direction. It is based on the assumption that light is reflected equally well in all directions, so the appearance of the reflection does not depend on the position of the observer. The intensity of the light reflected in any direction depends only on how much light falls onto the surface. If the surface of the object is facing the light source, which means it is perpendicular to the direction of the light, the density of the incident light is the highest.
If the surface faces the light source at some angle smaller than 90 degrees, the density is proportionally smaller. The diffuse reflection model is based on a law of physics called Lambert's Law, which states that for ideally diffuse (totally matte) surfaces, the reflected light is determined by the cosine of the angle between the surface normal N and the light vector L.
The left figure shows a geometric interpretation of Lambert's Law (see also [RTR]). The middle figure shows the light rays hitting the surface perpendicularly, a distance d apart. The intensity of the light is related to this distance: it decreases as d becomes greater. This is shown in the right figure, where the light rays make an angle ø with the normal of the plane. This illustrates that the same amount of light that passes through one side of a right-angle triangle is reflected from the region of the surface corresponding to the triangle's hypotenuse. Due to the relationships that hold in a right-angle triangle, the length of the hypotenuse is d/cos ø times the length of the considered side. Thus you can deduce that if the intensity of the incident light is Idirected, the amount of light reflected from a unit surface is Idirected cos ø. Adjusting this with a coefficient that describes the reflection properties of the material leads to the following equation (see also [Savchenko]):

Ireflected = Cdiffuse * Idirected * cos ø

This equation demonstrates that the reflection is at its peak for surfaces that are perpendicular to the direction of light (ø = 0, so the cosine value is at its maximum of 1) and diminishes as the angle grows, because the cosine value becomes smaller. If the angle is greater than 90 degrees, the light is obscured by the surface: the cosine is negative, and you would obtain a negative intensity of the reflected light, which is clamped by the output registers.

In an implementation of this model, you have to find a way to compute cos ø. By definition the dot or scalar product of the light and normal vectors can be expressed as

N dot L = ||N|| ||L|| cos ø

where ||N|| and ||L|| are the lengths of the vectors. If both vectors are unit length, you can compute cos ø as the scalar or dot product of the light and normal vectors.
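Putting the pieces together, the whole per-vertex diffuse term can be sketched in plain C++ (a hypothetical illustration, not from the sample's source; it follows the shader's convention of negating the light position and adds the clamping that the color output register performs):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// oD0 = clamp((n dot -lightPosition) * lightColor * diffuseColor)
// n is assumed to be a unit-length normal.
Vec3 diffuse(const Vec3& n, const Vec3& lightPosition,
             const Vec3& lightColor, const Vec3& diffuseColor) {
    // dp3 r0, r1, -c[LIGHT_POSITION]
    float nDotL = dot(n, { -lightPosition[0], -lightPosition[1], -lightPosition[2] });
    Vec3 out;
    for (int i = 0; i < 3; ++i) {
        // mul against the light color, then against the material color
        float c = nDotL * lightColor[i] * diffuseColor[i];
        // a negative intensity is clamped by the color output register
        out[i] = std::min(1.0f, std::max(0.0f, c));
    }
    return out;
}
```

With the sample's light position (0.0, 0.0, 1.0), a white light color, and a green material, a surface whose transformed normal points along -z receives full green, while one pointing along +z stays black.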
Thus the expression becomes

Ireflected = Cdiffuse * Idirected * (N dot L)

So (N dot L) is the same as the cosine of the angle between N and L; as the angle decreases, the resulting diffuse value gets higher. This is exactly what the vertex shader computes, using these constants from const.h:

```cpp
#define LIGHT_POSITION 11
#define DIFFUSE_COLOR  14
#define LIGHT_COLOR    15
```

```asm
dp3 r0, r1, -c[LIGHT_POSITION]
mul r0, r0, c[LIGHT_COLOR]    ; modulate against light color
mul oD0, r0, c[DIFFUSE_COLOR] ; modulate against material
```

Without the light color, the vertex shader registers would be involved in the following way:

oD0 = (r1 dot -c11) * c14

This example additionally modulates against the blue light color in c15:

oD0 = (c15 * (r1 dot -c11)) * c14

## Summary

RacorX3 shows the usage of an include file to give the constants names that are easier to remember. It shows how to normalize vectors, and it touches the problem of transforming normals, presenting a bullet-proof method to do it. The example introduces an optimization technique that eliminates the need to transpose the clip-space matrix on the CPU, and it shows the usage of a simple diffuse reflection model that lights the quad on a per-vertex basis.