## RacorX11

This example shows how to use a cube map as a normalization map for the light vector. Using a cube normalization map helps to prevent the following problem: as the light gets closer to the polygon surface, the interpolated light vector becomes more and more unnormalized (it is shortened). The result is that, as the light approaches the surface, the surface is actually less illuminated than when the light is farther away.

The cube normalization map is designed so that, given a texture coordinate representing a 3D vector, the output is always the normalized vector. Cube maps are made up of six square textures of the same size, representing a cube centered at the origin. Each cube face represents a set of directions along one major axis (+x, -x, +y, -y, +z, -z). The normalization cube map is centered about the origin of the earth object; each texel on the cube represents a unit light vector oriented toward this origin.
## Three Passes

Because a single ps.1.1 pixel shader cannot combine the specular reflection with a diffuse reflection whose light vector is normalized by a cube map, all the effects are drawn onto the object in three passes. The first pass uses the cube normalization map to normalize the light vector and calculates the diffuse reflection. The second pass calculates the specular reflection, and the third pass draws the point light effect into the frame buffer:

```cpp
// first pass: diffuse(cubemap) * color
m_pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
// SrcColor * 1 + DestColor * 1
m_pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
m_pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
m_pd3dDevice->SetTexture(0, m_pColorTexture);
m_pd3dDevice->SetTexture(1, m_pNormalMap);
m_pd3dDevice->SetTexture(2, m_pCubeTexture);
m_pd3dDevice->SetPixelShader(m_dwPixShaderDot3);
m_pd3dDevice->SetVertexShader(m_dwVertexSpecular);
m_pd3dDevice->SetStreamSource(0, m_pVertices, sizeof(ShaderVertex));
m_pd3dDevice->SetIndices(m_pIndexBuffer, 0);
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                   m_iNumVertices, 0, m_iNumTriangles);

// second pass: specular
m_pd3dDevice->SetTexture(3, m_pIllumMap);
m_pd3dDevice->SetPixelShader(m_dwPixelSpecular);
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                   m_iNumVertices, 0, m_iNumTriangles);

// third pass: attenuation
// SrcColor * 0 + DestColor * SrcColor
m_pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ZERO);
m_pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_SRCCOLOR);
m_pd3dDevice->SetTexture(0, m_pPointLightTexture);
m_pd3dDevice->SetTexture(1, m_pPointLightTexture);
// Set the pixel shader
m_pd3dDevice->SetPixelShader(m_dwPixShaderPointLight);
// set vertex shader
m_pd3dDevice->SetVertexShader(m_dwVertShaderPointLight);
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0,
                                   m_iNumVertices, 0, m_iNumTriangles);

m_pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
```

To blend the results of the different passes together, alpha blending is used. The following code snippet shows the adding of the first and the second pass:

```cpp
// SrcColor * 1 + DestColor * 1
m_pd3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE);
m_pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
```

It is important to note that the same vertex shader is used for the first and the second pass, although it is explicitly set only in the first pass. Using the same vertex shader in two passes should lead to a small performance gain, because the second time the vertex shader does not have to be uploaded to the graphics card again. The gain is probably bigger with a software vertex shader implementation.

## First Pass: Normalization of Light Vector / Calculation of Diffuse Reflection

The shader pair that uses the cube normalization map and calculates the diffuse reflection effect can be found in diffCubeMap.vsh and diffCubeMap.psh. The pixel shader source is pretty short:

```
ps.1.1
tex t0                  ; color map
tex t1                  ; normal map
tex t2                  ; cube map
dp3 r1, t2_bx2, t1_bx2  ; diffuse
mul r0, t0, r1
```

The light vector is stored as a texture coordinate of the third texture stage; sampling the cube map with it yields the normalized light vector in t2.
Compared to the former example, the diffuse reflection value is calculated via a dp3 of the cube-map-normalized light vector and the normal map value.

## Second Pass: Specular Reflection

The pixel shader specular.psh, which calculates the specular reflection, is nearly identical to the pixel shader used in RacorX8:

```
ps.1.1
tex t0                  ; color map + gloss map
tex t1                  ; normal map
texm3x2pad t2, t1_bx2   ; u = t1 dot (t2) half vector
texm3x2tex t3, t1_bx2   ; v = t1 dot (t3) half vector
                        ; fetch texture 4 at u, v
                        ; t3.a = (N dot H)^16
mul r0, t0.a, t3.a      ; (N dot H)^16 * gloss value
```

This shader now handles only the specular reflection; therefore any code relating to the calculation of the diffuse reflection is omitted. It is also important to note that using the […]. A more elegant solution is possible by using the […].

## Third Pass: Point Light Effect

The vertex and pixel shader for the point light effect are the same as in RacorX10.