Contents

  • Preface
  • The Third Dimension
  • Transformation Math
  • Down to the Code
  • Conclusion

The Series

  • The Basics
  • First Steps to Animation
  • Multitexturing
  • Building Worlds With X Files


Down to the Code

The sample uses (as usual in this series) the Direct3D Framework. The application class in animated objects.cpp looks like:

class CMyD3DApplication : public CD3DApplication
{
    D3DVERTEX m_pvObjectVertices[16];
    WORD      m_pwObjectIndices[30];
    Object    m_pObjects[2];

    // Time reference for calculations
    FLOAT m_fStartTimeKey, m_fTimeElapsed;

    static HRESULT ConfirmDevice( DDCAPS* pddDriverCaps,
                                  D3DDEVICEDESC7* pd3dDeviceDesc );

protected:
    HRESULT OneTimeSceneInit();
    HRESULT InitDeviceObjects();
    HRESULT FrameMove( FLOAT fTimeKey );
    HRESULT Render();
    HRESULT DeleteDeviceObjects();
    HRESULT FinalCleanup();

public:
    CMyD3DApplication();
};

The objects are described by the vertices in m_pvObjectVertices[16] and the indices in m_pwObjectIndices[30]. There's an object structure called Object. The fps-independent movement is guaranteed by the two time variables, which hold the start time and the time elapsed between two frames. As usual, ConfirmDevice() is called as the first framework method, but it's not used here, because we don't need any special capabilities of the graphics card. The other framework methods are called top-down and will be covered in that order in the following paragraphs.

OneTimeSceneInit()

The OneTimeSceneInit() function performs any one-time resource allocation and is invoked once per application execution cycle. Here it contains the code to construct the two objects:

HRESULT CMyD3DApplication::OneTimeSceneInit()
{
    // Points and normals which make up the object geometry
    D3DVECTOR p1 = D3DVECTOR(  0.00f,  0.00f,  0.50f );
    D3DVECTOR p2 = D3DVECTOR(  0.50f,  0.00f, -0.50f );
    D3DVECTOR p3 = D3DVECTOR(  0.15f,  0.15f, -0.35f );
    D3DVECTOR p4 = D3DVECTOR( -0.15f,  0.15f, -0.35f );
    D3DVECTOR p5 = D3DVECTOR(  0.15f, -0.15f, -0.35f );
    D3DVECTOR p6 = D3DVECTOR( -0.15f, -0.15f, -0.35f );
    D3DVECTOR p7 = D3DVECTOR( -0.50f,  0.00f, -0.50f );

    D3DVECTOR n1 = Normalize( D3DVECTOR(  0.2f, 1.0f,  0.0f ) );
    D3DVECTOR n2 = Normalize( D3DVECTOR(  0.1f, 1.0f,  0.0f ) );
    D3DVECTOR n3 = Normalize( D3DVECTOR(  0.0f, 1.0f,  0.0f ) );
    D3DVECTOR n4 = Normalize( D3DVECTOR( -0.1f, 1.0f,  0.0f ) );
    D3DVECTOR n5 = Normalize( D3DVECTOR( -0.2f, 1.0f,  0.0f ) );
    D3DVECTOR n6 = Normalize( D3DVECTOR( -0.4f, 0.0f, -1.0f ) );
    D3DVECTOR n7 = Normalize( D3DVECTOR( -0.2f, 0.0f, -1.0f ) );
    D3DVECTOR n8 = Normalize( D3DVECTOR(  0.2f, 0.0f, -1.0f ) );
    D3DVECTOR n9 = Normalize( D3DVECTOR(  0.4f, 0.0f, -1.0f ) );

    // Vertices for the top
    m_pvObjectVertices[ 0] = D3DVERTEX( p1, n1, 0.000f, 0.500f );
    m_pvObjectVertices[ 1] = D3DVERTEX( p2, n2, 0.500f, 1.000f );
    m_pvObjectVertices[ 2] = D3DVERTEX( p3, n3, 0.425f, 0.575f );
    m_pvObjectVertices[ 3] = D3DVERTEX( p4, n4, 0.425f, 0.425f );
    m_pvObjectVertices[ 4] = D3DVERTEX( p7, n5, 0.500f, 0.000f );

    // Vertices for the bottom
    ...

    // Vertices for the rear
    ...

The sample project shows a simple object. Well... a cube would bore you. The wireframe model shows the polygons and points of the object.

Point #1 is m_pvObjectVertices[0] and m_pvObjectVertices[5], point #2 is m_pvObjectVertices[1] and m_pvObjectVertices[6], point #3 is m_pvObjectVertices[3] and m_pvObjectVertices[11], etc.

Every point is declared as a vector with D3DVECTOR. For every face of the object a normal is defined, so that there are nine normals.

The normal vectors are used in Gouraud shading mode to control lighting and some texturing effects. Direct3D applications do not need to specify face normals; the system calculates them automatically when they are needed.

The normal vectors are normalized with a call to

D3DVECTOR n1 = Normalize( D3DVECTOR( 0.2f, 1.0f, 0.0f ) );

The Normalize() function divides the vector by its magnitude, which is obtained as the square root of the sum of the squared components (the Pythagorean theorem).
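In other words, for a vector (x, y, z) the magnitude is sqrt(x*x + y*y + z*z). A minimal sketch of such a helper, for illustration only (the name NormalizeSketch is hypothetical, to avoid confusion with the framework's own Normalize()):

D3DVECTOR NormalizeSketch( const D3DVECTOR& v )
{
    // Magnitude via the Pythagorean theorem (sqrtf needs <math.h>)
    FLOAT fMag = sqrtf( v.x * v.x + v.y * v.y + v.z * v.z );

    // Divide each component by the magnitude to get a unit vector
    return D3DVECTOR( v.x / fMag, v.y / fMag, v.z / fMag );
}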

The last two variables of D3DVERTEX are the texture coordinates. Most textures, like bitmaps, are a two-dimensional array of color values. The individual color values are called texture elements, or texels. Each texel has a unique address in the texture: its texel coordinate. Direct3D programs specify texel coordinates in terms of u,v values, much like 2-D Cartesian coordinates are specified in terms of x,y coordinates. The address can be thought of as a column and row number. However, in order to map texels onto primitives, Direct3D requires a uniform address range for all texels in all textures. Therefore, it uses a generic addressing scheme in which all texel addresses are in the range of 0.0 to 1.0 inclusive.
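As a small illustration of that addressing scheme (the 256x256 texture size and the half-texel offset are assumptions for this example, not something the sample does):

// Hypothetical example: addressing texel (wCol, wRow) in a 256x256 texture.
// Adding 0.5 addresses the center of the texel rather than its edge.
WORD  wCol = 64, wRow = 192;
FLOAT u = ( (FLOAT)wCol + 0.5f ) / 256.0f;   // column -> u in [0.0, 1.0]
FLOAT v = ( (FLOAT)wRow + 0.5f ) / 256.0f;   // row    -> v in [0.0, 1.0]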

Direct3D maps texels in texture space directly to pixels in screen space. Screen space is a frame of reference in which coordinates are related directly to 2-D locations in the frame buffer, to be displayed on a monitor or other viewing device; projection space coordinates are converted to screen space coordinates using a transformation matrix created from the viewport parameters. The sampling process that picks texels for each pixel is called texture filtering. Direct3D supports four texture filtering methods: nearest point sampling, linear texture filtering, anisotropic texture filtering, and texture filtering with mipmaps.

We're not using a texture here, so more on texture mapping in Tutorial #3 "Multitexturing".

Now on to the next part of the OneTimeSceneInit() method:

    // Vertex indices for the object
    m_pwObjectIndices[ 0] =  0;  m_pwObjectIndices[ 1] =  1;  m_pwObjectIndices[ 2] =  2;
    m_pwObjectIndices[ 3] =  0;  m_pwObjectIndices[ 4] =  2;  m_pwObjectIndices[ 5] =  3;
    m_pwObjectIndices[ 6] =  0;  m_pwObjectIndices[ 7] =  3;  m_pwObjectIndices[ 8] =  4;
    m_pwObjectIndices[ 9] =  5;  m_pwObjectIndices[10] =  7;  m_pwObjectIndices[11] =  6;
    m_pwObjectIndices[12] =  5;  m_pwObjectIndices[13] =  8;  m_pwObjectIndices[14] =  7;
    m_pwObjectIndices[15] =  5;  m_pwObjectIndices[16] =  9;  m_pwObjectIndices[17] =  8;
    m_pwObjectIndices[18] = 10;  m_pwObjectIndices[19] = 15;  m_pwObjectIndices[20] = 11;
    m_pwObjectIndices[21] = 11;  m_pwObjectIndices[22] = 15;  m_pwObjectIndices[23] = 12;
    m_pwObjectIndices[24] = 12;  m_pwObjectIndices[25] = 15;  m_pwObjectIndices[26] = 14;
    m_pwObjectIndices[27] = 12;  m_pwObjectIndices[28] = 14;  m_pwObjectIndices[29] = 13;

This piece of code generates the indices for the D3DPT_TRIANGLELIST call in DrawIndexedPrimitive(). Direct3D allows you to define your polygons in one of two ways: by defining their vertices, or by defining indices into a list of vertices. The latter approach is usually faster and more flexible, because it allows objects with multiple polygons to share vertex data. The object consists of only seven points, which are shared by the 16 vertices in m_pvObjectVertices.
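For reference, this is roughly how those two arrays are later handed to the device; the actual call is made in the Render() code covered in the next part, so take this as a sketch:

    // Sketch: drawing the object as an indexed triangle list.
    // 30 indices into the 16-entry vertex pool describe 10 triangles.
    m_pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, D3DFVF_VERTEX,
                                        m_pvObjectVertices, 16,
                                        m_pwObjectIndices, 30, 0 );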

There are two ways of grouping the vertices that define a primitive: using non-indexed primitives and using indexed primitives. To create a non-indexed primitive, you fill an array with an ordered list of vertices. Ordered means that the order of the vertices in the array indicates how to build the triangles: the first triangle consists of the first three vertices, the second triangle consists of the next three vertices, and so on. If you have two triangles that are connected, you'll have to specify the same vertices multiple times. To create an indexed primitive, you fill an array with an unordered list of vertices and specify the order with a second array (the index array). This means that vertices can be shared by multiple triangles, simply by having multiple entries in the index array refer to the same vertex. Most 3D models share a number of vertices, so you can save bandwidth and CPU time by sharing these vertices among multiple triangles. A small sketch of the difference follows.
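Here are two triangles that share an edge (a quad), in both styles; v0 through v3 are placeholder D3DVERTEX values, not part of the sample:

    // Four corners of a quad (placeholder values)
    D3DVERTEX v0, v1, v2, v3;

    // Non-indexed: six vertices, the two shared corners stored twice
    D3DVERTEX quadList[6] = { v0, v1, v2,   v2, v1, v3 };

    // Indexed: four unique vertices plus six indices giving the order
    D3DVERTEX quadVertices[4] = { v0, v1, v2, v3 };
    WORD      quadIndices[6]  = { 0, 1, 2,   2, 1, 3 };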

Sharing vertices has one disadvantage: all triangles that reference a vertex must share its data, including the normal. Consider the vertices of a cube. Lighting a cube is done using its face normals, each of which is perpendicular to its face's plane. If the vertices of a cube are shared, there's only one shared vertex for two adjoining faces, and that vertex can hold only one normal, so the lighting effect wouldn't be what you want. That's why the sample stores its seven points in 16 separate vertices, as illustrated below.
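As a concrete illustration (the coordinates here are hypothetical, not taken from the sample), a cube corner that belongs to three faces needs three vertex entries, one per face normal:

    // One corner point of a cube, stored three times with three
    // different face normals (hypothetical coordinates):
    D3DVECTOR vCorner = D3DVECTOR( 0.5f, 0.5f, 0.5f );
    D3DVERTEX cubeCorner[3];
    cubeCorner[0] = D3DVERTEX( vCorner, D3DVECTOR( 0.0f, 1.0f, 0.0f ), 0.0f, 0.0f ); // top face
    cubeCorner[1] = D3DVERTEX( vCorner, D3DVECTOR( 1.0f, 0.0f, 0.0f ), 0.0f, 0.0f ); // right face
    cubeCorner[2] = D3DVERTEX( vCorner, D3DVECTOR( 0.0f, 0.0f, 1.0f ), 0.0f, 0.0f ); // front face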

In OneTimeSceneInit() the two objects are defined with the help of the m_pObjects array of Object structures.

    ...
    // Yellow object
    m_pObjects[0].vLoc   = D3DVECTOR( -1.0f, 0.0f, 0.0f );
    m_pObjects[0].fYaw   = 0.0f;
    m_pObjects[0].fPitch = 0.0f;
    m_pObjects[0].fRoll  = 0.0f;
    m_pObjects[0].r      = 1.0f;
    m_pObjects[0].g      = 0.92f;
    m_pObjects[0].b      = 0.0f;

    // Red object
    m_pObjects[1].vLoc   = D3DVECTOR( 1.0f, 0.0f, 0.0f );
    m_pObjects[1].fYaw   = 0.0f;
    m_pObjects[1].fPitch = 0.0f;
    m_pObjects[1].fRoll  = 0.0f;
    m_pObjects[1].r      = 1.0f;
    m_pObjects[1].g      = 0.0f;
    m_pObjects[1].b      = 0.27f;

    return S_OK;
}

To position the first object on the screen, a location has to be chosen. The yellow object should be located on the left and the red one on the right side. The colors for the material properties are chosen in the r, g and b variables. They are set later in the framework function Render() with a call to

    // Yellow object
    // Set the color for the object
    D3DUtil_InitMaterial( mtrl, m_pObjects[0].r, m_pObjects[0].g, m_pObjects[0].b );
    m_pd3dDevice->SetMaterial( &mtrl );

InitDeviceObjects()

The InitDeviceObjects() method is used to initialize per-device objects, such as loading texture bits onto a device surface, setting matrices and populating vertex buffers. First, we'll use it here to set a material. When lighting is enabled, Direct3D determines the color of each rendered pixel in the final stage of rendering based on a combination of the current material color (and the texels in an associated texture map), the diffuse and specular colors at the vertex, if specified, and the color and intensity of light produced by light sources in the scene or by the scene's ambient light level.

You must use materials to render a scene if you are letting Direct3D handle lighting.

HRESULT CMyD3DApplication::InitDeviceObjects()
{
    D3DMATERIAL7 mtrl;
    D3DUtil_InitMaterial( mtrl, 1.0f, 1.0f, 1.0f );
    m_pd3dDevice->SetMaterial( &mtrl );
    ...

By default, no material is selected. When no material is selected, the Direct3D lighting engine is disabled.

D3DUtil_InitMaterial() sets the RGBA values of the material. Color values of materials represent how much of a given light component is reflected by a surface that is rendered with that material. A material's properties include diffuse reflection, ambient reflection, light emission and specular highlighting:

  • Diffuse reflection: Defines how the polygon reflects diffuse lighting (any light that does not come from ambient light). This is described in terms of a color, which represents the color best reflected by the polygon. Other colors are reflected less in proportion to how different they are from the diffuse color.
  • Ambient reflection: Defines how the polygon reflects ambient lighting. This is described in terms of a color, which, as with diffuse reflection, represents the color best reflected by the polygon.
  • Light emission: Makes the polygon appear to emit a certain color of light (this does not actually light up the world; it only changes the appearance of the polygon).
  • Specular highlighting: Describes how shiny the polygon is.

A material whose color components are R: 1.0, G: 1.0, B: 1.0, A: 1.0 will reflect all the light that comes its way. Likewise, a material with R: 0.0, G: 1.0, B: 0.0, A: 1.0 will reflect all of the green light that is directed at it. SetMaterial() sets the material properties for the device.
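The underlying arithmetic is a per-channel multiplication. Here is a simplified sketch of that idea (it ignores the full lighting equation with its ambient, specular and attenuation terms; the Reflect() helper is hypothetical):

// Simplified diffuse reflection: each channel of the incoming light
// is scaled by how much of that channel the material reflects.
D3DCOLORVALUE Reflect( const D3DCOLORVALUE& light, const D3DCOLORVALUE& mtrl )
{
    D3DCOLORVALUE out;
    out.r = light.r * mtrl.r;   // e.g. white light on R:1 G:1 B:0 -> yellow
    out.g = light.g * mtrl.g;
    out.b = light.b * mtrl.b;
    out.a = light.a * mtrl.a;
    return out;
}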

After setting the material, we can set up the light. Color values for light sources represent the amount of a particular light component the source emits. Lights don't use an alpha component, so you only need to think about the red, green, and blue components of the color. You can visualize the three components as the red, green, and blue lenses on a projection television. Each lens might be off (a 0.0 value in the appropriate member), as bright as possible (a 1.0 value), or some level in between. The colors coming from each lens combine to make the light's final color. A combination like R: 1.0, G: 1.0, B: 1.0 creates a white light, whereas R: 0.0, G: 0.0, B: 0.0 results in a light that doesn't emit light at all. You can make a light that emits only one component, resulting in a purely red, green, or blue light, or the light could use combinations to emit colors like yellow or purple. You can even set negative color component values to create a "dark light" that actually removes light from a scene, or set the components to some value larger than 1.0 to create an extremely bright light.

You choose the type of light you want when you create a set of light properties. The illumination properties and the resulting computational overhead vary with the type of light source. The following types of light sources are supported by Direct3D 7:

  • Point lights
  • Spotlights
  • Directional lights

DirectX 7.0 does not use the parallel-point light type offered in previous releases of DirectX. Tip: avoid Direct3D's default spotlights; there are more realistic ways of creating spotlight effects, such as texture blending, which is covered in the "Multitexturing" tutorial.

The sample sets up an ambient light and, if the graphics card supports it, two directional lights.

    ...
    // Set up the lights
    m_pd3dDevice->SetRenderState( D3DRENDERSTATE_AMBIENT, 0x0b0b0b0b );

    if( m_pDeviceInfo->ddDeviceDesc.dwVertexProcessingCaps &
        D3DVTXPCAPS_DIRECTIONALLIGHTS )
    {
        D3DLIGHT7 light;

        if( m_pDeviceInfo->ddDeviceDesc.dwMaxActiveLights > 0 )
        {
            D3DUtil_InitLight( light, D3DLIGHT_DIRECTIONAL, 0.5f, -1.0f, 0.3f );
            m_pd3dDevice->SetLight( 0, &light );
            m_pd3dDevice->LightEnable( 0, TRUE );
        }

        if( m_pDeviceInfo->ddDeviceDesc.dwMaxActiveLights > 1 )
        {
            D3DUtil_InitLight( light, D3DLIGHT_DIRECTIONAL, 0.5f, 1.0f, 1.0f );
            light.dcvDiffuse.r = 0.5f;
            light.dcvDiffuse.g = 0.5f;
            light.dcvDiffuse.b = 0.5f;
            m_pd3dDevice->SetLight( 1, &light );
            m_pd3dDevice->LightEnable( 1, TRUE );
        }

        m_pd3dDevice->SetRenderState( D3DRENDERSTATE_LIGHTING, TRUE );
    }
    ...

An ambient light is effectively everywhere in a scene: a general level of light that fills the entire scene, regardless of the objects and their locations within it. Ambient light has no direction or position, only color and intensity. SetRenderState() sets the ambient light by specifying D3DRENDERSTATE_AMBIENT as the dwRenderStateType parameter and the desired RGBA color as the dwRenderState parameter. Keep in mind that the color values of the material represent how much of a given light component is reflected by a surface, so the light properties are not the only ones responsible for the color you will see on the object.

Additionally, the sample uses up to two directional lights. Although we use directional lights and an ambient light to illuminate the objects in the scene, they are independent of one another. A directional light always has direction and color, and it is a factor for shading algorithms such as Gouraud shading. It is equivalent to a point light source at an infinite distance.

The sample first checks the capabilities of the graphics device. If it supports directional light, the light will be set by a call to the SetLight() method, which uses the D3DLIGHT7 structure.

typedef struct _D3DLIGHT7 {
    D3DLIGHTTYPE   dltType;
    D3DCOLORVALUE  dcvDiffuse;
    D3DCOLORVALUE  dcvSpecular;
    D3DCOLORVALUE  dcvAmbient;
    D3DVECTOR      dvPosition;
    D3DVECTOR      dvDirection;
    D3DVALUE       dvRange;
    D3DVALUE       dvFalloff;
    D3DVALUE       dvAttenuation0;
    D3DVALUE       dvAttenuation1;
    D3DVALUE       dvAttenuation2;
    D3DVALUE       dvTheta;
    D3DVALUE       dvPhi;
} D3DLIGHT7, *LPD3DLIGHT7;

The position, range, and attenuation properties are used to define a light's location in world space, and how the light behaves over distance. The D3DUtil_InitLight() method in d3dutil.cpp sets a few default values.

VOID D3DUtil_InitLight( D3DLIGHT7& light, D3DLIGHTTYPE ltType,
                        FLOAT x, FLOAT y, FLOAT z )
{
    ZeroMemory( &light, sizeof(D3DLIGHT7) );
    light.dltType        = ltType;
    light.dcvDiffuse.r   = 1.0f;
    light.dcvDiffuse.g   = 1.0f;
    light.dcvDiffuse.b   = 1.0f;
    light.dcvSpecular    = light.dcvDiffuse;
    light.dvPosition.x   = light.dvDirection.x = x;
    light.dvPosition.y   = light.dvDirection.y = y;
    light.dvPosition.z   = light.dvDirection.z = z;
    light.dvAttenuation0 = 1.0f;
    light.dvRange        = D3DLIGHT_RANGE_MAX;
}

For the first light, only the position and direction are set explicitly. D3DUtil_InitLight() assigns the same D3DVECTOR, with x-, y- and z-coordinates in world space, to both; for a directional light only the direction matters. The first light points down at the objects (its direction has y = -1.0) and the second points up at them from below. The second light is only set if the graphics device supports a second active light, and with its halved diffuse color it's a bit darker.

Directional lights don't use the range and attenuation variables. A light's range property determines the distance, in world space, at which meshes in a scene no longer receive light; the dvRange floating point value represents the light's maximum range. The attenuation variables control how a light's intensity decreases toward the maximum distance specified by the range property. There are three attenuation values, controlling a light's constant, linear and quadratic attenuation with floating point variables. Many applications set the dvAttenuation1 member to 1.0f and the others to 0.0f.
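The three values combine as follows (a sketch of the documented attenuation formula, not code from the sample):

// Intensity scale at distance d from the light source:
//   atten = 1 / ( att0 + att1*d + att2*d*d )
FLOAT Attenuation( FLOAT d, FLOAT att0, FLOAT att1, FLOAT att2 )
{
    return 1.0f / ( att0 + att1 * d + att2 * d * d );
}

With dvAttenuation1 at 1.0f and the others at 0.0f, the intensity simply falls off as 1/d.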

Besides the material and lights, the InitDeviceObjects() method also sets the projection matrix and the aspect ratio of the viewport.


Next: Code (cont.)