Contents
 Getting Started
 Setting Up
 Drawing & Texturing
 Wrapping Up



Drawing the Panel

Drawing the rectangle is pretty easy. Add the following lines of code to your Render2D function:

g_pd3dDevice->SetVertexShader(D3DFVF_PANELVERTEX);                   // vertex format
g_pd3dDevice->SetStreamSource(0, g_pVertices, sizeof(PANELVERTEX));  // which vertex buffer, and the stride
g_pd3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);                // two triangles drawn as a fan

These lines tell the device how the vertices are formatted, which vertices to use, and how to use them. I have chosen to draw this as a triangle fan because it's more compact than drawing two separate triangles. Note that since we are not dealing with any other vertex formats or vertex buffers, we could have moved the first two lines to our PostInitialize function. I put them here to stress that you have to tell the device what it's dealing with. If you don't, it may assume the vertices are in a different format and crash.

At this point, you can compile and run the code. If everything is correct, you should see a black rectangle on a blue background. That isn't quite right, because we set the vertex colors to white. The problem is that the device has lighting enabled, which we don't need. Turn lighting off by adding this line to the PostInitialize function:

g_pd3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);

Now recompile, and the device will use the vertex colors. If you'd like, you can change the vertex colors and see the effect; one way to do that is sketched below. So far, so good, but a game that features a white rectangle is visually boring, and we haven't even gotten to blitting a bitmap yet. To do that, we have to add a texture.
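The following is only a sketch of that color experiment: it assumes the PANELVERTEX structure from the Setting Up section stores its diffuse color in a DWORD member named color, so adjust the member name to match your own declaration. Tinting all four vertices the same color tints the whole panel; giving each vertex a different color produces a gradient.

PANELVERTEX* pVertices = NULL;
g_pVertices->Lock(0, 4 * sizeof(PANELVERTEX), (BYTE**)&pVertices, 0);
for (int i = 0; i < 4; ++i)
    pVertices[i].color = 0xffff0000;   // opaque red in ARGB
g_pVertices->Unlock();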

Texturing the Panel

A texture is basically a bitmap that can be loaded from a file or generated from data. For simplicity, we'll just use files. Add the following to your global variables:

LPDIRECT3DTEXTURE8 g_pTexture   = NULL;

This is the texture object we'll be using. To load a texture from a file, add this line to PostInitialize:

D3DXCreateTextureFromFileEx(g_pd3dDevice, [Some Image File], 0, 0, 0, 0,
                            D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
                            D3DX_DEFAULT, 0, NULL, NULL, &g_pTexture);

Replace [Some Image File] with a file of your choice. The D3DX function can load many standard formats. The pixel format we're using has an alpha channel, so we could load an image format that carries one, such as .dds. Also, I'm ignoring the ColorKey parameter, but you could specify a color key for transparency; I'll get back to transparency in a little bit. For now, we have a texture and we've loaded an image into it. Next, we have to tell the device to use it. Add the following line to the beginning of Render2D:

g_pd3dDevice->SetTexture(0, g_pTexture);

This tells the device to render the triangles using the texture. One important note: I'm not adding error checking, to keep things simple. You should check that the texture actually loaded before attempting to use it (a sketch of such a check follows below). One common pitfall is that on a lot of hardware, texture dimensions must be powers of 2, such as 64x64, 128x512, etc. This constraint is no longer true on the latest nVidia hardware, but to be safe, use powers of 2. This limitation bothers a lot of people, so I'll tell you how to work around it in a moment. For now, compile and run and you should see your image mapped onto the rectangle.
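Here is a minimal sketch of that check, wrapped around the load call in PostInitialize. The filename panel.bmp is just a hypothetical placeholder for whatever image you chose above.

HRESULT hr = D3DXCreateTextureFromFileEx(g_pd3dDevice, "panel.bmp", 0, 0, 0, 0,
                                         D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                                         D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL,
                                         NULL, &g_pTexture);
if (FAILED(hr))
{
    // The texture never loaded - bail out here rather than handing a
    // NULL pointer to SetTexture later in Render2D.
    g_pTexture = NULL;
    return;
}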

Texture Coordinates

Note that the image is stretched or squashed to fit the rectangle. You can control that with the texture coordinates. For example, if you change the lines where u = 1.0 to u = 0.5, only half of the texture is used and the remaining part will not be squashed. So, if you had a 640x480 image that you wanted to place on a 640x480 window, you could place the 640x480 image in a 1024x512 texture and specify 0.625 and 0.9375 as the maximum texture coordinates (640/1024 and 480/512; see the sketch below). You could use the remaining parts of the texture to hold other sub-images that are mapped to other panels (through the appropriate texture coordinates).

In general, you want to optimize the way textures are used, because they eat up graphics memory and/or move across the bus. This may seem like a lot of work for a blit, but it has a lot to do with the way new cards are optimized for 3D (like it or not). Besides, putting some thought into how you are moving large chunks of memory around the system is never a bad idea. But I'll get off my soapbox.
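The sketch below fills in the texture coordinates for that 640x480-in-1024x512 case. It assumes the PANELVERTEX structure names its texture coordinates u and v and that the four vertices were filled in the order top-left, top-right, bottom-right, bottom-left, so match it to your own setup from the earlier sections.

float maxU = 640.0f / 1024.0f;   // 0.625
float maxV = 480.0f / 512.0f;    // 0.9375

PANELVERTEX* pVertices = NULL;
g_pVertices->Lock(0, 4 * sizeof(PANELVERTEX), (BYTE**)&pVertices, 0);
pVertices[0].u = 0.0f;  pVertices[0].v = 0.0f;   // top left
pVertices[1].u = maxU;  pVertices[1].v = 0.0f;   // top right
pVertices[2].u = maxU;  pVertices[2].v = maxV;   // bottom right
pVertices[3].u = 0.0f;  pVertices[3].v = maxV;   // bottom left
g_pVertices->Unlock();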

Let's see where we are so far. At one level, we've written a lot of code to blit a simple bitmap. But hopefully you can also see some of the benefits and the opportunities for tweaking. For instance, the texture coordinates automatically scale the image to the area we've defined with the geometry. There are lots of things this does for us, but consider the following: if we had set up our ortho matrix to use a percentage-based mapping, and we specified a panel as occupying the lower quarter of the screen (for a UI, let's say), and we specified a texture with the correct texture coordinates, then our UI would automagically be drawn correctly at any chosen window/screen size (a rough sketch of that mapping follows below). Not exactly cold fusion, but it's one of many examples. Now that we have the texture working well, we have to get back to talking about transparency.
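To make that concrete, here is a sketch of the percentage-based mapping idea. This is not the projection set up earlier in the article; it simply maps x and y to the 0..1 range so panel positions become fractions of the window instead of pixels.

D3DXMATRIX matOrtho;
// Left = 0, right = 1, bottom = 0, top = 1: positions are now fractions of the window.
D3DXMatrixOrthoOffCenterLH(&matOrtho, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f);
g_pd3dDevice->SetTransform(D3DTS_PROJECTION, &matOrtho);

// A UI panel covering the lower quarter of the screen would then span
// x = 0..1 and y = 0..0.25, regardless of the actual window resolution.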



Next: Wrapping Up