Welcome back to part 2 of this mini-series. Hopefully you've read and learnt the information covered in the first part (thanks for all the compliments I received about the first article). This article starts where the last one left off - things will not be covered twice, so make sure you know what happened in the first article… It can be found here.

By the time you've read and learnt the things I am about to cover, you should be perfectly capable of creating a simple game/demo - which shows you how quickly you can get started in Direct3D. Having said this, don't expect to finish this article (or this series) and go on to write the next big 3D engine - it won't happen. I've had many emails from people who've only read the first couple of tutorials on the basics of D3D and want to get straight on with a "simple" Quake clone… try something like Pong/Tetris/Snake first.

This article is going to be quite steep - the things covered may well not come to you easily. If not, re-read the article until they do, or seek out other beginners' guides to 3D graphics and theory. Today we'll be covering:

1) Setting up the sample application to go full-3D.
2) Extending the last example to use basic 3D geometry.
3) Extending this further to use vertex buffers and index buffers.

The above 3 things would usually be covered by several articles, as they are deceptively big topics. Anyway, onwards and upwards:

Reconfiguring our Direct3D application

The sample at the end of the last article was very simplistic - not much use for anything really. Before we go into full-3D we'll need to add a few parameters and configure a few new things.

I also want to take the time to introduce full screen mode. This is the main display format used by games, where, funnily enough, your game occupies the entire screen. Full screen mode is much faster, and isn't held back by Windows (which is effectively suspended in the background). Full screen mode requires you to pick a resolution that the hardware/monitor combination can handle - open up the Windows display properties and see what settings you can set the resolution slider to; these will usually (but not always) be the display modes that Direct3D can use on your hardware. 800x600 and 1024x768 are examples of full screen modes. The important thing here is that the resolutions available will differ from one computer to another - 640x480, 800x600 and 1024x768 all tend to be standard resolutions, but there is no guarantee that they will be available (only 1 of my 2 computers supports 1024x768 display modes). To solve this problem we must use enumeration.

Dim tmpDispMode As D3DDISPLAYMODE '//used during the enumeration of avail. modes
Dim I As Long '//so we can loop through the avail. display modes

For I = 0 To D3D.GetAdapterModeCount(0) - 1 'primary adapter
  D3D.EnumAdapterModes 0, I, tmpDispMode
  Debug.Print tmpDispMode.Width & "x" & tmpDispMode.Height
Next I

The previous piece of code will output a list of all the display modes supported by your hardware to VB’s immediate (debug) window. The code will need to go at the start of the Initialise() function, but after the Dx and D3D objects have been initialised. The output of the above code, for my GeForce 256 + generic 15" monitor was:
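The exact list depends entirely on your hardware; a typical output looks something like this (these values are purely illustrative):

```
640x480
640x480
800x600
800x600
1024x768
1024x768
```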


So why are there two of each resolution? It's not a mistake; it's down to the format of the display mode. Anyone paying any attention to games will know that you can have 32 bit and 16 bit rendering (amongst various other formats) - the above list does not contain that data, but the first copy of each resolution will be in a 16 bit format, and the second set will be in a 32 bit format. This requires some discussion:
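Here is a representative selection of those formats, reproduced from the DirectX 8 type library (this is not the full enumeration):

```vb
D3DFMT_R8G8B8     '24 bit: 8 bits each for red, green and blue
D3DFMT_A8R8G8B8   '32 bit: 8 bits alpha + 8 bits per colour channel
D3DFMT_X8R8G8B8   '32 bit: 8 unused bits + 8 bits per colour channel
D3DFMT_R5G6B5     '16 bit: 5 bits red, 6 bits green, 5 bits blue
D3DFMT_X1R5G5B5   '16 bit: 1 unused bit + 5 bits per colour channel
D3DFMT_A1R5G5B5   '16 bit: 1 bit alpha + 5 bits per colour channel
D3DFMT_X4R4G4B4   '16 bit: 4 unused bits + 4 bits per colour channel
D3DFMT_A4R4G4B4   '16 bit: 4 bits alpha + 4 bits per colour channel
D3DFMT_DXT1       'compressed texture format (likewise DXT2..DXT5)
```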



Above is a selection of members from the enumeration type "CONST_D3DFORMAT" - which we’ll be using later. All of the above describe a format that the display mode should be in, such as 32 bit/16 bit - but it’s not as simple as saying 16 or 32 bit… you can work it out by counting the number of bits in the description:
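For example, the bits in two of the common formats break down like so:

```
D3DFMT_X8R8G8B8  =  X:8 + R:8 + G:8 + B:8  =  32 bits
D3DFMT_R5G6B5    =        R:5 + G:6 + B:5  =  16 bits
```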


Add up the 8s and you get 32 - which indicates that D3DFMT_X8R8G8B8 is a 32 bit mode. All of the ones above are either 16 or 32 bit formats (except the D3DFMT_DXT* ones). The next part to notice is the lettering, which indicates the channels. All of them have an RGB triplet - red, green and blue - you should already know that colours on a computer screen are made up of these 3 colours; even in Paintbrush you can see this in action. We also have an optional X or A channel. The X just means unused: the bits are allocated, but they won't have anything in them and won't be used for anything. The A channel is alpha, something that will be covered later on. If you don't need alpha blending/transparencies then you'll be okay using an X-prefixed format; if you need alpha but accidentally use an X-prefixed format, nothing will happen - or at least what you want to happen won't.

A word on accuracy: the more bits per channel, the more colours you can represent, which is why the 32 bit formats look substantially better than the 16 bit formats. A channel of n bits can represent 2^n values. An 8 bit channel can therefore represent 2^8 = 256 values, while a 4 bit channel can only represent 2^4 = 16. You may have noticed that there is an R5G5B5 format and an R5G6B5 format - this is down to the fact that our eyes are more sensitive to green light, so being able to represent more shades in the green channel is better. On a connected, but not particularly useful, note: the total number of colours supported by a display mode is again 2^n, where n is the total number of bits actually carrying colour - so an X8R8G8B8 mode gives 2^24 = 16,777,216 colours, not 2^32 (I tend not to include the A channel in my calculations either, but you can if you want). A 16 bit mode gives 2^16 = 65,536 colours, and a full 32 bits gives 2^32 = 4,294,967,296 - roughly 4.3 billion…
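You can check these figures for yourself in VB's immediate window:

```vb
'//values representable by n bits = 2 ^ n
? 2 ^ 8    ' 256        - one 8 bit channel
? 2 ^ 16   ' 65536      - a 16 bit display mode
? 2 ^ 32   ' 4294967296 - a full 32 bits
```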

Now that (hopefully) we have an understanding of display formats, we can go about setting one up. I've written a small function that will check for support of a specified display mode - it works fine for this tutorial, but you'll probably need a more robust function for a proper project:

Private Function CheckDisplayMode(Width As Long, Height As Long, _
                                  Depth As Long) As CONST_D3DFORMAT
'//0. any variables
  Dim I As Long

'//1. Scan through
  For I = 0 To D3D.GetAdapterModeCount(0) - 1
    D3D.EnumAdapterModes 0, I, DispMode
    If DispMode.Width = Width Then
      If DispMode.Height = Height Then
        If DispMode.Format = D3DFMT_R5G6B5 Or DispMode.Format = D3DFMT_X1R5G5B5 _
           Or DispMode.Format = D3DFMT_X4R4G4B4 Then
          '16 bit mode
          If Depth = 16 Then CheckDisplayMode = DispMode.Format: Exit Function
        ElseIf DispMode.Format = D3DFMT_R8G8B8 Or DispMode.Format = D3DFMT_X8R8G8B8 Then
          '24/32 bit mode
          If Depth = 32 Then CheckDisplayMode = DispMode.Format: Exit Function
        End If
      End If
    End If
  Next I
CheckDisplayMode = D3DFMT_UNKNOWN
End Function

Fairly simple really. It assumes that the D3D object has been created and will use the global copy of the object - if it hasn't been created this piece of code will error out. The only slightly complicated part is the format selection: it classifies formats as either 16 bit or 32 bit, and whilst the 32 bit selection technically includes a 24 bit format, colour-wise it's identical. I've not checked against formats with alpha components - the display mode won't use an alpha channel; that's for texturing. You can set up a display mode with an alpha channel, but there's little point, and on top of that the formats listed here are more likely to be supported than their equivalents with an alpha channel. We can now use this function to help select the sample's display mode. We'll hard code this part rather than spend time building in a user interface to let the user select the resolution - that's not hard to do, and you can probably work it out as you need it.

DispMode.Format = CheckDisplayMode(640, 480, 32)
If DispMode.Format > D3DFMT_UNKNOWN Then
  '640x480x32 is supported
  DispMode.Width = 640: DispMode.Height = 480
Else
  DispMode.Format = CheckDisplayMode(640, 480, 16)
  If DispMode.Format > D3DFMT_UNKNOWN Then
    '640x480x16 is supported
    DispMode.Width = 640: DispMode.Height = 480
  Else
    'hmm, neither are supported. oh well...
    MsgBox "Your hardware does not appear to support" _
               & " 640x480 display modes in either 16 bit or 32 bit modes. Exiting" _
              , vbInformation, "Error"
    Unload Me
  End If
End If

Not too complicated really. We could check for higher resolutions, but for this sample it isn't really necessary. By the time this little segment of code has executed we will have a properly initialised, valid D3DDISPLAYMODE structure that we can use when setting up our device. If no supported mode was found, the program exits.

We aren’t done yet though, we need to fill out the D3DPRESENT_PARAMETERS structure differently from the windowed mode example. This is mostly down to the device creation requiring more data than in the last sample. The new configuration looks like this:

D3DWindow.BackBufferCount = 1 '//one backbuffer
D3DWindow.BackBufferFormat = DispMode.Format
D3DWindow.BackBufferWidth = DispMode.Width
D3DWindow.BackBufferHeight = DispMode.Height
D3DWindow.hDeviceWindow = frmMain.hWnd
D3DWindow.SwapEffect = D3DSWAPEFFECT_COPY_VSYNC '//discussed shortly
D3DWindow.AutoDepthStencilFormat = D3DFMT_D16
D3DWindow.EnableAutoDepthStencil = 1

Notice that we're copying the display mode format information into the structure at this point. You could avoid this by writing straight to this structure during enumeration, but for clarity I kept it separate.

The first section deals with configuring a backbuffer. Anyone who's done any work with DirectDraw/Direct3D in DirectX 7 will already know what one of these is. When Direct3D does the actual rendering of 3D geometry onto a 2D surface it does so in parts, usually as each piece of geometry is rendered. If we rendered straight to the screen then, in all but the highest frame rate situations, you would be able to see the screen being drawn piece by piece - even if it was going quite quickly you'd be able to pick up on strange artefacts appearing: a tree appearing and quickly being overwritten by the house in front of it, for example. To solve this problem we use a secondary buffer: the image is composed on this surface (identical in size to the screen), and then the whole contents of the backbuffer are presented to the screen in one quick operation (the pixels aren't actually copied; the addresses/pointers are switched around). This removes the possibility of any drawing artefacts appearing - or at least it should… We configure our backbuffer here using our display mode. We never actually configure the screen surface; D3D takes the measurements specified in the backbuffer members and uses those, which saves on simple data-mismatch errors.

The second part deals with configuring the depth buffer. This is another concept you'll need to grasp when dealing with 3D environments. In 3D we obviously have 3 dimensions, XYZ, while in 2D we only have X and Y; when we project our 3D scene onto our 2D screen we need to know what happens to this 3rd dimension. As things are converted into 2D in any given order, we need to check whether the part we are currently drawing is in front of, or behind, what is already in the frame buffer (the screen/backbuffer). In more technical language: when we draw a pixel in 2D we check its depth coordinate against the value stored for the same location in the depth buffer (which holds the depth of the pixel currently in the frame buffer). If the new depth is greater (the new pixel is behind the old one) it won't be drawn; if the depth is less (the new pixel is in front of the old one) it will be drawn over the top of the existing pixel. The depth buffer is therefore a surface identical in dimensions to the screen and backbuffer. The only involvement we'll ever have with it is telling D3D we want to use it, turning it on, and clearing it before each frame (to remove the depth information from the previous frame). The only important part at this stage is specifying what format the depth buffer will be in - similar to the way we specified the screen/backbuffer format. The more bits allocated to each pixel, the more accurate the depth testing will be; 16 bits per pixel is the typical standard, though newer hardware allows 24 bit or 32 bit depth buffers (usually in combination with a stencil buffer). We can enumerate which depth buffer formats are available using the following code:

'In order of preference: 32, 24, 16
If D3D.CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, DispMode.Format, _
       D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, D3DFMT_D32) = D3D_OK Then
  '//Enable a pure 32 bit depth buffer
  D3DWindow.AutoDepthStencilFormat = D3DFMT_D32
  D3DWindow.EnableAutoDepthStencil = 1
  Debug.Print "32 bit Depth buffer selected"

ElseIf D3D.CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, DispMode.Format, _
       D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, D3DFMT_D24X8) = D3D_OK Then
  '//Enable a 24 bit depth buffer
  D3DWindow.AutoDepthStencilFormat = D3DFMT_D24X8
  D3DWindow.EnableAutoDepthStencil = 1
  Debug.Print "24 bit Depth buffer selected"

ElseIf D3D.CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, DispMode.Format, _
       D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, D3DFMT_D16) = D3D_OK Then
  '//Enable a 16 bit depth buffer
  D3DWindow.AutoDepthStencilFormat = D3DFMT_D16
  D3DWindow.EnableAutoDepthStencil = 1
  Debug.Print "16 bit Depth buffer selected"

Else '//hmm, no depth buffers available... don't use one then :)
  D3DWindow.EnableAutoDepthStencil = 0
  Debug.Print "no Depth buffer selected"

End If

For clarity, though, this sample just uses a 16 bit depth buffer; in about 99% of cases the hardware will support one, and it is perfectly acceptable for most 3D environments. If one is not available in hardware you may find that Direct3D will emulate one - which is much slower, but it will still function.

The last two things to discuss are the .hDeviceWindow member and the .SwapEffect member. Neither is complicated. The first, hDeviceWindow, needs to be the hWnd property of the form that you are using - this is so that Direct3D can keep track of your application/window: if the form is closed, minimised or moved, Direct3D can find out. The SwapEffect member indicates how Direct3D should draw to the screen, and there are two main choices here: V-Sync or not. Hopefully you are aware of the monitor's refresh rate and how it works. If you use V-Sync then Direct3D will wait until a vertical refresh event occurs before drawing the next frame, so if you only have a 70Hz monitor the maximum frame rate will be around 70fps (not always exactly). Using V-Sync usually gives better quality than without, as the copy happens in step with the monitor and no artefacts should appear. Disabling V-Sync forces Direct3D to copy the frame buffer to the screen as soon as it's finished rendering; using this method you can achieve considerably higher frame rates (if the rate was being locked to the refresh rate, that is…) at the cost of visual artefacts. Whilst it doesn't affect some hardware (mine is fine), on others you can get a visible tear line across the screen - one frame above and another (usually the previous) below - and if this happens it will look pretty ugly! Also note that the drivers have the ability to override these settings: my drivers are set up to ignore V-Sync, and I can't programmatically force it to use V-Sync; vice-versa when I enable V-Sync in the driver properties. The available choices for the SwapEffect member are:
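These are the members of the CONST_D3DSWAPEFFECT enumeration, as found in the DirectX 8 type library (the comments are summaries, not part of the library):

```vb
D3DSWAPEFFECT_COPY        'single backbuffer, ignores V-Sync where possible
D3DSWAPEFFECT_COPY_VSYNC  'single backbuffer, locked to the monitor's refresh rate
D3DSWAPEFFECT_DISCARD     'lets the driver choose the most efficient method
D3DSWAPEFFECT_FLIP        'multiple backbuffers, swapped by pointer flipping
D3DSWAPEFFECT_FORCE_DWORD 'not a real option; forces the enum to 32 bits
```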


The _COPY, _COPY_VSYNC and _FLIP members are the ones that you should be using. The _COPY member is for a single backbuffer (like ours) and ignores V-Sync where possible; the _COPY_VSYNC option is the same, but will lock drawing to the monitor's refresh rate. The _FLIP member is the same as the _COPY member except that it is for multiple backbuffers (usually when you have 2).

Now we've covered the redesigned initialisation process - at last… There are two final lines to deal with. The first should be added after the device has been created:

D3DDevice.SetRenderState D3DRS_ZENABLE, 1

That will enable our depth buffer (aka Z-buffer) for rendering. The second line goes in place of the current Clear call in the Render() function:
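The updated call simply adds the D3DCLEAR_ZBUFFER flag to the existing D3DCLEAR_TARGET flag (the colour value below is just a placeholder - keep whichever clear colour your Render() function already uses):

```vb
'//clear the frame buffer AND the depth buffer in one call
D3DDevice.Clear 0, ByVal 0, D3DCLEAR_TARGET Or D3DCLEAR_ZBUFFER, &HCCCCFF, 1#, 0
```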


All that's changed there is that it's clearing the depth buffer as well as the frame buffer. If you leave this flag out you'll start getting some strange artefacts appearing (you can use this to your advantage), where the new scene is drawn according to the depth information of the previous scene…

If you now run the program (F5 or Ctrl+F5 in the IDE) you should be greeted with a 640x480 light blue screen and a small triangle in the top left corner - identical to the sample in the last example - but in fullscreen!

Next : Getting Started With Basic 3D Geometry