Contents
Creating a DirectDraw Palette
Pixel Formats
Locking Surfaces
Plotting Pixels
Notes on Speed
Fading Out
Basic Transparency

The Series
Beginning Windows Programming
Using Resources in Win32 Programs
Tracking Your Window/Using GDI
Introduction to DirectX
Palettes and Pixels in DirectDraw
Bitmapped Graphics in DirectDraw
Developing the Game Structure
Basic Tile Engines
Adding Characters
Tips and Tricks

Pixel Formats

As I said earlier, when you're writing a pixel into memory in a palettized mode, you write one byte at a time, and each byte represents an index into the color lookup table. In RGB modes, however, you write the actual color descriptors right into memory, and you need more than one byte for each color. The size of the memory write is equivalent to the color depth; that is, for 16-bit color, you write two bytes (16 bits) for each pixel, and so on. Let's start out at the top, because it's easiest to understand. 32-bit color uses a pixel format like this, where each letter is one bit:

AAAA AAAA RRRR RRRR GGGG GGGG BBBB BBBB

The As are for "alpha," which is a value representing transparency. Those are for Direct3D though; like I said, DirectDraw doesn't support alpha blending. So when you're creating a 32-bit color for DirectDraw, just set the high byte to 0. The next eight bits are the intensity of red, the eight following that are for green, and the low byte is for blue.

A pixel in 32-bit color needs to be 32 bits in size, and so the variable type we use to hold one is a UINT, which is an unsigned integer. Usually I use macros to convert RGB data into the correct pixel format, so let me show you one here. Hopefully if you're a little confused at this point, this will clear things up a bit:

#define RGB_32BIT(r, g, b)  ((r << 16) | (g << 8) | (b))

As you can see, this macro creates a pixel value by shifting the bytes representing red, green, and blue to their appropriate positions. Is it starting to make sense? To create a 32-bit pixel, you can just call this macro. Since red, green, and blue have eight bits each, they can range from 0 to 255. To create a white pixel, you would do this:

UINT white_pixel = RGB_32BIT(255, 255, 255);

24-bit color is just about the same. As a matter of fact, it is the same, except without the alpha information. The pixel format looks like this:

RRRR RRRR GGGG GGGG BBBB BBBB

So red, green, and blue still have eight bits each. This means that 24-bit color and 32-bit color actually have the same number of colors available to them, but 32-bit color just has some added information for transparency. So if you don't need the extra info, 24-bit is better than 32-bit, right? Well, not exactly. It's actually kind of a pain to deal with, because there's no data type that's 24 bits. So to write a pixel, instead of just writing in one value, you have to write red, green, and blue each individually. Working in 32-bit color is probably faster on most machines, even though it requires more memory. In fact, many video cards don't support 24-bit color at all, because having each pixel take up three bytes is just too inconvenient.
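
If you do end up working in a 24-bit mode, writing a pixel comes down to stepping to the right offset and storing the three color bytes one at a time. Here's a rough sketch of what that looks like; the function and parameter names are just made up for illustration, and you won't actually have a pointer to surface memory (or its pitch) until we get to locking surfaces a little later:

// A sketch of plotting a single 24-bit pixel. surface_ptr and pitch are
// hypothetical: they come from locking the surface, which is covered later.
void Plot24BitPixel(BYTE* surface_ptr, LONG pitch, int x, int y,
                    BYTE red, BYTE green, BYTE blue)
{
    BYTE* pixel = surface_ptr + (y * pitch) + (x * 3);  // 3 bytes per pixel
    pixel[0] = blue;    // lowest byte in memory is blue
    pixel[1] = green;
    pixel[2] = red;     // highest byte is red
}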

Now, 16-bit color is a bit tricky, because not every video card uses the same pixel format! There are two formats supported. One of them, which is by far more common, has five bits for red, six bits for green, and five bits for blue. The other format has five bits for each, and the high bit is unused. This is used mostly on older video cards. So the two formats look like this:

565 format: RRRR RGGG GGGB BBBB
555 format: 0RRR RRGG GGGB BBBB

When you're working in a 16-bit color depth, you'll need to determine whether the video card uses 565 or 555 format, and then apply the appropriate technique. It's kind of a pain, but there's no way around it if you're going to use 16-bit color. Since there are two different formats, you'd write two separate macros:

#define RGB_16BIT565(r, g, b)  ((r << 11) | (g << 5) | (b))
#define RGB_16BIT555(r, g, b)  ((r << 10) | (g << 5) | (b))

In the case of 565 format, red and blue can each range from 0 to 31, and green ranges from 0 to 63. In 555 format, all three components range from 0 to 31. So setting a pixel to white in each mode would look like this:

USHORT white_pixel_565 = RGB_16BIT565(31, 63, 31);
USHORT white_pixel_555 = RGB_16BIT555(31, 31, 31);

The USHORT data type is an unsigned short integer, which is 16 bits long. This whole business of having two pixel formats makes things a bit confusing, but when we actually get into putting a game together, you'll see that it's not as bad as it seems at first. By the way, sometimes 555 format is referred to as 15-bit color. So if I call it that sometime later on, you'll know what I'm talking about instead of thinking I made a typo. :)
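
One common way to cope with the two formats is to detect which one the card uses once at startup (shown below) and then wrap the two macros in a small helper. This is just a sketch, using a hypothetical bFormat565 flag and taking color components in the familiar 0-255 range:

// A sketch of packing a 16-bit pixel either way, assuming a hypothetical
// global flag bFormat565 that gets set by the detection code shown below.
// r, g, and b are 0-255 and get scaled down to fit their bit fields.
USHORT Build16BitPixel(int r, int g, int b)
{
    if (bFormat565)
        return RGB_16BIT565(r >> 3, g >> 2, b >> 3);   // 5-6-5 bits
    else
        return RGB_16BIT555(r >> 3, g >> 3, b >> 3);   // 5-5-5 bits
}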

Here is probably a good place to show you exactly how to determine whether a machine is using the 555 or 565 format when you're running in 16-bit color. The easiest way to do it is to call the GetPixelFormat() method of the IDirectDrawSurface7 interface. Its prototype looks like this:

HRESULT GetPixelFormat(LPDDPIXELFORMAT lpDDPixelFormat);

The parameter is a pointer to a DDPIXELFORMAT structure. Just declare one, initialize it, and pass its address. The structure itself is huge, so I'm not going to list it here. Instead, I'll just tell you about three of its fields. The members in question are all of type DWORD, and they are dwRBitMask, dwGBitMask, and dwBBitMask. They are bit masks that you logically AND with a pixel value in order to extract the bits for red, green, or blue, respectively. You can also use them to determine what pixel format you are dealing with. If the video card uses 565, dwGBitMask will be 0x07E0. If it uses 555 format, dwGBitMask will be 0x03E0.
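
Putting it together, the check might look something like this. It's only a sketch; lpddsPrimary stands in for a surface pointer you've already created, and bFormat565 is the same hypothetical flag used above:

// A sketch of detecting 565 vs. 555; lpddsPrimary is a hypothetical
// LPDIRECTDRAWSURFACE7 pointing at a surface that already exists.
DDPIXELFORMAT ddpf;
ZeroMemory(&ddpf, sizeof(ddpf));
ddpf.dwSize = sizeof(DDPIXELFORMAT);        // DirectX structures want dwSize set

if (SUCCEEDED(lpddsPrimary->GetPixelFormat(&ddpf)))
{
    if (ddpf.dwGBitMask == 0x07E0)          // six bits of green: 565
        bFormat565 = TRUE;
    else if (ddpf.dwGBitMask == 0x03E0)     // five bits of green: 555
        bFormat565 = FALSE;
}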

Now that we've seen all the pixel formats you can encounter, we can get into actually showing graphics in DirectX. About time, wouldn't you say? Before we can manipulate the actual pixels on a surface, though, we need to lock the surface, or at least a part of it. Locking the surface will return a pointer to the memory the surface represents, so then we can do whatever we want with it.




Next : Locking Surfaces