Secrets of Simple Image Filtering

2. Filtering an input image

Even though the only limitation on filter dimensions is that the kernel must have an odd number of samples, problems arise when we use a filter kernel with d > 1 that contains more than one non-zero value. That is, the only filter that will not introduce artefacts into the output signal is the impulse filter, which has the following kernel:

Fig. 2.1: a simple two-dimensional impulse where n ∈ ℕ. If n = 1 the impulse response becomes a unity impulse, i.e. it will not change the input signal

In the case of a different PSF we will end up with strange artefacts near the edges of the filtered image (the brightness will not be correct). The breadth of these un-normalized regions depends on the size of the filter kernel. Although neither the sample code nor the sample application provided with this article addresses this issue, the solution is relatively simple. For every output sample that lies close enough to the edge of the image that part of the filter kernel is clipped outside the image boundaries, count the number of impulse response samples that remain inside the image and divide the output sample by the fraction of the kernel that was actually used. That is, if only 4 samples of a 3x3 PSF contribute to an output sample, divide that sample by the overlapping fraction 4/9.
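As an illustration (this is not the article's actual source code), the edge renormalization described above might be sketched in Python like this. The function applies the kernel unflipped, which is fine for the symmetric kernels used here:

```python
def convolve2d(image, kernel):
    # Convolve a greyscale image (list of rows of floats) with an
    # odd-sized kernel, renormalizing output samples near the edges
    # by the fraction of the kernel that overlaps the image.
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ry, rx = kh // 2, kw // 2              # kernel radii
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            used = 0                       # kernel samples not clipped away
            for j in range(-ry, ry + 1):
                for i in range(-rx, rx + 1):
                    yy, xx = y + j, x + i
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx] * kernel[j + ry][i + rx]
                        used += 1
            # e.g. only 4 of 9 samples used at a corner -> divide by 4/9
            out[y][x] = acc / (used / float(kh * kw))
    return out
```

With a normalized blur kernel, this renormalization keeps a flat image flat all the way to the corners, which is exactly the brightness artefact being corrected.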

Some filters produce an output that may require special handling after filtering. For instance, the emboss filter will surely produce a lot of negative values, which means it is wise to add some constant to the output samples. After some experimenting, you might find that 128 seems like a nice number. In the code sample provided with this article (implementing the edge detection filter) we're multiplying the resultant sample by 9. Why nine? The answer is: pure experimentation!

This solves the problem of clipping negative numbers, but yet another (although somewhat insignificant) problem remains. Since we're dealing with floating point numbers, our sample values aren't always nicely spaced between 0 and 255; some of them may be larger than 255. For this we've introduced a ceiling boundary: anything above 255 is simply clipped to 255.
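Both post-filtering steps, the constant offset and the ceiling, can be sketched as one small helper (a sketch, not the article's code; the 128 offset is the experimentally chosen value mentioned above):

```python
def postprocess(sample, offset=0.0):
    # Add a brightness offset (e.g. 128 for emboss filters), then
    # clamp the result into the valid byte range [0, 255].
    v = sample + offset
    return max(0, min(255, int(round(v))))
```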

Equipped with this knowledge, we're now almost ready to produce a working filtering program. Still, there remains the question of how we come up with an appropriate filter design.

3. Designing a filter

This may seem surprising, but practical filter design is roughly 40% logic and analysis and 60% experimentation. We will now look at several commonly used filters and at the changes introduced by varying some of their parameters. To demonstrate the output of the following filters, we will apply them to this source image:

Img 3.1: the original reference image used henceforth

Note: since the sample program accompanying this article only handles greyscale images (which are converted into 24-bit DIBs upon export), the reference images produced in Paint Shop Pro 4.12 were first converted to greyscale and then filtered.
And yet another note: even though the images produced by Paint Shop Pro are very similar to the ones produced by our software, notice the discrepancies at the edges of the images we've presented; this "artefact" has been discussed above. Just know that it is not a bug!

Edge detection

The single most important feature one can extract from an image is its edge data. Edge information plays a role analogous to the frequency spectrum of an audio signal: one can use it to do a lot of very neat tricks with the image. For instance, if the edges are fuzzy and hard to make out, a modified version of edge detection, edge enhancement, can be used to reinforce the contours. Edge detection is also central to image recognition: if we have a reference image and some target image that we want to locate (or verify exists) in the reference image, we first find the edges in both images and compare the edge data instead of some weird interpolation that uses raw pixel information. Even cel shading, a neat effect used in games, can be accomplished through edge detection.

Once again, the kernel of a traditional edge detection filter looks like this (this has been covered above):

Fig. 3.1: the traditional filter kernel of an edge detection filter where c = -1/8
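Since the figure image is missing from this copy, here is the kernel spelled out as data. The layout (a central 1 surrounded by the constant c = -1/8) is inferred from the caption, so treat it as an assumption:

```python
# Traditional 3x3 edge detection kernel: central 1, all eight
# neighbours weighted c = -1/8, so the weights sum to zero and
# flat (edge-free) regions map to black.
c = -1.0 / 8.0
edge_kernel = [
    [c, c,   c],
    [c, 1.0, c],
    [c, c,   c],
]
```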

Here are the output images:

Our application | Paint Shop Pro

Img 3.2: comparing our output image to the one produced by Paint Shop Pro (first made greyscale, then applied with Image->Edge Filters->Find Edges). Paint Shop Pro also has Find Horizontal/Vertical Edges, which will be left up to you to design

Edge enhancement

If we're stuck with a blurry image, we can use a parameterized edge detection filter to make the edges stand out more. To do this, we set the constant c in the edge detection filter to c = -k/8. Here are the results:
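The parameterization is a one-line change to the edge detection kernel (again a sketch, assuming the same central-1 layout as above):

```python
def enhancement_kernel(k):
    # Edge enhancement: same shape as the edge detection kernel,
    # but with c = -k/8. k = 1 reduces to plain edge detection;
    # larger k makes the contours stand out more strongly.
    c = -k / 8.0
    return [
        [c, c,   c],
        [c, 1.0, c],
        [c, c,   c],
    ]
```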

Our application | Paint Shop Pro

Img 3.3: the edge enhancement filter does nothing more than make the edges in the image more prevalent. We used k = 3 in this case

Sharpen and unsharpen (blur)

Sharpening is pretty much the same as edge enhancement: it accentuates the edges, while unsharpening, or blurring, is its immediate inverse. We won't compare the output images for these filters because we're not using the same constants as Paint Shop Pro (our filter sharpens more in one pass). Here are the filter kernels used:

Fig 3.1: the sharpening filter kernel. We're using c=-1 in the sample software

Fig 3.2: the unsharpening filter kernel
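The two kernel figures are missing from this copy, so the following is an assumption: one common pair of forms consistent with the text (a sharpen kernel whose weights sum to 1 so overall brightness is preserved, and a box filter for blurring):

```python
def sharpen_kernel(c=-1.0):
    # Off-centre weight c; the centre weight is chosen so that all
    # nine weights sum to 1, preserving brightness while the negative
    # surround accentuates edges. c = -1 gives a centre weight of 9.
    return [
        [c, c,           c],
        [c, 1.0 - 8.0*c, c],
        [c, c,           c],
    ]

# Unsharpening (blur): a simple 3x3 box filter, each weight 1/9,
# which averages every pixel with its neighbours.
blur_kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
```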

Emboss and engrave

Embossing is probably one of the 'coolest' effects that can be applied to an image. Its close relative, bumpmapping, is one of those few texture effects that you can often toggle on and off in a game. Bumpmapping relies on the presence of a "bump image", or bump map, which is essentially nothing more than a simple embossed image that can be created using your favourite image editing program. By combining this bump image with the texture map you create the final bumpmapped image. Using what we've learnt so far, it isn't difficult to construct a simple embossing filter that creates the bump image for us without the need for fancy software. Here is a set of filter kernels used for embossing an image:

Fig 3.3: a 3x3 emboss filter kernel

Fig 3.4: a 5x5 emboss filter kernel

Fig 3.5: a 7x7 emboss filter kernel
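The three kernel figures are missing from this copy, so here is one common family of emboss kernels as an assumption: negative weights along the upper-left diagonal and a 1 at the centre, which matches the later description of "negative values around the central 1":

```python
def emboss_kernel(size):
    # Build an emboss kernel of odd size (3, 5, 7, ...): -1s along
    # the upper-left diagonal, a 1 at the centre, zeros elsewhere.
    # Larger kernels reach further, giving a deeper embossed look.
    r = size // 2
    k = [[0.0] * size for _ in range(size)]
    k[r][r] = 1.0
    for i in range(r):
        k[i][i] = -1.0
    return k
```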

The following image listing shows how these filters perform and how the kernel size can be varied to achieve a specific effect.

3x3 filter kernel | 5x5 filter kernel
7x7 filter kernel | Paint Shop Pro

Img 3.4: using different filter kernel sizes creates different depths in the embossed image. Can you hazard a guess which kernel size Paint Shop Pro might be using?

"Fine," you say, but what's up with the above images? Why is the intrinsic brightness of the images so different? The reason is simple: we're using a constant correction value of 128 (as reasoned in section 2) for all three filter sizes (after filtering each pixel, we add 128 to its value to compensate for the negative output sample values). The fewer negative numbers there are in the filter, the more positive the output sample values will be, meaning that smaller filters produce inherently brighter images and should therefore use a smaller correction value. It will be left up to you to figure out how to obtain the correction values dynamically.
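As a hint (one possible approach, not the article's answer to the exercise), the correction can be derived from the kernel itself: the most negative and most positive outputs a kernel can produce for byte-valued input follow from the sums of its negative and positive weights, and the offset can centre that range on [0, 255]:

```python
def correction_offset(kernel):
    # Range of outputs this kernel can produce for inputs in [0, 255]:
    neg = sum(w for row in kernel for w in row if w < 0)   # <= 0
    pos = sum(w for row in kernel for w in row if w > 0)   # >= 0
    lo, hi = 255.0 * neg, 255.0 * pos
    # Shift so the attainable output range is centred on [0, 255].
    return 127.5 - (lo + hi) / 2.0
```

For a 3x3 emboss kernel with a single -1 and a central 1 this yields about 128, the constant used above, while kernels with more negative weights get a proportionally larger correction.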

Now from emboss to engrave. These two seem like opposites while, in fact, we can get from one to the other with a simple visual "hoax". Try turning any of the embossed images upside down and having a look at it. Most people will now see the image as "dented", having depression areas rather than a hilly look. This can be thought of as changing the direction of the incident light: if we rotate the negative values around the central 1, at some point the image simply goes from "embossed" to "engraved".
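Rotating the negative values around the central 1 is just a 180-degree rotation of the kernel, which can be sketched in a couple of lines (a sketch, using the same list-of-rows kernel representation as above):

```python
def engrave_kernel(emboss):
    # Rotate the kernel 180 degrees around its centre: reverse the
    # row order, then reverse each row. This flips the apparent
    # direction of the incident light, turning emboss into engrave.
    return [row[::-1] for row in emboss[::-1]]
```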

There are other filters that can be implemented using this simple method (convolution), but they won't be discussed in this article. More complex filters, such as artistic filters, cannot be implemented that easily and often require huge amounts of research and experimentation to get to work right. With the filters described above, however, one could do a whole world of things. There are a few points to keep in mind, though – these will be discussed in the next section.

Do's and don'ts
