Secrets of Simple Image Filtering


4: Do's and don'ts and a few clarifications

Speed issues

At the current time, implementing real-time filtering is pretty much out of the question. Once the kernel size is pushed upwards of a few samples, speed issues really start to kick in. For instance, convolving a 1024x1024 image with a 64x64 filter kernel requires 1024x1024x64x64 (about 4.3 billion, a 10-digit number) floating point multiplications and additions. FFT (Fast Fourier Transform) convolution can ease this situation, but it won't be discussed here and it comes with a few caveats of its own. The moral is to always be wary when an image has to be filtered: even on today's high-end systems a large image can take more than several seconds to process with even a small filter kernel.
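To make that cost concrete, here is a minimal sketch of naive convolution. It is not the article's sample code: the single-channel row-major float image layout and the clamping of out-of-range samples to the border are assumptions made for illustration. The four nested loops are exactly what produces the width x height x kernel-width x kernel-height operation count mentioned above.

#include <vector>
#include <algorithm>

// Naive 2D convolution of a single-channel image (row-major floats).
// Cost: width * height * kw * kh multiply-adds.
std::vector<float> convolve(const std::vector<float>& image, int width, int height,
                            const std::vector<float>& kernel, int kw, int kh)
{
    std::vector<float> out(image.size(), 0.0f);
    const int hw = kw / 2, hh = kh / 2;   // half-sizes, kernel centred on the pixel

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.0f;
            for (int ky = 0; ky < kh; ++ky)
                for (int kx = 0; kx < kw; ++kx)
                {
                    // Clamp samples that fall outside the image border.
                    int sx = std::min(std::max(x + kx - hw, 0), width  - 1);
                    int sy = std::min(std::max(y + ky - hh, 0), height - 1);
                    sum += image[sy * width + sx] * kernel[ky * kw + kx];
                }
            out[y * width + x] = sum;
        }
    return out;
}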

It's all about experimentation

The filter kernels discussed in section 3 resulted largely from experimentation and were implemented as static filters, meaning no mathematical algorithm was developed to compute them at arbitrary sizes. Coming up with such algorithms can sometimes be rather challenging.

The hidden equilibrium

It isn't that obvious, but a certain kind of "numeric balance" hides itself in filters. If the balance is shifted, the filter starts to distort the image instead of simply processing it. That is, if the values in a filter kernel don't add up to 1, the filter is "numerically unbalanced" and will brighten or darken the output overall. For instance, figure 3.1 presents a neatly balanced filter, while the kernel presented in figure 3.5 is an unbalanced one. This knowledge serves little practical purpose, but it can be used to assess the "rate of distortion" of a filter.
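As a rough illustration of that balance, the sketch below (an assumption on my part, not part of the accompanying source) sums a kernel's entries and rescales them so they add up to 1. Kernels whose entries sum to (nearly) zero are left alone, since they cannot be rescaled this way.

#include <vector>
#include <cmath>

// Rescale a kernel so its entries sum to 1, restoring the "numeric balance"
// described above. Kernels that sum to ~0 are deliberately left untouched.
void normalizeKernel(std::vector<float>& kernel)
{
    float sum = 0.0f;
    for (float v : kernel)
        sum += v;

    if (std::fabs(sum) < 1e-6f || std::fabs(sum - 1.0f) < 1e-6f)
        return;   // nothing to do: either unnormalizable or already balanced

    for (float& v : kernel)
        v /= sum; // kernel entries now add up to 1
}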

What is the 'normal' filter size?

This is a fair question that can be answered in the following way: for most applications the very primitive 3x3 filter kernel is more than enough; larger filters become necessary when the output sample needs to be affected by a whole area of surrounding pixels. For instance, increasing the kernel size of the emboss/engrave filter produces a more pronounced emboss/engrave effect. Similarly, enlarging a median filter (used for removing small artefacts such as speckles from the input image, and not done by convolution; see the sketch below) yields an increasingly blurred but smoother output image.
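A rough sketch of such a median filter follows. The single-channel image layout, the border clamping and the function name are illustrative assumptions rather than the article's implementation; the key point is that each pixel is replaced by the median of its neighbourhood, with no convolution involved.

#include <vector>
#include <algorithm>

// Median filter over a (2*radius+1) x (2*radius+1) neighbourhood of a
// single-channel 8-bit image; good at removing speckle-type artefacts.
std::vector<unsigned char> medianFilter(const std::vector<unsigned char>& image,
                                        int width, int height, int radius)
{
    std::vector<unsigned char> out(image.size());
    std::vector<unsigned char> window;

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            window.clear();
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    // Clamp neighbourhood samples to the image border.
                    int sx = std::min(std::max(x + dx, 0), width  - 1);
                    int sy = std::min(std::max(y + dy, 0), height - 1);
                    window.push_back(image[sy * width + sx]);
                }
            // The middle element of the partially sorted window is the median.
            std::nth_element(window.begin(),
                             window.begin() + window.size() / 2, window.end());
            out[y * width + x] = window[window.size() / 2];
        }
    return out;
}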

5: Conclusion

The use of filters in everyday life is largely hidden. Nevertheless, we couldn't exist if it weren't for filters: they are a product of nature (our eyes and ears silently use them for audio and visual object recognition and for feature enhancement, such as identifying objects in low light, along with a ton of other purposes most people aren't even aware of) that has been successfully incorporated into technology (one can find numerous filters in the computer in front of you, in any electronic music instrument, and so on). Image and other filtering techniques can be dauntingly complex; the mathematical theory behind them extends much further than the simple operation of convolution. This article has been a taste of the easy cream coating on top of the hardcore DSP techniques used for optimally fast and effective image manipulation.

Addendum: The accompanying software and source code example

All of the filters discussed here, and a few others, are implemented in the sample application supplied with this article. There are a few things to keep in mind when filtering images with it or with the simple code sample provided. Input images can only be uncompressed 24 bit bitmaps (either greyscale or color), and the output images will always be 24 bit bitmaps converted to greyscale (all color components of a pixel are given the same value). The source code sample is fully documented and implemented as a command line application; see the accompanying edge_detect.txt for instructions on how to use it. The source code was written in Borland C++ 5.02 and will most likely not compile on non-Borland compilers without a few minor adjustments on your part. Very simple modifications should be enough to make the source sample apply a different filter to the input image.
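For reference, the greyscale conversion described above might look roughly like the sketch below. The plain average of the three components is an assumption on my part (a luminance-weighted average would satisfy the same description), and the function name is illustrative, not taken from the sample code.

// Give all three colour components of each 24-bit pixel the same value,
// turning the image into greyscale stored in a colour bitmap.
void toGreyscale(unsigned char* pixels, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i)
    {
        unsigned char* p = pixels + i * 3;                       // 24-bit pixel: B, G, R
        unsigned char grey = (unsigned char)((p[0] + p[1] + p[2]) / 3);
        p[0] = p[1] = p[2] = grey;                               // same value in every component
    }
}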

I hope you've found this article to be of some use – if you noticed a mistake or have any questions/comments you'd like to share, contact me at christopheratkinson@hotmail.co.uk.



