Fast texture mapping

 Journal:   Dr. Dobb's Journal  Sept 1992 v17 n9 p141(7)
 -----------------------------------------------------------------------------
 Title:     Pooh and the space station. (fast texture mapping) (Graphics
            Programming) (Tutorial)
 Author:    Abrash, Michael.
 AttFile:    Program:  GP-SEP92.ASC  Source code listing.
             Program:  XSHARP21.ZIP  X-Sharp library for 3-D graphics.

 Abstract:  Texture mapping is accomplished by mapping an image into the
            surface of a polygon that has been transformed in the process of
            three-dimensional drawing.  Rapid texture mapping can be
            accomplished by quickly identifying the value for each pixel in
            the destination polygon.  The pixel values of the polygon are
            found by mapping each destination pixel in the transformed polygon
            back to its bitmapped image by using a reverse transformation.
            The backwards transformation is used in order to avoid gaps in the
            final image.  If the destination polygon is rotated in two
            dimensions, each transformed vertex should be mapped to the
            corresponding vertex in the bitmap.  The rapid texture mapping
            method uses a two-stage digital differential analyzer (DDA)
            approach.  An example of the texture mapping method using the
            likeness of Winnie the Pooh is presented.
 -----------------------------------------------------------------------------
 Descriptors..
 Topic:     Computer Graphics
            Program Development Techniques
            Tutorial
            Texture
            Color
            Bit-Mapped Graphics
            Methods.
 Feature:   illustration
            program
            chart.

 -----------------------------------------------------------------------------
 Full Text:

 So, here's where Winnie the Pooh lives:  in a space station orbiting Saturn.
 No, really; I have it straight from my daughter, and a six-year-old wouldn't
 make up something that important, would she?  One day she wondered aloud,
 "Where is the Hundred Acre Wood, exactly?" and before I could give one of
 those boring parental responses about how it was imaginary--but A.A.  Milne
 probably imagined it to be somewhere near London--my daughter announced that
 the Hundred Acre Wood was in a space station orbiting Saturn, and there you
 have it.

 As it turns out, that's a very good location for the Hundred Acre Wood,
 leading to many exciting adventures for Pooh and Piglet.  Consider the time
 they went down to the Jupiter gravity level (we're talking centrifugal force
 here; the station is spinning, of course) and nearly turned into pancakes of
 the Pooh and Piglet varieties, respectively.  Or the time they drifted out
 into the free-fall area at the core and had to be rescued by humans with
 wings strapped on (a tip of the hat to Robert Heinlein here).  Or the time
 they were caught up by the current in the river through the Wood and drifted
 for weeks around the circumference of the station, meeting many cultures and
 finding many adventures along the way.  (Yes, Riverworld; no one said the
 stories you tell your children need to be purely original, just interesting.)

 (If you think Pooh and Piglet in a space station is a tad peculiar, then I
 won't even mention Karla, the woman who invented agriculture, medicine,
 sanitation, reading and writing, peace, and just about everything else while
 traveling the length of the Americas with her mountain lion during the last
 Ice Age; or the Mars Cats and their trip in suspended animation to the Lesser
 Magellanic Cloud and beyond; or most assuredly Little Whale, the baby
 Universe Whale that is naughty enough to eat inhabited universes.  But I
 digress.)

 Anyway, I bring up Pooh and the space station because a great many people
 have asked me to discuss fast texture mapping.  Texture mapping is the
 process of mapping an image (in our case, a bitmap) onto the surface of a
 polygon that's been transformed in the process of 3-D drawing.  Up to this
 point, each polygon we've drawn in X-Sharp (the 3-D animation package we've
 been developing in this column) has been a single, solid color.  Recently, we
 added the ability to shade polygons according to lighting, but each polygon
 was still a single color.  Thus, in order to produce any sort of intricate
 design, a great many tiny polygons would have to be drawn.  That would be
 very slow, so we need another approach.  One such approach is texture
 mapping; that is, mapping the bitmap containing the desired image onto the
 pixels contained within the transformed polygon.  Done properly, this should
 make it possible to change X-Sharp's output from a bland collection of
 monocolor facets to a lively, detailed, and much more realistic scene.

 "What sort of scene?" you may well ask.  This is where Pooh and the space
 station come in.  When I sat down to think of a sample texture-mapping
 application, it occurred to me that the shaded ball we added to X-Sharp
 recently looked at least a bit like a spinning, spherical space station, and
 that the single unshaded, yellow polygon looked somewhat like a window in the
 space station, and it might be a nice example if someone were standing in the
 window ....

 The rest is history.

 Principles of Quick-and-Dirty Texture Mapping

 The key to our texture-mapping approach will be quickly determining what
 pixel value to draw for each pixel in the transformed destination polygon.
 These polygon pixel values will be determined by mapping each destination
 pixel in the transformed polygon back to the image bitmap, via a reverse
 transformation, and seeing what color resides at the corresponding location
 in the image bitmap, as shown in Figure 1.  It might seem more intuitive to
 map pixels the other way, from the image bitmap to the transformed polygon,
 but in fact it's crucial that the mapping proceed backward from the
 destination, in order to avoid gaps in the final image.  With the approach of
 finding the right value for each destination pixel in turn, via a backward
 mapping, there's no way we can miss any destination pixels.  On the other
 hand, with the forward-mapping method, some destination pixels may be skipped
 or double-drawn, because this is not necessarily a one-to-one or onto
 mapping.  Although we're not going to take advantage of it now, mapping back
 to the source makes it possible to average several neighboring image pixels
 together to calculate the value for each destination pixel; that is, to
 antialias the image.  This can greatly improve texture quality, although it
 is slower.
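
 In outline, the backward-mapping loop looks something like the following
 sketch.  This is not the X-Sharp code (that's in Listing Two); it covers only
 a rectangular destination region for brevity, and MapDestToSource(),
 GetImagePixel(), and SetDestPixel() are hypothetical helpers standing in for
 the reverse transformation and for bitmap access.

    /* A minimal sketch of backward mapping: visit every destination pixel,
       map it back into the source image, and copy the nearest source pixel.
       MapDestToSource(), GetImagePixel(), and SetDestPixel() are hypothetical
       helpers, not X-Sharp routines. */
    extern void MapDestToSource(int DestX, int DestY,
                                double *SourceX, double *SourceY);
    extern int  GetImagePixel(int SourceX, int SourceY);
    extern void SetDestPixel(int DestX, int DestY, int Color);

    void BackwardMapRect(int DestLeft, int DestTop, int DestRight,
                         int DestBottom)
    {
       int DestX, DestY;
       double SourceX, SourceY;

       /* Every destination pixel is visited exactly once, so no gaps. */
       for (DestY = DestTop; DestY <= DestBottom; DestY++) {
          for (DestX = DestLeft; DestX <= DestRight; DestX++) {
             MapDestToSource(DestX, DestY, &SourceX, &SourceY);
             SetDestPixel(DestX, DestY,
                          GetImagePixel((int)(SourceX + 0.5),
                                        (int)(SourceY + 0.5)));
          }
       }
    }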

 Mapping Textures Made Easy

 To understand how we're going to map textures, consider Figure 2, which maps
 a bitmapped image directly onto an untransformed polygon.  Here, we simply
 map the origin of the polygon's object coordinate system somewhere within the
 image, then map the vertices to the corresponding image pixels.  (For
 simplicity, I'll assume in this discussion that the polygon's coordinate
 system--its object coordinate system--is in units of pixels, but scaling
 images to polygons is eminently doable.  This will become clearer when we
 look at mapping images into transformed polygons, below.) Mapping the image
 to the polygon is then a simple matter of stepping one scan line at a time in
 both the image and the polygon, each time advancing the X coordinates of the
 edges according to the slopes of the lines, just as is normally done when
 filling a polygon.  Since the polygon is untransformed, the stepping is
 identical in both the image and the polygon, and the pixel mapping is
 one-to-one, so the appropriate part of each scan line of the image can simply
 be block copied to the destination.
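
 For the untransformed case, the inner loop really is just a block copy.  The
 sketch below shows the idea, under the assumption that both bitmaps are plain
 8-bit-per-pixel arrays in memory and that the mapped region is rectangular;
 the names and parameters are illustrative, not X-Sharp's.

    #include <string.h>

    /* Copy a PolyWidth x PolyHeight image into an untransformed, rectangular
       destination region one scan line at a time.  Because the mapping is
       one-to-one, each scan line is a straight block copy. */
    void CopyImageToUntransformedPoly(unsigned char *ImageBits, int ImageWidth,
                                      unsigned char *DestBits,  int DestWidth,
                                      int DestLeft, int DestTop,
                                      int PolyWidth, int PolyHeight)
    {
       int Y;

       for (Y = 0; Y < PolyHeight; Y++) {
          memcpy(DestBits + (long)(DestTop + Y) * DestWidth + DestLeft,
                 ImageBits + (long)Y * ImageWidth,
                 PolyWidth);
       }
    }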

 Now matters get more complicated.  What if the destination polygon is rotated
 in two dimensions?  We no longer have a neat direct mapping from image scan
 lines to destination polygon scan lines.  We still want to draw across each
 destination scan line, but the proper source pixels for each destination scan
 line may now track across the source bitmap at an angle, as shown in Figure
 3.  What to do?

 The solution is remarkably simple.  We'll just map each transformed vertex to
 the corresponding vertex in the bitmap; this is easy, because the vertices
 are at the same indices in the original and transformed vertex lists.  Each
 time we select a new edge to scan for the destination polygon, we'll select
 the corresponding edge in the source bitmap, as well.  Then--and this is
 crucial--each time we step a destination edge one scan line, we'll step the
 corresponding source image edge an equivalent amount.

 Ah, but what is an "equivalent amount?"  Think of it this way.  If a
 destination edge is 100 scan lines high, so it will be stepped 100 times,
 then we'll divide the SourceXWidth and SourceYHeight lengths of the source
 edge by 100, and add those amounts to the source edge's coordinates each time
 the destination is stepped one scan line.  Put another way, we have, as
 usual, arranged things so that in the destination polygon we step DestYHeight
 times, where DestYHeight is the height of the destination edge.  The above
 approach arranges to step the source image edge DestYHeight times too, to
 match what the destination is doing, as shown in Figure 4.
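
 In code, the per-edge arithmetic is as simple as it sounds.  Here's a sketch
 of it; the structure and field names are descriptive rather than the ones
 actually used in Listing Two, and DestYHeight is assumed to be at least 1.

    /* State for tracking one polygon edge through the source image. */
    typedef struct {
       double SourceX, SourceY;         /* current location in source image */
       double SourceXStep, SourceYStep; /* advance per destination scan line */
    } SourceEdge;

    /* Arrange for the source edge to be stepped DestYHeight times, in tandem
       with the destination edge. */
    void SetUpSourceEdge(SourceEdge *Edge, double StartX, double StartY,
                         double EndX, double EndY, int DestYHeight)
    {
       Edge->SourceX = StartX;
       Edge->SourceY = StartY;
       Edge->SourceXStep = (EndX - StartX) / DestYHeight;
       Edge->SourceYStep = (EndY - StartY) / DestYHeight;
    }

    /* Called once each time the destination edge advances one scan line. */
    void StepSourceEdge(SourceEdge *Edge)
    {
       Edge->SourceX += Edge->SourceXStep;
       Edge->SourceY += Edge->SourceYStep;
    }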

 Now we're able to track the coordinates of the polygon edges through the
 source image in tandem with the destination edges.  Stepping across each
 destination scan line uses precisely the same technique.  In the destination,
 we step DestXWidth times across each scan line of the polygon, once for each
 pixel on the scan line.  (DestXWidth is the horizontal distance between the
 two edges being scanned on any given scan line.) To match this, we divide
 SourceXWidth and SourceYHeight (the lengths of the scan line in the source
 image, as determined by the source edge points we've been tracking, as
 described above) by the width of the destination scan line, DestXWidth, to
 produce SourceXStep and SourceYStep.  Then, we just step DestXWidth times,
 adding SourceXStep and SourceYStep to SourceX and SourceY each time, and
 choose the nearest image pixel to (SourceX,SourceY) to copy to (DestX,DestY).
 (Note that the names used above, such as "SourceXWidth," are used for
 descriptive purposes, and don't necessarily correspond to the actual variable
 names used in Listing Two.)
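
 Putting the horizontal part of that into code gives something like the
 following sketch of the per-scan-line loop.  Again, the names are descriptive
 rather than those in Listing Two, SetDestPixel() is a hypothetical stand-in
 for the mode X set-pixel routine, and an 8-bit linear source bitmap is
 assumed.

    extern void SetDestPixel(int DestX, int DestY, int Color);

    /* Draw one texture-mapped destination scan line, DestXWidth pixels long,
       stepping through the source image from (SourceXStart,SourceYStart) to
       (SourceXEnd,SourceYEnd). */
    void TextureScanLine(unsigned char *ImageBits, int ImageWidth,
                         double SourceXStart, double SourceYStart,
                         double SourceXEnd,   double SourceYEnd,
                         int DestX, int DestY, int DestXWidth)
    {
       double SourceX = SourceXStart, SourceY = SourceYStart;
       double SourceXStep, SourceYStep;
       int i;

       if (DestXWidth <= 0)
          return;

       /* The only two divides needed for this scan line. */
       SourceXStep = (SourceXEnd - SourceXStart) / DestXWidth;
       SourceYStep = (SourceYEnd - SourceYStart) / DestXWidth;

       /* Step DestXWidth times, copying the nearest source pixel each time. */
       for (i = 0; i < DestXWidth; i++) {
          SetDestPixel(DestX + i, DestY,
                       ImageBits[(int)(SourceY + 0.5) * ImageWidth +
                                 (int)(SourceX + 0.5)]);
          SourceX += SourceXStep;
          SourceY += SourceYStep;
       }
    }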

 That's a workable approach for 2-D rotated polygons--but what about 3-D
 rotated polygons, where the visible dimensions of the polygon can vary with
 3-D rotation and perspective projection? First, I'd like to make it clear
 that texture mapping takes place from the source image to the destination
 polygon after the destination polygon is projected to the screen.  That is,
 the image will be mapped after the destination polygon is in its final,
 drawable form.  Given that, it should be apparent that the above approach
 automatically compensates for all changes in the dimensions of a polygon.
 You see, this approach divides source edges and scan lines into however many
 steps the destination polygon requires.  If the destination polygon is much
 narrower than the source polygon, as a result of 3-D rotation and perspective
 projection, we just end up taking bigger steps through the source image and
 skipping a lot of source image pixels, as shown in Figure 5.  The upshot is
 that the above approach handles all transformations and projections
 effortlessly.  It could also be used to scale source images up to fit in
 larger polygons; all that's needed is a list of where the polygon's
 untransformed vertices map into the source image, and everything else happens
 automatically.  In fact, mapping from any polygonal area of a bitmap to any
 destination polygon will work, given only that the two polygons have the same
 number of vertices.

 Notes on DDA Texture Mapping

 That's all there is to quick-and-dirty texture mapping.  This technique
 basically uses a two-stage digital differential analyzer (DDA) approach to
 step through the appropriate part of the source image in tandem with the
 normal scan-line stepping through the destination polygon, so I'll call it
 "DDA texture mapping." It's worth noting that there is no need for any
 trigonometric functions at all, and only two divides are required per scan
 line.

 This isn't a perfect approach, of course.  For one thing, it isn't anywhere
 near as fast as drawing solid polygons; the speed is more comparable to
 drawing each polygon as a series of lines.  Also, the DDA approach results in
 far from perfect image quality, since source pixels may be skipped or
 selected twice.  I trust, however, that you can see how easy it would be to
 improve image quality by antialiasing with the DDA approach.  For example, we
 could simply average the four surrounding pixels as we did for simple,
 unweighted antialiasing in this column last year.  Or, we could take a Wu
 antialiasing approach (see my June column) and average the two bracketing
 pixels along each axis according to proximity.  If we had cycles to waste
 (which, given that this is real-time animation on a PC, we don't), we could
 improve image quality by putting the source pixels through a low-pass filter
 sized in X and Y according to the ratio of the source and destination
 dimensions (that is, how much the destination is scaled up or down from the
 source).
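
 As a concrete illustration of the simplest of those options, the sketch below
 averages the four source pixels bracketing (SourceX,SourceY).  It assumes
 single-byte intensity (grayscale) pixels and that all four neighbors lie
 inside the image; in a palettized mode such as mode X, the averaged value
 would still have to be mapped back to the nearest palette entry, so treat
 this purely as an illustration of the idea, not as drop-in X-Sharp code.

    /* Unweighted average of the four image pixels surrounding the fractional
       source location (SourceX,SourceY).  Assumes 8-bit intensity pixels and
       that the neighbors are all within the image bounds. */
    unsigned char AverageFourNeighbors(unsigned char *ImageBits, int ImageWidth,
                                       double SourceX, double SourceY)
    {
       int X0 = (int)SourceX, Y0 = (int)SourceY;
       int Sum;

       Sum = ImageBits[Y0 * ImageWidth + X0] +
             ImageBits[Y0 * ImageWidth + X0 + 1] +
             ImageBits[(Y0 + 1) * ImageWidth + X0] +
             ImageBits[(Y0 + 1) * ImageWidth + X0 + 1];
       return (unsigned char)(Sum / 4);
    }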

 Finally, I'd like to point out that this sort of DDA texture mapping is
 display-hardware dependent, because the bitmap for each image must be
 compatible with the number of bits per pixel in the destination.  That's
 actually a fairly serious issue.  One of the nice things about X-Sharp's
 polygon orientation is that, until now, the only display dependent part of
 X-Sharp has been the transformation from RGB color space to the adapter's
 color space.  Compensation for aspect ratio, resolution, and the like all
 happens automatically in the course of projection.  Still, we need the
 ability to display detailed surfaces, and it's hard to conceive of a fast way
 to do so that's totally hardware independent.  (If you know of one, drop me a
 line.)

 For now, all we need is fast texture mapping of adequate quality, which the
 straightforward, non-antialiased DDA approach supplies.  I'm sure there are
 many other fast approaches, and, as I've said, there are certainly
 better-looking approaches, but DDA texture mapping works well, given the
 constraints of the PC's horsepower.  Next, we'll look at code that performs
 DDA texture mapping.  First, though, I'd like to take a moment to thank Jim
 Kent, author of Autodesk Animator and a frequent contributor to this column,
 for getting me started with the DDA approach.

 Fast Texture Mapping:  An Implementation

 As you might expect, I've implemented DDA texture mapping in X-Sharp.
 Listing One (page 164) shows the new header files and defines, and Listing
 Two (page 164) shows the actual texture-mapped polygon drawer.  The set-pixel
 routine that Listing Two calls is a slight modification of the mode X
 set-pixel routine from my June 1991 column.  In addition, INITBALL.C has
 been modified to create three texture-mapped polygons and define the texture
 bitmaps, and modifications have been made to allow the user to flip the axis
 of rotation.  In short, you will need the complete X-Sharp archive (available
 as described below) to see texture mapping in action, but Listings One and
 Two are the actual texture mapping code in its entirety.

 There's a lot I'd like to say about DDA texture mapping, and about the use of
 it in the DEMO1 program in X-Sharp, but I'll have to save most of it for next
 month because I'm running out of space.  Here's the big thing:  DDA texture
 mapping looks best on fast-moving surfaces, where the eye doesn't have time
 to pick nits with the shearing and aliasing that's an inevitable by-product
 of such a crude approach.  Get the X-Sharp archive, compile DEMO1, and run
 it.  The initial display looks okay, but certainly not great, because the
 rotational speed is so slow.  Now press the S key a few times to speed up the
 rotation and flip between different rotation axes.  I think you'll be amazed
 at how much better DDA texture mapping looks at high speed.  This technique
 would be great for mapping textures onto hurtling asteroids or jets, but
 would come up short for slow, finely detailed movements.

 No matter how you slice it, DDA texture mapping beats boring, single-color
 polygons nine ways to Sunday.  The big downside is that it's much slower than
 a normal polygon fill; move the ball close to the screen in DEMO1, and watch
 things slow down when one of those big texture maps comes around.  Of course,
 that's partly because the code is all in C; if I get a chance, I'll convert
 to assembler and turbocharge the DDA texture mapping for next time, and maybe
 speed up the general polygon- and rectangle-fill code, too.  I'll try to get
 to that next month, along with a further discussion of texture mapping and
 attending to some rough spots that remain in the DDA texture mapping
 implementation, most notably in the area of exactly which texture pixels map
 to which destination pixels as a polygon rotates.

 And, in case you're curious, yes, there is a bear in DEMO1.  I wouldn't say
 he looks much like a Pooh-type bear, but he's a bear nonetheless.  He does
 tend to look a little startled when you flip the ball around so that he's
 zipping by on his head, but, heck, you would too in the same situation.  And
 remember, when you buy the next VGA megahit, Bears in Space, you saw it here
 first.

 Where to Get X-Sharp

 The full source for X-Sharp is available in the file XSHRPn.ZIP in the DDJ
 Forum on CompuServe and XSHARPn.ZIP in the programming/graphics conference
 on M&T Online and the graphic.disp conference on Bix.  (XSHARP20 is the first
 version that includes texture mapping.)  Alternatively, you can send me a
 360K or 720K formatted diskette and an addressed, stamped diskette mailer,
 care of DDJ, 411 Borel Ave., San Mateo, CA 94402, and I'll send you the
 latest copy of X-Sharp.  There's no charge, but it'd be very much appreciated
 if you'd slip in a dollar or so to help out the folks at the Vermont
 Association for the Blind and Visually Impaired.

 I'm available on a daily basis to discuss X-Sharp on M&T Online and Bix (user
 name mabrash in both cases).

 By the way, folks, I very much appreciate both your generous contributions to
 the Vermont Association for the Blind and your letters.  However, I
 can't--honest-to-god, can't--write back to all of you.  Not if I ever want to
 get any work done, anyway, and I have a family to feed.  When I have time, I
 try to answer those of you who have written with questions, but there are no
 guarantees, especially given the flood of letters that have materialized in
 the wake of X-Sharp.  You'll stand a lot better chance of getting a reply on
 MCI Mail, Bix, or M&T Online.  Mind you, I do enjoy getting your letters, and
 the feedback and suggestions are great; it's just the responding part that's
 a problem.  So keep those cards and letters (and e-mail messages) coming!
