  Contents

 Why Bother?
 Quiet, Please!
 Null-Move
 Aspirated Search
 Singular Extensions




  The Series

 Getting Started
 Data Structures
 Move Generation
 Basic Search
 Advanced Search
 Evaluation Functions


 

Aspirated Search and MTD(f)

Plain old alphabeta assumes nothing about a position's ultimate minimax value: its initial search window runs from -INFINITY to +INFINITY, so no line of play is dismissed out of hand, no matter how preposterous.  However, if you have a pretty good idea of what the value will turn out to be (for example, because you are running an iterative-deepening scheme and have just received the previous iteration's result), you may be able to identify lines of play that are so out of whack with your expectations that they can't possibly be right, and cut them off pre-emptively.

For example, suppose that you have reason to believe that a position's value will be close to 0, because it is very well balanced.  Now, if an internal node's preliminary evaluation comes in at +20,000, you can cut it off with reasonable confidence: no plausible line of play is going to swing the score that far.

This is the idea behind "aspiration search", a variant of alphabeta in which, instead of using +INFINITY and -INFINITY as the initial bounds of the search, you set a small window around the expected value instead.  If the actual value happens to fall within the window, you win: you'll get it without error, and faster than you would otherwise (because of the many extra cutoffs).  If not, the algorithm will fail, but the error will be easy to detect (because the minimax value you'll receive will be equal to one of the bounds); you'll have to waste a bit of time re-searching with a wider window.  If the former case happens more often than the latter, you win on average.  Obviously, the better your initial guess of the expected value, the more useful this technique is.
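
Here is a rough sketch of what an aspiration window looks like when wrapped around an iterative-deepening loop.  The names below (alphaBeta, Position, WINDOW) and the half-pawn window size are illustrative assumptions, not code from my program:

    #include <limits>

    // Assumed to exist in the engine: a fail-soft alphabeta that searches
    // `depth` plies of `pos` within the window (alpha, beta) and returns a
    // score in centipawns.
    struct Position;
    int alphaBeta(Position& pos, int depth, int alpha, int beta);

    const int INF = std::numeric_limits<int>::max() / 2;

    // Iterative deepening with an aspiration window centered on the previous
    // iteration's score.  WINDOW = 50 centipawns (half a pawn) is arbitrary.
    int iterativeDeepening(Position& root, int maxDepth)
    {
        const int WINDOW = 50;
        int guess = 0;                       // expected value before any search

        for (int depth = 1; depth <= maxDepth; ++depth) {
            int value = alphaBeta(root, depth, guess - WINDOW, guess + WINDOW);

            // Fail low or fail high: the true value lies outside the window,
            // so pay for a re-search with full bounds to recover it.
            if (value <= guess - WINDOW || value >= guess + WINDOW)
                value = alphaBeta(root, depth, -INF, +INF);

            guess = value;                   // seed the next iteration's window
        }
        return guess;
    }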

In the mid-1990s, researcher Aske Plaat extended aspiration search to its logical conclusion: what if you called an aspirated alphabeta with a search window of width equal to zero?  It would fail all the time, of course...  But it would do so *very quickly*, because it would cut off every path almost immediately.  Now, if the failure indicates that the actual value is lower than your estimate, you can try again with another zero-width window around a smaller estimate, and so on.  In a sense, you can then use alphabeta to perform a binary search into the space of all possible minimax values, until you reach the only call that will *not* fail, because its zero-width window is centered on the position's actual value!
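
To make the bookkeeping concrete, here is a minimal sketch of a single zero-width probe, reusing the assumed alphaBeta and Position declared above.  With integer scores, "zero width" in practice means the window (g - 1, g), which no value can fall strictly inside:

    // Probe the position with a zero-width window around the guess g.
    // The search always fails, but the returned score still tells us on
    // which side of g the true minimax value lies.
    int zeroWidthProbe(Position& root, int depth, int g)
    {
        int value = alphaBeta(root, depth, g - 1, g);

        // value <  g : fail low  -> the true value is at most `value`
        // value >= g : fail high -> the true value is at least `value`
        return value;
    }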

This brilliant idea, presented in a paper available on the web at http://theory.lcs.mit.edu/~plaat/mtdf.html, has been embodied in the MTD(f) search algorithm, which is all of 10 lines long.  Tacked on top of an alphabeta implementation equipped with a transposition table, MTD(f) is incredibly efficient and highly parallel-friendly.  It also works better with "coarse-grain" (and therefore probably simpler and faster) evaluators: it is easy to see that it takes fewer probes to zero in on the actual value in a binary search if the smallest "atom" of value is equal to, say, 0.1 pawns rather than 0.001 pawns.
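
For the curious, here is a sketch of the MTD(f) driver along the lines of the pseudocode in Plaat's paper.  alphaBetaWithMemory stands in for an alphabeta routine backed by a transposition table, and Position and INF are reused from the sketch above; none of this is copied from my program:

    #include <algorithm>

    // Assumed: an alphabeta backed by a transposition table, so that
    // successive zero-width probes can reuse each other's work.
    int alphaBetaWithMemory(Position& pos, int depth, int alpha, int beta);

    int mtdf(Position& root, int firstGuess, int depth)
    {
        int g = firstGuess;
        int lowerBound = -INF;
        int upperBound = +INF;

        // Each probe shrinks the interval [lowerBound, upperBound] known
        // to contain the true minimax value.
        while (lowerBound < upperBound) {
            int beta = std::max(g, lowerBound + 1);  // window just above the lower bound
            g = alphaBetaWithMemory(root, depth, beta - 1, beta);

            if (g < beta)
                upperBound = g;   // failed low: true value <= g
            else
                lowerBound = g;   // failed high: true value >= g
        }
        return g;                 // the bounds have met at the minimax value
    }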

There are other alphabeta variants in wider use (namely, the infamous NegaScout; I would rather teach General Relativity to orangutans than get into that mess), but Plaat insists that MTD(f) is the most efficient algorithm in existence today, and I'll take his word for it.  My own program uses MTD(f); you'll be able to marvel at the algorithm's simplicity very shortly!


Next: Singular Extensions