Search Techniques

To a computer, it is far from obvious which of the many legal moves in a position are "good" and which are "bad".  The best way to discriminate between the two is to look at their consequences, i.e., to search a series of moves (say, four for each side) and examine the resulting positions.  And to make as few mistakes as possible, we will assume that the opponent is just as good as we are.  This is the basic principle underlying the minimax search algorithm, which is at the root of all chess programs.
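
As a rough illustration, here is a minimal sketch of minimax in its compact "negamax" form, written in C++ against a hypothetical Position interface (GenerateMoves, MakeMove, UnmakeMove, Evaluate are assumed placeholders, not the series' actual code):

```cpp
#include <vector>
#include <algorithm>

// Hypothetical building blocks; the names are illustrative only.
struct Move {};
struct Position {
    std::vector<Move> GenerateMoves() const;  // all legal moves for the side to play
    void MakeMove(const Move& m);
    void UnmakeMove(const Move& m);
    int  Evaluate() const;                    // static score from the side to move's viewpoint
};

// Plain minimax in "negamax" form: each side maximizes the negation of the
// opponent's best score, so the same function serves both players.
int Minimax(Position& pos, int depth)
{
    if (depth == 0)
        return pos.Evaluate();

    int best = -1000000;                      // worse than any real score
    for (const Move& m : pos.GenerateMoves()) {
        pos.MakeMove(m);
        best = std::max(best, -Minimax(pos, depth - 1));
        pos.UnmakeMove(m);
    }
    return best;
}
```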

Unfortunately, minimax's complexity is O(b^n), where b (the "branching factor") is the average number of legal moves available at any given time and n (the depth) is the number of "plies" you look ahead, one ply being a single move by one side.  This number grows impossibly fast, so a considerable amount of work has been done to develop algorithms that minimize the effort expended on search for a given depth.  Iterative-deepening Alphabeta, NegaScout and MTD(f) are among the most successful of these algorithms, and they will be described in Part IV, along with the data structures and heuristics which make strong play possible, such as transposition tables and the history/killer heuristics.
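
Part IV covers these algorithms in detail; as a taste of how pruning tames the O(b^n) blow-up, here is a hedged alpha-beta sketch built on the same hypothetical Position interface as above:

```cpp
// Alpha-beta pruning: returns the same value as plain minimax, but branches
// that cannot influence the final choice are cut off.  With good move
// ordering the effective branching factor drops to roughly sqrt(b).
int AlphaBeta(Position& pos, int depth, int alpha, int beta)
{
    if (depth == 0)
        return pos.Evaluate();

    for (const Move& m : pos.GenerateMoves()) {
        pos.MakeMove(m);
        int score = -AlphaBeta(pos, depth - 1, -beta, -alpha);
        pos.UnmakeMove(m);
        if (score >= beta)
            return beta;                      // opponent will avoid this line: cut off
        alpha = std::max(alpha, score);
    }
    return alpha;
}
```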

Another major source of headaches for chess programmers is the "horizon effect", first described by Hans Berliner.  Suppose that your program searches to a depth of 8 plies, and that it discovers to its horror that the opponent will capture its queen at ply 6.  Left to its own devices, the program will then proceed to throw its bishops to the wolves so as to delay the queen capture until ply 10, which it can't see because its search ends at ply 8.  From the program's point of view, the queen is "saved", because the capture is no longer visible...  But it has lost a bishop, and the queen capture reappears during the next move's search.  It turns out that reaching a position in which a program can reason correctly about the relative strength of the forces on the board is not a trivial task at all, and that searching every line of play to the same fixed depth is tantamount to suicide.  Numerous techniques have been developed to defeat the horizon effect; quiescence search and Deep Blue's singular extensions are among the topics covered in Part V on advanced search.
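
Part V examines these techniques properly; as a rough sketch of the idea behind quiescence search, the program keeps extending "noisy" lines (here, captures only, via a hypothetical GenerateCaptures helper) past the nominal horizon until the position settles down:

```cpp
// Quiescence search: instead of evaluating a position in the middle of a
// capture sequence, keep searching captures until the position is "quiet",
// so the evaluation is not fooled by a pending recapture just past the horizon.
int Quiescence(Position& pos, int alpha, int beta)
{
    int standPat = pos.Evaluate();            // score if we decline all further captures
    if (standPat >= beta)
        return beta;
    alpha = std::max(alpha, standPat);

    for (const Move& m : pos.GenerateCaptures()) {   // hypothetical: capture moves only
        pos.MakeMove(m);
        int score = -Quiescence(pos, -beta, -alpha);
        pos.UnmakeMove(m);
        if (score >= beta)
            return beta;
        alpha = std::max(alpha, score);
    }
    return alpha;
}
```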


Next: Evaluation