  Contents

 Why Search?
 Grandpa MiniMax
 AlphaBeta
 Ordering Moves
 Iterative Deepening
 Computer Playing Style

  The Series

 Getting Started
 Data Structures
 Move Generation
 Basic Search
 Advanced Search
 Evaluation Functions

Grandpa MiniMax

The basic idea underlying all two-agent search algorithms is Minimax. It dates back to the Dark Ages; I believe von Neumann himself first described it over 60 years ago.

Minimax can be defined as follows:

  • Assume that there is a way to evaluate a board position so that we know whether Player 1 (whom we will call Max) is going to win, whether his opponent (Min) will, or whether the position will lead to a draw. This evaluation takes the form of a number: a positive number indicates that Max is leading, a negative number, that Min is ahead, and a zero, that nobody has acquired an advantage.
  • Max's job is to make moves which will increase the board's evaluation (i.e., he will try to maximize the evaluation).
  • Min's job is to make moves which decrease the board's evaluation (i.e., he will try to minimize it).
  • Assume that both players play flawlessly, i.e., that they never make any mistakes and always make the moves that improve their respective positions the most.
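
To make the recursion concrete, here is a minimal minimax sketch in C++. The Node structure and its stored leaf scores are illustrative stand-ins only, not code from this series; a real chess program would generate moves and evaluate positions on the fly rather than walk a pre-built tree.

    #include <algorithm>
    #include <vector>

    // Illustrative game-tree node: leaves carry the evaluation of a final
    // position, interior nodes only carry their children.
    struct Node {
        int score = 0;               // meaningful only when children is empty
        std::vector<Node> children;
    };

    // Plain minimax: Max picks the child with the highest value,
    // Min picks the child with the lowest one.
    int Minimax(const Node& node, bool maxToMove)
    {
        if (node.children.empty())
            return node.score;                   // leaf: evaluate the board

        int best = maxToMove ? -1000000 : +1000000;
        for (const Node& child : node.children) {
            int value = Minimax(child, !maxToMove);
            best = maxToMove ? std::max(best, value)
                             : std::min(best, value);
        }
        return best;
    }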

How does this work? Well, suppose that there is a simple game which consists of exactly one move for each player, and that each has only two possible choices to make in a given situation. The evaluation function is only run on the final board positions, which result from a combination of moves by Min and Max.

Max's move   Min's move   Evaluation
    A            C             12
    A            D             -2
    B            C              5
    B            D              6

Max assumes that Min will always play perfectly. Therefore, he knows that, if he makes move A, his opponent will reply with D, resulting in a final evaluation of -2 (i.e., a win for Min). However, if Max plays B, he is sure to win, because Min's best move still results in a positive final value of 5. So, by the Minimax algorithm, Max will always choose to play B, even though he would score a bigger victory if he played A and Min made a mistake!
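
Worked out by hand, minimax scores each of Max's moves by Min's best (i.e., minimizing) reply, then takes the larger of the two. Here is a tiny self-contained check of that arithmetic, using only the leaf values from the table above (the variable names are mine):

    #include <algorithm>
    #include <iostream>

    int main()
    {
        // Leaf evaluations from the table above.
        const int AC = 12, AD = -2, BC = 5, BD = 6;

        // Min replies with whichever move minimizes the evaluation...
        const int valueA = std::min(AC, AD);   // Min answers A with D: -2
        const int valueB = std::min(BC, BD);   // Min answers B with C:  5

        // ...and Max plays the move whose worst case is best.
        const int best = std::max(valueA, valueB);

        std::cout << "A is worth " << valueA << ", B is worth " << valueB
                  << "; Max plays B, final value " << best << "\n";
        return 0;
    }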

The trouble with Minimax, which may not be immediately obvious from such a small example, is that the number of possible paths that must be examined grows exponentially. The effort increases dramatically with:

  • The number of possible moves available to each player, called the branching factor and denoted B.
  • The depth of the look-ahead, denoted d and usually described as "N-ply", where N is an integer and a "ply" is one move by one player. For example, the mini-game described above is searched to a depth of 2 ply: one move per player.

In Chess, for example, a typical branching factor in the middle game would be about 35 moves; in Othello, around 8. Since Minimax's complexity is O(B^d), a 4-ply search of a chess position would need to explore about 1.5 million possible paths! That is a LOT of work. Adding a fifth ply would make the tree balloon to about 50 million nodes, and a sixth, to an impossible 1.8 billion!
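
Those figures come straight from B^d, assuming a constant branching factor of 35 (a rough middle-game average, not an exact count); a few lines reproduce them:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double B = 35.0;            // assumed average branching factor
        for (int d = 4; d <= 6; ++d)      // the depths discussed above
            std::printf("%d-ply minimax: about %.2g leaf positions\n",
                        d, std::pow(B, d));
        return 0;                         // prints ~1.5e+06, ~5.3e+07, ~1.8e+09
    }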

Luckily, there are ways to cut the effort by a wide margin without sacrificing accuracy.


Next: Alphabeta: Making Minimax Feasible (a little)