
What games are interesting to us? They should be neither too easy nor too hard. By that I mean: if a game is too easy, one player will always win; if it is too hard, nobody has any clue where to move. Examples of too-easy games are abundant, as many children's games fall into this category. Too-hard games are not around much, since nobody finds them interesting.
A grown-up playing Nim with a child finds it quite boring, since he knows what to do to win, while the child still finds it interesting. We say that the game of Nim is solved. For the general version of Nim with several piles there exists a nice theory for how to play it perfectly. Nim is solved in the strong sense, which means that we know how to play the game perfectly from every possible position.
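The theory in question is the classical nim-sum rule: XOR the pile sizes, and a position is lost for the player to move exactly when that XOR is zero. A minimal sketch of a perfect player built on this rule (the function names are mine):

```python
from typing import List, Optional, Tuple

def nim_sum(piles: List[int]) -> int:
    """XOR of all pile sizes; zero exactly in the positions that are lost
    for the player to move."""
    s = 0
    for p in piles:
        s ^= p
    return s

def perfect_move(piles: List[int]) -> Optional[Tuple[int, int]]:
    """Return (pile_index, new_size) for a winning move, or None if the
    position is already lost against perfect play."""
    s = nim_sum(piles)
    if s == 0:
        return None                 # every move hands a winning position to the opponent
    for i, p in enumerate(piles):
        target = p ^ s
        if target < p:              # we may only remove stones, never add them
            return (i, target)
    return None                     # not reached when s != 0

print(perfect_move([3, 4, 5]))      # (0, 1): reduce the 3-pile to 1 stone
```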
Some games are only weakly solved. Weakly solved means that we know how a player can force a win when the game is played from its starting position. Played in this way, many positions never arise and can remain unknown.
There exist some games where it is known who will win, but nobody knows how to actually play that way; Hex is such a game. These are called ultra-weakly solved.
How do you go about solving a game? One way is to list all possible positions and simply test whether each one is a win, a loss or a draw. This technique only works on very simple games, as harder games contain far too many positions. To solve harder games you need to search the game tree.
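As a toy illustration of that enumerate-everything approach, here is a sketch that labels every position of a small subtraction game (a game I am using only as an example) as a win or a loss for the player to move:

```python
from functools import lru_cache

# Toy game (my own example): a move removes 1 or 2 stones from a single
# pile, and whoever takes the last stone wins.

@lru_cache(maxsize=None)
def is_win(n: int) -> bool:
    """True if the player to move wins from a pile of n stones."""
    if n == 0:
        return False    # no move left: the previous player took the last stone
    # a position is a win if some move leads to a position lost for the opponent
    return any(not is_win(n - take) for take in (1, 2) if take <= n)

table = {n: ("win" if is_win(n) else "loss") for n in range(10)}
print(table)            # losses exactly at the multiples of 3
```

Every position here gets a definite label, which is the strong notion of solving; the point is that such a table grows impossibly large for serious games.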
So, how do you search a game tree effectively? You need to prune the tree as you go along. The common alpha-beta search method has the advantage that it does not take much memory, and it is also fairly good at pruning the tree. You should also take advantage of how the game is structured. In the game of Qubic, for example, there are lots of forcing sequences. For this type of game the Proof-Number (PN) search method is very rewarding.
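For reference, a bare-bones alpha-beta sketch; the game interface it leans on (is_terminal, evaluate, moves, apply) is an assumption made for this illustration, not the API of any particular program:

```python
import math

def alpha_beta(state, depth, alpha, beta, maximizing, game):
    """Depth-limited alpha-beta search over an assumed `game` interface."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            value = max(value, alpha_beta(game.apply(state, move),
                                          depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:
                break        # beta cut-off: the minimizer will never allow this line
        return value
    value = math.inf
    for move in game.moves(state):
        value = min(value, alpha_beta(game.apply(state, move),
                                      depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if alpha >= beta:
            break            # alpha cut-off: the maximizer already has something better
    return value
```

Its memory use is essentially just the recursion stack, which is the low-memory advantage mentioned above, and good move ordering is what makes the cut-offs bite.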
Since these search algorithms were devised, computers have become faster, and we have almost solved Checkers and Othello using the same techniques. But what is needed to go further? Chess seems out of reach and Go is not even on the horizon. When Victor Allis solved Gomoku and Connect Four he incorporated a great amount of knowledge into the algorithm. For harder games this is too troublesome; there is just too much knowledge needed.
I think game solving could actually teach us something about what we mean by knowledge. Somehow we must make the computer gather this knowledge as it runs through the game tree, and we must provide algorithms where this is possible. Ideally the algorithm is independent of the problem.
PN search comes very close. It tries to prove nodes: at every moment it chooses the most-proving node and tries to prove it. If that fails, it re-evaluates the tree and possibly chooses another node to prove.
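A minimal sketch of that loop, assuming a simple Node structure and an expand hook that generates a node's children (both are illustrative assumptions, not the interface of any particular solver):

```python
class Node:
    """OR nodes belong to the player trying to prove the goal,
    AND nodes to the opponent."""
    def __init__(self, is_or_node):
        self.is_or = is_or_node
        self.children = []
        self.expanded = False
        self.pn = 1     # proof number: cost of the cheapest remaining way to prove this node
        self.dn = 1     # disproof number: likewise for disproving it

def update(node):
    """Recompute proof/disproof numbers from the children."""
    if not node.children:
        return
    pns = [c.pn for c in node.children]
    dns = [c.dn for c in node.children]
    node.pn, node.dn = (min(pns), sum(dns)) if node.is_or else (sum(pns), min(dns))

def select_most_proving(root):
    """Walk down to the most-proving leaf, returning the path to it."""
    path, node = [root], root
    while node.expanded and node.children:
        key = (lambda c: c.pn) if node.is_or else (lambda c: c.dn)
        node = min(node.children, key=key)
        path.append(node)
    return path

def pn_search(root, expand):
    """expand(node) is the assumed hook: it attaches children to `node`, or,
    for terminal positions, sets node.pn / node.dn directly."""
    while root.pn != 0 and root.dn != 0:
        path = select_most_proving(root)
        leaf = path[-1]
        expand(leaf)
        leaf.expanded = True
        for n in reversed(path):     # propagate the new numbers back up to the root
            update(n)
    return root.pn == 0              # True: the root position has been proven
```

The re-evaluation mentioned above is the back-propagation step: after each expansion the proof and disproof numbers along the path are recomputed, which may redirect the search to a completely different part of the tree.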
Here I see a parallel to how mathematics works. You set up definitions and prove theorems. The theorems sometimes lead to dead ends, mathematics that never gets used. Sometimes several theorems from different branches of mathematics converge and you can prove something new; the recent proof of Fermat's Last Theorem is a good example of this. To prove something it is not always best to go straight for the solution; taking side paths is sometimes beneficial. Who decides which proofs we should go after? Some conjectures are deemed more important than others, like the Riemann hypothesis. Why is this hypothesis important? Because even though it is not proven, mathematics has continued to develop assuming it is true. As time passes, more and more results come to depend on it. Why has the Riemann hypothesis been given this treatment? Because it is very likely true, and it is powerful. Maybe this technique could be something to consider when designing a new tree-search algorithm?