What games are interesting to us? They should be neither too easy nor too hard. By that I mean: if a game is too easy, one player will always win; if it is too hard, nobody has any clue where to move. Examples of too-easy games are abundant, as many children's games fall into this category. Too-hard games are not around, because nobody finds them interesting.
A grown-up playing Nim with a child finds it quite boring, as he knows exactly what to do to win, while the child still finds it interesting. We say that the game of Nim is solved. For the general version of Nim with several piles there is a nice theory for how to play it perfectly. Nim is solved in the strong sense, which means that we know how to play perfectly from every possible position.
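To make "playing perfectly from every position" concrete, here is a small Python sketch of that classical Nim theory: a position is lost for the player to move exactly when the XOR of the pile sizes (the nim-sum) is zero, and otherwise some move restores a zero nim-sum.

    from functools import reduce
    from operator import xor

    def nim_sum(piles):
        """XOR of all pile sizes; zero means the player to move loses against perfect play."""
        return reduce(xor, piles, 0)

    def winning_move(piles):
        """Return (pile_index, new_size) for a perfect move, or None if the position is lost."""
        s = nim_sum(piles)
        if s == 0:
            return None                 # every move hands the opponent a winning position
        for i, p in enumerate(piles):
            target = p ^ s              # the size that makes the overall nim-sum zero again
            if target < p:              # legal only if it actually removes stones
                return i, target

    # Example: piles (3, 4, 5) have nim-sum 2, so the player to move wins,
    # e.g. by reducing the pile of 3 stones to 1.
    print(winning_move([3, 4, 5]))      # -> (0, 1)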
Some games are only weakly solved. Weakly solved means that a player knows how to win when the game is played from its starting position. Played in this way, many positions never arise, and their values can still be unknown.
There exist some games where it is known who will win, but nobody knows how to actually play that way; Hex is such a game. These are called ultra-weakly solved.
How do you go about solving a game? One way is to list all possible positions and simply test each one for whether it is a win, a loss or a draw. This technique only works on very simple games, as harder games contain just too many positions. To solve harder games you need to search the game tree.
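As a sketch of this brute-force idea, the following Python snippet exhaustively solves a toy game of my own choosing (one pile, a move removes 1, 2 or 3 stones, whoever cannot move loses) by memoizing the value of every position:

    from functools import lru_cache

    # Toy stand-in for a "very simple game": one pile of stones, a move removes
    # 1, 2 or 3 stones, and the player who cannot move loses.
    MOVES = (1, 2, 3)

    @lru_cache(maxsize=None)
    def is_win(stones):
        """True if the player to move wins with perfect play from this position."""
        # A position is a win if some move leads to a loss for the opponent;
        # with no legal move left, the player to move has lost.
        return any(stones >= m and not is_win(stones - m) for m in MOVES)

    # Tabulating every position up to 20 solves this game completely:
    print([n for n in range(21) if not is_win(n)])   # losing positions: 0, 4, 8, 12, 16, 20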
So, how do you search a game tree effectively? You need to prune the tree as you go along. There is the common alpha-beta search method, which has the advantage that it does not take much memory and is fairly good at pruning the tree. You should also take advantage of how the game is structured. In the game of Qubic, for example, there are lots of forcing sequences. For this type of game the Proof-Number (PN) search method is very rewarding.
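For reference, here is a bare-bones negamax formulation of alpha-beta in Python; the game interface (is_terminal, score, moves, play) is only assumed for the sake of the sketch:

    def alphabeta(state, alpha=-float("inf"), beta=float("inf")):
        """Negamax alpha-beta: returns the game value from the viewpoint of the side to move.

        Assumes a hypothetical game interface: state.is_terminal(), state.score()
        (scored from the side to move's viewpoint), state.moves(), and
        state.play(move) returning the successor position."""
        if state.is_terminal():
            return state.score()
        best = -float("inf")
        for move in state.moves():
            value = -alphabeta(state.play(move), -beta, -alpha)
            best = max(best, value)
            alpha = max(alpha, value)
            if alpha >= beta:
                break      # cutoff: the opponent already has a better alternative elsewhere
        return best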
Since these search algorithms were devised, computers have become faster, and we have almost solved Checkers and Othello using the same techniques. But what is needed to go further? Chess seems out of reach and Go is not even on the horizon. When Victor Allis solved Go-Moku and Connect Four he incorporated a great amount of game-specific knowledge into the algorithm. For harder games this is too troublesome; there is just too much knowledge needed.
I think game solving could actually teach us something about what we mean by knowledge. Somehow we must make the computer gather this knowledge as it runs through the game tree, and we must provide algorithms where this is possible. Ideally the algorithm is independent of the problem. PN-search is very close. It tries to prove nodes: at every moment it chooses the most-proving node and tries to prove it. If that fails it re-evaluates the tree and possibly chooses another node to prove.
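Here is a compressed sketch of that loop in Python, again over an assumed game interface (state.result() giving +1 when the position is proved a win for the player we care about, -1 when it is a loss, 0 when still unknown; a game without draws). It only illustrates how proof and disproof numbers steer the search toward the most-proving node:

    import math
    INF = math.inf

    class Node:
        """A node in the PN-search tree. An OR node is a position where the player
        we try to prove a win for is to move; an AND node is the opponent's turn."""
        def __init__(self, state, is_or):
            self.state, self.is_or = state, is_or
            self.children = None              # None = frontier node, not yet expanded
            r = state.result()                # assumed: +1 proved, -1 disproved, 0 unknown
            if r > 0:
                self.pn, self.dn = 0, INF     # already proved
            elif r < 0:
                self.pn, self.dn = INF, 0     # already disproved
            else:
                self.pn, self.dn = 1, 1       # unknown frontier node

        def expand(self):
            self.children = [Node(self.state.play(m), not self.is_or)
                             for m in self.state.moves()]

        def update(self):
            """Recompute proof/disproof numbers from the children."""
            if self.is_or:
                self.pn = min(c.pn for c in self.children)
                self.dn = sum(c.dn for c in self.children)
            else:
                self.pn = sum(c.pn for c in self.children)
                self.dn = min(c.dn for c in self.children)

    def select_most_proving(node, path):
        """Descend to the frontier node whose expansion helps most to (dis)prove the root."""
        path.append(node)
        if node.children is None:
            return node
        key = (lambda c: c.pn) if node.is_or else (lambda c: c.dn)
        return select_most_proving(min(node.children, key=key), path)

    def pn_search(root_state, max_expansions=100_000):
        root = Node(root_state, is_or=True)
        for _ in range(max_expansions):
            if root.pn == 0:
                return True                   # proved: the player to move at the root wins
            if root.dn == 0:
                return False                  # disproved
            path = []
            leaf = select_most_proving(root, path)
            leaf.expand()                     # if this fails to decide the leaf, the updates
            for n in reversed(path):          # below may redirect the search elsewhere
                n.update()
        return None                           # undecided within the expansion budget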
Here I see a parallel to how mathematics works. You set up definitions and prove theorems. The theorems sometimes lead to dead ends, mathematics that never gets used. Sometimes several theorems from different branches of mathematics converge and you can prove something new; the recent proof of Fermat's Last Theorem is a good example of this. To prove something it is not always best to go straight for the solution; taking side paths is sometimes beneficial. Who decides which proofs we should go after? Some conjectures are deemed more important than others, like the Riemann hypothesis. Why is this hypothesis important? Because even though it is not proven, mathematics has continued to develop assuming it is true. As time passes, more and more results come to depend on it. Why has the Riemann hypothesis been given this treatment? Because it is very likely true, and it is powerful. Maybe this could be something to consider when designing a new tree-search algorithm?
Sunday, October 01, 2006
2 comments:
Interesting thoughts! I like the analogy between game systems and mathematical systems. I have a response based on my experiences playing Go and also writing code to play chess based on static board evaluation and minimaxing with alpha-beta cutoff. (This was undergrad CS, so nothing very fancy.) But the thought that occurs to me is that there could be a level of complexity beyond which the system is unpredictable or unsolvable.
I'm not quite sure how to express this, or whether it can perhaps be shown to be trivially wrong. I just can't help wondering... is it simply the 19x19 size of the board and the relatively unconstrained set of possible moves that makes solving Go hard? Might Go, like the Riemann hypothesis, yield something to brute-force computation techniques, but hold back its true secret? (It seems like the secret here might be the evaluation function for a given board, and there might not be a perfect one for Go.) Can we prove a game unsolvable, as Gödel proved incompleteness? Can we prove the existence of unsolvable games?
Thanks for your feedback, mrc :-)
And welcome to the blog, fellow Go player.
I think all non-random games are solvable. I have been thinking along the same lines as you regarding how to find the 'secret' behind a game. There could very well be one evaluation function that exposes the game's secret in clear view, like the Sprague-Grundy theory does for Nim. Programming a computer to find this exposing evaluation function would surely be interesting, and a true challenge for an AI. With that, the game is not just solved but 'exposed' (my own word for it).
I think humans playing games are much more interested in finding this underlying scheme than in just remembering positions and sequences. This is why Go is so good for humans: we reason about possible connections and possible territory much more than we read out sequences. We find the spots where calculation would do the most good by using the kind of reasoning humans are good at.
Chess has this to some degree as well (attack on the king's wing, take control of the center), but Go has it to a much greater extent.