
Solving Ponnuki-Go on Small Board




  1. Solving Ponnuki-Go on Small Board Paper: Solving Ponnuki-Go on small board Authors: Erik van der Werf, Jos Uiterwijk, Jaap van den Herik Presented by: Niu Xiaozhen

  2. Outline • Introduction • Motivation • Method Summary • Results and Analysis • Conclusions

  3. Introduction • In Ponnuki-Go (also known as Atari-Go), the goal is to be the first to capture one or more of the opponent’s stones • Two rules differ from Go: • Capturing directly ends the game • Passing is not allowed (no ties) • Simpler than Go (no ko-fights)

  4. Motivation Why do we study Atari-Go? • It contains major concepts of Go such as capturing stones, determining life and death, and making territory

  5. Motivation (2) Why do we study Atari-Go? • A good benchmark for testing the performance of algorithms • Successful algorithms for small-board Atari-Go might be useful for computer Go

  6. Outline • Introduction • Motivation • Method Summary • Results and Analysis • Conclusions

  7. Method Summary • Standard alpha-beta framework with many enhancements: • Iterative deepening Principal Variation Search (PVS) • Transposition table • History heuristic • Enhanced transposition cutoffs • Move ordering
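
The slides give no code, so here is a minimal sketch of iterative-deepening Principal Variation Search, assuming a hypothetical GameState interface (legal_moves, play, undo, is_terminal, evaluate). It illustrates the null-window search and re-search idea, not the authors' actual implementation.

```python
INF = 10**9

def pvs(state, depth, alpha, beta):
    """Negamax-style PVS: the first move is searched with a full window,
    later moves with a null window, re-searching on a fail-high."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()            # value from the side to move
    first = True
    for move in state.legal_moves():       # ideally ordered: TT move, killers, history
        state.play(move)
        if first:
            score = -pvs(state, depth - 1, -beta, -alpha)
            first = False
        else:
            score = -pvs(state, depth - 1, -alpha - 1, -alpha)   # null-window probe
            if alpha < score < beta:                             # fail-high: re-search
                score = -pvs(state, depth - 1, -beta, -score)
        state.undo()
        if score >= beta:
            return score                   # beta cutoff
        alpha = max(alpha, score)
    return alpha

def iterative_deepening(state, max_depth):
    """Search depth 1, 2, ... so that shallow results seed the move ordering
    (transposition table, history) for the deeper iterations."""
    value = 0
    for depth in range(1, max_depth + 1):
        value = pvs(state, depth, -INF, INF)
    return value
```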

  8. Transposition Table • Use the two-deep replacement scheme: • 2^25 (32M) double entries
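
A sketch of the two-deep replacement idea: each hash index keeps one entry that is only replaced by a deeper search and one that is always replaced by the newest result. The entry fields are typical transposition-table contents; the slide only specifies the scheme and the 2^25 double entries.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TTEntry:
    key: int          # full hash key, to detect index collisions
    depth: int        # search depth of the stored result
    value: int
    flag: str         # 'exact', 'lower' or 'upper' bound
    best_move: Optional[int] = None

class TwoDeepTable:
    def __init__(self, bits=20):              # the slide uses 2^25 double entries
        self.mask = (1 << bits) - 1
        self.deep = [None] * (1 << bits)      # kept while it is the deeper search
        self.new = [None] * (1 << bits)       # always overwritten

    def store(self, entry: TTEntry):
        i = entry.key & self.mask
        old = self.deep[i]
        if old is None or entry.depth >= old.depth:
            self.new[i] = old                 # demote the old deep entry
            self.deep[i] = entry
        else:
            self.new[i] = entry               # shallower result: newest slot only

    def probe(self, key: int) -> Optional[TTEntry]:
        i = key & self.mask
        for e in (self.deep[i], self.new[i]):
            if e is not None and e.key == key:
                return e
        return None
```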

  9. History Heuristic • History Heuristic employs one table for both black and white moves, utilizing the Go proverb “the important move of my opponent is important to me as well”
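
A minimal sketch of such a shared history table: one score per board square, used by both colours, so a cutoff move for one side also raises the priority of that square for the opponent. The depth-squared reward is a common choice, not necessarily the authors'.

```python
class HistoryTable:
    def __init__(self, board_size):
        # One counter per square, shared by black and white.
        self.score = [0] * (board_size * board_size)

    def reward(self, square, depth):
        # Weight cutoffs found near the root more heavily.
        self.score[square] += depth * depth

    def key(self, square):
        return self.score[square]
```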

  10. Move Ordering • First the transposition move is tested • Second are the killer moves • Third the rest of the moves are ordered by the history heuristic
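
A sketch of that three-stage ordering, with moves represented as board squares; tt_move, killers and history are assumed to come from structures like the sketches above.

```python
def order_moves(moves, tt_move, killers, history):
    """Transposition move first, then killer moves, then the rest by history score."""
    ordered = []
    if tt_move in moves:
        ordered.append(tt_move)
    ordered += [m for m in killers if m in moves and m != tt_move]
    rest = [m for m in moves if m not in ordered]
    rest.sort(key=history.key, reverse=True)   # highest history score first
    return ordered + rest
```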

  11. Evaluation Function • A simple evaluation function uses a three-valued scheme [1 (win), 0 (unknown), -1 (loss)] • Efficient for small boards • Becomes useless for strong play on large boards
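
A minimal sketch of the three-valued scheme; since a capture ends the game, a position is either a proven win, a proven loss, or unknown. The predicate names are hypothetical.

```python
WIN, UNKNOWN, LOSS = 1, 0, -1

def three_valued_eval(state):
    if state.opponent_stone_captured():   # hypothetical predicate names
        return WIN
    if state.own_stone_captured():
        return LOSS
    return UNKNOWN
```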

  12. Evaluation Function (2) • Proposed heuristic evaluation function is based on four principles: • Maximizing liberties • Maximizing territory • Connecting stones • Making eyes

  13. Maximizing Liberties and Territory • The number of liberties is a lower bound on the number of moves needed to capture a stone • Maximizing territory is a long-term goal since it allows the player to put more stones inside his own territory (before filling it completely)

  14. Connecting and Making Eyes • Why connect stones into a larger group? • A small number of larger groups is easier to defend than a large number of small groups • Making eyes is a concept derived from normal Go • After a player has run out of alternative moves, he might be forced to fill his own eyes

  15. Implementation • Use bit-boards for fast computation of the board features • Territory is estimated by a weighted sum of the number of first-, second- and third-order liberties
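
A sketch of that territory estimate using plain Python sets instead of bit-boards: the n-th-order liberties are the empty points reached after n dilation steps from a player's stones. The weights are placeholders; the slide does not give the actual values.

```python
def neighbours(p, size):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < size and 0 <= y + dy < size:
            yield (x + dx, y + dy)

def liberties_by_order(stones, empty, size, max_order=3):
    """Return [first-, second-, third-order liberties] as sets of empty points."""
    reached = set(stones)
    frontier = set(stones)
    orders = []
    for _ in range(max_order):
        frontier = {n for p in frontier for n in neighbours(p, size)
                    if n in empty and n not in reached}
        orders.append(frontier)
        reached |= frontier
    return orders

def territory_estimate(stones, empty, size, weights=(1.0, 0.5, 0.25)):
    # Weighted sum of first-, second- and third-order liberty counts.
    return sum(w * len(libs)
               for w, libs in zip(weights, liberties_by_order(stones, empty, size)))
```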

  16. Implementation (2) • Connections and eyes are more costly to calculate than the liberties • Use the Euler number to estimate the connections and eyes • The Euler number of a binary image is the number of objects minus the number of holes
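
To make the definition concrete, here is a sketch that counts objects minus holes by flood fill; the authors' bit-board implementation presumably computes this far more cheaply, but the quantity is the same.

```python
def euler_number(stones, size):
    """Euler number of one colour's stones: #objects - #holes (4-connectivity)."""
    def flood(start, cells):
        seen, stack = {start}, [start]
        while stack:
            x, y = stack.pop()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in cells and n not in seen:
                    seen.add(n)
                    stack.append(n)
        return seen

    # Objects: connected groups of stones.
    remaining, objects = set(stones), 0
    while remaining:
        remaining -= flood(next(iter(remaining)), remaining)
        objects += 1

    # Holes: background components that do not touch the board edge.
    background = {(x, y) for x in range(size) for y in range(size)} - set(stones)
    remaining, holes = set(background), 0
    while remaining:
        comp = flood(next(iter(remaining)), remaining)
        remaining -= comp
        if all(0 < x < size - 1 and 0 < y < size - 1 for x, y in comp):
            holes += 1
    return objects - holes
```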

  17. Euler Number • Minimizing the Euler number thus connects stones as well as creates eyes • Examples: E = 3 - 19 = -16, E = 1 - 18 = -17
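
A sketch tying the four principles together as a single heuristic value, reusing the helper sketches above; the weights are placeholders and the exact combination used by the authors is not given on the slides.

```python
def heuristic_eval(own, opp, empty, size, w=(1.0, 1.0, 2.0)):
    """Value from the side to move: liberties + territory - Euler number,
    computed for both players and differenced."""
    def side(stones):
        libs = liberties_by_order(stones, empty, size)       # from the sketch above
        return (w[0] * len(libs[0])                          # maximize liberties
                + w[1] * territory_estimate(stones, empty, size)
                - w[2] * euler_number(stones, size))          # low Euler = connected, eyed
    return side(own) - side(opp)
```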

  18. Outline • Introduction • Motivation • Method Summary • Results and Analysis • Conclusions

  19. Results and Analysis • The program solved the empty square boards up to 5x5

  20. First Player, First Win? • 2x2 board: no • 3x3 board: yes • 4x4 board: no • 5x5 board: yes • 6x6 board: don’t know yet! A test run on the 6x6 board took a few weeks (before a system crash); the solution is at least 24 plies deep!

  21. Experiment Results • The table shows the winner, the depth (in plies) of the shortest solutions, the number of nodes, time and the effective branching factor

  22. 6x6 board • Two alternative ways are used for testing:

  23. Another Approach • In 2002, Cazenave solved Atari-Go on 6x6 with a crosscut start • Used the Gradual Abstract Proof Search (GAPS) algorithm, a combination of alpha-beta with a clever threat-extension scheme • Proved a win at depth 17 in around 10 minutes

  24. Comparison • The authors’ algorithm found the shortest win at depth 15 in a comparable time frame • After incorporating the same search enhancements into GAPS, Cazenave also found the solution at depth 15 in 26 seconds

  25. 6x6 board with Stable Starting • Still too difficult! (estimated at about one month of computation time) • The black win (at depth 31) was proved by manually playing the first move

  26. Solutions for Non-empty 6x6 board

  27. Impact of Search Enhancements • Experimental results show that the enhancements become increasingly effective on larger boards

  28. Comparison of Evaluation Functions • Authors’ heuristic evaluation function performs better!

  29. Program Performance • Against Rainer Schutze’s freeware “AtariGo 1.0” on a 10x10 board, the program won most of the games • After adding an implementation for extending ladders, it won them all! • Against an amateur 1-dan on a 9x9 board, the program was sometimes able to win, but most of the games were lost!

  30. Future Work • Solve the empty 6x6 board and the 8x8 board with a crosscut start • Since search extensions for ladders are essential for strong play on larger boards, future work will focus on selective search extensions • Test the algorithm in Go!

  31. Conclusions • Authors’ conclusions: • Atari-Go is solved on the 3x3, 4x4, 5x5 and some non-empty 6x6 boards • the combination of search enhancements and the heuristic evaluation function is effective • My conclusions: • Focusing on enhancements, or trying to solve larger boards one by one, might not be the right direction • We need something different!
