CC+

Presentation Transcript

  1. CC+ • Swinburn Miranda – Computer Science • William Moseson – Engineering Physics • Ningchuan Wan – Computer Science

  2. Introduction • Goal: to develop an AI that plays Chinese checkers at a high level. • Developed an algorithm that makes intelligent moves based on heuristic-based search and learning. • Motivated by the experience gained and the desire to win.

  3. Method – Heuristic • Distance Max • Total distance of our pieces from the starting point • Penalizes stragglers • Penalizes pieces outside a certain width threshold (see the sketch below)
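
A minimal sketch of how the Distance Max heuristic could be scored. The grid coordinates, start cell, width threshold, and penalty weights are all illustrative assumptions; the slides name the criteria but not an implementation:

```python
# Sketch of the "Distance Max" heuristic: reward total progress from the
# start corner, penalize the straggler gap and pieces that wander wide.
START = (0, 6)          # assumed start-corner cell (row, col)
WIDTH_THRESHOLD = 2     # assumed max lateral drift before the width penalty
STRAGGLER_WEIGHT = 5.0  # assumed penalty weights
WIDTH_WEIGHT = 2.0

def dist(a, b):
    # Manhattan distance as a stand-in for true hex-grid distance.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def distance_max(pieces):
    progress = [dist(p, START) for p in pieces]
    total = sum(progress)
    straggler_gap = max(progress) - min(progress)  # lag of the last piece
    wide = sum(1 for p in pieces if abs(p[1] - START[1]) > WIDTH_THRESHOLD)
    return total - STRAGGLER_WEIGHT * straggler_gap - WIDTH_WEIGHT * wide

print(distance_max([(3, 6), (4, 5), (1, 9)]))  # -> 0.0
```

Maximizing this score pushes the formation forward while keeping it compact, which is what the straggler and width penalties encode.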

  4. Method – Strategy Adjustment • Adjust the search type being used based on the phase of the game: Opening, Midgame, or Endgame

  5. Method – Strategy Adjustment • Opening • BFS is applied to move the maximum number of pieces the furthest • Midgame • BFS for our own pieces as well as the opponent's, enabling blocking & long moves • Endgame • Finding the shortest path to the goal (see the sketch below)
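
A sketch of that phase switch, assuming simple move-count and pieces-home thresholds; the slides name the three phases but not the boundaries:

```python
def choose_strategy(move_number, pieces_home, total_pieces=10):
    # The thresholds below are illustrative assumptions.
    if move_number < 10:
        return "opening"   # BFS to advance the maximum number of pieces
    if pieces_home < total_pieces - 2:
        return "midgame"   # BFS for us and the opponent: blocking, long moves
    return "endgame"       # shortest path into the goal triangle

print(choose_strategy(move_number=15, pieces_home=4))  # -> "midgame"
```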

  6. Method – Gray Piece Placement • Block an opponent’s ladder • Block an opponent’s chain • Block a potentially large jump • Help your pieces at the end of the game (see the placement sketch below)
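
One way to combine those goals is a weighted score over candidate cells, as in this sketch; the cell sets and weights are stand-in assumptions, since the slides list the criteria but not the selection logic:

```python
def best_gray_cell(candidates, ladder_cells, chain_cells, jump_cells,
                   helpful_cells, endgame):
    """Pick the candidate cell with the highest blocking/helping score.
    The cell sets are assumed to be precomputed by separate analysis."""
    def score(cell):
        s = 0.0
        if cell in ladder_cells:
            s += 3.0   # blocks an opponent ladder
        if cell in chain_cells:
            s += 2.0   # blocks an opponent chain
        if cell in jump_cells:
            s += 2.5   # blocks a potentially large jump
        if endgame and cell in helpful_cells:
            s += 1.5   # helps our own pieces late in the game
        return s
    return max(candidates, key=score)

print(best_gray_cell({(4, 4), (5, 5)}, ladder_cells={(4, 4)},
                     chain_cells=set(), jump_cells={(5, 5)},
                     helpful_cells=set(), endgame=False))  # -> (4, 4)
```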

  7. Method – Learning • Game logs are used for learning • A parser converts the game log into a feature vector • Feature vector: <y = {1, -1}> <Current_BoardPositionOfPiece>:<{0,1,2,3}> … <Next_BoardPositionOfPiece>:<{0,1,2,3}> … • The feature vector represents a transition from the current state to the next • y is assigned based on whether the player's transitions led to a final win or not • SVMlight is used for training and prediction (see the sketch below)
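
A sketch of that log-to-feature conversion in SVMlight's sparse "label index:value" line format. The tiny four-cell board and the zero-skipping are illustrative; the slides specify only the {0,1,2,3} cell encoding and the ±1 label:

```python
def to_svmlight(label, current_board, next_board):
    """Encode one (current -> next) transition as an SVMlight line.
    label is +1 if the moving player eventually won, else -1.
    Cell values: assumed 0 = empty, 1/2 = players, 3 = gray piece."""
    cells = list(current_board) + list(next_board)
    feats = " ".join(f"{i + 1}:{v}" for i, v in enumerate(cells) if v != 0)
    return f"{label:+d} {feats}"

# A hypothetical four-cell board before and after a move by player 1.
print(to_svmlight(1, [1, 0, 2, 0], [0, 1, 2, 0]))
# -> "+1 1:1 3:2 6:1 7:2"
```

Lines in this format can be fed directly to SVMlight's svm_learn for training and svm_classify for prediction.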

  8. System Architecture (Proposed) • [Diagram: Decision Module, I/O Module, Communication Server, Knowledge Base, Log, Learning Engine]

  9. System Architecture • Decision Module • Evaluates a board state • Outputs a move • Learning Engine • Influences the Decision Module based on historical results • Knowledge Base • Contains information obtained from previous games • Data Sources • Training data
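
A hypothetical sketch of how those modules could fit together; the slides describe responsibilities, not APIs, so every class and method name here is an assumption:

```python
class KnowledgeBase:
    """Holds information obtained from previous games, e.g. a trained model."""
    def __init__(self, model=None):
        self.model = model

class LearningEngine:
    """Trains on accumulated game logs and updates the knowledge base."""
    def train(self, kb, game_logs):
        kb.model = f"model trained on {len(game_logs)} logs"  # placeholder

class DecisionModule:
    """Evaluates a board state and outputs a move, consulting the
    knowledge base built from historical results."""
    def __init__(self, kb):
        self.kb = kb

    def choose_move(self, board, legal_moves, evaluate):
        # 'evaluate' is the heuristic, optionally biased by self.kb.model.
        return max(legal_moves, key=lambda m: evaluate(board, m))
```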

  10. Experimental Evaluation: Methodology • Evaluation of our software is based on its performance in play • Variables: • Heuristic used • Search depth • Heuristic weighting • Gray piece placement • Search type

  11. Experimental Evaluation: Results • Tournament results: • Consistently finished in 5th, 6th, or 7th place • Wins 100% of the time against the greedy strategy

  12. Future Work • Learning • Training data size • Heuristic • Include more variables • Gray Pieces • More intelligent placement • More analysis of when and where to place

  13. Conclusions • Developed an AI that outperforms a greedy strategy • Gained a better understanding of AI and machine learning

  14. The Team • Swinburn Miranda – Learning Module • Ningchuan Wan – Heuristic Development • Will Moseson – Gray Piece Placement and Testing