
Presentation Transcript


  1. Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions

  2. Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions

  3. ERA: Environment, Rules, Agents [Liu et al, AIJ 02] • The environment is an n × a board • Each variable is an agent • Each position on the board is a value of a domain • An agent moves within its own row on the board • Each position records the number of violations caused by the positions the other agents currently occupy • Agents try to occupy positions where no constraints are broken (zero positions) • Agents move according to reactive rules
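
  To make the environment concrete, here is a minimal Python sketch (not code from the presentation) of an n × a board with one agent per row; the names Environment, violated, violations, and zero_position are hypothetical, and the pairwise constraint check is left abstract.

```python
import random

class Environment:
    """Hypothetical sketch of the ERA board: n agents (variables), each
    restricted to a row of n_values positions (its domain values)."""

    def __init__(self, n_agents, n_values, violated):
        # violated(agent_i, value_i, agent_j, value_j) -> True if the pair
        # of assignments breaks a constraint (problem-specific, assumed given).
        self.n_agents = n_agents
        self.n_values = n_values
        self.violated = violated
        # Each agent starts at a random position in its own row.
        self.positions = [random.randrange(n_values) for _ in range(n_agents)]

    def violations(self, agent, value):
        # Number of constraints broken if `agent` sat at `value`, given
        # where every other agent currently sits (the shared context).
        return sum(1 for other, ov in enumerate(self.positions)
                   if other != agent and self.violated(agent, value, other, ov))

    def zero_position(self, agent):
        # A "zero position": the agent's current value breaks no constraints.
        return self.violations(agent, self.positions[agent]) == 0
```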

  4. Reactive rules [Liu et al, AIJ 02] • Least-move: choose a position with the minimum violation value • Better-move: choose a position with a smaller violation value than the current one • Random-move: choose a position at random • Combinations of these basic rules form different behaviors
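
  A minimal sketch of the three basic rules, assuming the hypothetical Environment class above; each rule returns the position the agent should move to.

```python
import random

def least_move(env, agent):
    # Least-move: choose a position with the minimum violation value.
    counts = [env.violations(agent, v) for v in range(env.n_values)]
    best = min(counts)
    return random.choice([v for v, c in enumerate(counts) if c == best])

def better_move(env, agent):
    # Better-move: probe a random position and move there only if it has
    # a smaller violation value than the current position.
    current = env.positions[agent]
    candidate = random.randrange(env.n_values)
    if env.violations(agent, candidate) < env.violations(agent, current):
        return candidate
    return current

def random_move(env, agent):
    # Random-move: choose any position in the row, unconditionally.
    return random.randrange(env.n_values)
```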

  5. Big picture • Agents do not communicate but share a common context • Agents keep kicking each other out of their comfortable positions until everyone is happy • Characterization [Hui Zou, 2003]: • Amazingly effective in solving very tight but solvable instances • Unstable in over-constrained cases: agents keep kicking each other out (livelock) • Livelocks may be exploited to identify bottlenecks
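
  The "keep kicking each other out until everyone is happy" dynamic can be sketched as a simple loop over the shared environment (era_solve and behavior are hypothetical names, building on the sketches above): each step, every agent not on a zero position applies its behavior, and the run stops at a full solution or when the step budget is exhausted.

```python
def era_solve(env, behavior, max_steps=150):
    # One ERA run: agents react to the shared environment until all are
    # happy or the step budget runs out.
    for _ in range(max_steps):
        if all(env.zero_position(a) for a in range(env.n_agents)):
            return env.positions          # every agent is happy: a solution
        for agent in range(env.n_agents):
            if not env.zero_position(agent):
                # Moving may create violations for other agents, which
                # "kicks them out" of their positions on later steps.
                env.positions[agent] = behavior(env, agent)
    return env.positions                  # budget exhausted: possible livelock
```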

  6. Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions

  7. Implementation Details • rBLR is used as the main behavior, with random-move as a supporting behavior • Random-move fires about 2% of the time; the remaining 98% of the time rBLR is applied • The "r" in rBLR is set to 3 • Termination: 150 time steps
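
  A sketch of how these settings might be wired together, assuming the rule sketches above and one plausible reading of rBLR (attempt better-move up to r times, then fall back to least-move), with random-move as the 2% supporting behavior. The reading of rBLR and all names here are assumptions, not the authors' code.

```python
import random

R = 3            # the "r" in rBLR (set to 3 in this implementation)
P_RANDOM = 0.02  # random-move fires about 2% of the time
MAX_STEPS = 150  # termination after 150 time steps

def rblr(env, agent):
    if random.random() < P_RANDOM:
        return random_move(env, agent)     # supporting behavior (~2%)
    current = env.positions[agent]
    for _ in range(R):                     # try better-move up to r times
        candidate = better_move(env, agent)
        if candidate != current:
            return candidate
    return least_move(env, agent)          # fall back to least-move

# Usage with the earlier sketch: era_solve(env, rblr, max_steps=MAX_STEPS)
```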

  8. Additions Made to the Basic Algorithm • Optimization criterion: the assigned GTA's preference for the course • Each agent considers a move better when: • The new position has fewer constraint violations than the old one, or • The new position has the same number of constraint violations, but its GTA ranks this course higher than the current position's GTA does • In practice this markedly improved results, by encouraging more movement and selecting better values overall
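
  The modified "better move" test amounts to a lexicographic comparison: first the violation count, then the GTA's preference for the course. A short sketch follows, where preference(value, agent) is a hypothetical stand-in for the GTAAP preference data.

```python
def is_better(env, agent, candidate, preference):
    # Better move = fewer violations, or equally many violations but a
    # position whose GTA ranks this course higher than the current GTA does.
    current = env.positions[agent]
    cand_v = env.violations(agent, candidate)
    curr_v = env.violations(agent, current)
    if cand_v != curr_v:
        return cand_v < curr_v
    return preference(candidate, agent) > preference(current, agent)
```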

  9. Outline • Brief description of the GTAAP system • Review ERA Algorithm • Adaptations/Changes from Basic ERA Implementation – Optimization • Demo/Results • Future Research and Conclusions

  10. Results • Fall 2007

  11. Results • Spring 2007

  12. Results • Fall 2004

  13. Future Research • Testing on upcoming semesters to see how well this aids the assignment process in the real world • Allowing low-priority courses to remain unassigned • Looking into other local search techniques (genetic algorithms, etc.) • Creating hybrids of local searches • Investigating ways to mimic the human process (greedy, yet still making a few "back changes")

  14. Conclusions • The testing suggests that this approach will be a great aid in the assignment process • Its results are statistically about equal to or better than the human-generated solution, though this still needs to be confirmed in the real world • This approach is a very good option when a decent solution is needed in a relatively small amount of time
