Commentary on Session A

Presentation Transcript


  1. Scheduling a Scheduling Competition, Providence, Sept 2007
     Commentary on Session A
     J. Christopher Beck
     Department of Mechanical & Industrial Engineering
     University of Toronto, Canada
     jcb@mie.utoronto.ca

  2. 1+3 Papers
     • Ghersi et al. focuses on an operational question
       • how do we judge the results?
       • important, technical issue
       • valuable for whoever runs the competition
     • The other 3 papers focus on a more strategic question
       • what problem types should the competition address?

  3. Some Operational Issues
     • Automated verification of results (a sketch follows below)
     • Entries need to run on the same platform
     • Not just source code vs. binary
     • License issues?
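To make the verification point concrete, here is a minimal sketch of an automated result checker. It assumes a hypothetical submission format in which each operation carries a job id, a machine id, a start time, and a duration; the format, the checks, and the name verify_schedule are illustrative assumptions, not rules from the papers or from the competition itself.

# A minimal sketch of automated result verification, assuming a hypothetical
# submission format: each operation has a job id, a machine id, a start time,
# and a duration. The field names and checks are illustrative assumptions,
# not the competition's actual rules.
from collections import defaultdict

def verify_schedule(operations):
    """Check that no machine runs two operations at once, that operations of
    the same job do not overlap, and report the resulting makespan."""
    by_machine = defaultdict(list)
    by_job = defaultdict(list)
    for op in operations:
        by_machine[op["machine"]].append(op)
        by_job[op["job"]].append(op)

    def no_overlap(ops):
        ops = sorted(ops, key=lambda o: o["start"])
        return all(a["start"] + a["duration"] <= b["start"]
                   for a, b in zip(ops, ops[1:]))

    feasible = (all(no_overlap(ops) for ops in by_machine.values())
                and all(no_overlap(ops) for ops in by_job.values()))
    makespan = max(op["start"] + op["duration"] for op in operations)
    return feasible, makespan

# Illustrative entry: two jobs, two machines.
entry = [
    {"job": 0, "machine": 0, "start": 0, "duration": 3},
    {"job": 0, "machine": 1, "start": 3, "duration": 2},
    {"job": 1, "machine": 1, "start": 0, "duration": 3},
    {"job": 1, "machine": 0, "start": 3, "duration": 4},
]
print(verify_schedule(entry))  # expected: (True, 7)

A fuller verifier would presumably also recheck problem-specific constraints (routings, release dates, precedences) against the published instance data, but even a check of this size settles most disputes about reported objective values.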

  4. Integration vs. Focus
     • Rich problems (Le Pape; Guerri et al.) vs. a single fundamental issue (Cicirello)
     • Things to consider:
       • industry impact
       • potential for research progress/breakthroughs
       • will we understand the results?
       • barriers to entry
         • easier to participate in a Cicirello-style track
       • is this an OR vs. AI issue?

  5. Multiple Tracks?
     • Perhaps of increasing difficulty
     • Things to consider:
       • spreading the competition too much
       • one entry per track is not very interesting
       • “granularity” of tracks
       • creating a challenge
       • barriers to entry

  6. Robustness as a Criterion
     • Bias evaluation toward good performance on all instances (Le Pape); a scoring sketch follows below
     • Best all-round single-machine scheduler across different optimization functions (Cicirello)
     • Things to consider:
       • “jack of all trades, master of none”
       • give up on being “the” best on a given problem
       • marketing
       • industry vs. research
       • another OR vs. AI issue?
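As an illustration of what a robustness-biased evaluation could look like, here is a minimal sketch of one possible scoring rule. Normalizing each result against the best value any entrant found on that instance, and then ranking by the worst (or geometric-mean) ratio, are my own illustrative choices rather than a rule proposed by Le Pape or Cicirello.

# A minimal sketch of a robustness-biased scoring rule: normalize each result
# against the best value any entrant achieved on that instance, then aggregate
# by the worst ratio and by the geometric mean. Both the normalization and the
# aggregation are illustrative choices, not a rule from the papers.
from statistics import geometric_mean

def robustness_scores(costs):
    """costs[solver][instance] -> objective value (lower is better).
    Returns per-solver worst and geometric-mean ratios to the best known
    value on each instance; smaller is better for both."""
    instances = list(next(iter(costs.values())))
    best = {i: min(costs[s][i] for s in costs) for i in instances}
    scores = {}
    for s in costs:
        ratios = [costs[s][i] / best[i] for i in instances]
        scores[s] = {"worst": max(ratios), "geomean": geometric_mean(ratios)}
    return scores

# Illustrative data: solver A is best on two of three instances but fails
# badly on the third; solver B is never the best but is close everywhere.
costs = {
    "A": {"i1": 100, "i2": 90, "i3": 300},
    "B": {"i1": 105, "i2": 95, "i3": 160},
}
print(robustness_scores(costs))  # B wins under both robustness aggregates

The point of the example is that a solver which is never the single best on any instance can still rank first once the aggregate penalizes bad worst-case behaviour.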

  7. Competition or “Challenge”
     • Competition
       • like the SAT or Planning competitions
       • multiple tracks
     • Challenge
       • one problem type (e.g., one of Guerri et al.’s, one of Le Pape’s)
       • long work horizon (e.g., 6 months – 1 year)
       • like the CP Modeling Challenge (2005)

  8. Competition or “Challenge”
     • Things to consider:
       • marketing
       • (end user) industry interest and commitment
       • barriers to entry
       • potential for lack of community interest
       • organizational overhead
