
Distributed Approaches on Scheduling


Presentation Transcript


  1. Distributed Approaches on Scheduling, Young K. Ko, April 15, 1999

  2. Distributed Scheduling: Two Different Approaches • Distributed Artificial Intelligence approach • Distributed Computing approach

  3. Two Topics and References
  • Distributed Artificial Intelligence Approach
  • "A computational study on design and performance issues of multi-agent intelligent systems for dynamic scheduling environments," P. C. Pendharkar, Expert Systems with Applications, 1999
  • "Application of a behavior-based scheduling approach for distributed scheduling of an assembly shop," R. Bemelman et al., Production Planning and Control, 1999
  • "Coordination of multiple agents for production management," Jyi-Shane Liu, Katia Sycara, Annals of O.R., 1997
  • "Using multi-agent architecture in FMS for dynamic scheduling," Khalid Kouiss, Henri Pierreval, Nasser Mebarki, Journal of Intelligent Manufacturing, 1997
  • "CAMPS: a constraint-based architecture for multiagent planning and scheduling," Kazuo Miyashita, Journal of Intelligent Manufacturing, 1998
  • "Coordination Mechanisms for Multi-Agent Manufacturing Systems: Applications to Integrated Manufacturing Scheduling," Riyaz Sikora, Michael J. Shaw, IEEE Trans. Eng. Management, 1997

  4. Two Topics and References
  • Distributed Parallel Computing
  • "Distributed computing approaches toward manufacturing scheduling problems," Thomas K. Keyser, Robert P. Davis, IIE Transactions, 1998

  5. Reviews on Agent Based Scheduling

  6. Related Terms • Monolithic vs. Holonic • Coordination • Negotiation • Dynamic Scheduling

  7. Key Issues • Agent Structure • Scheduling strategy/knowledge for each resource • Knowledge of the environment • Communication protocol (ontology) • System Architecture • Responsibility of each agent • Negotiation strategy / coordination method • Conflict resolution

  8. Agent Structure 1 (Khalid Kouiss) [diagram: an agent built from static knowledge on itself and on the others (objectives, capabilities, engagements), problem-solving expertise (know-how), and communication (messages, protocols); the agent perceives and acts on the environment and exchanges messages with other agents]

  9. Agent Structure 2 (Riyaz Sikora) [diagram: an agent with a control unit (protocols, behavior model, list of acquaintances) handling interactions with other agents, a functional component (computational procedures), a knowledge base (domain knowledge and data), and a learning module]

  10. Multi-Agent Architecture 1 [diagram: a supervisory agent and a human operator above Agents 1-4, each attached to a work center (WC1-WC4) and to a shared database]

  11. Agent Model (CAMPS) [diagram: clients send orders to a manager agent; the manager passes jobs to planner agents, which decompose jobs into operations with precedence constraints and hand them to scheduler agents]

  12. Multi-Agent Architecture 2 [diagram: Agents 1-5 each manage a corresponding entity (Entity 1-5); Agent 4 is itself decomposed into sub-agents 41-43]

  13. Agents in CONA [diagram: each of resources 1..m has a resource agent, and each of jobs 1..n has a job agent; jobs are made up of operations OPij that link the job side to the resource side]

  14. Coordination Information [diagram: job agents and resource agents coordinate by writing to and consulting information attached to each operation: boundary, temporal slack, weight, bottleneck tag, resource slack, change frequency]

  15. Distributed Computing Approaches Toward Manufacturing Scheduling Problems, Thomas K. Keyser (Dept. of Engineering, Univ. of Southern Colorado), Robert P. Davis (Dept. of Industrial Engineering, Clemson Univ.), IIE Transactions, 1998

  16. Authors • Assistant Professor, Industrial and Manufacturing Systems Engineering (IMSE), Ohio University; conducted nine projects related to this topic from 1993 to 1998, supported by the National Science Foundation and Sun Microsystems. • Professor, Dept. of Industrial Engineering, Clemson University, South Carolina.

  17. Current Technology Barrier [two sketches: computing cost vs. number of processors (roughly linear growth) and computing cost vs. speed of a single processor (roughly exponential growth)] Idea: to utilize more processors we pay a cost that increases roughly linearly with processor count, whereas getting more speed from a single processor costs exponentially more; and despite the increasing cost, a technological barrier limits how far single-processor speed can be pushed with money.

  18. Idea • A collection of small processors can have capacity equivalent to a large mainframe computer. • This approach achieves cheap computing. • Moreover, with distributed computing we can handle problems that cannot be managed with current single-processor technology.

  19. Introduction • Parallel and distributed computing: to gain significant computational power. • Distributed/Parallel Computing - Definition • Distribution of the execution of tasks to multiple processors in such a way that overall response time is minimized. • Distributed System - Definition • A computer system made up of many smaller, and potentially independent, computers, such as a network of computers.

  20. Issues associated with parallel/distributed computing • The problem has to be broken into tasks that can be allocated to individual processors (problem decomposition). • Tasks can be allocated based on the current state of the computer system. • Communication has to be established. • Tasks may require synchronization. • Application development depends on the target architecture of the computer system.

  21. Performance factors • The number of processors/workstations used • The partitioning of the problems • The allocation of the sub problems • The types of scheduling problem solved • The size of the problem solved.

  22. Distributed and Parallel Processing • Two main distinctions between distributed and parallel computer systems: • A distributed system may accommodate hundreds of different individual computers, and computers may be continually added to it. • Distributed systems require more elaborate programming (for communication).

  23. Performance Measures
  • Speedup: S(p) = (time to solve a problem with a sequential program) / (time to solve the same problem using p workstations)
  • Generalized incremental efficiency (for a baseline run on q < p processors): GIE(p) = [q x (time to solve the problem with q processors)] / [p x (time to solve the problem with p processors)]
  • Sub-linear: S(p) < p
  • Super-linear: S(p) > p
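The two metrics on this slide can be written down directly. A minimal Python sketch (function and variable names are mine, not the paper's, and the GIE convention q·T(q) / (p·T(p)) is the reading assumed above):

```python
def speedup(t_sequential, t_parallel):
    """S(p): sequential solve time divided by the p-workstation solve time."""
    return t_sequential / t_parallel

def gie(t_q, q, t_p, p):
    """Generalized incremental efficiency between a q- and a p-processor run,
    taken here as GIE(p) = (q * T(q)) / (p * T(p))."""
    return (q * t_q) / (p * t_p)

s = speedup(120.0, 20.0)   # 6.0: with, say, p = 8 workstations this is sub-linear
e = gie(40.0, 4, 22.0, 8)  # < 1: doubling processors less than halved the time
```

A value of GIE below 1 means the extra processors were not fully converted into proportional time savings, which matches the sub-linear behavior reported later in the deck.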

  24. [plot: speedup S(p) vs. number of processors p; the diagonal S(p) = p separates the super-linear region (above) from the sub-linear region (below)]

  25. Parallel branch and bound algorithm • Branch and bound algorithms are adopted for parallel implementation because they are easily partitioned into sub-problems, and the sub-problems require a minimal amount of communication and synchronization. • The solution performance of parallel B&B algorithms is strongly affected by the work distribution and allocation scheme.
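As a hedged illustration of this idea, here is a toy parallel B&B in Python: first-level subtrees become independent tasks for a thread pool, and all workers prune against a shared global bound. The example problem (minimizing total flow time of jobs on one machine) and every name here are my own choices for the sketch, not the authors' implementation (they used C and PVM):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def total_flow_time(times):
    """Sum of completion times for a fixed job order."""
    clock, total = 0, 0
    for p in times:
        clock += p
        total += clock
    return total

class GlobalBound:
    """Shared incumbent value, updated under a lock (the 'global bound information')."""
    def __init__(self):
        self.best = float("inf")
        self.lock = threading.Lock()

    def update(self, value):
        with self.lock:
            if value < self.best:
                self.best = value

def search(cost, clock, remaining, bound):
    """Depth-first B&B over job permutations, pruning against the shared bound."""
    if not remaining:
        bound.update(cost)
        return
    # Lower bound: finish the remaining jobs in shortest-processing-time order.
    lb, t = cost, clock
    for p in sorted(remaining):
        t += p
        lb += t
    if lb >= bound.best:
        return  # prune this subtree
    for i, p in enumerate(remaining):
        search(cost + clock + p, clock + p,
               remaining[:i] + remaining[i + 1:], bound)

def parallel_bb(times):
    """First-level subtrees (choice of the first job) become independent tasks."""
    bound = GlobalBound()
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [
            pool.submit(search, p, p, times[:i] + times[i + 1:], bound)
            for i, p in enumerate(times)
        ]
        for f in futures:
            f.result()  # propagate any worker exception
    return bound.best

# Minimizing total flow time: the optimum is the SPT sequence.
# parallel_bb([4, 2, 6, 1]) -> 24  (order 1,2,4,6: completions 1,3,7,13)
```

The only synchronization is the lock around the incumbent, matching the slide's point that B&B sub-problems need minimal communication.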

  26. Literature
  • El-Dessouki and Huen (1980): a B&B algorithm using depth-first search minimized the overhead associated with distributed computer systems.
  • Cannon and Hoffman (1990): solved large-scale ILPs over a network of computers (cutting planes).
  • Miller and Pekny (1989): solved the TSP on a parallel computer system; sub-linear performance; excessive communication was present.
  • Rushmeier and Nemhauser (1993): solved the set covering problem; a mixed search strategy is beneficial for it; performance was best when the algorithm communicated only global bound information to other processors.

  27. Scheduling problems • Jobshop scheduling problem • Single Machine earliness/tardiness problem.

  28. Methodology
  • Partitioning the problem into independent tasks involves balancing the tradeoff between overhead and resource utilization: partitioning at the top level causes more communication, while partitioning at a lower level may cause lower resource utilization.
  • Partition Rules: Rule 1 partitions the problem at the first level, Rule 2 at the second level, and Rule 3 at the third level.
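A minimal sketch of what the three partition rules enumerate, assuming for illustration a tree with a fixed branching factor (the paper's actual branching structure may differ):

```python
from itertools import product

def partition(level, branching=2):
    """Partition rule k: fix the first `level` branching decisions of the
    search tree; each fixed prefix of decisions is an independent subproblem."""
    return list(product(range(branching), repeat=level))

# Rule 1 (first level, binary tree): 2 subproblems.
# Rule 3 (third level): 8 subproblems, i.e. finer-grained tasks to distribute.
```

Each returned tuple is the decision prefix a worker would receive together with the problem data, after which it runs B&B on its own subtree.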

  29. Problem Partition Strategies [tree diagram: rule PR1 partitions the root problem P0 at the first level (P1, P2, P3); PR2 partitions at the second level (P11, P12, P21, P22); PR3 at the third level (P111, P112, ...)]

  30. [diagram: parent/child workstation protocol. The parent workstation runs a main/distributed process and a communication process; it starts each child process and delivers the problem information, then exchanges new global bound information with the children in both directions. Each child workstation delivers potential new local bound information with its associated solution and, on request, delivers its final solution.]

  31. Task Allocation
  • Static: assignment assumes that there exists a fixed set of available workstations.
  • Dynamic: 1. choose a maximum number of workstations to utilize from the overall number in the system; 2. keep only workstations with enough available memory to complete their tasks (memory availability); 3. sort candidates by potential workstation performance (e.g., processor speed, MFLOPS).
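The three dynamic-allocation steps amount to a filter-sort-truncate pipeline. A sketch in Python; the station attributes and names are illustrative assumptions, not the paper's data structures:

```python
def allocate(workstations, mem_needed_mb, max_workstations):
    """Dynamic allocation sketch: drop stations without enough free memory,
    rank the rest by advertised speed (MFLOPS), take at most max_workstations."""
    eligible = [w for w in workstations if w["free_mem_mb"] >= mem_needed_mb]
    eligible.sort(key=lambda w: w["mflops"], reverse=True)
    return [w["name"] for w in eligible[:max_workstations]]

fleet = [
    {"name": "ws1", "mflops": 12.0, "free_mem_mb": 64},
    {"name": "ws2", "mflops": 25.0, "free_mem_mb": 16},   # too little memory
    {"name": "ws3", "mflops": 18.0, "free_mem_mb": 128},
]
allocate(fleet, mem_needed_mb=32, max_workstations=2)  # -> ["ws3", "ws1"]
```

Static allocation would skip the filtering and ranking and simply use a predetermined list.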

  32. Manufacturing Scheduling Model: Average Flow Time Minimization [the mathematical program (Minimize ... s.t. ...) is not recoverable from the transcript] • To solve this problem with distributed computing, the system first determines the binary sequencing variables Yk. • After a sequence is determined, the problem reduces to an LP optimization.
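Once the binary variables fix a job order, the objective itself is cheap to evaluate. A small illustrative sketch for the single-machine case (the paper's full model, including its LP stage, is not reproduced here):

```python
def average_flow_time(sequence):
    """With the binary sequencing variables Yk fixed (i.e., a job order chosen),
    mean flow time on a single machine is the mean of the completion times."""
    clock, total = 0, 0
    for p in sequence:
        clock += p
        total += clock
    return total / len(sequence)

average_flow_time([2, 4, 6])  # completions 2, 6, 12 -> mean 20/3
```

This is the quantity each B&B leaf would report back as a candidate bound.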

  33. Model Implementation
  • 5 machines / 5, 6, 7 jobs
  • Number of workstations increased from 4 to 8 and 16
  • Environment: Sun workstations on a 10 Mbit/s Ethernet LAN
  • Algorithms written in C; communication via PVM (Parallel Virtual Machine)

  34. Experimental Results and Analysis [chart: elapsed time vs. number of workstations (2-16) for partition rules PR #1-#3, 7-job E/T problem]

  35. Experimental Results and Analysis [chart: results vs. number of workstations (2-16) for partition rules PR #1-#3, 5-job job-shop problem]

  36. Experimental Results and Analysis [chart: average number of nodes investigated vs. number of workstations (2-16) for partition rules PR #1-#3, 6-job E/T problem]

  37. Conclusions • Static task allocation was superior to dynamic task allocation in 100 of 108 cases. • It is desirable to keep the parent workstation's utilization as low as possible. • The utility of this distributed algorithm lies in taking advantage of unused computing capacity. • The distributed computing model needs to be adjusted to examine specific scheduling problems and other classes of problems, taking specific problem structures and parameters into account.

  38. Discussion
  • Scheduling of processors to solve scheduling problems(?) It is a kind of load distribution problem over a number of processors solving a scheduling problem => this can be an interesting topic for operations researchers.
  • Contributions: We can overcome the technological barrier on computing time with this alternative, and we can utilize otherwise wasted computing resources to improve scheduling response time.
  • If we can develop scheduling-problem-specific methods, they will be very useful in a practical sense!
