
Dynamic Scheduler for Multi-core Systems

National Sun Yat-sen University, Embedded System Laboratory

Presenter: Chien-Chih Chen

Presentation Transcript


  1. National Sun Yat-sen University, Embedded System Laboratory. Dynamic Scheduler for Multi-core Systems. Presenter: Chien-Chih Chen

  2. Research Tree • Analysis of the Linux 2.6 Kernel Scheduler • Dynamic Scheduler for Multi-core Systems • Optimal Task Scheduler for Multi-core Processor

  3. Abstract • Many dynamic scheduling algorithms have been proposed in the past. With the advent of multi-core processors, there is a need to schedule multiple tasks on multiple cores, and the scheduling algorithm needs to utilize all the available cores efficiently. The multi-core processors may be SMPs or AMPs with a shared memory architecture. In this paper, we propose a dynamic scheduling algorithm in which the scheduler resides on all cores of a multi-core processor and accesses a shared Task Data Structure (TDS) to pick up ready-to-execute tasks.

  4. What’s the Problem • Converting sequential code to parallel code, or writing parallel applications by hand, is not an optimal solution. • Most of the proposed scheduling algorithms for multi-core processors don’t support dependent tasks.

  5. Related Work • Dynamic scheduling techniques have been proposed [1] [2] [3] [4] [5] [6] [7] • Data dependency analysis techniques are available [9] [10] [11] [12] [13] [14] [15] • This work: Dynamic Scheduler for Multi-core Systems

  6. Scheduling Techniques • [1] An improved OFT algorithm for reducing preemption. • [2] A data-flow-based technique that discusses data reuse, intended for numeric computation. • [3] Migrates tasks between cores based on recorded resource utilization and throughput. • [4] A compile-time technique that dynamically extracts dependencies and schedules parallel tiles on the cores to improve scalability.

  7. Scheduling Techniques • [5] Uses an FFT language to generate one-dimensional serial, multi-dimensional serial, and parallel FFT schedules. • [6] Rearranges a long task into smaller subtasks to form another task state graph, then schedules them in parallel. • [7] Uses sampling of dominant execution phases to converge to the optimal scheduling algorithm.

  8. Proposed Method • The scheduler resides in the shared memory of the multi-core system, ensuring that all cores share the scheduler code. • The same scheduler code executes on each core and maintains a shared Task Data Structure (TDS) that contains task information.
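The proposed method can be sketched as follows: every core runs the same loop, picking ready tasks from a shared structure under a lock. This is a minimal illustration of the idea, not the paper's implementation; threads stand in for cores, and the task graph, field names, and status codes are assumptions based on the slides.

```python
import threading

class SharedTDS:
    """Shared task structure; all 'cores' (threads) access it through one lock."""
    def __init__(self, tasks):
        self.lock = threading.Lock()
        self.tasks = tasks   # task id -> {"status": 1/2/-1, "deps": n, "children": [...]}
        self.done = []

    def pick_ready(self):
        """Pick one ready task under the lock and mark it running."""
        with self.lock:
            for tid, t in self.tasks.items():
                if t["status"] == 1:      # 1 = ready
                    t["status"] = 2       # 2 = running
                    return tid
        return None

    def finish(self, tid):
        """Record completion and release dependents whose last dependency cleared."""
        with self.lock:
            self.done.append(tid)
            for child in self.tasks[tid]["children"]:
                c = self.tasks[child]
                c["deps"] -= 1
                if c["deps"] == 0:
                    c["status"] = 1       # becomes ready

def scheduler_loop(tds):
    """The same scheduler code that would execute on each core."""
    while True:
        tid = tds.pick_ready()
        if tid is None:
            with tds.lock:
                if len(tds.done) == len(tds.tasks):
                    return
            continue
        # the task body would execute here; then report completion
        tds.finish(tid)

# Hypothetical dependency graph: T0 -> {T1, T2}, T1 -> {T3}
tasks = {
    "T0": {"status": 1,  "deps": 0, "children": ["T1", "T2"]},
    "T1": {"status": -1, "deps": 1, "children": ["T3"]},
    "T2": {"status": -1, "deps": 1, "children": []},
    "T3": {"status": -1, "deps": 1, "children": []},
}
tds = SharedTDS(tasks)
cores = [threading.Thread(target=scheduler_loop, args=(tds,)) for _ in range(2)]
for c in cores: c.start()
for c in cores: c.join()
print(tds.done[0])  # T0 runs first: it is the only initially ready task
```

Note how the single lock serializes all TDS accesses; this is exactly the contention cost the conclusion slide mentions.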

  9. Dynamic Scheduler

  10. Task Data Structure (TDS) • Ti: unique number identifying task i • Tis: status of task i (Ready = 1, Running = 2, Not ready = -1) • Tid: number of other tasks dependent on task i • Tia: list of tasks that become available when task i runs • Tip: priority number of task i • Tidp: data pointer of task i • Tisp: stack pointer of task i • Tix: execution time of task i
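The TDS entry above can be rendered as a small record type; the field names mirror the slide's notation, while the concrete Python types (and the use of plain integers for the data and stack pointers) are assumptions made for illustration.

```python
from dataclasses import dataclass, field

# Status codes from the slide
READY, RUNNING, NOT_READY = 1, 2, -1

@dataclass
class TDSEntry:
    ti: int                  # Ti   - unique number identifying the task
    tis: int = NOT_READY     # Tis  - status: 1 ready, 2 running, -1 not ready
    tid: int = 0             # Tid  - number of other tasks dependent on this task
    tia: list = field(default_factory=list)  # Tia - tasks released when this task runs
    tip: int = 0             # Tip  - priority number
    tidp: int = 0            # Tidp - data pointer (an address in a real system)
    tisp: int = 0            # Tisp - stack pointer (an address in a real system)
    tix: int = 0             # Tix  - execution time

entry = TDSEntry(ti=0, tis=READY, tia=[1, 2], tix=100)
print(entry.tis)  # 1
```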

  11. Priority of Task • Duration of the task (Tix). • Total number of other tasks dependent on the task (Tid).
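The slides name the two priority inputs (Tix and Tid) but give no formula, so the weighted sum below is purely an assumption, sketched to show how the two factors might combine: longer tasks and tasks that unblock more dependents get picked first.

```python
def priority(tix, tid, w_time=1.0, w_deps=10.0):
    """Hypothetical priority: weighted sum of execution time (Tix)
    and number of dependent tasks (Tid). Weights are invented."""
    return w_time * tix + w_deps * tid

# Illustrative tasks: (Tix, Tid)
tasks = {"T1": (150, 6), "T2": (200, 0), "T3": (50, 1)}
order = sorted(tasks, key=lambda t: priority(*tasks[t]), reverse=True)
print(order)
```

With these weights, T1 wins over the longer-running T2 because running it unlocks six other tasks.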

  12. Experimental Setup • Tij: Task j can start after Task i has run for Tij time (i: row number, j: column number). • Example: T01 = 100 means T1 will start after T0 has run for 100 seconds.
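The setup above describes a dependency matrix. A minimal sketch of reading it, under the simplifying assumption that constraining tasks start at time zero: entry T[i][j] gives how long task i must run before task j may start, and the matrix values here are invented, not from the paper.

```python
# Hypothetical 3x3 timing matrix T, rows = constraining task i, cols = task j.
T = [
    # j=0  j=1  j=2
    [0,    100, 300],   # i=0: T1 may start 100 s into T0, T2 after 300 s
    [0,    0,   0],
    [0,    0,   0],
]

def start_time(j, T):
    """Earliest start of task j: the tightest constraint from any task i
    (assuming all constraining tasks start at time 0)."""
    return max(T[i][j] for i in range(len(T)))

print(start_time(1, T))  # 100, matching the slide's T01 = 100 example
```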

  13. Simulation Result • [Chart: running task sets {T2, T3, T5}, {T5}, {T4, T5}, {T1, T2, T3}, {T3, T4, T5} plotted against simulation times 100, 200, 300, 400, 500]

  14. Conclusion • Attempts to increase utilization of multi-core processors. • Task execution is not limited to a single core. • Adds wait time for cores, since accessing the shared task structure requires taking a lock.
