
An optimal scheme for Multiprocessor task scheduling: A machine learning approach






Presentation Transcript


  1. An optimal scheme for Multiprocessor task scheduling: A machine learning approach • Aryabrata Basu, Shelby Funk • University of Georgia

  2. The NQ-Wrap Algorithm • Task set τ = [ T1, T2, T3, T4 ] with Ti = (period, execution requirement): T1 = (12,6), T2 = (8,5), T3 = (10,6), T4 = (11,5) • [Figure: NQ-Wrap schedule of τ on a timeline from t = 0 to 18] • 30th IEEE Real Time Systems Symposium 2009
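In NQ-Wrap each task receives a proportional share of every time slice: with Ti = (pi, ei), the allotment in a slice of length L is ℓi,j = L·ei/pi, exactly the computation the next slide performs for the first slice. A minimal sketch of that step (the function name is mine, not the paper's):

```python
def local_exec_times(tasks, slice_len):
    """Per-slice allotments l_{i,j} = slice_len * e_i / p_i for tasks given
    as (period, execution requirement) pairs. Illustrative naming only."""
    return [slice_len * e / p for (p, e) in tasks]

# Task set from the slides; the first slice has length 8.
tasks = [(12, 6), (8, 5), (10, 6), (11, 5)]
print(local_exec_times(tasks, 8))  # [4.0, 5.0, 4.8, 3.636...]
```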

  3. The NQ-Wrap Algorithm (cont.) • T1 = (12,6), T2 = (8,5), T3 = (10,6), T4 = (11,5) • In the first slice σ0 = [0, 8): ℓ1,0 = 8·6/12 = 4 and ℓ2,0 = 5, ℓ3,0 = 4.8 and ℓ4,0 ≈ 3.6 • [Figure: the allotments wrapped onto processors p1–p3, with slices σ0–σ3 marked from t = 0 to 12]
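The "wrap" in NQ-Wrap packs these allotments onto the processors McNaughton-style: fill p1 left to right across the slice, and when an allotment overflows the slice boundary, its remainder continues at time 0 on the next processor, which is what splits some tasks across two processors in the figure. A hedged sketch of that packing, with invented names:

```python
def wrap_schedule(lengths, m, slice_len):
    """McNaughton-style wrap-around within one slice: lay allotments end to
    end on processor 0, 1, ..., splitting an allotment at the slice boundary
    so its remainder continues at time 0 on the next processor.
    Returns, per processor, a list of (task index, start, end) chunks.
    Assumes sum(lengths) <= m * slice_len. Names are illustrative."""
    procs = [[] for _ in range(m)]
    proc, t = 0, 0.0
    for i, length in enumerate(lengths):
        while length > 1e-9:
            chunk = min(length, slice_len - t)
            procs[proc].append((i, t, t + chunk))
            t += chunk
            length -= chunk
            if slice_len - t < 1e-9:  # boundary reached: wrap to next processor
                proc, t = proc + 1, 0.0
    return procs

# Allotments of the slides' task set in the first slice, on 3 processors:
schedule = wrap_schedule([4.0, 5.0, 4.8, 40 / 11], 3, 8)
```

With these numbers, T2's allotment of 5 is split as 4 units at the end of p1 and 1 unit at the start of p2, matching the wrap-around picture.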

  4. Reducing the Overhead • T1 = (12,6), T2 = (8,5), T3 = (10,6), T4 = (11,5) • ℓ1,0 = 8·6/12 = 4 and ℓ2,0 = 5, ℓ3,0 = 4.8 and ℓ4,0 ≈ 3.6 • ℓ1,0 can be increased to 6 without causing any deadline misses, avoiding 4 preemptions • [Figure: modified schedule with T1's enlarged allotment in the first slice]

  5. Reducing the Overhead • T1 = (12,6), T2 = (8,5), T3 = (10,6), T4 = (11,5) • ℓ1,0 = 8·6/12 = 4 and ℓ2,0 = 5, ℓ3,0 = 4.8 and ℓ4,0 ≈ 3.6 • ℓ3,0 can also be increased to 5 and ℓ2,1 can be increased to 2, avoiding 2 migrations and 2 further preemptions • [Figure: modified schedule with the enlarged allotments]
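A necessary (but not sufficient) condition for enlarging an allotment this way is that the slice still has the capacity for it: the new allotment must fit within the slice, and the total must not exceed the m processors' capacity. A naive sketch of that check only; the paper's actual test must additionally guarantee that no later deadline is missed:

```python
def fits_in_slice(lengths, i, new_len, m, slice_len):
    """Naive capacity check for enlarging allotment i to new_len within one
    slice: it must fit the slice, and the total must fit m processors.
    Illustrative only; this is NOT the paper's full deadline-safety test."""
    trial = lengths[:i] + [new_len] + lengths[i + 1:]
    return new_len <= slice_len and sum(trial) <= m * slice_len

# Enlarging l_{1,0} from 4 to 6 passes the capacity check:
print(fits_in_slice([4.0, 5.0, 4.8, 40 / 11], 0, 6, 3, 8))  # True
```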

  6. A Generic Machine Learning System • Input variables: x = (x1, x2, …, xN) • Hidden variables: h = (h1, h2, …, hK) • Output variables: y = (y1, y2, …, yM) • Machine learning algorithms discover the relationships between the variables of a system (input, output and hidden) from direct samples of the system
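The input/hidden/output structure described here is what, for example, a feedforward neural network realizes. A toy forward pass, purely illustrative (the weights and sizes are mine, not the authors' model; a learning algorithm would fit W1 and W2 from samples of the system):

```python
import math

def forward(x, W1, W2):
    """Map input variables x through hidden variables h to outputs y:
    h_k = tanh(sum_i W1[k][i] * x_i),  y_m = sum_k W2[m][k] * h_k.
    Illustrative sketch of an input/hidden/output system, not the paper's."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sum(w * hk for w, hk in zip(row, h)) for row in W2]
    return h, y

# Two inputs -> two hidden variables -> one output, with hand-picked weights:
h, y = forward([1.0, 0.5], [[0.2, -0.1], [0.4, 0.3]], [[1.0, -1.0]])
```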

  7. Improve the Schedule With Machine Learning • Modeling tasks within a time slice: each task Ti is described by its remaining work and remaining time, with the work done fed back into the model • [Figure: per-task model with work done as feedback]

  8. Improve the Schedule With Machine Learning • Modeling a time slice: tasks T1, …, Tn each supply remaining work and remaining time, work done is fed back, and the model outputs the system slack, indicating overload or underload • [Figure: time-slice model aggregating the per-task models]
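One way to read this diagram: per-task remaining work and remaining time go in, completed work is fed back, and the time-slice level aggregates the per-task states into a system slack whose sign distinguishes underload from overload. A hedged sketch of that bookkeeping, with invented names and an assumed definition of slack as summed per-task laxity:

```python
from dataclasses import dataclass

@dataclass
class TaskState:
    remaining_work: float  # execution still owed by the current job
    remaining_time: float  # time remaining until its deadline

def apply_feedback(state, work_done, elapsed):
    """Feed completed work back into the per-task model as time advances."""
    state.remaining_work -= work_done
    state.remaining_time -= elapsed

def system_slack(states):
    """Assumed aggregate: summed laxity; negative would indicate overload."""
    return sum(s.remaining_time - s.remaining_work for s in states)

tasks = [TaskState(4.0, 8.0), TaskState(5.0, 8.0)]
print(system_slack(tasks))  # 7.0
```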

  9. Future Work • Validate the minimization of the number of task preemptions and migrations experimentally under different scenarios • Extend the machine learning technique to sporadic task sets, where deadlines ≠ periods • Explore machine learning techniques other than ANNs, such as case-based reasoning (CBR) and inductive learning

  10. THANK YOU
