Application-level Prefetching

Presentation Transcript

  1. Application-level Prefetching CS656 semester project Peixian Li, Jinze Liu, Hexin Wang

  2. Outline • Motivation • Solution • Implementation • Evaluation • Test Plan • Conclusion

  3. Motivation • Get Better Performance on Remote Data Access over DOS • Random Data Access • Sequential Data Access • e.g., Video-on-Demand Data Transfer • Problems • Low-Bandwidth Network • You wait for your data exactly when you need it most • General-Purpose OS • Inappropriate Scheduler (Round Robin) that does not address timing constraints

  4. Solution (I) -- Client Side • Application-level Prefetching Cache • Even over a low-bandwidth network, a prefetching cache can reduce data access time • General Data Service • History-based prefetching • Video-on-Demand • Sequential prefetching

  5. Solution (II) -- Server Side • Admission Control • Avoid Server Overload • All admitted tasks can be satisfied under the current scheduling algorithm • Server-Level Scheduling Algorithm • Isochronous tasks run before general-purpose (GP) tasks

  6. Implementation • Client/Server Model • Client Side • Prefetch via Data Compression • Cache Management • Server Side • Admission Control • Scheduling Algorithm

  7. Client/Server Model [architecture diagram: Client 1 and Client 2 each contain a Controller, a Cache, and a Prefetch Thread and run a normal application or a V/A player; the Server contains Admission Control, Schedule Control, multiple Service Threads, and the File Server] Peixian...

  8. Prefetch via Data Compression • Based on data compression techniques • Why data compression is useful for prefetching • Basic law: represent more common events with shorter codes and less common events with longer codes • A good compressor must therefore be good at recording history and predicting future data • Particularly well suited to databases and hypertext systems

  9. History-based Prefetch • We use the Ziv-Lempel algorithm • Simple but very good • Predicts based on a probabilistic history tree • e.g., “aaaababaabbbab” => (a)(aa)(ab)(aba)(abb)(b) • Sequential prefetch is used when history is lacking • The prefetch thread is activated once a request is finished • The history tree needs to be rebuilt before it grows too large
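The slides do not include code, so here is a minimal C sketch of the kind of LZ78-style history tree described above. The node layout, the byte-sized "symbols" standing in for block ids, and the predict-the-most-frequent-child rule are illustrative assumptions, not the project's actual implementation.

```c
/* Minimal sketch of an LZ78-style history tree for access prediction.
 * Symbols are single bytes standing in for block/page ids. */
#include <stdio.h>
#include <stdlib.h>

#define ALPHABET 256

typedef struct Node {
    struct Node *child[ALPHABET];
    long count;                       /* how often this phrase was seen */
} Node;

static Node *new_node(void) {
    Node *n = calloc(1, sizeof(Node));
    if (!n) { perror("calloc"); exit(1); }
    return n;
}

/* Feed one observed access into the tree, LZ78-style: extend the current
 * phrase while it is already known, start a new phrase otherwise. */
static Node *observe(Node *root, Node *cur, unsigned char sym) {
    if (cur->child[sym]) {            /* phrase still known: descend */
        cur->child[sym]->count++;
        return cur->child[sym];
    }
    cur->child[sym] = new_node();     /* new phrase ends here */
    cur->child[sym]->count = 1;
    return root;                      /* restart parsing at the root */
}

/* Predict the next access as the most frequent continuation of the current
 * phrase; -1 means no usable history, so the caller would fall back to
 * sequential prefetch as the slide describes. */
static int predict(const Node *cur) {
    int best = -1;
    long best_count = 0;
    for (int s = 0; s < ALPHABET; s++)
        if (cur->child[s] && cur->child[s]->count > best_count) {
            best_count = cur->child[s]->count;
            best = s;
        }
    return best;
}

int main(void) {
    const char *history = "aaaababaabbbab";     /* example from the slide */
    Node *root = new_node(), *cur = root;
    for (const char *p = history; *p; p++)
        cur = observe(root, cur, (unsigned char)*p);
    int next = predict(cur);
    printf("predicted next symbol: %c\n", next < 0 ? '?' : (char)next);
    return 0;
}
```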

  10. Sequential Prefetch • Two kinds of interfaces are provided by the client module • hread() is for history-based prefetch • sread() is for sequential prefetch • More kinds of reads can be added, e.g., real-time • When sequential prefetch is used • No history is needed • Only future data needs to be cached • Semaphores are used to synchronize cache reads and cache writes
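As an illustration of the semaphore-synchronized cache mentioned above, here is a small C sketch of the sequential-prefetch path. Only the names hread() and sread() come from the slide; the ring-buffer layout, the signatures, and the record_access() hook are hypothetical.

```c
/* Sketch of the sequential-prefetch path: the prefetch thread fills a ring
 * buffer, sread() consumes blocks in order, and two counting semaphores
 * synchronize cache writes against cache reads. */
#include <semaphore.h>
#include <string.h>
#include <sys/types.h>

#define BLOCK_SIZE   4096
#define CACHE_BLOCKS 64

static char  slots[CACHE_BLOCKS][BLOCK_SIZE];
static int   head, tail;            /* next slot to fill / to consume */
static sem_t empty_slots;           /* free slots, waited on by the prefetch thread */
static sem_t full_slots;            /* filled slots, waited on by the reader */

void cache_init(void) {
    sem_init(&empty_slots, 0, CACHE_BLOCKS);
    sem_init(&full_slots, 0, 0);
}

/* Called by the prefetch thread after it receives a block from the server. */
void cache_put(const void *block) {
    sem_wait(&empty_slots);                    /* wait for room */
    memcpy(slots[head], block, BLOCK_SIZE);
    head = (head + 1) % CACHE_BLOCKS;
    sem_post(&full_slots);                     /* block is now readable */
}

/* Sequential read interface: hand the application the next cached block. */
ssize_t sread(void *buf) {
    sem_wait(&full_slots);                     /* wait for prefetched data */
    memcpy(buf, slots[tail], BLOCK_SIZE);
    tail = (tail + 1) % CACHE_BLOCKS;
    sem_post(&empty_slots);                    /* the slot may be reused */
    return BLOCK_SIZE;
}

/* History-based read interface: in this sketch it differs only by also
 * recording the access so the history tree can learn from it. */
ssize_t hread(long block_no, void *buf) {
    /* record_access(block_no);  -- hypothetical hook into the history tree */
    (void)block_no;
    return sread(buf);
}
```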

  11. Cache Management • Cache size dynamically grows and shrinks • With a default size and a maximum limit • To use memory efficiently • To provide better performance • Uses the LRU replacement algorithm • Simple but good enough • No consistency issues since all access is read-only
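A minimal sketch of LRU replacement for a read-only block cache, assuming a fixed-size table with a logical clock; the dynamic grow/shrink behavior mentioned on the slide is omitted for brevity, and all names and sizes are illustrative.

```c
/* Minimal sketch of LRU replacement for the read-only client cache.
 * A fixed table plus a logical clock; dynamic resizing is omitted. */
#include <string.h>

#define BLOCK_SIZE  4096
#define MAX_ENTRIES 256              /* stands in for the "maximum limit" */

struct entry {
    long block_no;                   /* -1 means the slot is free */
    unsigned long last_used;         /* logical clock for LRU ordering */
    char data[BLOCK_SIZE];
};

static struct entry cache[MAX_ENTRIES];
static unsigned long clock_tick;

void cache_init(void) {
    for (int i = 0; i < MAX_ENTRIES; i++)
        cache[i].block_no = -1;
}

/* Look a block up; on a hit, refresh its LRU timestamp. */
char *cache_lookup(long block_no) {
    for (int i = 0; i < MAX_ENTRIES; i++)
        if (cache[i].block_no == block_no) {
            cache[i].last_used = ++clock_tick;
            return cache[i].data;
        }
    return NULL;                     /* miss: the caller fetches from the server */
}

/* Insert a block, evicting the least recently used entry if necessary.
 * Read-only data means eviction never has to write anything back. */
void cache_insert(long block_no, const char *data) {
    int victim = 0;
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (cache[i].block_no == -1) { victim = i; break; }
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;
    }
    cache[victim].block_no = block_no;
    cache[victim].last_used = ++clock_tick;
    memcpy(cache[victim].data, data, BLOCK_SIZE);
}
```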

  12. Admission Control (I) • Basic Assumptions • Isochronous Tasks • Real-time periodic tasks • MPEG-1 requires about 1.5 Mbit/s • MPEG-2 or MPEG-4 requires about 5-10 Mbit/s • Require performance guarantees: sustained throughput and bounded latency • General-Purpose Tasks • Preemptible tasks • Suitable for low-priority background processing

  13. Admission Control (II) • If a new isochronous task is to be admitted • All previously admitted tasks must remain satisfied once the new task is taken into account • The new task itself must be satisfiable under the current workload • High-frequency tasks run before low-frequency tasks • A periodic task is satisfied if it finishes within each period • I.e., Real Execution Time <= Period

  14. Admission Control (III) • To admit a new isochronous task, the total utilization must stay within the schedulable bound • I.e., C1/T1 + C2/T2 + … + Cn/Tn <= 1 • n -- Total number of isochronous tasks • Ci -- Execution time per period of task i • Ti -- Period of isochronous task i • Disadvantage • General-Purpose Tasks may suffer from starvation
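A sketch of a utilization-based admission test using the Ci/Ti notation from the slide. The <= 1 bound is an assumption consistent with the example on the next slide (6/10 + 6/20 = 0.9); the project's real test may reserve some capacity for general-purpose tasks.

```c
/* Sketch of a utilization-based admission test for isochronous tasks,
 * using the Ci/Ti notation from the slide. */
#include <stdio.h>

struct iso_task {
    double C;    /* execution time per period */
    double T;    /* period */
};

/* Return 1 if the candidate can be admitted alongside the n existing tasks. */
int admit(const struct iso_task *existing, int n, struct iso_task candidate) {
    double u = candidate.C / candidate.T;
    for (int i = 0; i < n; i++)
        u += existing[i].C / existing[i].T;
    return u <= 1.0;                 /* assumed schedulability bound */
}

int main(void) {
    struct iso_task running[] = { { 6, 10 } };     /* Task1 from the next slide */
    struct iso_task candidate = { 6, 20 };         /* Task2 from the next slide */
    printf("admit Task2: %s\n", admit(running, 1, candidate) ? "yes" : "no");
    return 0;
}
```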

  15. Admission Control (IV) • E.g., Task1 (C1 = 6, T1 = 10); Task2 (C2 = 6, T2 = 20) • [timeline figure: the resulting schedule of the tasks over t = 0 to 30]

  16. Scheduling Algorithm • Isochronous requests are scheduled using rate monotonic priorities • The higher the frequency, the higher the priority • Normal file requests are scheduled round robin • Can be preempted by isochronous tasks Jinze...
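A sketch of the dispatch rule this slide describes: ready isochronous requests are ordered by rate-monotonic priority (shorter period first) and preempt general-purpose requests, which are otherwise served round robin. The request structure and queue handling are illustrative assumptions.

```c
/* Sketch of the dispatch rule: isochronous requests get rate-monotonic
 * priority (shorter period = higher priority) and preempt general-purpose
 * requests, which are otherwise served round robin. */
#include <stddef.h>

struct request {
    int    is_isochronous;
    double period;                   /* meaningful only for isochronous requests */
    struct request *next;
};

/* Pick the next request to serve from a singly linked ready queue. */
struct request *pick_next(struct request *ready) {
    struct request *best_iso = NULL;

    /* Rate monotonic: the ready isochronous request with the shortest
     * period wins and preempts any general-purpose work. */
    for (struct request *r = ready; r; r = r->next)
        if (r->is_isochronous &&
            (best_iso == NULL || r->period < best_iso->period))
            best_iso = r;
    if (best_iso)
        return best_iso;

    /* No isochronous work pending: round robin over general-purpose
     * requests, i.e. take the queue head; the caller is assumed to rotate
     * a served request to the tail. */
    return ready;
}
```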

  17. Test Plan • Test programs with different access patterns • Sequential remote multimedia access. • Simulated tree-like web document access. • Simulated database access. • Random remote file access. • To test prefetching performance with different test programs. • To test server performance with concurrent requests of different applications

  18. Evaluation • Performance comparison with vs. without prefetching • Cache Hit Rate • Received Throughput • Server performance with different tasks • Correctness of Admission Control • Measurement of capacity

  19. Unsolved Problems • The cache cannot be shared between different applications. • Cache data is lost when the application program terminates. • The cache is read-only.

  20. Conclusion • We’ve simulated a client/server model to support application-oriented isochronous prefetching.