
Chap 8. Cosequential Processing and the Sorting of Large Files


Presentation Transcript


  1. File Structures by Folk, Zoellick, and Riccardi
  Chap 8. Cosequential Processing and the Sorting of Large Files
  Seoul National University, Dept. of Computer Science and Engineering
  Object-Oriented Systems Research Lab (SNU-OOPSLA-LAB)
  Prof. Hyoung-Joo Kim

  2. Chapter Objectives (1)
  • Describe a class of frequently used processing activities known as cosequential processes
  • Provide a general object-oriented model for implementing varieties of cosequential processes
  • Illustrate the use of the model to solve a number of different kinds of cosequential processing problems, including problems other than simple merges and matches
  • Introduce heapsort as an approach to overlapping I/O with sorting in RAM

  3. Chapter Objectives (2)
  • Show how merging provides the basis for sorting very large files
  • Examine the costs of K-way merges on disk and find ways to reduce those costs
  • Introduce the notion of replacement selection
  • Examine some of the fundamental concerns associated with sorting large files using tapes rather than disks
  • Introduce UNIX utilities for sorting, merging, and cosequential processing

  4. Contents
  8.1 Cosequential Operations
  8.2 Application of the OO Model to a General Ledger Program
  8.3 Extension of the OO Model to Include Multiway Merging
  8.4 A Second Look at Sorting in Memory
  8.5 Merging as a Way of Sorting Large Files on Disk
  8.6 Sorting Files on Tape
  8.7 Sort-Merge Packages
  8.8 Sorting and Cosequential Processing in Unix

  5. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Cosequential operations
  • Coordinated processing of two or more sequential lists to produce a single output list
  • Kinds of operations
    • merging, or union
    • matching, or intersection
    • combinations of the above

  6. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Matching Names in Two Lists (1)
  • Also called the "intersection" operation
  • Output the names common to both lists
  • Things that must be handled to make the match procedure work reasonably
    • initialization: arranging things so the procedure starts in a known state
    • methods for getting and accessing the next list item
    • synchronizing the two lists
    • handling end-of-file (EOF) conditions
    • recognizing errors, e.g. duplicate names or names out of sequence

  7. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Matching Names in Two Lists (2)
  • In comparing two names:
    • if Item(1) is less than Item(2), read the next name from List 1
    • if Item(1) is greater than Item(2), read the next name from List 2
    • if the names are the same, output the name and read the next names from both lists

  8. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Cosequential match procedure (1)
  [Figure: PROGRAM match — List 1 and List 2 feed the input() and initialize() procedures; the comparison of Item(1) and Item(2) determines which list advances, and names that are the same in both lists are written to the output.]

  9. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Cosequential match procedure (2)

  int Match(char * List1, char * List2, char * OutputList)
  {
      int MoreItems;  // true if items remain in both of the lists
      // initialize input and output lists
      InitializeList(1, List1);
      InitializeList(2, List2);
      InitializeOutput(OutputList);
      // get first item from both lists
      MoreItems = NextItemInList(1) && NextItemInList(2);
      while (MoreItems)  // loop until no items remain in one of the lists
      {
          if (Item(1) < Item(2))
              MoreItems = NextItemInList(1);
          else if (Item(1) == Item(2))
          {   // match found
              ProcessItem(1);
              MoreItems = NextItemInList(1) && NextItemInList(2);
          }
          else  // Item(1) > Item(2)
              MoreItems = NextItemInList(2);
      }
      FinishUp();
      return 1;
  }

  10. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  General Class for Cosequential Processing (1)

  template <class ItemType>
  class CosequentialProcess  // base class for cosequential processing
  {
  public:
      // the following methods provide basic list processing
      // these must be defined in subclasses
      virtual int InitializeList(int ListNumber, char * ListName) = 0;
      virtual int InitializeOutput(char * OutputListName) = 0;
      virtual int NextItemInList(int ListNumber) = 0;  // advance to next item in this list
      virtual ItemType Item(int ListNumber) = 0;       // return current item from this list
      virtual int ProcessItem(int ListNumber) = 0;     // process the item in this list
      virtual int FinishUp() = 0;                      // complete the processing
      // 2-way cosequential match method
      virtual int Match2Lists(char * List1, char * List2, char * OutputList);
  };

  11. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  General Class for Cosequential Processing (2)
  • A subclass to support lists that are files of strings, one per line

  class StringListProcess : public CosequentialProcess<String &>
  {
  public:
      StringListProcess(int NumberOfLists);   // constructor
      // Basic list processing methods
      int InitializeList(int ListNumber, char * ListName);
      int InitializeOutput(char * OutputListName);
      int NextItemInList(int ListNumber);     // get next item
      String & Item(int ListNumber);          // return current item
      int ProcessItem(int ListNumber);        // process the item
      int FinishUp();                         // complete the processing
  protected:
      ifstream * List;                // array of list files
      String * Items;                 // array of current items, one from each list
      ofstream OutputList;
      static const char * LowValue;   // used so that NextItemInList() doesn't
                                      // have to get the first item in a special way
      static const char * HighValue;
  };

  12. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  General Class for Cosequential Processing (3)
  • Appendix H: full implementation
  • An example of main:

  #include "coseq.h"
  int main()
  {
      StringListProcess ListProcess(2);  // process with 2 lists
      ListProcess.Match2Lists("list1.txt", "list2.txt", "match.txt");
  }

  13. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Merging Two Lists (1)
  • Based on the matching operation
  • Differences
    • must read each of the lists completely
    • must change the MoreNames behavior: keep this flag set to true as long as there are records in either list
  • HighValue
    • a special value (we use "\xFF")
    • comes after all legal input values in the files, to ensure both input files are read to completion

  14. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Merging Two Lists (2)
  • Cosequential merge procedure based on a single loop
  • This method has been added to class CosequentialProcess
  • No modifications are required to class StringListProcess

  template <class ItemType>
  int CosequentialProcess<ItemType>::Merge2Lists
      (char * List1Name, char * List2Name, char * OutputListName)
  {
      int MoreItems1, MoreItems2;  // true if more items remain in the list
      (continued ...)

  15. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Merging Two Lists (3)

      InitializeList(1, List1Name);
      InitializeList(2, List2Name);
      InitializeOutput(OutputListName);
      MoreItems1 = NextItemInList(1);
      MoreItems2 = NextItemInList(2);
      while (MoreItems1 || MoreItems2)  // if either file has more
      {
          if (Item(1) < Item(2))
          {   // list 1 has the next item to be processed
              ProcessItem(1);
              MoreItems1 = NextItemInList(1);
          }
          else if (Item(1) == Item(2))
          {
              ProcessItem(1);
              MoreItems1 = NextItemInList(1);
              MoreItems2 = NextItemInList(2);
          }
          else  // Item(1) > Item(2)
          {
              ProcessItem(2);
              MoreItems2 = NextItemInList(2);
          }
      }
      FinishUp();
      return 1;
  }
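  A usage sketch, by analogy with the Match2Lists driver on slide 12 (the output file name merge.txt is only illustrative):

  #include "coseq.h"
  int main()
  {
      StringListProcess ListProcess(2);  // process with 2 lists
      ListProcess.Merge2Lists("list1.txt", "list2.txt", "merge.txt");
  }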

  16. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Cosequential merge procedure (1)
  [Figure: PROGRAM merge — List 1 and List 2 feed the comparison of Item(1) and Item(2); when Item(1) < Item(2) or the names match, NAME_1 is written to OutputList, and when Item(1) > Item(2), NAME_2 is written.]

  17. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Summary of the Cosequential Processing Model (1)
  • Assumptions
    • two or more input files are processed in a parallel fashion
    • each file is sorted
    • in some cases, there must exist a high key value and/or a low key value
    • records are processed in logical sorted order
    • for each file, there is only one current record
    • records are manipulated only in internal memory

  18. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Summary of the Cosequential Processing Model (2)
  • Essential components
    • initialization: read the first logical record from each list
    • one main synchronization loop: continues as long as relevant records remain
    • selection within the main synchronization loop
    • input files and output files are sequence-checked by comparing the previous item value with the new one

  if (Item(1) > Item(2)) then
      ..........
  else if (Item(1) < Item(2)) then
      .........
  else                /* current keys equal */
      ...........
  endif

  19. 8.1 An Object-Oriented Model for Implementing Cosequential Processes
  Summary of the Cosequential Processing Model (3)
  • Essential components (cont'd)
    • substitute the high value for the actual key when EOF is reached
    • the main loop terminates when the high value has occurred for all relevant input files
    • no special code is needed to deal with EOF
    • I/O and error detection are relegated to supporting methods so the details of these activities do not obscure the principal processing logic

  20. 8.2 The General Ledger Program (1)
  • Account table (Fig 8.6)
    Acct-No  Acct-Title   Jan  Feb  Mar  Apr
    101      check #1     100  200  170
    102      check #2     500  270  320
    505      advertise    300  129  230
  • Journal entry table (Fig 8.7)
    Acct-No  Check-No  Date      Description   Debit/Credit
    101      112       04/02/86  auto-repair   -30
    505      213       05/13/86  newspaper     -39
    540      670       04/13/86  printer       +60
  • Ledger printout (Fig 8.8)
    101  check #1
         1271  04/02/86  auto-expense  -78
         1272  04/03/86  advertise     -30

  21. 8.2 The General Ledger Program (2)
  • Ledger list and journal list (Fig 8.10)
    101  check #1
    101  1271  Auto-expense
    101  1272  Rent
    101  1273  Advertising
    102  check #2
    102  670   Office-expense
  • The ledger (master) file is keyed on account number
  • The journal (transaction) file is keyed on account number
  • Class MasterTransactionProcess (Fig 8.12)
  • Subclass LedgerProcess (Fig 8.14)

  22. 8.2 The General Ledger Program (3)

  template <class ItemType>
  class MasterTransactionProcess : public CosequentialProcess<ItemType>
  // a cosequential process that supports master/transaction processing
  {
  public:
      MasterTransactionProcess();                 // constructor
      virtual int ProcessNewMaster() = 0;         // processing when a new master is read
      virtual int ProcessCurrentMaster() = 0;     // processing for each transaction for a master
      virtual int ProcessEndMaster() = 0;         // processing after the last transaction for a master
      virtual int ProcessTransactionError() = 0;  // no master for this transaction
      // cosequential processing of master and transaction records
      int PostTransactions(char * MasterFileName,
          char * TransactionFileName, char * OutputListName);
  };
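  The textbook gives the full PostTransactions() loop; the sketch below is a reconstruction of its general shape from the description above, not the book's exact code. It relies on the HighValue convention from slide 13, so the Item() comparisons remain valid after either file reaches EOF.

  template <class ItemType>
  int MasterTransactionProcess<ItemType>::PostTransactions
      (char * MasterFileName, char * TransactionFileName, char * OutputListName)
  {
      InitializeList(1, MasterFileName);       // list 1: master (ledger) records
      InitializeList(2, TransactionFileName);  // list 2: transaction (journal) records
      InitializeOutput(OutputListName);
      int MoreMasters = NextItemInList(1);
      int MoreTransactions = NextItemInList(2);
      if (MoreMasters) ProcessNewMaster();     // set up the first master account
      while (MoreMasters || MoreTransactions)
      {
          if (Item(1) < Item(2))
          {   // no more transactions for this master; finish it and move on
              ProcessEndMaster();
              MoreMasters = NextItemInList(1);
              if (MoreMasters) ProcessNewMaster();
          }
          else if (Item(1) == Item(2))
          {   // a transaction that matches the current master
              ProcessCurrentMaster();
              MoreTransactions = NextItemInList(2);
          }
          else  // Item(1) > Item(2)
          {   // a transaction with no matching master account
              ProcessTransactionError();
              MoreTransactions = NextItemInList(2);
          }
      }
      FinishUp();
      return 1;
  }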

  23. 8.3 Extension of the Model to Include Multiway Merging
  A K-way Merge Algorithm
  • A very general form of cosequential file processing
  • Merge K input lists to create a single, sequentially ordered output list
  • Algorithm (see the sketch below)
    • begin loop
    • determine which list has the key with the lowest value
    • output that key
    • move ahead one key in that list
    • for duplicate input entries, move ahead in each list that holds the duplicate
    • loop again
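  A minimal sketch of this loop, assuming for illustration that the K lists are already-sorted in-memory vectors of strings rather than files (the cosequential file methods above would play the same role; KWayMerge is a hypothetical helper name):

  #include <string>
  #include <vector>

  // Merge K sorted lists by repeatedly scanning for the smallest current key.
  std::vector<std::string> KWayMerge(const std::vector<std::vector<std::string> >& lists)
  {
      std::vector<std::string> output;
      std::vector<size_t> pos(lists.size(), 0);   // current position in each list
      while (true)
      {
          int lowest = -1;                        // index of the list with the lowest current key
          for (size_t i = 0; i < lists.size(); ++i)
              if (pos[i] < lists[i].size() &&
                  (lowest < 0 || lists[i][pos[i]] < lists[lowest][pos[lowest]]))
                  lowest = static_cast<int>(i);
          if (lowest < 0) break;                  // every list is exhausted
          output.push_back(lists[lowest][pos[lowest]]);
          // for duplicate entries, move ahead in every list that holds the key just output
          for (size_t i = 0; i < lists.size(); ++i)
              if (pos[i] < lists[i].size() && lists[i][pos[i]] == output.back())
                  ++pos[i];
      }
      return output;
  }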

  24. 8.3 Extension of the Model to Include Multiway Merging
  Selection Tree for Merging a Large Number of Lists
  • K-way merge with a simple comparison loop
    • works nicely if K is no larger than 8 or so
    • if K > 8, the set of comparisons needed to find the minimum key becomes expensive
  • Selection tree (for K > 8) — sketched below with a priority queue
    • a time vs. space trade-off
    • a kind of "tournament" tree
    • the minimum value is at the root node
    • the depth of the tree is log2(K), so each selection costs about log2(K) comparisons rather than K
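  A selection tree behaves like a priority queue keyed on each list's current item. As a stand-in for a hand-built tournament tree, the sketch below uses std::priority_queue, which gives the same O(log K) cost per selected key; it is an illustrative substitute (again over in-memory lists), not the book's implementation.

  #include <functional>
  #include <queue>
  #include <string>
  #include <utility>
  #include <vector>

  // K-way merge driven by a min-priority queue of (current key, list index) pairs.
  std::vector<std::string> SelectionTreeMerge(const std::vector<std::vector<std::string> >& lists)
  {
      typedef std::pair<std::string, size_t> Entry;   // (key, which list it came from)
      std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > tree;
      std::vector<size_t> pos(lists.size(), 0);       // current position in each list
      for (size_t i = 0; i < lists.size(); ++i)       // prime the tree with one key per list
          if (!lists[i].empty()) tree.push(Entry(lists[i][0], i));
      std::vector<std::string> output;
      while (!tree.empty())
      {
          Entry e = tree.top();                       // the smallest current key wins
          tree.pop();
          output.push_back(e.first);
          size_t i = e.second;
          if (++pos[i] < lists[i].size())             // replace the winner with the next
              tree.push(Entry(lists[i][pos[i]], i));  // key from the same list
      }
      return output;
  }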

  25. 8.3 Extension of the Model to Include Multiway Merging
  Selection Tree
  [Figure: an eight-way selection tree over List 0 through List 7 (7, 10, 17, ... in List 0; 9, 19, 23, ... in List 1; 11, 13, 32, ... in List 2; 18, 22, 24, ... in List 3; 12, 14, 21, ... in List 4; 5, 6, 25, ... in List 5; 15, 20, 30, ... in List 6; 8, 16, 29, ... in List 7). Each internal node holds the smaller of its two children's current keys (7, 11, 5, 8 at the first level, then 7 and 5), and the overall minimum, 5, appears at the root as the next input to the merge output.]

  26. 8.4 A Second Look at Sorting in Memory
  • Read the whole file from disk into memory, perform the sort, write the whole file back to disk
  • Can we improve on the time that it takes for this RAM sort?
    • perform some parts of the work in parallel
    • selection sort helps, but it cannot be used to sort the entire file
  • Use the heap technique!
    • processing and I/O can occur in parallel
    • keep all the keys in a heap
    • build the heap while reading a block
    • rebuild the heap (drawing keys off in order) while writing a block

  27. 8.4 A Second Look at Sorting in Memory
  Overlapping Processing and I/O: Heapsort
  • Heap
    • a kind of binary tree: a complete binary tree
    • each node has a single key, and that key is greater than or equal to the key at its parent node, so the smallest key is at the root
    • storage for the tree can be allocated sequentially in an array
    • so there is no need for pointers or other dynamic overhead for maintaining the heap

  28. 8.4 A Second Look at Sorting in Memory
  A heap, in both its tree form and as it would be stored in an array
  [Figure: the tree has A at the root (position 1), B and C as its children (positions 2 and 3), E, H, I, D at positions 4-7, and G, F at positions 8-9. A node at position n has its children at positions 2n and 2n+1. Stored in the array, positions 1-9 hold: A B C E H I D G F.]
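  Because the tree is stored in an array, the parent/child links are pure index arithmetic, which is why no pointers are needed. A small illustration (the helper names are hypothetical):

  // 1-based heap indexing: the children of node n live at 2n and 2n+1,
  // and the parent of node k lives at k/2 (integer division).
  inline int Parent(int k)     { return k / 2; }
  inline int LeftChild(int n)  { return 2 * n; }
  inline int RightChild(int n) { return 2 * n + 1; }
  // Example: in the array A B C E H I D G F, node 4 (E) has children
  // at positions 8 (G) and 9 (F), and its parent is node 2 (B).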

  29. 8.4 A Second Look at Sorting in Memory
  Class Heap and Method Insert (1)

  class Heap
  {
  public:
      Heap(int maxElements);
      int Insert(char * newKey);
      char * Remove();
  protected:
      int MaxElements;
      int NumElements;
      char ** HeapArray;
      void Exchange(int i, int j);   // exchange elements i and j
      int Compare(int i, int j)      // compare elements i and j
          { return strcmp(HeapArray[i], HeapArray[j]); }
  };

  30. 8.4 A Second Look at Sorting in Memory
  Class Heap and Method Insert (2)

  int Heap::Insert(char * newKey)
  {
      if (NumElements == MaxElements) return FALSE;
      NumElements++;
      // add the new key at the last position
      HeapArray[NumElements] = newKey;
      // re-order the heap
      int k = NumElements;
      int parent;
      while (k > 1)  // k has a parent
      {
          parent = k / 2;
          if (Compare(k, parent) >= 0)
              break;  // HeapArray[k] is in the right place
          // else exchange k and parent
          Exchange(k, parent);
          k = parent;
      }
      return TRUE;
  }

  31. 8.4 A Second Look at Sorting in Memory
  Heap Building Algorithm (1)
  • Input key order: F D C G H I B E A
  [Figure: the heap array (positions 1-9) after each of the first five insertions —
   insert F: F
   insert D: D F
   insert C: C F D
   insert G: C F D G
   insert H: C F D G H
   (continued ...)]

  32. 8.4 A Second Look at Sorting in Memory
  Heap Building Algorithm (2)
  • Input key order: F D C G H I B E A
  [Figure: the heap array after each of the remaining insertions —
   insert I: C F D G H I
   insert B: B F C G H I D
   insert E: B E C F H I D G
   insert A: A B C E H I D G F
   (continued ...)]

  33. 8.4 A Second Look at Sorting in Memory
  Heap Building Algorithm (3)
  • Input key order: F D C G H I B E A
  [Figure: the completed heap, A B C E H I D G F, shown in tree form — A at the root with children B and C; B's children are E and H; C's children are I and D; E's children are G and F.]

  34. 8.4 A Second Look at Sorting in Memory
  Overlapping Input with Heap Building (1)
  (A "free ride" for main-memory processing: heap building is faster than the I/O!)
  [Figure: the total RAM area allocated for the heap, divided into input buffers.
   First input buffer: the first part of the heap is built here; the first record is added to the heap, then the second record, and so forth.
   Second input buffer: this buffer is being filled while the heap is being built in the first buffer.]

  35. 8.4 A Second Look at Sorting in Memory
  Overlapping Input with Heap Building (2)
  (The single heap keeps growing during I/O time!)
  [Figure (cont'd): the second part of the heap is built in the second buffer, record by record, while the third input buffer is being filled; then the third part of the heap is built while the fourth input buffer is being filled, and so on.]

  36. 8.4 A Second Look at Sorting in Memory
  Sorting while Writing to the File
  • Rebuild the heap while writing a block (another "free ride" for main-memory processing)
  • Retrieving the keys in order (Fig 8.20) — see the Remove() sketch below
    • while there are elements left in the heap:
      • take the smallest value (the root)
      • move the key in the last position into the root
      • decrease the number of elements
      • re-order the heap
  • Overlapping retrieval-in-order with I/O
    • retrieve a block of records in order
    • while writing that block, retrieve the next block in order
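  Class Heap on slide 29 declares Remove(), but its body is not shown in the deck; the following is a minimal sketch consistent with the Insert() code on slide 30 (a reconstruction, not the textbook's exact code; it assumes the heap is not empty).

  char * Heap::Remove()
  {
      // the smallest key is at the root of the heap
      char * val = HeapArray[1];
      // move the key in the last position into the root and shrink the heap
      HeapArray[1] = HeapArray[NumElements];
      NumElements--;
      // re-order the heap: push the new root down until both children are larger
      int k = 1;
      while (2 * k <= NumElements)                // k has at least a left child
      {
          int child = 2 * k;
          if (child < NumElements && Compare(child + 1, child) < 0)
              child++;                            // pick the smaller of the two children
          if (Compare(k, child) <= 0) break;      // HeapArray[k] is in the right place
          Exchange(k, child);
          k = child;
      }
      return val;
  }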

  37. 8.5 Merging as a Way of Sorting Large Files on Disk
  • Keysort: holds only the keys in memory
  • Two shortcomings of keysort
    • substantial seeking still happens after the keysort, when the records themselves are rearranged
    • it cannot sort really large files
      • e.g. a file with 8,000,000 records of 100 bytes each (800 MB), with a 10-byte key per record: 8,000,000 x 10 bytes = 80 MB of keys
      • we cannot even hold all the keys in RAM
  • Multiway merge algorithm
    • small overhead for maintaining pointers and temporary variables
    • run: a sorted subfile
    • use heapsort to create each run: split the file, read in a piece, heapsort it, write it back

  38. 8.5 Merging as a Way of Sorting Large Files on Disk
  [Figure: sorting through the creation of runs and the subsequent merging of runs — 8,000,000 unsorted records are split into 80 pieces and internally sorted, giving 80 runs of 100,000 sorted records each; the 80 runs are then merged into 8,000,000 records in sorted order.]

  39. 8.5 Merging as a Way of Sorting Large Files on Disk
  Multiway Merging (K-way merge sort)
  • Can be extended to files of any size
  • Reading during run creation is sequential
    • no seeking, because the reading is sequential
  • Writing the runs back out is also sequential
  • Sorting each run: I/O overlaps with the sort by using heapsort
  • The K runs are combined with a K-way merge
  • Since I/O is largely sequential, tapes can be used

  40. 8.5 Merging as a Way of Sorting Large Files on Disk
  How Much Time Does a Merge Sort Take?
  • Assumptions
    • only one seek is required for any sequential access
    • only one rotational delay is required per access
  • Four I/O passes (see slides 38-39)
    • during the sort phase
      • reading all records into RAM for sorting and forming runs
      • writing the sorted runs out to disk
    • during the merge phase
      • reading the sorted runs into RAM for merging
      • writing the sorted file out to disk

  41. 8.5 Merging as a Way of Sorting Large Files on Disk
  Four Steps (1)
  • Step 1: Reading records into RAM for sorting and forming runs
    • assumptions: 10 MB input buffer, 800 MB file size
    • seek time: 8 msec; rotational delay: 3 msec
    • transmission rate: 0.0145 MB/msec
    • time for step 1: access 80 blocks, (80 x 11) msec, plus transfer 80 blocks, (800 / 0.0145) msec
  • Step 2: Writing the sorted runs out to disk
    • writing is the reverse of reading
    • the time for step 2 equals the time for step 1
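  Working the step 1 estimate out from the numbers above (a rough check, rounding freely, not taken from Table 8.1): seek plus rotation costs 80 x (8 + 3) msec = 880 msec, about 1 second; transfer costs 800 MB / 0.0145 MB per msec ≈ 55,000 msec, roughly a minute. So step 1 takes about a minute, and step 2 about the same.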

  42. 8.5 Merging as a Way of Sorting Large Files on Disk
  Four Steps (2)
  • Step 3: Reading the sorted runs into RAM for merging
    • the 10 MB of RAM is used for holding pieces of the 80 runs
    • reallocate the 10 MB of RAM as 80 input buffers, one per run
    • each buffer holds 1/80 of a run (0.125 MB), so each run is read in 80 buffer loads
    • total seek & rotation time: 80 runs x 80 seeks = 6,400 seeks; 6,400 x 11 msec ≈ 70 seconds
    • transfer time: about 60 seconds
    • total time = total seek & rotation time + transfer time

  43. 8.5 Merging as a Way of Sorting Large Files on Disk
  Four Steps (3)
  • Step 4: Writing the sorted file out to disk
    • we need to know how big the output buffers are
    • with 200,000-byte output buffers: 800,000,000 bytes / 200,000 bytes per buffer = 4,000 seeks
    • total seek & rotation time = 4,000 x 11 msec
    • transfer time is still about 60 seconds
  • See Table 8.1 (p. 323) for the totals
  • What if we used keysort on the 800 MB file? --> 24 hrs 26 min 40 sec

  44. 8.5 Merging as a Way of Sorting Large Files on Disk
  Effect of buffering on the number of seeks required
  [Figure: the 800 MB file of 8,000,000 sorted records is merged using 80 buffers totalling 10 MB; each of the 80 runs (10 MB apiece) is read in 80 buffers' worth, i.e. 80 accesses per run, for 6,400 accesses in all.]

  45. 8.5 Merging as a Way of Sorting Large Files on Disk
  Sorting a Very Large File
  • Two kinds of I/O
  • Sort phase
    • I/O is sequential when using heapsort
    • since sequential access already means minimal seeking, we cannot algorithmically speed this I/O up
  • Merge phase
    • the RAM buffer for each run gets loaded and reloaded at unpredictable times --> random access
    • for performance, look for ways to cut down on the number of random accesses that occur while reading runs
    • this is where there is some room for improvement!

  46. 8.5 Merging as a Way of Sorting Large Files on Disk
  The Cost of Increasing the File Size
  • A K-way merge of K runs
    • the merge operation requires on the order of K^2 seeks (see the note below)
    • if K is a big number, you are in trouble!
  • Some ways to reduce the time (8.5.4, 8.5.5, 8.5.6)
    • more hardware (disk drives, RAM, I/O channels)
    • reduce the order of the merge (K), increasing the buffer size for each run
    • increase the lengths of the initial sorted runs
    • find ways to overlap I/O operations
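  Where the K^2 estimate comes from (a rough count based on the running example): the file yields K runs, and during the merge the fixed amount of RAM is divided into K buffers, so each buffer holds only 1/K of a run. Reading one run therefore takes about K accesses, and reading all K runs takes about K x K = K^2 accesses — 80 x 80 = 6,400 seeks in the example on slide 42. Doubling the file size with the same RAM doubles K and roughly quadruples the seeks.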

  47. 8.5 Merging as a Way of Sorting Large Files on Disk
  Hardware-based Improvements
  • Increase the amount of RAM
    • longer and fewer initial runs
    • fewer seeks
  • Increase the number of disk drives
    • no delay due to seeking between input and output after the runs have been generated
    • assign input and output to separate drives
  • Increase the number of I/O channels
    • with separate I/O channels, I/O can overlap
  • Improve the transmission time

  48. 8.5 Merging as a Way of Sorting Large Files on Disk
  Decreasing the Number of Seeks Using Multiple-step Merges
  • K-way merge characteristics
    • a selection tree is used
    • the number of comparisons is N log2(K) for a K-way merge of N records
    • K is proportional to N
    • O(N log N): reasonably efficient
  • The way to reduce seeks is to reduce the number of runs merged at once
    • give each run a bigger share of the buffer space
    • a multiple-step merge provides a way to do this without more RAM

  49. 8.5 Merging as a Way of Sorting Large Files on Disk
  Multiple-step Merge (1)
  • Do not merge all runs at one time
  • Break the original set of runs into small groups and merge the runs in each group separately
  • Leads to fewer seeks, but adds extra transmission time for a second pass
    • every record is read twice: once to form the intermediate runs and once to form the final sorted file
  • Similar in spirit to using a selection tree when merging n lists!

  50. 8.5 Merging as a Way of Sorting Large Files on Disk
  [Figure: a two-step merge of 800 runs — the 800 runs are grouped into 25 sets of 32 runs each; each set of 32 runs is merged into one intermediate run, and the 25 intermediate runs are then merged into the final sorted file.]
