
Comparison of Optimization Algorithms


Presentation Transcript


  1. Comparison of Optimization Algorithms By Jonathan Lutu

  2. Bubble sort (Exchanging) • Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average and worst-case performance is O(n²), so it is rarely used to sort large, unordered data sets.

  3. algorithm
     procedure bubbleSort( A : list of sortable items )
         repeat
             swapped = false
             for i = 1 to length(A) - 1 inclusive do:
                 /* if this pair is out of order */
                 if A[i-1] > A[i] then
                     /* swap them and remember something changed */
                     swap( A[i-1], A[i] )
                     swapped = true
                 end if
             end for
         until not swapped
     end procedure
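
For readers who want something runnable, here is a minimal Java sketch of the same exchanging pass; the class and method names are illustrative, not from the slides.

  import java.util.Arrays;

  public class BubbleSortDemo {
      // Repeatedly sweep the array, swapping adjacent out-of-order pairs,
      // until a full pass completes with no swaps.
      static void bubbleSort(int[] a) {
          boolean swapped;
          do {
              swapped = false;
              for (int i = 1; i < a.length; i++) {
                  if (a[i - 1] > a[i]) {
                      int tmp = a[i - 1];
                      a[i - 1] = a[i];
                      a[i] = tmp;
                      swapped = true;
                  }
              }
          } while (swapped);
      }

      public static void main(String[] args) {
          int[] data = {5, 1, 4, 2, 8};
          bubbleSort(data);
          System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
      }
  }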

  4. Selection sort (Selection) • Selection sort is an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations. • The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps, and thus is useful where swapping is very expensive.

  5. algorithm
     For I = 0 to N-1 do:
         Smallsub = I
         For J = I + 1 to N-1 do:
             If A(J) < A(Smallsub)
                 Smallsub = J
             End-If
         End-For
         Temp = A(I)
         A(I) = A(Smallsub)
         A(Smallsub) = Temp
     End-For
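
A runnable Java version of the same selection step, again with illustrative names:

  import java.util.Arrays;

  public class SelectionSortDemo {
      // Find the minimum of the unsorted suffix and swap it into position i.
      static void selectionSort(int[] a) {
          for (int i = 0; i < a.length - 1; i++) {
              int smallest = i;
              for (int j = i + 1; j < a.length; j++) {
                  if (a[j] < a[smallest]) {
                      smallest = j;
                  }
              }
              int tmp = a[i];       // at most one swap per outer iteration
              a[i] = a[smallest];
              a[smallest] = tmp;
          }
      }

      public static void main(String[] args) {
          int[] data = {64, 25, 12, 22, 11};
          selectionSort(data);
          System.out.println(Arrays.toString(data)); // [11, 12, 22, 25, 64]
      }
  }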

  6. Insertion sort (Insertion) • Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists.

  7. algorithm
     for j <- 2 to length[A]
         do key <- A[j]
            /* Insert A[j] into the sorted sequence A[1 .. j-1]. */
            i <- j - 1
            while i > 0 and A[i] > key
                do A[i+1] <- A[i]
                   i <- i - 1
            A[i+1] <- key
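
The same insertion loop as a small, runnable Java sketch (names are illustrative):

  import java.util.Arrays;

  public class InsertionSortDemo {
      // Grow a sorted prefix: shift larger elements right, then drop the key in.
      static void insertionSort(int[] a) {
          for (int j = 1; j < a.length; j++) {
              int key = a[j];
              int i = j - 1;
              while (i >= 0 && a[i] > key) {
                  a[i + 1] = a[i]; // shift following elements over by one
                  i--;
              }
              a[i + 1] = key;
          }
      }

      public static void main(String[] args) {
          int[] data = {12, 11, 13, 5, 6};
          insertionSort(data);
          System.out.println(Arrays.toString(data)); // [5, 6, 11, 12, 13]
      }
  }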

  8. Shell sort (Insertion) • Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.

  9. algorithm
     void shellsort(int[] a, int n) {
         int i, j, k, h, v;
         int[] cols = {1391376, 463792, 198768, 86961, 33936, 13776,
                       4592, 1968, 861, 336, 112, 48, 21, 7, 3, 1};
         for (k = 0; k < 16; k++) {
             h = cols[k];                       // current gap
             for (i = h; i < n; i++) {          // gapped insertion sort
                 v = a[i];
                 j = i;
                 while (j >= h && a[j-h] > v) {
                     a[j] = a[j-h];
                     j = j - h;
                 }
                 a[j] = v;
             }
         }
     }

  10. Merge sort (Merging) • Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list.

  11. algorithm
      Input: array a[] indexed from 0 to n-1.
      m = 1
      while m < n do
          i = 0
          while i < n-m do
              merge subarrays a[i .. i+m-1] and a[i+m .. min(i+2*m-1, n-1)] in place
              i = i + 2 * m
          m = m * 2
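
The slide's pseudocode merges subarrays in place; the runnable Java sketch below follows the same bottom-up pass structure but, as is usual in practice, merges through an auxiliary buffer. Class and method names are illustrative.

  import java.util.Arrays;

  public class MergeSortDemo {
      // Bottom-up merge sort: merge runs of width 1, 2, 4, ... using a buffer.
      static void mergeSort(int[] a) {
          int n = a.length;
          int[] buf = new int[n];
          for (int m = 1; m < n; m *= 2) {
              for (int i = 0; i + m < n; i += 2 * m) {
                  merge(a, buf, i, i + m - 1, Math.min(i + 2 * m - 1, n - 1));
              }
          }
      }

      // Merge the sorted ranges a[lo..mid] and a[mid+1..hi].
      static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
          int left = lo, right = mid + 1, out = lo;
          while (left <= mid && right <= hi) {
              buf[out++] = (a[left] <= a[right]) ? a[left++] : a[right++];
          }
          while (left <= mid)  buf[out++] = a[left++];
          while (right <= hi)  buf[out++] = a[right++];
          System.arraycopy(buf, lo, a, lo, hi - lo + 1);
      }

      public static void main(String[] args) {
          int[] data = {38, 27, 43, 3, 9, 82, 10};
          mergeSort(data);
          System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
      }
  }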

  12. Quicksort (Partitioning) • Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice.

  13. algorithm
      Heapsort(A as array)
          BuildHeap(A)
          for i = n to 1
              swap(A[1], A[i])
              n = n - 1
              Heapify(A, 1)

      BuildHeap(A as array)
          n = elements_in(A)
          for i = floor(n/2) to 1
              Heapify(A, i)

      Heapify(A as array, i as int)
          left = 2i
          right = 2i + 1
          if (left <= n) and (A[left] > A[i])
              max = left
          else
              max = i
          if (right <= n) and (A[right] > A[max])
              max = right
          if (max != i)
              swap(A[i], A[max])
              Heapify(A, max)
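
The pseudocode on this slide is heapsort (it builds and re-heapifies a max-heap). For the quicksort described on slide 12, a minimal in-place sketch in Java using the Lomuto partition scheme might look like the following; the names are illustrative and this is only one of several common partitioning strategies.

  import java.util.Arrays;

  public class QuickSortDemo {
      // Recursively partition around a pivot, then sort each side.
      static void quickSort(int[] a, int lo, int hi) {
          if (lo >= hi) return;
          int p = partition(a, lo, hi);
          quickSort(a, lo, p - 1);
          quickSort(a, p + 1, hi);
      }

      // Lomuto partition: pivot is a[hi]; smaller elements are moved before it.
      static int partition(int[] a, int lo, int hi) {
          int pivot = a[hi];
          int i = lo;
          for (int j = lo; j < hi; j++) {
              if (a[j] < pivot) {
                  int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                  i++;
              }
          }
          int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // place the pivot
          return i;
      }

      public static void main(String[] args) {
          int[] data = {10, 80, 30, 90, 40, 50, 70};
          quickSort(data, 0, data.length - 1);
          System.out.println(Arrays.toString(data)); // [10, 30, 40, 50, 70, 80, 90]
      }
  }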

  14. Counting sort (Non-comparison) • Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order.

  15. Algorithm
      /* count occurrences of each key */
      for each input item x:
          Count[key(x)] = Count[key(x)] + 1
      /* turn counts into starting positions */
      total = 0
      for i = 0, 1, ... k:
          c = Count[i]
          Count[i] = total
          total = total + c
      /* allocate an output array Output[0..n-1]; then place each item */
      for each input item x:
          store x in Output[Count[key(x)]]
          Count[key(x)] = Count[key(x)] + 1
      return Output
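
Assuming integer keys drawn from the range 0..k (the set S of the previous slide), a runnable Java sketch of the same counting scheme, with illustrative names:

  import java.util.Arrays;

  public class CountingSortDemo {
      // Stable counting sort for integer keys in the range 0..k.
      static int[] countingSort(int[] input, int k) {
          int[] count = new int[k + 1];
          for (int x : input) count[x]++;        // tally each key

          int total = 0;                          // prefix sums give start positions
          for (int i = 0; i <= k; i++) {
              int c = count[i];
              count[i] = total;
              total += c;
          }

          int[] output = new int[input.length];
          for (int x : input) {                   // place items in sorted order
              output[count[x]] = x;
              count[x]++;
          }
          return output;
      }

      public static void main(String[] args) {
          int[] data = {4, 2, 2, 8, 3, 3, 1};
          System.out.println(Arrays.toString(countingSort(data, 8))); // [1, 2, 2, 3, 3, 4, 8]
      }
  }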

  16. Dijkstra's algorithm • Dijkstra's algorithm computes shortest paths from a source vertex in a graph with non-negative edge weights. It is often used in routing and as a subroutine in other graph algorithms. • Its time and memory cost grow with the size and density of the graph: with a binary-heap priority queue it runs in O((V + E) log V) time.
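
The slides give no pseudocode for Dijkstra's algorithm; a minimal priority-queue sketch in Java, assuming an adjacency-list representation and the illustrative names below, shows the core relax-and-requeue loop.

  import java.util.*;

  public class DijkstraDemo {
      // graph[u] is a list of {v, w} edges; returns shortest distances from src.
      static int[] shortestPaths(int[][][] graph, int src) {
          int n = graph.length;
          int[] dist = new int[n];
          Arrays.fill(dist, Integer.MAX_VALUE);
          dist[src] = 0;

          // Priority queue of {vertex, distance}, ordered by distance.
          PriorityQueue<int[]> pq = new PriorityQueue<>((x, y) -> Integer.compare(x[1], y[1]));
          pq.add(new int[]{src, 0});

          while (!pq.isEmpty()) {
              int[] cur = pq.poll();
              int u = cur[0], d = cur[1];
              if (d > dist[u]) continue;          // stale queue entry, skip
              for (int[] edge : graph[u]) {
                  int v = edge[0], w = edge[1];
                  if (dist[u] + w < dist[v]) {    // relax the edge
                      dist[v] = dist[u] + w;
                      pq.add(new int[]{v, dist[v]});
                  }
              }
          }
          return dist;
      }

      public static void main(String[] args) {
          // Edges: 0->1 (4), 0->2 (1), 1->3 (1), 2->1 (2), 2->3 (5)
          int[][][] graph = {
              {{1, 4}, {2, 1}},
              {{3, 1}},
              {{1, 2}, {3, 5}},
              {}
          };
          System.out.println(Arrays.toString(shortestPaths(graph, 0))); // [0, 3, 1, 4]
      }
  }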

  17. Data mining • Classification algorithms: predict one or more discrete variables, based on the other attributes in the dataset. • Regression algorithms • Segmentation algorithms • Association algorithms • Sequence analysis algorithms

  18. Regression algorithms • predict one or more continuous variables, such as profit or loss, based on other attributes in the dataset.

  19. Segmentation algorithms • divide data into groups, or clusters, of items that have similar properties.

  20. Association algorithms • find correlations between different attributes in a dataset. The most common application of this kind of algorithm is for creating association rules, which can be used in a market basket analysis.

  21. Sequence analysis algorithms • summarize frequent sequences or episodes in data, such as a Web path flow.

  22. Traveling Salesman • http://www.heatonresearch.com/fun/tsp/genetic
