
    std::cout << std::endl;
    return 0;
}

Analysis

In sorting n items, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form O(n log n) follows from the master theorem.
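To make the source of the recurrence concrete, here is a minimal recursive sketch (not the chapter's own listing; the names mergeSort and merge are chosen for illustration): each call sorts two half-size sublists recursively and then performs one linear-time merge, which is exactly T(n) = 2T(n/2) + n.

#include <cstddef>
#include <vector>

// Merge the sorted halves m[lo..mid) and m[mid..hi); the temporary buffer
// makes this sketch an out-of-place merge, as discussed later in the section.
void merge(std::vector<int>& m, std::size_t lo, std::size_t mid, std::size_t hi)
{
    std::vector<int> result;
    result.reserve(hi - lo);
    std::size_t i = lo, j = mid;
    while (i < mid && j < hi)
        result.push_back(m[i] <= m[j] ? m[i++] : m[j++]);  // "<=" keeps the sort stable
    while (i < mid) result.push_back(m[i++]);
    while (j < hi)  result.push_back(m[j++]);
    for (std::size_t k = 0; k < result.size(); ++k)        // copy back: the O(n) part of the step
        m[lo + k] = result[k];
}

// T(n) = 2T(n/2) + n: two half-size recursive calls plus one linear merge.
void mergeSort(std::vector<int>& m, std::size_t lo, std::size_t hi)
{
    if (hi - lo < 2) return;                 // a list of length 0 or 1 is already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    mergeSort(m, lo, mid);
    mergeSort(m, mid, hi);
    merge(m, lo, mid, hi);
}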

In the worst case, merge sort does approximately n⌈lg n⌉ − 2^⌈lg n⌉ + 1 comparisons, which is between (n lg n − n + 1) and (n lg n + n + O(lg n)). [2]
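As a quick numerical check (purely illustrative, not from the text), the snippet below evaluates the worst-case count n⌈lg n⌉ − 2^⌈lg n⌉ + 1 together with the two bracketing expressions for a few values of n; for n a power of two the count coincides with the lower bound n lg n − n + 1.

#include <cmath>
#include <cstdio>

int main()
{
    const int ns[] = {16, 100, 1024, 1000000};
    for (int n : ns) {
        int ceil_lg = static_cast<int>(std::ceil(std::log2(n)));
        long long worst = 1LL * n * ceil_lg - (1LL << ceil_lg) + 1;  // n*ceil(lg n) - 2^ceil(lg n) + 1
        double lower = n * std::log2(n) - n + 1;                     // n lg n - n + 1
        double upper = n * std::log2(n) + n;                         // n lg n + n (ignoring O(lg n))
        std::printf("n=%8d  worst=%12lld  lower=%14.1f  upper=%14.1f\n", n, worst, lower, upper);
    }
    return 0;
}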

For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + ∑_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.

In the worst case, merge sort does about 39% fewer comparisons than quicksort does in its average case. Merge sort always makes fewer comparisons than quicksort, except in extremely rare cases when they tie, which happens when merge sort's worst case is met simultaneously with quicksort's best case. In terms of moves, merge sort's worst-case complexity is O(n log n), the same as quicksort's best case, and merge sort's best case takes about half as many iterations as its worst case.

Recursive implementations of merge sort make 2n − 1 method calls in the worst case, compared to quicksort's n, and thus have roughly twice as much recursive overhead as quicksort. However, iterative, non-recursive implementations of merge sort, which avoid method-call overhead, are not difficult to code. Merge sort's most common implementation does not sort in place; therefore, memory the size of the input must be allocated for the sorted output to be stored in. Sorting in place is possible but very complicated, and offers little performance gain in practice, even though the algorithm still runs in O(n log n) time. In such cases, algorithms like heapsort usually offer comparable speed and are far less complex.
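As one example of the iterative style just mentioned (a bottom-up sketch under our own naming, not necessarily how the chapter's code is organized), the width of the runs being merged doubles on each pass, so no recursion and no call stack are needed:

#include <algorithm>
#include <cstddef>
#include <vector>

// Bottom-up merge sort: merge runs of width 1, 2, 4, ... until one run remains.
// Still uses an auxiliary buffer of size n, like the recursive version.
void bottomUpMergeSort(std::vector<int>& m)
{
    std::vector<int> buffer(m.size());
    for (std::size_t width = 1; width < m.size(); width *= 2) {
        for (std::size_t lo = 0; lo < m.size(); lo += 2 * width) {
            std::size_t mid = std::min(lo + width, m.size());
            std::size_t hi  = std::min(lo + 2 * width, m.size());
            std::merge(m.begin() + lo, m.begin() + mid,     // merge the two adjacent runs
                       m.begin() + mid, m.begin() + hi,
                       buffer.begin() + lo);
        }
        std::copy(buffer.begin(), buffer.end(), m.begin()); // one copy-back per pass
    }
}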

Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort as long as the merge operation is implemented properly.
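A small illustration of the stability point (the data and names here are hypothetical): a merge that takes from the left run on equal keys preserves the original relative order of records with the same key.

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

int main()
{
    // Records: (key, tag). Two records share the key 2; 'b' came before 'c'.
    std::vector<std::pair<int, char>> left  = {{1, 'a'}, {2, 'b'}};
    std::vector<std::pair<int, char>> right = {{2, 'c'}, {3, 'd'}};
    std::vector<std::pair<int, char>> out;

    std::size_t i = 0, j = 0;
    while (i < left.size() && j < right.size()) {
        // "<=" prefers the left run on equal keys, so 'b' stays before 'c'.
        if (left[i].first <= right[j].first) out.push_back(left[i++]);
        else                                 out.push_back(right[j++]);
    }
    while (i < left.size())  out.push_back(left[i++]);
    while (j < right.size()) out.push_back(right[j++]);

    for (const auto& r : out) std::printf("(%d,%c) ", r.first, r.second);  // (1,a) (2,b) (2,c) (3,d)
    std::printf("\n");
    return 0;
}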

As can be seen from the procedure MergeSort, some complaints can be raised. One is its use of 2n locations; the additional n locations are needed because the two sorted sublists cannot reasonably be merged in place. Even with this extra space, the algorithm must still work hard, copying the result placed in the Result list back into the m list on each call of merge. An alternative to this copying is to associate a new field of information with each key (the elements of m are called keys). This field is used to link the keys, together with any associated information, into a sorted list (a key and its related information are called a record). The merge of the sorted lists then proceeds by changing the link values, and no records need to be moved at all. A field that contains only a link will generally be smaller than an entire record, so less space will also be used.
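A sketch of this linked-record idea (the Record layout and function names are ours, not the chapter's): each record carries a link field, and merging splices existing nodes together by changing links, so no record is copied or moved.

// A record: a key, any associated information, and a link field.
struct Record {
    int key;
    // ... associated information would live here ...
    Record* next;
};

// Merge two sorted linked lists by relinking nodes; no record is copied.
Record* mergeLists(Record* a, Record* b)
{
    Record dummy{0, nullptr};
    Record* tail = &dummy;
    while (a && b) {
        if (a->key <= b->key) { tail->next = a; a = a->next; }  // "<=" preserves stability
        else                  { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;   // append whichever list is left over
    return dummy.next;
}

// Merge sort on the linked list: split with slow/fast pointers, sort halves, relink.
Record* mergeSortList(Record* head)
{
    if (!head || !head->next) return head;
    Record* slow = head;
    Record* fast = head->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    Record* second = slow->next;
    slow->next = nullptr;                       // cut the list in two
    return mergeLists(mergeSortList(head), mergeSortList(second));
}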

Source:  OpenStax, Data structures and algorithms. OpenStax CNX. Jul 29, 2009 Download for free at http://cnx.org/content/col10765/1.1