function siftUp(a, start, end) is
    input: start represents the limit of how far up the heap to sift.
           end is the node to sift up.

    child := end
    while child > start
        parent := ⌊(child - 1) ÷ 2⌋
        if a[parent] < a[child] then (out of max-heap order)
            swap(a[parent], a[child])
            child := parent (repeat to continue sifting up the parent now)
        else
            return
It can be shown that both variants of heapify run in O(n) time.
Below is an implementation of the "standard" heapsort, also called bottom-up heapsort. It is faster on average (see Knuth, Sec. 5.2.3, Ex. 18) and has better worst-case behavior (about 1.5 n log n comparisons) than the simple heapsort (about 2 n log n). The sift_in routine first moves the free position down to a leaf along the path of larger children, then sifts the new item up from there. The only data comparison occurs in the macro data_i_LESS_THAN_, which makes the code easy to adapt.
/* Heapsort based on ideas of J. W. Williams, R. W. Floyd and S. Carlsson */
#define data_i_LESS_THAN_(other) (data[i] < other)
#define MOVE_i_TO_free { data[free] = data[i]; free = i; }

void sift_in(unsigned count, SORTTYPE *data, unsigned free_in, SORTTYPE next)
{
    unsigned i;
    unsigned free = free_in;
    // move the hole at free down along the path of larger children
    for (i = 2 * free; i < count; i += i) {
        if (data_i_LESS_THAN_(data[i + 1]))
            i++;                         // pick the larger of the two children
        MOVE_i_TO_free
    }
    // special case: the last inner node may have only one child
    if (i == count)
        MOVE_i_TO_free
    // sift the new item next up from the leaf until it fits
    while (((i = free / 2) >= free_in) && data_i_LESS_THAN_(next))
        MOVE_i_TO_free
    data[free] = next;
}
void heapsort(unsigned count, SORTTYPE *data)
{
    unsigned j;
    if (count <= 1)
        return;
    data -= 1;  // map addresses to indices 1 to count (1-based heap)
    // build the heap structure
    for (j = count / 2; j >= 1; j--) {
        SORTTYPE next = data[j];
        sift_in(count, data, j, next);
    }
    // extract the remaining extremal elements one by one
    for (j = count - 1; j >= 1; j--) {
        SORTTYPE next = data[j + 1];
        data[j + 1] = data[1];  // extract extremal element from the heap
        sift_in(j, data, 1, next);
    }
}
(From Wikipedia, the free encyclopedia)
Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare that, on average, makes O(n log n) comparisons (big O notation) to sort n items. In the worst case, however, it makes Θ(n²) comparisons. Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures, and for most real-world data it is possible to make design choices that minimize the probability of requiring quadratic time.
Quicksort is a comparison sort and is not a stable sort.
Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.
The steps are: