Is anybody able to give a 'plain English' intuitive, yet formal, explanation of what makes QuickSort n log n? From my understanding it has to make a pass over n items, and it does this log n times... I'm not sure how to put into words why it does this log n times.

Each partitioning operation takes O(n) time (one pass over the array). On average, each partition divides the array into two roughly equal parts, so there are about log n levels of partitioning. In total, that gives O(n log n) operations.

I.e. on average, log n partitioning levels, with each level taking O(n) operations.

The `log n` part comes from the fact that (at least ideally) each iteration breaks the input in half. Starting from N items and halving each time, you're down to 1 item after log_{2}(N) iterations, at which point you obviously can't subdivide any further. For example, starting from 128 items, you divide into groups of 64, 32, 16, 8, 4, 2, 1 items -- 7 iterations (and log_{2}(128) = 7).
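That halving count is easy to verify in code. Here is a small Python helper (a hypothetical illustration, not part of the question) that counts how many times N can be halved before hitting a single item:

```python
# Count how many times n can be halved before reaching a single item.
# This is the integer version of log2(n) for powers of two.
def halvings(n):
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(halvings(128))   # 7 -- matches log2(128)
```

For powers of two the result is exactly log_{2}(n); for other sizes it is the floor of log_{2}(n), which doesn't change the big-O argument.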

Each of those iterations scans through the entire array to partition it, so you end up with O(log N) iterations, each of which does O(N) work, for O(N log N) overall complexity.

To be technically correct, the worst-case Big-O is O(N^{2}). This arises from the fact that a sufficiently bad choice of partition element can split the array into one element on one side and the entire rest of the array on the other. If this happens at every level, it takes N levels to split the array into one-element pieces, so you get N partitioning passes, each of O(N) complexity, for O(N * N) = O(N^{2}) overall.

In a real implementation the pivot is usually chosen carefully enough that this rarely happens, but it is the worst you can do.
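The worst case is easy to provoke. Below is a minimal quicksort sketch (a hypothetical Python illustration, not anyone's production code) that deliberately uses the naive first-element pivot and records recursion depth: already-sorted input produces one-sided splits and depth close to N, while shuffled input stays far shallower:

```python
import random

# Minimal quicksort sketch that records the maximum recursion depth.
# The pivot is deliberately the first element, so already-sorted input
# triggers the worst case: every split is 1 vs. (rest), depth ~ N.
def quicksort(a, depth=0, stats=None):
    if stats is None:
        stats = {"max_depth": 0}
    stats["max_depth"] = max(stats["max_depth"], depth)
    if len(a) <= 1:
        return a, stats
    pivot = a[0]                                  # naive pivot choice
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    left_sorted, _ = quicksort(left, depth + 1, stats)
    right_sorted, _ = quicksort(right, depth + 1, stats)
    return left_sorted + [pivot] + right_sorted, stats

_, worst = quicksort(list(range(100)))            # sorted input
print(worst["max_depth"])                         # 99: linear depth

shuffled = random.sample(range(100), 100)
_, typical = quicksort(shuffled)
print(typical["max_depth"])                       # much smaller, near log2(100)
```

Real implementations avoid this by picking the pivot randomly or via median-of-three, which makes the one-sided split vanishingly unlikely.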

Well, it's not always n log n. That is the running time when the chosen pivot lands approximately in the middle. In the worst case, if you choose the smallest or the largest element as the pivot, the time will be O(n^2).

To visualize 'n log n', imagine the pivot is always the element closest to the median of the subarray being sorted. This partitions the array into two parts of roughly equal length, and you apply the quicksort procedure to both.

Since each step halves the length of the array, you do this log_{2}(n) times until you reach length 1, i.e. a sorted subarray of one element.
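The halving argument can be sketched numerically. The helper below (hypothetical, and assuming the idealized case where n is a power of two and every partition splits exactly in half) sums the work done at each level of recursion: every level does one pass over all n elements, and there are about log_{2}(n) levels:

```python
import math

# Idealized sketch of the halving argument: assume n is a power of two
# and every partition splits its subarray exactly in half.  Each level
# of recursion scans all n elements once, and there are about log2(n)
# levels, so the total work is roughly n * log2(n).
def ideal_work(n):
    total = 0
    sizes = [n]                     # subarray sizes at the current level
    while sizes:
        total += sum(sizes)         # one pass over every element at this level
        # each subarray of size s splits into two halves of size s // 2
        sizes = [s // 2 for s in sizes for _ in range(2) if s // 2 >= 1]
    return total

print(ideal_work(1024))             # 11264
print(1024 * math.log2(1024))       # 10240.0 -- same order of growth
```

The exact count comes out to n * (log_{2}(n) + 1) because the final level of one-element subarrays is included, but that extra n doesn't change the O(n log n) order of growth.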

In fact, every one of the N elements must eventually serve as a pivot (that is how it finds its final position), but each element is involved in at most about log N partitioning levels: the first level's partition scans N elements, the two second-level partitions scan about N/2 each, the third level about N/4, and so on (assuming each pivot is the median element).
