I am currently taking big O in my ICS202 course, and I really find it difficult to figure it out from code. Are there any videos, web pages or blogs that can help me with that?

Generally, big O notation describes how a function behaves as the problem size increases. Wikipedia has a good introduction, and the topic is also a staple of nearly every CS book. In the following examples I'll talk exclusively about the time complexity of functions, but you can apply the same reasoning to other resources, e.g. memory.

When analysing code to figure out its limiting behaviour, you should look for patterns that depend on the problem size (in the following examples this is always *N* – think of it as input to a method, which can vary). For example, code that does not depend on it at all is always O(1):

```
System.out.println("hello");
```

This also holds if we do several such things:

```
System.out.println("hello");
System.out.println("world");
System.out.println("!");
```

Technically this is O(1 + 1 + 1), so O(3). But constant factors can be discarded (due to how *f* ∈ O(*g*) is defined), so we end up at O(1) anyway.

Once you have loops involved this changes, of course, e.g. the following code will be O(*N*) because the loop counts up to *N*:

```
for (int i = 0; i < N; i++)
    System.out.println("hello");
```

Things get fun when we have multiple loops and nest them:

```
for (int i = 0; i < N; i++)
    for (int j = 0; j < N / 2; j++)
        System.out.println("hello");
for (int i = 0; i < N; i++)
    System.out.println("hello2");
```

Here we have two nested loops. The inner one demonstrates what I pointed out before about constant factors: it only runs up to *N*/2, so technically the inner loop has the complexity O(½ · *N*). But since constant factors can be discarded, this results in O(*N*) as well. So the two nested loops give us an initial complexity of O(*N*^{2}). But there's also a second loop after the nested ones, so technically we arrive at O(*N*^{2} + *N*) for the whole snippet. Mathematically, *f* ∈ O(*g*) means roughly that beyond a certain *x*_{0}, *f* grows no faster than *g* (up to a constant factor). Since *N* grows strictly slower than *N*^{2}, the *N* summand becomes insignificant for large inputs, so O(*N*^{2} + *N*) simplifies to O(*N*^{2}). In general, for sums of complexities you can leave out summands that grow too slowly to be significant (this gets a bit tricky for more complex things, though, so just take this explanation as a *very* rough approximation).
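A quick way to convince yourself of this is to count iterations directly. The following is a minimal sketch (the class name `CountOps` and the counter are my additions, not from the original snippet) that replaces the print statements with a counter, so you can see the ½*N*² + *N* growth concretely:

```java
public class CountOps {
    public static long countOps(int n) {
        long count = 0;
        // the two nested loops: n * (n / 2) iterations
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n / 2; j++)
                count++;
        // the second, sequential loop: n more iterations
        for (int i = 0; i < n; i++)
            count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countOps(10));   // 10 * 5 + 10 = 60
        System.out.println(countOps(100));  // 100 * 50 + 100 = 5100
    }
}
```

Doubling *N* roughly quadruples the count (60 → 5100 is not exactly 100×, because of the + *N* term and the ½ factor), which is exactly the O(*N*²) behaviour described above.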

For many simple algorithms you can get away with just polynomials, but more complex ones may result in things like O(2^{N}) (computationally hard problems), O(*N* log *N*) (many good sorting algorithms) or O(log *N*) (e.g. binary search), etc. Try figuring out how the code actually approaches a problem, what the problem size is (sometimes it's more than one number, e.g. graph algorithms often have complexities that depend on the number of edges and/or nodes) and what function might describe the code's limiting behaviour. Simple loops up to the problem size are easy. Recursion is often a little harder to figure out, although there are common patterns there too, such as how much the problem size is reduced in each step.

Let's take a look at a few actual algorithms and see how they perform.

The pseudocode for Bubble sort is as follows (taken from Wikipedia):

```
procedure bubbleSort( A : list of sortable items )
    repeat                                        // (1)
        swapped = false
        for i = 1 to length(A) - 1 inclusive do:  // (2)
            /* if this pair is out of order */
            if A[i-1] > A[i] then
                /* swap them and remember something changed */
                swap( A[i-1], A[i] )              // (3)
                swapped = true
            end if
        end for
    until not swapped
end procedure
```

The problem size is readily apparent: It's the size of the list. A glance at the code tells us there are two nested loops (1 and 2). The inner one is fairly obvious because it always runs through the whole list minus the last element, so it's O(*N* − 1). Then there is a swap of two numbers (3) which is constant, therefore we can ignore it for our purposes^{1}.

To figure out how long the outer loop has to run we have several options: The list could be sorted from the start. The inner loop then runs once through the whole list, determines that nothing has been swapped, and the outer loop terminates after a single run. However, if the list is initially sorted in reverse, we get a different picture. Recall how Bubble sort works – it always swaps adjacent elements when they are out of order. So for the last element to arrive at its proper place in a reverse-sorted list, we have to swap it with its neighbour *N* − 1 times. And then there is every option in between. So depending on the initial conditions, the outer loop has very different behaviour^{2}. Let's just focus on the worst case for now, which means the outer loop runs *N* − 1 times, leaving us with the following complexity for everything: O((*N* − 1) · (*N* − 1) · 1), which are, in order, the outer loop (1), the inner loop (2) and the swap (3), all nested in each other, which is why they are multiplied. We can simplify this to O(*N*^{2} − 2*N* + 1), which can be further simplified (according to the rules noted earlier) to O(*N*^{2}).
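To see the best- and worst-case behaviour of the outer loop in action, here's a runnable Java translation of the pseudocode. The pass counter returned by the method (my addition, purely for illustration) shows how many times the outer loop actually ran:

```java
import java.util.Arrays;

public class BubbleSortDemo {
    // Bubble sort following the pseudocode above. Returns the number of
    // passes the outer loop made, so we can observe best vs. worst case.
    public static int bubbleSort(int[] a) {
        int passes = 0;
        boolean swapped;
        do {                                     // (1) outer loop
            passes++;
            swapped = false;
            for (int i = 1; i < a.length; i++) { // (2) inner loop
                if (a[i - 1] > a[i]) {
                    int tmp = a[i - 1];          // (3) swap, O(1)
                    a[i - 1] = a[i];
                    a[i] = tmp;
                    swapped = true;
                }
            }
        } while (swapped);
        return passes;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 5};
        int[] reversed = {5, 4, 3, 2, 1};
        System.out.println(bubbleSort(sorted));    // best case: 1 pass
        System.out.println(bubbleSort(reversed));  // worst case: 5 passes
        System.out.println(Arrays.toString(reversed)); // [1, 2, 3, 4, 5]
    }
}
```

An already-sorted list needs a single pass to confirm nothing moved, while a reverse-sorted list of length 5 needs 5 passes – matching the O(*N*) outer-loop worst case derived above.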

A simple recursive method of calculating factorials is the following (I omitted error checking, e.g. for negative arguments, for simplicity):

```
public long fac(int n) {
    if (n == 0) return 1;
    return n * fac(n - 1);
}
```

As said before, recursion is a little different than simple loops but it's still not too hard. Recursion works by breaking a problem into smaller problems until we arrive at a point where the answer is immediately obvious or can be computed through other means (the terminating condition). In this case the terminating condition is *n* = 0 where we know the answer already. How the problem is broken into smaller parts differs, though. In this case we can see that each step reduces the problem size by 1. So to solve a problem of size *n* (i.e. calculating *n*!) we need *n* steps. So it's not very hard to see that this is O(*n*) again.

Binary search finds an element in a sorted list by exploiting the fact that we can infer quite quickly *where* the element has to be in that list:

```
int binary_search(int A[], int key, int imin, int imax)
{
    // test if array is empty
    if (imax < imin)
        // set is empty, so return value showing not found
        return KEY_NOT_FOUND;
    else
    {
        // calculate midpoint to cut set in half
        int imid = midpoint(imin, imax);
        // three-way comparison
        if (A[imid] > key)
            // key is in lower subset
            return binary_search(A, key, imin, imid-1);
        else if (A[imid] < key)
            // key is in upper subset
            return binary_search(A, key, imid+1, imax);
        else
            // key has been found
            return imid;
    }
}
```

Here the problem size is the length of the array again. Binary search works by looking at the middle of the sub-array it is currently considering and deciding whether the element to search for lies in the upper or the lower half. Since the array is sorted, that works. This means that with each recursion step the problem size halves (because we throw away one half each time). For an array of length 16 we need at most 5 steps (since the terminating condition checks for *i*_{max} < *i*_{min}), for 32 it's 6 steps, etc. Since the problem size halves, this puts the complexity at O(log_{2} *n*), the binary logarithm of the array size. If the size were reduced to a tenth each step, we'd have O(log_{10} *n*) – however, all of those are essentially identical in behaviour, which is why the base is usually omitted in writing: O(log *n*).
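Here's a runnable Java version of the pseudocode with a step counter added (the counter, the class name and the overflow-safe midpoint calculation are my additions; the original pseudocode's `midpoint` helper is left unspecified, so I compute it inline):

```java
public class BinarySearchDemo {
    static int steps; // counts recursion depth, for illustration only

    public static int binarySearch(int[] a, int key, int imin, int imax) {
        steps++;
        // set is empty, so the key is not in the array
        if (imax < imin) return -1;
        // midpoint, written to avoid int overflow of (imin + imax)
        int imid = imin + (imax - imin) / 2;
        if (a[imid] > key)
            return binarySearch(a, key, imin, imid - 1); // lower subset
        else if (a[imid] < key)
            return binarySearch(a, key, imid + 1, imax); // upper subset
        else
            return imid; // key found
    }

    public static void main(String[] args) {
        int[] a = new int[16];
        for (int i = 0; i < 16; i++) a[i] = i * 2; // sorted: 0, 2, ..., 30
        steps = 0;
        System.out.println(binarySearch(a, 7, 0, 15)); // -1, key absent
        System.out.println(steps); // 5 steps for 16 elements, as predicted
    }
}
```

Searching an absent key forces the worst case: for 16 elements the recursion bottoms out after exactly 5 steps, matching the log_{2} 16 + 1 bound discussed above.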

1 Note, though, that even though some things may be constant, they can still be very expensive – don't take big O notation as the final word on the real-world performance of an algorithm. For example, Quicksort is an O(*N* log *N*) algorithm (on average), but for small problem sizes it is more expensive than Insertion sort (an O(*N*^{2}) algorithm), which is why many Quicksort implementations use Insertion sort for the last few unsorted numbers, simply because it is faster – although, of course, Quicksort itself scales *much* better to large problem sizes. None of this applies to the swap mentioned above; it's just to point out that for big O notation constants are irrelevant, while in the real world they often are not.

2 Another reason why one complexity alone is a very poor measure of how good an algorithm is. Quicksort has a *worst* case complexity of O(*N*^{2}) (for a sorted list) but performs very well in almost all other circumstances.
