Possible Duplicate:

Plain English explanation of Big O

I was recently asked about my knowledge of how to use Big O notation, and I was stumped because I had never come across it before. I have read the Wikipedia page about Big O and looked at some of the questions posted on Stack Overflow, but I simply don't understand it.

My question: Can someone provide an explanation of Big O in the simplest form and provide an example of how to use it in the following Java method:

```
public int getScore(int[] dice)
{
    int[][] dups = possibleDups(dice);

    // Set catScore
    for (int[] i : dups)
    {
        for (int k = 0; k < i.length; k++)
        {
            if (i[k] > 0)
            {
                switch (i[k]) {
                    case 1:
                        catScore = Category.ONES;
                        break;
                    case 2:
                        catScore = Category.TWOS;
                        break;
                    case 3:
                        catScore = Category.THREES;
                        break;
                    case 4:
                        catScore = Category.FOURS;
                        break;
                    case 5:
                        catScore = Category.FIVES;
                        break;
                    case 6:
                        catScore = Category.SIXES;
                        break;
                    case 7:
                        catScore = Category.SEVENS;
                        break;
                    case 8:
                        catScore = Category.EIGHTS;
                        break;
                    default:
                        catScore = Category.NONE;
                        break;
                }
            }
        }
    }
    return sumAll(dice);
}
```

Big O describes the worst-case growth of an algorithm's running time. You should look at how your outer loop depends on the inner loop. Sample:

```
public void doSomething(int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            ; // some constant-time work
}
```

With *n* = 10, the worst case is 100 iterations. Change *n* to 20 and the worst case becomes 400 iterations.

This is O(n^2).

In computer science, a large area of interest is "how long do things take to run?". The answer, a lot of the time, is "it depends on the size of the input (but it might not)". Big-Oh specifically describes an *upper bound* on how long an algorithm will run. There are some mathematical details there (limits, asymptotes, etc.), but that's a really basic view.

In your example, you loop over a list, and then for everything in that list, you loop over another list. Therefore, the time your algorithm takes to run is directly proportional to the sizes of the lists. If you think of the first list as having n things in it, and your second list as having m things in it, your algorithm's running time is O(m*n). If m ~ n, then it is also accurate to say O(n^2).
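To make that proportionality concrete, here is a small sketch (the class name and the list sizes are invented for illustration) that counts how many times the innermost statement executes for an outer collection of size m and an inner collection of size n:

```java
// Sketch: count the number of times the innermost statement runs
// when an outer loop of m iterations wraps an inner loop of n iterations.
public class IterationCount {
    public static long count(int m, int n) {
        long steps = 0;
        for (int i = 0; i < m; i++) {      // outer list: m elements
            for (int j = 0; j < n; j++) {  // inner list: n elements
                steps++;                   // the O(1) body
            }
        }
        return steps;                      // always exactly m * n
    }

    public static void main(String[] args) {
        System.out.println(count(10, 10)); // prints 100
        System.out.println(count(20, 10)); // prints 200: doubling m doubles the work
    }
}
```

Doubling either list doubles the total work, which is exactly what O(m*n) predicts.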

For maps, lookups are constant time (assume they are). In that case, the running time of a Map lookup is therefore O(c), which is the same as O(1).
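As a sketch of that point (the map contents here are made up; `java.util.HashMap` gives average-case constant-time lookups):

```java
import java.util.HashMap;
import java.util.Map;

public class MapLookup {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>();
        scores.put("ones", 1);
        scores.put("twos", 2);
        scores.put("threes", 3);

        // A HashMap lookup does not scan the entries one by one;
        // on average it is O(1), so the cost stays flat as the map grows.
        int value = scores.get("twos");
        System.out.println(value); // prints 2
    }
}
```

Whether the map holds three entries or three million, a single `get` takes roughly the same number of steps on average.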

Big O notation describes how the time to solve a problem grows in proportion to the number of items in a collection. It actually says nothing about how long the solution takes in absolute terms; it tells you how quickly that time grows once you know the time for a fixed point and how many more items you are likely to add.

So, knowing only that it takes 5 minutes to make a coffee is not enough information to work out a Big O bound. But if it takes 5 minutes to make one coffee, 5 minutes to make ten coffees, and 5 minutes to make a million coffees, then it is O(1), where the 1 indicates one unit of time.

Now if you have a single cup coffee maker, where it takes roughly two minutes to make a cup of coffee, four minutes to make two cups of coffee, and twenty minutes to make ten cups of coffee, then the time to make a number of coffees is proportional to the number of cups, making the Big O notation O(x), meaning you need X (one for each coffee) units of time.

Other Big O notations are common: O(x^2), O(x log(x)), etc. They all describe the proportional rate of time increase based on the number of elements in consideration.

Note that an O(1) solution might be slower than an O(x) solution for some small collection of items, as we are talking about units of time, not actual times. So the unit of time in a particular O(1) solution might be an hour, while the unit of time in an O(x) solution might be ten minutes. In such a case, the O(x) solution might be faster until you need to process six or more items. In the long term, Big O terms with lower powers (like O(1)) will always outperform those with higher powers (like O(x)), no matter how large or small the actual time units are.
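As a toy model of that crossover (the unit costs here are invented for illustration, not measured from any real code):

```java
public class Crossover {
    // Hypothetical unit costs, invented for illustration:
    // the constant-time solution always costs 60 units,
    // the linear solution costs 10 units per item.
    static int constantCost(int items) { return 60; }
    static int linearCost(int items)   { return 10 * items; }

    public static void main(String[] args) {
        for (int items = 1; items <= 8; items++) {
            System.out.println(items + " items: O(1) costs "
                    + constantCost(items) + ", O(x) costs "
                    + linearCost(items));
        }
        // For 1-5 items the O(x) solution is cheaper; the two tie at 6,
        // and from 7 items onward the O(1) solution wins for good.
    }
}
```

The constant factors decide the winner for small inputs, but the growth rate decides it for large ones.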
