I am trying to find a good explanation to quickly understand Big O and Theta theory. I always feel an explanation can be given in a million different ways, and I guess I'm seeking that one explanation that finally makes sense. I know this is a n00b question but any help would be appreciated...

As far as explanations go, StackOverflow folks seem to get a kick out of my **telephone book example**. You might also like **Big O For Eight Year Olds**.

One point of confusion is why an algorithm that performs two operations for each item in a list (what you might call O(2n)) and one you would call O(n) are both considered O(n).

The reason for this is that when you start working with huge data sets, the difference between O(2n) and O(n) is not really a huge difference compared to how either stacks up against O(n^2) or O(log n). Consider three different search algorithms running over a dataset of a million items. The number of operations each algorithm would take:

O(2n) = 2,000,000

O(n) = 1,000,000

O(n^2) = 1,000,000,000,000

The O(2n) algorithm is only 2 times as slow as O(n), but O(n^2) is a million times as slow. That's an incredibly massive difference! **So Big O notation is really dealing with how an algorithm "scales", or in other words, how well it performs when you consider larger and larger data sets.**

An O(n^2) algorithm will perform well for very tiny data sets, but its performance degrades rapidly as the data grows. O(2n) and O(n) degrade evenly and gradually, which is closer to what a user would expect when working with more data.

For this reason, people don't talk about O(2n); they just say O(n), since both represent linear time (linear in that the number of operations increases evenly and gradually as you add data).

It can be frustrating at first to accept that an algorithm that runs twice as slowly as another is still considered O(n), but **big O notation is not a measure of relative speed**. Big O notation is a measure of how an algorithm scales with respect to the amount of data it is processing.
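To see this concretely, here is a small sketch (the function names are my own) of two linear passes over an array: one does a single operation per element, the other does two. Both are O(n).

```c
#include <stddef.h>

/* One operation per element: O(n). */
int sum(const int *values, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += values[i];
    return s;
}

/* Two operations per element ("O(2n)"), which is still O(n). */
int sum_and_max(const int *values, size_t n, int *max_out) {
    int s = 0;
    int max = values[0];
    for (size_t i = 0; i < n; i++) {
        s += values[i];          /* operation 1 */
        if (values[i] > max)     /* operation 2 */
            max = values[i];
    }
    *max_out = max;
    return s;
}
```

Double the array and both functions do twice the work; that shared scaling behavior, not the constant factor of 2, is what Big O records.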

Big O is an analysis tool used to compare algorithms. Big O describes an upper bound on a function's growth. Think of the upper bound as the maximum amount of time the algorithm can take to run.

Big O often uses the variable n to denote the number of elements the algorithm processes. For example, if you are performing an operation on every element of an array, n denotes the size of the array, and you will need to perform the operation n times.

Programmers use this notation to have a common ground for speaking about how complicated an algorithm is and (as stated above) how well the algorithm scales (meaning how well it performs as n gets larger and larger).

Algorithms for computing the nth element of the Fibonacci sequence are very instructive for the purposes of Big O notation. Consider a recursive method for computing the nth Fibonacci number:

```
int fib(int n) {
    if (n == 0)
        return 0;
    else if (n == 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}
```

This is a simple implementation of Fibonacci with a really slow running time for any n much above 40 or so. We classify this algorithm as O(2^n) (the tight bound is actually closer to O(1.6^n), but it is exponential either way), which on a graph grows at an exponential rate. Exponential growth is great when it is dealing with the money in your bank account but awful when it is dealing with algorithms.
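One way to see the blow-up is to count the calls the recursion makes. This hypothetical helper mirrors the recursive structure above and returns the total number of calls for a given n:

```c
/* Counts how many function calls the naive recursive fib(n) performs:
   one for the current call, plus the calls made by its two subproblems. */
long fib_calls(int n) {
    if (n < 2)
        return 1;
    return 1 + fib_calls(n - 1) + fib_calls(n - 2);
}
```

fib_calls(20) is 21,891 while fib_calls(30) is already about 2.7 million; the count grows by roughly a factor of 1.6 each time n increases by one, which is exactly the exponential behavior Big O is flagging.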

A different implementation of Fibonacci can significantly speed up the algorithm.

```
int fib(int n) {
    if (n < 2)
        return n;            // guard keeps the array accesses in bounds
    int fibArr[n + 1];       // fibArr[i] holds fib(i)
    fibArr[0] = 0;
    fibArr[1] = 1;
    for (int i = 2; i <= n; i++) {
        fibArr[i] = fibArr[i - 1] + fibArr[i - 2];
    }
    return fibArr[n];
}
```

This second implementation of Fibonacci has a Big O running time of O(n). This is what is called linear time. If you plot the two implementations against each other on a graph, you can see the huge difference in running time between the exponential implementation and the linear one.

```
Size    Linear    Exponential
1       1         2
10      10        1024
100     100       1.2676506e+30
1000    1000      1.071509e+301
```

Consider each of the above numbers as the number of operations that need to be performed. If each operation takes 1 ms (just a rough estimate), you can start to guess how long an algorithm might take.
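As a sketch of that back-of-the-envelope math (the 1 ms per operation figure is just the estimate from above):

```c
/* Converts an operation count into seconds, assuming each operation
   costs 1 ms (the rough estimate used in the text). */
double seconds_for_ops(double ops) {
    const double MS_PER_OP = 1.0;
    return ops * MS_PER_OP / 1000.0;
}
```

At that rate the linear algorithm handles a million items in about 17 minutes, while the exponential column's entry for n = 1000 works out to around 1e298 seconds, hopelessly beyond any real timescale.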

There is a lot more to Big O analysis. Consider these different types of classifications for Big O.

```
Constant time - O(1)
Logarithmic   - O(log n)
Linear        - O(n)
Quadratic     - O(n^2)
Exponential   - O(2^n)
Factorial     - O(n!)
```
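To make one of these classes concrete, here is a sketch of binary search over a sorted int array, the classic O(log n) algorithm: each comparison halves the remaining search range, so a million elements need only about 20 steps.

```c
#include <stddef.h>

/* Binary search on a sorted array: each iteration halves the search
   range [lo, hi), giving O(log n). Returns the index of target, or -1. */
int binary_search(const int *sorted, size_t n, int target) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target)
            return (int)mid;
        else if (sorted[mid] < target)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;
}
```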

When choosing an algorithm it is important to understand the approximate running time and space needed to perform the algorithm.
