I'm revising the formal definitions of Big O and the other associated bounds and something is tripping me up. In the book I'm reading (Skiena) Big O is defined as:

f(n) = O(g(n)) when there exists a constant c such that f(n) is always <= c*g(n) for all values of n > n0

This generally makes sense to me. We are only concerned with values of n large enough that the growth rates actually matter. But why multiply g(n) by c? It seems like I could choose a very large value for c and make the whole thing arbitrary by blowing out the size of smaller g(n) values.

Secondary question: When classifying an algorithm into a complexity class, is the general rule of thumb to just choose the lowest growth class that still holds according to the definition of Big O? By the definition it seems valid to classify a constant-time algorithm as O(n!), simply because f(n) will be <= c*g(n). Of course this offers no value.

Thanks!

The reason you can multiply `g(n)` by an arbitrary constant `c` is that you want functions that are only a constant factor away from `f(n)`. In simple terms, you perform your analysis based on `n`, not on constants, so what you are concerned with is how those functions change depending on the input size only. For instance, given `n^3` and `n`, there is no way to choose a `c` such that `c*n >= n^3` for all `n`: you would need `c >= n^2`, which isn't a constant anymore, so `g(n)` will keep falling behind `f(n)` as `n` grows.

As Ed mentioned, this analysis won't give you an exact run time but a **growth rate** depending on input **n**. If `g(n)` and `f(n)` are always only (at most) a constant factor away from each other, then the growth rate will be the same for both.
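The `n` vs. `n^3` point can be spot-checked numerically. The helper below is purely illustrative: for any fixed `c`, it finds the (small) input at which `n^3` overtakes `c*n`, showing that no constant ever holds for all `n`.

```python
# For f(n) = n^3 and g(n) = n, no constant c makes c*g(n) >= f(n) for all n:
# c*n >= n^3 would require c >= n^2, which grows without bound.
def first_n_where_f_exceeds(c):
    """Smallest n at which n**3 exceeds c * n (illustrative helper)."""
    n = 1
    while n**3 <= c * n:
        n += 1
    return n

for c in (10, 1_000, 1_000_000):
    print(f"c = {c:>9}: n^3 overtakes c*n at n = {first_n_where_f_exceeds(c)}")
```

However large you make `c`, the crossover point only moves out to roughly `sqrt(c)`; it never disappears.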

In this kind of time complexity analysis we don't really care about constants, which in most cases is fine, **but** in some cases you should take them into account. For instance, if you are working on small inputs, an O(n^2) algorithm might actually be faster than an O(n log n) one because of the constants.
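That small-input effect can be illustrated with made-up cost models (the constants 2 and 100 below are hypothetical, chosen only to show a crossover):

```python
import math

# Hypothetical cost models: a quadratic algorithm with a small constant
# vs. an n log n algorithm with a large one.
def quad_cost(n):
    return 2 * n * n

def nlogn_cost(n):
    return 100 * n * math.log2(n)

for n in (8, 64, 512, 4096):
    winner = "quadratic" if quad_cost(n) < nlogn_cost(n) else "n log n"
    print(f"n = {n:>4}: quadratic wins? {winner}")
```

Below the crossover (here somewhere between n = 64 and n = 512) the "worse" complexity class is actually cheaper; beyond it, the asymptotics take over.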

Second question: yes, this is a common problem with **BigO**; you can use an arbitrarily large function, which is why we usually try to find the "tightest" `g(n)` we can, otherwise there's not much point in finding it. That's also why **BigTheta** is much more useful than **BigO**, as it gives you a tight bound instead of just an upper bound.

> When choosing to classify an algorithm into a complexity class, is the general rule of thumb to just choose the lowest growth class that still holds according to the definition of Big O?

In terms of notation, just like we have big-O for upper bounds, we have big-Omega for lower bounds and big-Theta for when you are able to show that the upper bound and the lower bound match.
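A tight (Theta) bound means both directions hold at once. As a hypothetical example, `f(n) = 3n^3 + 20n^2 + 5` is Theta(n^3): it can be sandwiched between two constant multiples of `n^3` (the constants 3 and 28 below come from bounding each lower-order term by `n^3` for `n >= 1`):

```python
# f(n) = 3n^3 + 20n^2 + 5 is Theta(n^3):
#   lower bound: 3*n^3 <= f(n) for all n >= 1
#   upper bound: f(n) <= (3 + 20 + 5)*n^3 = 28*n^3 for all n >= 1
f = lambda n: 3 * n**3 + 20 * n**2 + 5

# Spot check (not a proof) over a range of inputs:
assert all(3 * n**3 <= f(n) <= 28 * n**3 for n in range(1, 10_000))
print("sandwich holds with c1 = 3, c2 = 28, n0 = 1")
```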

https://en.wikipedia.org/wiki/Big_O_notation#The_Knuth_definition

Assuming that Knuth quote is correct, we can say that you are not alone in thinking that results involving tight asymptotic bounds are more useful :) Sometimes people say big-O when they actually mean big-Theta, but other times they just don't care or haven't managed to find the lower bound.

> It seems like I could choose a very large value for c, and make the whole thing arbitrary by blowing out the size of smaller g(n) values.

For functions with different asymptotic growth rates, the `c` doesn't matter. No matter how big or how small you choose `c` to be, there will be an `n` at which the faster-growing function catches up. The constant factor is there to allow you to ignore constant multipliers when things have the same growth rate. For example, when it comes to big-O, `f(x) = 2x` and `g(x) = 3x` both have the same growth rate.
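You can check that `2x` and `3x` satisfy the definition in both directions with explicit constants (a spot check over a range, not a proof):

```python
# f(x) = 2x is O(3x) with c = 1, and g(x) = 3x is O(2x) with c = 2,
# so the two are within a constant factor of each other: same growth rate.
f = lambda x: 2 * x
g = lambda x: 3 * x

assert all(f(x) <= 1 * g(x) for x in range(1, 10_000))  # f = O(g) with c = 1
assert all(g(x) <= 2 * f(x) for x in range(1, 10_000))  # g = O(f) with c = 2
print("2x and 3x bound each other by constant factors")
```

This is exactly the case the constant `c` exists for: without it, `3x` would not be O(`2x`), even though the two clearly grow at the same rate.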
