Possible Duplicate:

Big Theta Notation - what exactly does big Theta represent?

I understand it in theory, I guess, but what I'm having trouble grasping is the application of the three.

In school, we always used Big O to denote the complexity of an algorithm. Bubble sort was O(n^2) for example.

Now, after reading some more theory, I get that Big O is not the only measure; there are at least two other interesting ones.

But here's my question:

Big O is the upper bound, Big Omega is the lower bound, and Big Theta is a mix of the two. But what does that mean conceptually? I understand what it means on a graph; I've seen a million examples of that. But what does it mean for algorithm complexity? How does an "upper bound" or a "lower bound" mix with that?

I guess I just don't get its application. I understand that if, after some value n_0, f(x) is no greater than g(x) multiplied by some constant c, then f(x) is considered O(g(x)). But what does that mean practically? Why would we be multiplying g(x) by some constant c? Hell, I thought with Big O notation constant multiples didn't matter.

The big O notation, and its relatives, the big Theta, the big Omega, the small o and the small omega, are ways of saying something about how a function behaves at a limit point (for example, when approaching infinity, but also when approaching 0, etc.) without saying much else about the function. They are commonly used to describe the running time and space of algorithms, but they also appear in other areas of mathematics concerned with asymptotic behavior.

The semi-intuitive definition is as follows:

A function g(x) is said to be O(f(x)) if, "from some point on", g(x) is no greater than c*f(x), where c is some constant.

The other definitions are similar: Theta demands that g(x) lie between two constant multiples of f(x), Omega demands that g(x) > c*f(x), and the small versions demand that the corresponding inequality hold for every such constant c.
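To make the definition concrete, here is a small sketch of my own (the functions g and f and the witnesses c and n0 are made-up examples): g(x) = 3x^2 + 10x is O(x^2), witnessed by c = 4 and n0 = 10, and we can check the inequality numerically on a finite range.

```python
def g(x):
    return 3 * x**2 + 10 * x   # example function: 3x^2 + 10x

def f(x):
    return x**2                # candidate bound: x^2

c, n0 = 4, 10                  # witnesses: 10x <= x^2 once x >= 10,
                               # so 3x^2 + 10x <= 4x^2 from that point on

# Check g(x) <= c*f(x) on a finite sample of points past n0.
assert all(g(x) <= c * f(x) for x in range(n0, 10_000))

# Below n0 the inequality can fail; that's why "from some point on" matters.
assert g(9) > c * f(9)         # 333 > 324
```

The point of the constant c and the threshold n0 is exactly what the definition says: we only care about behavior past some point, up to a constant factor.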

But why is it interesting to say, for example, that an algorithm has run time of O(n^2)?

It's interesting mainly because, in theoretical computer science, we are most interested in how algorithms behave for large inputs. This is true because on small inputs algorithm run times can vary greatly depending on implementation, compilation, hardware, and other such things that are not really interesting when analyzing an algorithm theoretically.

The rate of growth, however, usually depends on the nature of the algorithm, and to improve it you need deeper insight into the problem you're trying to solve. This is the case, for example, with sorting algorithms, where you can get a simple algorithm (Bubble Sort) to run in O(n^2), but to improve this to O(n log n) you need a truly new idea, such as that introduced in Merge Sort or Heap Sort.
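To see that gap concretely, here is a rough illustration of my own (not from any particular textbook): counting the comparisons made by a straightforward Bubble Sort versus a comparison-counting Merge Sort on a reversed list of 1024 elements.

```python
def bubble_sort_comparisons(n):
    # Worst case: a reversed list forces every pass to compare and swap.
    data = list(range(n, 0, -1))
    count = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            count += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return count  # exactly n*(n-1)/2 comparisons

def merge_sort_comparisons(data):
    # Returns (sorted list, number of comparisons made while merging).
    if len(data) <= 1:
        return data, 0
    mid = len(data) // 2
    left, cl = merge_sort_comparisons(data[:mid])
    right, cr = merge_sort_comparisons(data[mid:])
    merged, count = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count

n = 1024
print(bubble_sort_comparisons(n))   # 523776, i.e. n*(n-1)/2
_, cnt = merge_sort_comparisons(list(range(n, 0, -1)))
print(cnt)                          # 5120, i.e. (n/2)*log2(n) for this input
```

Roughly half a million comparisons versus a few thousand, and the ratio only grows with n; that is what the jump from O(n^2) to O(n log n) buys you.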

On the other hand, if you have an algorithm that runs in exactly 5n seconds, and another that runs in 1000n seconds (which is the difference between a long yawn and a lunch break for n=3, for example), then by the time you get to n=1000000000000 the difference in scale seems less important. If you have an algorithm that takes O(log n), though, you'd have to wait log(1000000000000)=12 seconds (taking the logarithm base 10, perhaps multiplied by some constant), instead of almost 317,098 years, which, no matter how big the constant is, is a completely different scale.
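As a back-of-the-envelope check of those numbers (a sketch of mine, not part of the original argument; the 10-seconds-per-item constant is made up so the linear total lands near the figure above):

```python
import math

n = 10**12

# Hypothetical linear algorithm at 10 seconds per item (constant chosen
# only so the total matches the ~317,098-year figure quoted above).
linear_seconds = 10 * n

# Logarithmic algorithm: log base 10 of 10^12 is 12.
log_seconds = math.log10(n)

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
print(log_seconds)                          # 12.0
print(linear_seconds // SECONDS_PER_YEAR)   # 317097 -> almost 317,098 years
```

Twelve seconds versus hundreds of thousands of years: no constant factor on the logarithm closes that gap, which is why the growth rate, not the constant, is what the notation tracks.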

I hope this makes things a little clearer. Good luck with your studies!
