I've been trying to understand Big O notation as well as I can, but there are still some things I'm confused about. I keep reading that if something is O(n), it *usually* refers to the worst case of an algorithm, *but that it doesn't necessarily have to refer to the worst case*, which is why we can say, for example, that the best case of insertion sort is O(n). However, I can't really make sense of what that means. I know that if the worst case is O(n^2), the function that represents the algorithm in its worst case grows no faster than n^2 (there is an upper bound). But if you have O(n) as the best case, how should I read that? In the best case, the algorithm grows no faster than n? What I picture is a graph with n as the upper bound.

If the best-case scenario of an algorithm is O(n), then n is an upper bound on how fast the operations of the algorithm grow in the best case, so they cannot grow faster than n... but wouldn't that mean they *can* grow as slowly as log n or even a constant, since those are below the upper bound? That wouldn't make sense, though, because log n or constant time would be a better scenario than O(n), so O(n) WOULDN'T be the best case? I'm so lost lol

Informally speaking, *best case has O(n) complexity* means that when the input meets certain conditions (i.e. is best for the algorithm performed), then the count of operations performed in that best case is linear with respect to n (e.g. is n or 1.5n or 5n). So if the best case is O(n), usually this means that in the best case *it is exactly linear* with respect to n (i.e. asymptotically no smaller and no bigger than that) - see (1). Of course, if in the best case that same algorithm can be proven to perform at most c * log N operations (where c is some constant), then this algorithm's best-case complexity would informally be denoted as O(log N) rather than O(N), and people would say it is O(log N) in its best case.
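To make the insertion-sort example concrete, here is a small sketch (the function name and the way operations are counted are purely illustrative, not any standard library) that counts the comparisons insertion sort performs on sorted versus reverse-sorted input:

```python
# Sketch: count the comparisons insertion sort performs (names are illustrative).
def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1      # one comparison of a[j] against key
            if a[j] > key:
                a[j + 1] = a[j]   # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
print(insertion_sort_comparisons(range(n)))         # sorted input (best case): n - 1
print(insertion_sort_comparisons(range(n, 0, -1)))  # reversed input (worst case): n*(n-1)/2
```

On already-sorted input the count is exactly n - 1 (linear); on reverse-sorted input it is n(n-1)/2 (quadratic). Same algorithm, two different functions of n depending on which inputs you feed it.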

Formally speaking, *"the algorithm's best-case complexity is O(f(n))"* is an informal (and, strictly speaking, incorrect) way of saying *"the algorithm's complexity is Ω(f(n))"* (in the sense of the Knuth definition - see (2)).

See also:

(1) Wikipedia "Family of Bachmann-Landau notations"

(2) Knuth's paper "Big Omicron and Big Omega and Big Theta"

(3) Big Omega notation - what is f = Ω(g)?

I find it easier to think of `O()` as being about ratios than about bounds. It is defined in terms of bounds, so that is a valid way to think of it, but it seems a bit more useful to ask: "if I double the number/size of inputs to my algorithm, does my processing time double (`O(n)`), quadruple (`O(n^2)`), etc.?" Thinking about it that way makes it a little less abstract - at least to me.
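One way to see this ratio view in action is to count basic operations rather than wall-clock time, so the ratios come out exact instead of noisy; the helper names below are just for illustration:

```python
# Sketch: the "doubling" view of O(). Count one basic operation per loop pass.
def linear_ops(n):
    # one operation per element: O(n)
    return sum(1 for _ in range(n))

def quadratic_ops(n):
    # one operation per pair of elements: O(n^2)
    return sum(1 for i in range(n) for j in range(n))

for n in (1000, 2000):
    print(n, linear_ops(n), quadratic_ops(n))
# Doubling n doubles the linear count (ratio 2) and quadruples the quadratic one (ratio 4).
```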

Big-O, Big-Θ, Big-Ω are independent from worst-case, average-case, and best-case.

The notation f(n) = O(g(n)) means f(n) grows *no more quickly than some constant multiple of g(n)*.

The notation f(n) = Ω(g(n)) means f(n) grows *no more slowly than some constant multiple of g(n)*.

The notation f(n) = Θ(g(n)) means both of the above are true.

Note that f(n) here may represent the best-case, worst-case, or "average"-case running time of a program with input size n.
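As a small numeric sanity check of these definitions, take f(n) = 3n + 5 and g(n) = n; the witness constants below are hand-picked for this example, not derived by any general procedure:

```python
# Sketch: checking the Θ definition numerically for f(n) = 3n + 5, g(n) = n.
def f(n): return 3 * n + 5
def g(n): return n

# Hand-picked witnesses: c1 = 3, c2 = 4, n0 = 5.
c1, c2, n0 = 3, 4, 5
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 5000))
# f(n) is sandwiched between constant multiples of g(n) for all n >= n0,
# so f(n) = Ω(n), f(n) = O(n), and therefore f(n) = Θ(n).
```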

Furthermore, "average" can have many meanings: it can mean the *average input* or the *average input size* ("expected" time), or it can mean *in the long run* (amortized time), or both, or something else.

Often, people are interested in the *worst-case* running time of a program, *amortized over the running time of the entire program* (so if something costs *n* initially but only costs 1 for each of the next *n* elements, it averages out to a cost of 2 per element). The most useful thing to measure here is the *least upper bound* on the worst-case time; so, typically, when you see someone asking for the Big-O of a program, this is what they're looking for.
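The classic instance of that averaging argument is appending to a doubling array, which can be sketched as follows (a hand-rolled model for counting purposes, not Python's actual list implementation):

```python
# Sketch: amortized cost of appending to a doubling array. A single append is
# usually one write, but when the array is full, every existing element is
# first copied into a buffer of twice the capacity.
def total_copies(n_appends):
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size        # resize: copy all existing elements
            capacity *= 2
        copies += 1               # write the new element
        size += 1
    return copies

n = 1 << 16
print(total_copies(n) / n)  # close to 2 per element, despite O(n) worst-case appends
```

The resizes cost 1 + 2 + 4 + ... < 2n in total, so spread over n appends the average settles near 2 even though any individual append can cost O(n).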

Similarly, to prove a problem is inherently difficult, people might try to show that the *worst-case* (or perhaps average-case) running time is *at least* a certain amount (for example, exponential).

You'd use Big-Ω notation for these, because you're looking for lower bounds on these.

However, there is no special relationship between worst-case and Big-O, or best-case and Big-Ω.

Both can be used for either; it's just that one pairing is more typical than the other.

So, upper-bounding the *best* case isn't terribly useful. Yes, if the algorithm *always* takes O(n) time, then you can say it's O(n) in the best case, as well as on average and in the worst case. That's a perfectly fine statement, except that *the best case* is usually trivial and hence not interesting in itself.

Furthermore, note that f(n) = n = O(n^2) -- this is technically correct, because f grows more slowly than n^2, but it is *not useful* because it is not the *least* upper bound -- there's a very obvious upper bound that's more useful than this one, namely O(n). So yes, you're perfectly welcome to say the best/worst/average-case running time of a program is O(n!). That's mathematically perfectly correct. It's just useless, because when people ask for Big-O they're interested in the *least* upper bound, not just a random upper bound.

It's also worth noting that *it may simply be insufficient* to describe the running-time of a program as f(n). *The running time often depends on the input itself, not just its size*. For example, it may be that *even* queries are trivially easy to answer, whereas *odd* queries take a long time to answer.
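A contrived sketch of that even/odd situation (entirely made up for illustration) shows why no single f(n) can describe the cost:

```python
# Sketch: a routine whose cost depends on the input's VALUE, not just its size.
# Even queries answer in one step; odd queries do work proportional to the query.
def answer_query(q):
    if q % 2 == 0:
        return 0, 1               # (result, operations): O(1) for even q
    ops = 0
    for _ in range(q):            # simulated expensive path
        ops += 1
    return 0, ops                 # O(q) for odd q

print(answer_query(10**6)[1], answer_query(10**6 + 1)[1])
```

Two inputs of essentially the same size produce wildly different operation counts, so the running time has to be described in terms of the input itself, not just n.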

In that case, you can't just give *f* as a function of *n* -- it would depend on other variables as well. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to figure out *what's an interesting thing to measure*. Using tools in a useful manner needs some creativity, and math is no exception.
