I think this is probably a beginner question about big-O notation. Say, for example, I have an algorithm that breaks apart an entire list recursively (O(n)) and then puts it back together (O(n)). I assume this means the efficiency is O(n) + O(n). Does this simplify to 2O(n), O(2n), or O(n)? From what I know about this notation, it would be O(2n), and using the rules of asymptotic notation you can drop the 2, giving an efficiency of O(n).
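For concreteness, here is a hypothetical sketch (in Python) of the kind of algorithm described: one pass that recursively breaks the list into single-element pieces, and one pass that rebuilds it. Counting one unit of work per element per pass, the total is O(n) + O(n). (The exact functions are made up for illustration; note that the list slicing in the sketch itself copies, so this counts recursive calls, not per-call costs.)

```python
def break_apart(lst):
    """Recursively peel off the head element: one call per element, O(n) calls."""
    if not lst:
        return []
    # lst[1:] copies the tail; acceptable for a sketch, an index-based
    # version would avoid the extra copying.
    return [[lst[0]]] + break_apart(lst[1:])

def put_together(pieces):
    """Rebuild the list from the single-element pieces: O(n) appends."""
    result = []
    for piece in pieces:
        result.extend(piece)
    return result

data = [3, 1, 4, 1, 5]
assert put_together(break_apart(data)) == data
```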

If we were trying to find a lower bound, though, can this rule still apply? If Ω(n) + Ω(n) = Ω(2n), can you still drop the 2? I would think not, since you would actually be lowering the lower bound (since n < 2n).

It's been a while, but I think you're right in both cases.

**UPDATE**

Actually, it looks like Ω(n) + Ω(n) = Ω(n) after all.
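A quick way to see why: if f(n) ≥ a·n and g(n) ≥ b·n for large n, then f(n) + g(n) ≥ (a + b)·n, which is still at least a constant times n, so the sum is still Ω(n). A small sanity check in Python, using made-up example functions:

```python
# Two functions that are each Omega(n): bounded below by a constant times n.
def f(n):
    return 3 * n + 5      # f(n) >= 3*n, so f is Omega(n) with a = 3

def g(n):
    return n              # g(n) >= 1*n, so g is Omega(n) with b = 1

# Their sum f(n) + g(n) = 4*n + 5 is bounded below by (3 + 1)*n,
# hence the sum is still Omega(n).
assert all(f(n) + g(n) >= 4 * n for n in range(10_000))
```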

> Does this simplify to 2O(n), O(2n), or O(n)?

All of the above, but the first two are overly complex. O(n) would be the canonical notation.

2*N is proportional to N, so if the maximum cost is no more than proportional to 2*N for sufficiently large N (i.e. O(2*N)), then it is also no more than proportional to N for sufficiently large N (i.e. O(N)).

> If we were trying to find a lower bound, though, can this rule still apply?

Most definitely yes.

2*N is proportional to N, so if the minimum cost is no less than proportional to 2*N for sufficiently large N (i.e. Ω(2*N)), then it is also no less than proportional to N for sufficiently large N (i.e. Ω(N)).

For example, say you have an algorithm that takes 300 ms + 100*N ms to execute. Its running time is Θ(N), and thus also Ω(N).

What if the algorithm took twice as long to execute? Even at 600 ms + 200*N ms, its running time is still Θ(N), and thus still Ω(N).
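This scaling argument can be checked numerically with the two cost functions above: doubling the input roughly doubles the cost for both, which is the signature of linear growth, regardless of the constant factor.

```python
def cost_a(n):
    return 300 + 100 * n   # 300 ms + 100*N ms

def cost_b(n):
    return 600 + 200 * n   # twice as slow, but the same growth rate

# Doubling the input: the cost ratio approaches 2 as n grows, for both
# functions -- the constant factor does not affect how they scale.
for n in (10, 1_000, 100_000):
    print(n, cost_a(2 * n) / cost_a(n), cost_b(2 * n) / cost_b(n))
```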

The most important thing to realise to understand Θ(N) and the like is that these notations are used to study how well something *scales*. That one algorithm takes twice as long as another doesn't say anything about how well either scales, so it shouldn't be a surprise that they can have the same Ω() for the lower bound of their speed.

It would simplify, usually to O(n), indicating that the amount of time taken to solve the problem is proportional to the problem size. In this case it may be clearer to write 2O(n), though it means exactly the same thing.

I believe according to the definition of Big-O:

If f(n) <= c*g(n) for some constant c and all sufficiently large values of n, then f(n) = O(g(n)). In your example, g(n) = n.

So, for example, if it is possible to pick some value for c which makes f(n) <= cn for sufficiently large n, then you can say that f(n) = O(n).
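As an illustration (with a made-up f), the definition can be verified by exhibiting the witness constants explicitly:

```python
def f(n):
    return 2 * n + 7          # hypothetical running-time function

# Claim: f(n) = O(n).  Witnesses: c = 3 and n0 = 7 work,
# because 2n + 7 <= 3n exactly when n >= 7.
c, n0 = 3, 7
assert all(f(n) <= c * n for n in range(n0, 10_000))
```

Note that the bound only has to hold from n0 onwards; below n0 (e.g. at n = 6) it may fail, which is why the definition says "sufficiently large n".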

Big Omega is defined similarly.
