Why is this statement true: log_2(n) is O(log_3(n))?

I don't understand this. Doesn't big O mean an upper bound on something?

Isn't log_2(n) bigger than log_3(n)? When I graph them, log_2(n) is above log_3(n).

It is because changing the base of a logarithm is the same as multiplying it by a constant, and big O does not care about constants.

`log_a(b) = log_c(b) / log_c(a)`

So to get from `log2(n)` to `log3(n)` you need to multiply it by `log3(2)` (equivalently, divide it by `log2(3)`).

In other words, `log2(n) = log3(n) / log3(2)`.

`log3(2)` is a constant, and `O(c*n) = O(n)`, thus `O(log2(n)) = O(log3(n))`.
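To see that constant factor concretely, here is a quick numeric check (a sketch in Python; the sample values of `n` are my own, chosen arbitrarily):

```python
import math

# log2(n) / log3(n) collapses to ln(3)/ln(2) = log2(3) ~ 1.585,
# a constant that does not depend on n at all.
expected = math.log(3, 2)  # log base 2 of 3

for n in [10, 1000, 10**6, 10**12]:
    ratio = math.log2(n) / math.log(n, 3)
    # the ratio never changes, no matter how large n gets
    assert abs(ratio - expected) < 1e-9
```

Because the ratio is a fixed constant, the two functions differ only by the kind of multiplicative factor that big O discards.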

Big O doesn't deal with constant factors, and the difference between log_x(n) and log_y(n) is a constant factor.

To put it a little differently, the base of the logarithm basically just scales the curve on the graph vertically. Big-O isn't concerned with that scaling, only with the *shape* of the curve. If you can get one curve to match another by stretching or squashing it vertically by a constant factor, then as far as Big-O notation is concerned, they're the same function and the same curve.

To try to put this in perspective, a drawing of some of the more common curve shapes may be useful.

It depends on the context in which O notation is used. When you use it in algorithmic complexity reasoning, you are interested in the asymptotic behaviour of a function, i.e. how it grows or decreases as it tends to (plus or minus) infinity (or another accumulation point).

Therefore, although `f(n) = 3n` is always less than `g(n) = 1000n`, they both belong to `O(n)`, since both grow linearly (according to their expressions) asymptotically.

The same reasoning applies to the logarithm case you posted: logarithms in different bases differ by a constant factor, but share the same asymptotic behaviour.

Changing context: if you were interested in the exact performance of an algorithm, with your estimates being exact rather than approximate, you would of course prefer the lower one. In general, though, computational complexity comparisons are approximations, and are therefore done via asymptotic reasoning.
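The `f`/`g` example above can be checked directly against the formal definition of big O (f is O(g) if there exist c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0). A minimal sketch, where the witness constants are my own choices:

```python
def f(n):  # 3n
    return 3 * n

def g(n):  # 1000n
    return 1000 * n

# f is O(n): witness c = 3, n0 = 1
assert all(f(n) <= 3 * n for n in range(1, 10_000))
# g is O(n) as well: witness c = 1000, n0 = 1
assert all(g(n) <= 1000 * n for n in range(1, 10_000))
# g is even O(f): witness c = 334 (anything >= 1000/3 works)
assert all(g(n) <= 334 * f(n) for n in range(1, 10_000))
```

All three checks pass because a single constant multiplier is enough to bound one function by the other, which is exactly what big O ignores.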

There are some good answers here already, so please read them too. To understand why log2(n) is O(log3(n)) you need to understand two things.

1) What is meant by Big O notation. I suggest reading this: http://en.wikipedia.org/wiki/Big_O_notation If you understand this, you will know that `2n` and `16n + 5` are both `O(n)`.

2) How logarithms work. The difference between log_2(N) and log_10(N) is a simple ratio, easily calculated if you want it, as per luk32's answer.

Since logs in different bases differ only by a constant ratio, and Big O is indifferent to minor things like constant multiplying factors, you will often find that O(log N) simply omits the base, because the choice of any constant base (e.g. 2, 3, 10, *e*) makes no difference in this context.
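Tying the two points together against the formal definition: one concrete witness pair (my own choice of numbers, for illustration) is c = log2(3) ≈ 1.585 and n0 = 2, which makes log2(n) <= c * log3(n) hold for all n >= n0:

```python
import math

c = math.log(3, 2)   # log2(3) ~ 1.585, the exact base-conversion factor
n0 = 2

# Verify the Big O inequality log2(n) <= c * log3(n) over a range of n >= n0.
# Equality holds exactly, so a tiny tolerance absorbs floating-point error.
for n in range(n0, 100_000):
    assert math.log2(n) <= c * math.log(n, 3) + 1e-9
```

Since such a c and n0 exist, log2(n) is O(log3(n)) by definition; swapping the roles of the bases gives the reverse direction too.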
