I have a couple of questions about algebra with big-O notation:

If `f(n) = O(g(n))`:

1. Is `log(f(n)) = O(log(g(n)))`?
2. Is `N^{f(n)} = O(N^{g(n)})`? (where `N` is any real number greater than 1)

Is `log(f(n)) = O(log(g(n)))`?

Yes, provided `g(n)` grows without bound. From `f(n) <= c * g(n)` we get `log(f(n)) <= log(c) + log(g(n))`, and the additive constant `log(c)` is absorbed once `log(g(n))` is bounded away from zero. For example, take `f(n) = n` and `g(n) = n^2`. Here `f(n) = O(g(n))`, and `log(f(n)) = log(n)` while `log(g(n)) = 2 log(n)`, so `log(f(n)) = O(log(g(n)))` as well.
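It is worth checking numerically what happens with `f(n) = n` and `g(n) = n^2` (a small sketch I have added for illustration; it is not part of the original post):

```python
import math

# For f(n) = n and g(n) = n^2, the ratio log(f(n)) / log(g(n))
# is exactly 1/2 for every n >= 2, since log(n^2) = 2 * log(n).
# A constant ratio means log(f(n)) = O(log(g(n))) for this pair.
ratios = [math.log(n) / math.log(n**2) for n in range(2, 1000)]
print(min(ratios), max(ratios))  # both are 0.5: the ratio never varies
```

The ratio staying constant at 1/2 is exactly the boundedness that big-O requires.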

Is `N^{f(n)} = O(N^{g(n)})`?

No, this is not true in general. `f(n) = O(g(n))` only bounds the ratio `f(n)/g(n)`, but `N^{f(n)} / N^{g(n)} = N^{f(n) - g(n)}`, which is bounded only when the *difference* `f(n) - g(n)` is bounded above.

Take `f(n) = 2n` and `g(n) = n` (with `N = 2`). It is true that `2n` is `O(n)`. But consider the ratio

`2^{f(n)} / 2^{g(n)} = 2^{2n} / 2^{n} = 2^{n}`

This ratio is not bounded: it goes to infinity as `n` goes to infinity. So `2^{2n}` is not `O(2^n)`, i.e. `2^{f(n)}` is not `O(2^{g(n)})` in this case.
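A quick sanity check in Python (again, a sketch added for illustration, not from the original answer) makes the unbounded growth concrete:

```python
# Counterexample check: with f(n) = 2n and g(n) = n, the ratio
# 2^f(n) / 2^g(n) simplifies to 2^n, which grows without bound.
# Integer arithmetic keeps the division exact.
ratios = [2**(2 * n) // 2**n for n in (1, 5, 10, 20)]
print(ratios)  # [2, 32, 1024, 1048576] -- each entry equals 2**n
```

Since no constant `c` can dominate `2^n` for all large `n`, `2^{2n} = O(2^n)` fails.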
