I know that in Big O notation we keep only the highest-order polynomial term because we are placing an asymptotic worst-case bound on running time, but I sometimes get confused about when we can legally drop a factor by treating it as a constant. For example, if our algorithm ran in

O((n^3)/3) --> we pull out the 1/3 fraction, treat it as a constant, drop it, and say our algorithm runs in O(n^3) time.

What about the case of O((n^3)/((log n)^2))? Could we pull out the 1/((log n)^2) factor, treat it as a constant, drop it, and conclude our algorithm is O(n^3)? It does not look like we can, but what differentiates this case from the one above? Both factors are small relative to the leading polynomial term in the numerator, yet in the second case the denominator really pulls the bound down as n grows larger and larger.

Any factor that is fixed and does not depend on the variable (like n) can be treated as a constant; 1/((log n)^2) depends on n, so it is not one. You can pull out the constants, drop the lower-order terms, and classify the result as the complexity. Big O notation also states that if

```
f(x) <= c*g(x)
```

Then `f(x) = O(g(x))`. For example:

`n^3 * 5` -> here 5 is a constant, so the complexity is `O(n^3)`.

`4*(n^3)/((log(n))^2) + 7` -> here 4 and 7 are constants, so the complexity is `O(n^3/(log n)^2)`.
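To see why the 1/((log n)^2) factor cannot be dropped like the 1/3 was, a quick numeric check helps (a sketch in Python; the sample sizes are just illustrative). The ratio between the candidate simpler bound and the actual function stays fixed at 3 in the constant case, but equals (log n)^2 in the log case, which grows without bound:

```python
import math

# If the ratio bound/function is capped by some fixed number, the
# factor can be absorbed into the big-O constant; if the ratio grows
# without bound, it cannot.
for n in [10, 10**3, 10**6, 10**9]:
    constant_case = (n**3) / ((n**3) / 3)            # always 3
    log_case = (n**3) / ((n**3) / math.log(n)**2)    # equals (log n)^2
    print(n, constant_case, log_case)
```

No single constant `c` satisfies `n^3 <= c * n^3/(log n)^2` for all large n, which is exactly why the two cases behave differently.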

At this point, it is a good idea to go back and look at the formal definition of big O notation. Essentially, when we say that `f(x)` is `O(g(x))`, we mean that there exist a constant factor `a` and a starting input `n0` such that `f(x) <= a*g(x)` for all `x >= n0`.

For a concrete example, to prove that `3*x^2 + 17` is `O(x^2)`, we can use `a = 20` and `n0 = 1`.
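Those witness constants can be sanity-checked numerically. This is only an illustrative spot check over a finite range, not a proof:

```python
# Check f(x) <= a * g(x) for the claimed witnesses a = 20, n0 = 1.
f = lambda x: 3 * x**2 + 17
g = lambda x: x**2

a, n0 = 20, 1
assert all(f(x) <= a * g(x) for x in range(n0, 10_000))
```

The tightest point is `x = 1`, where `f(1) = 20` exactly equals `a * g(1) = 20`; beyond that the slack only grows.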

From this definition it becomes easy to see why the constant factors get dropped: it is just a matter of adjusting `a` to compensate. As for your `1/(log n)^2` question, if `f(x)` is `O(g(x))` and `g(x)` is `O(h(x))`, then we also have `f(x)` is `O(h(x))`. So yes, `10*n^3/(log n)^2 + n` is `O(n^3)`, but that is not a tight upper bound, and it is a weaker statement than saying that `10*n^3/(log n)^2 + n` is `O(n^3/(log n)^2)`. For a tight bound you would want to use big-Theta notation instead.
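The transitivity step can be made concrete: if `f <= a*g` beyond `n1` and `g <= b*h` beyond `n2`, then `f <= (a*b)*h` beyond `max(n1, n2)`. A small illustrative check in Python (the witness constants `a = 11`, `b = 1`, and the starting point `n0 = 3` are my choices, not anything canonical):

```python
import math

f = lambda n: 10 * n**3 / math.log(n)**2 + n
g = lambda n: n**3 / math.log(n)**2   # the tight bound shape
h = lambda n: n**3                    # the looser bound shape

a, b = 11, 1   # candidate witness constants
n0 = 3         # starting input: need log(n)^2 >= 1, i.e. n >= e
for n in range(n0, 5000):
    assert f(n) <= a * g(n)           # f is O(g)
    assert g(n) <= b * h(n)           # g is O(h)
    assert f(n) <= (a * b) * h(n)     # hence f is O(h), with constant a*b
```

Both bounds hold, but only `g` tracks the actual growth of `f`; `h` overshoots by an unbounded `(log n)^2` factor.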
