I know that T(n) = T(n/2) + θ(1) works out to O(log n), and my book says this is the case for binary search. But how do you know that? Is it just because binary search cuts the problem in half each time, so it is O(log n)?

But for

```
T(n) = 2T(n/2) + θ(1)
```

why is the result O(n) and not O(log n), when this algorithm also divides the problem in half each time?

And how can

```
T(n) = 2T(n/2) + θ(n)
```

work out to O(n log n)? I can see that the O(n) comes from the θ(n) term and the O(log n) from T(n/2).

I am really confused about how to determine the Big O of an algorithm, to the point that I don't even know how to word this question properly. I hope it makes sense.

Thanks in advance!

An intuitive way to solve these problems is to unfold the recursive formula and look at the result.

Let's assume θ(1) is actually 1 and θ(n) is n, for simplicity:

```
T(n)   = T(n/2) + 1 = T(n/4) + 1 + 1 = T(n/8) + 1 + 1 + 1 = ... =
       = T(1) + 1 + ... + 1  [log n times] = log n

T'(n)  = 2T'(n/2) + 1 = 2(2T'(n/4) + 1) + 1 = 4T'(n/4) + 2 + 1 =
       = 8T'(n/8) + 4 + 2 + 1 = ... = 2^(log n) + 2^(log n - 1) + ... + 1 =
       = n + n/2 + ... + 1 = 2n - 1

T''(n) = 2T''(n/2) + n = 2(2T''(n/4) + n/2) + n = 4T''(n/4) + 2*(n/2) + n =
       = 8T''(n/8) + 4*(n/4) + 2*(n/2) + n = ... = n + n + ... + n  [log n times] = n log n

To formally prove these equations, you should use induction: assume `T(n/2) = X`, and use that to prove `T(n) = Y`, as expected.

**For example**, for the first formula [`T(n) = T(n/2) + 1`], assume the base case is `T(1) = 0`. The base trivially holds for n = 1.

Assume `T(k) <= log k` for every `k <= n-1`, and prove it for `k = n`:

`T(n) = T(n/2) + 1 <= (induction hypothesis) log(n/2) + 1 = log(n/2) + log(2) = log(n/2 * 2) = log(n)`
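To connect this back to binary search: it is exactly an algorithm with the recurrence T(n) = T(n/2) + θ(1), because each step does a constant amount of work (one comparison) and then recurses on one half. A minimal sketch, assuming a sorted list (the function name and interface are my own, not from any particular book):

```python
def binary_search(arr, target):
    """Find target in sorted arr; each loop iteration halves the range,
    so the cost recurrence is T(n) = T(n/2) + Theta(1) = O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # constant work per step: the Theta(1) term
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # not found
```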

I find an easy way to understand these is to consider the time the algorithm spends at each step of the recurrence, and then add it all up to find the total time. First, let's consider

```
T(n) = T(n/2) + O(1)
```

where n = 64. Let's add up how much time the algorithm takes at each step:

```
T(64) = T(32) + 1 ... 1 so far
T(32) = T(16) + 1 ... 2 so far
T(16) = T(08) + 1 ... 3 so far
T(08) = T(04) + 1 ... 4 so far
T(04) = T(02) + 1 ... 5 so far
T(02) = T(01) + 1 ... 6 so far
T(01) = 1 ... 7 total
```

So, we can see that the algorithm took '1' time at each step. And, since each step divides the input in half, the total work is the number of times the algorithm had to divide the input in two... which is log2 n.
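The table above can be checked mechanically; a small sketch (the function name is my own) that evaluates the recurrence directly, using the same base case T(1) = 1:

```python
def t(n):
    """T(n) = T(n/2) + 1 with T(1) = 1, matching the table above."""
    if n == 1:
        return 1
    return t(n // 2) + 1

# t(64) counts one unit of work per halving plus the base case,
# i.e. log2(64) + 1 = 7, the same "7 total" as the table.
```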

Next, let's consider the case where

```
T(n) = 2T(n/2) + O(1)
```

However, to make things simpler, we'll build up from the base case T(1) = 1.

```
T(01) = 1 ... 1 so far
```

now we have to do T(01) twice and then add one, so

```
T(02) = 2T(01) + 1 ... (1*2)+1 = 3
```

now we have to do T(02) twice, and then add one, so

```
T(04) = 2T(02) + 1 ... (3*2)+1 = 7
T(08) = 2T(04) + 1 ... (7*2)+1 = 15
T(16) = 2T(08) + 1 ... (15*2)+1 = 31
T(32) = 2T(16) + 1 ... (31*2)+1 = 63
T(64) = 2T(32) + 1 ... (63*2)+1 = 127
```

So we can see that here the algorithm has done 127 units of work, which is 2n - 1: the input size multiplied by a constant (2) plus a constant (-1), which is O(n). Intuitively, this recursion sums the geometric series n(1 + 1/2 + 1/4 + 1/8 + ...), which converges to 2n.

Try using this method on T(n) = 2T(n/2) + n and see if it makes more sense to you.
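If you apply the method, a quick sketch (my own, assuming n is a power of two and the base case T(1) = 0) confirms that the total comes out to exactly n * log2(n):

```python
def t(n):
    """T(n) = 2T(n/2) + n with T(1) = 0, for n a power of two."""
    if n == 1:
        return 0
    return 2 * t(n // 2) + n

# Each of the log2(n) levels of the recursion contributes n work in total,
# e.g. t(64) = 64 * log2(64) = 64 * 6 = 384.
```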

One visual way to find T(n) for a recurrence equation is to sketch it as a tree; then:

`T(n) = number of nodes * time specified on each node.`

In your case `T(n) = 2T(n/2) + 1`, I write the `1` in the node itself and expand it into two `T(n/2)` children. Note that `T(n/2) = 2T(n/4) + 1`, so I do the same for each of them, and so on.

```
                      T(n)+1
                    /        \
            T(n/2)+1          T(n/2)+1
            /      \          /      \
      T(n/4)+1  T(n/4)+1  T(n/4)+1  T(n/4)+1
        ...       ...       ...       ...
      T(1)  T(1)  ....................  T(1)
```

In this tree, a complete binary tree of height log(n), the number of nodes equals

```
2^(height+1) - 1 = 2^(log(n)+1) - 1 = 2n - 1
```

Then `T(n) = (2n - 1) * 1 = O(n)`
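The node count can be sanity-checked with a quick sketch (my own, assuming n is a power of two): every internal node of the recursion tree spawns two children of half the size, so counting them reproduces the 2n - 1 total.

```python
def count_nodes(n):
    """Nodes in the recursion tree of T(n) = 2T(n/2) + 1, down to T(1) leaves."""
    if n == 1:
        return 1                      # a leaf of the tree
    return 1 + 2 * count_nodes(n // 2)  # this node plus its two subtrees

# count_nodes(64) gives 127 = 2*64 - 1, matching the formula above.
```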
