I have been given some code and asked to work out the big-O runtimes. Could someone tell me if I am on the right track or not?

```
//program1
int i, count = 0, n = 20000;
for(i = 0; i < n * n; i++)
{
    count++;
}
```

Is that O(n^2)?

```
//number2
int i, inner_count = 0, n = 2000000000;
for(i = 0; i < n; i++)
{
    inner_count++;
}
```

Is this one O(n)?

```
//number3
for(i = 0; i < n; i++)
{
    for(j = 0; j < n; j++)
    {
        count++;
    }
}
```

O(n^2)?

```
//number4
for(i = 0; i < n; i++)
{
    for(j = 0; j < i; j++)
    {
        for(k = 0; k < j; k++)
        {
            inner_count++;
        }
    }
}
```

Is that O(n^3)?

```
//number5
int i, j, inner_count = 0, n = 30000;
for(i = 0; i < n; i++)
{
    for(j = 0; j < i; j++)
    {
        inner_count++;
    }
}
```

Is that one O(n^3)?

```
//number6
int i, j, k, l, pseudo_inner_count = 0, n = 25;
for(i = 0; i < n; i++)
{
    for(j = 0; j < i*i; j++)
    {
        for(k = 0; k < i*j; k++)
        {
            pseudo_inner_count++;
            for(l = 0; l < 10; l++);
        }
    }
}
```

I'm very confused about this one. O(n^3)?

```
//number7
int i, j, k, pseudo_inner_count = 0, n = 16;
for(i = n; i > 1; i /= 2)
{
    for(j = 0; j < n; j++)
    {
        pseudo_inner_count++;
        for(k = 0; k < 50000000; k++);
    }
}
```

O(n)? (I get more lost as they get harder)

```
//number8
int i, j, pseudo_inner_count = 0, n = 1073741824;
for(i = n; i > 1; i /= 2)
{
    pseudo_inner_count++;
    for(j = 0; j < 50000000; j++);
}
```

O(n^2)?

One more I didn't see and have absolutely no idea about:

```
#include <stdio.h>

int square(int m);

int main(void)
{
    int i, wrong_count = 0, n = 200;
    for(i = 0; i < square(n); i++)
    {
        wrong_count++;
    }
    printf("%d %d\n", wrong_count, square(wrong_count));
    return 0;
}

int square(int m)
{
    int i, j = 0;
    for(i = 0; i < m * m; i++)
    {
        j++;
    }
    return j;
}
```

If anyone could clarify these and help me understand them better I would be very grateful.

- number5 is O(N^2)
- number6 is O(N^6)
- number7 is O(N log N)
- number8 is O(log N)

The rest seem correct.

Try to understand how many operations are done as a function of N. Constant-size operations, such as

```
for (int i = 0; i < 333333333; ++i) { j++; }
```

are in fact O(1).

number5 = O(n^2) -- the inner loop goes from 0 up to i, which is at most n-1, so on average it runs about n/2 times, i.e. O(n); combined with the outer O(n) loop, that gives O(n^2).

number6 = O(n^6) -- the 2nd loop runs up to i^2, which is O(n^2), and the 3rd runs up to i*j, which is O(n^3) (n * n^2). The 4th, innermost loop is O(1), so it doesn't count.

number7 = O(n log n) -- the first loop is O(log2(n)) (because you keep dividing the index by 2), the 2nd loop is O(n) (1..n), and the inner loop is again O(1) because it doesn't depend on n.

number8 = O(log n) -- same reason as above.

The rest are fine.

Your answers to 5, 6, 7, and 8 are incorrect if my quick traces are right.

Below is a trace of number8:

```
1: int i, j, pseudo_inner_count = 0, n = 1073741824;
2:
3: for(i = n; i > 1; i /= 2)
4: {
5:     pseudo_inner_count++;
6:     for(j = 0; j < 50000000; j++);
7: }
```

So, line 5 is a primitive operation and thus O(1); the same goes for the check and assignment in line 3. Line 6 looks like it's going to be something big, but because it always runs exactly 50000000 times, it's a constant-time operation, O(1), as well. That leaves us with this shell to consider:

```
1: int i, j, pseudo_inner_count = 0, n = 1073741824;
2:
3: for(i = n; i > 1; i /= 2)
4: {
5: }
```

I'm not in the business of doing someone's homework for free, but I'll finish this one off. The loop reduces i by dividing it by two, so it takes about log_2(n) iterations to reach 1, giving a runtime of O(log n).
