*Please excuse the intentional verbosity*

Here is a small program excerpt:

```
for (i = 1; i <= n; i++) {
    j = 0;
    while (j <= n)   /* note: no stray semicolon here, or the loop body is empty */
        j = j + 1;
}
```

If I have to find the complexity (Big O) of this code:

I'll first count how many times the inner loop executes, which in this case is n+1 times: from 1 to n is n iterations, and since j starts at 0, that adds one more pass. So the while loop runs n+1 times in total.

The outer for loop executes n times, because from 1 to n the count is n. Hence the grand total is n+1+n, which is 2n+1.

Discarding all constants, it's O(n).

Is this correct? The web page where I found this example says the loop will run n(n+1)/2 times, which I don't understand. Please help!

No. For each value `i` takes (and there are `n` of those), you run the `while` (inner) loop `n+1` times (j=0, j=1, ..., j=n). Thus, the total number of times the line `j=j+1` is executed is `n*(n+1)`, which is in `O(n^2)`.
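If it helps, you can check this count empirically. The sketch below (in Python, just for convenience; the helper name `count_inner_steps` is made up here) instruments the loops and confirms the total number of `j = j + 1` executions is exactly n*(n+1):

```python
def count_inner_steps(n):
    """Count how many times the inner increment runs for the given n."""
    count = 0
    for i in range(1, n + 1):   # outer loop: i = 1 .. n  -> n iterations
        j = 0
        while j <= n:           # inner loop: j = 0 .. n  -> n + 1 iterations
            j = j + 1
            count += 1          # one count per execution of j = j + 1
    return count

# The total grows quadratically, matching n*(n+1), not 2n+1:
for n in (1, 5, 10, 100):
    assert count_inner_steps(n) == n * (n + 1)
```

Note the key point the question missed: the inner loop's n+1 iterations happen *for each* of the n outer iterations, so the counts multiply rather than add.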
