Hello, I just have a simple question: why is the Big O notation of a sorted array O(log N)? The array will already be sorted.

The question is somewhat bogus. The complexity doesn't apply to the array itself but to the algorithm that operates on the array (I assume you mean runtime complexity, not memory).

So, depending on the algorithm used, the X in O(X) for sorting an array can vary widely.

Check these pages for a starter on complexity in general, and on array sorting in particular.

Introduction to algorithm complexity

Big O notation generally makes sense in the context of an algorithm. What operation are you considering when you say the Big O notation is O(log n)?

If you mean searching, then it is O(log n) because you can use binary search. Essentially, you look at the middle element of your array; if it is greater than the element you're searching for, you then search the lower half (in the same way), and vice versa (assuming you haven't already found your element, of course). You can read a more detailed description on Wikipedia.

At each step of the search (looking at the middle element), you cut the portion of the array you must search in half, since you now know on which side of the middle element your target must lie. Of course, this only works with sorted arrays. For unsorted arrays, the only option is linear search, where you examine every element in the array, which takes n/2 inspections on average, i.e. O(n).
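That halving process can be sketched in a few lines of Python (the function name and interface here are my own, not from the question):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            hi = mid - 1  # target can only be in the lower half
        else:
            lo = mid + 1  # target can only be in the upper half
    return -1  # target is not in the array
```

Each iteration discards half of the remaining range, so an array of n elements is exhausted after about log2(n) iterations; that is where the O(log n) comes from. A linear search, by contrast, would walk every element one by one.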

In general, Big O describes the runtime characteristics of algorithms, so you can't just ask what the Big O of a sorted array is; it must refer to some operation on the array. However, you can also consider Big O in terms of the space (memory) taken by a data structure. In that case, a sorted array still takes O(n) space to store n elements.
