What is the difference between signed and unsigned int?

As you are probably aware, `int`s are stored internally in binary. Typically an `int` contains 32 bits, but in some environments it might contain 16 or 64 bits (or even a different number, usually but not necessarily a power of two).

But for this example, let's look at 4-bit integers. Tiny, but useful for illustration purposes.

Since there are four bits in such an integer, it can assume one of 16 values; 16 is two to the fourth power, or 2 times 2 times 2 times 2. What are those values? The answer depends on whether the integer is a `signed int` or an `unsigned int`. With an `unsigned int`, the value is never negative; there is no sign associated with the value. Here are the 16 possible values of a four-bit `unsigned int`:

```
bits value
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 10
1011 11
1100 12
1101 13
1110 14
1111 15
```

... and here are the 16 possible values of a four-bit `signed int`:

```
bits value
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 -8
1001 -7
1010 -6
1011 -5
1100 -4
1101 -3
1110 -2
1111 -1
```

As you can see, for `signed int`s the most significant bit is `1` if and only if the number is negative. That is why, for `signed int`s, this bit is known as the "sign bit".

Sometimes we know in advance that the value stored in a given integer variable will always be non-negative, for example when it is only being used to count things. In such a case we can declare the variable unsigned, as in `unsigned int num_students;`. With such a declaration, the range of permissible integer values (for a 32-bit `int`) shifts from -2147483648 .. +2147483647 to 0 .. 4294967295. Thus, declaring an integer as unsigned almost doubles the largest value it can hold.

Since there is no accepted answer here, I thought I'd add more information to the knowledge pool. I highly recommend reading this; hope it helps! Cheers!

In layman's terms, an unsigned int is an integer that cannot be negative and thus has a higher range of positive values it can assume. A signed int is an integer that can be negative, but has a lower positive range in exchange for the negative values it can assume.

`int` and `unsigned int` are two distinct integer types. (`int` can also be referred to as `signed int`, or just `signed`; `unsigned int` can also be referred to as `unsigned`.)

As the names imply, `int` is a *signed* integer type, and `unsigned int` is an *unsigned* integer type. That means that `int` is able to represent negative values, and `unsigned int` can represent only non-negative values.

The C language imposes some requirements on the ranges of these types. The range of `int` must be at least `-32767` .. `+32767`, and the range of `unsigned int` must be at least `0` .. `65535`. This implies that both types must be at least 16 bits. They're 32 bits on many systems, or even 64 bits on some. `int` typically has an extra negative value due to the two's-complement representation used by most modern systems.

Perhaps the most important difference is the behavior of signed vs. unsigned arithmetic. For signed `int`, overflow has undefined behavior. For `unsigned int`, there is no overflow; any operation that yields a value outside the range of the type wraps around, so for example `UINT_MAX + 1 == 0`.

Any integer type, either signed or unsigned, models a subrange of the infinite set of mathematical integers. As long as you're working with values within the range of a type, everything works. When you approach the lower or upper bound of a type, you encounter a discontinuity, and you can get unexpected results. For signed integer types, the problems occur only for very large negative and positive values, exceeding `INT_MIN` and `INT_MAX`. For unsigned integer types, problems occur for very large positive values **and at zero**. This can be a source of bugs. For example, this is an infinite loop:

```
for (unsigned int i = 10; i >= 0; i--) {
    printf("%u\n", i);
}
```

because `i` is *always* greater than or equal to zero.
