int    float    char
Some of these data types come in different sizes. As we have seen, characters need only 8 bits, since that is enough to encode all the characters we have in the ASCII character set. For numeric data types, the number of bits used to represent the numbers puts limitations on the size of the number which can be stored (or the accuracy with which we represent numbers).
The number of bits we have determines the number of different patterns we can use to represent information.
For example, with 3 bits, we have

    2^3 = 8

combinations; namely:
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
We can interpret these patterns of bits as binary numbers (i.e. numbers in base 2). The binary number system is a positional number system, like decimal, where the value of each digit is either 0 or 1, and the weight is determined by its position in the number. For binary, the weights are powers of 2:
    1 0 1 = 1 * 2^2 + 0 * 2^1 + 1 * 2^0 = 5
In general, for n bits, we have

    2^n

combinations, which can represent numbers in the range

    [ 2^n - 1 .. 0 ]

for unsigned numbers, or

    [ +2^(n-1) - 1 .. -2^(n-1) ]

for signed numbers (using a representation called two's complement).
Can we control the size of integers in C?
Integer storage can actually be allocated in three different sizes:
    short int or short
    int
    long int or long
The ANSI C specification says that short variables should contain at least 16 bits, and long variables should contain at least 32 bits. int variables must be no smaller than short and no bigger than long.
Generally, the size of int is chosen to be the "natural" word size of the machine running the program. Nowadays, this is most often 32 bits.
Because we have a limited number of bits to represent an integer, there is a limit to the "biggest" and "smallest" number we can represent in an integer. For 32 bits this is:

    [ +2^31 - 1 .. -2^31 ]
For a short this would be:
    [ +2^15 - 1 .. -2^15 ]
This is called the range of numbers we can represent.
We can extend the range of an int a little bit if we are only interested in positive numbers by using a subtype qualifier in declaring an integer. There are two subtype qualifiers for integers:
    signed int (the default)
    unsigned int
The range of unsigned int is

    [ +2^32 - 1 .. 0 ]
We can determine the range of values for data types on a particular machine by looking at <limits.h>.
We can also determine the amount of storage used for a value in a program using the sizeof operator:
sizeof <expression>
which evaluates to the number of bytes used to store <expression>.
For example, on wiliki:
sizeof (int)
evaluates to 4, as does
sizeof (long)
but
sizeof (short)
evaluates to 2.
What does
sizeof (char)
evaluate to? We can also use an arbitrary expression with the sizeof operator:
    #include <stdio.h>

    int main(void)
    {
        int x, y;
        char z;
        printf("size of x   is %zu\n", sizeof x);
        printf("size of y   is %zu\n", sizeof y);
        printf("size of z   is %zu\n", sizeof z);
        /* Parentheses matter here: "sizeof x + y" would parse
           as (sizeof x) + y, not the size of the sum. */
        printf("size of x+y is %zu\n", sizeof (x + y));
        printf("size of x+z is %zu\n", sizeof (x + z));
        return 0;
    }