PROGRAMMING: Typecasting

What is typecasting?

Typecasting is a way of temporarily treating a variable's value as a different type. If you have a variable declared as a char, you can treat its value as an int, a double, or even a long for the duration of an expression; the variable itself is unchanged. To typecast a variable, use the following syntax:

([variable type]) variable

So if you wanted to typecast a char variable called myChar to an int, you would use the following syntax:

(int) myChar

You will typically see typecasting used when doing arithmetic operations, like so:

uint16_t result = (uint16_t) myChar + 255;

(uint16_t just means "unsigned 16-bit integer"; it comes from the standard header stdint.h. Don't be scared by this notation!)
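
To make this concrete, here is a minimal compilable sketch. The variable name and the value 'A' are just placeholders for illustration:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    char myChar = 'A';                          // 'A' is stored as the character code 65
    uint16_t result = (uint16_t) myChar + 255;  // treat myChar's value as a 16-bit unsigned integer
    printf("%u\n", (unsigned) result);          // prints 320 (65 + 255)
    return 0;
}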

Why would you use typecasting?

To understand why typecasting is necessary, we need to understand what is going on when we use arithmetic operations in C. Let's say we had the following code:

uint16_t c;
uint8_t a = 150, b = 235;
c = a + b;

Notice that a and b are unsigned 8-bit integers. This means the lowest value they can hold is 0 and the highest is 255. Because a and b are both uint8_t, the computer sets aside only enough space to hold a uint8_t (an unsigned 8-bit integer) when it does the addition. So, let's do the addition!

150 + 235 = 129

"... HUH?!"

Yes, that's correct. We humans might think 150 + 235 = 385, but we're not limited to a fixed number of bits the way the computer is. Perhaps it will make more sense if we do the arithmetic in binary instead.

150 = 1001 0110
235 = 1110 1011

If we (the humans, not the computer) add these together, we get...

  1001 0110
+ 1110 1011
-----------
1 1000 0001

And indeed, 1 1000 0001 is the same as 385. So what went wrong? Remember that the computer only allocated eight bits of memory to store this result because both data types are eight-bit numbers. As such, only the least significant eight bits are stored in the result, hence 1000 0001 = 129.

Now that the computer has done the computation, it proceeds to store the value into c. So even though c is a sixteen-bit number, it is storing the result of an eight-bit computation.
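
If you'd like to watch this happen, here is a small sketch you can compile. (One caveat: depending on the platform, the intermediate sum may be computed in a wider type, so this sketch stores the sum into a uint8_t to make the eight-bit behavior explicit.)

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 150, b = 235;
    uint8_t sum = a + b;             // only the least significant eight bits survive
    printf("%u\n", (unsigned) sum);  // prints 129, not 385
    return 0;
}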

Well, that's just unfortunate. How do we get around it?

Typecasting!

Let's modify the code to incorporate typecasting.

uint16_t c;
uint8_t a = 150, b = 235;
c = (uint16_t) a + b;

Now, a is typecast to a uint16_t. When the computer allocates memory for a + b, it sees that a is a uint16_t and b is a uint8_t. Given two operands of different sizes, it allocates enough space for the larger of the two types, which in this case is uint16_t, a sixteen-bit number. So the result of a + b will be stored as a sixteen-bit number. With more bits to work with, 150 + 235 will be properly computed as 385! Hooray!
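
Here is the same sketch as before, updated with the typecast, so you can verify the fix:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t c;
    uint8_t a = 150, b = 235;
    c = (uint16_t) a + b;           // a is widened to sixteen bits before the addition
    printf("%u\n", (unsigned) c);   // prints 385
    return 0;
}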

Here's another example. What do you think the result will be? How can you use typecasting to resolve the issue? Try working it out before peeking at the answer below the code.

unsigned a = 3, b = 5;
double c;
c = a / b;
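
Spoiler: a / b is an integer division, so the fractional part is discarded before the result ever reaches c. Dividing 3 by 5 this way yields 0, and c ends up holding 0.0. Typecasting one operand to double forces a floating-point division. One way to fix it:

#include <stdio.h>

int main(void) {
    unsigned a = 3, b = 5;
    double c;
    c = (double) a / b;    // a becomes 3.0, so the division happens in floating point
    printf("%f\n", c);     // prints 0.600000
    return 0;
}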