We work with numbers using base 10, also called the decimal system. When we have a number such as
245
it is interpreted as:
2 * 10^2 + 4 * 10^1 + 5 * 10^0
With a base of 10 the digits range from 0 to 9 (0 to base - 1). Reading from right to left, each digit is multiplied by the base raised to a power, and the power increases by 1 at each position.
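This positional expansion can be written out directly. A minimal sketch in C#, assuming the lines are placed inside a Main method like the programs later in this section:
int value = 2 * 100 + 4 * 10 + 5 * 1;    // 2 * 10^2 + 4 * 10^1 + 5 * 10^0
Console.WriteLine(value);                // prints 245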
Computers work with base 2, also called the binary system. The digits are 0 and 1. The number
101
is 5 in the decimal system (1 * 2^2 + 0 * 2^1 + 1 * 2^0).
If we have 3 binary digits (bits), the unsigned numbers look like this:
000 -- 0
001 -- 1
010 -- 2
011 -- 3
100 -- 4
101 -- 5
110 -- 6
111 -- 7
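As a check on the table above, here is a minimal sketch that converts between binary text and decimal values using the .NET Convert class (assuming the lines run inside a Main method):
int n = Convert.ToInt32("101", 2);          // parse "101" as a base-2 number: 5
Console.WriteLine(n);                       // 5
Console.WriteLine(Convert.ToString(5, 2));  // write 5 back out in base 2: "101"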
Two's complement.
Negative binary numbers are represented in two's complement. In this scheme the leftmost bit is the sign bit: if it is 1, the number is negative. To find the magnitude of a negative number, invert all the bits and add 1.
Ex:
101
Inverting the bits gives "010", and adding 1 gives "011", which is 3, so the number "101" is actually -3. If we have 3 bits, the signed numbers look as below:
000 0
001 1
010 2
011 3
100 -4
101 -3
110 -2
111 -1
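The same rule applies at any width. A minimal sketch using 8-bit types (since C# has no 3-bit type; assumed to run inside a Main method) shows that the bit pattern 1111 1101 reads as 253 unsigned but as -3 in two's complement:
byte raw = 0xFD;                           // bit pattern 1111 1101, 253 when read as unsigned
sbyte signedValue = unchecked((sbyte)raw); // the same bits read as two's complement
Console.WriteLine(signedValue);            // -3 : invert 1111 1101 -> 0000 0010, add 1 -> 0000 0011 = 3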
Base 16 is also called the hexadecimal system. The digits are 0-9 and A-F, with A representing 10 and F representing 15.
Examples:
F0 - 240 (15 * 16 + 0)
AF - 175 (10 * 16 + 15)
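These values can be verified with a minimal C# sketch (assuming the lines run inside a Main method); hexadecimal literals in source code use the 0x prefix:
Console.WriteLine(Convert.ToInt32("F0", 16));  // 240
Console.WriteLine(Convert.ToInt32("AF", 16));  // 175
Console.WriteLine(0xAF);                       // 175 : a hexadecimal literal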
The computer stores numbers in binary. The smallest unit is the bit, which can be 0 or 1.
1 byte = 8 bits
1 KB = 1,024 bytes
1 MB = 1,024 KB
1 GB = 1,024 MB
We use data types to define our variables. Value types are primitive types such as int and char; reference types correspond to class types. The built-in value types are listed below:
Type      Size     Description                                         Range                                                       Literal suffix
byte      8-bit    unsigned integer                                    0 to 255
sbyte     8-bit    signed integer                                      -128 to 127
short     16-bit   signed integer                                      -32,768 to 32,767
ushort    16-bit   unsigned integer                                    0 to 65,535
int       32-bit   signed integer                                      -2,147,483,648 to 2,147,483,647
uint      32-bit   unsigned integer                                    0 to 4,294,967,295                                          u
long      64-bit   signed integer                                      -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807    l
ulong     64-bit   unsigned integer                                    0 to 18,446,744,073,709,551,615                            ul
float     32-bit   single-precision floating point                     -3.402823e38 to 3.402823e38                                f
double    64-bit   double-precision floating point                     -1.79769313486232e308 to 1.79769313486232e308              d
decimal   128-bit  decimal type for financial and monetary calculations  ±1.0 x 10^-28 to ±7.9 x 10^28                            m
char      16-bit   single Unicode character                            any valid character, e.g. 'a', '*', '\x0058' (hex), '\u0058' (Unicode)
bool      8-bit    logical true/false value                            true or false
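The ranges in the table are exposed through each type's MinValue and MaxValue fields, and the suffix column shows the letter appended to a literal to give it that type. A minimal sketch (assuming it runs inside a Main method):
Console.WriteLine(int.MinValue);   // -2147483648
Console.WriteLine(int.MaxValue);   // 2147483647
long big = 10000000000L;           // L suffix marks a long literal
float ratio = 3.14f;               // f suffix marks a float literal
decimal price = 19.99m;            // m suffix marks a decimal literal
Console.WriteLine(big + " " + ratio + " " + price);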
Characters in the computer are stored as numbers; everything in the computer is stored as numbers. For the English language the mapping is defined by the ASCII table.
From the ASCII table we can see that all English characters fall in the range 0-127, so 1 byte is sufficient to accommodate all of them. However, many languages, such as the Asian languages, have thousands of characters. The Unicode mapping can accommodate all of these languages. A char takes 2 bytes, and the first 128 characters are mapped the same way in both ASCII and Unicode. Below is a sample program that shows how the character type can be used.
File: type1.cs
using System;
class datatype1
{
    static void Main(string[] args)
    {
        char ch;
        ch = 'A';
        Console.WriteLine(ch);        // prints the character: A
        Console.WriteLine((int)ch);   // cast to int prints the character code: 65
        ch = (char)66;                // 66 is the code for 'B'
        Console.WriteLine(ch);        // prints B
        Console.ReadLine();
    }
}
Output:
C:\CSharp\data_types>type1.exe
A
65
B
In the above program we see the syntax "(int)". This is known as a cast: it changes, or forces, the type to something else.
Console.WriteLine( (int)ch );
Normally, printing the "ch" variable would only print the character. The char type is actually stored as an integer, and we want "WriteLine" to print it as an integer (65 instead of 'A'). There are implicit casts and explicit casts; the above example demonstrates an explicit cast.
The language allows a variable or a constant of one type to be assigned to a variable of another type if the first type fits into the second. A byte variable fits into an integer variable.
File: "type2.cs"
using System;
class datatype2
{
    static void Main(string[] args)
    {
        byte b1 = 8;
        int i1;
        i1 = b1;    // fine: a byte always fits in an int (implicit conversion)
        b1 = i1;    // compiler error: an int does not always fit in a byte
        Console.ReadLine();
    }
}
C:\CSharp\data_types>csc type2.cs
Microsoft (R) Visual C# 2008 Compiler version 3.5.30729.9034
for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
type2.cs(14,23): error CS0266: Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you
missing a cast?)
The line "i1=b1" is fine because an integer's range is -2,147,483,648 to 2,147,483,647 while the range of a byte is only 0- 255 . The byte is converted to an integer via an implicit case. However the line :
b1 = i1
gives a compiler error because i1's range is higher than a byte. We can perform an explicit cast to compile the code.
File: "type3.cs"
using System;
class datatype3
{
    static void Main(string[] args)
    {
        byte b1 = 8;
        int i1;
        i1 = b1;            // implicit conversion: byte fits in int
        b1 = (byte)i1;      // explicit cast: we accept that the value may not fit
        Console.ReadLine();
    }
}
If we have an expression, C# examines the types in the expression and determines the highest-ranking type; the other operands are converted to that type before the operation is performed.
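A minimal sketch of this promotion (assuming the lines run inside a Main method); GetType() shows the type of each result:
int i = 5;
long l = 10L;
double d = 2.5;
Console.WriteLine((i + l).GetType());   // System.Int64  : int is promoted to long
Console.WriteLine((l + d).GetType());   // System.Double : long is promoted to double
Console.WriteLine((i / 2).GetType());   // System.Int32  : both operands are int, so integer division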