Precision specifiers are most common with floating point numbers. We use them, as you might expect from the name, to indicate how many digits of precision we want to print. If no precision is given, the C library uses a default: six digits after the decimal point for %f. A common example is printing floating point numbers that represent monetary amounts, where we typically want just two digits after the decimal point. The syntax of the precision specifier is:
printf("%.numberf",arg1);
You can see from the syntax that a period (.) must precede the precision specifier (number). The period is really a syntactic device that lets printf recognize a precision specifier even when no field width is present, but the choice of a period also reminds the programmer that it represents the number of places after the decimal point in a floating point number. Just as with the field width specifier, the programmer may use either a number or an asterisk as the precision specifier. The required syntax is:
printf("%.*f", precision, val);
The asterisk indicates that the actual value of the precision specifier will be supplied as an additional argument (precision) to the printf call. For example,
#include <stdio.h>
int main() {
    printf("%.2f\n", 3.676);
    return 0;
}
will print
3.68
Note: printf rounds the value when the precision specifier calls for fewer digits than the value contains. Consider another example,
#include <stdio.h>
int main() {
    printf("%.*f\n", 4, 3.676);
    return 0;
}
Here, the asterisk stands for the precision, and the extra argument 4 supplies its value, so printf prints four digits after the decimal point. The printf() statement will print
3.6760
Note: Precision specifiers have no effect when used with the %c format specifier. They do have an effect when printing integers or strings, however.
When we use a precision specifier with integer data, one of two things may happen. If the precision specifier is smaller than the number of digits in the value, printf ignores the precision specifier. For example,
#include <stdio.h>
int main() {
    printf("%.2d\n", 205);
    return 0;
}
will print
205
The entire number, 205, is printed because the precision specifier is less than the number of digits in the value. Now consider another example, in which the precision is greater than the number of digits in the value.
#include <stdio.h>
int main() {
    printf("%.5d\n", 205);
    return 0;
}
will cause printf to "pad" the number with leading zeros and print:
00205
With string data, the precision specifier actually dictates the maximum field width. In other words, a programmer can use a precision specifier to force a string to occupy at most a given number of columns. For instance,
#include <stdio.h>
int main() {
    printf("%.5s\n", "Hello World");
    return 0;
}
will print only "Hello" (the first five characters). It is fairly rare to use precision specifiers in this fashion, but one situation in which it can be useful is when you need to print a table where one column is a string that may exceed the field width. In this case, you may wish to truncate the long string rather than allow it to destroy the justification of the columns. Generally, when this happens, you will use both a field width specifier and a precision specifier, thus defining both the minimum and the maximum number of columns that the string may occupy. For example,
#include <stdio.h>
int main() {
    char name[] = "Scoot Styrish";
    printf("%10.5s\n", name);
    return 0;
}
will force printf to use exactly 10 columns to display the value of name: the precision prints at most the first five characters of the string, and the field width then pads the result with leading blanks to fill ten columns. (In the output below, each ~ stands for a blank space.)
~~~~~Scoot