SUMMARY
In Microsoft C, the output produced by the printf() format specifier "g"
does not exactly match the output produced by either the "e" or the "f"
format specifier. The documentation states that "g" uses either the "f" or
the "e" format, whichever is more compact. This is true of the overall
layout, but the two formats differ in some details.
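One such difference is that the "g" format removes trailing zeros from the
result, while "f" and "e" do not; this behavior is part of the ANSI
definition of "g" rather than being specific to Microsoft C. The fragment
below is a minimal sketch of this difference; it assumes any
ANSI-conforming compiler, and the exponent in the "e" output is shown with
three digits as the Microsoft C run-time library prints it (other run-time
libraries may print two):

   #include <stdio.h>

   int main(void)
   {
       /* "g" removes trailing zeros; "f" and "e" keep them. */
       printf("%.4g\n", 1.5);   /* prints 1.5         */
       printf("%.4f\n", 1.5);   /* prints 1.5000      */
       printf("%.4e\n", 1.5);   /* prints 1.5000e+000 */
       return 0;
   }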
The precision value is also interpreted differently by the "g" format than
by the "f" format, a difference that the documentation does explain: for
"f", the precision specifies the number of digits after the decimal point,
while for "g" it specifies the maximum number of significant digits
printed.
MORE INFORMATION
The following example demonstrates the difference described in the SUMMARY:
Sample Code
#include <stdio.h>

int main(void)
{
    double x = 2.0 / 3.0;        /* 0.666... */
    double y;

    y = 6.0 + x;
    printf("%.4g\n", y);
    printf("%.4f\n", y);
    printf("%.4e\n\n", y);

    y = 66.0 + x;
    printf("%.4g\n", y);
    printf("%.4f\n", y);
    printf("%.4e\n\n", y);

    y = 666.0 + x;
    printf("%.4g\n", y);
    printf("%.4f\n", y);
    printf("%.4e\n\n", y);

    y = 6666.0 + x;
    printf("%.4g\n", y);
    printf("%.4f\n", y);
    printf("%.4e\n\n", y);

    y = 66666.0 + x;
    printf("%.4g\n", y);         /* "g" switches to "e" notation here */
    printf("%.4f\n", y);
    printf("%.4e\n\n", y);

    return 0;
}
The program produces the following output, which is correct:
6.667
6.6667
6.6667e+000

66.67
66.6667
6.6667e+001

666.7
666.6667
6.6667e+002

6667
6666.6667
6.6667e+003

6.667e+004
66666.6667
6.6667e+004
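The switch to "e" notation in the last group follows the ANSI rule for
"g", which Microsoft C implements: "e" style is used when the converted
exponent is less than -4 or greater than or equal to the precision;
otherwise "f" style is used. The fragment below is a minimal sketch of
that rule; it assumes any ANSI-conforming compiler, so the exponent may
print with two digits rather than the three shown by the Microsoft C
run-time library:

   #include <stdio.h>

   int main(void)
   {
       /* With precision 4, "g" uses "e" style only when the
          exponent is < -4 or >= 4. */
       printf("%.4g\n", 6666.6667);   /* exponent 3:  prints 6667       */
       printf("%.4g\n", 66666.6667);  /* exponent 4:  prints 6.667e+004 */
       printf("%.4g\n", 0.0001);      /* exponent -4: prints 0.0001     */
       printf("%.4g\n", 0.00001);     /* exponent -5: prints 1e-005     */
       return 0;
   }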