IEEE single-precision floating point is stored in a 32-bit word: 1 sign bit, 8 exponent bits, and 23 fraction bits. The value is computed as:

float = (-1)^sign * (1 + fraction) * 2^(exponent - bias)

where the single-precision bias is 127, which lets the exponent field be stored as an unsigned number even though the effective exponent can be negative.

Since the largest number 23 bits can represent is 8388607 (and with the implicit leading 1 bit the significand effectively has 24 bits, up to 16777215), you can see the maximum precision is on the order of 6 or 7 decimal digits.

To answer your question specifically, 3.1415927.
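As a quick sketch (in Python, since no language was specified), you can pull the three fields out of the 32-bit pattern and plug them into the formula above to see how 3.1415927 falls out:

```python
import math
import struct

# Round pi to the nearest single-precision value and grab its 32-bit pattern
bits = struct.unpack('>I', struct.pack('>f', math.pi))[0]

sign = bits >> 31              # 1 bit
exponent = (bits >> 23) & 0xFF # 8 bits, biased by 127
fraction = bits & 0x7FFFFF     # 23 bits

# Reconstruct: (-1)^sign * (1 + fraction/2^23) * 2^(exponent - 127)
value = (-1) ** sign * (1 + fraction / 2 ** 23) * 2 ** (exponent - 127)
print(f"{value:.7f}")  # 3.1415927
```

For pi the stored pattern is 0x40490FDB: sign 0, biased exponent 128 (i.e. 2^1), and fraction 0x490FDB.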

You can play with conversions here.