So I understand that floats are accurate up to 6-7 decimal places, but if I want to have a function return a value with only up to 2 decimal places would there be a straightforward way to do so?
Thanks
One (convoluted) way is to multiply by 100 (or 10^x, where x is the number of decimal places you need), cast it to an int or long to drop the fractional part, then assign it back to a float and divide by 10^x.
e.g., using 10.987: multiplying by 100 gives 1098.7, casting to an int truncates that to 1098, and dividing by 100 gives back 10.98.
If you only need the two decimal places when the number is printed, you can specify the precision in the print statement instead; the stored value stays at full precision.