uint32_t var2=pow(2,30)+1;
uint32_t var3=123456789.0;
float var4=123456789.0123;
float var5=pow(2,30)+1.0;
unsigned long var6=pow(2,30)+1;
void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);
  while (!Serial);
  Serial.println(var2);
  Serial.println(var3);
  Serial.println(var4);
  Serial.println(var5);
  Serial.println(var6);
}

void loop() {
  // put your main code here, to run repeatedly:
}
The above sketch does not give the large numbers that I would expect:
11:56:16.363 -> 1073741824
11:56:16.363 -> 123456792
11:56:16.363 -> 123456792.00
11:56:16.363 -> 1073741824.00
11:56:16.363 -> 1073741824
The right hand side of that expression is evaluated as a 32 bit float, so any digits appearing to the right of "1073741" should be considered spurious.
pow(2,30) yields a result with more significant digits than a float can hold.
So the added 1 falls below the float's precision and gets lost, and then that float result is stuffed into an unsigned long.
With mixed float/integer math like that, ballpark precision is all you should expect.
I'm talking about var2, which the OP assigned in this statement:
uint32_t var2=pow(2,30)+1;
It has not lost any precision: the result, 1073741824, is correct to the last digit, which is more digits than a float should be able to hold.
Why is it off by only 1? If you cast the pow() result to long before adding 1, you get the correct, exact result.
uint32_t varA = long(pow(2,30))+1;
uint32_t varB = pow(2,31);
uint32_t varC = pow(2,32);
void setup() {
  Serial.begin(115200);
  Serial.print("uint32_t varA = long(pow(2,30)): ");
  Serial.println(varA);
  Serial.print("uint32_t varB = pow(2,31): ");
  Serial.println(varB);
  Serial.print("uint32_t varC = pow(2,32): ");
  Serial.println(varC);
}

void loop() {
  // put your main code here, to run repeatedly:
}
And here are the results:
uint32_t varA = long(pow(2,30)): 1073741825 //Exact
uint32_t varB = pow(2,31): 2147483648 //Exact
uint32_t varC = pow(2,32): 4294967295 //Too small by 1 only
A uint32_t can represent a maximum of 2^32 - 1 = 4,294,967,295. pow(2,32) is one past that, so the float-to-integer conversion is out of range, and trying to get a correct output when exceeding the limits of the variable types is going to cause unpredictable results (here it saturated to the maximum).
Not sure if the compiler is doing the conversions at compile time or run time.
I don't consider this a waste of time. There are several gotchas in this that may help others.
Carrying out the test proposed above with one modification, long(pow(2,30))+2, also gives a precise answer.
The Arduino reference page for pow() says the function returns a double. But the reference page for double says that on the Uno, double is the same as float (32 bits).
Here are some more examples: (pow(3,15))+1; and (pow(5,10))+1;
The results are precise. And the addition does not get lost in the large number.
The explanation offered above, that pow() results in approximate answers, is not complete.
A more complete answer is:
1. Raising a whole number to a whole-number power will give a precise result, including additions to the result of pow(), as long as the result does not exceed 16,777,216 (2^24, the largest value below which a float can represent every whole number exactly).
2. Using the pow() function, casting the result to long will allow addition, etc. as long as condition 1 is met.
#define base 3
#define exponent 15

volatile int b = base; // volatile forces the compiler to do these calcs at run time rather than at compile time.
volatile int e = exponent;

void setup() {
  Serial.begin(115200);
  Serial.print("Compile-time: ");
  Serial.println(pow(base, exponent) + 1);
  Serial.print("Run-time: ");
  Serial.println(pow(b, e) + 1);
}

void loop() {
}
1073741824 + 1 = 1073741825 on my 32-bit integer calculator, but the sketch got 1073741824.
Change pow(2,30)+1 to plain pow(2,30) and see if that comes out right, because powers of 2 should fit exactly in IEEE binary-encoded floats, so why would the interpretation be off?
Yes, it does give the correct result for pow(2,30).
But, as it turns out, the correct result was due to the compiler working out the answer, and not the AVR doing its thing at run time. @westfw posted a nice sketch above which clearly shows this.
The compile time version of 2^30 gives 1073741824. If I cast the result to long and then add 1, the result is 1073741825.
Interestingly, the run-time version gives 1073740416, even though, as you say, powers of 2 should fit.
The mantissa and exponent are stored as binary. The extra digits that IEEE floats crank out should be closer to powers of 2 than powers of ten. That is all.
I am no friend of floats, though 64-bit floats give acceptable results except in accounting, where one cent over on a bill or paycheck is all it takes to end the world. I grew up doing math on paper, so for me it's no problem to use integers.