Double precision operations

Hello,

I'm using a Teensy 4.1 for a project. The numbers I get are not exactly the same as what I expected. The code is below:

void setup() {
  Serial.begin(115200);
  delay(3000);

  double W[3]  = {0.0, 0.0, 0.0};
  double WN[3] = {0.0, 0.0, 0.0};
  double DW[8][3] = {{1.656, 4.0, 11.0}, {2.4, 5.0, 4.0}, {3.54, 7.0, 8.0}, {4.64, 44.0, 8.0},
                     {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0}};
  double DATAX[4] = {1.656, 2.4, 3.54, 4.64};
  double DATAY[4] = {4.0, 5.0, 7.0, 44.0};
  double DATAZ[4] = {11.0, 4.0, 8.0, 8.0};

  // accumulate W over the four samples; each line uses the WN values
  // computed in the lines just above it
  for (int i = 0; i < 4; i++) {
    WN[2] = W[2] + (WN[0] * DATAY[i]) - (WN[1] * DATAX[i]) + DW[i][2];
    WN[1] = W[1] + (WN[2] * DATAX[i]) - (WN[0] * DATAZ[i]) + DW[i][1];
    WN[0] = W[0] + (WN[1] * DATAZ[i]) - (WN[2] * DATAY[i]) + DW[i][0];
    W[0] = WN[0]; W[1] = WN[1]; W[2] = WN[2];
  }

  Serial.println(W[0], 12);
  Serial.println(W[1], 12);
  Serial.println(W[2], 12);
}

void loop() {}

My output is:
-17882300.276224978268
9608005.109401557594
2154463.437236570287

Whereas the expected output is:
-17882300.276224955916
9608005.109401546419
2154463.437236567959

How can I resolve this issue?

Help us help you.

From where are you getting these values?
Why are these the "expected" values?

I'm rewriting code that was originally written in Python.
The expected output is the result I got from Visual Studio, so I'm trying to get the same results.

On the 32-bit ESP32, I am getting both correct accuracy and precision with double-type floating point numbers. For example:

void setup() 
{
  Serial.begin(9600);
  double x1 = 1234.567812345678;
  double x2 = 1234.567812345678;
  Serial.print(x1+x2, 12);

}

void loop() {}

Output:

2469.135624691356

There is no issue.
Double has 15-16 significant digits. You have printed 7 or 8 digits before the decimal point plus 12 after it, i.e. 19-20 digits...

This works because you have only 4 digits before the '.'.

Please explain that. It's not called floating point for no reason.

a7
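
To put a number on the "15-16 significant digits" point above: the two W[0] values from post #1 are only a few ULPs (units in the last place) apart, i.e. they already agree to roughly 16 significant digits. A minimal sketch of that comparison (my own illustration, assuming IEEE-754 64-bit doubles):

#include <cstdio>
#include <cmath>

int main() {
  double teensy  = -17882300.276224978268;  // Teensy 4.1 result from post #1
  double desktop = -17882300.276224955916;  // "expected" Visual Studio result

  // spacing between adjacent doubles at this magnitude
  double ulp = std::nextafter(std::fabs(desktop), INFINITY) - std::fabs(desktop);

  std::printf("one ULP here : %.3g\n", ulp);
  std::printf("difference   : %.3g\n", std::fabs(teensy - desktop));
  std::printf("in ULPs      : %.1f\n", std::fabs(teensy - desktop) / ulp);
  return 0;
}

At a magnitude of about 1.8e7 one ULP is roughly 4e-9, so a difference of about 2e-8 is only a handful of ULPs of accumulated rounding.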

Does someone know if Python handles float/double differently than C++?

This made for a good read:

An Essential Guide to Python float Type By Examples.

Why is that result preferred? Do you have some reason to believe it is correct?

See post #6.


Python uses 64-bit floats, so that is the same as double. But it does not necessarily perform the same calculations... the compiler might optimize away some of the intermediate calculations.
Python can also calculate with much higher accuracy using special libraries.

Agree with that. I think both results are good in view of the accuracy that can be reached with 64-bit floats.

This thread reminds me of the discussion in my numerical analysis class between

"Floating point is so easy anyone can do real number computation."
and
"Floating point is so hard to get right only experts should use it."

The answer is somewhere in between. One should know enough to realize when you are in the dark corners and need to consider what you are doing.

Presumably operations using standard double or float representation should always result in the same values regardless of the machine.

One of my favorites is the multiply-add instruction available on many processors. It multiplies two numbers and then adds a third as one operation, without rounding the intermediate result. In some cases you get different results than if you do the operations separately, due to intermediate rounding. This can be confusing to someone making the mistake of doing an equality compare (a bad idea for floating point): the results will be unequal even with the same inputs. It is particularly confusing if you don't know about the multiply-add and the compiler uses it in one code path and not the other.
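
A small sketch of that effect (my own example, not from the thread; it assumes std::fma is evaluated as a true fused multiply-add with a single rounding, which is the case on hardware with an FMA instruction such as the Teensy 4.1's Cortex-M7):

#include <cstdio>
#include <cmath>

int main() {
  // chosen so that a*b = 1 - 2^-54, which rounds to exactly 1.0 in double
  double a = 1.0 + 1.0 / 134217728.0;   // 1 + 2^-27 (exactly representable)
  double b = 1.0 - 1.0 / 134217728.0;   // 1 - 2^-27 (exactly representable)
  double c = -1.0;

  double prod     = a * b;               // rounded to double here...
  double separate = prod + c;            // ...so this ends up exactly 0.0
  double fused    = std::fma(a, b, c);   // one rounding at the end: -2^-54

  std::printf("separate: %.17g\n", separate);
  std::printf("fused   : %.17g\n", fused);
  std::printf("equal?    %s\n", separate == fused ? "yes" : "no");
  return 0;
}

Same inputs, same mathematical expression, two different answers and an equality compare that fails, purely because of where the intermediate rounding happens.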

Guess not. It should give the same results within the defined precision. If you go beyond the precision of your float type, you go into the area of undefined behaviour...

What do you mean by "go beyond the precision of your float type"?

Both types, float and double, have a defined precision: a defined number of mantissa and exponent bits. The calculation is only valid within that precision, and if two machines use the same standard definitions for float/double, they should always produce the same value, limited to the precision of the type.
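
For reference, that defined precision can be printed directly. A small sketch (assuming the usual IEEE-754 binary32/binary64 types, which is what the Teensy 4.1 toolchain and desktop compilers use):

#include <cstdio>
#include <cfloat>

int main() {
  // mantissa width and guaranteed decimal digits for each type
  std::printf("float : %2d mantissa bits, %d guaranteed decimal digits\n", FLT_MANT_DIG, FLT_DIG);
  std::printf("double: %2d mantissa bits, %d guaranteed decimal digits\n", DBL_MANT_DIG, DBL_DIG);
  // relative spacing of doubles near 1.0 (machine epsilon)
  std::printf("double epsilon: %g\n", DBL_EPSILON);
  return 0;
}

On IEEE-754 hardware this reports 24 bits / 6 digits for float and 53 bits / 15 digits for double, which is where the "15-16 significant digits" rule of thumb comes from.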

Not true for printed decimal output, which is what this thread is about.

The interpretation of the binary floating point value as a human-readable decimal number is done by the print function, which can differ substantially between operating systems and libraries.
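
As a sketch of that (my own illustration; the value is one of the results from post #1): two doubles that are genuinely different in binary can print identically unless you ask the conversion for about 17 significant digits:

#include <cstdio>
#include <cmath>

int main() {
  double x = 2154463.437236570287;         // one of the results from this thread
  double y = std::nextafter(x, INFINITY);  // the very next representable double

  std::printf("%.6f  vs  %.6f\n", x, y);   // identical at 6 decimal places
  std::printf("%.17g vs %.17g\n", x, y);   // 17 significant digits tell them apart
  return 0;
}

How many digits a given print routine produces, and how it rounds the last one, is up to that routine, which is why the same bit pattern can look different on different systems.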

Again, see post #6.


Are you saying different print routines round differently?