Why does lcd/Serial.print(float x, 10) show an incorrect value?

I have:

float x = 0.1234567890;
float y = 0.0123456789;
float z = x + y;        // expected z = 0.1358024679
Serial.print(z, 10);   // shows: z = 0.1358024835 instead of 0.1358024679
lcd.print(z, 10);       // shows: z = 0.1358024835 instead of 0.1358024679

1. The Serial Monitor shows: 0.1358024835 instead of 0.1358024679

2. The LCD Monitor shows: 0.1358024835 instead of 0.1358024679

I wish to achieve both precision and accuracy, but I see only precision. The result is inaccurate by 1.148 × 10^-5 %.

Is it acceptable in scientific computation?

Also, who is making the inaccurate calculation? Is it the -
(a) lcd/Serial.print(); function or
(b) processor or
(c) human operator or
(d) all three above?

Float values are 32 bit single precision on Arduino, and accurate only to 6-7 decimal places. Type "double" is treated the same way as "float".


So, this is a known feature for all Arduino Boards (UNO, MEGA, and DUE)!

Thanks for the quick reply.

this is a known feature for all Arduino Boards (UNO, MEGA, and DUE)!

I think Due might have 64-bit "double" types...

The imprecise nature of your results is a "known limitation" of floating point number formats in general, and the ~7-digit limitation of 32-bit numbers is fundamental math.

You might be interested to know that one of the reasons BCD existed was to support early floating point formats. Business types were upset that important numbers like $0.01 cannot be represented by binary fractions or floating point formats.
AFAIK, BCD-based floating point formats disappeared with advances in hardware and algorithm complexity, but there are still "decimal floating point" formats based on "densely packed decimal" that give you exact versions of decimal numbers.

GolamMostafa:
So, this is a known feature for all Arduino Boards (UNO, MEGA, and DUE)!

Thanks for the quick reply.

It's a feature of binary and fractions, not specific to Arduino.
In decimal, fractions are written exactly only if they can be built from 1/10, 1/100, 1/1000, and so on.

In binary, they need to be built from 1/2, 1/4, 1/8, 1/16, and so on (0.5, 0.25, 0.125, 0.0625) to be described exactly.
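
A minimal sketch (the values 0.625 and 0.1 are just illustrative picks of mine, assuming a board whose float is IEEE-754 binary32) showing the difference when printed with extra digits:

void setup() {
  Serial.begin(9600);

  float a = 0.625;         // 0.625 = 1/2 + 1/8, so it has an exact binary form
  float b = 0.1;           // 0.1 has no finite sum of 1/2, 1/4, 1/8, ..., so it gets rounded

  Serial.println(a, 10);   // prints 0.6250000000 exactly
  Serial.println(b, 10);   // prints a value slightly above 0.1, not 0.1000000000
}

void loop() {
}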

I remember in the old days it was a joke to use the Calculator in Windows to show someone the difference between Windows 3.11 and Windows 3.1:
3.11 - 3.1 = 0.00

@Gabriel_swe

When Post #1 said - Float values are 32 bit single precision on Arduino... - it meant that numbers with decimal points (called floating point numbers) are internally represented in a 32-bit wide storage space using the following format (Fig-1, 2), known as the IEEE-754 (binary32) format.

Figure-1: IEEE-754 format for the representation of floating point numbers

Figure-2: Real value of the object represented by Fig-1

Because the storage spaces are inside the MCUs, these MCUs are mounted on electronics boards, and these boards are traded under registered trade marks (Arduino UNO, Arduino MEGA, Arduino DUE), I was referring to the whole platform (Fig-1, Fig-2, lcd.print();, MCU, human operator, IDE) as Arduino.

The inaccurate result of the OP (the result is precise: I wanted 10 digits after the decimal point and I got them) has been seen by Post #1 and Post #2 as a limitation of the system, which I accepted with surprise (!), as I will have to take more time to understand these limitations through UNO/DUE based experiments.

Many school-going students come to this Forum to have their problems solved, or at least to get clues. Look at the other side: there could be many teachers engaged with this Forum (because they offer academic courses on the ATmega using the Arduino UNO) who want their conceptual understanding scrutinized before it is delivered to their pupils.

Figure-3: One section (out-of-140) of students with ATmega328P based Arduino UNO Kits

The obvious question is: why does the accuracy become distorted within 10-digit precision when the IEEE-754 standard has kept the precision open to 23 digits after the decimal point? The distortion should appear much later. So, where is the problem - the lcd.print(); function? The Arduino UNO (the compiler) supports the 32-bit data type of the IEEE-754 standard very well. We need to do some manual calculations based on Fig-1, 2 to look for possible sources of inaccuracy.

//----------------------------------------------------------------------------------------------------------------

1. Given variables are:
(a) float x = 0.1234567890;

(b) float y = 0.0123456789;

(c) float z = x + y
    = 0.1358024679 (manual addition)
    = 0.1358024835 (produced by the lcd.print(z, 10); function)

2. Data Table

Variable                       binary32 by hand   binary32 by lcd.print(((long)&x/y/z), HEX)
(a) float x = 0.1234567890;    3D FC D6 E0        3D FC D6 EA
(b) float y = 0.0123456789;    3C 4A 45 80        3C 4A 45 88
(c) float z = x + y;           3E 0B 0F C8        3E 0B 0F CE

3. Observation
(a) In Step-1c, we find that lcd.print(z, 10); has produced a larger value in the fractional part.
(b) In Step-2c, we find that lcd.print(((long)&x/y/z), HEX); has also produced a larger value in the fractional part.
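
One way (a sketch of my own, using memcpy into a uint32_t rather than the (long)& cast shorthand, and Serial instead of the LCD) to reproduce the right-hand column of the table above:

#include <string.h>

void printBits(const char *name, float f) {
  uint32_t bits;
  memcpy(&bits, &f, sizeof bits);   // copy the raw IEEE-754 binary32 pattern
  Serial.print(name);
  Serial.print(" = 0x");
  Serial.println(bits, HEX);
}

void setup() {
  Serial.begin(9600);

  float x = 0.1234567890;
  float y = 0.0123456789;
  float z = x + y;

  printBits("x", x);   // per the table above: 3D FC D6 EA
  printBits("y", y);   // per the table above: 3C 4A 45 88
  printBits("z", z);   // per the table above: 3E 0B 0F CE
}

void loop() {
}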

Why does the accuracy become distorted within 10-digit precision when the IEEE-754 standard has kept the precision open to 23 digits after the decimal point?

Because you're confusing binary digits (aka bits) with decimal digits?

A single decimal digit needs 3.322 binary digits, so 23 / 3.322 = 6.9.

A single decimal digit needs 3.322 binary digits, so 23 / 3.322 = 6.9.

Aaa! Now you have come up with the real stuff. People keep saying 6-7 digit precision, but they do not explain it. I was wondering how it could be! There is no good reason to assume that one already knows everything.

"Just now and for the first time in my life, I have seen the Greek mechanism of 3.322-bit being embedded within a decimal digit. Please don't ask me if I have understood it or not - it's matter of [Statistical Thermodynamics!"](在线看片免费人成视频久网下载,国产三级视频在线观看视,亚洲视频在线观看,久久精品国产精品青草,日本精品一卡二卡三卡四卡视 to decimal.pdf)


Look at it this way. A single decimal digit can encode ten different values (0…9). In binary, each bit can only encode two values (0…1), so three bits can encode eight values (000…111, i.e. 0…7 in decimal) and four bits can encode 16 values. Therefore it takes between 3 and 4 bits to encode the same precision as a single decimal digit.

The 23-bit mantissa in 32-bit floating point encodes 2^23 possible values: 8,388,608. That's ALMOST but not quite enough to do seven decimal digits…

Mathematically, if you have D digits in base B, you can represent N = B^D different values. Applying a little algebra, D = log_B(N); that's where you get 3.322 bits per decimal digit: log2(10) ≈ 3.32.
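
Put the other way around: printing 10 correct digits after the point would need roughly 10 × 3.322 ≈ 33 bits of fraction, which is more than the 23 bits a binary32 float has.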

Now, the conversion errors show up because exact decimal fractions can't be expressed as exact binary fractions, just the way you can't express all fractions exactly in decimal: 1/3 is 0.33333333… in decimal, and it turns out that if you convert 1/10 (0.1) to binary, you get a repeating binary fraction, 0.000110011001100110011…
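
That repeating pattern is visible directly in the stored bits. A small sketch (my own illustration, assuming an IEEE-754 binary32 float) that dumps 0.1:

#include <string.h>

void setup() {
  Serial.begin(9600);

  float f = 0.1;
  uint32_t bits;
  memcpy(&bits, &f, sizeof bits);   // raw binary32 pattern of the stored value

  Serial.println(bits, HEX);        // 3DCCCCCD: the CC CC bytes are the repeating 1100 pattern, rounded up in the last bit
  Serial.println(f, 10);            // prints a value slightly above 0.1, because that rounded fraction is what is actually stored
}

void loop() {
}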

...and the ~7-digit limitation of 32-bit numbers is fundamental math.

Where is this fundamental math? See @westfw's Post #8.

As already stated in this thread, on an Arduino DUE, a double is a 64-bit variable whereas a float is a 32-bit variable.

See this snippet, which shows the 15-digit precision of a double:

void setup() {
  Serial.begin(250000);

  float var1 = -2.0 / 3.0;
  Serial.println(var1, 10);   // 7 digits precision

  double var2 = -2.0 / 3.0;
  Serial.println(var2, 20);   // 15 digits precision

  var1 = -16666666666e-10;
  Serial.println(var1, 10);   // 7 digits precision

  var2 = -166666666666666666666e-20;
  Serial.println(var2, 20);   // 15 digits precision
  /****************  OUTPUTS  *************
  -0.6666666865
  -0.66666666666666660745
  -1.6666666269
  -1.66666666666666678509
  *****************************************/
}

void loop() {
}
There is no fundamental difference between these two IEEE-754 representations; the 32-bit format uses an eight-bit, excess-127 exponent and a 23-bit mantissa, and the 64-bit format an 11-bit, excess-1023 exponent and a 52-bit mantissa.
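
A small sketch (my own illustration, not from the posts above, assuming a binary32 float) that splits a value into its three fields; the exponent byte is read as an unsigned number, and subtracting the bias of 127 gives the true exponent - that is all "excess-127" means:

#include <string.h>

void setup() {
  Serial.begin(9600);

  float f = 0.625;                           // 0.625 = 1.25 x 2^-1
  uint32_t bits;
  memcpy(&bits, &f, sizeof bits);

  uint32_t sign     = bits >> 31;            // 1 bit
  uint32_t exponent = (bits >> 23) & 0xFF;   // 8 bits, stored as an unsigned value 0..255
  uint32_t mantissa = bits & 0x7FFFFFUL;     // 23 fraction bits

  Serial.println(sign);                      // 0
  Serial.println(exponent);                  // 126 (the stored, biased exponent)
  Serial.println((int)exponent - 127);       // -1  (the true exponent after removing the bias)
  Serial.println(mantissa, HEX);             // 200000 (fraction = 0.25, so the value is 1.25 x 2^-1)
}

void loop() {
}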

1. In binary32 format, the exponent (e) is an 8-bit unsigned binary number (0 to 255). I manually decoded the binary32 (3F 0A 3D 70) value of 0.54 on the basis that e is an unsigned binary number, and I got the number back. @AWOL is saying that it is excess-127. Are they equivalent?

2. I had been playing with the Arduino DUE to confirm that it really supports the binary64 format. Seeing 15 digits (52/3.32) of accuracy after the decimal point in the result of a simple FLP addition, I was convinced that the DUE supports the format. I tried to validate my conviction by reading the 64-bit binary64 formatted data of the double z1 = 2.24691357824691362, but I was apparently not successful! My codes are:

double x1 = 1.12345678912345678;
double y1 = 1.12345678912345678;
double z1 = x1 + y1;
// Manual Computation: 2.246913578246913 56
lcd.print(z1, 17);   // shows: 2.246913578246913 62
//------------------------------------------------------

(a) I tried this way first to read the binary64 formatted value

double *ptr;
ptr = (double*)&z1;
double m = *ptr;
lcd.print(m, HEX);     //shows: 2.246913578246913 (now I know why? because double itself is float!)

(b) Next I tried this way:
long *ptr1;
ptr1 = (long*)&z1;
long m1 = *ptr1;
lcd.print(m1, HEX);   // gives: D3 7C 12 16

(c) I repeated the codes of (b) with ptr = ptr +4 (intuition) to get the remaining part of binary64's data
lcd.setCursor(0, 1);
long *ptr2;
ptr2 = (long*)&z1;
ptr2 = ptr2+4;
long m2 = *ptr2;
lcd.print(m2, HEX);   // gives: 00 08 10 15

(d) Can I take:

D3 7C 12 16 00 08 10 15 as the binary64 formatted data for z1 = 2.246913578246913 62?
We can check it manually. We will try it someday (computing manually 52 fractional binary digits...!!!!!).

Would appreciate comments!
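
A sketch of my own (assuming an Arduino DUE, where double is binary64) that copies the whole 8-byte pattern at once; note that adding 4 to a long* advances by four longs (16 bytes), not 4 bytes, so step (c) above reads beyond z1:

#include <string.h>

void setup() {
  Serial.begin(250000);   // Serial is used here for the dump; lcd.print() works the same way

  double x1 = 1.12345678912345678;
  double y1 = 1.12345678912345678;
  double z1 = x1 + y1;

  uint32_t words[2];
  memcpy(words, &z1, sizeof words);   // copy all 8 bytes of the binary64 pattern

  Serial.println(words[1], HEX);      // high 32 bits: sign, 11-bit biased exponent, top of the fraction
  Serial.println(words[0], HEX);      // low 32 bits: rest of the fraction (the D3 7C 12 16 read in step (b))
}

void loop() {
}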

I had been playing with the Arduino DUE to confirm that it really supports the binary64 format. Seeing 15 digits (52/3.32) of accuracy after the decimal point in the result of a simple FLP addition

A simple "Serial.println (sizeof (double));" would have sufficed.

It is already known that the size of a double is 64 bits (8 bytes). So, the sizeof() function will always return 8, and it does so. That's why I wanted to read the actual binary64 formatted data from memory and then decode it manually to recover the original FLP number.

It is already known that the size of a double is 64 bits (8 bytes). So, the sizeof() function will always return 8, and it does so

Not on a Uno it doesn't

On the UNO, double means float, which is 32-bit (4 bytes).

...which is why a simple sizeof would suffice.

There is another, simpler way to do it: just copy the known number 8; there is no need to write any code.

Mind it! You have not yet answered my question from Post #12. It is again going to be a late night!

Thank you and Good Night!

GolamMostafa:
You have not yet answered my question of Post#12.

Is Google broken again?

I hate it when that happens