A 2 byte float?

Use

    Serial.print(theFloat, 6);

so you aren't misled by the rounding to two decimal places that the print() method does by default.

Print the float to six places at transmit side and again to six places at the receiver side.

Post your conclusion results.

a7

Congratulations, now you know it works for 1 of the 4 Billion+ combinations of 32 bits.
"Proof by Working Example" is not a rigorous proof technique by any stretch of the imagination.

3 Likes

That still doesn't guarantee that you get exactly the same 4 bytes on the RX side as you had on the TX side.

I should have said, "and leave the transmission as is, that is to say defaulting to 2 places".

My point was to look closely at the two numbers on both sides. After transmission, we see loss of precision, but only if we look.

a7

Yes! It loses precision heavily. I sent 17.356789, but I got 17.359998.

21:53:18.343 -> 17.359998
21:53:19.316 -> 17.359998
21:53:20.339 -> 17.359998

At Sender:

void loop()
{
  Wire.beginTransmission(0x23);
  Wire.print(17.356789, 6);
  Wire.endTransmission();
  delay(1000);
}

At Receiver:

void loop()
{
  if (flag == true)
  {
    float y = atof(myData);
    Serial.println(y, 6);
    flag = false;
  }
}

So, Wire.print() is not the way to transfer a floating point number over the I2C bus.

1 Like

Which evidently our colleague @GolamMostafa did, inadvertently or not!

a7

1 Like

If an ATtiny is used as an I2C Slave to send information, then perhaps it is possible to send the integer value of analogRead() and do the math on the Master.
In many situations it is possible to do the math with integers.
Below is my test with the float16 library:

// Testing float16 library
// https://github.com/RobTillaart/float16
// For: https://forum.arduino.cc/t/a-2-byte-float/1170014
// This Wokwi project: https://wokwi.com/projects/376313228108456961


#include <TinyDebug.h>    // a feature of Wokwi, an internal serial output
#include <float16.h>

void setup() 
{
  Debug.begin();
  Debug.println("Test sketch for float16");

  int iterations = 20;

  Debug.println(PiGregoryLeibniz16(iterations));
  Debug.println(PiGregoryLeibniz32(iterations),10);
}

void loop() {}


// The Gregory-Leibniz method
float16 PiGregoryLeibniz16(int n)
{
  float16 pi(1);
  const float16 one(1);
  const float16 two(2);
  unsigned int count = 3;
  for(int i=0; i<n; i++)
  {
    pi -= one / float16(count);
    count += 2;
    pi += one / float16(count);
    count += 2;
  }
  return(float16(4)*pi);
}


// The Gregory-Leibniz method
float PiGregoryLeibniz32(int n)
{
  float pi = 1.0;
  unsigned int count = 3;
  for(int i=0; i<n; i++)
  {
    pi -= 1.0 / (float) count;
    count += 2;
    pi += 1.0 / (float) count;
    count += 2;
  }
  return(4.0 * pi);
}

Try it in Wokwi simulation:

Result:

Test sketch for float16
3.1797
3.1659789085

1 Like

what did you expect really? :wink:

That being said, if it's a temperature reading with the usual cheap sensors, then 1 decimal digit is probably all you need...

1 Like

What's that exponent worth in decimal? It allows for numbers roughly between 1E-4 and 1E+4.

For more significant digits or higher exponents I've developed a floating exponent format, with up to 13 bits significand and 8 bits exponent as opposed to only 11 and 5 bits of 16 bit IEEE 754 numbers.

Explained in the article.

I meant:

The maximum representable value is (2−2^−10) × 2^15 = 65504.

What's the practical worth of so restricted numbers?

The maximum representable value is (2−2^−10) × 2^15 = 65504.

So what?

FP16 is used extensively in neural networks and computer graphics in applications where speed is much more important than accuracy. Multiplication and division are an order of magnitude or more faster than with 32 or 64 bit floats.

They represent a reasonable number of useful fractions as well as whole numbers.

Really, "whole" numbers extend to 2047 with 11 bits. All numbers above that cannot be incremented by 1, because the difference between adjacent representable numbers (the precision) becomes 2 or more.

Evidently, FP16 doesn't fit your particular needs.

Given the following item from post #33, @jremington:

To obtain the real value (0.333325195) from the binary16 value (0x3555), can I use the following template/formula? If yes, then what will be the value for m? (It was 127 for binary32 format.)

What do you consider the "real value"? Floating point numbers are imprecise by nature, with inherent deficiencies in binary/decimal conversion.

It is said here (Fig-1, excerpt from Wikipedia)!


Figure-1: binary16 representation (excerpt from Wikipedia)

That's a correct representation of a binary number. You cannot always expect exactly the same value in the decimal representation of the binary number.

Untested: How does the decimal value 0.2 (1/5) look as a floating point number (bit pattern)? What's its real value?

The original value which is here: 0.2.

The binary32 formatted 32-bit pattern, in hex, is 0x3E4CCCCD (I have used an online converter).

The Real Value is what appears after converting 0x3E4CCCCD back into a float, and it is: 0.2 (this time I have used code on the UNO).

Other way around. Any value representable in binary is exactly representable in decimal. The reverse is not true.

2 Likes