define a float

(Is it correct?)

Of course not. This discussion has been had MANY, MANY times, so use the forum search function.

jremington:
Of course not. This discussion has been had MANY, MANY times, so use the forum search function.

The engineer in me says the difference between using one or the other is negligible anyway.

  1 ÷ 1023 = 0.0009775171
  1 ÷ 1024 = 0.0009765625

difference = 0.0000009546

the difference only shows up about three orders of magnitude below the values themselves...

but the 1024 is correct...

A student takes a test and gets 67 out of 100 questions correct... the student scored 67% (67 ÷ 100), not 68% (67 ÷ 99).
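
if you want to see those numbers from the board itself, here is a throwaway sketch (nothing ADC-specific in it, it just prints the arithmetic above):

void setup()
{
  Serial.begin(9600);
  Serial.println(1.0 / 1023.0, 10);                 // ~0.0009775171
  Serial.println(1.0 / 1024.0, 10);                 // ~0.0009765625
  Serial.println(1.0 / 1023.0 - 1.0 / 1024.0, 10);  // ~0.0000009546
}

void loop()
{
}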

All of this is off topic, so my apologies to the OP.

but the 1024 is correct...

That means:
When Vin is equal to Vref, the ADC value would be an 11-bit code (100 0000 0000 = 1024), whereas the ADC in question has only 10-bit resolution!

But the definition says:
Full Scale is the value at which all the bits of the ADC assume the logic-high (LH) state in response to an input value equal to Vref.
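
Putting that definition in code (just a sketch; the 1.1 V reference is only an example value):

void setup()
{
  Serial.begin(9600);
  Serial.println((1 << 10) - 1);             // 1023: all ten bits at logic HIGH, the highest possible code
  Serial.println(1023 * (1.1 / 1023.0), 8);  // ~1.1 V: the top code maps back exactly onto Vref
  Serial.println(1023 * (1.1 / 1024.0), 8);  // ~1.0989 V: the top code falls about 1 mV short of Vref
}

void loop()
{
}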

GolamMostafa:
That means:
When Vin is equal to Vref, the ADC value would be an 11-bit code (100 0000 0000 = 1024), whereas the ADC in question has only 10-bit resolution!

But the definition says:
Full Scale is the value at which all the bits of the ADC assume the logic-high (LH) state in response to an input value equal to Vref.

you are correct! max value is 1023 not 1024... I stand corrected!

Thanks!!

BulldogLowell:
you are correct! max value is 1023 not 1024... I stand corrected! :wink:

and +1.

robtillaart:
This is typically one of the problems with #define that const solves.

try

const float R1 = 984000;
const float R2 = 472000;

no decimal point needed and the compiler gets the type information it needs.
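
To see the difference that type information makes, here is a minimal side-by-side sketch (the _RAW names are made up here just so both flavours fit in one file):

#define R1_RAW 984000          // integer literals, as in the original #defines
#define R2_RAW 472000

const float R1 = 984000;       // same values, but typed as float
const float R2 = 472000;

void setup()
{
  Serial.begin(9600);
  Serial.println((R1_RAW + R2_RAW) / R2_RAW);  // integer division: prints 3
  Serial.println((R1 + R2) / R2, 4);           // float division: prints ~3.0847
}

void loop()
{
}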

I understand that const would be better than #define in this case. But the open source project that I am using (and trying to improve) uses #define for all the values that the end user is supposed to adjust to their specific hardware setup. I would like to follow that style.

strixx:
I understand that const would be better than #define in this case. But the open source project that I am using (and trying to improve) uses #define for all the values that the end user is supposed to adjust to their specific hardware setup. I would like to follow that style.

So look at reply 11, which uses a constexpr function:

#define R1 984000
#define R2 472000

constexpr float batFact(float a, float b)
{
  return ((1.1/1023)*((a+b)/b));
}

const float myValue = batFact(R1, R2);

void setup()
{
  Serial.begin(9600);
  Serial.println(myValue, 8);
}

void loop() 
{
}

BulldogLowell:
So look at reply 11, which uses a constexpr function:

#define R1 984000
#define R2 472000

constexpr float batFact(float a, float b)
{
  return ((1.1/1023)*((a+b)/b));
}

const float myValue = batFact(R1, R2);

What would be the difference between this solution and the ones suggested in #4 and #6?

#define R1 984000
#define R2 472000
const float batFact = ((1.1/1023)*(((float)R1+R2)/R2));

strixx:
What would be the difference between this solution and the ones suggested in #4 and #6?

#define R1 984000
#define R2 472000
const float batFact = ((1.1/1023)*(((float)R1+R2)/R2));

just a few:

  • Typing: it is a lot easier, because you avoid all the cast machinations you need with macros, as you learnt.
  • It is crystal clear as to your programming intent.
  • It is easier to follow than a complex algebraic macro (particularly when the expression gets a bit more complex).
  • It is free, in that the expressions are evaluated at compile time (see the sketch after this list).
  • It has been available in C++ since C++11, so why not use it?
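
On the compile-time point, here is a small variation of the reply 11 sketch (just an illustration, not from the thread): marking the result constexpr as well forces the whole expression to be folded at compile time, whereas a plain const is merely allowed to be.

#define R1 984000
#define R2 472000

constexpr float batFact(float a, float b)
{
  return ((1.1 / 1023) * ((a + b) / b));   // same formula as before
}

// constexpr (rather than const) requires the initialiser to be a constant
// expression, so this value is guaranteed to be computed by the compiler
constexpr float myValue = batFact(R1, R2);

void setup()
{
  Serial.begin(9600);
  Serial.println(myValue, 8);
}

void loop()
{
}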

OK. I understand.

Thank you very much, everybody, for all the help and explanations.
Going to bed tonight a little bit wiser... :slight_smile: