does it maybe, perhaps, have a tendency to drop the first zero after the decimal point?
I've been blaming my analog sensor circuits for not being accurate.
I've been blaming my math - and my use of (float) casts - for why SOMETIMES I am not receiving the number my meter says I should expect.
My sensor readings should be accurate to AT LEAST one decimal place.
I'm using the Virtual Wire library to send the data. It needs characters.
I'm starting to strongly suspect that the reason a number that should be something like '9.08' is being transmitted as '9.8' is that ftoa() is screwing up. (I say 'should be' because my data source is always fluctuating - I don't REALLY know what it is.)
Is that possible? Or have I not had enough sleep - again?
Here's the function. And no, I did not write it. I copied it from a program a friend sent me. I don't think he wrote it. I don't know who to give credit to.
char *ftoa(double f, char *a, int precision) { // Convert float to ASCII
  // Powers of ten, indexed by the requested precision
  long p[] = {0, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000};
  char *ret = a;
  long heiltal = (long)f;       // integer part
  itoa(heiltal, a, 10);
  while (*a != '\0') a++;       // walk to the end of the integer digits
  *a++ = '.';
  // fractional part, scaled up to an integer
  long desimal = abs((long)((f - heiltal) * p[precision]));
  itoa(desimal, a, 10);         // note: itoa() writes no leading zeros
  return ret;
}
There is one thing I notice right off-the-bat, though...
abs().
I thought I remembered reading something about doing math inside abs() being a 'no-no'.
Anyone out there know for sure?