I have a standard binary-to-decimal conversion function that works fine with desktop C++ compilers. When I use the same function in the Arduino IDE, it doesn't generate the correct output.
For example, when the input is 1100001, corresponding to 97 in decimal (the character 'a'), it is converted into 95 instead. As another example, I tried 1111000, which corresponds to 120, and I get 116 as the output.
I am attaching the full code below. Can someone kindly point out where the error is? The conversion function seems logical to me.
void setup() {
  Serial.begin(9600);
}

int binary2decimal(long int n)
{
  int decimal = 0, i = 0, remainder;
  while (n != 0)
  {
    remainder = n % 10;
    n /= 10;
    decimal += remainder * pow(2, i);
    ++i;
  }
  return (decimal);
}

void loop() {
  long int b = 1111000;
  int d = binary2decimal(b);
  Serial.write(d);
}
The problem comes from the pow() function, which is not accurate here. I also changed write to println.
Try this instead:

void setup() {
  Serial.begin(115200);
  while (!Serial);
  long int b = 1111000;
  int d = binary2decimal(b);
  Serial.println(d);
}

int binary2decimal(long int n)
{
  int decimal = 0, i = 0, remainder;
  while (n != 0)
  {
    remainder = n % 10;
    n /= 10;
    decimal += remainder * (1 << i);
    ++i;
  }
  return (decimal);
}

void loop() {
}
Since pow() is a floating-point function, it can return values that are slightly off. For example, pow(2, 3) can return 7.999998, which is then multiplied by 1 and truncated to an integer before being added to your total. The result is adding 7 instead of 8. The same happens for 2^4, 2^5, and 2^6. You can work around the truncation problem by adding 0.5 to the result of pow() so that when it gets truncated it lands on the nearest integer.
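If you wanted to keep pow() for some reason, that rounding workaround would look like this inside the loop (a sketch of the idea, not the recommended fix):

decimal += remainder * (int)(pow(2, i) + 0.5);  // +0.5 so truncation rounds to the nearest integer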
Of course, floating-point exponentiation on an 8-bit processor with no FPU is much slower than shifting, so 1 << i is a much better solution for powers of 2.
Algorithm:

for (int i = 7; i >= 0; i--)
{
    IPBCD = IPBCD*2 + b[i];                    // shift left and bring in bit i, MSB first
    IPBCD = adjusted IPBCD to get correct BCD  // ATmega328P has no DAA instruction for this adjustment
}
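Here is a minimal sketch of that shift-and-adjust idea (often called "double dabble") for an 8-bit value. The function name binary2bcd and the three-digit packed result are my own choices, and the adjustment is done before each shift, which is the usual software form since there is no DAA instruction:

// Convert an 8-bit binary value to three packed BCD digits.
unsigned int binary2bcd(byte bin) {
  unsigned int bcd = 0;                        // 12 bits used: three BCD nibbles
  for (int i = 7; i >= 0; i--) {
    // Add 3 to any BCD digit >= 5 so the coming shift carries into the next digit.
    if (((bcd >> 0) & 0x0F) >= 5) bcd += 0x003;
    if (((bcd >> 4) & 0x0F) >= 5) bcd += 0x030;
    if (((bcd >> 8) & 0x0F) >= 5) bcd += 0x300;
    bcd = (bcd << 1) | ((bin >> i) & 1);       // shift left and bring in bit i, MSB first
  }
  return bcd;                                  // binary2bcd(0b01111000) == 0x120, i.e. digits 1, 2, 0
}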
RayLivingston:
That is about the most inefficient method ever to accomplish a simple assignment:
Once upon a time, the 8085 was running at only 4 MHz and was producing pay slips for office employees. That was efficient for its time. In a forum, we get exposed to various options/ideas for solving a problem. I hope that we are here not for competition but for better learning.
TheMemberFormerlyKnownAsAWOL:
Or, more succinctly (and avoiding the obvious Arduino macro expansion trap)
byte bin = 0b01111000;
byte dec = bin;
Yes! I am back to square one, as I have now noticed that dec is not holding 120 (0001 0010 0000), which is the BCD/decimal image of the given binary 01111000. In fact, I was motivated to explore the use of the bitRead() function and the positional weights of the bits, having thought erroneously that a human being was doing the math and not a binary machine. k+
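For what it's worth, a bitRead() loop that sums the positional weights simply reconstructs the same value, which is exactly why byte dec = bin; works (a small illustration; the variable names are mine):

byte bin = 0b01111000;
int dec = 0;
for (int i = 7; i >= 0; i--) {
  dec = dec * 2 + bitRead(bin, i);  // accumulate positional weights, MSB first
}
Serial.println(dec);                // prints 120, the same value bin already held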
WARNING: The sketch assumes you are using decimal constants, such as "long int b = 1111000;". You will get the wrong result if you add a leading '0', which makes the value an octal constant. For example, "long int b = 01111000;" is equivalent to "long int b = 299520;". Since the code doesn't check that the digits are only 0 or 1, you will get a very strange answer:
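A quick illustration of the trap; the binary-literal form at the end is the safest way to write such constants:

long int b1 = 1111000;    // decimal constant: the intended "binary-looking" number
long int b2 = 01111000;   // leading 0 makes this an OCTAL constant: 299520 in decimal
long int b3 = 0b1111000;  // true binary literal: 120 decimal, no digit trickery needed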
RayLivingston:
It made perfect sense on the octal-based PDP-7 for which C was originally designed...
I've seen youngsters argue that they still could have decided to make that 0o, mimicking 0b or 0x, but this is just forgetting about history.
Originally, the BCPL language represented octal numbers with a leading 8, a space, and then the number (e.g. 8 543). The use of three-bit groups made octal notation natural: in the '60s, the main programming number systems were decimal and octal, and mainframes had 12-, 24- or 36-bit words, which are nicely divisible by 3 (log2(8)). Data was represented in octal. No one was using binary or hexadecimal at the time, as it just did not make sense for those architectures, and thus there was no syntax for it.
When Ken Thompson created B from BCPL, he decided to go for a 0 prefix instead of the 8 and to group the digits together. Using a leading 0 in the code was a trick to make the parser faster: it would know it was dealing with a constant integer literal, and not a keyword or identifier, from the first character with no lookahead. An integer constant would always consist of a single token, and the parser could immediately tell the base (and the value 0 is the same in decimal and octal; you just write 0 or 00). Smart.
When C was created from B, the DEC architecture (the PDP-11 had 16-bit words) and the market were moving to hexadecimal, but since octal was still needed for other machines, 0x was arbitrarily chosen for hexadecimal, making the parser a bit more complicated. It was decided to disambiguate with a letter rather than with another digit like 00, probably because it was easier to read and let the parser know from the second character that the constant was not octal.
For backward compatibility it just did not make sense to re-label octal as 0o, and it was not until C++14 that 0b was introduced for binary integer literals.
Now, in 2020, when you look back it seems a bit cumbersome, but that's what it takes when standing on the shoulders of giants/pioneers and dealing with backward compatibility.