FYI: BCD format

Just FYI:
I saw a thread (discussion) related to BCD: Binary-Coded Decimal.

This is just a "convention" about the value range allowed in a nibble (half a byte, 4 bits) of a register.
BCD covers the values 0..9. But a nibble can hold 0..15 - before it overflows - in regular hex (base 16) math.

So, when you do math on BCD values, such as:

char myBCD = 9;
char otherBCD;
otherBCD = myBCD + 1;

you want "otherBCD" to also be BCD: the next higher digit is also 0..9, and the +1 results in an "overflow" (carry bit) so that 9 + 1 = 10. But 10 here as BCD is a 1 in the higher nibble and a 0 in the lower nibble. So the "9 + 1" already overflows the 0..9 range and the next higher nibble has to take the carry bit. It becomes "10" (instead of 0x0A in hex mode).

This BCD stuff is "very old" MCU related. You might find support for this "format" on very old MCUs (e.g. the Zilog Z80). Actually it is not a format, it is a "convention": when you do VAL + 1 on a BCD value, it should roll over at the value 10 (instead of 16). But a regular nibble rolls over at 16 (0..15 possible as the value of a nibble).

It needs special instructions, and modern MCUs do not have BCD format support anymore. They treat everything as 0..15 (4 bits, as a nibble, in hex). BCD can be handled in software (if needed): just make sure that a digit overflow (reaching the value 10) results in a carry taken to the next higher nibble.

BCD is a "convention" about possible values in a byte, in a "nibble" (half byte, 4bit). If you have support for BCD: it forwards the carry bit (if larger as 9) to the next higher nibble. The carry bit is set on transition from value 9 to 10.
But regular hex nibbles roll-over on value 16: so, the carry bit is taken to next higher nibble just of it is larger as 15. BCD arithmetic is not there anymore on modern MCUs. They assume real hex values (0..15) in a nibble. BCD math can be done in SW.
But you cannot do so much with BCD, except some external devices need the data in BCD format. A processor would never use internally BCD format, instead hex (4bit covers 0..15 instead of 0..9).
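As a minimal sketch of that software handling (the helper name is my own, no hardware BCD support is used): add 1 to a packed BCD byte and let each digit roll over at 10 instead of 16:

unsigned char BCDIncrement(unsigned char bcd) {
    unsigned char lo = (bcd & 0x0F) + 1;      //add 1 to the low digit
    unsigned char hi = (bcd >> 4) & 0x0F;

    if (lo > 9) {                             //decimal overflow: 9 + 1 = 10
        lo = 0;                               //the low digit wraps to 0
        hi++;                                 //carry goes to the next higher nibble
        if (hi > 9)                           //99 + 1 wraps to 00 here
            hi = 0;                           //(a carry out would go to the next byte)
    }
    return (unsigned char)((hi << 4) | lo);
}

//example: BCDIncrement(0x09) gives 0x10 (BCD "10"), not 0x0A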


BCD is convenient for cheap ICs without much computing power, like:
DCF77, MH19 CO2 sensor, blood pressure measuring devices, etc.

Actually, the Burroughs medium systems mainframes used nibble-based addressing, and math instructions were there for either BCD or binary.

Many current processors, including the Intel x86, have instructions to manipulate BCD values, and yes, BCD is used in some applications.


If I want to save 123456789123 into a memory location as decimal digits (base 10), how will I do it?

There are probably "better" ways, but you might try printing your number to a string, then converting the individual chars of the string back to single-digit ints. Depending on what you're doing, that might be as efficient as anything else.
C

But I'm presuming you've a number in some format to begin with. Or are you receiving the string 123456789123 and wondering how to save it one digit at a time? If so, then, since numeric characters are ASCII 0x30 and above, I'd start by subtracting 0x30 from each char.
So again, it comes down to an imprecise problem statement.
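A rough sketch of that string round trip (the names are just for illustration, assuming a plain C environment with stdio available):

#include <stdio.h>

int main(void) {
    unsigned long long value = 123456789123ULL;
    char text[32];
    unsigned char digits[32];                 //one decimal digit per byte

    int len = snprintf(text, sizeof(text), "%llu", value);   //print the number as decimal text

    for (int i = 0; i < len; i++)             //decimal digits are ASCII 0x30..0x39,
        digits[i] = (unsigned char)(text[i] - 0x30);          //so subtract 0x30 to get 0..9

    for (int i = 0; i < len; i++)
        printf("%u ", digits[i]);             //prints: 1 2 3 4 5 6 7 8 9 1 2 3
    printf("\n");
    return 0;
}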

Say, I have the following declaration:

unsigned long long y = 123456789123; // 0x02DFDC192B

The number will be saved in memory in bit form (and possibly viewed in binary if the user wants), where the base is 2.

I desire to have such a representation in memory that the contents are really decimal digits/numbers, so that base 10 can be applied to extract the original number (12345678123).

It comes down to the question: "What is BCD format?"
It is: every digit is a nibble (4 bits) holding just the values 0..9 (instead of the 0..15 possible).

unsigned long long y = 123456789123;

is not a BCD number: it is the decimal representation of a hex value (0x1C_BE99_1A83), which needs to be a long long (64-bit) value - correct. Why do you think it is 0x02DFDC192B? That would be d'12345678123 (the 9 is missing there).

In order to create a BCD value in a simple way - here in a 64bit value (long long):

  • 64bit can contain 16 BCD digits

  • each nibble (4bit) has value 0..9 (not 0..F anymore)

  • you have to pack 2 BCD values (as two nibbles) into one byte

  • Open question: do you want to have the nibbles as Little or as Big Endian?

Let's assume you convert from a string (a string presenting the value, not the value 123456... itself!). Taking care of the ASCII representation of digits in a string (they have +0x30 added), you could do this:

char myBCDString[] = "123456789123";

#include <string.h>                      //for strlen()

long long ConvertStrBCDintoBinaryBCD(char *s) {
    long long BCDres = 0;                //pre-initialize with all as zero
    char *ptr;

    ptr = (char *)&BCDres;               //we fill BCDres (a long long) like a byte array
    ptr += 7;                            //we fill the byte array as Big Endian: the lowest digits
                                         //go into the last byte of the long long - modify the code for Little Endian

    int i;                               //index to iterate over the BCD string
    unsigned char v;                     //a helper variable to fill a byte with two BCD digits

    for (i = (int)strlen(s); i > 0; i -= 2) {   //we handle two ASCII characters per iteration
        //go backwards from the end of the string and convert each ASCII digit into a nibble (which is here BCD)
        //we do two steps per byte: one for each 4-bit nibble set in BCDres
        v = *(s + i - 1) - 0x30;         //the last char sits at index strlen()-1, from ASCII to BCD
        if (i == 1) {
            *ptr = v;
            break;                       //handle the case: odd number of digits provided
        }
        v |= (*(s + i - 2) - 0x30) << 4; //the next (higher) nibble as BCD
        *ptr-- = v;                      //now we set the double-BCD (two nibbles) into the long long output
    }

    return BCDres;
}

long long mBCD = ConvertStrBCDintoBinaryBCD(myBCDString);

You get the point (BCD is nibbles with values 0..9).

You can also do a simple assignment as BCD:

long long myBCD = 0x123456789123ll;

See that it needs the HEX format of the value (not the decimal value!). This makes sure that each nibble is just 0..9 as a value.
But in this case, on a Little Endian machine the bytes (double-nibbles) are placed in memory in Little Endian order (the smallest digits, the trailing 23, end up in the lowest byte of the long long).
But not really Little Endian: the nibbles within a byte are not swapped, they are still Big Endian (the higher digit in the higher half of the byte). This would be a very confusing format (Little Endian for bytes, but Big Endian inside a byte).
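To go the other way and pull the decimal digits back out of such a packed value, a small sketch (my own code) that works on the value itself via shifts, so the byte order in memory does not matter:

#include <stdio.h>

int main(void) {
    unsigned long long myBCD = 0x123456789123ULL;   //packed BCD: every nibble is one decimal digit
    char out[17];                                   //up to 16 digits plus terminating zero
    int n = 0;
    int started = 0;

    for (int shift = 60; shift >= 0; shift -= 4) {  //walk the nibbles from the highest one down
        unsigned digit = (unsigned)((myBCD >> shift) & 0xF);
        if (digit == 0 && !started && shift != 0)
            continue;                               //skip leading zero nibbles
        started = 1;
        out[n++] = (char)('0' + digit);             //BCD nibble back to ASCII
    }
    out[n] = '\0';

    printf("%s\n", out);                            //prints: 123456789123
    return 0;
}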

BCD Arithmetic
You CANNOT do operations with such a long long BCD value. If you do

long long myBCDresult = myBCD1 + myBCD2;

it will fail! The result is not a BCD number anymore: the processor instructions assume that the values in myBCD1 and myBCD2 are hex numbers (nibbles 0..F). The addition would not 'overflow' when doing 5 + 6 (as BCD, which should result in BCD format as 0x11). It results in 0xB (in one nibble), which is not a BCD number anymore!

I am not aware of functions or libraries for ARM processors to do BCD arithmetic. Potentially there are some. But you would need LIB functions, not plain C operators (C does not know anything about BCD - it is your "definition of how to interpret values, bytes, variable contents etc.")!
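If it is really needed, a small sketch of doing that decimal adjust per nibble by hand in plain C (my own helper, not an ARM library function):

unsigned long long BCDAdd(unsigned long long a, unsigned long long b) {
    //add two packed BCD values (16 digits each), propagating a decimal carry
    //whenever a nibble sum gets larger than 9 - no processor BCD support used
    unsigned long long result = 0;
    unsigned carry = 0;

    for (int shift = 0; shift < 64; shift += 4) {
        unsigned sum = (unsigned)((a >> shift) & 0xF)
                     + (unsigned)((b >> shift) & 0xF)
                     + carry;
        if (sum > 9) {                    //decimal overflow of this digit
            sum -= 10;
            carry = 1;                    //carry goes into the next nibble
        } else {
            carry = 0;
        }
        result |= (unsigned long long)sum << shift;
    }
    return result;                        //a final carry out of digit 16 is dropped here
}

//example: BCDAdd(0x05, 0x06) gives 0x11 (BCD "11"), not 0x0B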

About BCD, e.g. see here:
BCD format

16, in fact. Or 15 plus sign.

Actually, you can also do something like this:

char *bcdValueString = "0x123456789123";
long long myBCDvar;
sprintf((char *)&myBCDvar, "%x", bcdValueString);

This would also place a value 0x123456789123 into myBCDvar. But again as Little Endian and byte-nibbles as Big Endian.

For me, it sounds more like: handle all your BCDs as arrays of chars. Just make sure that a byte (one char) contains two BCD digits. And how should they be placed (Little or Big Endian)?
Or:
if you want bytes as BCD, where just one nibble per byte is used, it would look like this:

long long myBCD = 0x0102030405060708ll;

No idea what your "BCD format" is. Here, with one BCD digit per byte, you can have just 8 digits.
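If that unpacked, one-digit-per-byte layout is what is meant, a small sketch of filling it from a digit string (the name and the 8-digit limit are my own choices):

#include <string.h>

unsigned long long StrToUnpackedBCD(const char *s) {
    //build an "unpacked" BCD value, one decimal digit per byte, from a digit string
    unsigned long long result = 0;
    size_t len = strlen(s);

    if (len > 8)
        len = 8;                                   //only 8 one-per-byte digits fit into 64 bits

    for (size_t i = 0; i < len; i++) {
        unsigned char digit = (unsigned char)(s[i] - 0x30);   //ASCII to 0..9
        result = (result << 8) | digit;            //shift in one digit per byte
    }
    return result;
}

//example: StrToUnpackedBCD("12345678") gives 0x0102030405060708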

Sorry, in my previous comment: a long long (64-bit) value can contain only 16 BCD digits (not 19)!

Think about it: 0x12345678 is a 32-bit value where every nibble holds one of these BCD digits.
But d'12345678 (as a decimal number) is never a BCD-encoded value (instead it is 0x00BC614E). In BCD, A..F as digit values are "prohibited".

correct, my mistake.

BTW: is there a "sign bit" in BCD format and values? What would a negative BCD value look like?
Interesting question! What is a negative BCD-encoded value? What is the "sign bit coding" in BCD?

If I assume that in BCD every nibble (4 bits) holds a value 0..9, I can have a (positive) BCD value such as:
b'1001_1001_0001_0010 (binary format)
but it would mean "9912..."
The highest bit of a nibble is already set and needed to encode a 9.

So, it looks to me like there is no sign bit in BCD format. There could be a special coding for a sign character, e.g. one violating the value range 0..9.
If we reserve one nibble as a sign encoding, e.g. b'0000 (positive) vs. b'1111 (negative), it would consume one nibble, one digit, for the sign notation.

I guess the regular sign bit (MSB = 1) is not there in BCD format (at least not when all nibbles of a word are used for BCD encoding).

More correct should be:

char *bcdValueString = "123456789123";
unsigned long long myBCDvar;
sscanf(bcdValueString, "%llx", &myBCDvar);

assuming "x" does not need this 0x in the string (considered as hex representation already) and "llx" in order to force a long long conversion to memory of myBCDvar.
(my example was wrong! wrong usage of sprintf)

do you want it "packed" BCD, as in two 4-bit nibbles per byte, or do you really want an array of bytes that each contains a value between 0 and 9?

1 Like

Why not take 30 seconds and look it up? There are several sign conventions.

You are right.
(even your posted copy from Wikipedia does not show me how "sign" is encoded).
Yes, as I thought: it is a convention!

  • Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). (cited from Wikipedia)

Wikipedia BCD

The same Wikipedia page gives me the answer to my question: you lose one digit when coding with a sign "nibble" (not a sign bit!).
And: different conventions are possible. Which one is used depends on the chip we want to use. It is not a "standard" or "regular" coding of signed values. As a "convention", we have to know "what was agreed". (this was my conclusion already)
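Just to make such a convention concrete, a small sketch with the sign as the lowest nibble (0xC for plus, 0xD for minus, as in that Wikipedia convention; the layout itself is my own choice):

#include <stdlib.h>                                //for labs()

unsigned long long ToSignedBCD(long value) {
    //one possible signed packed BCD layout: the magnitude in the upper nibbles
    //and a sign nibble (0xC = +, 0xD = -) as the lowest nibble - the sign costs one digit
    unsigned long long magnitude = 0;
    unsigned long v = (unsigned long)labs(value);
    int shift = 4;                                 //nibble 0 is reserved for the sign

    while (v != 0 && shift < 64) {
        magnitude |= (unsigned long long)(v % 10) << shift;   //one decimal digit per nibble
        v /= 10;
        shift += 4;
    }
    return magnitude | (value < 0 ? 0xDull : 0xCull);         //append the sign nibble
}

//example: ToSignedBCD(-123) gives 0x123D, ToSignedBCD(123) gives 0x123C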

Not a "copy", but a clickable link. You have to actually click on it.

I had to Google it, as I could not remember after all these years. On the Burroughs Medium system, a 4-bit-addressable memory machine, all BCD storage was assumed to be positive. If a sign was necessary, a SEPARATE high-order nibble was used: 1011 was negative, and I think 1010 was positive.

Golam Mostafa
I gave you an A/B choice, and you 'liked' my post but didn't answer. I'll assume you got your answer from the ensuing posts. We still have no idea what, exactly, you want to do, but hey, it's your thread, do as you wish.
C