Converting a decimal int to an array of 8-bit hex values, and back again

Hey guys! I'm working on a project that I can't share too much about, but one of the things we need to do is convert decimal values to an array of 8-bit hex values, and back again. We also need to go from decimal to plain hex, and from hex to an 8-bit array.

so for example (the values are just examples), from:
int num = 609; to uint8_t bitArray[4] = {0x01, 0x01, 0x02, 0x01}; // whatever 609 may be in this format
but also the other way around.
I also need to be able to go from:
0x45e454f32 to the array format and back.

I've done some extensive googling over the course of 2 days and found nothing quite like this. Either this is silly and shouldn't be done in the first place, or I'm completely misunderstanding the explanations out there.

the idea is to standardise all the data that the Arduino sends into the array format.

It is straightforward to write a program that accepts numerical input, and outputs text, suitable for incorporation into your source code, in forms similar to this:

uint8_t bitArray[4] = {0x01, 0x01, 0x02, 0x01};

so that's kinda the idea.

we have a few sensors, some outputting in integer format eg: 4542, others in hex format eg: 0x438e82, and others in the uint8_t format eg: {0x01, 0x01, 0x02, 0x01, 0x01, 0x01, 0x02, 0x01}

the idea is to convert all of it to the array format and send that array over serial.

If you can't do that yourself, you are welcome to post on the Jobs and Paid Consultancy forum section.

This forum is generally focused on open source projects and code.

I'm only having trouble with the conversion because for some reason there isn't a resource on that kind of conversion in one convenient place. I thought I would kill two birds with one stone: get it all in one spot, and ask about it for myself.

I'm a bit confused: do you want to convert an integer value to a hexadecimal string representation, or to an array of bytes?

array of bytes

That will require some decision making on your part, based on the value(s) represented by the input data.

You typically use bitwise operators for that.

The following example takes a 32-bit integer and stores it in a byte array with the most significant byte first.

int32_t someInteger = 1;

uint8_t arrayOfBytes[sizeof(int32_t)];
arrayOfBytes[0] = (uint8_t)(someInteger >> 24); // most significant byte
arrayOfBytes[1] = (uint8_t)(someInteger >> 16);
arrayOfBytes[2] = (uint8_t)(someInteger >> 8);
arrayOfBytes[3] = (uint8_t)someInteger;         // least significant byte

The >> is the right bitshift operator, which shifts the binary pattern to the right. Casting to an 8-bit datatype truncates the most significant bits, keeping only the lowest 8.

The other way around can be done like this.

// cast each byte up to 32 bits before shifting; plain int is
// only 16 bits on AVR Arduinos, so << 24 would overflow it
int32_t someOtherInteger =
            (int32_t)((uint32_t)arrayOfBytes[0] << 24 |
                      (uint32_t)arrayOfBytes[1] << 16 |
                      (uint32_t)arrayOfBytes[2] << 8  |
                      (uint32_t)arrayOfBytes[3]);

The << is the left bitshift operator and the | is the binary OR operator. Explanations of how these operators work are readily available.

The important thing to remember is that you're not really "converting" between hex and decimal. Everything in the processor (or computer) is binary; the conversions are just for input/output, usually for humans. Most often we use decimal in our code and we print/display in decimal...

But you can also print-as hex, octal, or binary. Nothing about the variable changes, just what's displayed/printed.


So... If you copy a type long into a type char (or type byte) we get the low byte (8 least-significant bits). If you shift-right 8-bits, those bits are lost and you can do the same thing again ("copy" to a char or byte) to get the next byte and save it as a different variable or to a different array position.

BTW - For troubleshooting, the Windows Calculator in Programmer mode can convert between decimal, hex & binary. It won't group the hex value into bytes, but each pair of hex digits represents a byte, so you'll see what should be in your array.

And if you don't already know this: the text character representing a digit is different from its numerical value. The character '1' is stored as 49 decimal (= 0x31 hex = 00110001 binary). Most of the time the software "knows" the context, so it knows it's supposed to be the character '1' and not the value 49. (See an ASCII table.)

Remember that the size of int depends on the platform. Make it a habit to use the types defined in the stdint.h header.

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.