LED troubles with Dec to hex

I'm working with a TM1803 addressable LED strip, and I need to send 24 bits of data per LED to talk to it. I followed some examples and got it working.

I'm sending the strip these arrays of data; each hex value represents the color of one LED. There are 10 LEDs in my strip, so I send 10 24-bit values.

This is working fine.

PROGMEM const unsigned long pattern_test_mine[2][10]={
  {0x0000ff,0x00ff00,0x0000ff,0x000000,0x000000,0x000000,0x000000,0x000000,0x000000,0x000000},
  {0xff0000,0x00ff00,0x0000ff,0x000000,0x000000,0x000000,0x000000,0x000000,0x000000,0x000000},
};

What I'm looking for is a way to substitute a variable for each "0xff00ff" so that I can use more human-readable RGB values of 0-255.

I need some way of defining "red = random(255), green = random(255), blue = random(255)"

and then merging those variables into a single value I can substitute into that array of data being sent to the strip of LEDs.

Any thoughts?

The difference between decimal and hex only exists when a human is reading or writing the code. In the processor it is all going to be binary anyway.
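For example, these three declarations store the exact same byte; only the notation differs:

byte x = 255;        // decimal
byte y = 0xFF;       // hexadecimal
byte z = 0b11111111; // binary, which is all the processor ever sees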

If you want 24 bits, you need a long. You can use bit shifting to pack three bytes together into a long. An even easier solution is to use a union; I'll leave that to you to google and figure out.

byte blue = 123;
byte green = 98;
byte red = 74;

// Combine with | and explicit parentheses; note that + binds more
// tightly than <<, so "blue << 16 + green << 8 + red" would not
// group the way it reads.
long combined = ((long)blue << 16) | ((long)green << 8) | red;

You might need to change the order to match what your strip expects, but that's the basic idea. The long will be 32 bits, 4 bytes: the most significant byte will be 0, the second will be blue, then green, then red in the least significant byte.
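As a quick sanity check (assuming you have Serial set up), the values above are blue = 0x7B, green = 0x62, red = 0x4A, so:

Serial.begin(9600);
Serial.println(combined, HEX); // prints "7B624A"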

You would use a #define for this:

#define RGB(r,g,b) ((((long)(r)) & 0xFF) | ((((long)(g)) & 0xFF) << 8) | ((((long)(b)) & 0xFF) << 16))

We use a lot of parentheses so that we can be certain the order of operations comes out right.

A thing to watch out for is sign extension - that's why I bitwise-AND everything with 0xFF.
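To see why the mask matters, here is a quick illustration (assuming a 16-bit int, as on AVR):

int g = -106;                        // low byte is 0x96, same as 150
long bad  = ((long)g) << 8;          // sign-extended: 0xFFFF9600
long good = (((long)g) & 0xFF) << 8; // masked first:  0x00009600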

#defines get expanded in-line before the compiler proper sees the code. When you are using constants, this means the compiler will pre-calculate the number at compile time - it simply evaluates the constant expression.
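That means constant calls to the macro can go straight into a PROGMEM table like yours - the compiler folds each one into a single number. A sketch (the pattern_macro name is made up, and the channel order the TM1803 expects may differ, as noted above):

PROGMEM const unsigned long pattern_macro[2][10] = {
  {RGB(255,0,0), RGB(0,255,0), RGB(0,0,255), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0)},
  {RGB(0,0,255), RGB(0,255,0), RGB(255,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0), RGB(0,0,0)},
};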

If you want to do this

red = random(255);
green = random(255);
blue = random(255);

at run time, it might be better to use a function, because a #define sometimes needs to use its argument twice, and if that argument is a function call, the function ends up being called twice, which is not what you want. Here it doesn't make much difference. Still:

long rgb(int r, int g, int b) {
  return RGB(r, g, b); // arguments are evaluated once here, then handed to the macro
}

By using the #define inside the function, I only have to write the bit-shifting code once.
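Then at run time you can do exactly what you asked for. Note that Arduino's random(max) returns 0 to max-1, so random(256) gives the full 0-255 range:

long color = rgb(random(256), random(256), random(256));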

Speaking of sign extension: we both should have used casts to unsigned long instead of long.
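Something like this, with the casts and masks made unsigned - a corrected sketch of the same macro:

#define RGB(r,g,b) ((((unsigned long)(r)) & 0xFFUL) | ((((unsigned long)(g)) & 0xFFUL) << 8) | ((((unsigned long)(b)) & 0xFFUL) << 16))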

The simplest and easiest answer is still the union. No worries about bit shifting or sign extension there.
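For completeness, a sketch of what that union could look like (untested against your strip; AVR is little-endian, so the first struct member lands in the least significant byte, matching the layout above):

union Color {
  unsigned long value; // the packed color, top byte unused
  struct {
    byte r;   // least significant byte
    byte g;
    byte b;
    byte pad; // stays 0
  } ch;
};

Color c;
c.value = 0;
c.ch.r = 74;
c.ch.g = 98;
c.ch.b = 123;
// c.value is now 0x007B624A, the same as the shifted version above.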