Need explanation of POV Code

I'm trying to figure out some POV (persistence of vision) code. The program takes a color image and displays it, in color, on a moving string of LEDs. What I can't figure out is how it determines the colors. When it processes an image, it outputs the following:

prog_uchar image1[] PROGMEM = {
0x74,0x54,0x2C,0x5E,0x2A,0x00,0x2A,0x5E,0x5E,0x5E,0x7E,0x2A,0x02,0x2A ...};

Then, in the processing code, I see this:

#define TLEDS 48

// masks to select the 2 bits of each color
#define Rmask   0x60
#define Gmask   0x18
#define Bmask   0x06

uint16_t i,j,k;
uint8_t Gtmp, Rtmp, Btmp;
...
Gtmp = 0;
Rtmp = 0;
Btmp = 0;
...
SPDR = Gtmp;
Gtmp = 0x80 | (uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Rmask);
while(!(SPSR & (1<<SPIF)));
SPDR = Rtmp;
Rtmp = 0x80 | ((uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Gmask) << 2);
while(!(SPSR & (1<<SPIF)));
SPDR = Btmp;
Btmp = 0x80 | ((uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Bmask) << 4);
while(!(SPSR & (1<<SPIF)));
...

Specifically, I'm interested in learning how the colors are figured out. I don't understand the masking part (R = 0x60, G = 0x18, B = 0x06), or how, later in the for loop, each color is set using the data from image1[] and the mask. I also don't understand what the data in the image1[] array represents; I'm assuming it's a color value.

Can someone explain this in layman's terms to me please?

This is an instance where I can see the value of binary notation (and would use it for my own sanity). I would have written the mask values thusly:

#define Rmask   0b01100000
#define Gmask   0b00011000
#define Bmask   0b00000110

This way it's obvious, when scanning through the code, which bits the masks will pass if used with a bitwise AND (&), and/or which bits will be set if used with a bitwise OR (|).

I may or may not have encoded the images in binary instead of hex (as shown). If in binary, I would probably have put each value on a separate line so I could easily see the patterns. So for the bit you quoted, I would have had:

prog_uchar image1[] PROGMEM = {
  0b01110100,
  0b01010100,
  0b00101100,
  0b01011110,
  0b00101010,
  0b00000000,
  0b00101010,
  0b01011110,
  0b01011110,
  0b01011110,
  0b01111110,
  0b00101010,
  0b00000010,
  0b00101010 ...};

But, as you can see, this gets really long and unwieldy, so the hex representation might be better after all...

So basically, the masking works by letting through only the bits where the mask value has a 1; everything else is cleared by the bitwise AND. Let's take the first value of the image array, 0x74, which is 01110100 in binary. ANDing this value with our three masks gives the following long-form equations:

   01110100 (0x74)
 & 01100000 (Rmask, 0x60)
 ----------
   01100000 (0x60)

   01110100 (0x74)
 & 00011000 (Gmask, 0x18)
 ----------
   00010000 (0x10)

   01110100 (0x74)
 & 00000110 (Bmask, 0x06)
 ----------
   00000100 (0x04)

It looks like each color value has 4 levels of intensity (00, 01, 10, and 11).
*That would make a color value 3 digits long in base 4, so a total of 4^3 or 64 distinct colors.*
The format of the image array is 0b0rrggbb0 (note the padding zeros in the MSb and LSb positions), where rr, gg, and bb are the two bits for each color. After masking, shifting, and setting the MSb to one, the processing routine you quoted essentially reduces down to Gtmp = 0b1rr00000;, Rtmp = 0b1gg00000;, and Btmp = 0b1bb00000;.

I'm not sure why Gtmp uses Rmask and Rtmp uses Gmask. I'm also not sure why the code is setting SPDR to the previous RGB values before reading and calculating the new RGB values from the image array. Too much missing code to answer those questions. But, those don't seem to be the questions you are asking us. :wink:

Yeah, I don't know why R and G are reversed like that either. This is what the full for loop looks like:

            uint16_t i,j,k;
            uint8_t Gtmp, Rtmp, Btmp;
            #ifdef PAT1
            for (k=0; k < REPEAT1; k++){
                for (j=0; j < (sizeof(image1) / (sizeof(prog_uchar) * TLEDS)); j++) {
                    Gtmp = 0;
                    Rtmp = 0;
                    Btmp = 0;
                    for (i=0; i <= TLEDS; i++) {
                        SPDR = Gtmp;
                        Gtmp = 0x80 | (uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Rmask);
                        while(!(SPSR & (1<<SPIF)));
                        SPDR = Rtmp;
                        Rtmp = 0x80 | ((uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Gmask) << 2);
                        while(!(SPSR & (1<<SPIF)));
                        SPDR = Btmp;
                        Btmp = 0x80 | ((uint32_t)((pgm_read_byte_near(&image1[i+j*TLEDS])) & Bmask) << 4);
                        while(!(SPSR & (1<<SPIF)));
                    }
                    lockLatch();
                    _delay_us(ST1);
                }
                _delay_ms(D1);
            }
            #endif

Then the same loop repeats for PAT2/image2[], PAT3/image3[], etc., etc.

lockLatch() looks like:

void lockLatch(void) {
    uint8_t l = 3;
    while(l--) {
        SPDR = 0;
        while (!(SPSR & (1<<SPIF)))
        ;
    }
}

Make better sense now? Still doesn't explain the R and G swap though ...

Sembazuru:
I'm also not sure why the code is setting SPDR to the previous RGB values before reading and calculating the new RGB values from the image array.

Apparently, it's because of how the SPI is handled. From the AT90USB1286 datasheet:

When configured as a Master, the SPI interface has no automatic control of the SS line. This
must be handled by user software before communication can start. When this is done, writing a
byte to the SPI Data Register starts the SPI clock generator, and the hardware shifts the eight
bits into the Slave. After shifting one byte, the SPI clock generator stops, setting the end of
Transmission Flag (SPIF). If the SPI Interrupt Enable bit (SPIE) in the SPCR Register is set, an
interrupt is requested. The Master may continue to shift the next byte by writing it into SPDR, or
signal the end of packet by pulling high the Slave Select, SS line. The last incoming byte will be
kept in the Buffer Register for later use.

The datasheet's code example is:

void SPI_MasterTransmit(char cData)
{
/* Start transmission */
SPDR = cData;
/* Wait for transmission complete */
while(!(SPSR & (1<<SPIF)))
;
}

That's exactly what the code is doing by setting SPDR to the previous value each time ... it's sending it out. At least, that's how I understand it.

Yeah, I figured setting SPDR was for SPI communication. (The HC908s that I've worked on did it the same way.)

What I was remarking on is that on each loop iteration they send out the byte that was calculated in the previous iteration, and then calculate the value to send in the next iteration. Seeing the full loop, it looks like they are using this to enforce sending 0x00 first. But the last value that is calculated comes from whatever happens to sit after the array in FLASH memory, because the inner loop runs i <= TLEDS, one index past the last row. (That value is thrown away, but it still doesn't seem a wise way to do things...)

Yeah, I don't know. But, I rewrote parts of the code to use the FastSPI library instead. Did a test tonight with a 32 pixel string (instead of a 48 which I'll have to make this weekend). Attached is the result. The missing 16 pixels account for the upper part of the image that's missing. It's stationary right now with me swinging my camera to capture it.

KirAsh4:
Yeah, I don't know. But, I rewrote parts of the code to use the FastSPI library instead. Did a test tonight with a 32 pixel string (instead of a 48 which I'll have to make this weekend). Attached is the result. The missing 16 pixels account for the upper part of the image that's missing. It's stationary right now with me swinging my camera to capture it.

Would you be willing to share your finished code? :roll_eyes: