Binary shift of 32-bit number

I’m working with an LED matrix display, trying to encode an animation as an array of 32-bit values.
Something strange is happening, though, when I try to shift bits.

const unsigned long PROGMEM bitmap=0x280A00;

void setup() {
   Serial.begin (115200);
   unsigned long rowDots = pgm_read_dword_near(&bitmap);
   for (byte col=0; col<32; col++)
    {
      if (rowDots & (1<<(31-col))) 
         Serial.print (1);
       else
         Serial.print (0);
    }
}

0x280A00 should look like this in binary:
00000000001010000000101000000000

Instead I get:
00000000000000001000101000000000

I’m not sure I get the syntax of rowDots & (1<<(31-col)). I know it’s supposed to shift rowDots left by the specified number of bits…

if (rowDots & (1ul<<(31-col)))

On an AVR-based Arduino, int is 16 bits, so a bare 1 is a 16-bit value. Shifting it left by 15 or more places overflows (formally it's undefined behaviour), so the high bits of your mask are simply lost. 1ul is a 32-bit value: the ul suffix tells the compiler to treat the literal as an unsigned long.

That did it!!! Thank you so much!

Something a bit more adaptive -

#include <limits.h>		// CHAR_BIT

const unsigned long PROGMEM bitmap = 0x280A00UL;

void setup()
{
    Serial.begin (115200);

    const unsigned long row_data = pgm_read_dword_near(&bitmap);

    for ( unsigned long bit = 1UL << ((sizeof(row_data) * CHAR_BIT) - 1); bit; bit >>= 1 )
    {
        Serial.print((row_data & bit) ? "1" : "0");
    }
}
}

Wow! Slick! :) Do you by any chance know of any utility that can convert pixels drawn in an Excel spreadsheet into a binary array? :)

What's an "excel spreadsheet"?

Just kidding, and the answer is no!