I'm curious what your thoughts are on my thinking process for a future project.
While brainstorming a future LED display project from a non-electronics background, I've realized how much I take animated GIFs for granted. If I were to store bytes in an array representing a 2x2 RGB LED matrix (12 LEDs total) with one dimming byte for each LED, it might look something like this in Arduino-style C++ (pardon any programming errors):
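    // One frame for a 2x2 RGB matrix: 4 pixels x 3 color channels = 12 bytes.
    // Each byte is a dimming level from 0 (off) to 255 (full brightness).
    uint8_t frame[12] = {
      255,   0,   0,   // pixel (0,0): red
        0, 255,   0,   // pixel (0,1): green
        0,   0, 255,   // pixel (1,0): blue
      128, 128, 128    // pixel (1,1): half-brightness white
    };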
That alone is 12 bytes, correct? So if I wanted to create one RGB "frame" with dimming on a 32x16 RGB matrix (1,536 LEDs total, my ultimate goal), that would be 1,536 bytes for that one array. Am I off on this? Six "frames" for a cool animation would take up 9,216 bytes of memory. If I were a microcontroller, I'd be upset right now. What if I want 30 frames? That's 46,080 bytes. Crazy!
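Writing that math out the way I'm computing it (assuming 3 bytes per pixel and no extra array overhead):

    // 32x16 pixels, 3 single-color LEDs (so 3 dimming bytes) per pixel
    const uint16_t BYTES_PER_FRAME = 32 * 16 * 3;             // 1,536 bytes
    const uint32_t SIX_FRAMES      = 6UL  * BYTES_PER_FRAME;  // 9,216 bytes
    const uint32_t THIRTY_FRAMES   = 30UL * BYTES_PER_FRAME;  // 46,080 bytes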
With that said, I STILL want to make it happen someday. My initial thought is to have one "master" microcontroller load the animation from somewhere (the internet or an SD card), then assign small fragments of it to a series of "child" microcontrollers. Afterwards, the master would orchestrate the animation by telling all the children to project the next frame in the sequence, and each child would drive its assigned LED drivers to make it happen. I would guess that with a microcontroller for every 64 LEDs (24 child microcontrollers), I could buffer 140 dimmed RGB frames at a time. Is that overkill? Here's roughly what I'm picturing for the master, sketched below.
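I haven't picked any parts yet, but this is a rough sketch of what I imagine the master doing, assuming I2C via the Arduino Wire library. The child addresses, the two command bytes, and the animation layout are all made up for illustration:

    #include <Wire.h>

    const uint8_t NUM_CHILDREN    = 24;    // one child per 64 LEDs of 1,536
    const uint8_t BYTES_PER_CHILD = 64;    // 64 LEDs x 1 dimming byte, per frame
    const uint8_t NUM_FRAMES      = 30;
    const uint8_t FIRST_ADDR      = 0x10;  // children assumed at I2C 0x10..0x27
    const uint8_t CMD_LOAD        = 0x01;  // made-up: "store chunk at (frame, offset)"
    const uint8_t CMD_NEXT        = 0x02;  // made-up: "display your next stored frame"

    // The whole animation (30 x 1,536 = 46,080 bytes). Too big for a small AVR's
    // RAM, so in practice the master would stream this from the SD card instead.
    uint8_t animation[NUM_FRAMES][NUM_CHILDREN][BYTES_PER_CHILD];

    void loadChildren() {
      // Phase 1: hand each child its 64-byte slice of every frame.
      for (uint8_t f = 0; f < NUM_FRAMES; f++) {
        for (uint8_t c = 0; c < NUM_CHILDREN; c++) {
          // The AVR Wire buffer is only 32 bytes, so ship 16 data bytes per message.
          for (uint8_t off = 0; off < BYTES_PER_CHILD; off += 16) {
            Wire.beginTransmission(FIRST_ADDR + c);
            Wire.write(CMD_LOAD);
            Wire.write(f);    // which frame this chunk belongs to
            Wire.write(off);  // where the chunk goes within that frame
            Wire.write(&animation[f][c][off], 16);
            Wire.endTransmission();
          }
        }
      }
    }

    void showNextFrame() {
      // Phase 2: tell every child to flip to its next stored frame, in lockstep.
      for (uint8_t c = 0; c < NUM_CHILDREN; c++) {
        Wire.beginTransmission(FIRST_ADDR + c);
        Wire.write(CMD_NEXT);
        Wire.endTransmission();
      }
    }

    void setup() {
      Wire.begin();     // act as the I2C master
      loadChildren();   // distribute the animation fragments up front
    }

    void loop() {
      showNextFrame();  // orchestrate playback
      delay(33);        // ~30 fps target; 24 short "next" messages are quick
    }

(By my math, 140 frames x 64 bytes is about 9 KB per child, so buffering that many would need children with more RAM than a basic ATmega328's 2 KB; even 30 frames x 64 bytes = 1,920 bytes is a tight fit.)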
What are your thoughts on this? Keep in mind that I'm not too familiar with the electronics side of things, but I have plenty of enthusiasm. :)
Thank you much