For the last couple of days I've been mulling over trying to get color video from an arduino onto a TV. It started when I revisited a Namco TV PlugNPlay game with Pacman, DigDug, etc. I was wondering if I could use an arduino to hijack its graphics chip. Anyways, I opened it up and found a bunch of surface mount components. Lol, not surprising. I guess I could still hijack the controls.
Anyways, I was looking into various options like the TVOut library (now hosted on the Google Code Archive). It's black and white and low resolution.
In fact, until yesterday I had little understanding of analog NTSC signals. From TVOut I learned that the sync tip sits at 0V, black is some fraction of the way up (around 0.3V), and white is 1V.
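Just to make that concrete for myself, here's a rough sketch of how a TVOut-style output stage hits those three levels with two digital pins and two resistors summing into the TV's 75 ohm input. The pin numbers and resistor values are my assumptions (roughly what the TVOut schematic uses), and real code would hit the port registers directly instead of calling digitalWrite() for speed.

const int SYNC_PIN  = 9;   // through ~1k ohm to the composite line
const int VIDEO_PIN = 7;   // through ~470 ohm to the composite line

enum Level { LEVEL_SYNC, LEVEL_BLACK, LEVEL_WHITE };

void setLevel(Level l) {
  switch (l) {
    case LEVEL_SYNC:   // both pins low      -> ~0 V   (sync tip)
      digitalWrite(SYNC_PIN, LOW);  digitalWrite(VIDEO_PIN, LOW);  break;
    case LEVEL_BLACK:  // sync pin high only -> ~0.3 V (blanking/black)
      digitalWrite(SYNC_PIN, HIGH); digitalWrite(VIDEO_PIN, LOW);  break;
    case LEVEL_WHITE:  // both pins high     -> ~1 V   (peak white)
      digitalWrite(SYNC_PIN, HIGH); digitalWrite(VIDEO_PIN, HIGH); break;
  }
}

void setup() {
  pinMode(SYNC_PIN, OUTPUT);
  pinMode(VIDEO_PIN, OUTPUT);
  setLevel(LEVEL_BLACK);   // idle at black level
}

void loop() {}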
Another thing I looked into was an antiquated monitor that has inputs for H and V sync plus separate R, G, and B. That's RGB video with separate syncs (not quite the same thing as YPbPr component video, though the two often get lumped together).
Then I stumbled upon the Uzebox. It has both S-Video and composite video. The Uzebox uses an ATMega644 overclocked a bit, plus another chip, the AD725, to do the NTSC encoding: http://www.analog.com/en/digital-to-analog-converters/video-encoders/ad725/products/product.html
Looking at the block diagram, it takes RGB (plus a sync signal) in and outputs both S-Video (Y/C) and composite video. After much research, much of which was on Wikipedia: composite video is like S-Video except the luma and chroma are mixed onto a single signal (with the chroma riding on a subcarrier, of course).
So, as far as I can tell, it's like having TVOut drive the Y part of S-Video and writing a new piece that does color (with some color burst timing that I'll mention after this next bit).
Although, there's a bit I don't quite understand yet.
Here's the page I'll be quoting from: NTSC - Wikipedia
"In NTSC, chrominance is encoded using two 3.579545 MHz signals that are 90 degrees out of phase, known as I (in-phase) and Q (quadrature) QAM. These two signals are each amplitude modulated and then added together. The carrier is suppressed. Mathematically, the result can be viewed as a single sine wave with varying phase relative to a reference and varying amplitude. The phase represents the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation."
Notice that last sentence. Anyways, that sounds like HSV colorspace: HSL and HSV - Wikipedia
Does this mean I can generate one sine wave (with phase and amplitude modulation) that represents hue and saturation? If so, I could cut RGB out of the graphics entirely and make them natively HSV. It would be a bit different from most video chips, but it could provide interesting advantages. If you wanted to implement Pacman, you could have one ghost bitmap with S and V; H would then depend on the ghost. For a basic bitmapped sprite you'd still only need to store 3 values per pixel. Of course, any sort of color (or even grayscale) throws a framebuffer out the window on our ATMega chips. Tile and sprite graphics to the max! One thing I was curious about was whether you could do a square wave instead of a sine for the color modulation, though that probably doesn't work, since the square only matches the sine at its peaks (and adds a bunch of odd harmonics).
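If I'm reading that right, the equivalence is easy to sanity-check numerically. Here's a tiny plain-C sketch (the I and Q values are made up, just for the check) showing that summing the two amplitude-modulated quadrature carriers gives the same waveform as one sine whose amplitude is sqrt(I^2 + Q^2) (the saturation-ish part) and whose phase is atan2(I, Q) (the hue-ish part):

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define FSC 3579545.0                      /* NTSC color subcarrier, Hz */

int main(void) {
    double I = 0.30, Q = -0.20;            /* made-up chroma sample */
    double amp   = sqrt(I * I + Q * Q);    /* acts like saturation  */
    double phase = atan2(I, Q);            /* acts like hue         */

    for (int n = 0; n < 8; n++) {          /* a few points across one cycle */
        double wt = 2.0 * M_PI * FSC * (n / (8.0 * FSC));
        double qam    = I * cos(wt) + Q * sin(wt);  /* two-carrier form */
        double single = amp * sin(wt + phase);      /* one-sine form    */
        printf("%d  qam=%+.6f  single=%+.6f\n", n, qam, single);
    }
    return 0;
}

Both columns print the same numbers, which is what that Wikipedia paragraph is saying: the phase carries hue and the amplitude carries saturation.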
I think what's left is color burst, syncs, and interlacing. Uzebox has some helpful info on that:
http://belogic.com/uzebox/video_primer.htm
http://belogic.com/uzebox/hardware.htm
Also this: NTSC - Wikipedia
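For my own reference, here are the ballpark line-timing numbers as I understand them from those pages, written out as constants. They're approximate, and the exact burst placement varies a little from source to source:

#define NTSC_LINE_US         63.556   /* one scanline, microseconds      */
#define NTSC_FRONT_PORCH_US   1.5     /* end of video -> hsync start     */
#define NTSC_HSYNC_US         4.7     /* hsync pulse, signal at 0 V      */
#define NTSC_BACK_PORCH_US    4.7     /* hsync end -> active video       */
#define NTSC_BURST_CYCLES     9       /* ~9 cycles of 3.579545 MHz burst */
                                      /* riding on the back porch        */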
The interlacing means there are two fields per frame, with frames at about 30Hz (so fields at about 60Hz). That's like having two half-vertical-resolution frames at a rate of 60Hz. Something I'd like to experiment with is displaying 640x480@30 and also 320(or even 640 or 720 or 256 or whatever)x240@60. Although, if I understand correctly, the two fields will blend together because of how the interlacing was designed. This could be useful for "transparency" effects that flicker a sprite off and on between fields.
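A rough sketch of that flicker-transparency idea, assuming some hypothetical per-field render loop (drawSprite() and renderField() are placeholders I made up, not anything that exists yet):

#include <stdint.h>

static uint8_t field = 0;              /* 0 = even field, 1 = odd field */

static void drawSprite(int id, int x, int y) {
    /* stub: real code would blit the sprite into the scanline data */
    (void)id; (void)x; (void)y;
}

void renderField(void) {
    drawSprite(0, 40, 60);             /* opaque sprite: drawn every field */

    if (field == 0) {
        /* "transparent" sprite: drawn only on even fields, so at ~60
           fields per second it flickers fast enough to read as
           semi-transparent on a CRT */
        drawSprite(1, 80, 100);
    }

    field ^= 1;                        /* next field */
}

int main(void) {
    for (int i = 0; i < 4; i++) renderField();   /* simulate a few fields */
    return 0;
}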
What I'm worried about is generating the sine wave for the color. The Uzebox page (Uzebox - The ATMega Game Console) says:
"The AD725 requires two more input to perform its job. One is the composite sync signal generated by the MCU. The other one is a clock signal at 14.31818Mhz. This clock happens to be four times the frequency of the color burst and is absolutely required in order to generate color. That clock's frequency also happens to be exactly half that of the MCU. This is no coincidence. The MCU has a timer set to toggle an output pin at half its main clock."
This is why the ATMega on the Uzebox is overclocked to 28 and some MHz. I don't think our Arduino, clocked at 16MHz, can generate a clear sine wave at 3.579545MHz. 16/3.579545 = 4.469841837, so we'd only get about four samples per subcarrier cycle (and not even at consistent points, since the ratio isn't an integer). That means the best resolution we could get at that frequency is about 3 values: -1, 0, and 1 (on the sine). That's like two square waves, with one following sine and the other following cosine. So at this point I think we need some sort of (voltage controlled?) external oscillator to do the color. And then we need some way to control the amplitude of that sine; for that we might need a transistor. So far I count 3 DACs: one for the black/white value, one for the frequency (or phase) modulation of the sine, and one for the amplitude modulation. Plus an extra pin for the DC offset for sync stuff on the Y part of S-Video, just like TVOut does.
So, that's a bit more hardware than I expected coming in. But it does remove the need for the surface mount AD725 NTSC encoder.
As I stated, until yesterday I knew none of this. I hope I got it right, though.
EDIT: I just thought of something. Maybe a filter could be used to turn a 3.579545MHz square wave into a comparable sine wave?
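Following up on that thought: a square wave at 3.579545MHz already contains a sine at exactly that frequency as its fundamental (plus odd harmonics at 3x, 5x, ...), so a filter that kills everything from about 10.7MHz up should leave something close to a sine. Here's a quick plain-C check of what a simple LC tank tuned to the subcarrier would need; the 100pF starting value is just an arbitrary pick of mine:

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    double f = 3579545.0;            /* target resonant frequency, Hz */
    double C = 100e-12;              /* assume a 100 pF capacitor     */

    /* resonance: f = 1 / (2*pi*sqrt(L*C))  ->  L = 1 / ((2*pi*f)^2 * C) */
    double L = 1.0 / (pow(2.0 * M_PI * f, 2.0) * C);

    printf("With C = %.0f pF, L = %.1f uH\n", C * 1e12, L * 1e6);
    /* prints roughly: With C = 100 pF, L = 19.8 uH */
    return 0;
}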