Video Experimenter shield

I'm very happy to announce a new Arduino shield called the Video Experimenter:

Some things you can do with the Video Experimenter:

+ Overlay text and graphics onto a video signal from a camera, DVR, DVD player, VCR, or any other source of composite video.
+ Capture low-res video image frames for display or video processing. Give your Arduino the gift of sight!
+ Perform computer vision projects with object tracking, edge detection, etc.
+ Decode closed captioning or XDS (extended data services) data embedded in television broadcasts.
+ Works with NTSC (North America) or PAL (rest of the world) television standards.

All this for a much lower price than you might expect. The capabilities are documented in detail with source code, etc. There's a video on the product page that shows an overview of the capabilities, too. Check out the product details and video here:

Nice job.

Don't forget to add this to


How much CPU time is required to keep solid video?

Bit-banging NTSC seems kinda resource intensive...


Yes, the ISR to write pixels to the screen takes quite a bit of time, but the TVout library has been successfully used to make fast-action video games with the remaining processor time. I don't know how to quantify the time taken by TVout, but in developing games for Hackvision (http://nootropicdesign.com/hackvision/) I found that I had to introduce artificial delays to prevent games from running too fast. So, the question is: what kinds of Arduino projects really require serious CPU time? I guess I've never found myself wishing the ATmega328 was faster. Other constraints usually get in the way (communication latency, human interaction, etc.). Other thoughts?

Humans are notoriously slow. Does anyone else remember the old “disable turbo” button on PCs? Games written for “normal” (4MHz?) PCs were im-freakin’-possible to play at “turbo” (8MHz?) speed.

There are operations that take considerable time - e.g. GPS reading and parsing (especially if bit-banged serial is used), reading I2C sensors/ADCs, etc. My use of “CPU” time may have been a bit of a misnomer, but if you’re bit-banging anything that is CPU time.

Maybe a more quantifiable scenario would be: Say I read a minimum of 2 sentences from an NMEA GPS running at 4800 baud on a NewSoftSerial port, sample at least 4 I2C sensors at a 1Hz rate, wrote all this data out to a hardware serial port, and tried to keep the screen updated at 1Hz. How many I/O errors would I see? How would the video look?

I’m just trying to get a feel for the capabilities of bit-banged NTSC, without buying hardware and setting up experiments myself. I’m not that lazy, I just don’t have time…


Yes, you'd run into some problems like jittery video if you were trying to do all those things while rendering video. TVout comes with a serial interface that is implemented by polling instead of being interrupt-driven. It allows serial communication to occur at the beginning of each scanline, and the bit rate is up to 57600 I think. I2C communication when using TVout is tricky because it's interrupt-driven, and interrupts interfere with the precise timing required to render video. One approach I've taken is to only perform I2C communication during the vertical blanking interval at the end of every frame, 60 times a second. This rate would be too slow for some applications. I've used this approach for communicating with a Wii nunchuk. Also, I understand there's a polling implementation of I2C, but I haven't used it.

I haven't got one of your shields, but as you know I'm a huge fan of the TvOut lib and Arduino in general.. and remember way too many of the old "Compute!" and "Antic" and "Circuit Cellar" articles. For a tinkerer on a shoestring budget, Arduino is a dream.

My current video-based project you may like: "light pen" or "light gun" routines. It's all a matter of refresh rates, making the screen visible to a phototransistor without making it visible to the human eye. If you remember "Duck Hunt", a simple point-and-click interface (I like to point out that Smith and Wesson may actually have invented "point and click" interfacing, but I digress), you'll see the idea is workable from all standpoints. I've had moderate success so far; it's a matter of tweaking code and making some ghetto optics now.

I'm also working on a video capture routine, at least at the concept stage. Though yours is probably superior, my goal is to skip the sync chip and simply use the 0V-to-colorburst voltage rise to trigger an interrupt (if that will work with TVout), or just do it all in software. Memory buffering is going to be the issue, I think, but your success makes me think it should be possible. I just dig doing things with as few external components as possible; it makes it easier for others to copy your efforts.

Awesome toy, I may have to get one soon..

I have just come across this new shield.

I see it can do object tracking. What is the positioning accuracy? When I feed it a PAL signal from a small camera, what's the tracking resolution of a small white point on a black background (i.e. a star seen by the camera through a telescope)?

It can only be as accurate as the resolution of the display, which is low. Typically 128x96 or 136x96. Not sure if that's good enough for tracking stars. And the tracking is shifted to the right a bit since it takes time to process the image.

Hmmm I don't think that resolution is good enough...

Do you think it would be possible to increase the resolution by using the shield on a Netduino Plus or FEZ Domino board? I guess it would just be a matter of writing a library?

I have no experience with these boards nor with the TVout library yet. I just came across this shield in a Cool Components newsletter and was wondering if it was possible to use it for a stand-alone telescope autoguider.

Those microcontrollers have more memory, but it's not as simple as writing a new library. The Video Experimenter uses features of the ATmega328 that are wired to particular pins. For example, Arduino digital pin 8 is the input capture pin on the ATmega328, and digital pin 2 is an external interrupt pin that the VE uses. These chip-specific features are probably available on the other microcontrollers, but not assigned to the same pins.

Also, the video generation and capture routines are written in AVR assembly to get the required perfect timing for video. Is the Microsoft-based toolchain going to let you write in AVR assembly? The more you abstract away the details of how things work (the way Microsoft does) the less power you have to do really cool low-level stuff (like capture images in a $3 chip). Sorry, but I have a bias toward staying close to the hardware and really learning code/electronics versus introducing a vendor-specific framework into the mix. You'll learn far more with the gcc toolchain that Arduino is based upon. Just my opinion based on 20 years of software design. :D

Thanks for that. I totally agree with you. I love the Arduino for its low level "hacking"....

It just seems to be underpowered for this particular idea that I had in mind... ;)

Your idea of guiding a telescope is really cool. Have you considered feeding the video images somehow to a computer and processing the image with OpenCV code? That would give you much more power. The computer could then talk to an Arduino (for example with Firmata) to guide the telescope with servos.

That solution already exists. ;) You can use a webcam or dedicated astro guiding cam and feed the signal into PHDguide for example, which in turn moves the telescope to keep the star on the same pixels.

I was thinking of a stand-alone autoguider without the need for a computer.

have you done anything similar with sound? sound recording seems to be hard/difficult/impossible on a 'duino

but then so was video...

Personally I don't know much about audio processing yet, but I found this very intriguing:

nootropic: Personally I don't know much about audio processing yet, but I found this very intriguing:

thanks for the link!