Video Experimenter object tracking

I have recently purchased the Video Experimenter shield ("Video Experimenter: Arduino shield that lets you do all kinds of experiments with video").

I can't find anything on the internet where people actually track objects with it, other than finding the brightest or darkest spots. I need to be able to track a tennis ball. I have played with the image capture code, but I have not done any video processing before. Can anyone offer advice, code, or experience, please?

I can't find anything on the internet where people actually track objects with it, other than finding the brightest or darkest spots. I need to be able to track a tennis ball.

Paint your tennis ball white or black, and ensure it is the brightest/darkest thing the camera can see.

I wouldn't really recommend the Arduino as an image processor, particularly for someone with no experience of image processing.
Image processing usually involves large data sets processed very rapidly, and the AVR is not well suited to either.

Get some experience on a more capable platform and find out what you need and what you can do without.

Just my two cents.

I could swear this shows how to do it (for simple "objects", with "proper" setup):

http://nootropicdesign.com/projectlab/2011/03/20/arduino-computer-vision/

Basically AWOL is right: you just need to make sure the ball is the brightest thing in the scene. That would likely mean either painting the ball (as suggested) or illuminating it, and tracking its motion against a dark background. Unfortunately, I doubt the Arduino/VE combo will be fast enough to track a tennis ball at normal speeds. You didn't say how fast the ball will be moving, nor the environment it will be in; but if you are trying to track a tennis ball being batted back and forth on a court by players in varying lighting conditions, then an Arduino/VE is not going to do it (it won't have the speed, frame rate, or resolution needed, among other things).

If your "hardware + software" infrastructure can provide X, Y (and in some cases Z) coordinates of the object, then tracking is a very simple operation:
Direction = New(x, y, z) - Old(x, y, z);
Speed = (New(x, y, z) - Old(x, y, z)) / Time;
Acceleration = (New(speed) - Old(speed)) / Time; etc.
Keeping a buffer of coordinate samples lets you reconstruct a complex trajectory.

You can download code that shows the idea here:
http://fftarduino.blogspot.com/2011/12/arduino-laser-3d-tracking-range-finder.html

Sorry, I didn't thoroughly explain my intent. I have some image processing experience with RoboRealm (I have a licensed copy), and I understand how the image capture and edge detection samples for the shield work. I know the ATmega328 isn't the ideal processor for this type of thing, but the point of this project is to make it cheap.

The robot would sit on the side of the court, about 2 feet from and parallel to the net, pointed towards the other doubles lane. The robot needs to determine where the ball is on demand (at the push of a button), knowing the ball is near the net and at a standstill. All I need is an algorithm to detect a circular shape and eliminate the rest of the area, a sort of blob detection; then I would use the sample code that finds the brightest spot in the picture, and the rest I can do.

The bright yellow ball seams easily distinguished from a fairly light background.

EDIT: Also, I may be a high schooler, but I'm no noob at this. I have done GPS navigation, indoor navigation, FIRST Robotics, Ethernet robotics, swarm systems, servers, etc...

Identifying a round (or any other) shape that is interlaced or "connected" with the background, as shown in the posted image, would require a pattern recognition algorithm; cross-correlation is what I'd think of, and there is no time to do it. All you have is about 1.2 milliseconds, during the 20 lines at the beginning of a new frame, and that makes the task impossible, especially since the object's size varies with its distance from the camera.

Edited:

A better way is a comparison between a pre-stored image without the ball (desk & net) and an image with it; the algorithm would then output the X, Y coordinates where "activity", i.e. movement, has been detected. But even with this simplified, no-shape-tracking task, the math would take too much time. Assuming one compare or subtract is a 2-cycle operation, a 16 MHz CPU can do 8 ops/µs, or 8,000 ops/ms, while a 300x300 image has 90,000 pixels.

The best chance of fulfilling the mission is to make the ball the only object in the video frame; I mean, create such a sharp contrast that only the image of the ball gets into the video buffer. A black background, or a colour filter (yellow) in front of the camera, would definitely help increase the contrast.

Thank you. The image size is only 128x96, or 12,288 pixels, and it does not matter how long the operation takes. I was thinking the robot could take one snapshot, process it to determine position, move, take another shot, move forward again, etc. I have thought of using a yellow filter and will do so, but I know the white lines on the court will cause a problem. Anyone have any raw circle detection code lying around? That's all I really want.

I think you're being ambitious trying to detect the seams (strictly speaking, it's a single seam) on a tennis ball; I wouldn't have thought you'd have sufficient contrast.
Are you trying to do this to tell which way the ball is lying, or something?

I don't know how you organize your video buffer array; 128 x 96 as 12K pixels doesn't fit in Arduino memory unless you do bit-stuffing, keeping 1 bit per pixel in memory: 1536 bytes.
Anyway, this is how I'd calculate whether the shape is round. If you noticed, I chose a 1-D video array in my project to overcome the memory limits without sacrificing resolution (832 x 512). First, I'd run through the array in a "for" loop to find the upper and lower lines of the detected object. (The power-measuring subroutine does this in the linked code.) That provides X1:Y1 and X11:Y11 (look at the pictures below).
Radius = (Y11 - Y1) / 2;
Yc = Y1 + Radius;
Then, for any point,
Rn = sqrt((Xn - X1)^2 + (Yn - Yc)^2);
All Rn must be equal, within a specified accuracy. By simply running this check, the algorithm can decide whether the shape is round or not. Some kind of optimization, filtering or noise reduction, could also be implemented. I think the Arduino is reasonably good at sqrt, about 50-100 per millisecond, but the math should still be optimized for 8-bit. One more thing: a correction coefficient has to be precalculated, defining the number of pixels per line (Y is expressed as line numbers, but X is in 16 MHz "ticks"). For example, for a camera with a 4:3 field of view, (832 / 4) x 3 = 624, which isn't 512 (or 492?).

@AWOL I never said I was trying to detect the seams on the ball, I was saying the lines on the COURT would probably cause some issues.

@Magician That is the resolution used in the library included with the Video Experimenter shield; every pixel is only 1 bit, black or white. Plenty of res to establish a shape. Thank you for the math; I will try this out and post the results.

I never said I was trying to detect the seams on the ball,

The bright yellow ball seams easily distinguished from a fairly light background.

What's that then?

Okay I have the ball finding part figured out now.

Now I can't seem to get a valid PWM signal from the pins that aren't being used by the shield. I need them to control the motor controller. The Sweep servo example works fine with the same hardware setup, and I have tried servos in place of the motor controller as well. I know the problem has to do with interrupts in the TVout library, but I do not know how to work around this... Any ideas/experiences?

There is a library, ServoTimer2.
http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1230479947

That library does not seem to work anymore (it's from 2008). I also tried the Software Servo library, and that doesn't compile either...

EDIT: I have also tried the Simultaneous Servo library, which uses interrupts to control servos; it does the same thing as the vanilla library... All I need to do is send a valid PWM signal from this thing. Does anyone know how? :{