remotely controlled/autonomous turret

I want to build a remotely controlled/autonomous turret. I know how to build a remotely controlled turret - that seems simple: one double H-bridge IC (like the L298 or L293D), two geared DC motors, code that receives data sent over serial, and a simple application written in Processing for mouse control.

The tricky part is how to turn a remotely controlled turret into an autonomous one. It seems that everyone has used RC servos plus a stationary mounted camera. This is cool and all, but the method is limited as far as accuracy is concerned (think how RC servos are limited in range and accuracy and how that affects the whole system). In other words, it's OK if all you want is something that can hit a human-sized target at, let's say, 30 feet max, but it isn't good enough for me. What's more, most people use motion recognition algorithms based on comparing individual frames to sense motion.

So my idea is to use the same setup that I want to use on a remotely controlled turret (geared DC motors) and this method of tracking >>

...but rather than face tracking I want to set it up so that it tracks human-shaped targets.

Basically, what I am saying is that:
A) I have a turret with geared DC motors
B) The camera is mounted on the gun, so when the turret moves, so does the camera
C) Speed and direction of the turret are determined by the position of the sensed target with respect to the centre of the camera view. In other words, if the centre of the target is 200 pixels to the left and 150 up, the Processing program sends data over serial to the Arduino to move both motors in the right direction. After a while the target is closer to the centre of the camera view, so the speed of the motors is reduced.
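
Roughly what I have in mind on the Arduino end (pin numbers, the "x,y" serial format and the gain are just placeholders to show the idea):

```cpp
// Hypothetical Arduino-side sketch: turn the pixel error sent by Processing
// into proportional motor speeds on an L298-style dual H-bridge.
// Pin numbers, the "x,y\n" serial format and the gain are all made up.

const int PAN_PWM = 5, PAN_DIR = 4;    // pan motor: PWM + direction pin
const int TILT_PWM = 6, TILT_DIR = 7;  // tilt motor: PWM + direction pin
const float KP = 0.4;                  // pixels -> PWM gain (tune on the real rig)

void driveMotor(int pwmPin, int dirPin, float errorPixels) {
  int speed = constrain(abs(errorPixels) * KP, 0, 255);  // bigger error -> faster
  digitalWrite(dirPin, errorPixels > 0 ? HIGH : LOW);    // swap HIGH/LOW if it moves the wrong way
  analogWrite(pwmPin, speed);
}

void setup() {
  Serial.begin(115200);
  pinMode(PAN_DIR, OUTPUT);
  pinMode(TILT_DIR, OUTPUT);
}

void loop() {
  if (Serial.available()) {
    // expected: one line per frame, "errX,errY" in pixels from the image centre
    int errX = Serial.parseInt();
    int errY = Serial.parseInt();
    if (Serial.read() == '\n') {
      driveMotor(PAN_PWM, PAN_DIR, errX);
      driveMotor(TILT_PWM, TILT_DIR, errY);
    }
  }
}
```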

What do you guys think? Is it feasible? Do you think it will work?
I do know I would need some way to sense whether the target moves with respect to its surroundings, but I guess some sort of background tracking should be able to accomplish that.

Oh, BTW, I deleted my previous topic on this as I think this has more to do with feasibility than with motors and mechanics alone.

The mechanical side seems easy to do, but much harder to do well. I suspect the hard part will be the image processing to decide where your targets are. People do have autonomous paintball turrets, so no doubt it's possible, but finding a good free solution may not be easy. Writing it yourself from scratch, I'd suggest, is not feasible.

Oh, if you want to see an Arduino-based autonomous turret, check out this page >>> https://sites.google.com/site/projectsentrygun/successful-projects
You can see everything there: Arduino code, wiring, Processing GUI, etc.

The way I understand it, the algorithms that he (and others) use can't be used with a moving camera, as they compare frames to find changes (which basically means movement). Thus, if the camera is moving, individual frames are bound to differ completely.
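
Just to be clear about what I mean by comparing frames, it boils down to something like this (shown with OpenCV in C++ purely to illustrate the principle, not what the linked project uses):

```cpp
// What "comparing individual frames" amounts to (OpenCV here just to
// illustrate the principle). With a stationary camera only moving objects
// survive the threshold; with the camera bolted to the turret, almost every
// pixel changes as soon as the turret turns, so the motion mask is useless.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
  cv::VideoCapture cap(0);
  cv::Mat frame, gray, prev, diff, mask;

  while (cap.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    if (!prev.empty()) {
      cv::absdiff(gray, prev, diff);                       // per-pixel change
      cv::threshold(diff, mask, 25, 255, cv::THRESH_BINARY);
      printf("changed pixels: %d\n", cv::countNonZero(mask));
    }
    gray.copyTo(prev);
  }
  return 0;
}
```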

It would be great if someone who has some experience in video tracking (especially the method I described in my first post) could say how difficult it is to find and track people with it. I have done some experiments with the JMyron library, but that was mostly colour tracking and simple stuff.

OK, another question...

How do I deal with lead?
Let's say that I have an object moving at 0.5 deg/second... how do I make sure the turret does not 'lag behind the target'?

...but instead notices that the object is moving and speeds up to keep itself centred on the target (or, better yet, aims in front of a moving target)?

(I do realize that lead has to take into consideration the distance to the target, but for the time being let's ignore that)

EXAMPLE
The target is (x, y) pixels from the centre of the camera -> Processing sends commands over serial to turn the turret by (x, y).

Now the target is just (x1, y1) pixels from the centre of the camera view. The target moves as fast as the turret is turning.

RESULT?
The turret is turning, but it lags behind the target.

What can I do to sense that condition and tell the turret to speed up a little bit and keep the crosshair centred on a moving target?
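
To put numbers on that result (a toy simulation, everything made up): with speed simply proportional to the error, the turret ends up matching the target's speed but trailing it by a constant angle.

```cpp
// Toy simulation of a proportional-only controller chasing a target that
// moves at a constant 0.5 deg/s. The error never goes to zero - it settles
// at targetSpeed / K, which is the lag described above. Gains are made up.
#include <cstdio>

int main() {
  const float targetSpeed = 0.5f;   // deg/s
  const float K = 2.0f;             // turret deg/s commanded per degree of error
  const float dt = 0.05f;           // simulation step, seconds

  float targetAngle = 0, turretAngle = 0;
  for (int i = 0; i <= 200; ++i) {
    float error = targetAngle - turretAngle;
    if (i % 40 == 0) printf("t=%4.1f s   error=%.3f deg\n", i * dt, error);
    targetAngle += targetSpeed * dt;
    turretAngle += K * error * dt;  // speed proportional to how far off-centre the target is
  }
  // the error converges to targetSpeed / K = 0.25 deg instead of zero
  return 0;
}
```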

How do I deal with lead?

A couple of centimetres of Kevlar and ceramic. ]:smiley:

A couple of centimetres of Kevlar and ceramic

I was expecting that response

I got a PID controller and I more or less know how to implement that... but what if the input value (the position of the target with respect to the centre of the camera view) stays the same (because the target is moving)?

I am lost as to how to implement that... do I need something that tells me how fast the turret is turning or not?
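
For reference, this is roughly how I was planning to hook the PID library up on the pan axis (pins, gains and the serial format are just placeholders):

```cpp
// Rough plan for PID on the pan-axis pixel error using the Arduino PID_v1
// library. Pins, gains and the serial protocol are placeholders.
#include <PID_v1.h>

double errX = 0;       // Input: target's distance from the image centre, in pixels
double setpoint = 0;   // we want the target dead centre
double panOut = 0;     // Output: signed motor command

// my understanding is that the Ki term should keep ramping the output up
// while errX sits at a constant value (i.e. while I'm lagging behind a
// moving target) - is that enough, or do I still need encoders?
PID panPID(&errX, &panOut, &setpoint, 0.5, 0.8, 0.05, DIRECT);

const int PAN_PWM = 5, PAN_DIR = 4;

void setup() {
  Serial.begin(115200);
  pinMode(PAN_DIR, OUTPUT);
  panPID.SetOutputLimits(-255, 255);
  panPID.SetMode(AUTOMATIC);
}

void loop() {
  if (Serial.available()) {
    errX = Serial.parseInt();   // assumed protocol: one error value per line
    Serial.read();              // discard the newline
  }
  panPID.Compute();
  // swap DIRECT/REVERSE above (or HIGH/LOW here) if it runs away from the target
  digitalWrite(PAN_DIR, panOut > 0 ? HIGH : LOW);
  analogWrite(PAN_PWM, constrain(abs(panOut), 0, 255));
}
```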

B) The camera is mounted on the gun, so when the turret moves, so does the camera
And
Let's say that I have an object moving at 0.5 deg/second... how do I make sure the turret does not 'lag behind the target'?

In the current set-up you can't make sure: you notice that the target is moving only after it has already moved to a new position, so the camera is aimed at the last known/seen position of the target. I think you have to use two turrets: one as the main tracker and a second one exclusively for the gun. Then you need to calculate the ballistic trajectory of the target (you have to know the distance too); based on this information, and knowing how much time is needed to reach a point in space (bullet or anti-missile speed), you point the second turret at that point in space, the collision point or whatever it's called. You don't need Processing / a second computer if the target is "marked"; an Arduino can do the job alone:
http://arduino.cc/forum/index.php/topic,81998.new.html#new
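
Ignoring bullet drop and drag, the collision-point calculation boils down to "how far does the target move while the projectile is in the air". A rough sketch, with every number made up:

```cpp
// First-order lead calculation: no drop, no drag, constant target speed.
// All numbers are placeholders.
#include <cstdio>

int main() {
  const float DEG2RAD = 3.14159265f / 180.0f;

  const float range_m = 15.0f;        // estimated distance to the target
  const float muzzle_mps = 90.0f;     // projectile speed (roughly a paintball marker)
  const float targetRate_dps = 5.0f;  // target's angular rate as seen from the turret

  float timeOfFlight = range_m / muzzle_mps;        // seconds the projectile is in the air
  float leadAngle = targetRate_dps * timeOfFlight;  // degrees to aim ahead of the target

  // the same lead expressed as a distance in front of the target
  float crossSpeed_mps = range_m * targetRate_dps * DEG2RAD;  // target's crossing speed
  float leadDistance_m = crossSpeed_mps * timeOfFlight;

  printf("aim %.2f deg (%.2f m) ahead of the target\n", leadAngle, leadDistance_m);
  return 0;
}
```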

There are some very clever people putting a lot of work into this sort of thing, so I'm sure that Google would take you to some really neat solutions. The approach that seems most practical to me is to use a steerable camera mount and whatever gun you're planning to use, and have the camera feed back to a laptop or similar with enough processing power to do image stabilisation, motion detection and object recognition. The image processing side is the part that would be extremely difficult, but I'm certain that other people have already done a lot of the work. The job I'd see for the Arduino is essentially as a means to drive the turret's servos from the PC.

I've seen video of some turrets that seem to have cracked the detection side but were very poorly designed from the mechanical point of view, and wobbled all over the place. So I'd be looking to design a solid and well-damped turret with all the moving parts supported at their center of mass, and the gun recoil taken care of so it doesn't throw the camera around.

There are some very clever people putting a lot of work into this sort of thing so I'm sure that Google would take you to some really neat solutions

That's the problem - 99% of the setups I found consist of a stationary camera + 2 RC servos + a barely-holding-together turret.

Which is probably enough if you want to impress your friends with an autonomous turret, but overall it sucks.

Basically, I want something like this:

(relevant part starts @ 0:55)

or this

What I don't know is how to sense that the object is moving. As I said earlier, I know how to use PID (thanks to some people who were kind enough to create a ready-made library for Arduino :wink: ), but I realized that a moving target will f### things up a little bit, as the turret will lag behind the target.

The turret will have to move to where the target will be, not where it is at the moment. But first it has to sense that the object is moving. How do I do that?

My first guess is that rotary encoders on the motor shaft could do the trick, but I am lost as to how to calculate the object's angular velocity with respect to the turret using them (lol, I know it's probably really simple, but I can't work it out).

kerimil:
What I don't know is how to sense that the object is moving.

As I said earlier, I know how to use PID (thanks to some people who were kind enough to create a ready-made library for Arduino :wink: ), but I realized that a moving target will f### things up a little bit, as the turret will lag behind the target.

The turret will have to move to where the target will be, not where it is at the moment. But first it has to sense that the object is moving. How do I do that?

My first guess is that rotary encoders on the motor shaft could do the trick, but I am lost as to how to calculate the object's angular velocity with respect to the turret using them (lol, I know it's probably really simple, but I can't work it out).

Ok - this is going to be conceptual:

1: Your camera is on your PTZ mount (whatever it is; if you want stability - you're going to have to build it or pay for it - you might want to look into the heavy-duty PTZ mounts that Servo City sells; otherwise, build it using steppers or DC motors, with some kind of encoder feedback for the servomechanism portion - you could even get away with precision potentiometers, if you can find a cheap source), looking at a scene.

2: Every frame is compared to the last one.

3: To make things easier, you might want to do some filtering of the frames to reduce them to black and white or similar "blobs" - reduce the amount of data the computer or microcontroller needs to deal with.

4: You might look into using an LM1881 to extract timing signals from the image, and grab the blob data that way (there's a guy here on the forums doing a laser-tracking/triangulation/distance-measuring project this way - you might also look into the Nootropic TV Experimenter board).

5: Here's one hard part: you compare one blob in frame 1 to the blob in frame 2 and notice they are different; now calculate the approximate center points of the blob from the "edge points" (or the "cloud"?) in both frames, then use trig to determine the angle of motion, and use the difference in distance to determine approximate velocity.

6: "Lead" by guessing the next "step", by keeping a statistical average of velocity vs angular motion over the past several frames; and move the camera to where you "think" the object/blob will be (of course, if the object is moving erratically, this won't work well). Lead for hitting with a shot (gun/missile) is more complex, of course (as has already been explained).

7: Another difficult issue - discerning the motion of the camera from the motion of the object...

8: Still another - multiple objects confusing the tracking system (how to "lock on" to a target; this also gets into "threat levels", etc. - to switch targets as needed)...

You might find the answers to some of these questions more difficult than they first appear; a lot of this seems to fall into the areas of "machine learning" and "artificial intelligence", as well as "computer/machine vision"...
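
To put a bit of code behind steps 5 and 6 (everything here is fake data, just to show the arithmetic):

```cpp
// Bare-bones version of steps 5 and 6: centroid of a blob in two consecutive
// frames -> velocity -> predicted position one frame ahead. The data and the
// frame timing are placeholders.
#include <vector>
#include <cstdio>

struct Pt { float x, y; };

// approximate centre of a blob from its edge points / pixel cloud (step 5)
Pt centroid(const std::vector<Pt>& blob) {
  Pt c{0, 0};
  for (const Pt& p : blob) { c.x += p.x; c.y += p.y; }
  c.x /= blob.size();
  c.y /= blob.size();
  return c;
}

int main() {
  std::vector<Pt> blobFrame1 = {{100, 80}, {110, 82}, {105, 90}};   // fake blob data
  std::vector<Pt> blobFrame2 = {{112, 81}, {122, 83}, {117, 91}};
  const float dt = 1.0f / 30.0f;          // time between frames at 30 fps

  Pt c1 = centroid(blobFrame1);
  Pt c2 = centroid(blobFrame2);

  // velocity in pixels/second (step 5)
  float vx = (c2.x - c1.x) / dt;
  float vy = (c2.y - c1.y) / dt;

  // predict where the blob will be one frame from now (step 6);
  // averaging vx,vy over the last several frames would smooth jittery targets
  Pt predicted{c2.x + vx * dt, c2.y + vy * dt};
  printf("velocity %.0f,%.0f px/s -> predicted centre (%.1f, %.1f)\n",
         vx, vy, predicted.x, predicted.y);
  return 0;
}
```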

In order to calculate deflection shots you fundamentally need to know the object's range and speed.

If you can recognise the object's shape and guess the true size of the object, you can infer its distance by simple scaling. But that is only feasible for objects with a clearly recognisable shape. For example, your software would find it very difficult to tell the difference between a tabby cat at six feet and an elephant at sixty feet. So, while it's possible in theory, I think it would be extremely difficult to get working in practice. What would probably work better is a bodge, such as guessing the approximate range of the object and then shooting in front of it and waiting for it to run into your shots. If you have a high enough rate of fire, you could even strafe it.
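
The scaling itself is just similar triangles; a tiny worked example with made-up numbers (the focal length in pixels comes from calibrating the camera once):

```cpp
// Range from apparent size by similar triangles:
//   range = trueSize * focalLengthPixels / sizeInPixels
// All numbers below are placeholders.
#include <cstdio>

int main() {
  const float focalPx = 700.0f;     // camera focal length in pixels (from calibration)
  const float trueHeightM = 1.8f;   // assumed true height of a person, metres
  const float pixelHeight = 90.0f;  // measured height of the blob in the image

  float range = trueHeightM * focalPx / pixelHeight;   // = 14 m with these numbers
  printf("estimated range: %.1f m\n", range);

  // this is also where the tabby-cat-vs-elephant problem bites: the same pixel
  // height with the wrong assumed true size gives a wildly wrong range
  return 0;
}
```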