Simple Image Processing

I too did image processing over twenty years ago.
We used 32 bit processors, often lots of them in parallel, with (for then) bucket loads of wide, fast RAM.
It wasn't a beginner's project.
Have a look at the Video Experimenter board.
There's example code.

You could use a Kinect camera to do color tracking and work out the target's coordinates with the depth sensor. It is interesting since you could also use Processing to receive the data from the Kinect and send data to your Arduino. Check this out: Kinect Color Tracking | KinectDuino
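This isn't the KinectDuino code itself, but the core of most simple color trackers is a thresholded centroid pass over the frame. A minimal sketch in plain C++ (the threshold values are made up and would need tuning for your lighting):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Centroid { int x; int y; bool found; };

// Scan an RGB frame (3 bytes per pixel, row-major) and average the
// coordinates of every pixel whose red channel dominates -- a crude
// "track the red light" detector.
Centroid trackRed(const std::vector<uint8_t>& rgb, int w, int h) {
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const uint8_t* p = &rgb[3 * (y * w + x)];
            uint8_t r = p[0], g = p[1], b = p[2];
            // Hypothetical thresholds: "red enough" and clearly redder
            // than green and blue.
            if (r > 150 && r > g + 50 && r > b + 50) {
                sx += x; sy += y; ++n;
            }
        }
    }
    if (n == 0) return {0, 0, false};
    return {int(sx / n), int(sy / n), true};
}
```

With the Kinect you would run this kind of pass on the color frame in Processing, then look up the depth value at the centroid to get the third coordinate.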

Forget colour.
Use optical filters.

szangvil:
I am no expert at this, but the company I work for was using image processing hardware in their machines over 20 years ago(!). Isn't the Arduino (or a few of them connected together) "strong" enough to do what was done by hardware 20 years ago?

I did image processing 30 years ago on a TRS-80 computer.
While the processing power was roughly the same as an Arduino's, the memory capacity was not. That is what stops you using the Arduino for any meaningful image processing. Also, I built my own real-time frame grabber and programmed it in Forth.

An Arduino could process 1-bit video, extracting 240 points per regular NTSC frame. Applying simple math, it would be possible to distinguish straight lines, circles, etc.
http://coolarduino.wordpress.com/2012/07/28/visual-navigator-making-it-mobile/
and
http://coolarduino.wordpress.com/2011/12/29/optical-magnet-arduino-project-next-in-a-series-laser-tracking-3d/
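As a sketch of what "simple math" could mean here: with one extracted point per scan line, a collinearity check against the endpoints can decide whether those points form a straight line. This is an illustration of the idea, not the code behind the linked projects:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Pt { int x, y; };

// Decide whether the extracted points lie on (roughly) a straight line.
// tol is a made-up tolerance: allowed deviation in pixels, scaled by the
// line length so the test stays cheap integer math.
bool isStraightLine(const std::vector<Pt>& pts, int tol) {
    if (pts.size() < 3) return true;
    const Pt& a = pts.front();
    const Pt& b = pts.back();
    long dx = b.x - a.x, dy = b.y - a.y;
    for (const Pt& p : pts) {
        // Cross product: twice the area of triangle (a, b, p).
        // Zero means p is exactly on the line through a and b.
        long cross = dx * (p.y - a.y) - dy * (p.x - a.x);
        if (std::labs(cross) > tol * (std::labs(dx) + std::labs(dy)))
            return false;
    }
    return true;
}
```

The same integer-only style fits an Arduino: no floats, no trig, just sums and comparisons per scan line.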

Pauses thread here for a moment and rewinds.....

Is this really an image processing issue or just light following?

JimboZA:
Pauses thread here for a moment and rewinds.....

Is this really an image processing issue or just light following?

It will be image processing because the light which the bot has to follow will be in my hand, waving around in 3-D space. If it were just following something on the floor, I would've used LDRs.

Using a Kinect is an awesome idea... but Sir, I have no idea how to send the Kinect's data wirelessly back to my computer.
Could you help me a bit with that?

If that can be worked out it will be amazing!

It will be image processing

Cool... it was just a thought, always good to keep things simple if possible!

Using a Kinect is an awesome idea... but Sir, I have no idea how to send the Kinect's data wirelessly back to my computer.
Could you help me a bit with that?

If that can be worked out it will be amazing!

Truly it would. But you can't do it with an Arduino. Maybe the Due, but so far I have not seen anything that can talk to the USB port and input video. That sort of setup would require a laptop on your robot.

Grumpy_Mike:

Using a Kinect is an awesome idea... but Sir, I have no idea how to send the Kinect's data wirelessly back to my computer.
Could you help me a bit with that?

If that can be worked out it will be amazing!

Truly it would. But you can't do it with an Arduino. Maybe the Due, but so far I have not seen anything that can talk to the USB port and input video. That sort of setup would require a laptop on your robot.

Well, the Arduino won't have to do any processing other than receiving data wirelessly and telling the motors what to do. Everything else will be done by the laptop.

As creativen said, I could use the Kinect to capture the image data, process it with Processing on the computer, and send the results to the Arduino to drive the motors. So if I could figure out a way to send the images captured by the Kinect wirelessly to my PC, it would make things really simple. For starters, I could actually just wire my laptop to it. This project is not schoolwork; I am just doing it to increase my own knowledge of image processing and embedded systems.

Okay, never mind. In India the Kinect for PC costs around $400. That is way over my budget at the moment, so we are back to the webcam.

So I will have to use a wireless webcam and process the video on the computer.

I will study and learn Processing. But can anyone guide me on exactly how to send the results generated by Processing to my controller?
Or perhaps, once I study it from a normal tutorial/wiki, I will know that and nothing special will be required.

I'm no expert, but I would imagine that you write code in Processing to send info to the serial port. You connect an XBee to an FTDI breakout board, which is connected to the computer.
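A common pattern (not the only one) is for the Processing sketch to write simple text lines such as `x,y\n` to the serial port, and for the controller to parse each line back into numbers. The parsing half, sketched here in plain C++ rather than as an actual Arduino sketch:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Processing side would do something like:
//   serial.write(x + "," + y + "\n");
// Receiving side: parse one "x,y" line into two coordinates.
// Returns false if the line is malformed, so garbage on the wire
// doesn't drive the motors.
bool parseXY(const std::string& line, int& x, int& y) {
    return std::sscanf(line.c_str(), "%d,%d", &x, &y) == 2;
}
```

On a real Arduino you would read the line with something like `Serial.readStringUntil('\n')` and apply the same parse; the XBee pair just makes the serial link wireless.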

Just a thought, maybe you should start with a small B/W bitmap image file and see if you can process what it contains line by line.
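One way to start on that suggestion: treat each row of a 1-bit image as an array and extract something simple from it, such as the longest run of white pixels. A minimal sketch (plain C++, with the row as a vector of 0/1 values rather than a real bitmap file):

```cpp
#include <cassert>
#include <vector>

struct Run { int start; int length; };

// Find the start and length of the longest run of white (1) pixels
// in one scan line -- the kind of per-row pass you'd do while
// walking a small B/W bitmap line by line.
Run longestWhiteRun(const std::vector<int>& row) {
    Run best{-1, 0};
    int start = -1, len = 0;
    for (int i = 0; i <= (int)row.size(); ++i) {
        if (i < (int)row.size() && row[i]) {
            if (len == 0) start = i;   // run begins here
            ++len;
        } else {
            if (len > best.length) best = {start, len};
            len = 0;                   // run (if any) just ended
        }
    }
    return best;
}
```

Tracking how that run's position drifts from row to row is already enough to tell a vertical stripe from a diagonal one.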

zoomkat:
Just a thought, maybe you should start with a small B/W bitmap image file and see if you can process what it contains line by line.

Yes, before buying all the hardware I am going to start experimenting with Processing and various images.

Thanks for the help. I guess I've figured out what exactly has to be done. I will ask again if I get stuck somewhere specific.

HappyTron:
Okay, nevermind. In India the Kinect for PC is costing around $400.

Wow, that is expensive.
I bought it in Indonesia; it only cost around US$250.

Hmm, it might be pretty much impossible to get the image data from the Kinect wirelessly, since it requires a very high data rate. Remember, if you want to operate Kinect for Windows on a PC, you need to install a lot of things: SimpleOpenNI, OpenNI, the Kinect SDK, etc. That is my opinion; correct me if I am mistaken.

It is easy to follow a light, if you turn all the rest of the lights in the room off.

How reliable does this thing have to be? Does it have to work outdoors, or indoors only? Can you guarantee that there will be no other "red" things in the room for it to home in on?

The Arduino is not really capable of real "machine vision".

One idea might be to make your LED blink at some specific and unusual frequency (not necessarily visible), and set up a receiver on the robot that notices that specific frequency and homes in on it.

Points at Grumpy Ha Haaaa!
Beat ya! :stuck_out_tongue: Before the TRS-80, with a home-made camera, a home-designed/built Z80 'PC' and a neural net, I was able to identify whether the "camera" was looking at the left, right or centre of a scene.
The camera used a loop of Super 8 film (hand-cranked; I never motorized it) as a pixel mask to give a 16- and then a 64-pixel (8x8) device (I just filmed a white card on a black background to create a moving window), plus a single phototransistor :slight_smile:

No forums (or web) back then :stuck_out_tongue: The point being: don't let them tell you it's impossible. Keep plodding and use your imagination. There are many ways to the solution, but the fastest way there is to understand the problem. :wink:

I was talking to a guy a couple of years ago who was into neural nets, and he said he uses high-end video cards and hacks their multiple cores/processors as a very fast engine to "parallel process" his nets.

No matter how fast the processor, we will always overload it with crud and make out it's slow! Like cupboard space, you always have 10% more rubbish than space! :slight_smile:

I know now!!!! I'll use an IR LED flashing at 38 kHz and put an IR receiver on the Arduino robot!
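The homing logic for a 38 kHz beacon could be as simple as comparing hit counts from two angled IR receivers over a sampling window. A sketch of just the decision logic (the pin reading and motor control are omitted, and `deadband` is a made-up tuning parameter):

```cpp
#include <cassert>

enum Steer { LEFT, RIGHT, STRAIGHT };

// leftHits / rightHits: how many times each TSOP-style 38 kHz receiver
// reported the carrier during the last sampling window. The side that
// saw the beacon more often is roughly the side the beacon is on;
// the deadband stops the robot from hunting when the counts are close.
Steer steerToward(int leftHits, int rightHits, int deadband) {
    int diff = leftHits - rightHits;
    if (diff > deadband)  return LEFT;    // beacon stronger on the left
    if (diff < -deadband) return RIGHT;   // beacon stronger on the right
    return STRAIGHT;
}
```

On the robot you would call this every few tens of milliseconds and translate LEFT/RIGHT/STRAIGHT into differential motor speeds.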

HappyTron:
It will be image processing because the light which the bot has to follow will be in my hand, waving around in 3-D space.

So you have a light, which the 'bot has to follow. Could you explain how you came to the conclusion that this is not a light-following problem?