So I'm getting bored since university is out for the summer. I want to build a robotic arm that can pick up objects, using my Android phone as a camera. Has anyone done anything like this before? If so, any help would be much appreciated.
I know that about a year back I came across some projects online where people were able to do optical tracking using an Android phone. I've been trying desperately to find them again but I can't. Does anyone know where I can find them?
Then I would need to work out the vector analysis for the arm relative to the object so the arm can traverse that imaginary vector in real life. The general idea is that after I work out the vectors, I will make the tip of the arm follow the vector; if it follows the vector, it will always come into contact with the object. At least that's how I see it. Alternatively, I could mount the camera in a position where it can see both the robotic arm and the object and track them together.
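Here's roughly what I mean, as a Python sketch. I'm assuming I already have 3D positions for the arm tip and the object in the same coordinate frame (all the names here are placeholders):

```python
import numpy as np

def step_toward(arm_tip, target, step_size=0.01):
    """Move the arm tip one small step along the vector toward the target.

    arm_tip, target: 3D positions (metres) in the same coordinate frame.
    Returns the new tip position, or the target itself once within one step.
    """
    direction = target - arm_tip
    distance = np.linalg.norm(direction)
    if distance <= step_size:
        return target                          # close enough: contact
    return arm_tip + (direction / distance) * step_size

# Placeholder usage: keep stepping until the tip reaches the object.
tip = np.array([0.0, 0.2, 0.0])
obj = np.array([0.15, 0.05, 0.30])
while not np.allclose(tip, obj):
    tip = step_toward(tip, obj)
```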
All ideas are welcome. My first way of doing it might be very computationally heavy, so better ideas are very much welcome.
I am also thinking of using the Android phone to do depth measurement, that is, distance in both the X and Y planes, so I can figure out how far away the object is. Any thoughts on that? I have no idea where to start on this one.
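One single-camera trick I've read about: if I know the real width of the object and the camera's focal length in pixels (both from calibration), the pinhole camera model gives distance directly from apparent size. A rough sketch with made-up numbers:

```python
def depth_from_width(focal_px, real_width_m, pixel_width):
    """Estimate distance to an object of known size with a pinhole camera.

    focal_px:     focal length in pixels (from camera calibration)
    real_width_m: true width of the object in metres
    pixel_width:  apparent width of the object in the image, in pixels
    """
    return focal_px * real_width_m / pixel_width

# e.g. a 5 cm wide object appearing 100 px wide with f = 800 px
print(depth_from_width(800, 0.05, 100))  # -> 0.4 m
```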
You don't mention what you plan on using for the robot arm; ultimately, though, you are going to want to write software to come up with what is called an "arm solution" (inverse kinematics). This is not easy - not by a long shot.
That said - this site should offer some good insight (and code examples in QBASIC, IIRC) on how to do it:
It's in Spanish, so you may have to use Google Translate on the site; I found the site translated very well to English - it may actually translate even better to French or other Romance languages.
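To give you a feel for what an "arm solution" involves, here is the simplest possible case - closed-form inverse kinematics for a two-link planar arm via the law of cosines. This is just a Python sketch (not the code from that site), and arms with more joints get much harder:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a two-link planar arm.

    (x, y): desired tip position; l1, l2: link lengths.
    Returns (shoulder, elbow) angles in radians for one of the two
    solution branches, or None if the point is out of reach.
    """
    # Law of cosines gives the elbow angle directly.
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return None                            # target out of reach
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```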
Well, the arm is basically just used to pick things up. But first I've got to get the arm to go to the right place before I can get it to pick anything up.
In terms of writing an arm solution library: I've done work at my university writing libraries for vector analysis using IR sensors.
As I see it, there will only be one unknown for the arm solution library: the object, which is not moving. The camera and the arm will always be set in known positions in the X, Y, Z planes. The camera will not be mounted on the arm, though.
I then need to use a sensor to measure where the object is relative to the sensor (if I could use the camera to measure the distance, that would be good, but I don't think I can do that with good enough accuracy). Once I have that data, since the arm and sensor are in known positions relative to each other, I can figure out the vector between the object and the arm. Doing this will allow me to get the arm to traverse the vector I created in 3D space. I could also work in 2D space first and then calculate another vector for the third dimension (depth, in my case), which should allow me to move the arm to the correct location.
The arm will have sensors on it which will give me angles relative to its start position, which is always known. Since I know the length of each section of the arm and the angles, I can calculate where the arm should go.
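For a planar arm that calculation looks something like this (a rough sketch; I'm assuming each sensor reports its joint's angle relative to the previous link):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Tip position of a planar arm from link lengths and joint angles.

    joint_angles[i] is relative to the previous link, which matches
    sensors that report angles from a known start position.
    """
    x = y = 0.0
    heading = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle                   # accumulate relative angles
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# e.g. two 10 cm links, shoulder at 45 deg, elbow at -30 deg
print(forward_kinematics([0.1, 0.1], [math.radians(45), math.radians(-30)]))
```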
The hard part, as I see it, is getting the sensor to measure the object, since I will need to aim the sensor directly at the object based on where the camera sees it. If I can get good sensor data, the vector won't be hard to calculate; that's first-year university mechanics.
I will work out the math tomorrow. Once I get my model correct, I'll start to build.
If the object is known, you can use readily available AR libraries to establish its (approximate) position and orientation in 3D space relative to the camera. At short range and assuming you have a clear contrast between the object and the background, the approximation should be pretty accurate and probably within a couple of pixels of camera resolution.
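For example, with a printed marker attached to (or standing in for) the object, OpenCV's ArUco module will give you the marker's pose relative to the camera in a few lines. A sketch only - the intrinsics below are made up and would come from camera calibration, and on Android you would go through OpenCV's Java port rather than desktop Python:

```python
import cv2
import numpy as np

# Made-up intrinsics; real values come from calibrating your camera.
camera_matrix = np.array([[800., 0., 320.],
                          [0., 800., 240.],
                          [0., 0., 1.]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.jpg")                # placeholder image file
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    # 5 cm markers; returns each marker's rotation and translation
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, dist_coeffs)
    print("marker position relative to camera (m):", tvecs[0].ravel())
```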
If the object is not known, a single point of view will only give you a vector of possible positions - you'd need to use stereo vision to produce a second vector and intersect those to find the position. I think that OpenCV has some capabilities for stereo image processing but I haven't played with them.
Having found the position of the object relative to the camera, and knowing the relationship between the camera and the arm, you can apply a simple transformation to convert the position to coordinates relative to the arm.
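Since neither the camera nor the arm base moves, that transformation is a fixed rotation plus a translation that you measure once. For instance (illustrative numbers):

```python
import numpy as np

def camera_to_arm(p_cam, R, t):
    """Convert a point from camera coordinates to arm-base coordinates.

    R (3x3) and t (3,) describe the camera's fixed pose relative to
    the arm base; both are measured once during setup.
    """
    return R @ p_cam + t

# e.g. camera 30 cm above the arm base with axes aligned
R = np.eye(3)
t = np.array([0.0, 0.3, 0.0])
print(camera_to_arm(np.array([0.1, 0.0, 0.5]), R, t))
```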
It would be somewhat easier (but perhaps less fun) to mount the camera on the gripper, so that the problem simply becomes moving the camera towards the object until it makes contact.
Quote: "If the object is known, you can use readily available AR libraries to establish its..."
Could you point me in the direction of where I can find these libraries for an Android-capable device?
For now the object's position will only be unknown in 2D space, because it will always lie on a surface a fixed "Y" (height) distance away from the arm. Once I get that working, I will start to work in 3D space. For now the object can only move around in the X and Z planes, that is, left and right of the camera and nearer and farther in the camera's depth view.
I am planning to use my Android phone to do the camera tracking. For now it won't really be tracking anything; it's just going to find the object and try to figure out where it is in real space. If I can't do this, I might have to mount the camera on the arm as you said.
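For the constrained case, my plan is to turn the detected pixel into a ray and intersect it with the known surface plane, since one camera should be enough when the height is fixed. A rough sketch of what I mean (made-up calibration numbers, camera at the origin with Z pointing forward):

```python
import numpy as np

def pixel_to_plane(u, v, fx, fy, cx, cy, plane_y):
    """Intersect the camera ray through pixel (u, v) with the plane y = plane_y.

    fx, fy, cx, cy: pinhole intrinsics from camera calibration.
    plane_y: known height of the surface in camera coordinates.
    Returns the 3D point on the surface, in camera coordinates.
    """
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    if ray[1] == 0:
        raise ValueError("ray is parallel to the surface")
    s = plane_y / ray[1]              # scale so the ray hits y = plane_y
    return s * ray

# e.g. object center at pixel (400, 300), surface 20 cm below the camera
print(pixel_to_plane(400, 300, 800, 800, 320, 240, 0.2))
```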
It's been a few years since I played with AR and I've never done it on an Android, so I can't give you any specific recommendations. However, last time I was playing with this it was already at the point where marker based object detection was easy and markerless detection was possible. I expect things have improved in the meantime and I'm confident that you'll be able to find algorithms for what you need. I think there is also a good chance that you will be able to find them already ported to the Android environment. If not, they're pretty abstract so I wouldn't have thought that porting them to your platform would be particularly hard.
If you know what the objects are (i.e. you can teach your application to recognise specific objects) then you can do a lot of this in 2D with OpenCV, as long as you don't mind getting your head around a fair bit of image processing theory. As I said before, you'd need some way to deal with the third dimension, and I would have thought that stereo image processing would be feasible: essentially, you would calculate a vector from the center of the first camera's viewport through some recognisable feature of the detected object, do the same for the second camera, and then use your knowledge of the relative position of the two cameras to bring these two vectors into the same coordinate system. Then you look for the intersection / closest point between them to estimate where the feature is in 3D.
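That closest-point step is standard geometry once both rays are in the same coordinate system. A sketch, with camera centers o1, o2 and ray directions d1, d2:

```python
import numpy as np

def midpoint_of_rays(o1, d1, o2, d2):
    """Estimate a 3D point from two rays (midpoint triangulation).

    o1, o2: camera centers; d1, d2: ray directions through the feature,
    all expressed in the same coordinate system.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for the ray parameters where the connecting segment is
    # perpendicular to both rays (standard closest-point equations).
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        raise ValueError("rays are (nearly) parallel")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1                  # closest point on ray 1
    p2 = o2 + t * d2                  # closest point on ray 2
    return (p1 + p2) / 2              # estimated feature position

# e.g. two cameras 10 cm apart on X, rays converging in front of them
print(midpoint_of_rays(np.array([0., 0., 0.]), np.array([0.1, 0., 1.]),
                       np.array([0.1, 0., 0.]), np.array([-0.1, 0., 1.])))
```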