I have built a fairly simple locomotive robot with a conning tower that houses an ultrasonic range finder. The current short-term goal is that the range-finder will determine a safe path to progress. However, I'm finding that the robot is advancing towards and colliding with my couch at an angle. Although a direct scan reveals the distance of the couch, a pulse that hits the couch at a diagonal angle returns an abnormally large distance. Any ideas on how I can mitigate this?
I run a wall follower with two sensors: one in front and one to the side.
Sounds awesome, explain?
SilverAnalyst:
I have built a fairly simple locomotive robot with a conning tower that houses an ultrasonic range finder. The current short-term goal is that the range-finder will determine a safe path to progress. However, I'm finding that the robot is advancing towards and colliding with my couch at an angle. Although a direct scan reveals the distance of the couch, a pulse that hits the couch at a diagonal angle returns an abnormally large distance. Any ideas on how I can mitigate this?
Couches tend to be soft, and thus make poor reflectors for ultrasound; even a "hard" object, hit at an angle, tends to bounce the pulse away from the receiver rather than straight back (that's just the way specular reflection works, unfortunately), so the sensor either hears nothing or hears a later, weaker echo and reports a distance that is far too large. So what can you do?
Well, first you could try adding additional sensors - maybe a Sharp IR sensor or two would help (and/or scan using a servo). You might also want to implement some bump sensors for those obstacles that slip through (if it's good enough for a commercial product like the Roomba, it should be good enough for DIY).
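On the software side, a cheap partial fix is to treat a missing or implausibly long echo as "unknown, possibly blocked" rather than as open space, since an angled or soft surface often just swallows the ping. A minimal sketch of that idea, assuming an HC-SR04-style sensor (trigger on pin 2, echo on pin 3 - the pins and thresholds are placeholders to tune):

```cpp
const int TRIG = 2, ECHO = 3;
const long MAX_TRUSTED_CM = 300;      // beyond this, don't believe the echo
const long STOP_CM = 25;              // nearer than this counts as blocked

void setup() {
  pinMode(TRIG, OUTPUT);
  pinMode(ECHO, INPUT);
  Serial.begin(9600);
}

long pingCm() {                       // one reading in cm, or -1 on timeout
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  long us = pulseIn(ECHO, HIGH, 30000UL);
  return us ? us / 58 : -1;           // ~58 us per cm, round trip
}

bool pathLooksClear() {
  long nearest = -1;
  for (int i = 0; i < 3; i++) {       // a few pings; keep the nearest valid one
    long d = pingCm();
    if (d > 0 && (nearest < 0 || d < nearest)) nearest = d;
    delay(60);
  }
  // No valid echo, or only a suspiciously distant one? Don't trust it.
  return nearest > STOP_CM && nearest <= MAX_TRUSTED_CM;
}

void loop() {
  Serial.println(pathLooksClear() ? "clear" : "stop/turn");
  delay(200);
}
```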
If the idea of contact sensors doesn't suit you, you could try some form of 2D LIDAR system; such sensors are still fairly expensive, but a few cheaper options spring to mind:
- A Microsoft Kinect is a possibility here - though for this you would need an on-board PC or similar (beagleboard?) to process its data
- Another cheap method would be to purchase a Neato XV-11 robot vacuum and rip the LIDAR unit off it
- You could mount a small webcam and laser to an "arm" mounted on a servo (to scan it), and use a PC/beagleboard for processing
- In theory you might be able to "hack" a Sharp IR sensor to disable the onboard IR LED and use an IR laser instead (then scan it with a servo)
Another possibility is to build a vision system (again, you would need a PC or something similar to process the data) using OpenCV, RoboRealm, or another vision processing library. If you were ambitious, you could even try the Nootropic Electronics Video Experimenter shield with an Arduino for simple vision/video processing - it likely wouldn't have the resolution or the speed, but maybe you can make it work?
Wow, thanks for all those ideas! I'm definitely going to have to get my hands on a Sharp IR sensor to supplement the conning tower. The tower on my bot rotates via servo motor, so I envision one sweep would collect long range data from the ultrasonic range finder and (more reliable?) short range data from the IR range finder.
I love the idea of giving my bot some kind of vision, but that is definitely out of my budget. As an alternative, I was thinking of giving it meagre colour sensing by adding three photoresistors with home-made colour filters to try and read RGB. Would that work?
I love experimenting and learning about all of this.
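Something like this is what I'm picturing for the combined sweep - a very rough sketch, assuming an HC-SR04-style pinger on pins 2/3, the IR ranger's analog output on A0, and the tower servo on pin 9; the IR voltage-to-distance formula is just a placeholder I'd have to calibrate:

```cpp
#include <Servo.h>

Servo tower;                            // conning-tower servo (pin 9 assumed)
const int TRIG = 2, ECHO = 3;           // HC-SR04-style pins (assumed)

long pingCm() {                         // ultrasonic reading in cm, -1 on timeout
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG, LOW);
  long us = pulseIn(ECHO, HIGH, 30000UL);
  return us ? us / 58 : -1;
}

float irCm() {                          // rough GP2D12-style curve - calibrate it
  int raw = analogRead(A0);
  return (raw > 30) ? 4800.0 / (raw - 20) : 999.0;
}

void setup() {
  pinMode(TRIG, OUTPUT);
  pinMode(ECHO, INPUT);
  tower.attach(9);
  Serial.begin(9600);
}

void loop() {
  for (int angle = 30; angle <= 150; angle += 15) {
    tower.write(angle);
    delay(200);                         // let the servo settle
    long  u = pingCm();
    float i = irCm();
    // For each heading, trust whichever sensor reports the nearer obstacle.
    float nearest = min(i, (float)(u < 0 ? 999 : u));
    Serial.print(angle); Serial.print(": "); Serial.println(nearest);
  }
}
```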
SilverAnalyst:
Wow, thanks for all those ideas! I'm definitely going to have to get my hands on a Sharp IR sensor to supplement the conning tower. The tower on my bot rotates via servo motor, so I envision one sweep would collect long range data from the ultrasonic range finder and (more reliable?) short range data from the IR range finder.
Something to note is that Sharp used to distribute a much longer range (10 meters or so?) version of their sensor; unfortunately, you can't buy one new any longer, but if you can find one surplus...
SilverAnalyst:
I love the idea of giving my bot some kind of vision, but that is definitely out of my budget. As an alternative, I was thinking of giving it meagre colour sensing by adding three photoresistors with home-made colour filters to try and read RGB. Would that work?
That would probably work OK; you could also try a phototransistor or photodiode. You could build a color wheel, mount it to a servo, and use a single sensor element. Another (somewhat underexplored) option for robotics vision is "low resolution vision":
http://www.seattlerobotics.org/encoder/jan97/lowresv.html
Do some googling on the term, and you'll find a few white papers and other bits of research. Back in the day (early 1980s) there used to be a way to build a low-res 64 x 64 "pixel" camera using a particular style of dynamic RAM chip that had a metal cover over the die. If you carefully removed the cover (without disturbing the bonded wires), you could focus an image onto the exposed chip, then "refresh" the cells and time their decay as they changed states (1 to 0 or vice-versa, I forget) to get a gray-scale "image". It was basically a primitive version of a CMOS sensor (but much cheaper than what such sensors cost then).
You could build a small 8 x 8 phototransistor "eye", focus an image onto it, then use a setup similar to the one used for 8 x 8 LED matrices on the Arduino to read out the array...
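The readout for that kind of "eye" could look something like this - a minimal sketch, assuming each row of phototransistors shares one digital "row power" pin (2..9) and each column feeds its own analog input through a load resistor (A0..A7, so a board with 8 analog inputs); the wiring and settling time are assumptions:

```cpp
const int ROW_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};
const int COL_PINS[8] = {A0, A1, A2, A3, A4, A5, A6, A7};

int image[8][8];

void setup() {
  for (int r = 0; r < 8; r++) {
    pinMode(ROW_PINS[r], OUTPUT);
    digitalWrite(ROW_PINS[r], LOW);
  }
  Serial.begin(9600);
}

void readEye() {
  for (int r = 0; r < 8; r++) {
    digitalWrite(ROW_PINS[r], HIGH);   // power only this row
    delayMicroseconds(200);            // let the readings settle
    for (int c = 0; c < 8; c++) {
      image[r][c] = analogRead(COL_PINS[c]);
    }
    digitalWrite(ROW_PINS[r], LOW);
  }
}

void loop() {
  readEye();
  for (int r = 0; r < 8; r++) {        // dump one grey-scale "frame"
    for (int c = 0; c < 8; c++) {
      Serial.print(image[r][c]);
      Serial.print('\t');
    }
    Serial.println();
  }
  delay(500);
}
```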
On my robot, I use 2 sonars and a GP2D12 IR-ranger, all mounted on a pan'n'tilt servo pod. I also plan to add a mechanical bumper but haven't gotten to it as yet. The sonars also give bad values when aimed at a "hard" wall at a large oblique angle. I have not tried the GP2D12 on a soft surface like a couch, but will give it a try when I get home tonight.
The other thing I have, whose range falls in between the sonars/GP2D12 and a mechanical bumper, is called an "IR proximity detector". Look this up on Google. Good to about 12-18", and gives a yes/no output. Typically uses a 38 kHz signal to drive an IR LED, and uses a TV-remote IR receiver for pickup.
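On an Arduino the whole thing is only a few lines - something like this (the pins and burst timing are just illustrative, and note the receiver modules dislike a continuous carrier):

```cpp
const int PIN_IR_LED = 3;   // IR LED (through a resistor/transistor)
const int PIN_IR_RX  = 4;   // 38 kHz TV-remote receiver output, active LOW

void setup() {
  pinMode(PIN_IR_LED, OUTPUT);
  pinMode(PIN_IR_RX, INPUT);
  Serial.begin(9600);
}

bool objectNearby() {
  tone(PIN_IR_LED, 38000);   // modulate the LED at ~38 kHz
  delay(5);                  // short burst only
  bool detected = (digitalRead(PIN_IR_RX) == LOW);  // LOW = reflection seen
  noTone(PIN_IR_LED);
  delay(20);                 // let the receiver's AGC recover
  return detected;
}

void loop() {
  Serial.println(objectNearby() ? "obstacle" : "clear");
  delay(100);
}
```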
Ah - here it is - regarding that 64 x 64 pixel camera:
It was described in the book "Android Design: Practical Approaches for Robot Builders" by Martin Bradley Weinstein (ISBN 0-8104-5192-1), published in 1981 by Hayden Book Company:
Weinstein called it the "Ramera Camera" because it used a dynamic RAM chip for the imaging device; he notes that it was originally developed in 1978 at Case Western Reserve University in Ohio, based on something called "Cyclops" published earlier in Popular Electronics (and which appeared in their "Electronic Experimenter's Handbook" in the 1978 edition).
Also - Steve Ciarcia described a device he built called the "Micro D-Cam" that used dynamic RAM as well for the sensor; you can find the complete setup on Google Books (in volume 7 of the Circuit Cellar books) here:
Oh wow, you are a font of knowledge! I'm afraid I'm going to have to comb through some of those sources for a while before I can get my head around how all of it works.
That doesn't mean I'm not going to try anything I can get my hands on though. XD
I like figuring things out for myself, but being in this place for five minutes is enough to make it clear that I'm missing out on a lot. Time to do some research.
For vision, you can buy CMUcams and AVRcams, but in general vision processing takes a lot more CPU power and RAM than an Arduino has.
However, Parallax sells a line-scanner module, the TSL401, which has a lens and a 128-pixel linear sensor, and which can be mounted on a panning servo to produce low-res vision that an Arduino can process.
Parallax is a good place to find sensors for small robots that don't take a supercomputer to operate. Most are made to integrate with lowly BASIC Stamps [2KB flash, 26 bytes of RAM, about 10 KIPS operation, oof].
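Reading that TSL401 from an Arduino is pretty painless, too - a simplified sketch, assuming TSL1401-style SI/CLK/analog-out pins (pin assignments are arbitrary here and the integration timing is glossed over; check the datasheet):

```cpp
const int PIN_SI  = 7;   // start-integration / frame-start pulse
const int PIN_CLK = 8;   // pixel clock
const int PIN_AO  = A0;  // analog pixel output

int pixels[128];

void clockPulse() {
  digitalWrite(PIN_CLK, HIGH);
  digitalWrite(PIN_CLK, LOW);
}

void readLine() {
  digitalWrite(PIN_SI, HIGH);        // start a new readout
  clockPulse();
  digitalWrite(PIN_SI, LOW);
  for (int i = 0; i < 128; i++) {
    pixels[i] = analogRead(PIN_AO);  // sample pixel i
    clockPulse();                    // shift to the next pixel
  }
  clockPulse();                      // one extra clock to terminate the frame
}

void setup() {
  pinMode(PIN_SI, OUTPUT);
  pinMode(PIN_CLK, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  readLine();
  // Find the brightest pixel - e.g. where a laser line/dot falls.
  int best = 0;
  for (int i = 1; i < 128; i++) if (pixels[i] > pixels[best]) best = i;
  Serial.println(best);
  delay(50);
}
```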
SilverAnalyst:
Oh wow, you are a font of knowledge! I'm afraid I'm going to have to comb through some of those sources for a while before I can get my head around how all of it works.
I would only bother with the Ramera camera and Ciarcia's similar camera as items of "historical interest"; I doubt you could even purchase the needed memory chips today (they would be antiques, and would probably fetch a good price on the collector market).
...but the "low vision" sensors using LDRs or phototransistors - now those might make for something interesting to experiment with.
Also - right now at Electronic Goldmine they are selling some interesting "dual" element LDRs, where both LDRs are in a single device; depending on how they are oriented, you could probably detect movement and direction of movement (along a line) with one, and with several oriented properly, perhaps get a rough polar estimate of direction.
There are certainly other solutions (CMUCam was mentioned; there is also an AVR-based camera of similar design and use floating out there) - but none quite as cheap (though you get more capability with them, certainly).
oric_dan(333):
However, Parallax sells a line-scanner module, the TSL401, which has a lens and a 128-pixel linear sensor, and which can be mounted on a panning servo to produce low-res vision that an Arduino can process.
Interesting; I wonder if that is the same device used in this?:
If you could mount that in such a way that you could spin/scan it quickly, you could have a form of 2D LIDAR fairly cheaply...
Interesting; I wonder if that is the same device used in this?:
No, that is an actual laser. There was an article about it in Servo mag or Nuts & Volts mag or Robot mag a couple of months ago, but I didn't take notice.
...but the "low vision" sensors using LDRs or phototransistors - now those might make for something interesting to experiment with.
Speaking of which ....
http://www.seattlerobotics.org/encoder/jan97/lowresv.html
Also .... [not quite LDRs]
David Buckley is the real "Robot Man Extraordinaire", at least for the past 20 years or so :-).
oric_dan(333):
Interesting; I wonder if that is the same device used in this?:
No, that is an actual laser. There was an article about it in Servo mag or Nuts & Volts mag or Robot mag a couple of months ago, but I didn't take notice.
What I meant was whether the linear CCD sensor you referenced is used as the sensor in that laser range finder (it's a complete range finder, using a laser, a linear CCD sensor, and parallax/trig for ranging).
oric_dan(333):
...but the "low vision" sensors using LDRs or phototransistors - now those might make for something interesting to experiment with.
Speaking of which ....
http://www.seattlerobotics.org/encoder/jan97/lowresv.html
Also .... [not quite LDRs]
David Buckley is the real "Robot Man Extraordinaire", at least for the past 20 years or so :-).
Well - that looks like an interesting article; I've known about him and his site for a while (mainly via cyberneticzoo), but I never saw that article - thanks for posting the link!
I think it's a 2D CCD or CMOS camera, but will look up the magazine article when I get home.
Alright, I looked up the article, located in the Oct + Nov 2011 issues of Servo magazine. It has some real potential. It uses a laser pointer and an OmniVision CMOS 2D camera mounted on the same PCB, and uses laser triangulation to do range finding. Kind of a glorified GP2D12. The article is not online.
However, see:
The most interesting thing about this is that it uses a Propeller chip for the CPU, and is apparently open source, so you can write your own computer vision routines. Unfortunately, he didn't add any additional RAM to the PCB, so it's limited to 32KB.
Also, for some background on the basic idea [not mentioned in the article], see here:
http://www.seattlerobotics.org/encoder/200110/vision.htm
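The math behind that kind of laser triangulation is only a couple of lines: the laser sits a fixed baseline away from the camera axis, and the dot's pixel offset from the image centre gives you the angle, hence the range. A toy sketch (the calibration constants are made-up placeholders you'd fit against targets at known distances):

```cpp
const float BASELINE_M     = 0.06;     // laser-to-camera-axis separation (placeholder)
const float RADS_PER_PIXEL = 0.0010;   // from calibration (placeholder)
const float RAD_OFFSET     = 0.0005;   // from calibration (placeholder)

float rangeFromPixel(float pixelsFromCentre) {
  float theta = pixelsFromCentre * RADS_PER_PIXEL + RAD_OFFSET;
  return BASELINE_M / tan(theta);      // range in metres
}

void setup() {
  Serial.begin(9600);
  Serial.println(rangeFromPixel(47));  // e.g. dot found 47 pixels from centre
}

void loop() {}
```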
Also, kind of like the Surveyor, but simpler and cheaper:
David just informed me of the following too:
http://davidbuckley.net/RS/RogerStarnes/CLAWAR03_An_Autonomous_Humanoid.htm
VISION SYSTEM
The optical vision system was designed to be able to detect obstacles in the path of the robot. The vision system also tracks moving objects. Each eye consists of a lens in front of a 4 x 4 array of light dependent resistors. The resistance values are sampled at 30 frames per second. A single servo motor enables the two eyes to converge. The left eye analyses the image for verticals, horizontals, top right to bottom left edges and top left to bottom right edges. The right eye detects movement of dark to bright edges and can pan and tilt, with the left eye, to make such edges fall on the centre of the sensor array. Compensation is provided for low and high levels of ambient lighting giving an automatic iris effect.
The system reliably tracks moving objects but is currently confused by very high contrast lighting such as spotlights. The system is however working more reliably than an ultrasonic system which was tried on the earlier prototypes. Difficulties encountered with that system were multiple reflections resulting from low angular resolution (a consequence of the wide beamwidth due to using small diameter transducers), specular reflection from smooth objects, absorption from soft objects and motor interference (sound and electrical). Some of the difficulties experienced were described in more detail in (11).
The difficulties are greater with electromagnetic and sonar range measurement systems in low cost autonomous mobile robots where lightness and low current consumption is particularly important. It is hoped that implementing stereoscopic vision using convergence of the two 16 sensor arrays will give adequate range resolution for obstacle avoidance.
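For flavour, the "look for verticals" part on a tiny LDR array really just boils down to comparing column brightness - a rough sketch, assuming a 4 x 4 array with each LDR in a voltage divider on its own analog input (A0..A15, so a Mega-class board or an external analog mux), and an edge threshold you'd tune for your lighting:

```cpp
const int N = 4;
const int EDGE_THRESHOLD = 150;        // ADC counts; tune for your lighting

int eye[N][N];

void readEyeRaw() {
  for (int r = 0; r < N; r++)
    for (int c = 0; c < N; c++)
      eye[r][c] = analogRead(A0 + r * N + c);   // A0..A15 on a Mega
}

bool verticalEdgeAfterColumn(int c) {  // edge between column c and c+1?
  long left = 0, right = 0;
  for (int r = 0; r < N; r++) { left += eye[r][c]; right += eye[r][c + 1]; }
  return abs(left - right) / N > EDGE_THRESHOLD;
}

void setup() { Serial.begin(9600); }

void loop() {
  readEyeRaw();
  for (int c = 0; c < N - 1; c++) {
    if (verticalEdgeAfterColumn(c)) {
      Serial.print("vertical edge near column ");
      Serial.println(c);
    }
  }
  delay(33);                           // ~30 frames per second, as in the paper
}
```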
However, Parallax sells a line-scanner module, the TSL401, which has a lens and a 128-pixel linear sensor, and which can be mounted on a panning servo to produce low-res vision that an Arduino can process.
"It produces a clocked analog data output, whose voltage levels correspond to the light intensity at each pixel. By means of an analog-to-digital converter (or even a simple digital logic threshold), image data are easily transferred to a microcontroller to detect objects, edges, gaps, holes, liquid levels, textures, emissive sources, simple barcodes, and other visible features."
Wow, this looks really useful. If only it were available in the UK, and much cheaper.
Speaking of which ....
Yes! This is the low-cost option I'm looking for!
David just informed me of the following too:
An Autonomous Humanoid with Vision for Obstacle Detection and Ranging
VISION SYSTEM ...
Excellent! Although I'm going to bookmark every link in this thread for when I have better funding, light dependent resistors are nice and cheap. I may start with a 3x3 array and play around with that to see what sensing abilities it can provide. Good thing I've asked my wife to buy me an Arduino Mega for my birthday later this month, as my Uno only has 6 analogue inputs. Maybe I will try 3x2 in the meantime.
Do you think I will need a lens, or perhaps just an old-fashioned pin-hole camera approach?