Hey,
Do you guys think it would be reasonable to make a rangefinder-type device with an Arduino? I don't know if AVRs have any built-in high-speed timers, but 16 MHz and 16 MIPS might be fast enough for ranging. Light travels about 18.7 meters in a single 16 MHz clock cycle, which (counting the round trip) gives you roughly 9-meter resolution with a laser. Could you use a simple interrupt plus a photodiode and transistor and a simple increment routine to measure distance? On that note, could you use two separate micros, each with a radio, and (factoring in lag from the radio hardware) find the distance apart based on ping time? My guess is the ATmega has some internal timer circuitry to make this easier, but I've never used it.
I suppose that depending on your application, a 9-meter resolution could be sufficient, but in practice the resolution will be much coarser than that: interrupt latency and instruction overhead add to every measurement, so it isn't determined by MHz alone.
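That said, the ATmega328 does have hardware that helps with the latency part: Timer1 has an input-capture unit that latches the counter value on a pin edge in hardware, before any interrupt code runs. A minimal sketch of using it (assuming a conditioned echo pulse from your photodiode/transistor arrives on the ICP1 pin, digital pin 8 on an Uno):

```cpp
// Timer1 input capture on an ATmega328 Arduino. The hardware latches
// TCNT1 into ICR1 at the pin edge, so timestamp jitter is ~1 cycle
// (62.5 ns) rather than the many cycles of ordinary interrupt latency.

volatile uint16_t captured = 0;
volatile bool gotEdge = false;

ISR(TIMER1_CAPT_vect) {
  captured = ICR1;   // counter value latched by hardware at the edge
  gotEdge = true;
}

void setup() {
  Serial.begin(115200);
  pinMode(8, INPUT);                 // ICP1 on the Uno
  TCCR1A = 0;                        // normal counting mode
  TCCR1B = _BV(ICES1) | _BV(CS10);   // capture rising edge, clk/1 (16 MHz)
  TIMSK1 = _BV(ICIE1);               // enable the input-capture interrupt
}

void loop() {
  if (gotEdge) {
    noInterrupts();
    uint16_t t = captured;           // atomic copy of the 16-bit value
    gotEdge = false;
    interrupts();
    Serial.println(t);               // timestamp in 62.5 ns ticks
  }
}
```

Even so, each tick still corresponds to about 9 meters of round-trip light travel, so this fixes the jitter problem, not the fundamental resolution problem.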
Most rangefinder applications for the Arduino use ultrasonic transducers, either discretely or, more commonly, as a packaged ultrasonic ranging sensor. (For some reason, perhaps ease of implementation, these sensors use two transducers: one to send the pulse and one to receive the echo. I find it curious that few designs use a single transducer for both purposes, the way the old Polaroid ultrasonic rangefinders did.) These sensors typically give useful close-range measurements, but they fall off at longer ranges (beyond 10 meters or so, depending on the power output of the system, which also affects precision, unfortunately). A minimal example of driving one follows.
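This assumes an HC-SR04-style two-transducer module; the pin choices are arbitrary:

```cpp
// Sketch for an HC-SR04-style ultrasonic module. TRIG on pin 9 and
// ECHO on pin 10 are arbitrary choices; wire to suit your build.

const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Fire a 10 us trigger pulse to start a ping.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // The echo pulse width is the round-trip time of the sound burst.
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL); // 30 ms timeout
  if (us > 0) {
    // Sound travels ~0.0343 cm/us; halve for the round trip.
    float cm = us * 0.0343f / 2.0f;
    Serial.println(cm);
  }
  delay(60); // let stray echoes die down between pings
}
```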
Laser-based rangefinders, due to the speed of the electronics needed for "time-of-flight" measurement (and thus the higher expense and complexity of the design), are typically not built at the hobbyist level. Instead, one of two methods is employed, though both are similar in implementation.
The first and most widely known uses a laser at a known distance from a CCD array sensor, usually a cheap web camera hooked up to a PC, although more advanced versions use a linear array (essentially a single line of CCD elements instead of a grid). Web-camera setups are preferred because cheap USB web cameras are easy to obtain, easy to interface with at both the hardware and software level, and have high enough resolution to be useful here. The laser shines a beam of light, the web camera sees it as a "point blob" on the CCD, and with some simple trigonometry the position of the blob's center gives the distance from the camera/laser pair to the object being ranged. An example can be found here:
Such an approach could be done with a small microcontroller like an Arduino/ATmega, but likely only with a linear CCD (perhaps one salvaged from a scanner?), or by interfacing directly with a matrix CCD and reading the data along a single line (assuming a matrix CCD is addressable that way; never having worked with them, I don't know). The triangulation arithmetic itself is simple either way; a sketch follows.
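A rough sketch of that arithmetic, with made-up calibration constants (in a real build, the gain and offset come from measuring a few known distances and fitting):

```cpp
#include <math.h>

// Hypothetical calibration values, for illustration only.
const float BASELINE_CM   = 6.0f;     // laser-to-camera separation
const float RAD_PER_PIXEL = 0.0011f;  // angular size of one pixel
const float RAD_OFFSET    = 0.0005f;  // fitted correction term

// Distance from the camera/laser pair to the lit spot, given how many
// pixels the blob's center sits from the image center. The farther the
// object, the smaller the angle, hence the 1/tan relationship.
float rangeFromPixelOffset(float pixelsFromCenter) {
  float theta = pixelsFromCenter * RAD_PER_PIXEL + RAD_OFFSET;
  return BASELINE_CM / tanf(theta);
}
```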
An alternative approach, one which adapts more easily to a microcontroller system (and indeed has, as we shall see), uses a spinning mirror. The laser beam is bounced off the spinning mirror, and the mirror's angle is tracked with a simple optical encoder that marks each revolution, combined with the timing of the motor's rotation. As the beam sweeps across the scene and reflects off objects, an optical sensor with a suitable amplifier, focused along the scan direction, watches for the return; the instant the sensor receives its brightest pulse of light marks the object of interest. The mirror angle at that instant is known, and once again simple trigonometry gives the distance. As noted, this design has been implemented on a microcontroller:
http://letsmakerobots.com/node/2651
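The exact geometry depends on the particular build, but under one plausible arrangement (the sensor's view axis perpendicular to the mirror-sensor baseline; constants are hypothetical), the angle recovered from the encoder timing turns into distance like this:

```cpp
#include <math.h>

// Hypothetical geometry: the sensor looks along a fixed axis
// perpendicular to the baseline between the mirror and the sensor.
// Real builds will differ; calibrate against known distances. Note
// also that a beam reflected off a mirror sweeps at twice the
// mirror's rotation angle.
const float BASELINE_CM = 10.0f;  // mirror-to-sensor separation

// thetaRad: beam angle (measured from the baseline) at the moment the
// brightest return was seen, recovered from the encoder timing as
// theta = 2*pi * (t_bright - t_index) / t_revolution.
float rangeFromScanAngle(float thetaRad) {
  return BASELINE_CM * tanf(thetaRad);
}
```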
One salient note: both of these systems measure the distance to only a single point directly in front of them. In theory, you could mount such a system on a pan/tilt mechanism and build up a "point cloud" of scans to get a three-dimensional map of the area around the sensor. However, that isn't really suitable for microcontroller processing; once you are delving into 3D point clouds for machine vision, you had better be using a more capable system like a PC (or at least an embeddable system like a BeagleBoard), or you'll be nearly dead in the water before you start.
There are better methods available. If, instead of a point, you project a line and image it with the web camera (or any other digitized camera source; high-definition FireWire cameras are best for this work, though they tend to be expensive, and high-resolution USB cameras are now closing that gap), you can digitize the line's resulting contours and read off, along the length of the line, the relative heights of the objects in its path. An example of such a system can be found here:
http://www.seattlerobotics.org/encoder/200110/vision.htm
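The core of the image processing is just finding, in each column, where the laser line landed; to first order, its displacement from the "flat ground" row is proportional to height. A sketch assuming a grayscale frame buffer (all names and constants here are illustrative, not from the linked article):

```cpp
// Illustrative structured-light line extraction over a grayscale frame.
const int ROWS = 480;
const int COLS = 640;
const int REST_ROW = 400;          // where the line falls on flat ground
const float CM_PER_PIXEL = 0.12f;  // from calibration; first-order only

// For each column, find the brightest row (the laser line) and convert
// its displacement from REST_ROW into a relative height.
void extractHeights(const unsigned char frame[ROWS][COLS],
                    float heights[COLS]) {
  for (int c = 0; c < COLS; ++c) {
    int bestRow = 0;
    unsigned char best = 0;
    for (int r = 0; r < ROWS; ++r) {
      if (frame[r][c] > best) { best = frame[r][c]; bestRow = r; }
    }
    heights[c] = (REST_ROW - bestRow) * CM_PER_PIXEL;
  }
}
```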
If you scanned the line vertically, covering the ground directly in front of the camera from near to far, you could easily build a height map of the area in front of the sensor. Swap that for a left-right sweep (with a vertically oriented line, of course) and you could map a point cloud of data in front of the system as well. Line-projecting laser modules are readily available and cheap; or you could use a small piece of acrylic rod as a double-convex cylindrical lens to get a similar result.
Another option, if you didn't want to mess around with scanning a line across the scene being studied, would be to project a grid of lines or points onto the surface (such modules are also available, and cheap diffraction gratings can do the same); the imaged lines or points could then be used to compute the height map. The resolution of that system would depend on the density of the grid or points, of course, while the scanned system's resolution depends mainly on the camera resolution and the step/angular resolution of the scanning mechanism.
Hope this helps or inspires...
Thanks! I'd already thought about the camera solution, and I think I actually have a really good idea here: use the Wiimote camera and an IR laser. It's totally doable from an Arduino and should have high enough resolution.
will
I would seriously think about avoiding an IR laser unless you have a specific need for it and know there will be no humans or animals around. The reason is that the eye cannot see IR wavelengths but can still be damaged by them, so you won't blink if it happens to shine in your eye until the damage is too great. It is better to use a visible laser (keep it under 5 mW) if it will be used around people or animals, as it will provoke a blink/avoidance response should the beam inadvertently hit their eyes.
[edit]Using the Wii camera sounds interesting, though; if you could place four laser dots and scan them around, you could potentially do the multi-point scanning system. BTW, don't worry about IR vs. visible red; it should still pick it up unless the camera has filters, in which case you may want to give thought to what I said, unless you don't give a whit about your or anyone else's vision...[/edit]
The Wii camera sees exclusively IR; in fact, I'd be lucky if it picked up 808 nm IR at all. And don't worry about power levels: I'd use at most maybe 30 mW, at a high enough divergence that it becomes less dangerous than a 5 mW pen at a few yards. I figure the laser spot doesn't need to be any smaller than a pixel at its intended range.
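For anyone curious, here's roughly how you'd talk to the camera from an Arduino once it's liberated from the remote (it needs 3.3 V and its own ~24 MHz clock off-board). The address and register writes below follow the widely circulated hobbyist init sequence, not official documentation, so verify them against whatever writeup you're working from:

```cpp
#include <Wire.h>

// Wiimote PixArt IR camera over I2C. Register values are the commonly
// published hobbyist init/sensitivity settings (an assumption, not
// gospel); the camera reports up to four IR blob positions.
const int CAM_ADDR = 0x58;  // 7-bit I2C address (0xB0 >> 1)

void camWrite(byte reg, byte val) {
  Wire.beginTransmission(CAM_ADDR);
  Wire.write(reg);
  Wire.write(val);
  Wire.endTransmission();
  delay(10);
}

void setup() {
  Serial.begin(19200);
  Wire.begin();
  camWrite(0x30, 0x01);
  camWrite(0x30, 0x08);
  camWrite(0x06, 0x90);
  camWrite(0x08, 0xC0);
  camWrite(0x1A, 0x40);
  camWrite(0x33, 0x33);
}

void loop() {
  byte buf[16] = {0};
  Wire.beginTransmission(CAM_ADDR);
  Wire.write(0x36);                 // ask for blob data
  Wire.endTransmission();
  Wire.requestFrom(CAM_ADDR, 16);
  for (int i = 0; i < 16 && Wire.available(); i++) buf[i] = Wire.read();

  // First tracked blob: 10-bit x/y packed across three bytes.
  int x = buf[1] | ((buf[3] & 0x30) << 4);
  int y = buf[2] | ((buf[3] & 0xC0) << 2);
  Serial.print(x); Serial.print(',');
  Serial.println(y);
  delay(50);
}
```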