I suppose that, depending on your application, 9-meter resolution could be sufficient, but in practice the achievable resolution will be much coarser due to other overhead in the processor; it isn't determined by clock speed (MHz) alone.
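To put a number on that best case: light travels at roughly 3 × 10⁸ m/s, so one tick of a 16 MHz clock (62.5 ns, assuming a stock ATmega) corresponds to about 18.75 m of travel, or roughly 9.4 m of one-way range resolution once you halve it for the round trip - and that's before counting interrupt latency and instruction overhead.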
Most rangefinder applications for the Arduino use ultrasonic transducers, either discretely or, more commonly, as part of an ultrasonic ranging module. For some reason (perhaps ease of implementation) these modules use two transducers: one to send the pulse and one to receive the echo. I find it curious that few designs use a single transducer for both purposes, as the old Polaroid ultrasonic rangefinders did. With these sensors you can typically get useful close-range measurements, but they are less useful at longer ranges (beyond 10 meters or so, depending on the power output of the system, which unfortunately also affects precision).
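For reference, here is a minimal sketch of how such a module is typically read, assuming an HC-SR04-style unit; the pin assignments and timeout are my own arbitrary choices:

```cpp
// Minimal sketch for an HC-SR04-style two-transducer ranging module.
// Pin numbers are arbitrary assumptions; wire TRIG and ECHO to match.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Fire a 10 µs trigger pulse; the module emits an ultrasonic burst.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // ECHO stays HIGH for the echo's round-trip time.
  // The 30 ms timeout caps the range at roughly 5 m.
  unsigned long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);

  if (duration > 0) {
    // Sound covers ~58 µs per centimeter of round trip.
    float distanceCm = duration / 58.0;
    Serial.println(distanceCm);
  }
  delay(100);
}
```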
Laser-based rangefinders that measure true "time-of-flight" need very fast electronics, which makes them expensive and complex, so they typically aren't built at the hobbyist level. Instead, one of two methods is employed, both similar in implementation.
The first and most widely known method uses a laser at a known distance from a CCD array sensor, usually a cheap web camera hooked up to a PC, although more advanced versions use a linear array (essentially a single line of CCD elements instead of a grid). Web-camera-based setups are preferred because cheap USB web cameras are easy to obtain, easy to interface with at both the hardware and software level, and have high enough resolution to be useful here. The laser shines a beam of light, the web camera sees it as a "point blob" on the CCD, and with some simple mathematics and geometry, the position of the blob's center gives the distance from the camera/laser setup to the object being ranged. Examples of this setup are easy to find online.
Such an approach could be implemented with a small microcontroller like an Arduino/ATmega, though likely only by using a linear CCD (perhaps one salvaged from a scanner?), or by interfacing directly with a matrix CCD and reading the data along a single line (assuming a matrix CCD is addressable that way - never having worked with them, I don't know).
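The geometry itself is simple. Here is a sketch of the triangulation step, assuming the usual calibration model where the dot's pixel offset maps linearly to an angle; the function name, baseline, and calibration parameters are all my own illustrative choices:

```cpp
#include <math.h>

// Triangulation for the laser/camera setup: the laser is mounted a known
// baseline h from the camera axis, and the dot's pixel offset from the
// image center is converted to an angle via calibration constants.
// D = h / tan(theta): the farther the object, the closer the dot sits
// to the center of the image.
float rangeFromPixel(float pixelsFromCenter, float h,
                     float radiansPerPixel, float radianOffset) {
  float theta = pixelsFromCenter * radiansPerPixel + radianOffset;
  return h / tan(theta);
}
```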
An alternative approach, one which adapts more easily to a microcontroller system (and indeed has been, as we shall see), uses a spinning mirror off which a laser beam is bounced. The mirror's angle is tracked with a simple optical encoder that signals each revolution, combined with timing based on the motor's speed. As the beam sweeps across the scene and reflects off objects, an optical sensor with a suitable amplifier, aimed along the scan, registers its brightest pulse when the beam hits the object of interest. At that moment the mirror angle is known, and once again mathematics and geometry give the distance. This design has been implemented on a microcontroller:
http://letsmakerobots.com/node/2651
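As a sketch of the idea (not the linked design - the pins, the one-pulse-per-revolution encoder, and the simplified triangulation here are all assumptions of mine): time one revolution of the mirror, record when the photosensor reading peaks, and convert that timestamp to an angle:

```cpp
#include <math.h>

const int SENSOR_PIN  = A0;    // amplified photodiode, aimed along the scan
const int ENCODER_PIN = 2;     // assumed to go HIGH once per revolution
const float BASELINE  = 0.10;  // laser-to-sensor separation, meters

void setup() {
  Serial.begin(9600);
  pinMode(ENCODER_PIN, INPUT);
}

void loop() {
  // Wait for the encoder pulse marking the start of a revolution.
  while (digitalRead(ENCODER_PIN) == LOW) {}
  unsigned long start = micros();
  while (digitalRead(ENCODER_PIN) == HIGH) {}  // wait out the index pulse

  // Sample the photosensor until the next index pulse, tracking the peak.
  int peak = 0;
  unsigned long peakTime = 0;
  while (digitalRead(ENCODER_PIN) == LOW) {
    int v = analogRead(SENSOR_PIN);
    if (v > peak) { peak = v; peakTime = micros() - start; }
  }
  unsigned long period = micros() - start;  // one full revolution

  // Fraction of a revolution at the brightest sample -> mirror angle,
  // then triangulate (geometry simplified; a real design calibrates this).
  float angle = TWO_PI * (float)peakTime / (float)period;
  float range = BASELINE / tan(angle);
  Serial.println(range);
}
```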
One salient point: both of these systems can only measure the distance to a single point directly in front of them. In theory, you could mount such a system on a pan/tilt mechanism and build up a "point cloud" of scans to get a three-dimensional map of the sensor's surroundings. However, that isn't really suitable for microcontroller processing; once you are delving into 3D point clouds for machine vision, you had better be using a more capable system like a PC (or at least an embeddable system like a BeagleBoard), or you'll be nearly dead in the water before you start.
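If you did go the pan/tilt route, the bookkeeping for each sample is at least straightforward - every (pan, tilt, range) reading converts to a Cartesian point (the names here are just illustrative):

```cpp
#include <math.h>

struct Point3 { float x, y, z; };

// Standard spherical-to-Cartesian conversion for one scan sample:
// pan sweeps around the vertical axis, tilt is elevation from horizontal.
Point3 toCartesian(float panRad, float tiltRad, float range) {
  Point3 p;
  p.x = range * cos(tiltRad) * cos(panRad);
  p.y = range * cos(tiltRad) * sin(panRad);
  p.z = range * sin(tiltRad);
  return p;
}
```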
There are better methods available. If, instead of a point, you imaged a line with the web camera (or any other digitized camera source - high-definition FireWire cameras are best for this work, though they tend to be expensive; high-resolution USB cameras are now available, so that is changing), you could digitize the line's resulting contours and read off, along the length of the line, the relative heights of objects in its path. An example of such a system can be found here:
http://www.seattlerobotics.org/encoder/200110/vision.htm
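The image-processing core of such a system is surprisingly small: for each column of a grayscale frame, find the row of the brightest pixel; that row's displacement from the line's rest position encodes height via the same triangulation as before. A rough sketch (the row-major frame layout is my assumption):

```cpp
#include <vector>
#include <cstdint>

// For each image column, return the row of the brightest pixel - i.e.,
// where the projected laser line lands. Displacement from a reference
// row (the line's position on flat ground) gives relative height.
std::vector<int> laserLineProfile(const uint8_t* frame, int width, int height) {
  std::vector<int> profile(width);
  for (int x = 0; x < width; ++x) {
    int bestRow = 0;
    uint8_t best = 0;
    for (int y = 0; y < height; ++y) {
      uint8_t v = frame[y * width + x];
      if (v > best) { best = v; bestRow = y; }
    }
    profile[x] = bestRow;
  }
  return profile;
}
```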
If you scanned the line vertically, covering the ground directly in front of the camera from near to far, you could easily build a height map of the area in front of the sensor. Swap that for a left-right motion (with a vertically oriented line, of course) and you could map a point cloud of the scene as well. Line-projecting laser modules are readily available and cheap; alternatively, a small piece of acrylic rod works as a double-convex cylindrical lens to similar effect.
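Accumulating the scan is then just a loop: step the line, grab a frame, extract the profile, repeat. Something like this, where stepLine() and grabFrame() are hypothetical stand-ins for your scan mechanics and camera capture:

```cpp
#include <vector>
#include <cstdint>

std::vector<int> laserLineProfile(const uint8_t* frame, int width, int height);

void stepLine();             // hypothetical: advance the line one increment
const uint8_t* grabFrame();  // hypothetical: capture the current camera frame

// One row of the height map per scan step; heightMap[s][x] is the laser
// line's row in column x at step s.
std::vector<std::vector<int>> buildHeightMap(int steps, int width, int height) {
  std::vector<std::vector<int>> heightMap;
  heightMap.reserve(steps);
  for (int s = 0; s < steps; ++s) {
    stepLine();
    heightMap.push_back(laserLineProfile(grabFrame(), width, height));
  }
  return heightMap;
}
```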
Another option, if you didn't want to mess around with scanning a line across the scene, would be to project a grid of lines or dots onto the surface (such modules are also available, or you could use one of various cheaply available diffraction gratings); the imaged lines or dots could then be used to compute the height map. The resolution of this system would depend on the density of the grid or dot pattern, of course, whereas the scanned system's resolution depends mainly on the camera resolution and the step/angular resolution of the scanning mechanism.
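For a projected dot grid, the per-frame work shifts from finding a line to locating each dot. If you know the grid spacing, even a simple intensity-weighted centroid per grid cell will do; this sketch assumes one dot per cell and a threshold you'd tune for your laser and ambient light:

```cpp
#include <cstdint>

struct Dot { float x, y; };

// Intensity-weighted centroid of bright pixels within one grid cell.
// Returns (-1, -1) if no pixel in the cell clears the threshold.
Dot dotCentroid(const uint8_t* frame, int frameWidth,
                int x0, int y0, int cellW, int cellH, uint8_t thresh) {
  float sx = 0, sy = 0, sw = 0;
  for (int y = y0; y < y0 + cellH; ++y) {
    for (int x = x0; x < x0 + cellW; ++x) {
      uint8_t v = frame[y * frameWidth + x];
      if (v >= thresh) { sx += v * x; sy += v * y; sw += v; }
    }
  }
  if (sw == 0) { Dot none = { -1.0f, -1.0f }; return none; }
  Dot d = { sx / sw, sy / sw };
  return d;
}
```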
Hope this helps or inspires...