Open source, TOF laser rangefinder

It's just the effects of optical parallax on the signal strength. You can actually extract a result if you use a smart signal processing algorithm but the signal is getting down to the level of the noise. An easier way to measure shorter distances is to use a shorter focal length lens on the receiver side or even remove the lens altogether.
The SETS circuit starts measuring before the laser fires. In other words, the expanded timebase zero point (t = 0) occurs well before the optical zero point when the photons start to leave the laser. On the circuit diagram, the two gates of IC5 introduce a delay of about 8 nanoseconds between the start of the SETS and the firing of the laser, just to make sure that things happen in the right sequence.

Thanks for the detailed explanation. (y)

Has anyone had any luck making this work?
I tried using unix_guru's code but I am having trouble. I can't get the correct values.
I am not sure if it's because I am using an Arduino Uno instead of the Arduino FIO.
Does anyone know if that will make a difference?

Thank you in advance

Some more progress on the manual for the OSLRF-01 can be found here:
http://lightware.co.za/shop/en/index.php?controller=attachment&id_attachment=7
This update includes some instructions on how to assemble the OSLRF-01 KIT.

Several people are contributing open source software including unix_guru here:
http://letsmakerobots.com/node/40633
He has some preliminary software saved here:

Great work on the range finder!

Is there a shorter focal length lens for the receiver which could be substituted to make the minimum distance less than 0.5 m? I am looking to measure ranges of 5 cm to 500 cm with <1 cm error.

Thanks

Thanks MarkJ :slight_smile:
I can't say for certain that the OSLRF-01 will meet your requirements but it's these kinds of unusual applications that make this open source product such an exciting platform to run experiments on. There's nothing that prevents different lens combinations from being used, even to the extent of having no lenses at all! You can use a smaller lens on the laser - perhaps one of those that you get inside visible laser pointers, and you could leave the lens off the detector so that there are no parallax effects.

For short range measurements, you could also try running the laser at lower power (change the voltage regulator) to reduce electrical firing noise and also narrow the width of the laser pulse (RC network) to get a cleaner signal.

Just remember that with a smaller lens on the laser, the beam intensity goes up and you could exceed the Class1 eye safety limit. Reducing the power and pulse width of the laser will help. These kinds of modifications should be done whilst watching the expanded timebase signals on an oscilloscope. This will give you a very clear idea of how each change to the design affects the performance.

If you really want to break new ground, the OSLRF-01 circuit is capable of quasi-phase measurement, a much higher resolution method of measuring a distance. This is a pretty complicated modification and will perhaps form the basis of a future discussion!

Hello,

Using a simple rising-edge threshold method I get 10-50 cm measurement errors (the closer the target, the larger the error). So I badly want to try this CFD algorithm and have a few questions about using it:

  1. How do you choose a threshold for the return signal, given that the return amplitude from dark/light/close/far objects varies by a factor of 20?

  2. If the return signal is measured by CFD, does the zero signal also have to be measured the same way, to keep the correct distance between them?

P.S. About the sync signal frequency: at first I assumed it was constant (if not adjusted by Control). But then I noticed a strange drift of the measurements over time. I started to examine it and found that at the beginning the period was 27,770 µs, and after about half a minute it dropped to 27,140 µs. After a few minutes it settled at around 27,190 µs. Not much, but I can't ignore 0.6 ms (~40 cm?). So I can't measure it once in setup and then use it as a fixed number in the calculations?

Hi ig-x.

I'm delighted to see your questions on this forum. You are asking about some of the critical issues that face LRF designers and you can probably imagine how much harder these issues would be to resolve if the signals were all running at the speed of light.

  1. There are many different strategies that can be used to set the threshold. In many professional LRFs it is a dynamic level that is adjusted according to the strength of the return signal. Of course, this can be done in software if the entire waveform has been digitized. If you don't want to use an ADC then you can dynamically adjust the reference voltage of a comparator, typically by using a PWM output and setting the duty cycle. A good rule of thumb is to adjust the threshold so it is about half the height of the return signal. This gives the best signal-to-noise ratio (SNR).

  2. It is not necessary to use the same algorithm on both the return signal and the zero. The reason is that the zero has a very stable amplitude whilst the return signal has a continuously changing one. Of course, using different algorithms changes the offset between the zero and the return, but since they are not perfectly matched anyway, you always need to include some offset in software. In practice, you often find that the algorithm measuring the zero is much more accurate than the one measuring the return signal. This is because there is plenty of time to collect the zero data and perform complicated statistical or correlation mathematics, since it is not changing. Conversely, the return signal can change very fast so a quicker algorithm may be needed.

  3. The sync signal is NOT at a constant frequency - it drifts with temperature. As a result, all measurements need to be taken as a proportion of the sync time. It is the ratio of the return time to the sync time that is a constant.
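To make points 1 and 3 concrete, here is a minimal sketch in Python. The function names are hypothetical, and the 9.0 m full-scale span is an assumption for illustration, not a device specification:

```python
# Sketch only: illustrates the half-height threshold rule of thumb (point 1)
# and the sync-ratio distance calculation (point 3). Names are hypothetical.

def half_peak_threshold(samples, baseline=0.0):
    """Threshold set at half the return-pulse height above the baseline."""
    peak = max(samples)
    return baseline + (peak - baseline) / 2.0

def rising_edge_index(samples, threshold):
    """Index of the first sample at or above the threshold (rising edge)."""
    for i, s in enumerate(samples):
        if s >= threshold:
            return i
    return None  # no crossing found

SYNC_SPAN_M = 9.0  # ASSUMPTION: distance represented by one full sync period

def distance_m(t_zero, t_return, t_sync, span_m=SYNC_SPAN_M):
    """Distance from the ratio of (return - zero) time to the sync time.
    Because it is a ratio, slow drift of the sync period cancels out."""
    return (t_return - t_zero) / t_sync * span_m
```

For example, with the ~27,190 µs sync period measured above, a return edge arriving about 3,021 µs after the zero edge would read roughly 1.0 m, and the same ratio would hold even as the sync period drifts with temperature.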

Please consider writing about your tests and their results so that other people using this open source device can benefit from your experience :).

Thank you, Laser Developer!

Maybe you can share some details/examples of how the Convert signal is used? How can it help with ADC conversions?

At the moment everything is a little bit fuzzy for me. If I get stable readings at some distances, e.g. 3 m and 4 m, then instead of 1 m I get 1.5 m. If I calibrate for 1 m, then the others shift, and so on. I think I will end up making a lookup correction table.

"Convert" is a clock that runs at a fixed frequency of 31.772kHz. It can be used as an interrupt to trigger successive ADC conversions, rather than letting the ADC convert the data at some random speed. The frequency of the Convert signal gives an ADC conversion rate that results in a perfect digital reproduction of the analog signals without any oversampling or aliasing. How convenient!

I love that you've used the term "fuzzy" to describe the characteristics of the return signal. Fuzzy logic would be an ideal way to analyse the results to get an accurate distance. In practice, it's not necessary to go to such lengths, since, whilst your observations are correct, your interpretation is missing one insight. A non-trivial insight I must add. The errors in the distance readings are not caused by the distance itself, they are caused by the strength of the return signal ONLY!

So it's possible to correct the distance errors by finding the relationship between the magnitude of the error and the signal strength without any concern for the absolute distance to the target surface. A simple way to do this is to have two targets, one white and one black. Put the white target at 5m. Measure the distance to the front of the return signal AND to the back of the return signal. Record the results and repeat the test using the black target.

You know that the target hasn't moved so the distance to the front of the return signal should come out the same, namely 5m. But the width of the signal (which is a proxy for signal strength) has changed and therefore the associated error is different. If you plot the width of the signal on the X-axis of a graph (width = distance to rear - distance to front), and the error on the y-axis (error = distance to front - 5m), you get two points on the relationship between signal strength and error. In CFD, you make the assumption that the relationship is linear, and therefore you can construct the linear equation that predicts how much error you would expect from any given signal strength. This error can be subtracted from the distance reading taken at the front of the return signal to get the actual distance. The slope of this equation is negative, meaning that stronger signals have less error than weaker signals.
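As an illustration, the two-point calibration described above can be sketched as follows. The function names and the calibration numbers in the usage note are made up for the example; only the method (fit a line through the white and black target points, then subtract the predicted error) comes from the description above:

```python
# Sketch: linear CFD correction fitted from two calibration targets (one
# white, one black) at a known, fixed distance. Values are illustrative.

def fit_cfd(width_a, error_a, width_b, error_b):
    """Fit error = slope * width + offset through the two calibration points."""
    slope = (error_b - error_a) / (width_b - width_a)
    offset = error_a - slope * width_a
    return slope, offset

def corrected_distance(front, rear, slope, offset):
    """Subtract the predicted signal-strength error from the front reading."""
    width = rear - front               # pulse width is a proxy for strength
    predicted_error = slope * width + offset
    return front - predicted_error
```

For instance, if the white target (wide pulse) reads 5 cm long and the black target (narrow pulse) reads 20 cm long, the fitted slope comes out negative, matching the observation that stronger signals carry less error.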

Most designers completely underestimate the effect of electronic delays when analyzing signals that are moving at the speed of light (we're talking nano- and picoseconds here). Consequently, they find it hard to make an accurate laser range finder. So whilst the OSLRF-01 slows down all the signals to make them easier to analyze, it is still dealing with phenomena that actually happen at the speed of light, so it too suffers from delays caused by stray capacitance and the like. Correcting for these delays is what you are doing with CFD.

Just to stretch your imagination one step further. Electronics is more of an art than a science and nothing in electronics is ever perfect. So, inevitably, it turns out that the linear CFD approach only gives a better approximation of the actual distance. Correcting for non-linear errors, which are typical of capacitive charging or amplifier saturation, requires more data points and a more sophisticated correction algorithm. Such fun!

PS For those of you who don't want to worry about all this stuff, don't forget that lightware.co.za makes all kinds of complete laser range finders as well :slight_smile:

OK, finally I'm getting close to usable data from this sensor.
What about the consistency of OSLRF-01 parameters from one unit to another? I wrote a program with empirical corrections to get correct data. Do I have to recalibrate everything on another OSLRF unit, or are they all the same? I'm asking because of that "analogue" feeling...

Well done!
The basic errors are the same for any given design, so the slope and offset will be similar. However, delays inside the detector and amplifiers are subtly different from one unit to the next because the stray capacitance and silicon delays depend on how the PCB is assembled and how the silicon is fabricated. Even small changes to the thickness of the copper have an effect. In my experience, between 80% and 90% of the errors are consistent and the rest need to be corrected in each unit.

Made a long post about my life with OSLRF-01 :slight_smile:

Congratulations ig-x! A superb analysis of a complex signal processing problem. You are now one of those rare people who have understood and appreciated how difficult it is to analyze events happening at the speed of light. Imagine trying to do all this with real-time data!

@Laser_Developer,

Just to say thanks for taking the time to post this excellent information. I'd been pondering for a few days on how to achieve laser TOF without crazy-high-bandwidth electronics, and the SETS technique hadn't occurred to me.

One question, if you have a few seconds. I understand your control logic is responsible for generating a continuous SETS sampling control signal with gradually incrementing, precisely controlled delta-T over the course of many samples, to produce the expanded timebase. But I'm having difficulty parsing how the precise delta-T delay is actually generated: is it a 'beating' signal generated by phase difference between the two crystal oscillators? Could you please give a very brief description of the principle behind the control logic? (I'm mostly a software guy and can't read schematics as easily as I'd like).

Thanks,

David.

ps Just FYI, I believe there's a discrepancy in component numbering between the circuit schematic and BOM that you've posted: eg, the schematic has the receiver trans-impedance amp as IC4, but the BOM has it labelled as IC13. Just so you know...

Hi,
is it still possible to buy an assembled OSLRF-01?

Also, why don't you use a phase difference approach instead of time of flight?

If you modulate the laser with a sinusoidal signal, you can measure the phase difference between the outgoing and returning signals using a phase detector chip like the AD8302.

Doesn't that sound cool? Better than measuring the time of flight using equivalent time sampling?

Is it possible to extend the measurement range of the OSLRF-01 to more than 9 m? I suppose that using a stronger laser diode is the right solution.

Miroslav

Hello,

What resolution (cm, mm?) is it theoretically possible to obtain using the OSLRF-01? I would like to have precision down to the millimetre on measurements in the range 5 m - 15 m for a project. Do you think it is possible to tweak the OSLRF-01 in order to obtain such precision?

Thanks in advance!

Hello!

I find it awesome that you open-sourced this device! I checked your lightware website, but you don't have application examples (places/situations where it can be put to use). A similar type of device, the LMS-500 from SICK (also from Leuze, Jenoptik, ifm etc.), performs time-of-flight laser distance measurement in 2D and 3D, and they have included some very good applications. Nevertheless, it has drawbacks in terms of beam divergence and scan rate (measurements/second = max 100 Hz covering 190°). Yes, they use a rotating mirror, which involves mechanical parts, but it can also be done without one. Moreover, the light spot diameter is not great, as the beam divergence is 4.7 milliradians for the high-resolution type. For that, they charge a few thousand Euros (2000-6000 € depending on the manufacturer)!
Kindly check their product website to get a feel for possible applications... Market your project, man, there is huge demand.

Cheers!
-derRam

Great job!