
Topic: Smoothing Ultrasonic Rangefinder Data

Brandon121233

Hey guys
I am trying to use the LV-MaxSonar-EZ1 rangefinder with the Arduino, but I've been getting some outliers that could affect my robot's performance. I would like to know if anyone has some general knowledge on smoothing out sensor data and would be willing to help me, as I am still not used to the Arduino language. The way I would do it (I'm just not sure how to put it into code) would be to collect about 11 readings in an array, arrange them from lowest to highest, then find the median. From there I would calculate an 80% upper and lower threshold, constrain all the data in the array to that band, and then it's just a matter of taking the mean. If anyone knows a better, more efficient way to smooth sensor data I would greatly appreciate some help.
Thanks in advance- Brandon
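For reference, the recipe above (11 readings, sort, find the median, threshold, then mean) could be sketched in plain C++ like this. It is not Brandon's code, just one reading of it; in particular, interpreting the "80% threshold" as a band from 80% to 120% of the median is an assumption:

```cpp
#include <algorithm>
#include <vector>

// Sketch of the filter described above: sort a window of readings, take
// the median, clamp every reading into an 80%..120% band around it, then
// average the clamped values.
float medianConstrainedMean(std::vector<float> window) {
    std::sort(window.begin(), window.end());
    float median = window[window.size() / 2];
    float lo = median * 0.8f;   // lower threshold (assumed: 80% of median)
    float hi = median * 1.2f;   // upper threshold (assumed: 120% of median)
    float sum = 0.0f;
    for (float v : window) {
        sum += std::min(std::max(v, lo), hi);  // Arduino's constrain(v, lo, hi)
    }
    return sum / window.size();
}
```

With the sample data quoted later in the thread (40 41 40 39 42 45 310 43 41 44 38), the 310 gets clamped to the band edge and the result lands near 42.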

zitron

You can just do a running average, this will smooth out really bad spikes:

float oldDistance = 0;  // better: seed this with the first real reading

void loop() {
  float newDistance = (oldDistance + readSensorDistance()) / 2;  // readSensorDistance() = your raw read
  oldDistance = newDistance;
}

I had the same problem on my robot, and the above worked fine for me. If you want something fancier you can use a weighted average, or more recorded data points.

-Z-
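The weighted average zitron mentions is the same running average with an adjustable blend (an exponential moving average). A minimal sketch; the 80/20 weighting is an assumption, not anything posted in the thread:

```cpp
// Weighted (exponential) running average: the old estimate keeps most of
// the weight, so a single spike only nudges the output instead of
// dragging it all the way to the bad reading.
float weightedAverage(float oldDistance, float sensorDistance, float alpha = 0.2f) {
    // alpha is the weight given to the new reading; 0.2 means an 80/20 blend
    return (1.0f - alpha) * oldDistance + alpha * sensorDistance;
}
```

A 310 spike against a steady 40 still pulls the output to 94 for one sample, which is exactly the "covering it up" problem Brandon complains about next.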

Brandon121233

Yeah, thanks for the reply, but I need to remove outliers, and averaging just isn't good enough for me. For example I may get readings like 40 41 40 39 42 45 310 43 41 44 38; that 310 is definitely not right, but if I use it in an average it will completely throw off my data. I need to remove it instead of covering it up.

cavedave

Trying to figure out which values you trust is an interesting problem.
I think Bayesian filtering may be of more use for this sort of problem. Here is a paper on using Bayesian filtering for location estimation: http://seattle.intel-research.net/people/jhightower/pubs/fox2003bayesian/fox2003bayesian.pdf. And another description: http://www.randomhacks.net/articles/2007/04/19/robot-localization-particle-system-monad
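A full particle filter like the ones in those links is probably overkill for a single rangefinder. A one-dimensional Kalman-style filter is a lightweight instance of the same Bayesian idea; here is a sketch with made-up noise constants that would need tuning for the EZ1:

```cpp
// Minimal 1D Kalman-style filter: keep an estimate and its variance, and
// blend each new reading in proportion to how much it can be trusted.
struct Kalman1D {
    float estimate = 0.0f;      // current distance estimate
    float variance = 1000.0f;   // uncertainty of the estimate (start high)
    float processNoise = 1.0f;  // how fast the true distance drifts (assumed)
    float sensorNoise = 25.0f;  // variance of the raw readings (assumed)

    float update(float reading) {
        variance += processNoise;                         // predict: uncertainty grows
        float gain = variance / (variance + sensorNoise); // trust in this reading
        estimate += gain * (reading - estimate);          // correct toward the reading
        variance *= (1.0f - gain);                        // uncertainty shrinks
        return estimate;
    }
};
```

Because the gain shrinks as the estimate settles, a lone outlier moves the output much less than a plain average would.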

cavedave

If Bayesian filtering is not your cup of tea, something like this may be suitable:
http://www.tigoe.net/pcomp/code/archives/arduino/000710.shtml

Brandon121233

Thanks cavedave, I think I'll try using the weighted average

Hugobox

Hi!
I had the same problem. This takes care of data spikes just fine:

int rawValue = 0;       // set by readSensor()
int filteredValue = 0;  // ramps toward rawValue by one step per loop

void loop()  {
 readSensor();
 Smooth();
 SendData();
}

void Smooth(){
 if (rawValue < filteredValue){
   filteredValue--;
 }
 else if (rawValue > filteredValue){
   filteredValue++;
 }
}

Brandon121233

where are you getting the variable filteredValue from initially? And what does subtracting and adding 1 to the filteredValue have to do with anything?

Hugobox

The initial value for filteredValue can be anything; I initialize it to zero, and it gradually adjusts to the raw sensor value after a little while. It works best for sensor data that is expected to behave like a sine wave but has occasional spikes. In the Pure Data or Max/MSP world this is called the line object, a ramp generator, and it is used a lot to turn raw data into nice smooth data. The only difference is that the line object takes a time argument, which I mimic by changing:
filteredValue ++;
to
filteredValue = filteredValue + step;
where step can be defined according to the application. Bigger steps for faster changing sensor data, or dynamic step according to the difference between rawValue and filteredValue.
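Hugobox's description, put together as one self-contained ramp filter: the names match his snippet, and the clamping so the output never overshoots the target is an added assumption:

```cpp
#include <algorithm>

// Ramp ("line object") filter: the output chases the raw value by at most
// `step` per sample, so a one-sample spike barely moves it. Bigger step =
// faster tracking, less smoothing.
float slewLimit(float filteredValue, float rawValue, float step = 1.0f) {
    if (rawValue > filteredValue)
        filteredValue += std::min(step, rawValue - filteredValue);  // ramp up
    else if (rawValue < filteredValue)
        filteredValue -= std::min(step, filteredValue - rawValue);  // ramp down
    return filteredValue;
}
```

A 310 spike against a steady 40 only moves the output to 41, which is why it shrugs off outliers but is slow to follow a genuine change.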

Brandon121233

So what is the function Smooth returning? Is it filteredValue? I think I understand it now: it will take a little while to adjust to the sine wave of data and then only change by 1 if there is an outlier. That seems a little rough, though; in a robotics application the readings can go from a consistent 10 inches to a consistent 60 inches, and this function would slow my robot down. Is there any possible way to put a tolerance filter, not a smoothing filter, on it? All I want, as in my first example, is this: if I'm getting the values 40 40 40 5 40 41 82 82 80 84 80 81 81, the 5 in the middle of the 40s is not right and would cause a malfunction, but the transition from 40 to 80 is correct because it is a consistent change and should not be filtered. Is there any "kinda easy" way to do this without going into theoretical time-continuum calculus theorems? Thanks for your help.
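One "kinda easy" answer to the question above: reject a reading that jumps more than a tolerance away from the current value, unless a few consecutive readings agree on the new level. A sketch; the tolerance and count below are illustrative assumptions, not anyone's posted code:

```cpp
#include <cmath>

// Tolerance filter: a reading far from the current value is ignored, but
// if `required` consecutive far readings agree with each other, the jump
// is accepted as a real change rather than an outlier.
struct ToleranceFilter {
    float value = 0.0f;
    bool  initialized = false;
    float tolerance = 10.0f;  // max plausible change per sample (assumed)
    int   required = 3;       // consecutive far readings needed to accept a jump
    int   agreeCount = 0;
    float candidate = 0.0f;

    float update(float reading) {
        if (!initialized) {                 // seed with the first reading
            value = reading;
            initialized = true;
        } else if (std::fabs(reading - value) <= tolerance) {
            value = reading;                // normal reading: accept it
            agreeCount = 0;
        } else if (agreeCount > 0 && std::fabs(reading - candidate) <= tolerance) {
            if (++agreeCount >= required) { // jump confirmed by repetition
                value = reading;
                agreeCount = 0;
            }
        } else {
            candidate = reading;            // first far reading: remember, don't trust
            agreeCount = 1;
        }
        return value;
    }
};
```

On the sequence above (40 40 40 5 40 41 82 82 80 84 80 81 81) the lone 5 is ignored, while the run starting at 82 is accepted on its third consecutive reading.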

Kirk

Yet Another Method

Approximate moving average, where n is the sample number, so MovAve(n-1) is the previous MovAve.
Pick any integer for NumPts; bigger is smoother.
MovAve = MovAve(n-1) + Data/NumPts - MovAve(n-1)/NumPts
Simplify:
MovAve = MovAve(n-1) - MovAve(n-1)/NumPts + Data/NumPts
MovAve = MovAve(n-1) * (NumPts - 1)/NumPts + Data/NumPts

Kirk
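Kirk's final form translates directly to code. NumPts = 8 in the usage note is just an example value; if you switch to integers, picking a power of two lets the divisions become shifts on an AVR:

```cpp
// Kirk's approximate moving average in its simplified recursive form:
//   MovAve = MovAve(n-1) * (NumPts - 1)/NumPts + Data/NumPts
// No array needed: only the previous average is stored.
float movingAverage(float prevAve, float data, int numPts) {
    return prevAve * (numPts - 1) / numPts + data / numPts;
}
```

For example, movingAverage(40, 120, 8) moves the estimate from 40 to only 50, so a single 120 outlier is heavily damped, and it takes several samples before a genuinely new level dominates.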
