
Topic: "Autocalibrating" sensor readouts.


Hey Microchip masters,
Give me your ideas on this, if you want:

Right now I read out the values from some bend sensors (within a range from 0 - 1023)
I then try to "calibrate" these values with the following logic,
and "compress" the result into the 0 - 255 range:

- Measure sensor value
- Check if the measured value is either higher / lower than any of the previously measured values
- If so, store this value as the highest / lowest value so far.
- Then map this measured value between the highest and lowest values into the 0 - 255 range.
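In code, those steps boil down to something like this (a minimal off-board sketch; `readSensor()` would be your `analogRead()` call, and `mapRange` just mirrors Arduino's `map()` so the example is self-contained):

```cpp
#include <cassert>

// Re-implementation of Arduino's map() so this compiles off-board.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

int sensorLow  = 1023;  // lowest reading seen so far (start inverted)
int sensorHigh = 0;     // highest reading seen so far

int autoscale(int reading) {
    if (reading < sensorLow)  sensorLow  = reading;  // new minimum
    if (reading > sensorHigh) sensorHigh = reading;  // new maximum
    if (sensorHigh == sensorLow) return 0;           // avoid divide-by-zero on first reading
    return (int)mapRange(reading, sensorLow, sensorHigh, 0, 255);
}
```

The weakness described below falls straight out of this: one rogue reading permanently widens `sensorLow`/`sensorHigh`.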

It's not a very sophisticated system, and a rogue reading can mess up the whole "calibration" (and does, since the bend sensors are not that stable).
Also, you have to move the sensors through their full range once to get decent readings from them with this system.

Can somebody suggest another system that
- Does not need the initial calibration movement.
- Holds up better over time?

Thanks in advance,


- Does not need the initial calibration movement.

You could write a sketch that stored the "calibration" data in EEPROM. Then, other sketches could simply read the data from the EEPROM.
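A rough sketch of that idea (plain C++, with a byte array standing in for the EEPROM so it runs anywhere; on the board you'd use `EEPROM.put()` / `EEPROM.get()` from the EEPROM library instead of the `memcpy` calls):

```cpp
#include <cassert>
#include <cstring>

// Calibration data you'd persist between sketches.
struct Calibration {
    int sensorLow;
    int sensorHigh;
};

unsigned char fakeEeprom[64];  // stand-in for the chip's EEPROM

void saveCalibration(const Calibration &c, int addr) {
    std::memcpy(fakeEeprom + addr, &c, sizeof c);  // on Arduino: EEPROM.put(addr, c);
}

Calibration loadCalibration(int addr) {
    Calibration c;
    std::memcpy(&c, fakeEeprom + addr, sizeof c);  // on Arduino: EEPROM.get(addr, c);
    return c;
}
```

One calibration sketch writes the struct once; every later sketch just reads it back at startup.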

- Holds up better over time?

Better sensors, perhaps. Depends on what you are doing with the "bend sensors".
The art of getting good answers lies in asking good questions.


Hey Microchip masters

That's fightin' talk around here, stranger - this here's AVR territory.
"Pete, it's a fool looks for logic in the chambers of the human heart." Ulysses Everett McGill.
Do not send technical questions via personal messaging - they will be ignored.
I speak for myself, not Arduino.


Hi Jim,

Okay, so I can't help much with the initial calibration, though what you're suggesting seems relatively feasible and takes into account the real values you are seeing from the sensor itself.

As per one of the other responses, better quality sensors would help too, as your error rate will be a lot lower (the difference between +-5% and +-1% is pretty significant, and often not a huge cost change on a couple of units).

To help with your auto calibration over time, there are a couple of ways to do it. What you're trying to do, from your description, is to avoid a seriously anomalous result (that *may* be caused by some other interruption) skewing your range into an area where you'll get few if any readings at all.

To that end here are the ways I'd approach it:

Use some kind of weighting - eg:


- Current number of calibration readings taken is X
- Max so far
- Min so far

- Read sensor input = A(in)

- If A(in) is < Min or > Max (assume > Max for logic below - reverse signs for < Min)
     Range = A(in) - Max
     Change = Range / (X +1)
     Max += Change

- X++

What you'll see in this instance is quick initial changes which will then slow down over time. It should suppress the wild fluctuations such that over time it will get harder to shift the min and max as X increases. Obviously you could create a method to reset X, Min and Max and start the calibration process again.
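In code, that weighting might look something like this (a sketch with my own variable names; plain C++ so it runs anywhere):

```cpp
#include <cassert>

// Weighted min/max update: the more readings taken (X), the smaller the
// effect a new out-of-range reading has on the calibrated range.
int calMin = 1023, calMax = 0;  // start inverted so the first reading sets both
long readingCount = 0;          // X in the description above

void updateCalibration(int a) {
    if (a > calMax) {
        int range = a - calMax;
        calMax += range / (readingCount + 1);  // big moves early, tiny moves later
    }
    if (a < calMin) {
        int range = calMin - a;
        calMin -= range / (readingCount + 1);
    }
    readingCount++;
}
```

After the first reading, `calMin` and `calMax` both land exactly on it; every later outlier only nudges the range by `range / (X + 1)`.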

A variation of this is to include decay into the system as well such that Min and Max trend back in towards the middle of their range over time as well. This means there will be an opposing force to the constant push outwards on that range and will take into account any extended anomalous periods during calibration (eg wild fluctuations occurring right at the moment of calibration starting).
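A minimal sketch of that decay idea (the step size and the "every N readings" schedule are assumptions you'd tune for your sensor):

```cpp
#include <cassert>

// Decay: called every so often (say, once per N readings), this nudges the
// calibrated min and max one step back toward the centre of the current
// range, so stale extremes fade out over time.
const int DECAY_STEP = 1;  // assumed step size; tune to taste

void decay(int &calMin, int &calMax) {
    int centre = (calMin + calMax) / 2;
    if (calMin < centre) calMin += DECAY_STEP;  // pull the floor up
    if (calMax > centre) calMax -= DECAY_STEP;  // pull the ceiling down
}
```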

The next option is to use a truly weighted system, but it means storing a lot of data as well, so it may not be appropriate. The way I'd do this is as follows:

- Create a 1024-length array to hold counts (bytes, probably - though note a byte bin will overflow once a single reading occurs more than 255 times).
- Every time you read a value, increment the count at that index in the array by one.
- you now have a histogram of values.
- The sum of all the counts in the array gives you the total counts in the system.

Here's where some processing comes in, though (so you don't want to be doing this very often):

- Find the median point - that is, the reading below which exactly 50% of the counts fall (just start at array[0] and keep a running sum until it reaches half the total count); that's the centre of your mapped range.
- Find the interquartile range - this is the same as the median, except you stop at 25% of the total on the bottom and 75% on the top.
- From here you take the difference between the median and the 25th percentile and put that on the bottom, then take the difference between the 75th percentile and the median and add it on the top, and you're pretty much going to get a spread that should encompass a reasonable Min and Max.


Say your Median is 455
Say your 25 percentile is 400
Say your 75 percentile is 895

Min = 25 percentile - (Median - 25 percentile) = 400 - (455-400) = 400 - 55 = 345
Max = 75 percentile + (75 percentile - Median) = 895 + (895 - 455) = 895 + 440 = 1335 - obviously this is too high, so you'd cap it at 1023
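The percentile walk above might be coded something like this (a sketch; the histogram here is `unsigned int` rather than bytes to avoid count overflow):

```cpp
#include <cassert>

// Walk a histogram of raw readings and return the first bin at which the
// running count reaches the given fraction of the total (0.25, 0.5, 0.75).
int percentileBin(const unsigned int *hist, int bins, long total, double fraction) {
    long running = 0;
    for (int i = 0; i < bins; i++) {
        running += hist[i];
        if (running >= (long)(total * fraction)) return i;
    }
    return bins - 1;  // fallback: everything was below the threshold
}
```

From the three bins you'd then compute Min = p25 - (median - p25) and Max = p75 + (p75 - median), capping at 0 and 1023, as in the worked example.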

The downside with this approach is as follows:

1. Requires more processing to calculate - you'd probably limit this to a calibration cycle if you wanted to do it.
2. Has issues with clumpiness in your data. For example, I have a thumbstick that ranges 0-1023. At centre it sits at about 455 and goes down quite linearly to 0 if I move it left. If I move it right, though, it goes linearly to 620-ish, then jumps straight to about 850 before going linearly again to 1023. This approach won't deal with that, even though it's totally legitimate data. (I deal with it using custom mapping that I know works for the thumbstick in question, but that's hardly effective for random components.)

Apologies for the detail in this response, but I thought it would be better answered with some pseudo code and examples.

I imagine there are other ways to tackle this as well.



Thanks! This looks like it'll be really useful information.
I'll try to implement what you're suggesting here.


No probs - let us know how you get on, and maybe post your code so we can optimise it if it looks a little process-intensive. It would make a good HOW TO for someone else coming along.


Hey Guys,
So this is what I've got.
I've implemented the first of AJ's suggestions.

Code: [Select]
#include "WeightedValue.h"
#include "WProgram.h"

WeightedValue::WeightedValue(int inputMinTarget, int inputMaxTarget)
{
    minTarget = inputMinTarget;
    maxTarget = inputMaxTarget;
    sensorTop = 0;
    sensorBottom = 1023;    // analogRead tops out at 1023, not 1024
    readingsSoFar = 0;
}

int WeightedValue::returnWeightedValue(int inputValue)
{
    if (inputValue < sensorBottom) {
        int range  = sensorBottom - inputValue;    // keep the range positive
        int change = range / (readingsSoFar + 1);  // +1 avoids divide-by-zero on the first reading
        sensorBottom -= change;
    }
    if (inputValue > sensorTop) {
        int range  = inputValue - sensorTop;
        int change = range / (readingsSoFar + 1);
        sensorTop += change;
    }
    readingsSoFar++;

    int outputValue = map(inputValue, sensorBottom, sensorTop, minTarget, maxTarget);
    return constrain(outputValue, minTarget, maxTarget);
}

First question, am I calculating the range correctly?
Last time I tested on the hardware I used if (inputValue < sensorBottom)  {int range  = inputValue - sensorBottom;}
So this might have messed up the results a little.

Also, I run this code around 25 times per second.
(So after one second, X is already around 25)

So, I should probably tone that impact down a bit?
