Hi Jim,
Okay, so I can't help with the initial calibration, though what you're suggesting seems feasible and takes into account the real values you're seeing from the sensor itself.
As per one of the other responses - better-quality sensors would help too, as your error rate will be a lot lower (the difference between ±5% and ±1% is pretty significant, and often not a huge cost change for a couple of units).
To help with your auto-calibration over time, however, there are a couple of ways to do it. From your description, what you're trying to do is avoid a seriously anomalous result (which may be caused by some other interruption) skewing your range into an area where you'll get few or no readings at all.
To that end here are the ways I'd approach it:
Use some kind of weighting - eg:
Pseudocode:
- Current number of calibration readings taken is X
- Max so far
- Min so far
- Read sensor input = A(in)
- If A(in) < Min or A(in) > Max (assume > Max for the logic below - reverse the signs for < Min):
    Range = A(in) - Max
    Change = Range / (X + 1)
    Max += Change
- X++
What you'll see in this instance is quick initial changes which then slow down over time. It should suppress the wild fluctuations: as X increases, it gets harder and harder to shift Min and Max. Obviously you could create a method to reset X, Min and Max and start the calibration process again.
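The pseudocode above can be sketched in C roughly like this (the struct and function names are my own, not from any library):

```c
/* Running calibration state: min/max seen so far plus the reading count X. */
typedef struct {
    float min;
    float max;
    long  count;  /* X: number of calibration readings taken so far */
} Calibration;

/* Feed one raw sensor reading into the calibration. An out-of-range
   reading only moves the bound by range/(count+1), so early readings
   move it quickly and later anomalies barely shift it. */
void calibrate(Calibration *cal, float reading)
{
    if (reading > cal->max) {
        cal->max += (reading - cal->max) / (cal->count + 1);
    } else if (reading < cal->min) {
        cal->min -= (cal->min - reading) / (cal->count + 1);
    }
    cal->count++;
}
```

For example, starting from min = max = 500, a first reading of 600 moves Max all the way to 600, but an identical-sized jump on the second reading only moves it half as far.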
A variation on this is to include decay in the system as well, such that Min and Max trend back in towards the middle of their range over time. This gives an opposing force to the constant push outwards on the range, and takes into account any extended anomalous periods during calibration (eg wild fluctuations occurring right at the moment calibration starts).
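A minimal sketch of that decay variant, again with my own names; the DECAY factor is an assumption you'd tune for your sensor and sample rate:

```c
#define DECAY 0.001f  /* fraction of the distance to the midpoint reclaimed per reading */

/* Same running state as the plain weighted version. */
typedef struct {
    float min;
    float max;
    long  count;
} Calibration;

void calibrate_with_decay(Calibration *cal, float reading)
{
    /* Outward push, weighted as before. */
    if (reading > cal->max)
        cal->max += (reading - cal->max) / (cal->count + 1);
    else if (reading < cal->min)
        cal->min -= (cal->min - reading) / (cal->count + 1);
    cal->count++;

    /* Inward decay: nudge both bounds toward the midpoint of the
       current range, so the range shrinks unless real readings
       keep pushing it back out. */
    float mid = (cal->min + cal->max) / 2.0f;
    cal->min += (mid - cal->min) * DECAY;
    cal->max -= (cal->max - mid) * DECAY;
}
```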
The next option is to use a truly weighted system, but it means storing a lot of data, so it may not be appropriate. The way I'd do it is as follows:
- Create a 1024-length array of counts (bytes would probably do).
- Every time you read a value, increment the count at that value's index by one.
- you now have a histogram of values.
- The sum of all the counts in the array gives you the total counts in the system.
Here's where some processing comes in, though (so you don't want to be doing this very often):
- Find the median point - that is, the value at which exactly 50% of the counts fall below it (just start at array[0] and keep summing upwards until the running sum reaches 50% of the total counts). That's the centre of your mapped range.
- Find the interquartile range - this is the same as the median, except you stop at 25% on the bottom and 75% on the top.
- From here, take the difference between the median and the 25th percentile and put that on the bottom, then take the difference between the 75th percentile and the median and add it on the top, and you're pretty much going to get a good spread that should encompass a reasonable Min and Max.
Example:
Say your Median is 455
Say your 25th percentile is 400
Say your 75th percentile is 895
Min = 25th percentile - (Median - 25th percentile) = 400 - (455 - 400) = 400 - 55 = 345
Max = 75th percentile + (75th percentile - Median) = 895 + (895 - 455) = 895 + 440 = 1335 - obviously this is too high so you'd cap it at 1023
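The histogram and percentile walk could look something like this in C (all the names here are mine; RANGE assumes a 10-bit ADC giving 0-1023):

```c
#define RANGE 1024

/* hist[v] counts how often raw value v was read. */
static unsigned int  hist[RANGE];
static unsigned long total;

void record(int value)
{
    if (value >= 0 && value < RANGE) {
        hist[value]++;
        total++;
    }
}

/* Walk up the histogram until the running sum reaches the given
   fraction of the total counts; return that percentile's raw value. */
int percentile(float fraction)
{
    unsigned long target = (unsigned long)(fraction * total);
    unsigned long sum = 0;
    for (int v = 0; v < RANGE; v++) {
        sum += hist[v];
        if (sum >= target)
            return v;
    }
    return RANGE - 1;
}

/* Derive Min/Max as in the example: mirror the 25th/75th percentile
   distances about the median, clamped to the sensor's 0-1023 range. */
void derive_range(int *min_out, int *max_out)
{
    int median = percentile(0.50f);
    int q1 = percentile(0.25f);
    int q3 = percentile(0.75f);
    int lo = q1 - (median - q1);
    int hi = q3 + (q3 - median);
    *min_out = lo < 0 ? 0 : lo;
    *max_out = hi > RANGE - 1 ? RANGE - 1 : hi;
}
```

You'd call record() on every reading and derive_range() only during a calibration cycle, since the percentile walks are the expensive part.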
The downside with this approach is as follows:
- Requires more processing to calculate - you'd probably limit this to a calibration cycle if you wanted to do it.
- Has issues with clumpiness in your data. For example, I have a thumbstick that ranges 0-1023. At centre it sits at about 455 and goes down quite linearly to 0 if I move it left. If I move it right, though, it goes linearly to 620ish, then jumps straight to about 850 before going linearly again to 1023. This approach won't deal with that even though it's totally legitimate data (I handle it with custom mapping that I know works for the thumbstick in question, but that's hardly effective for random components).
Apologies for the detail in this response, but I thought it would be better answered with some pseudocode and examples.
I imagine there are other ways to tackle this as well.
Cheers
ajfisher