I'm using an MS5611 pressure sensor (datasheet), and its datasheet discusses total error band as well as Resolution RMS.
I'm fairly confident I don't understand what it means by Resolution RMS, beyond RMS meaning root mean square. It lists the highest resolution as giving an RMS value of 0.012 mbar, but it also states that the error band at that resolution is +/- 0.5 mbar, which seems a fair bit larger than 0.012 mbar.
I would like to use this RMS value to create a deadband where I (theoretically) throw out bad reads caused by random fluctuations or rounding errors, but my web searches so far haven't produced any useful information.
I know there are other options, such as a weighted filter, averaging over a time span, or a Kalman filter, but I'm specifically interested in learning more about RMS so I can either tailor-make a filter or at least better understand my sensor.
Can anyone point me in the right direction for better understanding RMS in relation to a sensor or how I might use it to filter out jitter?
The error band applies to a single value, while the Resolution RMS is supposedly the RMS of a number of samples: 0.012 mbar at OSR = 4096 (OSR = oversampling ratio).
For that highest resolution you would measure the pressure 4096 times and then calculate the RMS, or use a circular buffer of 4096 samples and compute a moving RMS with every new sample. Or you can use a smaller buffer and/or calculate a moving average, a median, or whatever you like.
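As a rough sketch of the smaller-buffer idea in plain C++ (here `readPressureMbar()` is just a placeholder for however you actually read the sensor, not a real driver call):

```
#include <cstddef>
#include <cstdio>

// Placeholder for the actual sensor read; swap in your MS5611 driver call.
double readPressureMbar() { return 1013.25; }

// Fixed-size circular buffer that keeps a running sum, so the moving
// average costs O(1) per sample instead of re-summing the whole window.
template <size_t N>
class MovingAverage {
public:
    double add(double sample) {
        sum_ += sample - buf_[idx_];   // drop the oldest sample from the running sum
        buf_[idx_] = sample;
        idx_ = (idx_ + 1) % N;
        if (count_ < N) ++count_;
        return sum_ / count_;
    }
private:
    double buf_[N] = {0};
    double sum_ = 0;
    size_t idx_ = 0;
    size_t count_ = 0;
};

int main() {
    MovingAverage<64> avg;             // a much smaller window than 4096
    for (int i = 0; i < 1000; ++i) {
        double filtered = avg.add(readPressureMbar());
        std::printf("%.3f\n", filtered);
    }
}
```

A moving RMS works the same way, except you keep a running sum of squared deviations instead of a plain sum.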
Okay... hmm. The oversampling is done automatically by the sensor; I just tell it what rate I want and it does all the buffering and calculations for me. So in theory that 0.012 is just a representation of the variance in the 4096 readings it took, and each of those 4096 readings has a resolution (accuracy?) of +/- 0.5 mbar.
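For what it's worth, "telling it what rate I want" just means picking the conversion command byte; something like this is what I mean (the command values are from my reading of the datasheet's command table, so double-check them against your copy):

```
#include <cstdint>
#include <cstdio>

// MS5611 "convert D1" (pressure) command bytes as I read them from the
// datasheet -- verify against your own copy before relying on them.
uint8_t convertD1Command(int osr) {
    switch (osr) {
        case 256:  return 0x40;
        case 512:  return 0x42;
        case 1024: return 0x44;
        case 2048: return 0x46;
        case 4096: return 0x48;   // highest resolution, 0.012 mbar RMS
        default:   return 0x48;
    }
}

int main() {
    std::printf("OSR=4096 -> command 0x%02X\n", convertD1Command(4096));
}
```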
So what would be a useful way to combine these two things? If I set the resolution to 4096, then a plausible deadband could be 0.02 mbar (since the RMS of each value I get sent back is 0.012), which I could then smooth out further with a simple weighted formula of old_value * 0.9 + new_value * 0.1. Is that a reasonable application of the error and accuracy, or am I still misunderstanding things?
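In code, the filter I'm picturing would be something along these lines (`readPressureMbar()` again just a placeholder for the real sensor read):

```
#include <cmath>
#include <cstdio>

// Placeholder for the actual sensor read; swap in your MS5611 driver call.
double readPressureMbar() { return 1013.25; }

int main() {
    const double deadband = 0.02;    // roughly 2x the 0.012 mbar RMS figure
    const double alpha    = 0.1;     // weight given to the new sample

    double accepted = readPressureMbar();  // last reading that passed the deadband
    double filtered = accepted;            // smoothed output

    for (int i = 0; i < 1000; ++i) {
        double raw = readPressureMbar();
        // Ignore changes smaller than the deadband...
        if (std::fabs(raw - accepted) >= deadband) {
            accepted = raw;
        }
        // ...then smooth whatever gets through: old*0.9 + new*0.1
        filtered = (1.0 - alpha) * filtered + alpha * accepted;
        std::printf("%.3f\n", filtered);
    }
}
```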
At this point I'm more interested in whether my understanding and potential application are well founded, or whether my assumption of a 0.02 mbar deadband based on an RMS of 0.012 is a misapplication of what that figure actually represents.
My thinking is that the sensor can only provide a resolution of 0.01 mbar, and since the reading can swing one way or the other by the 0.012 RMS (<- that's the part I'm not sure is sound), any time my reading deviates by 0.02 it should be an actual change in pressure?
I guess one way to test this is a simple program, left running for a couple of hours, that records the minimum and maximum values read by the sensor while it sits stationary.
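Something like this is what I have in mind (`readPressureMbar()` again a placeholder for the real read):

```
#include <algorithm>
#include <cstdio>

// Placeholder for the actual sensor read; swap in your MS5611 driver call.
double readPressureMbar() { return 1013.25; }

int main() {
    double minP = readPressureMbar();
    double maxP = minP;

    // Leave this running for a few hours with the sensor sitting still,
    // then look at the spread between min and max; stop it manually.
    for (;;) {
        double p = readPressureMbar();
        minP = std::min(minP, p);
        maxP = std::max(maxP, p);
        std::printf("min=%.3f max=%.3f spread=%.3f\n", minP, maxP, maxP - minP);
    }
}
```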
Ah, I see that now. And would it surprise you to learn that after half an hour of being left alone, the min and max values were right about 1 mbar apart? Or in other words, I had a range of +/- 0.5 mbar. I swear some days my brain isn't turned on.