I'm confused about a few points on how to best calibrate an accelerometer, whose data will be used in an orientation sensor fusion algorithm.
As a summary, the most common approaches I've seen take measurements in 6 different orientations (1 g along +x, -x, +y, -y, +z, -z) to arrive at maximum and minimum measured values per axis. The offset is then calculated as an average of those two extremes, (max + min)/2, and then a scaling factor is calculated (a rough sketch of how I understand that math follows the questions below). There are a few aspects of this that I'm unclear about, and I was hoping it's okay to ask them in a single question, since they are related.
1) The examples calculate the average as written above, from just the max and min, rather than as the sum of all values divided by N. Isn't that approach more error-prone?
2) Should the offset per axis be calculated once for each orientation, and then summed?
3) What's the best approach to calculate the scale factor?
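For context, this is roughly how I understood the 6-position math. It's only a sketch of my understanding, with made-up names (AxisCal, computeAxisCal, applyAxisCal), not code from any of the examples:

```cpp
// 6-position calibration math as I understand it. maxG/minG for one axis come
// from averaging a few hundred samples with that axis pointed straight up
// (+1 g) and straight down (-1 g).

struct AxisCal {
  float offset;  // bias: what the axis reads when it should read 0 g
  float scale;   // gain correction: maps the measured span onto +/-1 g
};

AxisCal computeAxisCal(float maxG, float minG) {
  AxisCal cal;
  cal.offset = (maxG + minG) / 2.0f;  // midpoint of the +1 g and -1 g readings
  cal.scale  = 2.0f / (maxG - minG);  // ideal span (2 g) divided by measured span
  return cal;
}

// Applying it to a raw reading on one axis:
float applyAxisCal(float rawG, const AxisCal &cal) {
  return (rawG - cal.offset) * cal.scale;
}
```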
Thanks for responding, jremington. I've seen responses from you on similar posts several times via Google, and I see you're mentioned in that post as well! I've actually come across it before, but gave up as it looked too complicated. After your suggestion, though, I gave it another shot, and this is the summary of my understanding (as far as accelerometer calibration is concerned; I was using MotionCal for the magnetometer):
1) Install magneto.
2) Record accelerometer data in mG (I'm not sure whether I should be moving the sensor about, or recording data in several(?) static orientations).
3) Enter a value of 1000 milli-g (mG) as the “norm” for the gravitational field in magneto (if I'm getting data in g, I just multiply by 1000 to get the value in mG?).
4) Adjust the uncalibrated accelerometer data by the scale factor matrix (default positive?) and the bias that you get from magneto (see the sketch after this list).
5) Test again in magneto with the calibrated data, hoping to get a scale matrix close to the identity matrix. I can't draw charts in Python yet, so it's difficult to verify the outcome of the calibration, but so it goes for the time being.
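To make sure I understand step 4, this is roughly how I plan to apply the magneto output on the ATSAMD21G. The A matrix and b vector below are identity/zero placeholders, not my actual calibration values:

```cpp
// Apply the magneto correction: corrected = A * (raw - b), where A is the
// 3x3 scale factor matrix and b is the bias vector reported by magneto.
// The numbers below are placeholders, not real magneto output.

const float A[3][3] = {
  {1.0f, 0.0f, 0.0f},
  {0.0f, 1.0f, 0.0f},
  {0.0f, 0.0f, 1.0f}
};
const float b[3] = {0.0f, 0.0f, 0.0f};

void applyMagnetoCal(const float raw[3], float corrected[3]) {
  float centered[3];
  for (int i = 0; i < 3; i++) {
    centered[i] = raw[i] - b[i];              // remove the bias first
  }
  for (int i = 0; i < 3; i++) {
    corrected[i] = 0.0f;
    for (int j = 0; j < 3; j++) {
      corrected[i] += A[i][j] * centered[j];  // then apply the scale matrix
    }
  }
}
```

Does that look right, or am I misreading how magneto's output is meant to be applied?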
The procedure is a bit complicated, but if you are careful, it works much better than any other.
The measurement units don't matter. Record the raw data, as the corrections are applied to those.
Orient the sensor in all possible directions, taking 200 to 300 measurements in total. Ideally the directions should cover the entire 3D sphere uniformly, and the sensor should be still when each measurement is made. A minimal collection sketch follows below.
For the "norm", enter a value that is roughly the maximum reading (or the average vector length). You want the diagonal elements of the correction matrix to be about 1.
And here are some stats about the uncalibrated, and later the calibrated, data from the accelerometer:
Uncalibrated:
Sum      193.74
Average    0.461   0.818  -0.170   0.736
Median     0.060
Noise     59.410  19.630  19.820  19.960
Mid            /   0.205  -0.310  -0.170

Calibrated:
Sum       44.78
Average    0.083   0.090   0.051   0.108
Median     0.080
Noise     59.030  19.650  19.670  19.710
Mid            /   0.005   0.035  -0.025
The lower sum and averages make me believe there was a real improvement? Although it seems the sensor was fairly accurate to begin with? I'm wondering now whether I did things properly, and whether all the floating-point math will be too much for an ATSAMD21G?
The initial calibration shows that there were significant offsets (up to 0.3 m/s^2 instead of 0) and that the X and Z axes differed in sensitivity by more than 1%.
Those errors have been corrected, so the effort will make a big difference: much better directional accuracy when the data are used for navigation, for example.
If you do the same with a magnetometer, you will see much larger corrections.