There is a seemingly simple method for calibrating accelerometer offset and sensitivity using a 4-point rotation. Its advantage is that it does not depend on the angle between the sensor axis and the g-axis: http://www.rocklandscientific.com/LinkClick.aspx?fileticket=ph72xW5yI7Y%3D&tabid=68&mid=455

The only problem is that it does not work, even though the equations look correct and fairly obvious. Does anybody have an idea what is wrong with this method?

Details: I simplified the calibration fixture. Instead of calibrated disks etc., I took a wooden bar with 2-inch sides, 20 inches long, attached a metal angle bracket to one side, bent it so that the bracket's surface sits at roughly 30-40 degrees to the bar's surface, and mounted the sensor on the bracket. I then took sensor readings (averaged over 1000 samples) with the bar standing on each of its four sides in turn, which guarantees that the measurements were taken at four equally spaced angular positions, 90 degrees apart. The readings were stable and reproducible for each of the four sides. I calculated the offset and sensitivity using the formulas from the source above. The problem is that when I re-mounted the sensor at a different angle and repeated the measurements, I got completely different values for offset and sensitivity.
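For reference, this is my understanding of the math behind the linked note (the function name and variable names are my own, not from the source): with readings a1..a4 taken 90 degrees apart about the rotation axis, the gravity terms cancel in the sum, so the offset is just the mean, and the sensitivity comes from the two quadrature differences. A minimal sketch in Python, assuming the rotation axis is horizontal:

```python
import math

def four_point_calibration(a1, a2, a3, a4, g=9.81):
    """Offset and sensitivity from four readings taken at 0, 90, 180,
    and 270 degrees of rotation about a (nominally horizontal) axis.

    Model: a_i = S*g*cos(theta_i + phi) + O, where phi is the unknown
    mounting angle. The cosine terms cancel in the sum of the four
    readings, so the offset O is the mean; the amplitude S*g is
    recovered from the two quadrature differences, independent of phi.
    """
    offset = (a1 + a2 + a3 + a4) / 4.0
    sensitivity = math.hypot((a1 - a3) / 2.0, (a2 - a4) / 2.0) / g
    return offset, sensitivity
```

On paper this recovers offset and sensitivity for any mounting angle phi, which is exactly the advantage claimed; my measurements disagree with that.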
So there is a flaw somewhere in this approach. It looks very obvious and simple, yet I have never seen this method described anywhere else, which makes me suspect there are reasons it is not used. Any ideas?