Hello guys,
I decided to play with some MPU6050s I had lying around and ran into some doubts regarding calibration.
First of all, I use Jeff Rowberg's library (link) to interface with the IMU.
So, my questions are:
1- In this library, there are methods to set offsets for sensor calibration (e.g. accelgyro.setXAccelOffset(-4336);).
What exactly do they do with the values sampled?
I have read somewhere (unfortunately I can't track down where) that such calibration simply adds (or subtracts, depending on the sign) an offset to every digital value sampled.
So, does applying an offset using these methods have the same effect as taking each raw sample and adding (or subtracting) the respective offset in software?
To give a practical situation: suppose I need to measure a vertical acceleration that can be 2g up and 2g down (not counting gravity), and I want maximum sensitivity, so I would like to stay in the ±2g range.
In absolute values, I would then have 1g up (2g of motion minus 1g of gravity) and 3g down (2g of motion plus 1g of gravity). With raw values, any motion with a downward acceleration higher than 1g would saturate the reading, since 1g from motion plus 1g from gravity already reaches the accelerometer's maximum range.
But if we calibrate the sensor so that it outputs 0 when aligned with the vertical axis (i.e. the 1g from gravity reads as 0), would we then be able to properly measure motion accelerations higher than 1g (e.g. 1.2g, 1.3g, etc.)? Or would the output now simply saturate at 1g? That is, would the sensor still clip everything above 2g (e.g. 2.2g, 2.3g) at its maximum, and then output all of those values as 1g once the offset is subtracted?
This is really confusing me. I even tried to devise some experiments to settle it myself, but could not draw any conclusions from the results I got.
2- Looking for calibration sample code, I found one on the I2Cdevlib forum (post with code). But in that code, the offset values passed to the offset methods cited above are divided by a factor (8 for the accelerometer and 4 for the gyros). My question is: where do these factors come from?
I saw some questions about this in that thread, and even asked one myself, but since the thread is very old I don't think I will get an answer there.
Searching on Google, I found a Freescale application note (link to note) saying that to calibrate their accelerometer, one has to divide the offset values by 4 or 2, depending on the full-scale range (±2g, ±4g, ±8g), so that the offset fits in an 8-bit register. When I read that, it seemed like a good explanation for the divisions by 8 and 4, but after a lot of struggling to find the corresponding logic in the MPU6050 scale ranges, I gave up.
In fact, looking at the MPU6050.h source file, the gyro offset methods indicate they expect an 8-bit value, while the accelerometer ones expect a 16-bit value (look here). That seemed like a good clue, but I could not make much of it.
Further searching led me to a post by the creator of the calibration sample code, where he said: "It is not so straigthforward to update your offset every time. In my case, I had to put my initial readings, negated (change sign) and divided by 4 (gyros) and by 7.8 (accels)." (link to post). From this it seemed to me to be some kind of trial and error, but I may be wrong. In another post, here or on the I2Cdevlib forum, that I can't find anymore, I remember Jeff Rowberg (if I am not mistaken) commenting that the values passed to the offset methods should be divided by a scaling factor, but since that wasn't the focus of the topic, no further explanation was given.
Best regards