Magnetometer Autocalibration

I have been using a magnetometer in one of my projects for some time now and have been trying different methods for calibrating the magnetometer. I have found most methods to be quite tedious and difficult to implement when you cannot view the serial console.

I created an algorithm which I think works pretty well and is easier to use than most approaches I have come across before, so I thought I should share it.

The approach is to assume that the magnetic field is constant in magnitude and use the data read from the sensor to learn the offset and gain parameters. This alleviates the common requirement of finding the true maximum along each axis, and the method is not very sensitive to the erroneous outliers which might occur when reading the sensor.
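As a rough sketch of the idea (the names here are mine for this post; see the repo for the actual implementation), the correction removes a learned offset and divides out a learned gain per axis:

```cpp
// Illustrative sketch of the per-axis correction (names are mine for
// this post; the repo code may differ in detail).
struct CalParams {
  float offset[3];  // learned offset b_i per axis
  float gain[3];    // learned gain g_i per axis
};

// Map a raw reading m_i to a calibrated one: h_i = (m_i - b_i) / g_i.
// After the parameters have converged, the calibrated vectors should
// lie on a unit sphere centered at the origin.
void applyCalibration(const CalParams &p, const float raw[3], float cal[3]) {
  for (int i = 0; i < 3; ++i) {
    cal[i] = (raw[i] - p.offset[i]) / p.gain[i];
  }
}
```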

The code is available on GitHub: antsundq/Magnetometer-Auto-Calibration, an algorithm that learns the inherent offsets and biases from empirical magnetometer measurement data.

To illustrate the results, here is a plot of the raw (decimated) data projected onto the different Cartesian planes.


In three dimensions the data set looks like an ellipsoid offset from the origin, and there are a few points lying far out which do not represent the true value of the magnetic field. Ideally, the measurements should form a sphere centered at the origin.

After using this data set of roughly 1,000 points to train the model, the model's predictions look like this:

And after repeating the training 10 times, the result looks like this:

Importantly, training the model does not require any user interface, and there is no requirement that the data be uniform either. One may use pre-trained estimates of the offsets and biases and then either fix these or let them continue to be tuned by the model once deployed. There is only one parameter to adjust, the learning rate, and from my tests the model is not very sensitive to it, so it should work reasonably well left at the default value.

Let me know if you find this useful or if something is unclear.

It would be helpful if you would describe in plain text/graphics (not requiring Jupyter) the theory behind the method, what the method actually does, and how to use your Arduino code.

> there is no requirement that the data be uniform either

What does this mean? What is required?

Note: with just six adjustable parameters, it is not possible to correct for certain "soft iron" distortions of the Earth's magnetic field. Nine or twelve parameters are usually used. A clear example of a rather severe such case is posted here.

Thanks for the feedback and the link. I tried to export the notebook, but I am missing some dependencies in my Jupyter installation. Will get back to this later.

The approach is a bit more naive than the more advanced models using e.g. 12 internal parameters.

In short, it is assumed that the magnetic field is a constant vector and that the sensor has a bias and a gain error for each axis. What is done is essentially a sphere fitting, but the approach does not require min and max to be measured along each axis. Counting parameters, we end up at 6, though the approach could possibly be generalized to 12 with a linear model. If we get a point which lies outside our best estimate of the optimal sphere, we change the parameters so that we move closer to the sphere, in a way proportional to the error. I will get back with an export with more details when I find some time.
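To make "proportional to the error" concrete, a single learning step could look like the sketch below (a minimal illustration of the idea, assuming the field magnitude is normalized to 1; the actual implementation is in the repo):

```cpp
// One stochastic gradient-descent step of the six-parameter fit.
// Model: h_i = (m_i - b_i) / g_i, target |h|^2 = 1, loss L = (|h|^2 - 1)^2.
void calibrationStep(float offset[3], float gain[3],
                     const float raw[3], float learningRate) {
  float h[3];
  float err = -1.0f;  // accumulates |h|^2 - 1
  for (int i = 0; i < 3; ++i) {
    h[i] = (raw[i] - offset[i]) / gain[i];
    err += h[i] * h[i];
  }
  // Nudge each parameter against its gradient so the mapped point
  // moves toward the unit sphere, proportionally to the error:
  // dL/db_i = -4*err*h_i/g_i and dL/dg_i = -4*err*h_i^2/g_i.
  for (int i = 0; i < 3; ++i) {
    offset[i] += learningRate * 4.0f * err * h[i] / gain[i];
    gain[i]   += learningRate * 4.0f * err * h[i] * h[i] / gain[i];
  }
}
```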

There is an example sketch provided here.

The requirement is that the algorithm is fed enough data. How much is enough is difficult to tell, because it will depend on the actual impairments of the sensor. As seen in the pictures, I got reasonable results with 10k data points. Convergence should be faster, and the tolerance to noise and unmodeled errors better, if the whole solid angle is covered. Because of the symmetry, this is not strictly necessary as long as the assumptions hold (which, as you point out, might not be the case).

> it is assumed that the magnetic field is a constant vector

Unfortunately, the magnitude of the magnetic field vector is not constant in the rather common case of soft iron distortion (e.g. when the sensor is built into a robot with nonuniform local magnetic fields). To handle that case your approach could be extended to fit an ellipsoid rather than a sphere, with axes not constrained to lie on the coordinate axes.

> The requirement is that the algorithm is fed enough data. How much is enough is difficult to tell

It might take quite a bit of experimentation to come up with a suitable convergence test, but it could be worth the effort, because the automatic procedures I've seen fail pretty miserably. As you say, some approximation to full spherical angle coverage is needed, and the commonly recommended "Figure 8" pattern is just wrong.

I'll give your method a try, but the nine-parameter method described here is actually rather easy to implement, and has worked well for me in several difficult cases. It should need to be run only once, when the magnetometer is installed in its final location.

> the magnitude of the magnetic field vector is not constant in the rather common case of soft iron distortion

Well, the physical magnetic field vector which we try to measure is constant (locally), assuming we are talking about the Earth's magnetic field. What is not constant is the magnitude of the measured magnetic field vector.

> your approach could be extended to fit an ellipsoid rather than a sphere

I need to correct myself here: what is done is more of an ellipsoid fitting, not a sphere fitting. I am not completely sure how to use the terminology, and there is no explicit fitting done to all the input data at once. Instead, the estimate is improved incrementally, and the result of the algorithm is a mapping of the data from an ellipsoid offset from the origin onto a sphere centered at the origin. If the model were extended to 12 parameters, I think we could handle coupling between the axes, i.e. an ellipsoid which is not aligned with the Cartesian axes of the coordinate system.
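As a sketch of what that 12-parameter generalization might look like (hypothetical, not implemented in the repo): the three per-axis gains are replaced by a full 3x3 matrix, whose off-diagonal elements undo the coupling between axes.

```cpp
// Hypothetical 12-parameter correction (3 offsets + 9 matrix elements,
// not part of the current repo): h = A * (m - b). A diagonal A with
// A[i][i] = 1/g_i reduces to the six-parameter model above.
void applyCalibration12(const float A[3][3], const float b[3],
                        const float raw[3], float cal[3]) {
  float centered[3];
  for (int i = 0; i < 3; ++i) centered[i] = raw[i] - b[i];
  for (int i = 0; i < 3; ++i) {
    cal[i] = 0.0f;
    for (int j = 0; j < 3; ++j) cal[i] += A[i][j] * centered[j];
  }
}
```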

> It might take quite a bit of experimentation to come up with a suitable convergence test

Agreed, this is not trivial in any sense. What I have done is generate random data with different characteristics and use this to evaluate the performance. I have then used the magnetometer of an MPU9250 to collect experimental data, and used this (raw) data to evaluate the convergence for different values of the learning-rate parameter. I finally tested with the code running on an Arduino Due. The results so far look good, but there is still a lot that could be done. It is all in the notebooks in the GitHub repo.
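For what it is worth, one heuristic I could imagine experimenting with (purely a sketch, not something in the repo) is to low-pass filter the squared sphere error and stop adapting once it stays below a threshold:

```cpp
// Sketch of a possible convergence check (illustrative only): an
// exponential moving average of the squared sphere error.
struct ConvergenceMonitor {
  float emaError = 1.0f;   // EMA of (|h|^2 - 1)^2, pessimistic start
  float alpha = 0.01f;     // smoothing factor (assumed value)
  float threshold = 1e-4f; // "converged" level (assumed value)

  // Feed the sphere error |h|^2 - 1 for each new sample; returns true
  // once the filtered squared error has dropped below the threshold.
  bool update(float sphereError) {
    emaError += alpha * (sphereError * sphereError - emaError);
    return emaError < threshold;
  }
};
```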

I'd be interested to hear how it works for you or if you have questions on the implementation.

> the nine-parameter method described here is actually rather easy to implement,

I cannot seem to find where to actually download the referenced application and PDF. I'd be interested in reading how it is implemented if you could share the details.

Managed to get some exports from the Jupyter notebooks. It does not look the best and the interactivity is gone, but implementation details are available [here](https://github.com/sunqn/Magnetometer-Auto-Calibration/blob/main/Algorithm Development/Gradient Descent Magnetometer auto-calibration.pdf) and some of my testing results [here](https://github.com/sunqn/Magnetometer-Auto-Calibration/blob/main/Algorithm Deployment/Deployment Validation.pdf).

Just as a disclaimer: it is a workbook and not an article, so the flow might not be overly clear. Just let me know if you find it cryptic.

> Well, the physical magnetic field vector which we try to measure is constant (locally), assuming we are talking about the Earth's magnetic field.

But there is no way of separating the Earth's magnetic field from the local, possibly nonuniform field arising from the immediate environment.

The C code to calculate the nine-parameter correction is posted on my GitHub page, along with some Arduino code to collect data and apply the corrections, for various consumer-grade magnetometers.

Download the theory paper by the University of Calgary PLAN group.

> But there is no way of separating the Earth's magnetic field from the local, possibly nonuniform field arising from the immediate environment.

No, not from one measurement, without knowing more about the perturbations. But the whole purpose of the modelling is to estimate the physical magnetic field from the perturbed measurements.

Thanks for sharing the link to the paper. It was an interesting read and I have a few comments.

My suggested approach uses a similar model to the one proposed in the paper. For now, I have assumed that the off-diagonal elements of Eq. (6) are all zero; for the sensor I have worked with, they are at least very small. As I wrote earlier, it would be possible to generalize and allow for off-diagonal a-coefficients if needed.

They have made the same assumption as I have about the magnetic field vector being constant, see Eq. (10). So either we'll need to accept this assumption or reject both methods.

I did not go through the maths in detail, but from what I gather they use a least-squares method to fit an ellipsoid to the data set and extract the model parameters. This is essentially what is done in my proposed solution, but the mathematical method is different: I use gradient descent to iteratively estimate the parameters.

There is some discussion in the paper about the noise term and how to treat it, and I am not sure I understand or agree with it. I have assumed the noise is zero-mean, and as such it will average out over time. If it were not zero-mean, we could separate the mean from the noise term in Eq. (7) and include it in the b-term. They state that the expected value of the noise can be positive in Eq. (20), but I cannot see how this is true for zero-mean noise. Anyhow, it is not very important.
