Interesting. Since I like doing things completely from scratch like a moron, I'm curious about the exact details/math since I'll be implementing it in Python. From the linked post, I didn't see much math explanation, so I'll ask here.
Let me take an initial stab at the calibration algorithm:
- Collect 3D mag readings
- Find the average X, Y, and Z offset for the mag readings
- Subtract the found offsets from the mag readings (remove hard iron error)
- Find the average magnitude of the zero-mean mag readings
- Using least squares, find the matrix that maps the zero-mean readings onto a sphere whose radius equals the average magnitude from the previous step
- Apply the matrix to the zero-mean readings (remove soft iron, scaling, and axis non-orthogonality error)
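Here's a rough Python sketch of what I have in mind, with simulated data standing in for real mag readings (the "true" field strength, soft-iron matrix, and hard-iron offset are made up for the demo). For the least-squares step I fit a symmetric matrix A with m^T A m = r^2 and then take W = sqrt(A) as the correction, so that ||W m|| ≈ r; I'm not sure if that's the standard formulation:

```python
import numpy as np

# --- Simulated raw readings (assumed values, for demo only) ---
# Points on a sphere of radius `true_field`, distorted by a soft-iron
# matrix and shifted by a hard-iron offset.
rng = np.random.default_rng(0)
true_field = 400.0
pts = rng.normal(size=(2000, 3))
pts = true_field * pts / np.linalg.norm(pts, axis=1, keepdims=True)
soft = np.array([[1.2, 0.1, 0.0],
                 [0.1, 0.9, 0.05],
                 [0.0, 0.05, 1.1]])
hard = np.array([30.0, -12.0, 55.0])
raw = pts @ soft.T + hard

# Steps 1-3: hard-iron estimate as the per-axis mean (assumes the
# readings cover orientations roughly uniformly), then subtract it.
offset = raw.mean(axis=0)
centered = raw - offset

# Step 4: average magnitude of the zero-mean readings.
r = np.linalg.norm(centered, axis=1).mean()

# Step 5: least-squares fit of a symmetric matrix A such that
# m^T A m = r^2 for every centered reading m, then take the
# soft-iron correction W = sqrt(A) so that ||W m|| ~= r.
x, y, z = centered.T
D = np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])
coef, *_ = np.linalg.lstsq(D, np.full(len(centered), r**2), rcond=None)
A = np.array([[coef[0], coef[3], coef[4]],
              [coef[3], coef[1], coef[5]],
              [coef[4], coef[5], coef[2]]])

# Matrix square root via eigendecomposition (A should be symmetric
# positive definite if the data actually lie on an ellipsoid).
eigvals, V = np.linalg.eigh(A)
W = V @ np.diag(np.sqrt(eigvals)) @ V.T

# Step 6: apply the correction.
calibrated = centered @ W.T
mags = np.linalg.norm(calibrated, axis=1)
print(np.std(mags) / np.mean(mags))  # small residual spread if it worked
```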
Is this the correct process?