Q: How do you calibrate yaw by measuring pitch and roll from orthogonal positions?

This is important for accurately translating an accelerometer's coordinate system to a device's. I've been trying to wrap my head around this, but Euler angles and talk of quaternion matrices are scrambling my brain. Here's the basic situation:

Let's say I've got an accelerometer mounted in a square or rectangular box. Because the accelerometer is on a chip that's mounted to a PCB that's in turn mounted to the box, the two coordinate systems of the accelerometer and the box are different. If the box is sitting on a level surface, the accelerometer data can easily yield the pitch and roll (rotation about the x- and y-axes) as well as theta (the angle to the z-axis), but it can't detect yaw (rotation about the z-axis). If I tilt the box onto its side, rotating it 90 degrees, then the theta value and the pitch or roll (depending on which side was rotated) should simply be swapped if yaw is zero; if yaw is non-zero, however, the 90-degree rotation of the box will have distributed that rotation across both pitch and roll. Another way of putting it: pitch, roll, and yaw are order-sensitive in terms of expressing a final orientation of an object, so introducing an initial yaw value affects pitch and roll.

I can intuitively understand that if I take measurements from orthogonal positions, then by comparing the two I can deduce the yaw, but I can't figure out how to express it mathematically.

I do know the following:

pitch = arctan(Ax / sqrt(Ay^2 + Az^2))
roll = arctan(Ay / sqrt(Ax^2 + Az^2))
theta = arctan(sqrt(Ax^2 + Ay^2) / Az)

(The argument of theta's arctan is the horizontal magnitude over the vertical component, hence sqrt(Ax^2 + Ay^2).)
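For concreteness, here's a minimal Python sketch of those formulas. I'm using atan2 rather than plain arctan so the quadrants come out right, and taking theta's numerator as the horizontal magnitude sqrt(Ax^2 + Ay^2):

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch, roll, and theta (degrees) from a static accelerometer reading."""
    pitch = math.atan2(ax, math.sqrt(ay**2 + az**2))
    roll  = math.atan2(ay, math.sqrt(ax**2 + az**2))
    theta = math.atan2(math.sqrt(ax**2 + ay**2), az)  # angle to the z-axis
    return tuple(math.degrees(a) for a in (pitch, roll, theta))

# Box flat on a level table: gravity entirely along z.
print(tilt_angles(0.0, 0.0, 1.0))  # (0.0, 0.0, 0.0)
```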

If (Ax, Ay, Az) represents a point with respect to our original frame of reference, then the translated position due to a given set of pitch, roll, and yaw angles is given by the following formulas:

Ax' = cos(yaw)*cos(pitch)*Ax + (sin(yaw)*cos(roll) + cos(yaw)*sin(pitch)*sin(roll))*Ay + (sin(yaw)*sin(roll) - cos(yaw)*sin(pitch)*cos(roll))*Az
Ay' = -sin(yaw)*cos(pitch)*Ax + (cos(yaw)*cos(roll) - sin(yaw)*sin(pitch)*sin(roll))*Ay + (cos(yaw)*sin(roll) + sin(yaw)*sin(pitch)*cos(roll))*Az
Az' = sin(pitch)*Ax - cos(pitch)*sin(roll)*Ay + cos(pitch)*cos(roll)*Az

where roll, pitch, and yaw are all between 0 and PI. (Note the sin(roll) factor in the middle term of Ay'; without it the matrix isn't orthogonal.)
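As a sanity check, here's that transform in Python; I've included the sin(roll) factor in the middle term of Ay', which is needed for the matrix to be a proper rotation. Any rotation must preserve a vector's length, which is easy to verify numerically:

```python
import math

def rotate(ax, ay, az, yaw, pitch, roll):
    """Apply the yaw-pitch-roll transform above to (ax, ay, az). Radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    axp = cy*cp*ax + (sy*cr + cy*sp*sr)*ay + (sy*sr - cy*sp*cr)*az
    ayp = -sy*cp*ax + (cy*cr - sy*sp*sr)*ay + (cy*sr + sy*sp*cr)*az
    azp = sp*ax - cp*sr*ay + cp*cr*az
    return axp, ayp, azp

# Length before and after must agree (here sqrt(0.98) ~ 0.9899).
v = rotate(0.3, -0.5, 0.8, 0.7, 0.4, -0.2)
print(math.dist((0, 0, 0), v), math.sqrt(0.3**2 + 0.5**2 + 0.8**2))
```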

Conceivably I could implement a best-fit algorithm that finds a value for yaw that makes the measured translated coordinates match expectations, but that seems a bit brute-force, and I'm sure it's some sort of trig relationship between x and y' and between y and x'.
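For what it's worth, here's what that brute-force fit could look like, under a deliberately simplified model I made up for illustration: roll is zero, the box is pitched forward by a known tilt, and yaw is the unknown mounting angle that splits the horizontal gravity component between the sensor's x and y axes.

```python
import math

def predicted_reading(yaw, tilt):
    """Gravity seen by a sensor yawed about z after the box is pitched by
    `tilt` (radians). Hypothetical simplified model: roll = 0, and the
    horizontal component sin(tilt) splits between sensor x and y by yaw."""
    return (math.cos(yaw) * math.sin(tilt),
            -math.sin(yaw) * math.sin(tilt),
            math.cos(tilt))

def fit_yaw(measured, tilt, steps=3600):
    """Brute-force grid search (0.1 degree resolution) for the yaw that
    best explains `measured`, by least-squares error."""
    best, best_err = 0.0, float("inf")
    for i in range(steps):
        yaw = -math.pi + 2 * math.pi * i / steps
        p = predicted_reading(yaw, tilt)
        err = sum((a - b) ** 2 for a, b in zip(p, measured))
        if err < best_err:
            best, best_err = yaw, err
    return best

true_yaw = math.radians(25)
m = predicted_reading(true_yaw, math.radians(90))
print(round(math.degrees(fit_yaw(m, math.radians(90))), 1))  # 25.0
```

Under these assumptions the trig relationship falls out directly too: after a 90-degree tilt the vertical term vanishes, so yaw = atan2(-Ay, Ax).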

Here's a simple case that reveals the limits of my conceptual abilities in figuring this out. Consider two cases, of 0 and 90 degrees yaw respectively, where the device is tilted (pitch) 45 degrees and roll is zero. In each case the initial pitch and roll are zero.

Case 1, 0 degrees yaw: theta = 45, pitch = 45, roll = 0
Case 2, 90 degrees yaw: theta = 45, pitch = 0, roll = 45

The complete transfer of the pitch to the y-axis as a result of the yaw would have been a negative angle had the yaw been the other way. I suspect the following case is also true:

Case 3, 45 degrees yaw: theta = 45, pitch = 30, roll = 30
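These cases can be checked numerically with a simplified model of my own (gravity components for a given mounting yaw and a known tilt, roll assumed zero), feeding the resulting reading back through the pitch/roll formulas:

```python
import math

def case(yaw_deg, tilt_deg=45.0):
    """Pitch and roll (degrees) reported by a sensor mounted with the given
    yaw when the box is tilted tilt_deg. Simplified model, roll = 0."""
    yaw, tilt = math.radians(yaw_deg), math.radians(tilt_deg)
    ax = math.cos(yaw) * math.sin(tilt)   # tilt component along sensor x
    ay = math.sin(yaw) * math.sin(tilt)   # tilt component along sensor y
    az = math.cos(tilt)
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll  = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return round(pitch, 1), round(roll, 1)

print(case(0))   # (45.0, 0.0)  -- all of the tilt shows up as pitch
print(case(90))  # (0.0, 45.0)  -- all of it shows up as roll
print(case(45))  # (30.0, 30.0) -- the tilt splits: pitch = roll = 30
```

So Case 3 does come out as pitch = roll = 30 degrees, not 22.5: the split between pitch and roll is not linear in yaw.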