I think the problem is in your scaled math. If you scale a multiplicative constant, you can only use it ONCE, after which you have to start re-normalizing your results.

For instance, using a scale factor of 1000 (base 10 to make things easy), multiplying by 0.234 would mean multiplying by 234 instead. So far so good. But if you take that result and multiply by the scaled constant again, you've multiplied by 54756 (234*234), which is not at all the same as multiplying by the correctly scaled square of the constant (0.234*0.234*1000 ≈ 55). You have to divide by the scale factor after each multiplication to renormalize.

I didn't see the renormalization that you'd have to do. It might have been buried in the mysterious shifts and divisions, but

1) You should make such things obvious by letting the compiler do the constant math:

`#define SCALEDFACTOR(a) ((long)((a) * 4096.0)) /* 4096 = 2^12 */`

`#define SCALEDMULT(a,x) ((SCALEDFACTOR(a) * (x)) >> 12)`

`.... + SCALEDMULT(0.904347, Lyv[0])`

Compilers these days are very good at doing math on constants; don't make your code more obscure by doing it for them!

2) I think this would explain the results you're seeing. Multiply things by a few extra factors of a thousand a couple of times in a row and you'll overflow your longs, as well as getting incorrect intermediate results.

3) It'd be nice to see the actual equations, scaling factors, etc. CLEARLY DOCUMENTED.

You're trying to do:

`yv[3] = (xv[0] + xv[3]) + 3 * (xv[1] + xv[2]) + (0.9390989403 * yv[0]) + (-2.8762997235 * yv[1]) + (2.9371707284 * yv[2])`

Using integers scaled by a factor of 4096 (a 12-bit shift), right?