Integer instead of float maths

Hello experts. I have read many threads about how much faster it is to use integers rather than floats for dynamic calculations on an Arduino. However, I could not find any practical example that would help solve my specific case.
I am building a robot and using polynomial approximations for inverse kinematics. But with floats, processing gets very slow on my Arduino Mega, so I would like a methodology for switching to integer calculations.
For example, I am using a third-degree polynomial (one of many):

x = k0 + k1*z + k2*z^2 + k3*z^3

My z varies from 0 to 250, and I would like my x to have at least 4 significant digits, e.g. 61.26.
My coefficients are constants, something like k0=64.8589, k1=-6.2598e-01, k2=2.60208e-03, k3=-1.00266e-05.

How would you perform this kind of calculation using integers?

For example, you can multiply by 100 or 1000, so 6.23 becomes 6230, and do all your math using ints. The last step is to divide by 1000 and cast to float.

It would be far easier to use floating-point math on a faster MCU alternative board. Otherwise you will need to study up on fixed-point arithmetic, decide on and test the precision level (signed vs. unsigned?), and use/adapt/develop a library. Here is a place to start, with some additional info here too. Good luck.

Consider using a Teensy: 96 MHz (Teensy 3.2) to 600 MHz (Teensy 4.0) clock speed, compared to 16 MHz for the Mega.

Please consider what a processor has to do. the reason int's are faster is simply because floats use 32 bits and ints only use ten. If you are doing math on a float number like 3.16789, it's going to take more steps for the processor than an int of 3.

If you were only interested in a couple of digits before the decimal point, you could multiply your numbers by 100 before your calculation, so a number like 3.16789 would become an int of 316. Do all your math, then move the decimal point back over when you are done, of course at the cost of some fidelity. The only reason to use floats is if you truly need the full resolution of a float; I will assume that you do if you are using them in the first place.

There are many ways to gain speed for heavy calculations. The biggest is limiting how often they are called, or limiting them to happen at opportune times. It's hard to say without knowing the whats and whys of what you are trying to accomplish.

the reason int’s are faster is simply because floats use 32 bits and ints only use ten.

Which exotic platform uses ten bits for integers?

x = k0 + k1*z + k2*z^2 + k3*z^3

Using Horner’s method could remove some multiplications.

x = k0 + z * (k1 + z * (k2 + z * k3))

Using Horner’s method could remove some multiplications.

The Horner Rule is based on Newton's polynomial of the following form, where the change of the orders of the variables and the associated coefficients is regular:

A(x) = an*x^n + an-1*x^(n-1) + … + a0*x^0

And because of this regularity, we could use the Horner Rule (a modified version of the Newton polynomial) to convert a binary number to a BCD number on architectures like the 8085, 8051, and 80x86, which support a 'decimal adjust after addition' instruction. For example:
BIN = b7 b6 … b0
==> BCD = b7*2^7 + b6*2^6 + … + b0*2^0
==> BCD = ((…((b7)*2 + b6)*2 + … + b1)*2 + b0
==> BCD = ((…((IPBCD*2 + b7)*2 + b6)*2 + … + b1)*2 + b0

where: IPBCD = initial partial BCD = 0

for (int i = 7; i >= 0; i--)
    IPBCD*2 + bi ----> IPBCD
    adjust incorrect IPBCD into correct IPBCD

In the case of the OP, the given expression is of this form: x = k3*z^3 + k2*z^2 + k1*z^1 + k0*z^0, where the coefficients are not regular but arbitrary. Can the Horner rule be applied to evaluate the OP's expression?

Excuse me, I meant 16 instead of ten. But a float is still double the bits.