I suggest the creation of datatypes for handling fixed-point binary arithmetic. On a chip with no floating-point hardware, this should run much more efficiently than floating-point arithmetic, and the bits that floating point spends on the exponent become extra bits of precision. Such datatypes could, I suppose, be used in many (maybe the majority) of places where we now use floating-point binary.

I have a serious application for this, mainly in timekeeping. I suggest keeping track of the current time as a nine-byte fixed-point number, with exactly five bytes on the "low" side of the radix point, thus:

00 00 00 00.00 00 00 00 00 (in hexadecimal notation)
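One possible layout for that nine-byte number, just as a sketch (the names `FixedTime` and `fixed_add` are my own, not anything existing): four bytes of whole seconds, and the five fractional bytes kept in the low 40 bits of a 64-bit word, so one fractional LSB is 2^-40 of a second.

```cpp
#include <cstdint>

// Sketch of the proposed 9-byte fixed-point timestamp: 4 integer bytes
// of seconds, 5 fractional bytes (a 32.40 format). The fraction lives
// in the low 40 bits of a 64-bit word; one LSB is 2^-40 s (~0.9 ps).
struct FixedTime {
    uint32_t secs;   // whole seconds (the 4 bytes left of the radix point)
    uint64_t frac;   // fractional seconds, only bits 0..39 used
};

const uint64_t FRAC_ONE = 1ULL << 40;  // 1.0 s in fractional units

// Add a fractional increment (in 2^-40 s units), carrying into whole seconds.
void fixed_add(FixedTime &t, uint64_t inc) {
    t.frac += inc;
    t.secs += (uint32_t)(t.frac >> 40);  // carry out of the 40-bit fraction
    t.frac &= FRAC_ONE - 1;
}
```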

Every 128 clock ticks, we update this number by simple addition.

Suppose, by way of example, our oscillator is exactly 16 000 000 hertz. Then, 128 ticks of our crystal would equal:

0.00 00 86 37 BD (hexadecimal, and rounded to 5 bytes) seconds
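Where that constant comes from: 128/16 000 000 seconds, scaled by 2^40 and rounded to the nearest fractional LSB. A small helper (my own, hypothetical) makes the arithmetic explicit and lets you plug in a calibrated frequency instead of the nominal one:

```cpp
#include <cstdint>
#include <cmath>

// Per-update increment, in 2^-40 s fractional units, for a given
// oscillator frequency and tick count. For a nominal 16 MHz crystal
// and 128 ticks this yields 0x8637BD, the figure quoted above.
uint64_t increment_for(double osc_hz, uint32_t ticks) {
    return (uint64_t)llround((double)ticks / osc_hz * 1099511627776.0 /* 2^40 */);
}
```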

If our oscillator runs fast or slow, we just change this figure to compensate. If we have a temperature sensor, all we need is a lookup table, and presto! Instant Chronodot clone!
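The lookup table might look something like this. The increments below are invented placeholders purely for illustration; a real table would come from measuring the crystal's frequency-versus-temperature curve.

```cpp
#include <cstdint>

// Hypothetical compensation table: a calibrated per-update increment
// (in 2^-40 s units) for each 10 degree C band. These values are made
// up; real ones would be measured against a reference clock.
const uint64_t INC_BY_TEMP[5] = {
    0x8637C2, 0x8637BF, 0x8637BD, 0x8637BE, 0x8637C1  // bands 0-9 ... 40-49 C
};

uint64_t increment_at(int temp_c) {
    int band = temp_c / 10;
    if (band < 0) band = 0;          // clamp below the table
    if (band > 4) band = 4;          // clamp above the table
    return INC_BY_TEMP[band];
}
```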

Again, assuming a 16 MHz oscillator, 128 ticks is about 8 microseconds. Maybe we want finer resolution than that. So, if we want to check the time between updates, we just take the number of "extra" ticks and add it directly to the 3rd byte to the right of the radix point. What we are doing is using 0.00 00 01 00 00 (hex) as an approximation for 0.00 00 01 0C 6F ... (hex), and though not perfect, we will be within half a microsecond of the correct figure. (Our oscillator need not be *exactly* 16 MHz for this approximation to be useful.)
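The shortcut above amounts to treating each extra tick as a shift: adding 1 to the 3rd fractional byte is the same as adding `ticks << 16` to the 40-bit fraction. A quick sketch (function names are mine) shows the approximation and checks that the worst-case error over the 127 possible extra ticks stays under half a microsecond (0.5 µs is about 549 756 fractional units):

```cpp
#include <cstdint>

// Approximate extra ticks as 0x0000010000 each, i.e. a bump of 1 in the
// 3rd fractional byte, standing in for the true per-tick value of
// 2^40 / 16e6 = 0x10C6F.7... fractional units.
uint64_t approx_extra(uint32_t extra_ticks) {
    return (uint64_t)extra_ticks << 16;
}

// Worst-case shortfall of the approximation over 0..127 extra ticks,
// in 2^-40 s fractional units (the approximation always reads low).
uint64_t worst_error(void) {
    const double true_tick = 1099511627776.0 / 16000000.0;  // 2^40 / 16 MHz
    double worst = 0.0;
    for (uint32_t n = 0; n <= 127; ++n) {
        double err = true_tick * n - (double)approx_extra(n);
        if (err > worst) worst = err;
    }
    return (uint64_t)worst;
}
```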

When coming up with this timekeeping idea, I was thinking of a thread in which I had asked for ideas regarding binary-coded decimal arithmetic. One of the responses said, in effect, "Don't bother. Do all math in binary, and convert if you need decimal." I can see where this is coming from: the Arduino has no real support for decimal, or binary-coded decimal, arithmetic. So I thought: if decimal arithmetic is unsupported and therefore to be avoided, why internally use milliseconds and microseconds, which are *decimal* divisions of a second? Better, perhaps, to keep track of seconds in straight binary, and convert to milliseconds or microseconds only when they are needed, just as binary numbers are converted to decimal for display.
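That conversion is cheap when it is needed, since a decimal subdivision of the binary fraction is one multiply and one shift. A sketch (the name `frac_to_micros` is my own):

```cpp
#include <cstdint>

// Convert the 40-bit binary fraction of a second to decimal microseconds,
// only at the moment decimal units are wanted. The intermediate product
// fits in 64 bits, since 2^40 * 10^6 < 2^60.
uint32_t frac_to_micros(uint64_t frac40) {
    return (uint32_t)((frac40 * 1000000ULL) >> 40);
}
```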